  • Publication
    COVID-19 Infection Percentage Estimation from Computed Tomography Scans: Results and Insights from the International Per-COVID-19 Challenge
    (2024)
    Bougourzi, Fares; Distante, Cosimo; Dornaika, Fadi; Taleb-Ahmed, Abdelmalik; ; Chaudhary, Suman; Yang, Wanting; Qiang, Yan; Anwar, Talha; Breaban, Mihaela Elena; Hsu, Chih-Chung; Tai, Shen-Chieh; Chen, Shao-Ning; Tricarico, Davide; Chaudhry, Hafiza Ayesha Hoor; Fiandrotti, Attilio; Grangetto, Marco; Spatafora, Maria Ausilia Napoli; Ortis, Alessandro; Battiato, Sebastiano
    COVID-19 analysis from medical imaging is an important task that has been studied intensively in recent years due to the spread of the COVID-19 pandemic. In fact, medical imaging has often been used as a complementary or main tool to identify infected persons. Moreover, medical imaging can provide further details about COVID-19 infection, including its severity and spread, making it possible to evaluate the infection and follow up on the patient’s state. CT scans are the most informative tool for assessing COVID-19 infection, where the evaluation is usually performed through infection segmentation. However, segmentation is a tedious task that requires much effort and time from expert radiologists. To deal with this limitation, an efficient framework for estimating COVID-19 infection as a regression task is proposed. The goal of the Per-COVID-19 challenge is to test the efficiency of modern deep learning methods on COVID-19 infection percentage estimation (CIPE) from CT scans. Participants had to develop an efficient deep learning approach that can learn from noisy data. In addition, participants had to cope with many challenges, including those related to the complexity of COVID-19 infection and cross-dataset scenarios. This paper provides an overview of the COVID-19 infection percentage estimation challenge (Per-COVID-19) held at MIA-COVID-2022. Details of the competition data, challenges, and evaluation metrics are presented, and the best-performing approaches and their results are described and discussed.
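Framing infection estimation as a regression task means models are scored on how far predicted percentages fall from the ground truth. A minimal sketch of that kind of evaluation, using mean absolute error with hypothetical per-scan values (the challenge's official metric set may differ):

```python
def mean_absolute_error(y_true, y_pred):
    """MAE between ground-truth and predicted infection percentages (0-100)."""
    assert len(y_true) == len(y_pred)
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

# hypothetical per-scan ground truth vs. model predictions (percent infected)
truth = [0.0, 12.5, 40.0, 75.0]
preds = [1.0, 10.5, 43.0, 69.0]
print(mean_absolute_error(truth, preds))  # → 3.0
```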
  • Publication
    Deep Learning Techniques for Colorectal Cancer Detection: Convolutional Neural Networks vs Vision Transformers
    (2024)
    Sari, Meriem; Moussaoui, Abdelouahab;
    Colorectal cancer (CRC) is one of the most common cancers among humans. Its diagnosis is made through the visual analysis of tissue samples by pathologists; artificial intelligence (AI) can automate this analysis using histological images generated from different tissue samples. In this paper we aim to enhance this digital pathology process by proposing two deep learning (DL) based methods that are highly accurate and reliable despite several limitations. Our first method is based on Convolutional Neural Networks (CNNs) and classifies tissues into cancerous and non-cancerous cells based on histological images. Our second method is based on Vision Transformers and likewise classifies images into cancerous and non-cancerous cells. Due to the sensitivity of the problem, the performance of our work is estimated using accuracy, precision, recall, and F1-score metrics, since they lend more credibility to the classification results. Our models were tested and evaluated on a dataset collected from the LC25000 database containing 10000 images of cancerous and non-cancerous tissues. They achieved promising results, with overall accuracies of 99.84% and 98.95% respectively, with precision = 100%, recall = 100%, and F1-score = 100%; we observed that both of our models surpassed several state-of-the-art results.
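The metrics the abstract relies on are all derived from confusion-matrix counts. A minimal sketch with hypothetical counts for the "cancerous" class (not the paper's actual numbers):

```python
def precision_recall_f1(tp, fp, fn):
    """Compute precision, recall and F1 from confusion-matrix counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# hypothetical counts: 95 true positives, 5 false positives, 5 false negatives
p, r, f = precision_recall_f1(tp=95, fp=5, fn=5)
print(round(p, 3), round(r, 3), round(f, 3))  # → 0.95 0.95 0.95
```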
  • Publication
    Deepfakes Signatures Detection in the Handcrafted Features Space
    (2023)
    Hamadene, Assia; Ouahabi, Abdeldjalil;
    In the Handwritten Signature Verification (HSV) literature, several synthetic databases have been developed for data-augmentation purposes, where new specimens and new identities were generated using bio-inspired algorithms, neuromotor synthesizers, Generative Adversarial Networks (GANs), and several other deep learning methods. These synthetic databases contain synthetic genuine and forged specimens that are used to train and build signature verification systems. Research on generative data assumes that synthetic data are as close as possible to real data, which is why they are used either to train systems in data-augmentation tasks or to fool systems as synthetic attacks. It is worth pointing out, however, that handwritten signature authenticity is related to human behavior and the brain. Indeed, a genuine signature is characterised by specific features related to the owner’s personality, which is what makes signature verification and authentication achievable. Handcrafted features have demonstrated a high capacity to capture personal traits for authenticating real static signatures. We therefore propose in this paper a handcrafted-feature-based Writer-Independent (WI) signature verification system to detect synthetic writers and signatures. We also aim to assess how realistic synthetic signatures are, as well as their impact on HSV systems’ performance. Results obtained using 4000 synthetic writers of the GPDS synthetic database show that the proposed handcrafted features have considerable ability to distinguish synthetic signatures from two widely used databases of real individuals’ signatures, namely CEDAR and GPDS-300, reaching successful synthetic detection rates of 98.67% and 94.05% respectively.
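Local Binary Patterns are one classic example of the kind of handcrafted texture descriptor used on static signature images; the abstract does not name the paper's exact features, so this is an illustrative stand-in. A minimal 3×3 LBP sketch:

```python
def lbp_code(patch):
    """3x3 Local Binary Pattern: compare the 8 neighbours to the centre pixel,
    setting one bit per neighbour that is >= the centre. Histograms of these
    codes over an image form a classic handcrafted texture descriptor."""
    c = patch[1][1]
    neighbours = [patch[0][0], patch[0][1], patch[0][2], patch[1][2],
                  patch[2][2], patch[2][1], patch[2][0], patch[1][0]]
    return sum(1 << i for i, n in enumerate(neighbours) if n >= c)

# hypothetical 3x3 grayscale patch
print(lbp_code([[5, 9, 1], [4, 6, 7], [8, 6, 2]]))  # → 106
```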
  • Publication
    Driver's Facial Expression Recognition Using Global Context Vision Transformer
    (2023)
    Saadi, Ibtissam; Cunningham, Douglas W.; Taleb-Ahmed, Abdelmalik; ; El Hillali, Yassin
    Driver facial expression recognition plays a critical role in enhancing driver safety, comfort, and the overall driving experience by proactively mitigating potential road risks. While most existing works in this domain rely on CNN-based approaches, this paper proposes a novel method for driver facial expression recognition using a Global Context Vision Transformer (DFER-GCViT). With the inherent capabilities of transformer-based architectures and global context modeling, the proposed method handles challenges commonly encountered in real-world driving scenarios, including occlusions, head pose variations, and illumination conditions. Our method consists of three modules: preprocessing for face detection and data augmentation, facial feature extraction of local and global features, and expression classification using a modified GC-ViT classifier. To evaluate the performance of DFER-GCViT, extensive experiments are conducted on two benchmark datasets, namely the KMU-FED driver facial expression dataset and the FER2013 general facial expression dataset. The experimental results demonstrate the superiority of DFER-GCViT in accurately recognizing drivers’ facial expressions, achieving an average accuracy of 98.27% on the KMU-FED dataset and 73.78% on the FER2013 dataset, outperforming several state-of-the-art methods on these two benchmark datasets.
  • Publication
    Driver’s facial expression recognition: A comprehensive survey
    (2024)
    Saadi, Ibtissam; Cunningham, Douglas W.; Taleb-Ahmed, Abdelmalik; ; El Hillali, Yassin
    Driving is an integral part of daily life for millions of people worldwide, and it has a profound impact on road safety and human health. The emotional state of the driver, including feelings of anger, happiness, or fear, can significantly affect their ability to make safe driving decisions. Driver facial expression recognition (DFER) has emerged as a promising technique for improving road safety and can provide valuable information about drivers’ emotions. This information can be used by intelligent transportation systems (ITS), such as advanced driver assistance systems (ADAS), to take appropriate action, such as alerting the driver or intervening in the driving process, to prevent potential risks. This paper presents a comprehensive survey of recent studies that focus on recognizing the driver’s facial expression in the driving context, covering 2018 to March 2023. Specifically, we examine studies that address the recognition of the driver’s emotion using facial expressions and explore the challenges that exist in this field, such as illumination conditions, occlusion, and head poses. Our survey includes an analysis of the different techniques and methods used to identify and categorize specific expressions or emotions of the driver. We begin by reviewing and comparing available datasets and summarizing state-of-the-art methods, including machine learning-based methods, deep learning-based methods, and hybrid methods. We also identify limitations and potential areas for improvement. Overall, our survey highlights the importance of recognizing driver facial expressions in improving road safety and provides valuable insights into recent developments and future research directions in this field.
  • Publication
    Evaluation of Pre-Trained CNN Models for Geographic Fake Image Detection
    (2022)
    ; Fezza, Sid; Ouis, Mohammed; Kaddar, Bachir; Hamidouche, Wassim
    Thanks to the remarkable advances in generative adversarial networks (GANs), it is becoming increasingly easy to generate/manipulate images. The existing works have mainly focused on deepfake in face images and videos. However, we are currently witnessing the emergence of fake satellite images, which can be misleading or even threatening to national security. Consequently, there is an urgent need to develop detection methods capable of distinguishing between real and fake satellite images. To advance the field, in this paper, we explore the suitability of several convolutional neural network (CNN) architectures for fake satellite image detection. Specifically, we benchmark four CNN models by conducting extensive experiments to evaluate their performance and robustness against various image distortions. This work allows the establishment of new baselines and may be useful for the development of CNN-based methods for fake satellite image detection.
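The benchmarking protocol described above — every model evaluated under every distortion — can be sketched generically. The model names, distortion labels, and evaluation function below are hypothetical placeholders, not the paper's actual setup:

```python
def benchmark(models, distortions, evaluate):
    """Evaluate each model under each distortion; returns {model: {distortion: accuracy}}."""
    return {name: {d: evaluate(model, d) for d in distortions}
            for name, model in models.items()}

# hypothetical stand-ins for pre-trained CNNs and a test-set evaluation
models = {"cnn_a": None, "cnn_b": None}
distortions = ["none", "jpeg_q30", "gaussian_noise"]
fake_eval = lambda model, d: 0.95 if d == "none" else 0.80  # dummy accuracy
print(benchmark(models, distortions, fake_eval))
```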
  • Publication
    Face Sketch Synthesis using Generative Adversarial Networks
    (2022)
    ; Mahfoud, Sami; Daamouche, Abdelhamid; Bengherabi, Messaoud; Boutellaa, Elhocine
    Face Sketch Synthesis is crucial for a wide range of practical applications, including digital entertainment and law enforcement. Recent approaches based on Generative Adversarial Networks (GANs) have shown compelling results in image-to-image translation as well as face photo–sketch synthesis. However, these methods still have considerable limitations, as noise appears in the synthesized sketches, leading to poor perceptual quality and poor fidelity preservation. To tackle this issue, in this paper we propose a face sketch synthesis technique using a conditional GAN to generate facial sketches from facial photographs, named cGAN-FSS. Our cGAN-FSS framework generates face sketches of high perceptual quality while maintaining high identity recognition accuracy. Image quality assessment metrics and face recognition experiments confirm that our proposed framework performs better than the state-of-the-art methods.
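Conditional image-to-image GANs are commonly trained with a binary cross-entropy adversarial objective plus a weighted L1 reconstruction term (pix2pix-style); whether cGAN-FSS uses exactly this formulation is an assumption. A minimal sketch of those loss terms:

```python
import math

def bce(p, y):
    """Binary cross-entropy for one discriminator output p in (0, 1), target y."""
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))

def discriminator_loss(p_real, p_fake):
    # real photo/sketch pairs should score 1, generated pairs 0
    return bce(p_real, 1.0) + bce(p_fake, 0.0)

def generator_loss(p_fake, l1, lam=100.0):
    # fool the discriminator and stay close to the target sketch (L1 term);
    # lam is a hypothetical weighting, not the paper's value
    return bce(p_fake, 1.0) + lam * l1

# hypothetical discriminator outputs and mean L1 error
print(round(generator_loss(0.9, 0.01), 3))  # → 1.105
```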
  • Publication
    Hand-drawn face sketch recognition using rank-level fusion of image quality assessment metrics
    (2022)
    ; Mahfoud, Sami; Daamouche, Abdelhamid; Bengherabi, Messaoud
    Face Sketch Recognition (FSR) presents a severe challenge to conventional recognition paradigms, which were developed primarily to match face photos. This challenge is mainly due to the large texture discrepancy between face sketches, characterized by shape exaggeration, and face photos. In this paper, we propose a training-free synthesized face sketch recognition method based on the rank-level fusion of multiple Image Quality Assessment (IQA) metrics. The advantages of IQA metrics as a recognition engine are combined with rank-level fusion to boost the final recognition accuracy. By integrating multiple IQA metrics into the face sketch recognition framework, the proposed method simultaneously performs face–sketch matching and evaluates the performance of face sketch synthesis methods. To test the performance of the recognition framework, five synthesized face sketch methods are used to generate sketches from face photos. We use the Borda count approach to fuse four IQA metrics, namely the structural similarity index metric, the feature similarity index metric, visual information fidelity, and gradient magnitude similarity deviation, at the rank level. Experimental results and comparison with state-of-the-art methods illustrate the competitiveness of the proposed synthesized face sketch recognition framework.
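Borda-count fusion itself is simple: each metric's ranking awards points by position, and the fused order sorts candidates by total points. A sketch with hypothetical candidate ids standing in for gallery identities ranked by three IQA metrics:

```python
def borda_fusion(rankings):
    """Fuse several rankings of the same gallery by Borda count.

    rankings: list of lists, each ordering candidate ids from best to worst
    (one list per IQA metric). Returns ids sorted by fused score, best first.
    """
    n = len(rankings[0])
    scores = {}
    for ranking in rankings:
        for rank, cand in enumerate(ranking):
            scores[cand] = scores.get(cand, 0) + (n - 1 - rank)  # best gets n-1 points
    return sorted(scores, key=scores.get, reverse=True)

# hypothetical rankings from three metrics (e.g. SSIM, FSIM, VIF)
print(borda_fusion([["a", "b", "c"], ["b", "a", "c"], ["a", "c", "b"]]))  # → ['a', 'b', 'c']
```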
  • Publication
    Kinship recognition from faces using deep learning with imbalanced data
    (2022)
    ; Othmani, Alice; Han, Duqing; Gao, Xin; Ye, Runpeng
    Kinship verification from faces aims to determine whether two persons share a family relationship based only on visual facial patterns. It has attracted significant interest in the scientific community due to its potential applications in social media mining and finding missing children. In this work, we propose a novel pattern analysis technique for kinship verification based on a new deep learning approach. More specifically, given a pair of face images, we first use ResNet50 to extract deep features from each image. Then, feature distances between each pair of images are computed. Importantly, to overcome the problem of unbalanced data, one-hot encoding of the labels is utilised. The distances are finally fed to a deep neural network to determine the kinship relation. Extensive experiments are conducted on the FIW dataset containing 11 classes of kinship relationships. The experiments showed very promising results and pointed out the importance of balancing the training dataset. Moreover, our approach showed an interesting ability to generalize. Results show that our approach performs better than all existing approaches on the grandparents–grandchildren type of kinship. To support the principle of open and reproducible research, we are soon making our code publicly available to the research community: github.com/Steven-HDQ/Kinship-Recognition.
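The distance step between the two embeddings can be sketched as follows; the element-wise L1 and squared components are an illustrative choice, since the abstract does not specify the exact distance function:

```python
def feature_distances(f1, f2):
    """Element-wise distance descriptors between two deep feature vectors.

    Returns the concatenation of absolute and squared differences, which
    would then be fed to the kinship classifier.
    """
    absdiff = [abs(a - b) for a, b in zip(f1, f2)]    # L1 components
    sqdiff = [(a - b) ** 2 for a, b in zip(f1, f2)]   # squared components
    return absdiff + sqdiff

# hypothetical 4-d embeddings standing in for ResNet50 outputs
parent = [0.1, 0.8, 0.3, 0.5]
child = [0.2, 0.7, 0.3, 0.9]
print(feature_distances(parent, child))
```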
  • Publication
    Knowledge-based Deep Learning for Modeling Chaotic Systems
    (2022)
    Elabid, Zakaria; ;
    Deep learning has received increased attention due to its remarkable success in many fields, such as computer vision, natural language processing, recommendation systems, and, most recently, simulating multiphysics problems and predicting nonlinear dynamical systems. However, modeling and forecasting the dynamics of chaotic systems remains an open research problem, since training deep learning models requires large amounts of data, which are not always available. Such deep learners can be trained using additional information obtained from simulated results and by enforcing the physical laws of the chaotic systems. This paper considers extreme events and their dynamics and proposes elegant models based on deep neural networks, called knowledge-based deep learning (KDL). Our proposed KDL can learn the complex patterns governing chaotic systems by jointly training on real and simulated data directly from the dynamics and their differential equations. This knowledge is transferred to model and forecast real-world chaotic events exhibiting extreme behavior. We validate the efficiency of our model by assessing it on three real-world benchmark datasets: El Niño sea surface temperature, San Juan dengue viral infection, and Bjørnøya daily precipitation, all governed by extreme-event dynamics. Using prior knowledge of extreme events and physics-based loss functions to guide the neural network learning, we ensure physically consistent, generalizable, and accurate forecasting, even in a small-data regime.
    Index Terms: Chaotic systems, long short-term memory, deep learning, extreme event modeling.
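The joint training idea — fit the observed data while penalizing violations of the governing equations — is commonly implemented as a weighted sum of a data loss and a physics-residual loss. A minimal sketch with hypothetical values; the paper's actual loss formulation and weighting may differ:

```python
def kdl_loss(pred, target, physics_residual, lam=0.5):
    """Knowledge-based loss sketch: data-fit MSE plus a penalty on the
    residual of the system's governing differential equation, evaluated
    at collocation points (all values here are hypothetical)."""
    data_loss = sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred)
    physics_loss = sum(r ** 2 for r in physics_residual) / len(physics_residual)
    return data_loss + lam * physics_loss

# hypothetical predictions, targets, and ODE residuals
print(kdl_loss([1.0, 2.0], [1.0, 2.5], [0.1, -0.1], lam=0.5))
```

Driving the physics residual toward zero is what keeps forecasts physically consistent even when the data term alone is underdetermined in a small-data regime.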