Now showing 1 - 10 of 11
  • Publication
    Deepfakes Signatures Detection in the Handcrafted Features Space
    (2023)
    Hamadene, Assia; Ouahabi, Abdeldjalil
    In the Handwritten Signature Verification (HSV) literature, several synthetic databases have been developed for data-augmentation purposes, where new specimens and new identities are generated using bio-inspired algorithms, neuromotor synthesizers, Generative Adversarial Networks (GANs), and other deep learning methods. These synthetic databases contain synthetic genuine and forged specimens, which are used to train and build signature verification systems. Research on generative data assumes that synthetic data are as close as possible to real data, which is why they are either used to train systems in data-augmentation tasks or used to fool systems as synthetic attacks. It is worth pointing out, however, that handwritten signature authenticity is related to human behavior and the brain: a genuine signature is characterised by specific features related to its owner’s personality, which is what makes signature verification and authentication achievable. Handcrafted features have demonstrated a high capacity to capture personal traits when authenticating real static signatures. In this paper, we therefore propose a handcrafted-feature-based Writer-Independent (WI) signature verification system to detect synthetic writers and signatures. We also aim to assess how realistic synthetic signatures are and their impact on HSV system performance. Results obtained using 4000 synthetic writers from the GPDS synthetic database, compared against two widely used databases of real individuals’ signatures, namely CEDAR and GPDS-300, show that the proposed handcrafted features have considerable ability to detect synthetic signatures, reaching successful synthetic detection rates of 98.67% and 94.05%, respectively.
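    A minimal sketch of the writer-independent idea described above, assuming HOG descriptors as the handcrafted feature and a linear SVM over dichotomy-style difference vectors; the paper's actual features, classifier, and preprocessing are not specified here, and the random arrays only stand in for signature images:

    import numpy as np
    from skimage.feature import hog
    from skimage.transform import resize
    from sklearn.svm import LinearSVC

    def extract_features(gray_img):
        # Resize to a common canvas, then compute a HOG descriptor as a
        # stand-in handcrafted feature (the paper's exact features may differ).
        return hog(resize(gray_img, (128, 256)), orientations=9,
                   pixels_per_cell=(16, 16), cells_per_block=(2, 2))

    def pair_vector(f_query, f_reference):
        # Writer-independent, dichotomy-style representation: the absolute
        # difference of two feature vectors, classified as real vs synthetic.
        return np.abs(f_query - f_reference)

    # Dummy stand-ins for real and synthetic signature images (grayscale, in [0, 1]).
    rng = np.random.default_rng(0)
    real = [rng.random((150, 300)) for _ in range(20)]
    synth = [rng.random((150, 300)) for _ in range(20)]

    feats = [extract_features(im) for im in real + synth]
    ref = feats[0]                                   # one genuine reference specimen
    X = np.stack([pair_vector(f, ref) for f in feats[1:]])
    y = np.array([1] * 19 + [0] * 20)                # 1 = real writer, 0 = synthetic

    clf = LinearSVC(C=1.0).fit(X, y)
    print("training accuracy:", clf.score(X, y))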
  • Publication
    Evaluation of Pre-Trained CNN Models for Geographic Fake Image Detection
    (2022)
    Fezza, Sid; Ouis, Mohammed; Kaddar, Bachir; Hamidouche, Wassim
    Thanks to the remarkable advances in generative adversarial networks (GANs), it is becoming increasingly easy to generate and manipulate images. Existing works have mainly focused on deepfakes in face images and videos. However, we are currently witnessing the emergence of fake satellite images, which can be misleading or even threatening to national security. Consequently, there is an urgent need to develop detection methods capable of distinguishing between real and fake satellite images. To advance the field, in this paper, we explore the suitability of several convolutional neural network (CNN) architectures for fake satellite image detection. Specifically, we benchmark four CNN models by conducting extensive experiments to evaluate their performance and robustness against various image distortions. This work establishes new baselines and may be useful for the development of CNN-based methods for fake satellite image detection.
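    A hedged sketch of the benchmarking setup, assuming torchvision backbones fine-tuned as binary real/fake classifiers; the four CNN models actually evaluated and their training details are not given in the abstract, so the model names below are illustrative only (torchvision >= 0.13 assumed for the weights argument):

    import torch
    import torch.nn as nn
    from torchvision import models

    def build_detector(name="resnet50", pretrained=True):
        # Load an ImageNet-pretrained backbone and replace its classifier head
        # with a 2-way (real vs fake satellite image) output layer.
        weights = "IMAGENET1K_V1" if pretrained else None
        if name == "resnet50":
            net = models.resnet50(weights=weights)
            net.fc = nn.Linear(net.fc.in_features, 2)
        elif name == "vgg16":
            net = models.vgg16(weights=weights)
            net.classifier[-1] = nn.Linear(net.classifier[-1].in_features, 2)
        else:
            raise ValueError(name)
        return net

    # pretrained=False here only to keep the demo offline; set True to load ImageNet weights.
    model = build_detector("resnet50", pretrained=False)
    logits = model(torch.randn(4, 3, 224, 224))      # batch of 4 satellite tiles
    print(logits.shape)                              # torch.Size([4, 2])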
  • Publication
    Face Sketch Synthesis using Generative Adversarial Networks
    (2022)
    Mahfoud, Sami; Daamouche, Abdelhamid; Bengherabi, Messaoud; Boutellaa, Elhocine
    Face Sketch Synthesis is crucial for a wide range of practical applications, including digital entertainment and law enforcement. Recent approaches based on Generative Adversarial Networks (GANs) have shown compelling results in image-to-image translation as well as face photo-sketch synthesis. However, these methods still have considerable limitations, as noise appears in the synthesized sketches, leading to poor perceptual quality and poor fidelity preservation. To tackle this issue, in this paper, we propose a Face Sketch Synthesis technique using a conditional GAN, named cGAN-FSS, to generate facial sketches from facial photographs. Our cGAN-FSS framework generates face sketches of high perceptual quality while maintaining high identity recognition accuracy. Image Quality Assessment metrics and Face Recognition experiments confirm that our proposed framework performs better than state-of-the-art methods.
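    A hedged, pix2pix-style sketch of one conditional-GAN training step for photo-to-sketch translation; the true cGAN-FSS generator and discriminator architectures and loss weights are not described in the abstract, so the tiny networks and the L1 weight below are placeholders:

    import torch
    import torch.nn as nn

    # Placeholder photo-to-sketch generator and conditional discriminator.
    G = nn.Sequential(nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
                      nn.Conv2d(64, 1, 3, padding=1), nn.Tanh())
    D = nn.Sequential(nn.Conv2d(4, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
                      nn.Conv2d(64, 1, 4, stride=2, padding=1))

    adv, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()
    g_opt = torch.optim.Adam(G.parameters(), lr=2e-4)
    d_opt = torch.optim.Adam(D.parameters(), lr=2e-4)

    photo, sketch = torch.randn(2, 3, 64, 64), torch.randn(2, 1, 64, 64)

    # Discriminator step: real (photo, sketch) pairs vs generated pairs.
    fake = G(photo)
    d_real = D(torch.cat([photo, sketch], dim=1))
    d_fake = D(torch.cat([photo, fake.detach()], dim=1))
    d_loss = adv(d_real, torch.ones_like(d_real)) + adv(d_fake, torch.zeros_like(d_fake))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator step: fool D while staying close to the ground-truth sketch (L1 term).
    d_fake = D(torch.cat([photo, fake], dim=1))
    g_loss = adv(d_fake, torch.ones_like(d_fake)) + 100.0 * l1(fake, sketch)
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()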
  • Publication
    Hand-drawn face sketch recognition using rank-level fusion of image quality assessment metrics
    (2022)
    Mahfoud, Sami; Daamouche, Abdelhamid; Bengherabi, Messaoud
    Face Sketch Recognition (FSR) presents a severe challenge to conventional recognition paradigms developed primarily to match face photos. This challenge is mainly due to the large texture discrepancy between face sketches, characterized by shape exaggeration, and face photos. In this paper, we propose a training-free synthesized face sketch recognition method based on the rank-level fusion of multiple Image Quality Assessment (IQA) metrics. The advantages of IQA metrics as a recognition engine are combined with rank-level fusion to boost the final recognition accuracy. By integrating multiple IQA metrics into the face sketch recognition framework, the proposed method simultaneously performs face-sketch matching and evaluates the performance of face sketch synthesis methods. To test the performance of the recognition framework, five synthesized face sketch methods are used to generate sketches from face photos. We use the Borda count approach to fuse four IQA metrics, namely the structural similarity index metric, the feature similarity index metric, visual information fidelity and gradient magnitude similarity deviation, at the rank level. Experimental results and comparison with state-of-the-art methods illustrate the competitiveness of the proposed synthesized face sketch recognition framework.
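    A small sketch of the rank-level Borda count fusion step, assuming the four IQA score matrices (probe sketches against gallery identities) have already been computed; note that GMSD is a distance-like measure, so its ranking direction is inverted:

    import numpy as np

    def borda_fusion(score_matrices, higher_is_better):
        # score_matrices: list of (n_probes, n_gallery) matrices, one per IQA metric
        # (e.g., SSIM, FSIM, VIF, GMSD). For distance-like metrics pass False.
        n_probes, n_gallery = score_matrices[0].shape
        points = np.zeros((n_probes, n_gallery))
        for scores, hib in zip(score_matrices, higher_is_better):
            order = np.argsort(-scores if hib else scores, axis=1)   # best first
            ranks = np.empty_like(order)
            rows = np.arange(n_probes)[:, None]
            ranks[rows, order] = np.arange(n_gallery)                # 0 = best rank
            points += (n_gallery - 1) - ranks                        # Borda points
        return points.argmax(axis=1)                                 # fused top-1 identity

    # Toy example: 3 probe sketches scored against a gallery of 5 identities by
    # four (hypothetical, precomputed) IQA metrics.
    rng = np.random.default_rng(0)
    ssim, fsim, vif, gmsd = (rng.random((3, 5)) for _ in range(4))
    print(borda_fusion([ssim, fsim, vif, gmsd], [True, True, True, False]))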
  • Publication
    Kinship recognition from faces using deep learning with imbalanced data
    (2022)
    Othmani, Alice; Han, Duqing; Gao, Xin; Ye, Runpeng
    Kinship verification from faces aims to determine whether two persons share a family relationship based only on visual facial patterns. It has attracted significant interest among the scientific community due to its potential applications in social media mining and finding missing children. In this work, we propose a novel pattern analysis technique for kinship verification based on a new deep learning-based approach. More specifically, given a pair of face images, we first use ResNet50 to extract deep features from each image. Then, feature distances between each pair of images are computed. Importantly, to overcome the problem of unbalanced data, one-hot encoding of the labels is utilised. The distances are finally fed to a deep neural network to determine the kinship relation. Extensive experiments are conducted on the FIW dataset, which contains 11 classes of kinship relationships. The experiments show very promising results and point out the importance of balancing the training dataset. Moreover, our approach shows an interesting ability to generalize. Results show that our approach performs better than all existing approaches on the grandparents-grandchildren type of kinship. To support the principle of open and reproducible research, we will soon make our code publicly available to the research community: github.com/Steven-HDQ/Kinship-Recognition.
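    A hedged sketch of the described pipeline: ResNet50 deep features per face, an elementwise distance vector per pair, and a small fully connected classifier; the layer sizes and the 11-way output (one per FIW kinship class) are assumptions, not the paper's exact configuration:

    import torch
    import torch.nn as nn
    from torchvision import models

    # Feature extractor: ResNet50 with the final classification layer removed.
    backbone = models.resnet50(weights=None)     # set weights="IMAGENET1K_V1" for pretrained
    backbone.fc = nn.Identity()
    backbone.eval()

    def pair_distance(img_a, img_b):
        # Deep features for each face, then an elementwise absolute-difference vector.
        with torch.no_grad():
            fa, fb = backbone(img_a), backbone(img_b)
        return torch.abs(fa - fb)

    # Small classifier over the distance vector, assumed to output one score per
    # FIW kinship class (11 classes).
    head = nn.Sequential(nn.Linear(2048, 256), nn.ReLU(), nn.Dropout(0.3),
                         nn.Linear(256, 11))

    a, b = torch.randn(2, 3, 224, 224), torch.randn(2, 3, 224, 224)
    logits = head(pair_distance(a, b))
    print(logits.shape)                          # torch.Size([2, 11])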
  • Publication
    Knowledge-based Deep Learning for Modeling Chaotic Systems
    (2022)
    Elabid, Zakaria
    Deep Learning has received increased attention due to its unbeatable success in many fields, such as computer vision, natural language processing, recommendation systems, and most recently in simulating multiphysics problems and predicting nonlinear dynamical systems. However, modeling and forecasting the dynamics of chaotic systems remains an open research problem, since training deep learning models requires big data, which is not always available. Such deep learners can be trained from additional information obtained from simulated results and by enforcing the physical laws of the chaotic systems. This paper considers extreme events and their dynamics and proposes elegant models based on deep neural networks, called knowledge-based deep learning (KDL). Our proposed KDL can learn the complex patterns governing chaotic systems by jointly training on real and simulated data directly from the dynamics and their differential equations. This knowledge is transferred to model and forecast real-world chaotic events exhibiting extreme behavior. We validate the efficiency of our model by assessing it on three real-world benchmark datasets: El Niño sea surface temperature, San Juan Dengue viral infection, and Bjørnøya daily precipitation, all governed by extreme-event dynamics. Using prior knowledge of extreme events and physics-based loss functions to guide the neural network learning, we ensure physically consistent, generalizable, and accurate forecasting, even in a small-data regime.
    Index Terms: Chaotic systems, long short-term memory, deep learning, extreme event modeling.
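    A hedged sketch of the knowledge-based training idea: an LSTM forecaster whose loss combines a data term with a physics residual derived from governing differential equations; the Lorenz-63 equations and the 0.1 loss weight below are placeholders, not the systems or settings used in the paper:

    import torch
    import torch.nn as nn

    class LSTMForecaster(nn.Module):
        def __init__(self, dim=3, hidden=64):
            super().__init__()
            self.lstm = nn.LSTM(dim, hidden, batch_first=True)
            self.out = nn.Linear(hidden, dim)
        def forward(self, x):                    # x: (batch, time, dim)
            h, _ = self.lstm(x)
            return self.out(h)                   # state prediction at every step

    def lorenz_rhs(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        # Placeholder governing equations (Lorenz-63); KDL would use the ODEs
        # of the actual chaotic system being modelled.
        x, y, z = s[..., 0], s[..., 1], s[..., 2]
        return torch.stack([sigma * (y - x), x * (rho - z) - y, x * y - beta * z], dim=-1)

    model, mse, dt = LSTMForecaster(), nn.MSELoss(), 0.01
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)

    seq = torch.randn(8, 50, 3)                  # real + simulated trajectories (dummy here)
    target = torch.randn(8, 50, 3)
    pred = model(seq)
    data_loss = mse(pred, target)
    # Physics residual: finite-difference derivative of the prediction should match the ODE.
    phys_res = (pred[:, 1:] - pred[:, :-1]) / dt - lorenz_rhs(pred[:, :-1])
    loss = data_loss + 0.1 * (phys_res ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()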
  • Publication
    On the effectiveness of handcrafted features for deepfake video detection
    (2023)
    Kaddar, Bachir; Fezza, Sid Ahmed; Hamidouche, Wassim; Akhtar, Zahid
    Recent developments in advanced generative deep learning techniques have led to considerable progress in deepfake technology. CNN-based deepfake detection approaches have demonstrated superior performance. The ability to learn meaningful representations generated by convolutional multilayer nonlinear structures is the key to success. However, the black-box nature of such approaches has been a major concern for exploring hidden and complex characteristics as well as potential limitations of CNN-based models. To gain insights into the scope of the deepfake detection task, we investigate the effectiveness of handcrafted feature-based methods for deepfake video detection. First, we experiment with six top-performing handcrafted descriptors to extract the discriminating image features and then train SVMs on the extracted features to learn a suitable model. We also study the effect of selecting specific facial components on the detection performance. Specifically, we consider features extracted from the left eye, right eye, mouth, and entire face. Moreover, we propose a combination of these features and highlight the importance of this combination in terms of detection performance. Experimental results show that the SIFT feature descriptor achieves the best performance on deepfake videos generated by the neural texture technique, with a detection accuracy of 83.50%, which is better than deep learning-based methods. This is in contrast to the conventional understanding that deep learning methods systematically outperform handcrafted feature-based approaches. In addition, the results obtained on the FaceForensics++ dataset highlight the benefit of using some facial components to further boost the detection performance. Moreover, motivated by the effectiveness of LBP-TOP and SIFT in the deepfake detection task, we combine LBP-TOP and SIFT to best characterize the specific spatiotemporal inconsistencies commonly found in fake videos and boost deepfake detection performance. Finally, we show the strengths and weaknesses of methods based on handcrafted features for deepfake detection and provide directions for future research.
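    A minimal sketch of the handcrafted-feature baseline described above, using OpenCV SIFT descriptors from a face crop pooled by simple averaging and classified with an SVM; the averaging step is a simplification (a bag-of-words encoding is more common), and the random crops only stand in for FaceForensics++ frames:

    import cv2
    import numpy as np
    from sklearn.svm import SVC

    sift = cv2.SIFT_create()                     # requires opencv-python >= 4.4

    def sift_vector(gray_face, dim=128):
        # Detect SIFT keypoints on a face crop (or a single component such as an
        # eye or the mouth) and average the descriptors into one fixed-length vector.
        _, desc = sift.detectAndCompute(gray_face, None)
        return np.zeros(dim, np.float32) if desc is None else desc.mean(axis=0)

    # Dummy real/fake face crops; in practice these come from extracted video frames.
    rng = np.random.default_rng(0)
    crops = [rng.integers(0, 256, (128, 128), dtype=np.uint8) for _ in range(40)]
    X = np.stack([sift_vector(c) for c in crops])
    y = np.array([0] * 20 + [1] * 20)            # 0 = real, 1 = fake

    clf = SVC(kernel="rbf").fit(X, y)
    print("training accuracy:", clf.score(X, y))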
  • Publication
    Probabilistic AutoRegressive Neural Networks for Accurate Long-Range Forecasting
    (2023)
    Panja, Madhurima; Kumar, Uttam
    Forecasting time series data is a critical area of research with applications spanning from stock prices to early epidemic prediction. While numerous statistical and machine learning methods have been proposed, real-life prediction problems often require hybrid solutions that bridge classical forecasting approaches and modern neural network models. In this study, we introduce a Probabilistic AutoRegressive Neural Network (PARNN) capable of handling complex time series data exhibiting non-stationarity, nonlinearity, non-seasonality, long-range dependence, and chaotic patterns. PARNN is constructed by improving autoregressive neural networks (ARNN) using autoregressive integrated moving average (ARIMA) feedback errors. Notably, the PARNN model provides uncertainty quantification through prediction intervals and conformal predictions, setting it apart from advanced deep learning tools. Through comprehensive computational experiments, we evaluate the performance of PARNN against standard statistical, machine learning, and deep learning models. Diverse real-world datasets from macroeconomics, tourism, epidemiology, and other domains are employed for short-term, medium-term, and long-term forecasting evaluations. Our results demonstrate the superiority of PARNN across various forecast horizons, surpassing state-of-the-art forecasters. The proposed PARNN model offers a valuable hybrid solution for accurate long-range forecasting. The ability to quantify uncertainty through prediction intervals further enhances the model’s usefulness in various decision-making processes.
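    A hedged sketch of the hybrid ARIMA-plus-neural idea: fit an ARIMA model, then train a small autoregressive network on lagged observations augmented with the ARIMA feedback errors; the ARIMA order, lag length, and network size are placeholders rather than the paper's settings, and prediction intervals are not shown:

    import numpy as np
    from statsmodels.tsa.arima.model import ARIMA
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    y = np.cumsum(rng.normal(size=300))          # dummy univariate series

    # Stage 1: ARIMA fit; its in-sample residuals act as the feedback-error signal.
    arima = ARIMA(y, order=(2, 1, 1)).fit()
    resid = arima.resid

    # Stage 2: autoregressive neural network on [lagged values, lagged ARIMA error].
    p = 5
    X = np.column_stack([np.column_stack([y[i:len(y) - p + i] for i in range(p)]),
                         resid[p - 1:-1]])
    t = y[p:]
    nn = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X, t)

    # One-step-ahead forecast from the last window.
    x_last = np.concatenate([y[-p:], [resid[-1]]]).reshape(1, -1)
    print("next value:", nn.predict(x_last)[0])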
  • Publication
    Vehicular Environment Identification Based on Channel State Information and Deep Learning
    (2022)
    Ribouh, Soheyb; Sadli, Rahmad; Elhillali, Yassin; Rivenq, Atika
    This paper presents a novel vehicular environment identification approach based on deep learning. It consists of exploiting the vehicular wireless channel characteristics, in the form of Channel State Information (CSI), on the receiver side of a connected vehicle in order to identify the type of environment in which the vehicle is driving, without any need for specific sensors such as cameras or radars. We consider environment identification as a classification problem and propose a new convolutional neural network (CNN) architecture to deal with it. The estimated CSI is used as the input feature to train the model. To perform the identification process, the model is targeted for implementation in an autonomous vehicle connected to a vehicular network (VN). The proposed model is extensively evaluated, showing that it can reliably recognize the surrounding environment with high accuracy (96.48%). Our model is compared with related approaches and state-of-the-art classification architectures. The experiments show that our proposed model yields favorable performance compared to all other considered methods.
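    A hedged sketch of a CNN that classifies an estimated CSI vector (treated as a two-channel 1-D signal over subcarriers) into environment types; the number of subcarriers, the three example classes, and the layer sizes are assumptions, not the architecture proposed in the paper:

    import torch
    import torch.nn as nn

    N_SUBCARRIERS, N_CLASSES = 64, 3             # assumed sizes (e.g., highway / rural / urban)

    class CSIEnvironmentCNN(nn.Module):
        def __init__(self):
            super().__init__()
            # CSI as a 2-channel 1-D signal over subcarriers (real and imaginary parts).
            self.features = nn.Sequential(
                nn.Conv1d(2, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
                nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2))
            self.classifier = nn.Sequential(
                nn.Flatten(), nn.Linear(64 * (N_SUBCARRIERS // 4), 128), nn.ReLU(),
                nn.Linear(128, N_CLASSES))
        def forward(self, csi):
            return self.classifier(self.features(csi))

    model = CSIEnvironmentCNN()
    csi_batch = torch.randn(8, 2, N_SUBCARRIERS) # estimated CSI at the receiver
    print(model(csi_batch).shape)                # torch.Size([8, 3])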
  • Publication
    W-Transformers: A Wavelet-based Transformer Framework for Univariate Time Series Forecasting
    Deep learning utilizing transformers has recently achieved a lot of success in many vital areas such as natural language processing, computer vision, anomaly detection, and recommendation systems, among many others. Among the several merits of transformers, the ability to capture long-range temporal dependencies and interactions is desirable for time series forecasting, leading to its progress in various time series applications. In this paper, we build a transformer model for non-stationary time series. The problem is challenging yet crucially important. We present a novel framework for univariate time series representation learning based on a wavelet-based transformer encoder architecture and call it W-Transformer. The proposed W-Transformer applies a maximal overlap discrete wavelet transform (MODWT) to the time series data and builds local transformers on the decomposed datasets to capture the nonstationarity and long-range nonlinear dependencies in the time series. Evaluating our framework on several publicly available benchmark time series datasets from various domains and with diverse characteristics, we demonstrate that it performs, on average, significantly better than the baseline forecasters for short-term and long-term forecasting, even for datasets that consist of only a few hundred training samples.
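    A hedged sketch of the decomposition-plus-local-models idea using PyWavelets' stationary wavelet transform (an undecimated transform used here as a stand-in for MODWT) and one small transformer encoder per component; wavelet, level, and model sizes are placeholders, and the training loop is omitted:

    import numpy as np
    import pywt
    import torch
    import torch.nn as nn

    class LocalTransformer(nn.Module):
        # One small transformer encoder per wavelet component; sizes are placeholders.
        def __init__(self, d_model=16):
            super().__init__()
            self.embed = nn.Linear(1, d_model)
            layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True)
            self.encoder = nn.TransformerEncoder(layer, num_layers=2)
            self.head = nn.Linear(d_model, 1)
        def forward(self, x):                    # x: (batch, time, 1)
            return self.head(self.encoder(self.embed(x)))

    rng = np.random.default_rng(0)
    series = rng.normal(size=256)                # swt needs length divisible by 2**level

    # Undecimated decomposition: one smooth component plus `level` detail components,
    # all at the original length.
    level = 2
    coeffs = pywt.swt(series, wavelet="db4", level=level)
    components = [coeffs[0][0]] + [cD for _, cD in coeffs]

    models = [LocalTransformer() for _ in components]
    forecasts = [m(torch.tensor(c, dtype=torch.float32).reshape(1, -1, 1))
                 for m, c in zip(models, components)]
    combined = torch.stack(forecasts).sum(dim=0) # recombined per-step prediction
    print(combined.shape)                        # torch.Size([1, 256, 1])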