The performance of face recognition systems can be negatively impacted in the presence of masks and other types of facial coverings that have become prevalent due to the COVID-19 pandemic.
Secondly, we discuss the appropriate spectral bands for face recognition and review recent CFR methods, placing emphasis on deep neural networks.
The rapid emergence of airborne platforms and imaging sensors is enabling new forms of aerial surveillance due to their unprecedented advantages in scale, mobility, deployment and covert observation capabilities.
We performed experiments on AMSL face morph, MorGAN, and EMorGAN datasets to demonstrate the effectiveness of the proposed method.
This paper demonstrates the viability of utilizing a sensor with time-series and color-sensing capabilities to improve the robustness of a traditional fingerprint sensor and introduces a comprehensive fingerprint dataset with over 36,000 image sequences and a state-of-the-art set of spoofing techniques.
In this work, we propose a method to simultaneously perform (i) biometric recognition (i.e., identify the individual) and (ii) device recognition (i.e., identify the device) from a single biometric image, say, a face image, using a one-shot schema.
Automatic speaker recognition algorithms typically characterize speech audio using short-term spectral features that encode the physiological and anatomical aspects of speech production.
The Styling Network helps the generator to drive the translation of images from a source domain to a reference domain and generate synthetic images with style characteristics of the reference domain.
Unlike the problem of general object recognition, where real-valued neural networks can be used to extract pertinent features, iris recognition depends on the extraction of both phase and magnitude information from the input iris texture in order to better represent its biometric content.
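The phase-based representation mentioned above can be illustrated with a Daugman-style phase quantizer: convolve a 1-D texture signal with a complex Gabor filter and keep two bits per sample from the sign of the real and imaginary responses. This is a generic sketch of phase coding, not the specific network proposed in the paper, and the filter parameters are arbitrary example values.

```python
import math

def iris_phase_code(signal, wavelength=8.0, sigma=4.0):
    # Quantize the phase of a complex Gabor response into two bits per
    # sample: one bit for the sign of the real part, one for the
    # imaginary part. Magnitude could be thresholded similarly.
    half = int(3 * sigma)  # truncate the Gaussian envelope at 3 sigma
    code = []
    for c in range(half, len(signal) - half):
        re = im = 0.0
        for k in range(-half, half + 1):
            g = math.exp(-(k * k) / (2 * sigma * sigma))  # Gaussian envelope
            re += signal[c + k] * g * math.cos(2 * math.pi * k / wavelength)
            im += signal[c + k] * g * math.sin(2 * math.pi * k / wavelength)
        code.append((1 if re >= 0 else 0, 1 if im >= 0 else 0))
    return code
```

Two such codes are typically compared with a Hamming distance over the bit pairs.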
The principle of Photo Response Non Uniformity (PRNU) is often exploited to deduce the identity of the smartphone device whose camera or sensor was used to acquire a certain image.
1 Sep 2020 • Priyanka Das, Joseph McGrath, Zhaoyuan Fang, Aidan Boyd, Ganghee Jang, Amir Mohammadi, Sandip Purnapatra, David Yambay, Sébastien Marcel, Mateusz Trokielewicz, Piotr Maciejewicz, Kevin Bowyer, Adam Czajka, Stephanie Schuckers, Juan Tapia, Sebastian Gonzalez, Meiling Fang, Naser Damer, Fadi Boutros, Arjan Kuijper, Renu Sharma, Cunjian Chen, Arun Ross
Launched in 2013, LivDet-Iris is an international competition series open to academia and industry with the aim to assess and report advances in iris Presentation Attack Detection (PAD).
Automatic speaker recognition algorithms typically use pre-defined filterbanks, such as Mel-Frequency and Gammatone filterbanks, for characterizing speech audio.
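For illustration, a pre-defined Mel filterbank of the kind this line refers to can be constructed directly: triangular filters spaced evenly on the mel scale across the FFT bins. The sampling rate and filter count below are arbitrary example values, not ones taken from the paper.

```python
import math

def hz_to_mel(f):
    # O'Shaughnessy mel-scale formula
    return 2595.0 * math.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(sr=16000, n_fft=512, n_mels=26):
    # n_mels triangular filters over the n_fft // 2 + 1 spectrum bins,
    # with centers equally spaced in mel (not Hz).
    lo, hi = hz_to_mel(0.0), hz_to_mel(sr / 2.0)
    mel_pts = [lo + (hi - lo) * i / (n_mels + 1) for i in range(n_mels + 2)]
    bins = [int((n_fft + 1) * mel_to_hz(m) / sr) for m in mel_pts]
    fb = [[0.0] * (n_fft // 2 + 1) for _ in range(n_mels)]
    for i in range(1, n_mels + 1):
        left, center, right = bins[i - 1], bins[i], bins[i + 1]
        for k in range(left, center):   # rising edge of the triangle
            fb[i - 1][k] = (k - left) / max(center - left, 1)
        for k in range(center, right):  # falling edge
            fb[i - 1][k] = (right - k) / max(right - center, 1)
    return fb
```

Applying such a filterbank to a power spectrum, then taking logs and a DCT, yields the familiar MFCC features; learned filterbanks replace this fixed matrix with trainable parameters.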
We also evaluate the effect of gender and language on speaker recognition performance, both in spoken and singing voice data.
An iris recognition system is vulnerable to presentation attacks, or PAs, where an adversary presents artifacts such as printed eyes, plastic eyes, or cosmetic contact lenses to circumvent the system.
Further, PrivacyNet allows a person to choose specific attributes that have to be obfuscated in the input face images (e.g., age and race), while allowing for other types of attributes to be extracted (e.g., gender).
The need for reliably determining the identity of a person is critical in a number of different domains ranging from personal smartphones to border security; from autonomous vehicles to e-voting; from tracking child vaccinations to preventing human trafficking; from crime scene investigation to personalization of customer service.
In this work, we present a deep learning approach for pedestrian trajectory forecasting using a single vehicle-mounted camera.
In this regard, Semi-Adversarial Networks (SAN) have recently emerged as a method for imparting soft-biometric privacy to face images.
Designing face recognition systems that are capable of matching face images obtained in the thermal spectrum with those obtained in the visible spectrum is a challenging problem.
Prevailing user authentication schemes on smartphones rely on explicit user interaction, where a user types in a passcode or presents a biometric cue such as face, fingerprint, or iris.
The principle of Photo Response Non-Uniformity (PRNU) is used to link an image with its source, i.e., the sensor that produced it.
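The PRNU idea can be sketched in a few lines: estimate a camera "fingerprint" by averaging the noise residuals of several images from that sensor, then link a probe image to a camera by correlating its residual against candidate fingerprints. A simple mean filter stands in here for the wavelet denoiser used in practice, and all sizes are illustrative.

```python
def smooth(img):
    # 3x3 mean filter as a stand-in denoiser; image - smooth(image)
    # approximates the sensor noise carrying the PRNU signal.
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[yy][xx]
                    for yy in range(max(0, y - 1), min(h, y + 2))
                    for xx in range(max(0, x - 1), min(w, x + 2))]
            out[y][x] = sum(vals) / len(vals)
    return out

def residual(img):
    s = smooth(img)
    return [[img[y][x] - s[y][x] for x in range(len(img[0]))]
            for y in range(len(img))]

def fingerprint(images):
    # Average residuals over images from the same camera so scene
    # content cancels while the fixed sensor pattern accumulates.
    h, w = len(images[0]), len(images[0][0])
    fp = [[0.0] * w for _ in range(h)]
    for img in images:
        r = residual(img)
        for y in range(h):
            for x in range(w):
                fp[y][x] += r[y][x] / len(images)
    return fp

def ncc(a, b):
    # Normalized cross-correlation between a residual and a fingerprint.
    fa = [v for row in a for v in row]
    fb = [v for row in b for v in row]
    ma, mb = sum(fa) / len(fa), sum(fb) / len(fb)
    num = sum((x - ma) * (y - mb) for x, y in zip(fa, fb))
    da = sum((x - ma) ** 2 for x in fa) ** 0.5
    db = sum((y - mb) ** 2 for y in fb) ** 0.5
    return num / (da * db + 1e-12)
```

A probe is attributed to the camera whose fingerprint yields the highest correlation, typically after a peak-to-correlation-energy style normalization in real systems.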
Recent research has proposed the use of Semi-Adversarial Networks (SAN) for imparting privacy to face images.
Recent research has explored the possibility of automatically deducing information such as gender, age and race of an individual from their biometric data.
In this paper, we present Auto-Tuned Models, or ATM, a distributed, collaborative, scalable system for automated machine learning.
In this paper, we design and evaluate a convolutional autoencoder that perturbs an input face image to impart privacy to a subject.
This paper presents a framework for Biometrics-as-a-Service (BaaS) that performs biometric matching operations in the cloud, while relying on simple and ubiquitous consumer devices such as smartphones.
The proposed method, referred to as Latent Variable Evolution, is based on training a Generative Adversarial Network on a set of real fingerprint images.
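The latent-search component of such an approach can be illustrated with a toy (1+1) evolution strategy: mutate a latent vector and keep the mutant whenever a black-box match score improves. The fitness function below is a hypothetical stand-in for scoring a generated fingerprint against a matcher; the trained GAN generator itself is not modeled here.

```python
import random

def evolve_latent(fitness, dim=32, iters=200, sigma=0.1, seed=1):
    # (1+1) evolution strategy over a GAN latent vector: propose a
    # Gaussian mutation of the current latent and accept it if the
    # black-box fitness (e.g., a fingerprint match score) does not drop.
    rng = random.Random(seed)
    z = [rng.gauss(0, 1) for _ in range(dim)]
    best = fitness(z)
    for _ in range(iters):
        cand = [v + rng.gauss(0, sigma) for v in z]
        score = fitness(cand)
        if score >= best:
            z, best = cand, score
    return z, best
```

In the full method, the evolved latent is fed to the trained generator, so the search explores only the manifold of realistic fingerprint images rather than raw pixel space.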