Search Results for author: Madina Abdrakhmanova

Found 4 papers, 2 papers with code

Learning Consistent Deep Generative Models from Sparsely Labeled Data

no code implementations • Approximate Inference (AABI) Symposium 2022 • Gabriel Hope, Madina Abdrakhmanova, Xiaoyin Chen, Michael C. Hughes, Erik B. Sudderth

We consider training deep generative models toward two simultaneous goals: discriminative classification and generative modeling using an explicit likelihood.

Image Classification
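
A minimal sketch of the general recipe behind the paper above: one VAE-style model trained toward both goals at once, with a likelihood-based (ELBO) term and a classification term applied only where labels exist. The architecture, loss weighting, and `label_weight` knob are generic assumptions for illustration, not the specific prediction-constrained objective developed in these papers.

```python
# Illustrative sketch (not the paper's exact method): a VAE whose latent code
# also feeds a classifier, trained with a weighted sum of generative and
# discriminative losses. The cross-entropy term is used only for labeled data.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SemiSupervisedVAE(nn.Module):
    def __init__(self, x_dim=784, z_dim=32, n_classes=10):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, 256), nn.ReLU())
        self.mu = nn.Linear(256, z_dim)
        self.logvar = nn.Linear(256, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim, 256), nn.ReLU(), nn.Linear(256, x_dim))
        self.clf = nn.Linear(z_dim, n_classes)  # discriminative head on the latent code

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        return self.dec(z), self.clf(mu), mu, logvar

def joint_loss(model, x, y=None, label_weight=10.0):
    # x is assumed to be flattened and scaled to [0, 1] for the Bernoulli likelihood.
    x_hat, logits, mu, logvar = model(x)
    recon = F.binary_cross_entropy_with_logits(x_hat, x, reduction="sum") / x.size(0)
    kl = -0.5 * torch.mean(torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=1))
    loss = recon + kl                     # generative goal: explicit likelihood (ELBO)
    if y is not None:                     # discriminative goal, labeled examples only
        loss = loss + label_weight * F.cross_entropy(logits, y)
    return loss
```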

A Study of Multimodal Person Verification Using Audio-Visual-Thermal Data

1 code implementation • 23 Oct 2021 • Madina Abdrakhmanova, Saniya Abushakimova, Yerbolat Khassanov, Huseyin Atakan Varol

In this paper, we study an approach to multimodal person verification using audio, visual, and thermal modalities.
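
Below is a minimal sketch of one common way to combine modalities for verification: embedding-level fusion of per-modality embeddings followed by cosine scoring against an enrolled template. The encoders, embedding dimensions, and decision threshold are placeholder assumptions, not the fusion strategies evaluated in the paper.

```python
# Illustrative sketch of multimodal verification scoring: per-modality
# embeddings are L2-normalized, concatenated, and compared to an enrolled
# template with cosine similarity. Encoders are placeholders (random tensors).
import torch
import torch.nn.functional as F

def fuse(audio_emb, visual_emb, thermal_emb):
    # Embedding-level fusion: normalize each modality, then concatenate.
    parts = [F.normalize(e, dim=-1) for e in (audio_emb, visual_emb, thermal_emb)]
    return F.normalize(torch.cat(parts, dim=-1), dim=-1)

def verify(enrolled, probe, threshold=0.5):
    # Accept if the cosine similarity of the fused embeddings exceeds a threshold.
    score = F.cosine_similarity(enrolled, probe, dim=-1)
    return score, score > threshold

# Usage with placeholder embeddings (in practice these would come from
# trained audio / visual / thermal encoders):
a, v, t = torch.randn(1, 192), torch.randn(1, 512), torch.randn(1, 512)
enrolled = fuse(a, v, t)
probe = fuse(torch.randn(1, 192), torch.randn(1, 512), torch.randn(1, 512))
score, accepted = verify(enrolled, probe)
```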

Learning Consistent Deep Generative Models from Sparse Data via Prediction Constraints

no code implementations • 12 Dec 2020 • Gabriel Hope, Madina Abdrakhmanova, Xiaoyin Chen, Michael C. Hughes, Erik B. Sudderth

We develop a new framework for learning variational autoencoders and other deep generative models that balances generative and discriminative goals.

General Classification • Image Classification

SpeakingFaces: A Large-Scale Multimodal Dataset of Voice Commands with Visual and Thermal Video Streams

1 code implementation • 5 Dec 2020 • Madina Abdrakhmanova, Askat Kuzdeuov, Sheikh Jarju, Yerbolat Khassanov, Michael Lewis, Huseyin Atakan Varol

We present SpeakingFaces as a publicly available large-scale multimodal dataset developed to support machine learning research in contexts that utilize a combination of thermal, visual, and audio data streams; examples include human-computer interaction, biometric authentication, recognition systems, domain transfer, and speech recognition.

Speech Recognition +1
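
As a hypothetical illustration only, the sketch below shows how synchronized visual, thermal, and audio streams from a dataset of this kind might be grouped into a single training sample. The directory layout and field names are invented for the example and do not describe the actual SpeakingFaces release.

```python
# Hypothetical example of pairing time-aligned streams into one sample; the
# paths and naming scheme below are assumptions made for illustration only.
from dataclasses import dataclass
from pathlib import Path

@dataclass
class MultimodalSample:
    visual_frames: list[Path]   # RGB frame paths for one utterance
    thermal_frames: list[Path]  # time-aligned thermal frame paths
    audio: Path                 # waveform of the spoken command
    transcript: str             # command text, e.g. for speech recognition

def load_sample(root: Path, subject: str, utterance: str, text: str) -> MultimodalSample:
    return MultimodalSample(
        visual_frames=sorted((root / subject / "rgb" / utterance).glob("*.png")),
        thermal_frames=sorted((root / subject / "thermal" / utterance).glob("*.png")),
        audio=root / subject / "audio" / f"{utterance}.wav",
        transcript=text,
    )
```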
