Search Results for author: Andrey V. Savchenko

Found 12 papers, 5 papers with code

HSEmotion Team at the 6th ABAW Competition: Facial Expressions, Valence-Arousal and Emotion Intensity Prediction

no code implementations · 18 Mar 2024 · Andrey V. Savchenko

This article presents our results for the sixth Affective Behavior Analysis in-the-wild (ABAW) competition.

EmotiEffNet Facial Features in Uni-task Emotion Recognition in Video at ABAW-5 competition

1 code implementation · 16 Mar 2023 · Andrey V. Savchenko

This article presents our team's results for the fifth Affective Behavior Analysis in-the-wild (ABAW) competition.

Action Unit Detection · Arousal Estimation · +2

HSE-NN Team at the 4th ABAW Competition: Multi-task Emotion Recognition and Learning from Synthetic Images

1 code implementation · 19 Jul 2022 · Andrey V. Savchenko

In the learning-from-synthetic-data challenge, the quality of the original synthetic training set is improved by applying super-resolution techniques such as Real-ESRGAN.

Emotion Recognition · Multi-Task Learning · +1
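The synthetic-image preprocessing described above can be sketched as a resize step ahead of training. The toy nearest-neighbour upscaling below is only a stand-in showing where a learned super-resolution model such as Real-ESRGAN would sit in the pipeline; the crop sizes are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def upscale_nearest(img, factor=4):
    """Nearest-neighbour upscaling as a stand-in for Real-ESRGAN.

    In the paper's pipeline the synthetic training images are enhanced
    with a learned super-resolution model; this toy resize only marks
    where that enhancement step occurs in preprocessing.
    """
    return img.repeat(factor, axis=0).repeat(factor, axis=1)

# A tiny synthetic "face crop": 56x56 RGB upscaled to 224x224 before training.
crop = np.zeros((56, 56, 3), dtype=np.uint8)
hires = upscale_nearest(crop, factor=4)
print(hires.shape)  # (224, 224, 3)
```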

Frame-level Prediction of Facial Expressions, Valence, Arousal and Action Units for Mobile Devices

1 code implementation · 25 Mar 2022 · Andrey V. Savchenko

In this paper, we consider the problem of real-time video-based facial emotion analytics, namely facial expression recognition, prediction of valence and arousal, and detection of action unit points.

Arousal Estimation · Emotion Recognition · +2

Facial expression and attributes recognition based on multi-task learning of lightweight neural networks

2 code implementations · 31 Mar 2021 · Andrey V. Savchenko

In this paper, the multi-task learning of lightweight convolutional neural networks is studied for face identification and classification of facial attributes (age, gender, ethnicity) trained on cropped faces without margins.

Emotion Classification · Emotion Recognition · +2
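Multi-task training of the kind described above typically minimizes a sum of per-task losses over one shared network. The equal task weighting and toy numpy losses below are illustrative assumptions, not the paper's exact objective.

```python
import numpy as np

def softmax_cross_entropy(logits, label):
    """Cross-entropy loss for one example from a head's raw outputs."""
    z = logits - logits.max()                 # numerical stability
    log_probs = z - np.log(np.exp(z).sum())
    return -log_probs[label]

rng = np.random.default_rng(0)
# Toy outputs of three heads that share one lightweight backbone.
id_logits = rng.normal(size=1000)     # face identity classes
age_logits = rng.normal(size=100)     # age bins
gender_logits = rng.normal(size=2)

# One combined objective drives the shared weights (equal weights here;
# per-task weighting schemes are a common refinement).
total_loss = (softmax_cross_entropy(id_logits, label=42)
              + softmax_cross_entropy(age_logits, label=30)
              + softmax_cross_entropy(gender_logits, label=1))
print(total_loss > 0)  # True
```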

Event Recognition with Automatic Album Detection based on Sequential Processing, Neural Attention and Image Captioning

no code implementations · 25 Nov 2019 · Andrey V. Savchenko

However, our approach can be combined with conventional CNNs in an ensemble to achieve state-of-the-art results on several event datasets.

Clustering · Image Captioning

Compression of Recurrent Neural Networks for Efficient Language Modeling

no code implementations · 6 Feb 2019 · Artem M. Grachev, Dmitry I. Ignatov, Andrey V. Savchenko

We propose a general pipeline for applying the most suitable methods to compress recurrent neural networks for language modeling.

Language Modelling · Quantization
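One of the compression methods such a pipeline can apply is post-training weight quantization. Below is a minimal sketch of uniform symmetric 8-bit quantization of a weight matrix; the scheme and numpy implementation are illustrative assumptions, not the authors' exact method.

```python
import numpy as np

def quantize_int8(w):
    """Uniform symmetric 8-bit quantization of a weight matrix.

    Returns the int8 codes and the scale needed to dequantize.
    """
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

# A toy recurrent weight matrix: 32-bit floats -> 8-bit codes (4x smaller).
rng = np.random.default_rng(0)
w = rng.normal(scale=0.1, size=(256, 256)).astype(np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
print(q.dtype, np.abs(w - w_hat).max() < scale)  # int8 True
```

Rounding error is bounded by half a quantization step, so accuracy typically degrades only slightly while memory shrinks fourfold.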

Efficient Facial Representations for Age, Gender and Identity Recognition in Organizing Photo Albums using Multi-output CNN

no code implementations · 20 Jul 2018 · Andrey V. Savchenko

We modified a MobileNet pre-trained for face recognition so that it additionally recognizes age and gender.

Clustering · Face Identification · +1
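The multi-output idea above can be sketched as one shared embedding feeding several small heads, so a single backbone pass yields identity, age, and gender predictions. The layer sizes and the numpy stand-in below are illustrative assumptions, not the modified MobileNet itself.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for the shared backbone output (e.g. a MobileNet embedding).
embedding_dim = 1024
face_embedding = rng.normal(size=embedding_dim)

def linear_head(x, out_dim, seed):
    """A toy dense head on top of the shared embedding."""
    r = np.random.default_rng(seed)
    w = r.normal(scale=0.01, size=(out_dim, x.size))
    b = np.zeros(out_dim)
    return w @ x + b

# One backbone pass, three predictions: identity, age, gender.
identity_logits = linear_head(face_embedding, 512, seed=2)  # identity classes
age_logits = linear_head(face_embedding, 100, seed=3)       # age bins 0..99
gender_logits = linear_head(face_embedding, 2, seed=4)      # two classes

predicted_age = int(np.argmax(age_logits))
print(identity_logits.shape, age_logits.shape, gender_logits.shape)
```

Sharing the backbone is what keeps the model cheap enough to run once per photo while serving all three attribute tasks.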

Group-level Emotion Recognition using Transfer Learning from Face Identification

1 code implementation · 6 Sep 2017 · Alexandr G. Rassadin, Alexey S. Gruzdev, Andrey V. Savchenko

In this paper, we describe our algorithmic approach, which was used for submissions in the fifth Emotion Recognition in the Wild (EmotiW 2017) group-level emotion recognition sub-challenge.

Emotion Recognition · Face Identification · +1

Maximum A Posteriori Estimation of Distances Between Deep Features in Still-to-Video Face Recognition

no code implementations · 26 Aug 2017 · Andrey V. Savchenko, Natalya S. Belova

The paper addresses still-to-video face recognition in the small-sample-size setting, based on computing distances between high-dimensional deep bottleneck features.

Face Recognition · Video Recognition
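The distance computation above can be sketched as matching every video frame's deep feature against one gallery still per subject and aggregating the frame-level distances. The mean aggregation and random features below are illustrative stand-ins; the paper derives a maximum a posteriori estimate of these distances rather than a simple average.

```python
import numpy as np

rng = np.random.default_rng(0)

def l2_normalize(x):
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

# Gallery: one still image (deep bottleneck feature) per enrolled subject.
gallery = l2_normalize(rng.normal(size=(5, 128)))          # 5 subjects
# Probe: several frames of one video of subject 3, as noisy copies.
frames = l2_normalize(gallery[3] + 0.1 * rng.normal(size=(10, 128)))

# Squared Euclidean distance from every frame to every gallery still.
dists = ((frames[:, None, :] - gallery[None, :, :]) ** 2).sum(-1)

# Aggregate frame-level distances per subject (mean here; the paper
# replaces this simple average with a MAP estimate of the distance).
subject_score = dists.mean(axis=0)
predicted_subject = int(subject_score.argmin())
print(predicted_subject)
```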

Neural Networks Compression for Language Modeling

no code implementations · 20 Aug 2017 · Artem M. Grachev, Dmitry I. Ignatov, Andrey V. Savchenko

In this paper, we consider several compression techniques for the language modeling problem based on recurrent neural networks (RNNs).

Language Modelling · Quantization
