Search Results for author: Viktor Rozgic

Found 8 papers, 0 papers with code

Federated Self-Supervised Learning for Acoustic Event Classification

no code implementations · 22 Mar 2022 · Meng Feng, Chieh-Chi Kao, Qingming Tang, Ming Sun, Viktor Rozgic, Spyros Matsoukas, Chao Wang

Standard acoustic event classification (AEC) solutions require large-scale collection of data from client devices for model optimization.
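The federated setting named in the title keeps raw audio on client devices and aggregates only model parameters. A minimal sketch of FedAvg-style weighted averaging, a standard federated aggregation rule (not necessarily this paper's exact procedure; `fed_avg` and the toy weights are illustrative):

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    """Weighted average of per-client model weights (FedAvg-style).

    client_weights: list of 1-D numpy arrays, one parameter vector per client.
    client_sizes: number of local training examples per client.
    """
    total = sum(client_sizes)
    stacked = np.stack(client_weights)                     # (num_clients, dim)
    coeffs = np.array(client_sizes, dtype=float) / total   # data-proportional weights
    return (coeffs[:, None] * stacked).sum(axis=0)

# Two clients: one with 100 local examples, one with 300.
w_global = fed_avg([np.array([1.0, 0.0]), np.array([0.0, 1.0])], [100, 300])
# -> [0.25, 0.75]
```

The server never sees client audio, only the parameter vectors being averaged.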

Tasks: Classification, Continual Learning, +2

Sentiment-Aware Automatic Speech Recognition pre-training for enhanced Speech Emotion Recognition

no code implementations · 27 Jan 2022 · Ayoub Ghriss, Bo Yang, Viktor Rozgic, Elizabeth Shriberg, Chao Wang

We pre-train the SER model simultaneously on Automatic Speech Recognition (ASR) and sentiment classification tasks to make the acoustic ASR model more "emotion aware".
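A joint pre-training objective of this kind can be sketched as a weighted sum of the two task losses computed over a shared acoustic encoder. The helper `joint_pretrain_loss` and the weight `alpha` are illustrative assumptions, not the paper's formulation:

```python
import numpy as np

def joint_pretrain_loss(asr_loss, sentiment_logits, sentiment_label, alpha=0.5):
    """Weighted sum of an ASR loss and a sentiment cross-entropy.

    asr_loss is assumed precomputed (e.g. a CTC or seq2seq loss) on the
    shared acoustic encoder; alpha balances the two tasks.
    """
    # Numerically stable cross-entropy for the sentiment head.
    z = sentiment_logits - sentiment_logits.max()
    log_probs = z - np.log(np.exp(z).sum())
    sent_loss = -log_probs[sentiment_label]
    return alpha * asr_loss + (1.0 - alpha) * sent_loss

loss = joint_pretrain_loss(1.0, np.array([2.0, 0.0, 0.0]), 0, alpha=0.5)
```

Both heads backpropagate into the same encoder, which is what pushes the acoustic representation toward being "emotion aware".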

Tasks: Automatic Speech Recognition, Classification, +2

Compression of Acoustic Event Detection Models With Quantized Distillation

no code implementations · 1 Jul 2019 · Bowen Shi, Ming Sun, Chieh-Chi Kao, Viktor Rozgic, Spyros Matsoukas, Chao Wang

Acoustic Event Detection (AED), aiming at detecting categories of events based on audio signals, has found application in many intelligent systems.

Tasks: Event Detection, Knowledge Distillation, +1

Compression of Acoustic Event Detection Models with Low-rank Matrix Factorization and Quantization Training

no code implementations · NIPS Workshop CDNNRIA 2018 · Bowen Shi, Ming Sun, Chieh-Chi Kao, Viktor Rozgic, Spyros Matsoukas, Chao Wang

In this paper, we present a compression approach based on the combination of low-rank matrix factorization and quantization training, to reduce complexity for neural network based acoustic event detection (AED) models.
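The two ingredients can be sketched in isolation: truncated SVD yields the low-rank factors, and uniform quantization coarsens the stored values. This is a generic illustration of the techniques named in the abstract, not the paper's training procedure (`low_rank_compress` and `quantize` are hypothetical helpers):

```python
import numpy as np

def low_rank_compress(W, rank):
    """Approximate weight matrix W with two low-rank factors via truncated SVD."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :rank] * s[:rank]   # (m, rank): left factor scaled by singular values
    B = Vt[:rank, :]             # (rank, n): right factor
    return A, B

def quantize(W, num_bits=8):
    """Uniform symmetric quantization to num_bits, then dequantize back to float."""
    qmax = 2 ** (num_bits - 1) - 1
    scale = np.abs(W).max() / qmax
    return np.round(W / scale) * scale

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 32))
A, B = low_rank_compress(W, rank=8)
W_hat = quantize(A) @ quantize(B)   # compressed, quantized reconstruction
```

Storing the factors instead of W cuts the parameter count from 64 x 32 = 2048 to 64 x 8 + 8 x 32 = 768, before any savings from the lower bit width.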

Tasks: Event Detection, Quantization

Learning Spatiotemporal Features for Infrared Action Recognition with 3D Convolutional Neural Networks

no code implementations · 18 May 2017 · Zhuolin Jiang, Viktor Rozgic, Sancar Adali

Experimental results demonstrate that our approach achieves state-of-the-art average precision (AP) performance on the InfAR dataset: (1) the proposed two-stream 3D CNN achieves the best reported 77.5% AP, and (2) our 3D CNN model applied to the optical flow fields achieves the best reported single-stream 75.42% AP.

Tasks: Action Recognition, Optical Flow Estimation

Learning Discriminative Features via Label Consistent Neural Network

no code implementations · 3 Feb 2016 · Zhuolin Jiang, Yaming Wang, Larry Davis, Walt Andrews, Viktor Rozgic

Deep Convolutional Neural Networks (CNNs) enforce supervision only at the output layer; hidden layers are trained by backpropagating the prediction error from the output layer without explicit supervision.
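One common way to give hidden layers explicit supervision is to attach an auxiliary classifier to an intermediate feature and add its loss to the output loss. The sketch below illustrates that general idea only; the paper's label-consistency formulation is more specific, and `aux_W` and `lam` are illustrative assumptions:

```python
import numpy as np

def softmax_xent(logits, label):
    """Numerically stable softmax cross-entropy for a single example."""
    z = logits - logits.max()
    return np.log(np.exp(z).sum()) - z[label]

def label_consistent_loss(hidden_feats, out_logits, label, aux_W, lam=0.1):
    """Output cross-entropy plus an auxiliary classification loss on a
    hidden-layer feature, so intermediate layers receive explicit supervision."""
    aux_logits = hidden_feats @ aux_W   # hypothetical linear auxiliary classifier
    return softmax_xent(out_logits, label) + lam * softmax_xent(aux_logits, label)

loss = label_consistent_loss(np.array([1.0, 0.0]),  # hidden-layer feature
                             np.array([2.0, 0.0]),  # output logits
                             0, np.eye(2), lam=0.1)
```

The auxiliary term shapes the intermediate representation directly rather than relying solely on the gradient that trickles back from the output layer.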

Tasks: General Classification
