Search Results for author: Jaesung Huh

Found 15 papers, 7 papers with code

TIM: A Time Interval Machine for Audio-Visual Action Recognition

2 code implementations • CVPR 2024 • Jacob Chalk, Jaesung Huh, Evangelos Kazakos, Andrew Zisserman, Dima Damen

We address the interplay between the two modalities in long videos by explicitly modelling the temporal extents of audio and visual events.

Action Detection • Action Recognition

OxfordVGG Submission to the EGO4D AV Transcription Challenge

1 code implementation • 18 Jul 2023 • Jaesung Huh, Max Bain, Andrew Zisserman

This report presents the technical details of our submission on the EGO4D Audio-Visual (AV) Automatic Speech Recognition Challenge 2023 from the OxfordVGG team.

Automatic Speech Recognition • Speech Recognition +1

VoxSRC 2022: The Fourth VoxCeleb Speaker Recognition Challenge

1 code implementation • 20 Feb 2023 • Jaesung Huh, Andrew Brown, Jee-weon Jung, Joon Son Chung, Arsha Nagrani, Daniel Garcia-Romero, Andrew Zisserman

This paper summarises the findings from the VoxCeleb Speaker Recognition Challenge 2022 (VoxSRC-22), which was held in conjunction with INTERSPEECH 2022.

Speaker Diarization • Speaker Recognition +1

Epic-Sounds: A Large-scale Dataset of Actions That Sound

1 code implementation • 1 Feb 2023 • Jaesung Huh, Jacob Chalk, Evangelos Kazakos, Dima Damen, Andrew Zisserman

We introduce EPIC-SOUNDS, a large-scale dataset of audio annotations capturing temporal extents and class labels within the audio stream of the egocentric videos.

Action Recognition • Sound Classification

In search of strong embedding extractors for speaker diarisation

no code implementations • 26 Oct 2022 • Jee-weon Jung, Hee-Soo Heo, Bong-Jin Lee, Jaesung Huh, Andrew Brown, Youngki Kwon, Shinji Watanabe, Joon Son Chung

First, the evaluation is not straightforward because the features required for better performance differ between speaker verification and diarisation.

Data Augmentation • Speaker Verification

With a Little Help from my Temporal Context: Multimodal Egocentric Action Recognition

1 code implementation • 1 Nov 2021 • Evangelos Kazakos, Jaesung Huh, Arsha Nagrani, Andrew Zisserman, Dima Damen

We capitalise on the action's temporal context and propose a method that learns to attend to surrounding actions in order to improve recognition performance.

Action Recognition • Language Modelling

Augmentation adversarial training for self-supervised speaker recognition

no code implementations • 23 Jul 2020 • Jaesung Huh, Hee Soo Heo, Jingu Kang, Shinji Watanabe, Joon Son Chung

Since the augmentation simulates the acoustic characteristics, training the network to be invariant to augmentation also encourages the network to be invariant to the channel information in general.

Contrastive Learning • Speaker Recognition
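The invariance idea summarised above — training embeddings so that an utterance and its augmented copy map to nearby points, making the network ignore channel effects — can be sketched as follows. This is a minimal illustration under assumed names: `embed` is a hypothetical linear stand-in for a real speaker network, and the loss is a plain cosine-distance invariance term, not the paper's full adversarial training setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear "encoder" standing in for a speaker-embedding network.
W = rng.standard_normal((16000, 64)) * 0.01

def embed(wave):
    """Map waveforms (batch, samples) to L2-normalised embeddings."""
    z = wave @ W
    return z / np.linalg.norm(z, axis=-1, keepdims=True)

def augmentation_invariance_loss(wave, augment):
    """Pull an utterance and its augmented copy together in embedding
    space, encouraging invariance to augmentation-induced channel effects."""
    z_clean = embed(wave)
    z_aug = embed(augment(wave))
    # Mean cosine distance between the two views of the same utterances.
    return float(np.mean(1.0 - np.sum(z_clean * z_aug, axis=-1)))

# Toy "augmentation": low-level additive noise simulating channel effects.
noise_aug = lambda x: x + 0.01 * rng.standard_normal(x.shape)
wave = rng.standard_normal((4, 16000))
loss = augmentation_invariance_loss(wave, noise_aug)
```

With small additive noise the two views stay close, so the loss is near zero; a real setup would combine this with a speaker-discriminative objective so embeddings do not collapse.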

Spot the conversation: speaker diarisation in the wild

no code implementations • 2 Jul 2020 • Joon Son Chung, Jaesung Huh, Arsha Nagrani, Triantafyllos Afouras, Andrew Zisserman

Finally, we use this pipeline to create a large-scale diarisation dataset called VoxConverse, collected from 'in the wild' videos, which we will release publicly to the research community.

Speaker Verification

Modeling Musical Onset Probabilities via Neural Distribution Learning

no code implementations • 10 Feb 2020 • Jaesung Huh, Egil Martinsson, Adrian Kim, Jung-Woo Ha

Musical onset detection can be formulated as a time-to-event (TTE) or time-since-event (TSE) prediction task by defining music as a sequence of onset events.
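The time-to-event (TTE) framing can be made concrete by deriving per-frame regression targets from a list of onset times: each analysis frame's target is the time remaining until the next onset. The helper below is a hypothetical sketch of that target construction, not code from the paper.

```python
import numpy as np

def time_to_event_targets(onset_times, n_frames, hop=0.01):
    """For each analysis frame, return the time (seconds) until the next
    onset event — the time-to-event (TTE) formulation. Frames after the
    last onset get infinity (no upcoming event)."""
    frame_times = np.arange(n_frames) * hop
    onsets = np.asarray(sorted(onset_times))
    # Index of the next onset at or after each frame time.
    idx = np.searchsorted(onsets, frame_times)
    tte = np.full(n_frames, np.inf)
    valid = idx < len(onsets)
    tte[valid] = onsets[idx[valid]] - frame_times[valid]
    return tte

# Two onsets at 0.05 s and 0.20 s over 30 frames of 10 ms each.
targets = time_to_event_targets([0.05, 0.20], n_frames=30, hop=0.01)
```

The symmetric time-since-event (TSE) variant would instead measure time elapsed since the previous onset, using the index of the preceding event.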

Delving into VoxCeleb: environment invariant speaker recognition

1 code implementation • 24 Oct 2019 • Joon Son Chung, Jaesung Huh, Seongkyu Mun

Research in speaker recognition has recently seen significant progress due to the application of neural network models and the availability of new large-scale datasets.

Speaker Identification • Speaker Recognition

Phase-aware Speech Enhancement with Deep Complex U-Net

8 code implementations • ICLR 2019 • Hyeong-Seok Choi, Jang-Hyun Kim, Jaesung Huh, Adrian Kim, Jung-Woo Ha, Kyogu Lee

Most deep learning-based models for speech enhancement have mainly focused on estimating the magnitude of spectrogram while reusing the phase from noisy speech for reconstruction.

Speech Enhancement
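The phase-aware idea above — estimating a complex-valued mask so the enhanced spectrogram's phase is corrected along with its magnitude, rather than reusing the noisy phase — reduces to one multiplication. In this sketch the spectrogram and mask are toy values (the mask would normally come from a network such as a complex U-Net), so take the names as assumptions.

```python
import numpy as np

def apply_complex_mask(noisy_spec, mask):
    """Complex ratio masking: multiply the noisy complex spectrogram by a
    complex-valued mask, modifying magnitude AND phase jointly — unlike
    magnitude-only masking, which keeps the noisy phase unchanged."""
    return noisy_spec * mask

rng = np.random.default_rng(0)
# Toy complex spectrogram (freq bins x frames) and a hypothetical fixed
# mask that halves the magnitude and rotates the phase by 0.2 rad.
noisy = rng.standard_normal((257, 100)) + 1j * rng.standard_normal((257, 100))
mask = np.full((257, 100), 0.5 * np.exp(0.2j))
enhanced = apply_complex_mask(noisy, mask)
```

Because the mask is complex, its argument shifts the phase of every time-frequency bin — the capability a magnitude-only mask lacks.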
