speaker-diarization
69 papers with code • 0 benchmarks • 0 datasets
Benchmarks
These leaderboards are used to track progress in speaker-diarization
Libraries
Use these libraries to find speaker-diarization models and implementations
Most implemented papers
End-to-end speaker segmentation for overlap-aware resegmentation
Experiments on multiple speaker diarization datasets show that our model can be used with great success for both voice activity detection and overlapped speech detection.
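An end-to-end segmentation model of this kind emits per-frame, per-speaker activity probabilities, from which both voice activity and overlapped-speech labels can be derived. A minimal sketch, with illustrative thresholds and made-up probabilities (not the paper's actual configuration):

```python
# Sketch: turn per-frame speaker activity probabilities (as produced by
# an end-to-end segmentation model) into voice activity and overlap labels.
# The 0.5 threshold and the toy probabilities are illustrative assumptions.

def frame_labels(probs, threshold=0.5):
    """probs: list of frames, each a list of per-speaker probabilities.
    Returns (vad, overlap): per-frame booleans."""
    vad, overlap = [], []
    for frame in probs:
        active = sum(p >= threshold for p in frame)
        vad.append(active >= 1)      # any speaker active -> speech
        overlap.append(active >= 2)  # two or more active -> overlapped speech
    return vad, overlap

probs = [
    [0.9, 0.1],  # speaker A only
    [0.8, 0.7],  # both speakers -> overlap
    [0.2, 0.1],  # silence
]
vad, overlap = frame_labels(probs)
print(vad)      # [True, True, False]
print(overlap)  # [False, True, False]
```

Thresholding both quantities from the same frame-level output is what lets one model serve VAD and overlap detection at once.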
TitaNet: Neural Model for speaker representation with 1D Depth-wise separable convolutions and global context
In this paper, we propose TitaNet, a novel neural network architecture for extracting speaker representations.
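The depthwise separable convolution TitaNet builds on factors a standard convolution into a per-channel (depthwise) filter followed by a 1x1 (pointwise) channel-mixing step. A pure-Python sketch with stride 1, no padding, and made-up sizes (not the paper's architecture):

```python
# Sketch of a 1D depthwise separable convolution: depthwise filtering
# per channel, then a pointwise (1x1) convolution to mix channels.
# Shapes and weights here are illustrative assumptions.

def depthwise_conv1d(x, kernels):
    """x: [channels][time]; kernels: one kernel per channel (valid mode)."""
    out = []
    for ch, k in zip(x, kernels):
        out.append([sum(ch[t + i] * k[i] for i in range(len(k)))
                    for t in range(len(ch) - len(k) + 1)])
    return out

def pointwise_conv1d(x, weights):
    """1x1 conv mixing channels: weights[out_ch][in_ch]."""
    time = len(x[0])
    return [[sum(w[c] * x[c][t] for c in range(len(x)))
             for t in range(time)] for w in weights]

x = [[1.0, 2.0, 3.0, 4.0], [0.0, 1.0, 0.0, 1.0]]    # 2 channels, 4 frames
dw = depthwise_conv1d(x, [[0.5, 0.5], [1.0, -1.0]])  # per-channel filtering
pw = pointwise_conv1d(dw, [[1.0, 1.0]])              # channel mixing
print(pw)  # [[0.5, 3.5, 2.5]]
```

The factorization cuts parameters and compute relative to a full convolution, which is why such blocks are popular for compact speaker-embedding extractors.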
BERTraffic: BERT-based Joint Speaker Role and Speaker Change Detection for Air Traffic Control Communications
We propose a system that combines speech activity detection (SAD) and a BERT model to perform speaker change detection and speaker role detection (SRD) by chunking ASR transcripts, i.e., speaker diarization with a defined number of speakers together with SRD.
From Simulated Mixtures to Simulated Conversations as Training Data for End-to-End Neural Diarization
However, simulated mixtures do not resemble real conversations in many aspects.
BER: Balanced Error Rate For Speaker Diarization
DER is the primary metric for evaluating diarization performance, but it faces a dilemma: errors in short utterances or segments tend to be overwhelmed by errors in longer ones.
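The dilemma is easy to reproduce numerically: a duration-weighted error (DER-style) barely registers fully misattributed short segments, while a per-segment average does. The durations and error amounts below are made up for illustration:

```python
# Illustration of the dilemma a balanced error rate addresses: DER
# weights errors by duration, so mistakes in short segments are dwarfed
# by long ones. All numbers are made-up toy values.

segments = [            # (duration_seconds, seconds_in_error)
    (100.0, 0.0),       # one long, perfectly labeled segment
    (2.0, 2.0),         # three short segments, fully misattributed
    (2.0, 2.0),
    (2.0, 2.0),
]

total = sum(d for d, _ in segments)
der_like = sum(e for _, e in segments) / total              # duration-weighted
seg_avg = sum(e / d for d, e in segments) / len(segments)   # per-segment mean

print(round(der_like, 3))  # 0.057 -> looks excellent
print(round(seg_avg, 3))   # 0.75  -> three of four segments are wrong
```

A balanced metric in the spirit of BER sits between these extremes by preventing long segments from dominating the score.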
Self-supervised Audio Teacher-Student Transformer for Both Clip-level and Frame-level Tasks
In order to tackle both clip-level and frame-level tasks, this paper proposes Audio Teacher-Student Transformer (ATST), with a clip-level version (named ATST-Clip) and a frame-level version (named ATST-Frame), responsible for learning clip-level and frame-level representations, respectively.
DiarizationLM: Speaker Diarization Post-Processing with Large Language Models
In this paper, we introduce DiarizationLM, a framework that leverages large language models (LLMs) to post-process the outputs of a speaker diarization system.
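Post-processing of this kind starts by serializing ASR words together with their diarization speaker labels into a compact text form an LLM can rewrite. A hypothetical sketch; the `<spk:N>` tag format is an assumption, not necessarily the paper's exact protocol:

```python
# Hypothetical sketch of preparing diarized ASR output for LLM
# post-processing: group consecutive same-speaker words under a speaker
# tag. The "<spk:N>" format is an assumption for illustration only.

def serialize(words, speakers):
    """Group consecutive same-speaker words into '<spk:N> ...' runs."""
    parts, prev = [], None
    for w, s in zip(words, speakers):
        if s != prev:
            parts.append(f"<spk:{s}>")
            prev = s
        parts.append(w)
    return " ".join(parts)

words = ["hi", "how", "are", "you", "good", "thanks"]
speakers = [1, 1, 1, 1, 2, 2]
print(serialize(words, speakers))
# <spk:1> hi how are you <spk:2> good thanks
```

The LLM would then be prompted with this serialized transcript and asked to correct implausible speaker assignments using lexical context, after which the corrected tags are mapped back onto the words.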
Scalable Adaptation of State Complexity for Nonparametric Hidden Markov Models
Bayesian nonparametric hidden Markov models are typically learned via fixed truncations of the infinite state space or local Monte Carlo proposals that make small changes to the state space.
The EURECOM Submission to the First DIHARD Challenge
The first DIHARD challenge aims to promote speaker diarization research and to foster progress in domain robustness.
Fully Supervised Speaker Diarization
In this paper, we propose a fully supervised speaker diarization approach, named unbounded interleaved-state recurrent neural networks (UIS-RNN).