Speaker Diarization
74 papers with code • 12 benchmarks • 11 datasets
Speaker Diarization is the task of segmenting and co-indexing audio recordings by speaker. As the task is commonly defined, the goal is not to identify known speakers but to co-index segments attributed to the same speaker; in other words, diarization means finding speaker boundaries and grouping segments that belong to the same speaker, determining the number of distinct speakers as a by-product. In combination with speech recognition, diarization enables speaker-attributed speech-to-text transcription.
Source: Improving Diarization Robustness using Diversification, Randomization and the DOVER Algorithm
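The pipeline described above — segmenting speech and grouping segments by speaker — can be sketched as a clustering step over per-segment speaker embeddings. The toy embeddings, the similarity threshold, and the greedy nearest-centroid rule below are illustrative assumptions, not any specific published system:

```python
# Minimal diarization-clustering sketch: each speech segment is represented by
# a (hypothetical) speaker embedding; segments are greedily clustered so that
# segments from the same speaker share a label, and the number of clusters
# found is the estimated number of speakers.
from math import sqrt

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(x * x for x in b)))

def cluster_segments(embeddings, threshold=0.8):
    """Assign a speaker label to each segment embedding.

    A new speaker cluster is created whenever a segment matches no existing
    centroid above `threshold`; otherwise the segment joins its best match.
    """
    centroids, labels = [], []
    for emb in embeddings:
        best, best_sim = None, threshold
        for i, c in enumerate(centroids):
            sim = cosine(emb, c)
            if sim > best_sim:
                best, best_sim = i, sim
        if best is None:
            centroids.append(list(emb))          # start a new speaker cluster
            labels.append(len(centroids) - 1)
        else:
            # simplified centroid update: average the centroid with the new embedding
            centroids[best] = [(x + y) / 2 for x, y in zip(centroids[best], emb)]
            labels.append(best)
    return labels

# Two well-separated toy "speakers" in 2-D embedding space:
segs = [(1.0, 0.0), (0.9, 0.1), (0.0, 1.0), (0.1, 0.9), (0.95, 0.05)]
print(cluster_segments(segs))  # [0, 0, 1, 1, 0]
```

Real systems replace the toy embeddings with learned speaker representations (e.g. from models such as TitaNet below) and use stronger clustering, but the co-indexing structure of the task is the same.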
Libraries
Use these libraries to find Speaker Diarization models and implementations.
Most implemented papers
TitaNet: Neural Model for speaker representation with 1D Depth-wise separable convolutions and global context
In this paper, we propose TitaNet, a novel neural network architecture for extracting speaker representations.
BERTraffic: BERT-based Joint Speaker Role and Speaker Change Detection for Air Traffic Control Communications
We propose a system that combines SAD and a BERT model to perform speaker change detection and speaker role detection (SRD) by chunking ASR transcripts, i.e., speaker diarization with a defined number of speakers together with SRD.
Speaker Embedding-aware Neural Diarization for Flexible Number of Speakers with Textual Information
In this paper, we reformulate this task as a single-label prediction problem by encoding the multi-speaker labels with a power set.
From Simulated Mixtures to Simulated Conversations as Training Data for End-to-End Neural Diarization
However, simulated mixtures do not resemble real conversations in many aspects.
PaddleSpeech: An Easy-to-Use All-in-One Speech Toolkit
PaddleSpeech is an open-source all-in-one speech toolkit.
BER: Balanced Error Rate For Speaker Diarization
DER is the primary metric for evaluating diarization performance, but it faces a dilemma: errors in short utterances or segments tend to be overwhelmed by those in longer ones.
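The duration weighting behind this dilemma is visible in how DER is computed: missed speech, false alarm, and speaker confusion are summed over time and divided by total reference speech, so a long segment dominates the score. The frame-level formulation below is a simplified sketch (real scorers such as NIST md-eval also apply an optimal speaker mapping and forgiveness collars, which are omitted here):

```python
# Frame-level sketch of Diarization Error Rate:
#   DER = (missed speech + false alarm + speaker confusion) / total reference speech
# `reference` and `hypothesis` are per-frame speaker labels; None = non-speech.
# Assumes hypothesis labels are already mapped to reference labels.
def der(reference, hypothesis):
    assert len(reference) == len(hypothesis)
    missed = false_alarm = confusion = 0
    total_speech = sum(1 for r in reference if r is not None)
    for r, h in zip(reference, hypothesis):
        if r is not None and h is None:
            missed += 1          # speech frame the system left unlabeled
        elif r is None and h is not None:
            false_alarm += 1     # non-speech frame labeled as speech
        elif r is not None and r != h:
            confusion += 1       # speech attributed to the wrong speaker
    return (missed + false_alarm + confusion) / total_speech

ref = ["A", "A", "A", None, "B", "B"]
hyp = ["A", "A", None, None, "B", "A"]
print(der(ref, hyp))  # 0.4: one missed + one confused frame over 5 speech frames
```

Because every term is a duration, errors confined to a short segment barely move the ratio, which is exactly the imbalance the BER paper above targets.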
Self-supervised Audio Teacher-Student Transformer for Both Clip-level and Frame-level Tasks
In order to tackle both clip-level and frame-level tasks, this paper proposes Audio Teacher-Student Transformer (ATST), with a clip-level version (named ATST-Clip) and a frame-level version (named ATST-Frame), responsible for learning clip-level and frame-level representations, respectively.
Speech Emotion Diarization: Which Emotion Appears When?
Speech Emotion Recognition (SER) typically relies on utterance-level solutions.
DiarizationLM: Speaker Diarization Post-Processing with Large Language Models
In this paper, we introduce DiarizationLM, a framework to leverage large language models (LLM) to post-process the outputs from a speaker diarization system.
Scalable Adaptation of State Complexity for Nonparametric Hidden Markov Models
Bayesian nonparametric hidden Markov models are typically learned via fixed truncations of the infinite state space or local Monte Carlo proposals that make small changes to the state space.