Search Results for author: Tianchi Liu

Found 10 papers, 2 papers with code

Towards Quantifying and Reducing Language Mismatch Effects in Cross-Lingual Speech Anti-Spoofing

no code implementations · 12 Sep 2024 · Tianchi Liu, Ivan Kukanov, Zihan Pan, Qiongqiong Wang, Hardik B. Sailor, Kong Aik Lee

Language mismatch affects speech anti-spoofing systems, yet investigations into and quantification of these effects remain limited.

Speech Foundation Model Ensembles for the Controlled Singing Voice Deepfake Detection (CtrSVDD) Challenge 2024

1 code implementation · 3 Sep 2024 · Anmol Guragain, Tianchi Liu, Zihan Pan, Hardik B. Sailor, Qiongqiong Wang

This work details our approach to achieving a leading system with a 1.79% pooled equal error rate (EER) on the evaluation set of the Controlled Singing Voice Deepfake Detection (CtrSVDD) challenge.

DeepFake Detection · Face Swapping · +1
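
For context, the pooled EER reported above is the operating point where the false rejection rate (bona fide audio rejected) meets the false acceptance rate (spoofed audio accepted). A minimal numpy sketch of the usual computation, assuming plain score arrays rather than the challenge's official scoring protocol:

```python
import numpy as np

def compute_eer(bonafide_scores, spoof_scores):
    """Equal error rate: the threshold where FRR meets FAR.
    Illustrative sketch only."""
    scores = np.concatenate([bonafide_scores, spoof_scores])
    labels = np.concatenate([np.ones_like(bonafide_scores),
                             np.zeros_like(spoof_scores)])
    labels = labels[np.argsort(scores)]
    # FRR: fraction of bona fide trials falling below each threshold
    frr = np.cumsum(labels) / labels.sum()
    # FAR: fraction of spoof trials at or above each threshold
    far = 1.0 - np.cumsum(1 - labels) / (1 - labels).sum()
    idx = np.argmin(np.abs(frr - far))
    return (frr[idx] + far[idx]) / 2.0

# Toy usage with synthetic scores (real scores come from the detector)
rng = np.random.default_rng(0)
eer = compute_eer(rng.normal(2.0, 1.0, 1000), rng.normal(-2.0, 1.0, 1000))
print(f"EER: {eer * 100:.2f}%")
```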

Attentive Merging of Hidden Embeddings from Pre-trained Speech Model for Anti-spoofing Detection

no code implementations · 12 Jun 2024 · Zihan Pan, Tianchi Liu, Hardik B. Sailor, Qiongqiong Wang

Self-supervised learning (SSL) speech representation models, trained on large speech corpora, have demonstrated effectiveness in extracting hierarchical speech embeddings through multiple transformer layers.

Computational Efficiency · Self-Supervised Learning
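
Merging hierarchical embeddings can be pictured as a learnable weighted sum over the hidden states of the pre-trained model. The sketch below uses a plain softmax weighting as a simplified stand-in for the paper's attentive merging; the layer count and tensor shapes are assumptions:

```python
import torch
import torch.nn as nn

class LayerMerge(nn.Module):
    """Learnable weighted sum over transformer layer outputs;
    a simplified stand-in for attentive merging of SSL embeddings."""
    def __init__(self, num_layers: int):
        super().__init__()
        self.weights = nn.Parameter(torch.zeros(num_layers))

    def forward(self, hidden_states):
        # hidden_states: (num_layers, batch, time, dim)
        w = torch.softmax(self.weights, dim=0)
        return torch.einsum("l,lbtd->btd", w, hidden_states)

# Hypothetical shapes: 12 layers, batch 2, 50 frames, 768-dim features
merge = LayerMerge(num_layers=12)
hs = torch.randn(12, 2, 50, 768)
print(merge(hs).shape)  # torch.Size([2, 50, 768])
```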

How Do Neural Spoofing Countermeasures Detect Partially Spoofed Audio?

no code implementations · 4 Jun 2024 · Tianchi Liu, Lin Zhang, Rohan Kumar Das, Yi Ma, Ruijie Tao, Haizhou Li

Recent work shows that countermeasures (CMs) trained on partially spoofed audio can effectively detect such spoofing.

Decision Making · Sentence

Voice Conversion Augmentation for Speaker Recognition on Defective Datasets

no code implementations · 1 Apr 2024 · Ruijie Tao, Zhan Shi, Yidi Jiang, Tianchi Liu, Haizhou Li

Our experimental results on three constructed datasets demonstrate that VCA-NN effectively mitigates these dataset problems, providing a new, data-centric direction for addressing speaker recognition challenges.

Speaker Recognition · Voice Conversion

Golden Gemini is All You Need: Finding the Sweet Spots for Speaker Verification

1 code implementation · 6 Dec 2023 · Tianchi Liu, Kong Aik Lee, Qiongqiong Wang, Haizhou Li

We represent the stride space on a trellis diagram and conduct a systematic study of how temporal and frequency resolutions affect performance, identifying two optimal points, named Golden Gemini, which serve as a guiding principle for designing 2D ResNet-based speaker verification models.

Speaker Verification
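
The trellis view can be made concrete by tracking how each stage's 2D stride divides the time and frequency axes of the input spectrogram. The stride paths below are hypothetical examples, not the Golden Gemini configuration itself:

```python
# Each stage stride is (frequency, time); a time-preserving design
# downsamples the frequency axis more aggressively than the time axis.
def resolution(freq_bins, frames, strides):
    f, t = freq_bins, frames
    for sf, st in strides:
        f, t = f // sf, t // st
    return f, t

# Two hypothetical stride paths through the trellis
symmetric = [(2, 2), (2, 2), (2, 2), (2, 2)]
time_preserving = [(2, 1), (2, 2), (2, 1), (2, 2)]

print(resolution(80, 200, symmetric))        # (5, 12)
print(resolution(80, 200, time_preserving))  # (5, 50)
```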

Incorporating Uncertainty from Speaker Embedding Estimation to Speaker Verification

no code implementations · 23 Feb 2023 · Qiongqiong Wang, Kong Aik Lee, Tianchi Liu

We propose a log-likelihood ratio function for PLDA scoring with uncertainty propagation.

Speaker Verification
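
One standard way to propagate embedding uncertainty into PLDA is to add the per-embedding uncertainty variance to the within-class variance when forming the trial covariances. The diagonal two-covariance LLR below is a simplified sketch under that assumption, not the paper's exact formulation:

```python
import numpy as np

def plda_llr(x1, x2, b, w, u1=0.0, u2=0.0):
    """Diagonal two-covariance PLDA LLR for one verification trial.
    b: between-class variance, w: within-class variance (per dim);
    u1, u2: propagated embedding uncertainty variances (assumed
    diagonal). Constant terms cancel between the two hypotheses."""
    def log_gauss2(x, y, vx, vy, cxy):
        det = vx * vy - cxy ** 2
        quad = (vy * x**2 - 2 * cxy * x * y + vx * y**2) / det
        return -0.5 * (np.log(det) + quad)
    # Same speaker: the shared speaker factor couples the pair (cov = b)
    tar = log_gauss2(x1, x2, b + w + u1, b + w + u2, b)
    # Different speakers: independent speaker factors (cross-cov = 0)
    non = log_gauss2(x1, x2, b + w + u1, b + w + u2, 0.0)
    return float(np.sum(tar - non))

# Toy same-speaker trial: shared speaker factor plus channel noise
rng = np.random.default_rng(1)
b, w = np.full(4, 2.0), np.full(4, 1.0)
spk = rng.normal(0.0, np.sqrt(b))
x1 = spk + rng.normal(0.0, np.sqrt(w))
x2 = spk + rng.normal(0.0, np.sqrt(w))
print(plda_llr(x1, x2, b, w, u1=0.1, u2=0.1))  # positive score expected
```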

Scoring of Large-Margin Embeddings for Speaker Verification: Cosine or PLDA?

no code implementations · 8 Apr 2022 · Qiongqiong Wang, Kong Aik Lee, Tianchi Liu

The emergence of large-margin softmax cross-entropy losses in training deep speaker embedding neural networks has triggered a gradual shift from parametric back-ends to a simpler cosine similarity measure for speaker verification.

Speaker Verification
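
The cosine back-end referred to above is the simpler of the two: with length-normalized embeddings it reduces to a dot product, with no parameters to train. A minimal sketch:

```python
import numpy as np

def cosine_score(e1, e2):
    """Cosine similarity between two speaker embeddings; with
    length-normalized embeddings this is just a dot product."""
    e1 = e1 / np.linalg.norm(e1)
    e2 = e2 / np.linalg.norm(e2)
    return float(np.dot(e1, e2))

# Accept the trial if the score exceeds a threshold tuned on dev data
rng = np.random.default_rng(0)
a, b = rng.normal(size=192), rng.normal(size=192)
print(cosine_score(a, b))
```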

MFA: TDNN with Multi-scale Frequency-channel Attention for Text-independent Speaker Verification with Short Utterances

no code implementations · 3 Feb 2022 · Tianchi Liu, Rohan Kumar Das, Kong Aik Lee, Haizhou Li

The time delay neural network (TDNN) is one of the state-of-the-art neural solutions for text-independent speaker verification.

Text-Independent Speaker Verification
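
Frequency-channel attention can be approximated with a squeeze-and-excitation-style gate computed over joint frequency-channel statistics. This is a generic illustration, not the paper's MFA module; the channel count, frequency bins, and reduction ratio are assumptions:

```python
import torch
import torch.nn as nn

class FreqChannelAttention(nn.Module):
    """SE-style gate over the joint frequency-channel axis.
    Illustrative sketch; shapes and reduction ratio are assumptions."""
    def __init__(self, channels: int, freq_bins: int, reduction: int = 4):
        super().__init__()
        d = channels * freq_bins
        self.fc = nn.Sequential(
            nn.Linear(d, d // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(d // reduction, d),
            nn.Sigmoid(),
        )

    def forward(self, x):
        # x: (batch, channels, freq, time); pool over time (squeeze)
        b, c, f, t = x.shape
        s = x.mean(dim=-1).reshape(b, c * f)
        g = self.fc(s).reshape(b, c, f, 1)  # excite
        return x * g                        # rescale

fca = FreqChannelAttention(channels=32, freq_bins=20)
print(fca(torch.randn(2, 32, 20, 100)).shape)  # (2, 32, 20, 100)
```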

Speaker-Utterance Dual Attention for Speaker and Utterance Verification

no code implementations · 20 Aug 2020 · Tianchi Liu, Rohan Kumar Das, Maulik Madhavi, ShengMei Shen, Haizhou Li

The proposed SUDA features an attention mask mechanism to learn the interaction between the speaker and utterance information streams.

Speaker Verification
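
The interaction between the two streams can be sketched as gated cross-attention from the utterance stream onto the speaker stream. This is an illustrative approximation, not the SUDA architecture; all shapes and the gating form are assumptions:

```python
import torch
import torch.nn as nn

class CrossStreamAttention(nn.Module):
    """Hypothetical cross-stream interaction: the utterance stream
    attends to the speaker stream, gated by a soft attention mask."""
    def __init__(self, dim: int):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.gate = nn.Linear(dim, dim)

    def forward(self, utt, spk):
        # utt: (batch, T_u, dim), spk: (batch, T_s, dim)
        ctx, _ = self.attn(query=utt, key=spk, value=spk)
        mask = torch.sigmoid(self.gate(utt))  # soft mask on the interaction
        return utt + mask * ctx

csa = CrossStreamAttention(dim=128)
out = csa(torch.randn(2, 50, 128), torch.randn(2, 40, 128))
print(out.shape)  # torch.Size([2, 50, 128])
```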
