Search Results for author: Ke Ding

Found 18 papers, 5 papers with code

Token and Head Adaptive Transformers for Efficient Natural Language Processing

no code implementations · COLING 2022 · Chonghan Lee, Md Fahim Faysal Khan, Rita Brugarolas Brufau, Ke Ding, Vijaykrishnan Narayanan

While pre-trained language models like BERT have achieved impressive results on various natural language processing tasks, deploying them on resource-restricted devices is challenging due to their intensive computational cost and memory footprint.

Learning to Maximize Mutual Information for Chain-of-Thought Distillation

no code implementations · 5 Mar 2024 · Xin Chen, Hanxian Huang, Yanjun Gao, Yi Wang, Jishen Zhao, Ke Ding

Knowledge distillation, the technique of transferring knowledge from large, complex models to smaller ones, marks a pivotal step towards efficient AI deployment.

Knowledge Distillation · Language Modelling · +1
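
As background for the entry above: the classic distillation recipe blends a temperature-scaled soft-target loss against the teacher with the usual hard-label loss. A minimal PyTorch sketch of that generic recipe (not the paper's mutual-information objective; the temperature T, mixing weight alpha, and function name are illustrative):

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Hinton-style KD: blend soft-target KL with hard-label cross-entropy."""
    # Soft targets: KL between temperature-scaled distributions,
    # scaled by T^2 to keep gradient magnitudes comparable.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard
```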

To Generate or Not? Safety-Driven Unlearned Diffusion Models Are Still Easy To Generate Unsafe Images ... For Now

1 code implementation · 18 Oct 2023 · Yimeng Zhang, Jinghan Jia, Xin Chen, Aochuan Chen, Yihua Zhang, Jiancheng Liu, Ke Ding, Sijia Liu

Our results demonstrate the effectiveness and efficiency merits of UnlearnDiffAtk over the state-of-the-art adversarial prompt generation method and reveal the lack of robustness of current safety-driven unlearning techniques when applied to DMs.

Adversarial Robustness · Benchmarking · +1

Enhancing Multilingual Speech Recognition through Language Prompt Tuning and Frame-Level Language Adapter

no code implementations · 18 Sep 2023 · Song Li, Yongbin You, Xuezhi Wang, Ke Ding, Guanglu Wan

To further expand the applications of multilingual artificial intelligence assistants and facilitate international communication, it is essential to enhance the performance of multilingual speech recognition, which is a crucial component of speech interaction.

speech-recognition · Speech Recognition

CPPF: A contextual and post-processing-free model for automatic speech recognition

no code implementations · 14 Sep 2023 · Lei Zhang, Zhengkun Tian, Xiang Chen, Jiaming Sun, Hongyu Xiang, Ke Ding, Guanglu Wan

To address this issue, we draw inspiration from the multifaceted capabilities of LLMs and Whisper, and focus on integrating multiple ASR-related text processing tasks into the ASR model itself.

Automatic Speech Recognition · speech-recognition · +1

BatchGNN: Efficient CPU-Based Distributed GNN Training on Very Large Graphs

no code implementations · 23 Jun 2023 · Loc Hoang, Rita Brugarolas Brufau, Ke Ding, Bo Wu

We present BatchGNN, a distributed CPU system that showcases techniques that can be used to efficiently train GNNs on terabyte-sized graphs.

graph partitioning

Learning Reduced-Order Models for Cardiovascular Simulations with Graph Neural Networks

1 code implementation · 13 Mar 2023 · Luca Pegolotti, Martin R. Pfaller, Natalia L. Rubio, Ke Ding, Rita Brugarolas Brufau, Eric Darve, Alison L. Marsden

Our numerical results demonstrate the accuracy and generalizability of our method in physiological geometries comprising a variety of anatomies and boundary conditions.

Text-Visual Prompting for Efficient 2D Temporal Video Grounding

1 code implementation · CVPR 2023 · Yimeng Zhang, Xin Chen, Jinghan Jia, Sijia Liu, Ke Ding

In this paper, we study the problem of temporal video grounding (TVG), which aims to predict the starting/ending time points of moments described by a text sentence within a long untrimmed video.

Sentence · Video Grounding · +1
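
Concretely, the TVG task defined above can be approximated with a naive sliding-window baseline: embed the sentence, pool clip features over candidate windows, and return the best-scoring span. A hedged sketch of that generic baseline (not the paper's text-visual prompting method; the window sizes, stride, and feature shapes are assumptions):

```python
import torch
import torch.nn.functional as F

def ground_text_in_video(clip_feats, text_feat, win_sizes=(8, 16, 32), stride=4):
    """Score sliding temporal windows against a text query; return (start, end).

    clip_feats: (T, D) per-clip visual features; text_feat: (D,) query embedding.
    """
    best, best_span = float("-inf"), (0, 0)
    q = F.normalize(text_feat, dim=0)
    for w in win_sizes:
        for s in range(0, clip_feats.size(0) - w + 1, stride):
            # Mean-pool the window and compare with the query by cosine similarity.
            v = F.normalize(clip_feats[s:s + w].mean(dim=0), dim=0)
            score = torch.dot(v, q).item()
            if score > best:
                best, best_span = score, (s, s + w)
    return best_span  # clip indices of the predicted moment
```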

Peak-First CTC: Reducing the Peak Latency of CTC Models by Applying Peak-First Regularization

no code implementations · 7 Nov 2022 · Zhengkun Tian, Hongyu Xiang, Min Li, Feifei Lin, Ke Ding, Guanglu Wan

To reduce the peak latency, we propose a simple and novel method named peak-first regularization, which utilizes a frame-wise knowledge distillation function to force the probability distribution of the CTC model to shift left along the time axis, instead of directly modifying the calculation of the CTC loss and its gradients.

Knowledge Distillation
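
The abstract describes the mechanism precisely enough to illustrate: each frame's output distribution is pulled toward the model's own (detached) distribution one frame later, so CTC peaks drift left in time. A minimal PyTorch sketch under that reading (the paper's exact weighting and conventions may differ):

```python
import torch
import torch.nn.functional as F

def peak_first_regularizer(log_probs):
    """Frame-wise self-distillation that nudges CTC peaks earlier in time.

    log_probs: (T, B, V) log-softmax outputs of a CTC model. Frame t is pushed
    toward the model's own (detached) distribution at frame t+1, so the whole
    distribution shifts left along the time axis.
    """
    teacher = log_probs[1:].detach().exp()   # frame t+1 acts as the teacher
    student = log_probs[:-1]                 # frame t is the student
    return F.kl_div(student, teacher, reduction="batchmean")

# Used as: loss = ctc_loss + lambda_pf * peak_first_regularizer(log_probs)
```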

Two-Stream UNET Networks for Semantic Segmentation in Medical Images

no code implementations · 27 Jul 2022 · Xin Chen, Ke Ding

Recent advances in semantic image segmentation greatly benefit from deeper and larger Convolutional Neural Network (CNN) models.

Image Segmentation · Medical Image Segmentation · +3

CUSIDE: Chunking, Simulating Future Context and Decoding for Streaming ASR

1 code implementation · 31 Mar 2022 · Keyu An, Huahuan Zheng, Zhijian Ou, Hongyu Xiang, Ke Ding, Guanglu Wan

The simulation module is jointly trained with the ASR model using a self-supervised loss; the ASR model is optimized with the usual ASR loss, e.g., CTC-CRF as used in our experiments.

Chunking · speech-recognition · +1
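
The training recipe quoted above combines two losses: a self-supervised loss for the context-simulation module and the usual ASR loss for the recognizer. A hedged PyTorch sketch of that joint objective (the module names, the L1 regression choice, and the tensor shapes are assumptions, not CUSIDE's actual code):

```python
import torch
import torch.nn.functional as F

def cuside_step(encoder, simulator, asr_loss_fn, chunk, real_future, targets):
    """One training step in the spirit of CUSIDE's joint objective.

    The simulator predicts future context from the current chunk and is trained
    against the real future frames; the ASR branch consumes chunk + simulated
    future and is trained with the usual ASR loss (e.g., CTC/CTC-CRF).
    """
    sim_future = simulator(chunk)                      # (B, T_future, D)
    sim_loss = F.l1_loss(sim_future, real_future)      # self-supervised loss
    enc_out = encoder(torch.cat([chunk, sim_future], dim=1))
    asr_loss = asr_loss_fn(enc_out, targets)
    return asr_loss + sim_loss
```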

An Empirical Study of Language Model Integration for Transducer based Speech Recognition

no code implementations · 31 Mar 2022 · Huahuan Zheng, Keyu An, Zhijian Ou, Chen Huang, Ke Ding, Guanglu Wan

Based on the DR method, we propose a low-order density ratio method (LODR) by replacing the estimation with a low-order weak language model.

Language Modelling · speech-recognition · +1
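
For context, density-ratio decoding rescores each hypothesis by adding an external LM score and subtracting an estimate of the training-domain (internal) LM; LODR, as described above, swaps that estimate for a cheap low-order LM. A minimal sketch of the scoring rule (the weights and the low-order choice are illustrative, tuned on dev data in practice):

```python
def lodr_score(asr_logp, ext_lm_logp, low_order_lm_logp,
               lam_ext=0.5, lam_int=0.3):
    """Density-ratio style hypothesis rescoring with a low-order internal LM.

    asr_logp: transducer log-probability of the hypothesis;
    ext_lm_logp: external (target-domain) LM log-probability;
    low_order_lm_logp: weak low-order (e.g., bigram) LM log-probability.
    """
    return asr_logp + lam_ext * ext_lm_logp - lam_int * low_order_lm_logp
```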

Learning Speaker Embedding with Momentum Contrast

1 code implementation · 7 Jan 2020 · Ke Ding, Xuanji He, Guanglu Wan

Momentum Contrast (MoCo) is a recently proposed unsupervised representation learning framework, and has shown its effectiveness in learning good feature representations for downstream vision tasks.

Representation Learning · Speaker Verification
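
Since the entry above applies MoCo to speaker embeddings, it may help to recall the two ingredients of generic MoCo: an EMA-updated key encoder and an InfoNCE loss over a queue of negative keys. A PyTorch sketch of those pieces (the shapes, temperature, and application to augmented utterance pairs are assumptions):

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def momentum_update(encoder_q, encoder_k, m=0.999):
    """EMA update of the key encoder from the query encoder (MoCo core)."""
    for pq, pk in zip(encoder_q.parameters(), encoder_k.parameters()):
        pk.data.mul_(m).add_(pq.data, alpha=1.0 - m)

def moco_loss(q, k, queue, T=0.07):
    """InfoNCE over one positive key and a queue of negatives.

    q, k: (B, D) embeddings of two augmented views of the same utterance;
    queue: (K, D) embeddings of past keys serving as negatives.
    """
    q, k = F.normalize(q, dim=1), F.normalize(k, dim=1)
    l_pos = (q * k).sum(dim=1, keepdim=True)             # (B, 1) positives
    l_neg = q @ queue.t()                                # (B, K) negatives
    logits = torch.cat([l_pos, l_neg], dim=1) / T
    labels = torch.zeros(q.size(0), dtype=torch.long, device=q.device)
    return F.cross_entropy(logits, labels)               # positive at index 0
```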

A Note on Kaldi's PLDA Implementation

no code implementations · 2 Apr 2018 · Ke Ding

Some explanations of Kaldi's PLDA implementation to make the formula derivation easier to follow.
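
For reference, Kaldi implements the two-covariance flavor of PLDA, and derivations for it revolve around the following generative model (the notation here is generic, not necessarily the note's or Kaldi's):

```latex
% Two-covariance PLDA (sketch). Each i-vector z is a latent speaker
% mean y plus within-speaker noise; verification scores the likelihood
% ratio between the same-speaker and different-speaker hypotheses.
\begin{aligned}
  y &\sim \mathcal{N}(\mu,\ \Phi_b) && \text{(between-speaker)} \\
  z \mid y &\sim \mathcal{N}(y,\ \Phi_w) && \text{(within-speaker)} \\
  \mathrm{score}(z_e, z_t) &= \log
    \frac{p(z_e, z_t \mid \mathcal{H}_{\mathrm{same}})}{p(z_e)\, p(z_t)}
\end{aligned}
```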

A CUDA-Based Real Parameter Optimization Benchmark

no code implementations · 29 Jul 2014 · Ke Ding, Ying Tan

Benchmarking is key for developing and comparing optimization algorithms.

Benchmarking
