no code implementations • COLING 2022 • Chonghan Lee, Md Fahim Faysal Khan, Rita Brugarolas Brufau, Ke Ding, Vijaykrishnan Narayanan
While pre-trained language models like BERT have achieved impressive results on various natural language processing tasks, deploying them on resource-restricted devices is challenging due to their intensive computational cost and memory footprint.
no code implementations • 26 Jun 2024 • Song Li, Yongbin You, Xuezhi Wang, Zhengkun Tian, Ke Ding, Guanglu Wan
Recently, multilingual artificial intelligence assistants, exemplified by ChatGPT, have gained immense popularity.
Automatic Speech Recognition (ASR) +1
1 code implementation • 24 May 2024 • Yimeng Zhang, Xin Chen, Jinghan Jia, Yihua Zhang, Chongyu Fan, Jiancheng Liu, Mingyi Hong, Ke Ding, Sijia Liu
The techniques of machine unlearning, also known as concept erasing, have been developed to address these risks.
1 code implementation • 5 Mar 2024 • Xin Chen, Hanxian Huang, Yanjun Gao, Yi Wang, Jishen Zhao, Ke Ding
Knowledge distillation, the technique of transferring knowledge from large, complex models to smaller ones, marks a pivotal step towards efficient AI deployment.
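As a minimal illustration of the idea (a generic sketch, not this paper's specific method; the temperature `T` and mixing weight `alpha` are assumed hyperparameters):

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    """Classic soft-target distillation: KL divergence between the
    temperature-softened teacher and student distributions, blended
    with the usual cross-entropy on the ground-truth labels."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)  # rescale so gradients match the hard-loss magnitude
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard
```

The `T * T` factor is the standard correction from Hinton et al.: softening the distributions by temperature `T` shrinks the gradients by roughly `1/T^2`.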
1 code implementation • 18 Oct 2023 • Yimeng Zhang, Jinghan Jia, Xin Chen, Aochuan Chen, Yihua Zhang, Jiancheng Liu, Ke Ding, Sijia Liu
Specifically, we investigated the adversarial robustness of DMs, assessed by adversarial prompts, when eliminating unwanted concepts, styles, and objects.
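Schematically, such an adversarial-prompt evaluation can be framed as a search in prompt-embedding space (a generic illustration under assumed interfaces, not the paper's algorithm; `ToyDenoiser` is a hypothetical stand-in for the unlearned diffusion model's noise predictor):

```python
import torch
import torch.nn as nn

class ToyDenoiser(nn.Module):
    """Hypothetical stand-in for a diffusion model's noise predictor
    eps(x_t, t, c); a real evaluation would plug in the unlearned model."""
    def __init__(self, dim=16, emb_dim=8):
        super().__init__()
        self.net = nn.Linear(dim + emb_dim + 1, dim)

    def forward(self, x_t, t, prompt_emb):
        return self.net(torch.cat([x_t, prompt_emb, t], dim=-1))

def adversarial_prompt(denoiser, x0, steps=100, lr=1e-2, emb_dim=8):
    """Optimize a continuous prompt embedding so the model denoises
    images of the supposedly erased concept well -- if this succeeds,
    the concept is still reachable through the optimized prompt."""
    emb = torch.zeros(x0.size(0), emb_dim, requires_grad=True)
    opt = torch.optim.Adam([emb], lr=lr)
    for _ in range(steps):
        t = torch.rand(x0.size(0), 1)       # random diffusion time in [0, 1)
        noise = torch.randn_like(x0)
        x_t = (1 - t) * x0 + t * noise      # simplified forward process
        loss = ((denoiser(x_t, t, emb) - noise) ** 2).mean()
        opt.zero_grad(); loss.backward(); opt.step()
    return emb.detach()

adv_emb = adversarial_prompt(ToyDenoiser(), torch.randn(4, 16))
```

A low final loss suggests the "erased" concept remains reachable through the optimized embedding.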
no code implementations • 18 Sep 2023 • Song Li, Yongbin You, Xuezhi Wang, Ke Ding, Guanglu Wan
To further expand the applications of multilingual artificial intelligence assistants and facilitate international communication, it is essential to enhance the performance of multilingual speech recognition, which is a crucial component of speech interaction.
no code implementations • 14 Sep 2023 • Lei Zhang, Zhengkun Tian, Xiang Chen, Jiaming Sun, Hongyu Xiang, Ke Ding, Guanglu Wan
To address this issue, we draw inspiration from the multifaceted capabilities of LLMs and Whisper, and focus on integrating multiple ASR text processing tasks related to speech recognition into the ASR model.
no code implementations • 31 Aug 2023 • ZhaoXin Huan, Ke Ding, Ang Li, Xiaolu Zhang, Xu Min, Yong He, Liang Zhang, Jun Zhou, Linjian Mo, Jinjie Gu, Zhongyi Liu, Wenliang Zhong, Guannan Zhang
3) AntM$^{2}$C provides 1 billion CTR samples with 200 features, covering 200 million users and 6 million items.
no code implementations • 23 Jun 2023 • Loc Hoang, Rita Brugarolas Brufau, Ke Ding, Bo Wu
We present BatchGNN, a distributed CPU system that showcases techniques that can be used to efficiently train GNNs on terabyte-sized graphs.
1 code implementation • 13 Mar 2023 • Luca Pegolotti, Martin R. Pfaller, Natalia L. Rubio, Ke Ding, Rita Brugarolas Brufau, Eric Darve, Alison L. Marsden
Our numerical results demonstrate the accuracy and generalizability of our method in physiological geometries comprising a variety of anatomies and boundary conditions.
1 code implementation • CVPR 2023 • Yimeng Zhang, Xin Chen, Jinghan Jia, Sijia Liu, Ke Ding
In this paper, we study the problem of temporal video grounding (TVG), which aims to predict the starting/ending time points of moments described by a text sentence within a long untrimmed video.
no code implementations • 7 Nov 2022 • Zhengkun Tian, Hongyu Xiang, Min Li, Feifei Lin, Ke Ding, Guanglu Wan
To reduce the peak latency, we propose a simple and novel method named peak-first regularization. Instead of directly modifying the computation of the CTC loss and its gradients, it applies a frame-wise knowledge distillation function that forces the probability distribution of the CTC model to shift left along the time axis.
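My reading of that mechanism, as a rough sketch (not the authors' code; `lam` is an assumed regularization weight): each frame's output distribution is distilled toward a detached copy of the next frame's distribution, pulling each CTC spike one frame earlier.

```python
import torch.nn.functional as F

def peak_first_regularizer(log_probs, lam=0.1):
    """log_probs: (T, B, V) frame-wise CTC log-posteriors.
    Frame t is distilled toward frame t+1 (treated as a fixed
    teacher), encouraging each spike to shift left in time."""
    student = log_probs[:-1]                # frames 0 .. T-2
    teacher = log_probs[1:].detach().exp()  # frames 1 .. T-1 as soft targets
    return lam * F.kl_div(student, teacher, reduction="batchmean")

# total_loss = ctc_loss + peak_first_regularizer(log_probs)
```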
no code implementations • 27 Jul 2022 • Xin Chen, Ke Ding
Recent advances of semantic image segmentation greatly benefit from deeper and larger Convolutional Neural Network (CNN) models.
no code implementations • 31 Mar 2022 • Huahuan Zheng, Keyu An, Zhijian Ou, Chen Huang, Ke Ding, Guanglu Wan
Based on the DR method, we propose a low-order density ratio method (LODR) by replacing the estimation with a low-order weak language model.
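For context, density-ratio (DR) fusion scores hypotheses roughly as follows (notation mine, for illustration; $\lambda_{1}$ and $\lambda_{2}$ are tunable interpolation weights):

$$\hat{y} = \operatorname*{arg\,max}_{y}\; \log p_{\text{ASR}}(y \mid x) + \lambda_{1} \log p_{\text{tgt}}(y) - \lambda_{2} \log p_{\text{src}}(y)$$

LODR keeps this decoding rule but replaces the source-domain LM $p_{\text{src}}$ with a low-order weak language model (e.g., a bi-gram), which is far cheaper to estimate while playing the same bias-cancelling role.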
1 code implementation • 31 Mar 2022 • Keyu An, Huahuan Zheng, Zhijian Ou, Hongyu Xiang, Ke Ding, Guanglu Wan
The simulation module is jointly trained with the ASR model using a self-supervised loss; the ASR model is optimized with the usual ASR loss, e.g., CTC-CRF as used in our experiments.
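Schematically, the joint objective might look as below (a sketch under assumed interfaces, not the released code: `sim_module.self_supervised_loss` is a hypothetical method, `lam` an assumed weight, and plain CTC stands in for CTC-CRF):

```python
import torch.nn.functional as F

def joint_step(asr_model, sim_module, speech, targets, input_lens, target_lens,
               lam=1.0):
    """One training step of the described setup: the usual ASR loss on the
    ASR branch plus a self-supervised loss for the simulation module."""
    log_probs = asr_model(speech)  # (T, B, V) log-softmax outputs assumed
    asr_loss = F.ctc_loss(log_probs, targets, input_lens, target_lens)
    sim_loss = sim_module.self_supervised_loss(speech)  # hypothetical interface
    return asr_loss + lam * sim_loss
```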
no code implementations • 18 Nov 2021 • Shira Guskin, Moshe Wasserblat, Ke Ding, Gyuwan Kim
Additionally, a separate model must be trained for each inference scenario with its distinct computational budget.
1 code implementation • 7 Jan 2020 • Ke Ding, Xuanji He, Guanglu Wan
Momentum Contrast (MoCo) is a recently proposed unsupervised representation learning framework, and has shown its effectiveness in learning good feature representations for downstream vision tasks.
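The core trick is worth recalling (the standard MoCo recipe, not this paper's speech-specific setup; `m` is the usual momentum coefficient):

```python
import torch

@torch.no_grad()
def momentum_update(query_encoder, key_encoder, m=0.999):
    """MoCo's key-encoder update: an exponential moving average of the
    query encoder's weights, instead of backpropagating into the keys."""
    for q, k in zip(query_encoder.parameters(), key_encoder.parameters()):
        k.data.mul_(m).add_(q.data, alpha=1.0 - m)
```

Keys produced by the slowly-moving encoder are pushed into a fixed-size FIFO queue of negatives, and training minimizes the InfoNCE loss between each query and its positive key.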
no code implementations • WS 2018 • Kaiyin Zhou, Sheng Zhang, Xiangyu Meng, Qi Luo, Yuxing Wang, Ke Ding, Yukun Feng, Mo Chen, Kevin Cohen, Jingbo Xia
Sequence labeling of biomedical entities, e.g., side effects or phenotypes, has been a long-standing task in the BioNLP and MedNLP communities.
no code implementations • 2 Apr 2018 • Ke Ding
Some explanations of Kaldi's PLDA implementation to make the formula derivations easier to follow.
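For orientation, Kaldi's PLDA is commonly described via the two-covariance model (Ioffe, 2006; notation assumed here):

$$\mathbf{y} \sim \mathcal{N}(\boldsymbol{\mu}, \boldsymbol{\Gamma}_{b}), \qquad \mathbf{x} \mid \mathbf{y} \sim \mathcal{N}(\mathbf{y}, \boldsymbol{\Gamma}_{w})$$

where $\mathbf{y}$ is a latent per-speaker mean with between-class covariance $\boldsymbol{\Gamma}_{b}$, observed vectors $\mathbf{x}$ scatter around it with within-class covariance $\boldsymbol{\Gamma}_{w}$, and verification scoring is the log-likelihood ratio between the same-speaker and different-speaker hypotheses.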
no code implementations • 29 Jul 2014 • Ke Ding, Ying Tan
Benchmarking is key for developing and comparing optimization algorithms.