no code implementations • COLING 2022 • Yuchen Liu, Jinming Zhao, Jingwen Hu, Ruichen Li, Qin Jin
Emotion Recognition in Conversation (ERC) has attracted increasing attention in the affective computing research field.
no code implementations • 14 Jun 2025 • Mengyuan Sun, Yu Li, Yuchen Liu, Bo Du, Yunjie Ge
Multimodal contrastive learning models like CLIP have demonstrated remarkable vision-language alignment capabilities, yet their vulnerability to backdoor attacks poses critical security risks.
no code implementations • 19 May 2025 • Yifeng Jiao, Yuchen Liu, Yu Zhang, Xin Guo, Yushuai Wu, Chen Jiang, Jiyang Li, Hongwei Zhang, Limei Han, Xin Gao, Yuan Qi, Yuan Cheng
The advent of single-cell Assay for Transposase-Accessible Chromatin using sequencing (scATAC-seq) offers an innovative perspective for deciphering regulatory mechanisms by assembling a vast repository of single-cell chromatin accessibility data.
1 code implementation • 15 May 2025 • Yihong Dong, Yuchen Liu, Xue Jiang, Zhi Jin, Ge Li
Specifically, RPG first leverages grammar rules to identify repetition problems during code generation, and then strategically decays the likelihood of the critical tokens that contribute to those repetitions, thereby mitigating them.
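The snippet only gestures at the mechanism, so here is a minimal Python sketch of the general idea of decaying the likelihood of repetition-critical tokens during decoding. The n-gram check below stands in for RPG's grammar-rule detection, and the penalty value and sampling setup are illustrative assumptions, not the paper's procedure.

```python
# Minimal sketch: decay the likelihood of tokens flagged as driving repetition.
# The n-gram detection is a crude stand-in for grammar-rule analysis, and the
# fixed penalty is an illustrative assumption, not the RPG method itself.
import numpy as np

def detect_repeated_ngram_tokens(generated_ids, n=3):
    """Flag tokens that close an n-gram already seen earlier in the output."""
    seen, critical = set(), set()
    for i in range(len(generated_ids) - n + 1):
        ngram = tuple(generated_ids[i:i + n])
        if ngram in seen:
            critical.add(ngram[-1])     # token that completes the repeated n-gram
        seen.add(ngram)
    return critical

def decay_repetition_logits(logits, critical_tokens, penalty=2.0):
    """Subtract a fixed penalty from the logits of repetition-critical tokens."""
    penalized = logits.copy()
    for tok in critical_tokens:
        penalized[tok] -= penalty
    return penalized

# Toy usage: a 10-token vocabulary and a partially generated sequence.
rng = np.random.default_rng(0)
logits = rng.normal(size=10)
generated = [1, 2, 3, 1, 2, 3, 1, 2]    # the trigram "1 2 3" already repeats
critical = detect_repeated_ngram_tokens(generated)
probs = np.exp(decay_repetition_logits(logits, critical))
probs /= probs.sum()
print(sorted(critical), probs.round(3))
```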
no code implementations • 11 May 2025 • Saad Masrur, Ozgur Ozdemir, Anil Gurses, Ismail Guvenc, Mihail L. Sichitiu, Rudra Dutta, Magreth Mushi, Thomas Zajkowski, Cole Dickerson, Gautham Reddy, Sergio Vargas Villar, Chau-Wai Wong, Baisakhi Chatterjee, Sonali Chaudhari, Zhizhen Li, Yuchen Liu, Paul Kudyba, Haijian Sun, Jaya Sravani Mandapaka, Kamesh Namuduri, Weijie Wang, Fraida Fund
For each team, the UGV was placed at three different positions, resulting in a total of 30 datasets, 15 collected in a DT simulation environment and 15 in a physical outdoor testbed.
no code implementations • 11 Apr 2025 • Yongsheng Yu, Haitian Zheng, Zhifei Zhang, Jianming Zhang, Yuqian Zhou, Connelly Barnes, Yuchen Liu, Wei Xiong, Zhe Lin, Jiebo Luo
Recent progress in generative models has significantly improved image restoration capabilities, particularly through powerful diffusion models that offer remarkable recovery of semantic details and local fidelity.
no code implementations • 1 Apr 2025 • Yuchen Liu, Lino Lerch, Luigi Palmieri, Andrey Rudenko, Sebastian Koch, Timo Ropinski, Marco Aiello
In this paper, we present a systematic analysis of applying pre-trained MLLMs for context-aware human behavior prediction.
no code implementations • 21 Mar 2025 • Bhishma Dedhia, David Bourgin, Krishna Kumar Singh, Yuheng Li, Yan Kang, Zhan Xu, Niraj K. Jha, Yuchen Liu
At each diffusion step, VINs encode global semantics from the noisy input of local chunks and the encoded representations, in turn, guide DiTs in denoising chunks in parallel.
no code implementations • 16 Mar 2025 • Jie Dai, Yuchen Liu, Jiakang Zheng, Ruichen Zhang, Jiayi Zhang, Bo Ai
Simulation results demonstrate that movable CF massive MIMO effectively suppresses the negative impact of the Doppler effect in HST communications.
no code implementations • 8 Mar 2025 • Zifan Zhang, Minghong Fang, Dianwei Chen, Xianfeng Yang, Yuchen Liu
This article presents a comprehensive analysis of the synergy of DNTs, FL, and RL techniques, showcasing their collective potential to address critical challenges in 6G networks.
1 code implementation • 28 Feb 2025 • Yichi Zhang, Bohao Lv, Le Xue, Wenbo Zhang, Yuchen Liu, Yu Fu, Yuan Cheng, Yuan Qi
SemiSAM+ consists of one or multiple promptable foundation models as generalist models, and a trainable task-specific segmentation model as specialist model.
1 code implementation • 20 Feb 2025 • Yichi Zhang, Le Xue, Wenbo Zhang, Lanlan Li, Yuchen Liu, Chen Jiang, Yuan Cheng, Yuan Qi
Positron Emission Tomography (PET) imaging plays a crucial role in modern medical diagnostics by revealing the metabolic processes within a patient's body, which is essential for quantification of therapy response and monitoring treatment progress.
no code implementations • 7 Feb 2025 • Hanzhi Yu, Yuchen Liu, Zhaohui Yang, Haijian Sun, Mingzhe Chen
Since the DNT can predict the physical network status based on its historical status, the BSs may not need to send their physical network information at each time slot, allowing them to conserve spectrum resources to serve the users.
no code implementations • 7 Feb 2025 • Yuchen Liu, Chen Chen, Lingjuan Lyu, Yaochu Jin, Gang Chen
This gradient skew phenomenon allows Byzantine gradients to hide within the densely distributed skewed gradients.
1 code implementation • 4 Feb 2025 • Hoang M. Nguyen, Satya N. Shukla, Qiang Zhang, Hanchao Yu, Sreya D. Roy, Taipeng Tian, Lingjiong Zhu, Yuchen Liu
To address these limitations, we introduce BRIDLE (Bidirectional Residual Quantization Interleaved Discrete Learning Encoder), a self-supervised encoder pretraining framework that incorporates residual quantization (RQ) into the bidirectional training process, and is generalized for pretraining with audio, image, and video.
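For readers unfamiliar with residual quantization, the sketch below shows only the generic RQ building block the abstract refers to: each stage quantizes the residual left by the previous stage against its own codebook. Codebook sizes and the nearest-neighbor lookup are illustrative assumptions; this is not BRIDLE's bidirectional pretraining pipeline.

```python
# Generic residual quantization (RQ) sketch: successive codebooks quantize
# what the previous stage left over. Illustrative only; not BRIDLE itself.
import numpy as np

def residual_quantize(x, codebooks):
    """x: (d,) vector; codebooks: list of (k, d) arrays. Returns codes and reconstruction."""
    residual, recon, codes = x.copy(), np.zeros_like(x), []
    for cb in codebooks:
        idx = int(np.argmin(np.linalg.norm(cb - residual, axis=1)))  # nearest code
        codes.append(idx)
        recon += cb[idx]
        residual = residual - cb[idx]   # the next stage sees the leftover error
    return codes, recon

rng = np.random.default_rng(0)
d, k, stages = 8, 16, 4
codebooks = [rng.normal(size=(k, d)) for _ in range(stages)]
x = rng.normal(size=d)
codes, recon = residual_quantize(x, codebooks)
print(codes, float(np.linalg.norm(x - recon)))
```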
no code implementations • 1 Feb 2025 • Dianwei Chen, Zifan Zhang, Yuchen Liu, Xianfeng Terry Yang
Autonomous driving systems face significant challenges in handling unpredictable edge-case scenarios, such as adversarial pedestrian movements, dangerous vehicle maneuvers, and sudden environmental changes.
no code implementations • 29 Jan 2025 • Wenbin Wang, Qiwen Ma, Zifan Zhang, Yuchen Liu, Zhuqing Liu, Minghong Fang
In BadUnlearn, malicious clients send specifically designed local model updates to the server during the unlearning process, aiming to ensure that the resulting unlearned model remains poisoned.
no code implementations • 2 Jan 2025 • Lixiong Qin, Shilong Ou, Miaoxuan Zhang, Jiangning Wei, Yuhang Zhang, Xiaoshuai Song, Yuchen Liu, Mei Wang, Weiran Xu
Faces and humans are crucial elements in social interaction and are widely included in everyday photos and videos.
no code implementations • 20 Dec 2024 • Zihan Ding, Chi Jin, Difan Liu, Haitian Zheng, Krishna Kumar Singh, Qiang Zhang, Yan Kang, Zhe Lin, Yuchen Liu
In this work, we introduce a distillation method that combines variational score distillation and consistency distillation to achieve few-step video generation, maintaining both high quality and diversity.
1 code implementation • 1 Nov 2024 • Yingwei Ma, Rongyu Cao, Yongchang Cao, Yue Zhang, Jue Chen, Yibo Liu, Yuchen Liu, Binhua Li, Fei Huang, Yongbin Li
The results demonstrate that Lingma SWE-GPT 72B successfully resolves 30.20% of the GitHub issues, marking a significant improvement in automatic issue resolution (a 22.76% relative improvement over Llama 3.1 405B) and approaching the performance of closed-source models (GPT-4o resolves 31.80% of issues).
1 code implementation • 19 Oct 2024 • Sizhe Liu, Jun Xia, Lecheng Zhang, Yuchen Liu, Yue Liu, Wenjie Du, Zhangyang Gao, Bozhen Hu, Cheng Tan, Hongxin Xiang, Stan Z. Li
Molecular relational learning (MRL) is crucial for understanding the interaction behaviors between molecular pairs, a critical aspect of drug discovery and development.
no code implementations • 23 Sep 2024 • Alireza Ganjdanesh, Yan Kang, Yuchen Liu, Richard Zhang, Zhe Lin, Heng Huang
Finally, with a selected configuration, we fine-tune our pruned experts to obtain our mixture of efficient experts.
no code implementations • 1 Aug 2024 • Weihang Ding, Zhaohui Yang, Mingzhe Chen, Yuchen Liu, Mohammad Shikh-Bahaei
To solve the simplified problem, this paper introduces both greedy and heuristic algorithms that optimize vehicle assignments and predictive beamforming.
1 code implementation • 31 Jul 2024 • Yuxin Wen, Yuchen Liu, Chen Chen, Lingjuan Lyu
In this work, we introduce a straightforward yet effective method for detecting memorized prompts by inspecting the magnitude of text-conditional predictions.
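One plausible reading of the detection signal named here is the norm of the gap between the text-conditional and unconditional noise predictions; that reading, the toy denoiser, and the averaging below are all assumptions for illustration rather than the paper's implementation.

```python
# Sketch of the detection idea: measure how strongly the text condition moves
# the model's noise prediction. The denoiser, embeddings, and averaging are
# placeholders, not the paper's actual method or thresholds.
import numpy as np

rng = np.random.default_rng(0)

def denoise(x_t, t, cond):
    """Toy stand-in for a diffusion model's noise predictor."""
    return 0.1 * x_t + 0.5 * cond + 0.01 * t

def text_conditional_magnitude(prompt_emb, n_steps=10, latent_dim=64):
    """Average L2 norm of (conditional - unconditional) noise predictions."""
    uncond = np.zeros(latent_dim)           # placeholder for the empty-prompt embedding
    mags = []
    for t in range(n_steps):
        x_t = rng.normal(size=latent_dim)   # fresh noisy latent at each step
        diff = denoise(x_t, t, prompt_emb) - denoise(x_t, t, uncond)
        mags.append(np.linalg.norm(diff))
    return float(np.mean(mags))

ordinary = rng.normal(scale=0.1, size=64)
suspicious = rng.normal(scale=2.0, size=64)  # pretend this prompt is strongly memorized
print(text_conditional_magnitude(ordinary), text_conditional_magnitude(suspicious))
```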
no code implementations • 29 Jun 2024 • Zifan Zhang, Yuchen Liu, Zhiyuan Peng, Mingzhe Chen, Dongkuan Xu, Shuguang Cui
To bridge this gap, we introduce a novel digital twin-assisted optimization framework, called D-REC, which integrates reinforcement learning (RL) with diverse intervention modules to ensure reliable caching in nextG wireless networks.
no code implementations • 14 Jun 2024 • Minghong Fang, Zifan Zhang, Hairi, Prashant Khanduri, Jia Liu, Songtao Lu, Yuchen Liu, Neil Gong
However, due to its fully decentralized nature, DFL is highly vulnerable to poisoning attacks, where malicious clients could manipulate the system by sending carefully-crafted local models to their neighboring clients.
2 code implementations • 4 Jun 2024 • Philip Anastassiou, Jiawei Chen, Jitong Chen, Yuanzhe Chen, Zhuo Chen, Ziyi Chen, Jian Cong, Lelai Deng, Chuang Ding, Lu Gao, Mingqing Gong, Peisong Huang, Qingqing Huang, Zhiying Huang, YuanYuan Huo, Dongya Jia, ChuMin Li, Feiya Li, Hui Li, Jiaxin Li, Xiaoyang Li, Xingxing Li, Lin Liu, Shouda Liu, Sichao Liu, Xudong Liu, Yuchen Liu, Zhengxi Liu, Lu Lu, Junjie Pan, Xin Wang, Yuping Wang, Yuxuan Wang, Zhen Wei, Jian Wu, Chao Yao, Yifeng Yang, YuanHao Yi, Junteng Zhang, Qidi Zhang, Shuo Zhang, Wenjie Zhang, Yang Zhang, Zilin Zhao, Dejian Zhong, Xiaobin Zhuang
Seed-TTS offers superior controllability over various speech attributes such as emotion and is capable of generating highly expressive and diverse speech for speakers in the wild.
1 code implementation • 4 Jun 2024 • Donglei Yu, Xiaomian Kang, Yuchen Liu, Yu Zhou, Chengqing Zong
In addition, building decision paths requires unidirectional encoders to simulate streaming source inputs, which impairs the translation quality of SiMT models.
1 code implementation • 28 May 2024 • Huiping Zhuang, Di Fang, Kai Tong, Yuchen Liu, Ziqian Zeng, Xu Zhou, Cen Chen
One of these scenarios can be formulated as an online continual learning (OCL) problem.
no code implementations • CVPR 2024 • Cusuh Ham, Matthew Fisher, James Hays, Nicholas Kolkin, Yuchen Liu, Richard Zhang, Tobias Hinz
We present personalized residuals and localized attention-guided sampling for efficient concept-driven generation using text-to-image diffusion models.
no code implementations • CVPR 2024 • Hongjie Wang, Difan Liu, Yan Kang, Yijun Li, Zhe Lin, Niraj K. Jha, Yuchen Liu
Specifically, for single-denoising-step pruning, we develop a novel ranking algorithm, Generalized Weighted Page Rank (G-WPR), to identify redundant tokens, and a similarity-based recovery method to restore tokens for the convolution operation.
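The sketch below illustrates only the two generic ingredients this sentence names: rank tokens by some importance score, drop the least important ones, then densify the token grid for convolution by copying each dropped position's most similar kept token. The norm-based scoring is a placeholder and does not reproduce G-WPR.

```python
# Generic illustration only: prune tokens by a placeholder importance score,
# then fill pruned positions with their most similar kept token so that a
# convolution still sees a dense grid. Not the G-WPR ranking algorithm.
import numpy as np

def prune_and_recover(tokens, keep_ratio=0.5):
    """tokens: (n, d). Returns kept indices and a dense (n, d) recovery."""
    n = tokens.shape[0]
    scores = np.linalg.norm(tokens, axis=1)           # placeholder importance score
    keep = np.argsort(scores)[-int(n * keep_ratio):]  # indices of the most important tokens
    kept = tokens[keep]

    # similarity-based recovery: each pruned token is replaced by its nearest
    # kept token under cosine similarity
    normed = tokens / (np.linalg.norm(tokens, axis=1, keepdims=True) + 1e-8)
    sims = normed @ normed[keep].T                    # (n, n_keep)
    nearest = np.argmax(sims, axis=1)
    recovered = kept[nearest]
    recovered[keep] = tokens[keep]                    # kept tokens stay exact
    return keep, recovered

rng = np.random.default_rng(0)
tokens = rng.normal(size=(16, 8))
keep, dense = prune_and_recover(tokens, keep_ratio=0.25)
print(keep, dense.shape)
```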
no code implementations • 22 Apr 2024 • Zifan Zhang, Minghong Fang, Jiayuan Huang, Yuchen Liu
Federated Learning (FL) offers a distributed framework to train a global control model across multiple base stations without compromising the privacy of their local network data.
no code implementations • 22 Apr 2024 • Zifan Zhang, Mingzhe Chen, Zhaohui Yang, Yuchen Liu
In recent years, the complexity of 5G and beyond wireless networks has escalated, prompting a need for innovative frameworks to facilitate flexible management and efficient deployment.
1 code implementation • 7 Apr 2024 • Junhong Wu, Yuchen Liu, Chengqing Zong
In the evolving landscape of Neural Machine Translation (NMT), the pretrain-then-finetune paradigm has yielded impressive results.
no code implementations • 4 Apr 2024 • Yuchen Liu, Luigi Palmieri, Sebastian Koch, Ilche Georgievski, Marco Aiello
In our extensive evaluation, we show that DELTA enables an efficient and fully automatic task planning pipeline, achieving higher planning success rates and significantly shorter planning times compared to the state of the art.
no code implementations • 27 Mar 2024 • Hanqing Fu, Gaolei Li, Jun Wu, Jianhua Li, Xi Lin, Kai Zhou, Yuchen Liu
Federated neuromorphic learning (FedNL) leverages event-driven spiking neural networks and federated learning frameworks to effectively execute intelligent analysis tasks across large numbers of distributed low-power devices, but it is also vulnerable to poisoning attacks.
1 code implementation • 23 Mar 2024 • Huiping Zhuang, Yuchen Liu, Run He, Kai Tong, Ziqian Zeng, Cen Chen, Yi Wang, Lap-Pui Chau
In this paper, we propose an exemplar-free approach--Forward-only Online Analytic Learning (F-OAL).
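Analytic learning approaches like this typically rest on a least-squares classifier over frozen features that is updated recursively, with no gradients and no stored exemplars. The minimal recursive least-squares sketch below shows that generic building block only, not F-OAL's exact algorithm.

```python
# Minimal recursive least-squares (RLS) classifier over frozen features, the
# standard building block behind analytic learning. Batch-by-batch updates
# need no gradients and no replay of old data. Generic sketch only; not F-OAL.
import numpy as np

class RLSClassifier:
    def __init__(self, feat_dim, n_classes, reg=1.0):
        self.W = np.zeros((feat_dim, n_classes))
        self.P = np.eye(feat_dim) / reg    # inverse of the regularized Gram matrix

    def update(self, X, Y):
        """X: (b, feat_dim) frozen features; Y: (b, n_classes) one-hot labels."""
        b = X.shape[0]
        K = self.P @ X.T @ np.linalg.inv(np.eye(b) + X @ self.P @ X.T)  # Kalman gain
        self.W += K @ (Y - X @ self.W)
        self.P -= K @ X @ self.P

    def predict(self, X):
        return np.argmax(X @ self.W, axis=1)

rng = np.random.default_rng(0)
clf = RLSClassifier(feat_dim=32, n_classes=5)
for _ in range(20):                        # stream of small online batches
    X = rng.normal(size=(8, 32))
    y = rng.integers(0, 5, size=8)
    clf.update(X, np.eye(5)[y])
print(clf.predict(rng.normal(size=(3, 32))))
```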
no code implementations • 29 Feb 2024 • Xukun Liu, Zhiyuan Peng, Xiaoyuan Yi, Xing Xie, Lirong Xiang, Yuchen Liu, Dongkuan Xu
While achieving remarkable progress in a broad range of tasks, large language models (LLMs) remain significantly limited in properly using massive external tools.
no code implementations • 22 Jan 2024 • Yujiao Zhu, Mingzhe Chen, Sihua Wang, Ye Hu, Yuchen Liu, Changchuan Yin
Meanwhile, since the accuracy of the distance estimation depends on the signal-to-noise ratio of the transmission signals, the active UAV must optimize its transmit power.
no code implementations • 9 Jan 2024 • Weining Weng, Yang Gu, Shuai Guo, Yuan Ma, Zhaohua Yang, Yuchen Liu, Yiqiang Chen
2) We provide a comprehensive review of SSL for EEG analysis, including the taxonomy, methodology, and technical details of existing EEG-based SSL frameworks, and discuss the differences among these methods.
no code implementations • CVPR 2024 • Zhengang Li, Yan Kang, Yuchen Liu, Difan Liu, Tobias Hinz, Feng Liu, Yanzhi Wang
Our method employs a supernet training paradigm that targets various model cost and resolution options using a weight-sharing method.
1 code implementation • 11 Dec 2023 • Yichi Zhang, Jin Yang, Yuchen Liu, Yuan Cheng, Yuan Qi
Semi-supervised learning has attracted much attention due to its lower dependence on abundant expert annotations compared to fully supervised methods, which is especially important for medical image segmentation, where intensive pixel/voxel-wise labeling by domain experts is typically required.
no code implementations • 22 Nov 2023 • Licheng Lin, Mingzhe Chen, Zhaohui Yang, Yusen Wu, Yuchen Liu
In particular, our designed clustered FL algorithm must overcome two challenges associated with FL training.
1 code implementation • 13 Oct 2023 • Dongsheng Jiang, Yuchen Liu, Songlin Liu, Jin'e Zhao, Hao Zhang, Zhen Gao, Xiaopeng Zhang, Jin Li, Hongkai Xiong
By simply equipping it with an MLP layer for alignment, DINO surpasses CLIP in fine-grained related perception tasks.
1 code implementation • 2 Sep 2023 • Chen Wang, Minpeng Liao, Zhongqiang Huang, Jinliang Lu, Junhong Wu, Yuchen Liu, Chengqing Zong, Jiajun Zhang
One is a cascaded approach where outputs (tokens or states) of a separately trained speech recognition system are used as inputs for LLMs, which limits their potential in modeling alignment between speech and text.
1 code implementation • 8 Aug 2023 • Binfeng Xu, Xukun Liu, Hua Shen, Zeyu Han, Yuhan Li, Murong Yue, Zhiyuan Peng, Yuchen Liu, Ziyu Yao, Dongkuan Xu
We present gentopia, an ALM framework enabling flexible customization of agents through simple configurations, seamlessly integrating various language models, task formats, prompting modules, and plugins into a unified paradigm.
1 code implementation • 17 Jun 2023 • Boyuan Cao, Xinyu Zhou, Congmin Guo, Baohua Zhang, Yuchen Liu, Qianqiu Tan
In the past few years, researchers have proposed many methods to address the above-mentioned issues and achieved very good results on publicly available datasets such as the Cornell dataset and the Jacquard dataset.
Ranked #1 on Robotic Grasping on NBMOD (using extra training data)
no code implementations • 13 Jun 2023 • Gaolei Li, YuanYuan Zhao, Wenqi Wei, Yuchen Liu
Secondly, to rearm current security strategies, a finetuning-based deployment mechanism is proposed to transfer learned knowledge into the student model, while minimizing the defense cost.
2 code implementations • 23 May 2023 • Binfeng Xu, Zhiyuan Peng, Bowen Lei, Subhabrata Mukherjee, Yuchen Liu, Dongkuan Xu
Augmented Language Models (ALMs) blend the reasoning capabilities of Large Language Models (LLMs) with tools that allow for knowledge retrieval and action execution.
no code implementations • 28 Apr 2023 • Yuchen Liu, Natasha Ong, Kaiyan Peng, Bo Xiong, Qifan Wang, Rui Hou, Madian Khabsa, Kaiyue Yang, David Liu, Donald S. Williamson, Hanchao Yu
Our model encodes different views of the input signal and builds several channel-resolution feature stages to process the multiple views of the input at different resolutions in parallel.
no code implementations • 4 Apr 2023 • Xuanchao Ma, Yuchen Liu
We find that the detection criteria used in infrared imaging ethylene leakage detection research cannot fully reflect real-world production conditions, which is not conducive to evaluating the performance of current image-based target detection methods.
no code implementations • 17 Mar 2023 • Chuanhe Liu, Xinjie Zhang, Xiaolong Liu, Tenggan Zhang, Liyu Meng, Yuchen Liu, Yuanyuan Deng, Wenqiang Jiang
This paper presents our submission to the Expression Classification Challenge of the fifth Affective Behavior Analysis in-the-wild (ABAW) Competition.
1 code implementation • 13 Feb 2023 • Yuchen Liu, Chen Chen, Lingjuan Lyu, Fangzhao Wu, Sai Wu, Gang Chen
In order to address this issue, we propose GAS, an approach that can successfully adapt existing robust AGRs to non-IID settings.
no code implementations • 9 Feb 2023 • Yuying Li, Yuchen Liu, Donald S. Williamson
More specifically, we develop a joint learning approach that uses a composite T60 module and a separate dereverberation module to simultaneously perform reverberation time estimation and dereverberation.
no code implementations • CVPR 2023 • Yuchen Liu, Yaoming Wang, Yabo Chen, Wenrui Dai, Chenglin Li, Junni Zou, Hongkai Xiong
Then, we propose a novel unsupervised domain generalization approach, namely Dual Nearest Neighbors contrastive learning with strong Augmentation (DN^2A).
no code implementations • ICCV 2023 • Yuchen Liu, Yabo Chen, Mengran Gou, Chun-Ting Huang, Yaoming Wang, Wenrui Dai, Hongkai Xiong
In this paper, we propose the first Unsupervised Domain Generalization framework for Face Anti-Spoofing, namely UDG-FAS, which could exploit large amounts of easily accessible unlabeled data to learn generalizable features for enhancing the low-data regime of FAS.
1 code implementation • CVPR 2023 • Yaoming Wang, Bowen Shi, Xiaopeng Zhang, Jin Li, Yuchen Liu, Wenrui Dai, Chenglin Li, Hongkai Xiong, Qi Tian
To mitigate the computational and storage demands, recent research has explored Parameter-Efficient Fine-Tuning (PEFT), which focuses on tuning a minimal number of parameters for efficient adaptation.
no code implementations • 2 Dec 2022 • Lei Shang, Mouxiao Huang, Wu Shi, Yuchen Liu, Yang Liu, Fei Wang, Baigui Sun, Xuansong Xie, Yu Qiao
Intuitively, FR algorithms can benefit from both the estimation of uncertainty and the detection of out-of-distribution (OOD) samples.
no code implementations • 4 Nov 2022 • Yuchen Liu, Li-Chia Yang, Alex Pawlicki, Marko Stamenovic
Speech quality assessment has been a critical component in many voice communication related applications such as telephony and online conferencing.
1 code implementation • 2 Nov 2022 • Yihong Dong, Xue Jiang, Yuchen Liu, Ge Li, Zhi Jin
CodePAD can leverage existing sequence-based models, and we show that it can achieve 100% grammatical correctness on these benchmark datasets.
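One standard way to guarantee fully grammatical outputs is to mask, at every decoding step, all tokens the grammar automaton disallows. The toy balanced-parentheses automaton below is purely illustrative and is not CodePAD's actual mechanism.

```python
# Generic constrained-decoding sketch: mask out every token the grammar
# automaton forbids, so only grammatical sequences can be produced.
# The toy automaton is an illustrative assumption, not CodePAD's module.
import numpy as np

VOCAB = ["(", ")", "x", "<eos>"]   # toy grammar: balanced parentheses around x's

def allowed_tokens(stack_depth):
    allowed = {"(", "x"}
    allowed.add(")" if stack_depth > 0 else "<eos>")
    return [i for i, t in enumerate(VOCAB) if t in allowed]

def constrained_sample(logits, stack_depth, rng):
    mask = np.full_like(logits, -np.inf)
    idx = allowed_tokens(stack_depth)
    mask[idx] = logits[idx]
    probs = np.exp(mask - mask.max())
    probs /= probs.sum()
    return int(rng.choice(len(VOCAB), p=probs))

rng = np.random.default_rng(0)
depth, out = 0, []
for _ in range(50):                          # cap length for the demo
    name = VOCAB[constrained_sample(rng.normal(size=len(VOCAB)), depth, rng)]
    if name == "<eos>":
        break
    out.append(name)
    depth += 1 if name == "(" else (-1 if name == ")" else 0)
print("".join(out))
```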
1 code implementation • 18 Oct 2022 • Chen Wang, Yuchen Liu, Boxing Chen, Jiajun Zhang, Wei Luo, Zhongqiang Huang, Chengqing Zong
Existing zero-shot methods fail to align the two modalities of speech and text into a shared semantic space, resulting in much worse performance compared to the supervised ST methods.
no code implementations • 24 Aug 2022 • Yuchen Liu, Zhixin Shu, Yijun Li, Zhe Lin, Richard Zhang, S. Y. Kung
While concatenating GAN inversion and a 3D-aware, noise-to-image GAN is a straightforward solution, it is inefficient and may lead to a noticeable drop in editing quality.
1 code implementation • 3 Aug 2022 • Yuchen Liu
In regions that practice common law, relevant historical cases are essential references for sentencing.
1 code implementation • 31 Jul 2022 • Yabo Chen, Yuchen Liu, Dongsheng Jiang, Xiaopeng Zhang, Wenrui Dai, Hongkai Xiong, Qi Tian
We also analyze how to build good views for the teacher branch to produce latent representation from the perspective of information bottleneck.
1 code implementation • 19 Jul 2022 • Tenggan Zhang, Chuanhe Liu, Xiaolong Liu, Yuchen Liu, Liyu Meng, Lei Sun, Wenqiang Jiang, Fengyuan Zhang, Jinming Zhao, Qin Jin
This paper presents our system for the Multi-Task Learning (MTL) Challenge in the 4th Affective Behavior Analysis in-the-wild (ABAW) competition.
no code implementations • 13 Jul 2022 • Yali Du, Chengdong Ma, Yuchen Liu, Runji Lin, Hao Dong, Jun Wang, Yaodong Yang
Reinforcement learning algorithms require a large number of samples; this often limits their real-world applications even on simple tasks.
1 code implementation • 30 May 2022 • Chen Chen, Yuchen Liu, Xingjun Ma, Lingjuan Lyu
In this paper, we study the problem of FAT under label skewness, and reveal one root cause of the training instability and natural accuracy degradation issues: skewed labels lead to non-identical class probabilities and heterogeneous local models.
1 code implementation • ACL 2022 • Jinming Zhao, Tenggan Zhang, Jingwen Hu, Yuchen Liu, Qin Jin, Xinchao Wang, Haizhou Li
In this work, we propose a Multi-modal Multi-scene Multi-label Emotional Dialogue dataset, M3ED, which contains 990 dyadic emotional dialogues from 56 different TV series, a total of 9,082 turns and 24,449 utterances.
1 code implementation • 18 Apr 2022 • Junhang Li, Jiao Wei, Can Tong, Tingting Shen, Yuchen Liu, Chen Li, Shouliang Qi, YuDong Yao, Yueyang Teng
Traditional nonnegative matrix factorization (NMF) learns a new feature representation on the whole data space, which means treating all features equally.
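As a point of contrast with the approach being motivated, the snippet below runs the traditional NMF this sentence refers to with scikit-learn, factoring a nonnegative data matrix into nonnegative factors W and H over the whole feature space. It illustrates the standard baseline only, not the paper's proposed method.

```python
# Baseline illustration of traditional NMF: X is approximated by W @ H,
# treating all features alike. This shows the standard decomposition the
# sentence contrasts against, not the paper's proposed variant.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
X = rng.random((100, 20))                 # 100 samples, 20 nonnegative features

model = NMF(n_components=5, init="nndsvda", random_state=0, max_iter=500)
W = model.fit_transform(X)                # (100, 5) new feature representation
H = model.components_                     # (5, 20) basis over the original features

print(W.shape, H.shape, float(np.linalg.norm(X - W @ H)))
```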
no code implementations • 24 Mar 2022 • Liyu Meng, Yuchen Liu, Xiaolong Liu, Zhaopei Huang, Yuan Cheng, Meng Wang, Chuanhe Liu, Qin Jin
In this paper, we briefly introduce our submission to the Valence-Arousal Estimation Challenge of the 3rd Affective Behavior Analysis in-the-wild (ABAW) competition.
no code implementations • 21 Oct 2021 • Yuchen Liu, S. Y. Kung, David Wentzlaff
While most prior works in evolutionary learning aim at directly searching the structure of a network, few attempts have been made on another promising track, channel pruning, which recently has made major headway in designing efficient deep learning models.
no code implementations • 21 Oct 2021 • Yuchen Liu, David Wentzlaff, S. Y. Kung
We then propose a novel layer-adaptive hierarchical pruning approach, where we use a coarse class discrimination scheme for early layers and a fine one for later layers.
no code implementations • 29 Sep 2021 • Yuchen Liu, Yali Du, Runji Lin, Hangrui Bi, Mingdong Wu, Jun Wang, Hao Dong
Model-based RL is an effective approach for reducing sample complexity.
1 code implementation • ACL 2021 • Jingwen Hu, Yuchen Liu, Jinming Zhao, Qin Jin
Emotion recognition in conversation (ERC) is a crucial component in affective dialogue systems, which helps the system understand users' emotions and generate empathetic responses.
1 code implementation • CVPR 2021 • Yuchen Liu, Zhixin Shu, Yijun Li, Zhe Lin, Federico Perazzi, S. Y. Kung
We then propose a novel content-aware method to guide the processes of both pruning and distillation.
no code implementations • 18 Feb 2021 • Yuchen Liu, Chenyang Xu, Ziquan Zhuang
We prove that on any log Fano pair of dimension $n$ whose stability threshold is less than $\frac{n+1}{n}$, any valuation computing the stability threshold has a finitely generated associated graded ring.
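Restating the claim in symbols, where $\delta(X,\Delta)$ is used here, by assumption, to denote the stability threshold of the pair and $\operatorname{gr}_v R$ the associated graded ring with respect to a valuation $v$:

```latex
% Symbolic restatement of the sentence above; the notation \delta(X,\Delta)
% for the stability threshold and gr_v R for the associated graded ring is
% assumed for this note, not taken from the paper.
\[
  \dim X = n, \qquad \delta(X,\Delta) < \frac{n+1}{n}
  \;\Longrightarrow\;
  \operatorname{gr}_v R \ \text{is finitely generated for every valuation } v
  \ \text{computing } \delta(X,\Delta).
\]
```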
Algebraic Geometry • Differential Geometry
no code implementations • ICCV 2021 • Yaoming Wang, Yuchen Liu, Wenrui Dai, Chenglin Li, Junni Zou, Hongkai Xiong
Existing differentiable neural architecture search approaches simply assume that the architectural distributions on different edges are independent of each other, which conflicts with the intrinsic properties of architecture.
no code implementations • 17 Nov 2020 • Yifan Yu, Shan Huang, Yuchen Liu, Yong Tan
We apply a partial-linear instrumental variable approach with a double machine learning framework to causally identify the impact of the negative discrete emotions on online content diffusion.
no code implementations • 28 Oct 2020 • Yuchen Liu, Junnan Zhu, Jiajun Zhang, Chengqing Zong
End-to-end speech translation aims to translate speech in one language into text in another language in an end-to-end manner.
no code implementations • WS 2020 • Qian Wang, Yuchen Liu, Cong Ma, Yu Lu, Yining Wang, Long Zhou, Yang Zhao, Jiajun Zhang, Cheng-qing Zong
This paper describes CASIA's system for the IWSLT 2020 open domain translation task.
no code implementations • 29 Apr 2020 • Yuchen Liu, David Wentzlaff, S. Y. Kung
To this end, we initiate the first study on the effectiveness of a broad range of discriminant functions on channel pruning.
1 code implementation • 16 Dec 2019 • Yuchen Liu, Jiajun Zhang, Hao Xiong, Long Zhou, Zhongjun He, Hua Wu, Haifeng Wang, Cheng-qing Zong
Speech-to-text translation (ST), which translates source language speech into target language text, has attracted intensive attention in recent years.
no code implementations • IJCNLP 2019 • Yining Wang, Jiajun Zhang, Long Zhou, Yuchen Liu, Cheng-qing Zong
In this paper, we introduce a novel interactive approach to translate a source language into two different languages simultaneously and interactively.
no code implementations • 17 Apr 2019 • Yuchen Liu, Hao Xiong, Zhongjun He, Jiajun Zhang, Hua Wu, Haifeng Wang, Cheng-qing Zong
End-to-end speech translation (ST), which directly translates from source language speech into target language text, has attracted intensive attention in recent years.
no code implementations • 1 Nov 2018 • Long Zhou, Yuchen Liu, Jiajun Zhang, Cheng-qing Zong, Guoping Huang
Current Neural Machine Translation (NMT) employs a language-specific encoder to represent the source sentence and adopts a language-specific decoder to generate target translation.
no code implementations • 18 May 2018 • Quanshi Zhang, Yu Yang, Yuchen Liu, Ying Nian Wu, Song-Chun Zhu
Given feature maps of a certain conv-layer of the CNN, the explainer performs like an auto-encoder, which first disentangles the feature maps into object-part features and then inverts object-part features back to features of higher conv-layers of the CNN.
1 code implementation • 21 Nov 2017 • Chantat Eksombatchai, Pranav Jindal, Jerry Zitao Liu, Yuchen Liu, Rahul Sharma, Charles Sugnet, Mark Ulrich, Jure Leskovec
Furthermore, we develop a graph pruning strategy that leads to an additional 58% improvement in recommendations.
no code implementations • 12 Nov 2015 • Dmitry Kislyuk, Yuchen Liu, David Liu, Eric Tzeng, Yushi Jing
This paper presents Pinterest Related Pins, an item-to-item recommendation system that combines collaborative filtering with content-based ranking.