1 code implementation • EMNLP 2021 • Haoran Li, Song Xu, Peng Yuan, Yujia Wang, Youzheng Wu, Xiaodong He, BoWen Zhou
It thereby takes advantage of prior copying distributions and, at each time step, explicitly encourages the model to copy the input word that is relevant to the previously copied one.
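The idea above can be sketched minimally: bias the copy distribution toward the source position that follows the previously copied token. This is an illustrative toy (the function name, the additive `locality_bonus`, and the "next position" heuristic are assumptions for exposition, not the paper's actual mechanism, which learns the correlation).

```python
import math

def copy_distribution(scores, prev_copied_pos, locality_bonus=1.0):
    """Softmax over per-position copy scores, with a bonus for the
    source position right after the previously copied token."""
    biased = list(scores)
    if prev_copied_pos is not None and prev_copied_pos + 1 < len(scores):
        biased[prev_copied_pos + 1] += locality_bonus
    exps = [math.exp(s) for s in biased]
    z = sum(exps)
    return [e / z for e in exps]
```

With equal raw scores and the last copy at position 0, the distribution shifts mass onto position 1, mimicking the "copy the word related to the previous one" behavior.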
Ranked #11 on Abstractive Text Summarization on CNN / Daily Mail (using extra training data)
no code implementations • 18 May 2025 • Xin Yu, Yujia Wang, Jinghui Chen, Lingzhou Xue
However, it often suffers from sub-optimal performance compared with full fine-tuning since the update is constrained in the low-rank space.
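The low-rank constraint referred to here is the standard LoRA-style parameterization: the weight update is factored as a product of two thin matrices, so it can never exceed rank r. A minimal NumPy sketch (dimensions and initialization are illustrative):

```python
import numpy as np

d, k, r = 512, 512, 8                    # r << min(d, k)
rng = np.random.default_rng(0)
W = rng.standard_normal((d, k))          # frozen pretrained weight
A = rng.standard_normal((r, k)) * 0.01   # trainable, r x k
B = np.zeros((d, r))                     # trainable, d x r, zero-initialized

delta_W = B @ A                          # update has rank <= r by construction
W_adapted = W + delta_W

full_params = d * k                      # parameters in a full update
lora_params = d * r + r * k              # parameters actually trained
```

The trainable parameter count drops from d*k to r*(d+k), which is the source of LoRA's efficiency and also of the sub-optimality discussed above: full fine-tuning can realize updates outside this rank-r subspace.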
no code implementations • CVPR 2025 • Gensheng Pei, Tao Chen, Yujia Wang, Xinhao Cai, Xiangbo Shu, Tianfei Zhou, Yazhou Yao
The CLIP model has demonstrated significant advancements in aligning visual and language modalities through large-scale pre-training on image-text pairs, enabling strong zero-shot classification and retrieval capabilities on various domains.
no code implementations • CVPR 2025 • Bo Zhou, Liulei Li, Yujia Wang, Huafeng Liu, Yazhou Yao, Wenguan Wang
We present UNIALIGN, a unified model to align an arbitrary number of modalities (e.g., image, text, audio, 3D point cloud, etc.)
no code implementations • 28 Oct 2024 • Xiao Liu, Bo Qin, Dongzhu Liang, Guang Dong, Hanyu Lai, Hanchen Zhang, Hanlin Zhao, Iat Long Iong, Jiadai Sun, Jiaqi Wang, Junjie Gao, Junjun Shan, Kangning Liu, Shudan Zhang, Shuntian Yao, Siyi Cheng, Wentao Yao, Wenyi Zhao, Xinghan Liu, Xinyi Liu, Xinying Chen, Xinyue Yang, Yang Yang, Yifan Xu, Yu Yang, Yujia Wang, Yulin Xu, Zehan Qi, Yuxiao Dong, Jie Tang
This limitation underscores the importance of developing foundation agents capable of learning through autonomous environmental interactions by reinforcing existing models.
1 code implementation • 25 Jul 2024 • Yujia Wang, Shiqiang Wang, Songtao Lu, Jinghui Chen
Federated learning (FL) has emerged as a widely adopted training paradigm for privacy-preserving machine learning.
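The training paradigm mentioned here is typically instantiated with FedAvg-style aggregation: the server averages client models weighted by local dataset sizes. This is a generic baseline sketch, not the specific algorithm proposed in this paper.

```python
def fedavg(client_weights, client_sizes):
    """Server-side aggregation: weighted average of client parameter
    vectors, weighted by each client's local dataset size."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    avg = [0.0] * dim
    for w, n in zip(client_weights, client_sizes):
        for i in range(dim):
            avg[i] += (n / total) * w[i]
    return avg
```

Privacy preservation comes from exchanging only model parameters, never raw data; each round, clients train locally and the server aggregates as above.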
no code implementations • 17 Jun 2024 • Xueying Du, Geng Zheng, Kaixin Wang, Yi Zou, Yujia Wang, Wentai Deng, Jiayi Feng, Mingwei Liu, Bihuan Chen, Xin Peng, Tao Ma, Yiling Lou
Although LLMs have shown promising potential in vulnerability detection, this study reveals their limitations in distinguishing between vulnerable and similar-but-benign patched code (only 0.06-0.14 accuracy).
no code implementations • 30 May 2024 • Yurui Chang, Bochuan Cao, Yujia Wang, Jinghui Chen, Lu Lin
In this study, we introduce a counterfactual explanation framework based on joint prompt attribution, XPrompt, which aims to explain how a few prompt texts collaboratively influence the LLM's complete generation.
no code implementations • 29 Mar 2024 • Yongqi Tong, Dawei Li, Sizhe Wang, Yujia Wang, Fei Teng, Jingbo Shang
We conduct a series of experiments to demonstrate that LLMs can benefit from mistakes in both directions.
no code implementations • 22 Dec 2023 • Tiejin Chen, Yuanpu Cao, Yujia Wang, Cho-Jui Hsieh, Jinghui Chen
Specifically, FedPTR allows local clients or the server to optimize an auxiliary (synthetic) dataset that mimics the learning dynamics of the recent model update and utilizes it to project the next-step model trajectory for local training regularization.
no code implementations • 26 Oct 2023 • Zi Lin, Zihan Wang, Yongqi Tong, Yangkun Wang, Yuxin Guo, Yujia Wang, Jingbo Shang
This benchmark contains the rich, nuanced phenomena that can be tricky for current toxicity detection models to identify, revealing a significant domain difference compared to social media content.
2 code implementations • 5 May 2022 • Yujia Wang, Lu Lin, Jinghui Chen
We show that in the nonconvex stochastic optimization setting, our proposed FedCAMS achieves the same convergence rate of $O(\frac{1}{\sqrt{TKm}})$ as its non-compressed counterparts.
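A common ingredient in communication-compressed federated optimizers of this kind is top-k sparsification with error feedback: only the k largest-magnitude coordinates are transmitted, and the dropped mass is accumulated locally and re-injected next round. The sketch below is a generic illustration of that ingredient, not FedCAMS's exact compressor.

```python
import numpy as np

def topk_with_error_feedback(grad, error, k):
    """Keep the k largest-magnitude entries of grad + accumulated error;
    return the sparse message and the new residual error."""
    corrected = grad + error
    idx = np.argsort(np.abs(corrected))[-k:]   # indices of top-k magnitudes
    compressed = np.zeros_like(corrected)
    compressed[idx] = corrected[idx]
    new_error = corrected - compressed          # mass carried to next round
    return compressed, new_error
```

Error feedback is what lets such methods match the $O(\frac{1}{\sqrt{TKm}})$ rate of their non-compressed counterparts: no gradient information is permanently discarded, only delayed.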
no code implementations • 1 Nov 2021 • Yujia Wang, Lu Lin, Jinghui Chen
We prove that the proposed communication-efficient distributed adaptive gradient method converges to the first-order stationary point with the same iteration complexity as uncompressed vanilla AMSGrad in the stochastic nonconvex optimization setting.
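For reference, one step of the uncompressed AMSGrad baseline mentioned above looks as follows; hyperparameter names follow the standard Adam/AMSGrad formulation, and the distributed/compression machinery of the paper is omitted.

```python
import numpy as np

def amsgrad_step(w, g, m, v, v_hat, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One AMSGrad update on parameters w given gradient g."""
    m = b1 * m + (1 - b1) * g                # first-moment estimate
    v = b2 * v + (1 - b2) * g * g            # second-moment estimate
    v_hat = np.maximum(v_hat, v)             # monotone max: the key
                                             # difference from Adam
    w = w - lr * m / (np.sqrt(v_hat) + eps)
    return w, m, v, v_hat
```

The element-wise max keeps the effective step size non-increasing, which is central to AMSGrad's convergence guarantee in the nonconvex setting.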
1 code implementation • Findings (EMNLP) 2021 • Song Xu, Haoran Li, Peng Yuan, Yujia Wang, Youzheng Wu, Xiaodong He, Ying Liu, BoWen Zhou
K-PLUG achieves new state-of-the-art results on a suite of domain-specific NLP tasks, including product knowledge base completion, abstractive product summarization, and multi-turn dialogue, and significantly outperforms baselines across the board, demonstrating that the proposed method effectively learns a diverse set of domain-specific knowledge for both language understanding and generation tasks.
1 code implementation • 10 Feb 2021 • Xiangyu Zhao, Peng Zhang, Fan Song, Guangda Fan, Yangyang Sun, Yujia Wang, Zheyuan Tian, Luqi Zhang, Guanglei Zhang
In this paper, we propose a dilated dual attention U-Net (D2A U-Net) for COVID-19 lesion segmentation in CT slices, based on dilated convolution and a novel dual attention mechanism, to address the issues above.
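The dilation idea can be illustrated in one dimension: inserting gaps between kernel taps widens the receptive field without adding parameters. This toy sketch is illustrative only; the paper applies 2D dilated convolutions inside a U-Net.

```python
def dilated_conv1d(x, kernel, dilation=1):
    """Valid 1D convolution where kernel taps are spaced `dilation`
    apart; effective receptive field = (len(kernel)-1)*dilation + 1."""
    span = (len(kernel) - 1) * dilation + 1
    out = []
    for i in range(len(x) - span + 1):
        out.append(sum(kernel[j] * x[i + j * dilation]
                       for j in range(len(kernel))))
    return out
```

A 3-tap kernel with dilation 2 covers 5 input positions, capturing wider lesion context at no extra parameter cost.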
1 code implementation • 1 Jan 2021 • Song Xu, Haoran Li, Peng Yuan, Yujia Wang, Youzheng Wu, Xiaodong He, Ying Liu, BoWen Zhou
K-PLUG achieves new state-of-the-art results on a suite of domain-specific NLP tasks, including product knowledge base completion, abstractive product summarization, and multi-turn dialogue, and significantly outperforms baselines across the board, demonstrating that the proposed method effectively learns a diverse set of domain-specific knowledge for both language understanding and generation tasks.
no code implementations • MIDL 2019 • Yichi Zhang, Lin Yuan, Yujia Wang, Jicong Zhang
Accurate segmentation of spine Magnetic Resonance Imaging (MRI) is in high demand in morphological research, quantitative analysis, and disease identification, such as spinal canal stenosis, disc herniation, and degeneration.
no code implementations • 26 Nov 2018 • Qihao Liu, Yujia Wang, Xiaofeng Liu
To balance exploration and exploitation, the Novelty Search (NS) is employed in every chief agent to encourage policies with high novelty while maximizing per-episode performance.
no code implementations • 27 Sep 2018 • Yining Lang, Wei Liang, Yujia Wang, Lap-Fai Yu
In this paper, we propose a novel approach to synthesize 3D faces based on personality impression for creating virtual characters.