Search Results for author: Wenlin Zhang

Found 7 papers, 1 paper with code

Depth-induced Saliency Comparison Network for Diagnosis of Alzheimer's Disease via Jointly Analysis of Visual Stimuli and Eye Movements

no code implementations • 15 Mar 2024 • Yu Liu, Wenlin Zhang, Shaochu Wang, Fangyu Zuo, Peiguang Jing, Yong Ji

Early diagnosis of Alzheimer's Disease (AD) is crucial for timely medical treatment, and eye movements under special visual stimuli may serve as a potential non-invasive biomarker for detecting cognitive abnormalities in AD patients.

Knowledge Unlearning for LLMs: Tasks, Methods, and Challenges

no code implementations • 27 Nov 2023 • Nianwen Si, Hao Zhang, Heyu Chang, Wenlin Zhang, Dan Qu, WeiQiang Zhang

We further present the evaluation datasets used in existing methods, and conclude the survey by outlining ongoing challenges and future directions.

Tasks: In-Context Learning, Machine Unlearning, +1 more

Tuning Large language model for End-to-end Speech Translation

no code implementations • 3 Oct 2023 • Hao Zhang, Nianwen Si, Yaqi Chen, Wenlin Zhang, Xukui Yang, Dan Qu, Xiaolin Jiao

The training of LST consists of two stages: (1) modality adjustment, where the adapter is tuned to align speech representations with the text embedding space, and (2) downstream-task fine-tuning, where both the adapter and the LLM are trained to optimize performance on the E2EST task (a minimal sketch of this two-stage scheme follows this entry).

Tasks: Language Modelling, Large Language Model, +2 more
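
A minimal sketch of the two-stage scheme described in this entry, assuming a PyTorch-style adapter; module names and dimensions are illustrative assumptions, not the paper's actual LST implementation:

    # Stage 1 trains only the adapter (LLM frozen); stage 2 trains both.
    import torch
    import torch.nn as nn

    class SpeechAdapter(nn.Module):
        """Projects speech encoder outputs into the LLM's text embedding space."""
        def __init__(self, speech_dim: int = 1024, llm_dim: int = 4096):
            super().__init__()
            self.proj = nn.Sequential(
                nn.Linear(speech_dim, llm_dim),
                nn.GELU(),
                nn.Linear(llm_dim, llm_dim),
            )

        def forward(self, speech_feats: torch.Tensor) -> torch.Tensor:
            # (batch, frames, speech_dim) -> (batch, frames, llm_dim)
            return self.proj(speech_feats)

    def set_trainable(module: nn.Module, trainable: bool) -> None:
        for p in module.parameters():
            p.requires_grad = trainable

    adapter = SpeechAdapter()
    # llm = ...                    # a pretrained causal LM (hypothetical)
    # set_trainable(llm, False)    # stage 1: modality adjustment, adapter only
    # set_trainable(llm, True)     # stage 2: downstream fine-tuning, adapter + LLM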

RobuT: A Systematic Study of Table QA Robustness Against Human-Annotated Adversarial Perturbations

1 code implementation • 25 Jun 2023 • Yilun Zhao, Chen Zhao, Linyong Nan, Zhenting Qi, Wenlin Zhang, Xiangru Tang, Boyu Mi, Dragomir Radev

Despite significant progress in question answering on tabular data (Table QA), it is unclear whether, and to what extent, existing Table QA models are robust to task-specific perturbations, e.g., replacing key question entities or shuffling table columns (both perturbation types are sketched after this entry).

Tasks: Few-Shot Learning, Question Answering
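
The two perturbation types named in this entry can be illustrated with a toy sketch; this is not RobuT's actual generation pipeline, which relies on human annotation, and the helper names are hypothetical:

    import random
    import pandas as pd

    def shuffle_columns(table: pd.DataFrame, seed: int = 0) -> pd.DataFrame:
        """Return a copy of the table with its column order permuted."""
        rng = random.Random(seed)
        cols = list(table.columns)
        rng.shuffle(cols)
        return table[cols]

    def replace_entity(question: str, old: str, new: str) -> str:
        """Swap a key entity in the question (naive string replacement)."""
        return question.replace(old, new)

    table = pd.DataFrame({"Country": ["France", "Japan"],
                          "Capital": ["Paris", "Tokyo"]})
    print(shuffle_columns(table))
    print(replace_entity("What is the capital of France?", "France", "Japan"))

A robust Table QA model should return the same answer before and after the column shuffle, and a correspondingly updated answer after the entity swap.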

Improving Speech Translation by Cross-Modal Multi-Grained Contrastive Learning

no code implementations • 20 Apr 2023 • Hao Zhang, Nianwen Si, Yaqi Chen, Wenlin Zhang, Xukui Yang, Dan Qu, Wei-Qiang Zhang

However, the final model often performs worse on the MT task than an MT model trained alone, which means the knowledge transfer ability of this method is limited (a generic contrastive-loss sketch follows this entry).

Tasks: Contrastive Learning, Machine Translation, +3 more
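
As a rough illustration of the cross-modal contrastive idea in the title, here is a generic InfoNCE-style loss between paired speech and text embeddings; the paper's multi-grained formulation differs in detail, and all names here are assumptions:

    import torch
    import torch.nn.functional as F

    def cross_modal_info_nce(speech: torch.Tensor,
                             text: torch.Tensor,
                             temperature: float = 0.07) -> torch.Tensor:
        """speech, text: (batch, dim) embeddings of paired utterances/sentences."""
        speech = F.normalize(speech, dim=-1)
        text = F.normalize(text, dim=-1)
        logits = speech @ text.t() / temperature   # (batch, batch) similarity matrix
        targets = torch.arange(speech.size(0))     # matched pairs lie on the diagonal
        # Symmetric loss: speech-to-text and text-to-speech directions.
        return (F.cross_entropy(logits, targets)
                + F.cross_entropy(logits.t(), targets)) / 2

    loss = cross_modal_info_nce(torch.randn(8, 256), torch.randn(8, 256))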

Decouple Non-parametric Knowledge Distillation For End-to-end Speech Translation

no code implementations • 20 Apr 2023 • Hao Zhang, Nianwen Si, Yaqi Chen, Wenlin Zhang, Xukui Yang, Dan Qu, Zhen Li

Existing techniques often attempt to transfer knowledge from a powerful machine translation (MT) model to a speech translation (ST) model via elaborate schemes, which often require the transcription as extra input during training (a generic distillation sketch follows this entry).

Tasks: Knowledge Distillation, Machine Translation, +3 more
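
For context, a standard token-level distillation loss from an MT teacher (which consumes the transcription) to an ST student looks roughly like the sketch below; this is the generic baseline the entry alludes to, not the paper's decoupled non-parametric method:

    import torch
    import torch.nn.functional as F

    def kd_loss(student_logits: torch.Tensor,
                teacher_logits: torch.Tensor,
                temperature: float = 2.0) -> torch.Tensor:
        """Both logits: (batch, seq_len, vocab); the teacher is the MT model."""
        t = temperature
        log_p_student = F.log_softmax(student_logits / t, dim=-1)
        p_teacher = F.softmax(teacher_logits / t, dim=-1)
        # KL(teacher || student), scaled by t^2 as in standard distillation.
        return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * (t * t)

    loss = kd_loss(torch.randn(4, 10, 1000), torch.randn(4, 10, 1000))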
