Search Results for author: Yuxiang Nie

Found 9 papers, 6 papers with code

Foundation Model for Advancing Healthcare: Challenges, Opportunities, and Future Directions

1 code implementation • 4 Apr 2024 • Yuting He, Fuxiang Huang, Xinrui Jiang, Yuxiang Nie, Minghao Wang, Jiguang Wang, Hao Chen

To answer these questions, this paper presents a comprehensive and in-depth survey of the challenges, opportunities, and future directions of healthcare foundation models (HFMs).

Elysium: Exploring Object-level Perception in Videos via MLLM

1 code implementation • 25 Mar 2024 • Han Wang, Yanjie Wang, YongJie Ye, Yuxiang Nie, Can Huang

Multi-modal Large Language Models (MLLMs) have demonstrated their ability to perceive objects in still images, but their application in video-related tasks, such as object tracking, remains understudied.

Object, Referring Expression, +4

Multi-task Learning for Low-resource Second Language Acquisition Modeling

1 code implementation • 25 Aug 2019 • Yong Hu, He-Yan Huang, Tian Lan, Xiaochi Wei, Yuxiang Nie, Jiarui Qi, Liner Yang, Xian-Ling Mao

Second language acquisition (SLA) modeling aims to predict whether second-language learners can correctly answer questions based on what they have learned.

Language Acquisition, Multi-Task Learning

Unsupervised Question Answering via Answer Diversifying

1 code implementation • COLING 2022 • Yuxiang Nie, Heyan Huang, Zewen Chi, Xian-Ling Mao

Previous works usually make use of heuristic rules as well as pre-trained models to construct data and train QA models.

Data Augmentation, Denoising, +4

AttenWalker: Unsupervised Long-Document Question Answering via Attention-based Graph Walking

1 code implementation • 3 May 2023 • Yuxiang Nie, Heyan Huang, Wei Wei, Xian-Ling Mao

To alleviate the problem, it might be possible to generate long-document QA pairs via unsupervised question answering (UQA) methods.

Few-Shot Learning, Question Answering

SciMRC: Multi-perspective Scientific Machine Reading Comprehension

no code implementations • 25 Jun 2023 • Xiao Zhang, Heqi Zheng, Yuxiang Nie, Heyan Huang, Xian-Ling Mao

However, the dataset ignores the fact that readers may understand a text at different levels, and it includes only single-perspective question-answer pairs, overlooking other perspectives.

Machine Reading Comprehension

Mix-Initiative Response Generation with Dynamic Prefix Tuning

no code implementations • 26 Mar 2024 • Yuxiang Nie, Heyan Huang, Xian-Ling Mao, Lizi Liao

Specifically, IDPT decouples initiative factors into separate prefix parameters and uses an attention mechanism to dynamically select which initiatives guide generation (a rough illustrative sketch follows this entry).

Response Generation
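
The snippet above only gestures at the architecture, so the following is a minimal, hypothetical PyTorch sketch of attention-weighted mixing of per-initiative prefixes; the class and parameter names (IDPTPrefixMixer, num_initiatives, prefix_len) are illustrative assumptions, not the paper's released implementation.

import torch
import torch.nn as nn

class IDPTPrefixMixer(nn.Module):
    """Keeps one learnable prefix per initiative and attends over them (illustrative sketch)."""

    def __init__(self, num_initiatives=2, prefix_len=8, hidden_size=768):
        super().__init__()
        # One learnable prefix of shape (prefix_len, hidden_size) per initiative factor.
        self.prefixes = nn.Parameter(
            torch.randn(num_initiatives, prefix_len, hidden_size) * 0.02
        )
        # Projects a pooled dialogue representation into an attention query over initiatives.
        self.query_proj = nn.Linear(hidden_size, hidden_size)

    def forward(self, context_repr):
        # context_repr: (batch, hidden_size) pooled encoding of the dialogue context.
        query = self.query_proj(context_repr)                        # (B, H)
        keys = self.prefixes.mean(dim=1)                             # (N, H), one key per initiative
        scores = query @ keys.t() / keys.size(-1) ** 0.5             # (B, N) scaled dot-product scores
        weights = scores.softmax(dim=-1)                             # (B, N) attention over initiatives
        mixed = torch.einsum("bn,nlh->blh", weights, self.prefixes)  # (B, prefix_len, H) mixed prefix
        return mixed, weights

# Toy usage: the mixed prefix would be prepended to the decoder inputs before generation.
mixer = IDPTPrefixMixer()
context = torch.randn(4, 768)          # stand-in for pooled dialogue states
prefix, init_weights = mixer(context)  # prefix: (4, 8, 768); init_weights: (4, 2)

In this sketch, the mixed prefix conditions generation on the attention-selected blend of initiatives, which is one plausible reading of "dynamic prefix tuning" as described in the snippet.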
