Search Results for author: Yuelin Bai

Found 11 papers, 6 papers with code

MAmmoTH-VL: Eliciting Multimodal Reasoning with Instruction Tuning at Scale

no code implementations · 6 Dec 2024 · Jarvis Guo, Tuney Zheng, Yuelin Bai, Bo Li, YuBo Wang, King Zhu, Yizhi Li, Graham Neubig, Wenhu Chen, Xiang Yue

To address these challenges, we introduce a scalable and cost-effective method to construct a large-scale multimodal instruction-tuning dataset with rich intermediate rationales designed to elicit CoT reasoning.

Multimodal Reasoning · Visual Question Answering

Teach Multimodal LLMs to Comprehend Electrocardiographic Images

no code implementations · 21 Oct 2024 · Ruoqi Liu, Yuelin Bai, Xiang Yue, Ping Zhang

However, the application of MLLMs to ECG image interpretation remains challenging due to the lack of instruction tuning datasets and well-established ECG image benchmarks for quantitative evaluation.

Image Comprehension

Can MLLMs Understand the Deep Implication Behind Chinese Images?

1 code implementation · 17 Oct 2024 · Chenhao Zhang, Xi Feng, Yuelin Bai, Xinrun Du, Jinchang Hou, Kaixin Deng, Guangzeng Han, Qinrui Li, Bingli Wang, Jiaheng Liu, Xingwei Qu, Yifei Zhang, Qixuan Zhao, Yiming Liang, Ziqiang Liu, Feiteng Fang, Min Yang, Wenhao Huang, Chenghua Lin, Ge Zhang, Shiwen Ni

To fill the gap, we introduce the **C**hinese **I**mage **I**mplication understanding **Bench**mark, **CII-Bench**, which aims to assess the higher-order perception and understanding capabilities of MLLMs for Chinese images.

Ruler: A Model-Agnostic Method to Control Generated Length for Large Language Models

1 code implementation · 27 Sep 2024 · Jiaming Li, Lei Zhang, Yunshui Li, Ziqiang Liu, Yuelin Bai, Run Luo, Longze Chen, Min Yang

Specifically, Ruler equips LLMs with the ability to generate responses of a specified length based on length constraints within the instructions.

Instruction Following

Enhancing Noise Robustness of Retrieval-Augmented Language Models with Adaptive Adversarial Training

1 code implementation · 31 May 2024 · Feiteng Fang, Yuelin Bai, Shiwen Ni, Min Yang, Xiaojun Chen, Ruifeng Xu

Prior RAG studies on robustness to retrieval noise often confine themselves to a limited set of noise types, deviating from real-world retrieval environments and limiting practical applicability.

Hallucination · Multi-Task Learning · +2

MoZIP: A Multilingual Benchmark to Evaluate Large Language Models in Intellectual Property

1 code implementation · 26 Feb 2024 · Shiwen Ni, Minghuan Tan, Yuelin Bai, Fuqiang Niu, Min Yang, BoWen Zhang, Ruifeng Xu, Xiaojun Chen, Chengming Li, Xiping Hu, Ye Li, Jianping Fan

In this paper, we contribute a new benchmark, the first Multilingual-oriented quiZ on Intellectual Property (MoZIP), for the evaluation of LLMs in the IP domain.

Language Modeling · Language Modelling · +3
