Search Results for author: Run Luo

Found 14 papers, 10 papers with code

PersonaMath: Enhancing Math Reasoning through Persona-Driven Data Augmentation

no code implementations • 2 Oct 2024 • Jing Luo, Run Luo, Longze Chen, Liang Zhu, Chang Ao, Jiaming Li, Yukun Chen, Xin Cheng, Wen Yang, Jiayuan Su, Chengming Li, Min Yang

To bridge this gap, we propose a data augmentation approach and introduce PersonaMathQA, a dataset derived from MATH and GSM8K, on which we train the PersonaMath models.

Data Augmentation · Diversity · +3
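The abstract gives only a high-level description of the persona-driven augmentation. As a loose illustration, the sketch below rewrites a seed problem from the perspective of several personas; the personas, the prompt template, and the `generate` callback are hypothetical stand-ins, not the paper's actual pipeline.

```python
# Minimal sketch of persona-driven math data augmentation (not the paper's
# actual pipeline). `generate` is a hypothetical LLM call; the personas and
# the prompt template are illustrative assumptions.
from typing import Callable

PERSONAS = ["a chess coach", "a marine biologist", "a street food vendor"]

PROMPT = (
    "Rewrite the following math problem from the perspective of {persona}, "
    "keeping the underlying quantities and the correct answer unchanged.\n\n"
    "Problem: {problem}"
)

def augment(problem: str, generate: Callable[[str], str]) -> list[str]:
    """Produce persona-flavoured variants of one seed problem."""
    return [generate(PROMPT.format(persona=p, problem=problem)) for p in PERSONAS]

if __name__ == "__main__":
    # Echo generator so the sketch runs without any model or API key.
    echo = lambda prompt: prompt.splitlines()[-1]
    seed = "Natalia sold 48 clips in April and half as many in May. How many in total?"
    for variant in augment(seed, echo):
        print(variant)
```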

Ruler: A Model-Agnostic Method to Control Generated Length for Large Language Models

1 code implementation • 27 Sep 2024 • Jiaming Li, Lei Zhang, Yunshui Li, Ziqiang Liu, Yuelin Bai, Run Luo, Longze Chen, Min Yang

Specifically, Ruler equips LLMs with the ability to generate responses of a specified length based on length constraints within the instructions.

Instruction Following
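As a rough illustration of length-constrained generation, the sketch below appends an explicit word-count target to an instruction and measures how far a response deviates from it. This is an assumed harness around a generic `generate` callback, not Ruler's actual mechanism.

```python
# Sketch of length-constrained prompting plus a compliance check. This is an
# illustrative harness, not Ruler's method; `generate` is a hypothetical LLM
# call supplied by the caller.
from typing import Callable

def with_length_constraint(instruction: str, target_words: int) -> str:
    """Append an explicit length target to the instruction."""
    return f"{instruction}\n\nAnswer in exactly {target_words} words."

def length_error(response: str, target_words: int) -> int:
    """Absolute deviation between the response length and the target."""
    return abs(len(response.split()) - target_words)

if __name__ == "__main__":
    # Stand-in generator so the sketch runs without a model.
    generate: Callable[[str], str] = (
        lambda p: "Large language models are neural networks trained on text."
    )
    prompt = with_length_constraint("Explain what an LLM is.", 10)
    reply = generate(prompt)
    print(f"deviation from target: {length_error(reply, 10)} words")
```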

MMEvol: Empowering Multimodal Large Language Models with Evol-Instruct

no code implementations • 9 Sep 2024 • Run Luo, Haonan Zhang, Longze Chen, Ting-En Lin, Xiong Liu, Yuchuan Wu, Min Yang, Minzheng Wang, Pengpeng Zeng, Lianli Gao, Heng Tao Shen, Yunshui Li, Xiaobo Xia, Fei Huang, Jingkuan Song, Yongbin Li

This framework iteratively improves data quality through a refined combination of fine-grained perception, cognitive reasoning, and interaction evolution, generating a more complex and diverse image-text instruction dataset that empowers MLLMs with enhanced capabilities.

Diversity · Visual Reasoning
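Below is a minimal sketch of an iterative instruction-evolution loop in the spirit of the three axes named above; the evolution operations and the `rewrite` callback are illustrative assumptions rather than MMEvol's implementation.

```python
# Minimal sketch of an iterative instruction-evolution loop. The operations
# and the `rewrite` call are illustrative assumptions, not MMEvol's code.
import random
from typing import Callable

OPERATIONS = [
    "Add a question about a fine-grained visual detail.",
    "Add a step that requires multi-step reasoning over the image.",
    "Turn the instruction into a multi-turn interaction.",
]

def evolve(sample: dict, rewrite: Callable[[str, str], str], rounds: int = 3) -> dict:
    """Repeatedly rewrite an image-text instruction with a random evolution op."""
    for _ in range(rounds):
        op = random.choice(OPERATIONS)
        sample = {**sample, "instruction": rewrite(sample["instruction"], op)}
    return sample

if __name__ == "__main__":
    rewrite = lambda instr, op: f"{instr} [{op}]"  # stand-in for a model call
    seed = {"image": "example.jpg", "instruction": "Describe the image."}
    print(evolve(seed, rewrite))
```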

TrackSSM: A General Motion Predictor by State-Space Model

1 code implementation • 31 Aug 2024 • Bin Hu, Run Luo, Zelin Liu, Cheng Wang, Wenyu Liu

Specifically, we propose Flow-SSM, a module that utilizes the position and motion information from historical trajectories to guide the temporal state transition of object bounding boxes.

Decoder · Mamba · +5
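For orientation only, the toy predictor below shows the "history of boxes in, next box out" interface that a motion predictor exposes, using a naive constant-velocity estimate; TrackSSM's Flow-SSM replaces this with a learned state-space model guided by the trajectory history.

```python
# Toy constant-velocity motion predictor over a box trajectory. It only
# illustrates the interface of a motion predictor; TrackSSM's Flow-SSM uses a
# learned state-space transition instead of this hand-written rule.
import numpy as np

def predict_next_box(history: np.ndarray) -> np.ndarray:
    """history: (T, 4) array of [cx, cy, w, h] boxes; returns the predicted next box."""
    if len(history) < 2:
        return history[-1]
    velocity = np.mean(np.diff(history, axis=0), axis=0)  # average per-step change
    return history[-1] + velocity

if __name__ == "__main__":
    traj = np.array([[10.0, 10.0, 5.0, 5.0],
                     [12.0, 11.0, 5.0, 5.0],
                     [14.0, 12.0, 5.0, 5.0]])
    print(predict_next_box(traj))  # roughly [16, 13, 5, 5]
```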

Autogenic Language Embedding for Coherent Point Tracking

1 code implementation • 30 Jul 2024 • Zikai Song, Ying Tang, Run Luo, Lintao Ma, Junqing Yu, Yi-Ping Phoebe Chen, Wei Yang

Point tracking is a challenging task in computer vision, aiming to establish point-wise correspondence across long video sequences.

Decoder · Point Tracking

Hierarchical Context Pruning: Optimizing Real-World Code Completion with Repository-Level Pretrained Code LLMs

1 code implementation • 26 Jun 2024 • Lei Zhang, Yunshui Li, Jiaming Li, Xiaobo Xia, Jiaxi Yang, Run Luo, Minzheng Wang, Longze Chen, Junhao Liu, Min Yang

We applied the HCP strategy in experiments with six Repo-Code LLMs, and the results demonstrate that our proposed method can significantly enhance completion accuracy while substantially reducing the input length.

Code Completion
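As a rough sketch of the general idea of pruning repository context under a token budget, the snippet below ranks files by a naive lexical-overlap heuristic and keeps the most relevant ones; it does not reproduce the paper's hierarchical pruning strategy.

```python
# Rough sketch of pruning repository context to fit a token budget, using a
# naive lexical-overlap relevance score. This illustrates the general idea of
# context pruning only, not the paper's hierarchical HCP strategy.
def relevance(snippet: str, query: str) -> int:
    """Count how many tokens from the query appear in the snippet."""
    return sum(1 for tok in set(query.split()) if tok in snippet)

def prune_context(files: dict[str, str], query: str, budget_tokens: int) -> str:
    """Keep the most relevant files until the (whitespace-token) budget is spent."""
    ranked = sorted(files.items(), key=lambda kv: relevance(kv[1], query), reverse=True)
    kept, used = [], 0
    for path, text in ranked:
        cost = len(text.split())
        if used + cost > budget_tokens:
            continue
        kept.append(f"# {path}\n{text}")
        used += cost
    return "\n\n".join(kept)

if __name__ == "__main__":
    repo = {
        "utils/math.py": "def add(a, b):\n    return a + b",
        "README.md": "This repository contains helpers.",
    }
    print(prune_context(repo, "complete the call to add", budget_tokens=20))
```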

Leave No Document Behind: Benchmarking Long-Context LLMs with Extended Multi-Doc QA

1 code implementation • 25 Jun 2024 • Minzheng Wang, Longze Chen, Cheng Fu, Shengyi Liao, Xinghua Zhang, Bingli Wu, Haiyang Yu, Nan Xu, Lei Zhang, Run Luo, Yunshui Li, Min Yang, Fei Huang, Yongbin Li

Long-context modeling capabilities have garnered widespread attention, leading to the emergence of Large Language Models (LLMs) with ultra-context windows.

Benchmarking · Long-Context Understanding · +2

Long Context is Not Long at All: A Prospector of Long-Dependency Data for Large Language Models

1 code implementation • 28 May 2024 • Longze Chen, Ziqiang Liu, Wanwei He, Yunshui Li, Run Luo, Min Yang

In this study, we propose ProLong, a data mining framework that assigns each training sample a long-dependency score, which can be used to rank and filter samples that are more advantageous for enhancing long-context modeling abilities in LLM training.

Computational Efficiency · Specificity
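A minimal sketch of the rank-and-filter step is given below, assuming a crude placeholder score that rewards repeated tokens appearing far apart; ProLong's actual long-dependency score is not reproduced here.

```python
# Minimal sketch of ranking and filtering training samples by a long-dependency
# score. The scoring function is a crude placeholder (it rewards repeated
# tokens that are far apart); ProLong's actual score is different.
def long_dependency_score(text: str) -> float:
    tokens = text.split()
    positions: dict[str, list[int]] = {}
    for i, tok in enumerate(tokens):
        positions.setdefault(tok, []).append(i)
    # Sum of the spans covered by repeated tokens, normalised by sample length.
    span = sum(p[-1] - p[0] for p in positions.values() if len(p) > 1)
    return span / max(len(tokens), 1)

def filter_samples(samples: list[str], keep_ratio: float = 0.5) -> list[str]:
    """Rank by descending score and keep the top fraction."""
    ranked = sorted(samples, key=long_dependency_score, reverse=True)
    return ranked[: max(1, int(len(ranked) * keep_ratio))]

if __name__ == "__main__":
    corpus = [
        "the cat sat on the mat and the cat slept",
        "completely unrelated short sentence",
    ]
    print(filter_samples(corpus))
```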

Marathon: A Race Through the Realm of Long Context with Large Language Models

1 code implementation • 15 Dec 2023 • Lei Zhang, Yunshui Li, Ziqiang Liu, Jiaxi Yang, Junhao Liu, Longze Chen, Run Luo, Min Yang

With the advancement of large language models (LLMs) and the expansion of their context windows, existing long-context benchmarks fall short in effectively evaluating the models' comprehension and reasoning abilities in extended texts.

Long-Context Understanding · Multiple-choice

VDialogUE: A Unified Evaluation Benchmark for Visually-grounded Dialogue

no code implementations • 14 Sep 2023 • Yunshui Li, Binyuan Hui, Zhaochao Yin, Wanwei He, Run Luo, Yuxing Long, Min Yang, Fei Huang, Yongbin Li

Visually-grounded dialog systems, which integrate multiple modes of communication such as text and visual inputs, have become an increasingly popular area of investigation.

DiffusionTrack: Diffusion Model For Multi-Object Tracking

1 code implementation • 19 Aug 2023 • Run Luo, Zikai Song, Lintao Ma, JinLin Wei, Wei Yang, Min Yang

During inference, the model refines a set of randomly generated paired boxes into detection and tracking results through a flexible one-step or multi-step denoising diffusion process.

Denoising · Multi-Object Tracking · +3
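The following toy code illustrates only the refinement interface: randomly initialised paired boxes are moved toward predicted targets over several steps. The learned denoiser of DiffusionTrack is replaced here by a hand-written update, so this is a sketch of the control flow, not the paper's model.

```python
# Toy illustration of iteratively refining randomly initialised paired boxes
# toward target boxes over several "denoising" steps. DiffusionTrack uses a
# learned denoiser; here the refinement step is hand-written.
import numpy as np

def refine(boxes: np.ndarray, predict_targets, steps: int = 4) -> np.ndarray:
    """boxes: (N, 2, 4) paired boxes; predict_targets maps boxes -> target boxes."""
    for _ in range(steps):
        targets = predict_targets(boxes)
        boxes = boxes + 0.5 * (targets - boxes)  # move part-way toward the prediction
    return boxes

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    noisy = rng.uniform(0, 100, size=(3, 2, 4))  # random paired boxes
    gt = np.tile([[10, 10, 30, 30], [12, 11, 32, 31]], (3, 1, 1)).astype(float)
    print(refine(noisy, lambda b: gt)[0])  # moves toward the target pair
```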

Compact Transformer Tracker with Correlative Masked Modeling

1 code implementation • 26 Jan 2023 • Zikai Song, Run Luo, Junqing Yu, Yi-Ping Phoebe Chen, Wei Yang

The Transformer framework has shown superior performance in visual object tracking, owing to its strength in aggregating information across the template and search image via the well-known attention mechanism.

Decoder · Visual Object Tracking

VariabilityTrack: Multi-Object Tracking with Variable Speed Object Movement

no code implementations • 12 Mar 2022 • Run Luo, JinLin Wei, Qiao Lin

Multi-object tracking (MOT) aims at estimating bounding boxes and identities of objects in videos.

Multi-Object Tracking · Object
