no code implementations • 24 Mar 2025 • Biwen Meng, Xi Long, Wanrong Yang, Ruochen Liu, Yi Tian, Yalin Zheng, Jingxin Liu
Deep learning has made significant progress in addressing challenges in various fields including computational pathology (CPath).
1 code implementation • 20 Mar 2025 • Jiale Wei, Shuchi Wu, Ruochen Liu, Xiang Ying, Jingbo Shang, Fangbo Tao
Memory, i.e., information beyond what large language models (LLMs) acquire during training, is crucial to various real-world applications, such as personal assistants.
no code implementations • 24 Feb 2025 • Ruochen Liu, Hao Chen, Yuanchen Bei, Zheyu Zhou, Lijia Chen, Qijie Shen, Feiran Huang, Fakhri Karray, Senzhang Wang
Specifically, we present FilterLLM, a framework that extends the next-word prediction capabilities of LLMs to billion-scale filtering tasks.
1 code implementation • 11 Oct 2024 • Hao Yan, Chaozhuo Li, Zhigang Yu, Jun Yin, Ruochen Liu, Peiyan Zhang, Weihao Han, Mingzheng Li, Zhengxin Zeng, Hao Sun, Weiwei Deng, Feng Sun, Qi Zhang, Senzhang Wang
However, the absence of meaningful benchmark datasets and standardized evaluation procedures for MAG representation learning has impeded progress in this field.
no code implementations • 7 Oct 2024 • Chen Zhang, Huan Hu, Yuan Zhou, Qiyang Cao, Ruochen Liu, Wenya Wei, Elvis S. Liu
To address the challenges of navigation and combat in modern 3D FPS games, we introduce a method that combines navigation meshes (Navmesh) and shooting rules with deep reinforcement learning (NSRL).
no code implementations • 3 Oct 2024 • Stefan Juang, Hugh Cao, Arielle Zhou, Ruochen Liu, Nevin L. Zhang, Elvis Liu
This paper introduces Comparative Advantage Maximization (CAM), a method designed to enhance individual agent specialization in multi-agent systems.
no code implementations • 14 Sep 2024 • Jun Yin, Zhengxin Zeng, Mingzheng Li, Hao Yan, Chaozhuo Li, Weihao Han, Jianjin Zhang, Ruochen Liu, Allen Sun, Denvy Deng, Feng Sun, Qi Zhang, Shirui Pan, Senzhang Wang
Owing to their unprecedented capabilities in semantic understanding and logical reasoning, pre-trained large language models (LLMs) have shown great potential for developing the next generation of recommender systems (RSs).
no code implementations • 26 Oct 2023 • Benjamin Yan, Ruochen Liu, David E. Kuo, Subathra Adithan, Eduardo Pontes Reis, Stephen Kwak, Vasantha Kumar Venugopal, Chloe P. O'Connell, Agustina Saenz, Pranav Rajpurkar, Michael Moor
First, we extract the content from an image; then, we verbalize the extracted content into a report that matches the style of a specific radiologist.
no code implementations • 5 Sep 2023 • Shunyang Zhang, Senzhang Wang, Xianzhen Tan, Ruochen Liu, Jian Zhang, Jianxin Wang
Spatial time series imputation is critically important to many real-world applications, such as intelligent transportation and air quality monitoring.
no code implementations • 24 Aug 2023 • Haoyuan Lv, Ruochen Liu
A task transfer strategy is then established to select seeds from source tasks and correct unsuitable knowledge in those seeds to suppress negative transfer.
1 code implementation • NeurIPS 2022 • Zelun Luo, Zane Durante, Linden Li, Wanze Xie, Ruochen Liu, Emily Jin, Zhuoyi Huang, Lun Yu Li, Jiajun Wu, Juan Carlos Niebles, Ehsan Adeli, Fei-Fei Li
Video-language models (VLMs), large models pre-trained on numerous but noisy video-text pairs from the internet, have revolutionized activity recognition through their remarkable generalization and open-vocabulary capabilities.
Ranked #2 on Few Shot Action Recognition on MOMA-LRG (using extra training data)
no code implementations • 18 Sep 2022 • Zheming Tu, Changhao Chen, Xianfei Pan, Ruochen Liu, Jiarui Cui, Jun Mao
Accurate and robust localization is a fundamental need for mobile agents.