2 code implementations • 16 Jul 2024 • Haodong Duan, Junming Yang, Yuxuan Qiao, Xinyu Fang, Lin Chen, YuAn Liu, Amit Agarwal, Zhe Chen, Mo Li, Yubo Ma, Hailong Sun, Xiangyu Zhao, Junbo Cui, Xiaoyi Dong, Yuhang Zang, Pan Zhang, Jiaqi Wang, Dahua Lin, Kai Chen
Based on the evaluation results obtained with the toolkit, we host the OpenVLM Leaderboard, a comprehensive leaderboard that tracks the progress of multi-modality learning research.
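The listing gives only the abstract, so as a rough illustration, here is a minimal sketch of the generate-then-score loop such an evaluation toolkit automates. This is not VLMEvalKit's actual API; `Record`, `DummyVLM`, and `evaluate` are hypothetical names.

```python
# Hypothetical sketch of a VLM benchmark loop: run a model over
# (image, question, answer) records, then score by exact match.
from dataclasses import dataclass

@dataclass
class Record:
    image_path: str   # path to the benchmark image
    question: str     # VQA-style prompt
    answer: str       # gold answer used for scoring

class DummyVLM:
    """Stand-in for a real vision-language model."""
    def generate(self, image_path: str, question: str) -> str:
        return "yes"  # a real model would condition on the image

def evaluate(model, records):
    # Exact-match accuracy; real toolkits also support LLM-based judging.
    correct = sum(
        model.generate(r.image_path, r.question).strip().lower()
        == r.answer.strip().lower()
        for r in records
    )
    return correct / max(len(records), 1)

if __name__ == "__main__":
    bench = [Record("img0.png", "Is there a cat?", "yes"),
             Record("img1.png", "Is there a dog?", "no")]
    print(f"accuracy: {evaluate(DummyVLM(), bench):.2f}")
```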
1 code implementation • 20 Jun 2024 • Yuxuan Qiao, Haodong Duan, Xinyu Fang, Junming Yang, Lin Chen, Songyang Zhang, Jiaqi Wang, Dahua Lin, Kai Chen
Vision Language Models (VLMs) demonstrate remarkable proficiency in addressing a wide array of visual questions, a task that demands strong perception and reasoning faculties.
1 code implementation • 5 Jan 2024 • Daoan Zhang, Junming Yang, Hanjia Lyu, Zijian Jin, Yuan YAO, Mingkai Chen, Jiebo Luo
When exploring the development of Artificial General Intelligence (AGI), a critical task for these models involves interpreting and processing information from multiple image inputs.
Ranked #4 on Visual Reasoning on Winoground
1 code implementation • 5 Sep 2023 • Junming Yang, Xingguo Chen, Shengyuan Wang, Bolei Zhang
Model-based offline reinforcement learning (RL), which builds a supervised transition model from a logged dataset to avoid costly interactions with the online environment, has been a promising approach for offline policy optimization.
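The snippet names the approach but not its mechanics. Below is a hedged sketch of one common instantiation of the idea, not necessarily this paper's method: fit an ensemble of supervised transition models on logged data, then penalize imagined rewards by ensemble disagreement (a MOPO-style pessimism term). The linear models and all variable names are illustrative.

```python
# Illustrative model-based offline RL core: supervised transition
# models fit on a logged dataset, with an uncertainty penalty.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic logged transitions: state (4-d), action (2-d), next state.
S = rng.normal(size=(1000, 4))
A = rng.normal(size=(1000, 2))
S_next = S + 0.1 * A @ rng.normal(size=(2, 4)) \
           + 0.01 * rng.normal(size=(1000, 4))

X = np.hstack([S, A])

# Ensemble of least-squares transition models, each on a bootstrap sample.
ensemble = []
for _ in range(5):
    idx = rng.integers(0, len(X), size=len(X))
    W, *_ = np.linalg.lstsq(X[idx], S_next[idx], rcond=None)
    ensemble.append(W)

def predict(s, a):
    x = np.concatenate([s, a])
    preds = np.stack([x @ W for W in ensemble])
    mean = preds.mean(axis=0)
    # Disagreement across the ensemble proxies model uncertainty.
    penalty = np.linalg.norm(preds.std(axis=0))
    return mean, penalty

def penalized_reward(true_reward, penalty, lam=1.0):
    # Pessimistic reward used when optimizing a policy inside the model.
    return true_reward - lam * penalty

s_hat, u = predict(S[0], A[0])
print("predicted next state:", s_hat.round(3))
print("pessimistic reward for r=1.0:", penalized_reward(1.0, u))
```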
1 code implementation • 24 May 2023 • Qi Wang, Junming Yang, Yunbo Wang, Xin Jin, Wenjun Zeng, Xiaokang Yang
Training offline RL models using visual inputs poses two significant challenges, i.e., overfitting in representation learning and overestimation bias in expected future rewards.
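As a hedged illustration of standard remedies for those two failure modes, and not necessarily what this paper proposes: random-shift image augmentation (DrQ-style) regularizes the visual encoder against overfitting, while a CQL-style conservative penalty pushes Q-values down on out-of-dataset actions to curb overestimation. `random_shift`, `TinyQ`, and `conservative_q_loss` are hypothetical names.

```python
# Sketch of two common fixes: image augmentation for representation
# overfitting, and a conservative Q penalty for overestimation.
import torch
import torch.nn.functional as F

def random_shift(imgs, pad=4):
    # DrQ-style augmentation: replicate-pad, then randomly crop back.
    n, c, h, w = imgs.shape
    padded = F.pad(imgs, (pad, pad, pad, pad), mode="replicate")
    top = torch.randint(0, 2 * pad + 1, (1,)).item()
    left = torch.randint(0, 2 * pad + 1, (1,)).item()
    return padded[:, :, top:top + h, left:left + w]

class TinyQ(torch.nn.Module):
    def __init__(self, obs_dim=32, act_dim=2):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(obs_dim + act_dim, 64), torch.nn.ReLU(),
            torch.nn.Linear(64, 1))
    def forward(self, obs, act):
        return self.net(torch.cat([obs, act], dim=-1)).squeeze(-1)

def conservative_q_loss(q_net, obs, actions, td_target, alpha=1.0, n_rand=8):
    # Standard TD regression on logged actions ...
    q_data = q_net(obs, actions)
    td_loss = F.mse_loss(q_data, td_target)
    # ... plus a penalty keeping Q low on random (off-dataset) actions.
    rand_a = torch.rand(n_rand * len(actions), actions.shape[-1]) * 2 - 1
    rep_obs = obs.repeat_interleave(n_rand, dim=0)
    penalty = q_net(rep_obs, rand_a).mean() - q_data.mean()
    return td_loss + alpha * penalty

if __name__ == "__main__":
    aug = random_shift(torch.randn(8, 3, 84, 84))  # same shape, shifted
    q = TinyQ()
    obs = torch.randn(8, 32)          # stand-in for encoder features
    acts = torch.rand(8, 2) * 2 - 1
    loss = conservative_q_loss(q, obs, acts, torch.randn(8))
    print(aug.shape, float(loss))
```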
no code implementations • 6 Nov 2019 • Junming Yang, Yaoqi Li, Xuanyu Chen, Jiahang Cao, Kangkang Jiang
Training a practical and effective model for stock selection has long been a problem of great interest in the field of artificial intelligence.