no code implementations • 17 Oct 2024 • Xu Han, Yuancheng Sun, Kai Chen, Kang Liu, Qiwei Ye
Coarse-grained (CG) molecular dynamics simulations offer computational efficiency for exploring protein conformational ensembles and thermodynamic properties.
no code implementations • 14 Jul 2024 • Yuyan Ni, Shikun Feng, Xin Hong, Yuancheng Sun, Wei-Ying Ma, Zhi-Ming Ma, Qiwei Ye, Yanyan Lan
Deep learning methods have been considered promising for accelerating molecular screening in drug discovery and material design.
1 code implementation • 26 May 2024 • Peitian Zhang, Zheng Liu, Shitao Xiao, Ninglu Shao, Qiwei Ye, Zhicheng Dou
Compressing lengthy context is a critical but technically challenging problem.
1 code implementation • 30 Apr 2024 • Peitian Zhang, Ninglu Shao, Zheng Liu, Shitao Xiao, Hongjin Qian, Qiwei Ye, Zhicheng Dou
We extend the context length of Llama-3-8B-Instruct from 8K to 80K via QLoRA fine-tuning.
no code implementations • 7 Mar 2024 • Zhaoqun Li, Jingcheng Yu, Qiwei Ye
Deep learning has made significant progress in protein structure prediction, advancing the development of computational biology.
no code implementations • 21 Feb 2024 • Kai Yang, Jan Ackermann, Zhenyu He, Guhao Feng, Bohang Zhang, Yunzhen Feng, Qiwei Ye, Di He, LiWei Wang
Our results show that while these models are expressive enough to solve general DP tasks, contrary to expectations, they require a model size that scales with the problem size.
1 code implementation • 16 Jan 2024 • Bohang Zhang, Jingchu Gai, Yiheng Du, Qiwei Ye, Di He, LiWei Wang
Specifically, we identify a fundamental expressivity measure termed homomorphism expressivity, which quantifies the ability of GNN models to count graphs under homomorphism.
1 code implementation • 7 Jan 2024 • Peitian Zhang, Zheng Liu, Shitao Xiao, Ninglu Shao, Qiwei Ye, Zhicheng Dou
In this paper, we propose Activation Beacon, a plug-in module for transformer-based LLMs that targets effective, efficient, and flexible compression of long contexts.
1 code implementation • NeurIPS 2021 • Hanzhe Hu, Fangyun Wei, Han Hu, Qiwei Ye, Jinshi Cui, LiWei Wang
The confidence bank is leveraged as an indicator to tilt training towards under-performing categories, instantiated in three strategies: 1) adaptive Copy-Paste and CutMix data augmentation approaches which give more chance for under-performing categories to be copied or cut; 2) an adaptive data sampling approach to encourage pixels from under-performing category to be sampled; 3) a simple yet effective re-weighting method to alleviate the training noise raised by pseudo-labeling.
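Strategy 3 above can be illustrated with a minimal sketch. This is an assumed simplification, not the paper's exact formulation: the function name, the linear `1 - confidence` weighting, and the mean-normalization are all illustrative choices.

```python
import numpy as np

def confidence_reweight(per_class_conf):
    """Hypothetical re-weighting sketch: scale each category's loss
    inversely to its confidence-bank score, so under-performing
    (low-confidence) categories contribute more to training."""
    conf = np.asarray(per_class_conf, dtype=float)
    w = 1.0 - conf              # low confidence -> high weight
    return w / w.mean()         # keep the average weight at 1

# e.g., three categories with confidence scores 0.9, 0.7, 0.5
weights = confidence_reweight([0.9, 0.7, 0.5])
```

A category with confidence 0.5 here receives five times the weight of one with confidence 0.9, while the average weight stays 1 so the overall loss scale is preserved.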
no code implementations • 29 Sep 2021 • Qiwei Ye, Yuxuan Song, Chang Liu, Fangyun Wei, Tao Qin, Tie-Yan Liu
Stochastic policies have been widely applied for their desirable properties in exploration and uncertainty quantification.
Ranked #1 on MuJoCo Games on Ant-v3
no code implementations • 9 Mar 2021 • Tianhao Zhang, Yueheng Li, Shuai Li, Qiwei Ye, Chen Wang, Guangming Xie
In this paper, the circle formation control problem is addressed for a group of cooperative underactuated fish-like robots involving unknown nonlinear dynamics and disturbances.
1 code implementation • 5 Apr 2020 • Yuxuan Song, Qiwei Ye, Minkai Xu, Tie-Yan Liu
Generative Adversarial Networks (GANs) have shown great promise in modeling high dimensional data.
Ranked #13 on Image Generation on STL-10
no code implementations • 30 Mar 2020 • Junjie Li, Sotetsu Koyamada, Qiwei Ye, Guoqing Liu, Chao Wang, Ruihan Yang, Li Zhao, Tao Qin, Tie-Yan Liu, Hsiao-Wuen Hon
Artificial Intelligence (AI) has achieved great success in many domains, and game AI is widely regarded as its beachhead since the dawn of AI.
no code implementations • 28 May 2019 • Ruihan Yang, Qiwei Ye, Tie-Yan Liu
Based on that, we propose an end-to-end algorithm to learn an exploration policy via meta-learning.
no code implementations • 28 May 2019 • Yufei Wang, Qiwei Ye, Tie-Yan Liu
In reinforcement learning, the Return, i.e., the weighted accumulation of future rewards, and the Value, i.e., the expected return, serve as objectives that guide the learning of the policy.
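The two standard definitions above can be sketched directly; this is a minimal illustration of the textbook quantities (with an assumed discount factor `gamma`), not the paper's method.

```python
import numpy as np

def discounted_return(rewards, gamma=0.99):
    """G_t = r_t + gamma*r_{t+1} + gamma^2*r_{t+2} + ...
    i.e., the weighted accumulation of future rewards."""
    g = 0.0
    for r in reversed(rewards):  # fold from the last reward backwards
        g = r + gamma * g
    return g

def value_estimate(sampled_returns):
    """The value is the *expected* return; a Monte Carlo
    estimate is simply the mean of sampled returns."""
    return float(np.mean(sampled_returns))

g = discounted_return([1.0, 0.0, 2.0], gamma=0.5)  # 1 + 0.5*0 + 0.25*2 = 1.5
v = value_estimate([1.5, 2.5])                     # 2.0
```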
1 code implementation • NeurIPS 2017 • Guolin Ke, Qi Meng, Thomas Finley, Taifeng Wang, Wei Chen, Weidong Ma, Qiwei Ye, Tie-Yan Liu
We prove that, since the data instances with larger gradients play a more important role in the computation of information gain, GOSS can obtain a quite accurate estimate of the information gain with a much smaller data size.
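The sampling idea behind GOSS can be sketched as follows. This is a hedged NumPy sketch of the sampling step as described in the abstract (keep large-gradient instances, subsample the rest, and compensate with a constant weight); the parameter names `a` and `b` follow the paper's convention, but the implementation details are illustrative.

```python
import numpy as np

def goss_sample(grads, a=0.2, b=0.1, rng=None):
    """Gradient-based One-Side Sampling (sketch).
    Keep the top a-fraction of instances by |gradient|, randomly keep
    a b-fraction of the remainder, and upweight the sampled
    small-gradient instances by (1 - a) / b so that gradient sums
    stay approximately unbiased."""
    rng = rng or np.random.default_rng(0)
    grads = np.asarray(grads)
    n = len(grads)
    order = np.argsort(-np.abs(grads))       # descending |gradient|
    top_k = int(a * n)
    top = order[:top_k]                      # large-gradient instances: always kept
    rest = order[top_k:]
    sampled = rng.choice(rest, size=int(b * n), replace=False)
    idx = np.concatenate([top, sampled])
    weights = np.ones(len(idx))
    weights[top_k:] = (1 - a) / b            # compensate for dropped instances
    return idx, weights

idx, w = goss_sample(np.linspace(-1, 1, 100))  # keeps 20 + 10 = 30 of 100
```

With `a=0.2, b=0.1`, only 30% of the data is retained, and the 10 sampled small-gradient instances each carry weight `(1 - 0.2) / 0.1 = 8`.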
no code implementations • NeurIPS 2016 • Qi Meng, Guolin Ke, Taifeng Wang, Wei Chen, Qiwei Ye, Zhi-Ming Ma, Tie-Yan Liu
After partitioning the training data onto a number of (e.g., $M$) machines, this algorithm performs both local voting and global voting in each iteration.
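The two-stage voting idea can be sketched as follows. This is an assumed simplification for illustration: each machine votes for its local top-k features by gain, and the globally most-voted features are kept; the function name and tallying details are not the paper's exact algorithm.

```python
import numpy as np

def voting_feature_selection(local_gains, k=2, top=2):
    """Two-stage voting sketch: `local_gains` holds one gain vector per
    machine. Each machine casts votes for its local top-k features
    (local voting); features are then ranked by total vote count and
    the most-voted `top` features are kept (global voting)."""
    votes = {}
    for gains in local_gains:                            # one vector per machine
        local_top = np.argsort(-np.asarray(gains))[:k]   # local voting
        for f in local_top:
            votes[int(f)] = votes.get(int(f), 0) + 1
    return sorted(votes, key=lambda f: -votes[f])[:top]  # global voting

# three machines, four candidate features
local_gains = [[3, 1, 2, 0], [3, 0, 2, 1], [0, 1, 3, 2]]
selected = voting_feature_selection(local_gains, k=2, top=2)  # features 2 and 0
```

Only feature votes (not full gradient histograms) cross machine boundaries, which is what keeps the communication cost low.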