Search Results for author: Qiwei Ye

Found 13 papers, 5 papers with code

SGNet: Folding Symmetrical Protein Complex with Deep Learning

no code implementations • 7 Mar 2024 • Zhaoqun Li, Jingcheng Yu, Qiwei Ye

Deep learning has made significant progress in protein structure prediction, advancing the development of computational biology.

Protein Folding • Protein Structure Prediction

Do Efficient Transformers Really Save Computation?

no code implementations • 21 Feb 2024 • Kai Yang, Jan Ackermann, Zhenyu He, Guhao Feng, Bohang Zhang, Yunzhen Feng, Qiwei Ye, Di He, LiWei Wang

Our results show that while these models are expressive enough to solve general DP tasks, contrary to expectations, they require a model size that scales with the problem size.

Beyond Weisfeiler-Lehman: A Quantitative Framework for GNN Expressiveness

1 code implementation • 16 Jan 2024 • Bohang Zhang, Jingchu Gai, Yiheng Du, Qiwei Ye, Di He, LiWei Wang

Specifically, we identify a fundamental expressivity measure termed homomorphism expressivity, which quantifies the ability of GNN models to count graphs under homomorphism.

Graph Learning • Subgraph Counting
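The quantity behind this measure, the number of homomorphisms from a small pattern graph into a target graph, can be illustrated with a brute-force sketch (the function and the adjacency-set encoding here are illustrative, not from the paper):

```python
from itertools import product

def count_homomorphisms(pattern_edges, pattern_nodes, graph_adj):
    """Brute-force count of graph homomorphisms: mappings from pattern
    nodes to target nodes that send every pattern edge to a target edge."""
    nodes = list(graph_adj)
    count = 0
    for mapping in product(nodes, repeat=len(pattern_nodes)):
        assign = dict(zip(pattern_nodes, mapping))
        if all(assign[v] in graph_adj[assign[u]] for u, v in pattern_edges):
            count += 1
    return count

# Homomorphisms from a single edge into a triangle: 6 (ordered node pairs).
triangle = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
count_homomorphisms([("a", "b")], ["a", "b"], triangle)  # 6
```

Exhaustive enumeration is exponential in the pattern size; the paper's point is to characterize which such counts a given GNN architecture can compute, not how to compute them.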

Soaring from 4K to 400K: Extending LLM's Context with Activation Beacon

1 code implementation • 7 Jan 2024 • Peitian Zhang, Zheng Liu, Shitao Xiao, Ninglu Shao, Qiwei Ye, Zhicheng Dou

Although the context window can be extended through fine-tuning, doing so incurs considerable cost at both training and inference time and can adversely affect the LLM's original capabilities.

4k • Language Modelling

Semi-Supervised Semantic Segmentation via Adaptive Equalization Learning

1 code implementation • NeurIPS 2021 • Hanzhe Hu, Fangyun Wei, Han Hu, Qiwei Ye, Jinshi Cui, LiWei Wang

The confidence bank is leveraged as an indicator to tilt training towards under-performing categories, instantiated in three strategies: 1) adaptive Copy-Paste and CutMix data augmentation approaches which give under-performing categories more chance to be copied or cut; 2) an adaptive data sampling approach that encourages pixels from under-performing categories to be sampled; 3) a simple yet effective re-weighting method to alleviate the training noise introduced by pseudo-labeling.

Data Augmentation • Semi-Supervised Semantic Segmentation
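As a rough illustration of the re-weighting idea, here is a minimal sketch assuming a per-category confidence score in [0, 1] from the confidence bank (the function name and the specific weighting form are hypothetical, not the paper's exact formulation):

```python
import numpy as np

def category_weights(confidences, alpha=1.0):
    """Tilt training toward under-performing categories: lower
    confidence -> larger loss weight, normalized so the mean weight is 1.
    The (1 - c)^alpha form is an illustrative choice."""
    confidences = np.asarray(confidences, dtype=float)
    weights = (1.0 - confidences) ** alpha
    return weights / weights.sum() * len(weights)

# A low-confidence category receives a larger loss weight.
w = category_weights([0.9, 0.5, 0.2])
```

The same confidence signal could drive the adaptive Copy-Paste/CutMix and sampling strategies, e.g. by sampling categories proportionally to these weights.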

Particle Based Stochastic Policy Optimization

no code implementations • 29 Sep 2021 • Qiwei Ye, Yuxuan Song, Chang Liu, Fangyun Wei, Tao Qin, Tie-Yan Liu

Stochastic policies have been widely applied for their good properties in exploration and uncertainty quantification.

MuJoCo Games • Offline RL +2

Decentralized Circle Formation Control for Fish-like Robots in the Real-world via Reinforcement Learning

no code implementations • 9 Mar 2021 • Tianhao Zhang, Yueheng Li, Shuai Li, Qiwei Ye, Chen Wang, Guangming Xie

In this paper, the circle formation control problem is addressed for a group of cooperative underactuated fish-like robots involving unknown nonlinear dynamics and disturbances.

reinforcement-learning • Reinforcement Learning (RL)

Suphx: Mastering Mahjong with Deep Reinforcement Learning

no code implementations • 30 Mar 2020 • Junjie Li, Sotetsu Koyamada, Qiwei Ye, Guoqing Liu, Chao Wang, Ruihan Yang, Li Zhao, Tao Qin, Tie-Yan Liu, Hsiao-Wuen Hon

Artificial Intelligence (AI) has achieved great success in many domains, and game AI is widely regarded as its beachhead since the dawn of AI.

reinforcement-learning • Reinforcement Learning (RL)

Beyond Exponentially Discounted Sum: Automatic Learning of Return Function

no code implementations • 28 May 2019 • Yufei Wang, Qiwei Ye, Tie-Yan Liu

In reinforcement learning, the Return, i.e. the weighted accumulation of future rewards, and the Value, i.e. the expected return, serve as the objectives that guide the learning of the policy.

Atari Games • Meta-Learning +2
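The standard exponentially discounted sum that this paper proposes to generalize is easy to state concretely (a textbook computation, not code from the paper):

```python
def discounted_return(rewards, gamma=0.99):
    """Exponentially discounted return G_t = sum_k gamma^k * r_{t+k},
    computed backwards for numerical simplicity. This is the fixed
    weighting scheme the paper replaces with a learned return function."""
    g = 0.0
    for r in reversed(rewards):
        g = r + gamma * g
    return g

discounted_return([1.0, 1.0, 1.0], gamma=0.5)  # 1 + 0.5 + 0.25 = 1.75
```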

LightGBM: A Highly Efficient Gradient Boosting Decision Tree

1 code implementation • NeurIPS 2017 • Guolin Ke, Qi Meng, Thomas Finley, Taifeng Wang, Wei Chen, Weidong Ma, Qiwei Ye, Tie-Yan Liu

We prove that, since the data instances with larger gradients play a more important role in the computation of information gain, GOSS can obtain quite accurate estimation of the information gain with a much smaller data size.
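A minimal sketch of the Gradient-based One-Side Sampling (GOSS) idea described above, assuming per-instance gradients are available (parameter names follow the paper's description of the technique, but this is an illustration, not LightGBM's implementation):

```python
import numpy as np

def goss_sample(gradients, top_rate=0.2, other_rate=0.1, seed=0):
    """GOSS sketch: keep all instances with large |gradient|, randomly
    sample a fraction of the rest, and up-weight the sampled
    small-gradient instances by (1 - top_rate) / other_rate so the
    information-gain estimate stays approximately unbiased."""
    rng = np.random.default_rng(seed)
    g = np.abs(np.asarray(gradients, dtype=float))
    n = len(g)
    n_top = int(top_rate * n)
    n_other = int(other_rate * n)
    order = np.argsort(-g)                  # sort by |gradient|, descending
    top_idx = order[:n_top]                 # always keep large-gradient instances
    sampled = rng.choice(order[n_top:], size=n_other, replace=False)
    weights = np.ones(n)
    weights[sampled] = (1.0 - top_rate) / other_rate
    keep = np.concatenate([top_idx, sampled])
    return keep, weights[keep]
```

With the defaults, only 30% of the instances are used per split search, while the re-weighting compensates for the down-sampled small-gradient population.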

A Communication-Efficient Parallel Algorithm for Decision Tree

no code implementations • NeurIPS 2016 • Qi Meng, Guolin Ke, Taifeng Wang, Wei Chen, Qiwei Ye, Zhi-Ming Ma, Tie-Yan Liu

After partitioning the training data onto a number of (e.g., $M$) machines, this algorithm performs both local voting and global voting in each iteration.

2k • Attribute
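A toy sketch of the two-stage voting, assuming each machine nominates its locally best split attributes (all names and the data layout are illustrative, not the paper's implementation):

```python
from collections import Counter

def select_split_attributes(local_candidates, k=2):
    """Two-stage voting sketch: each machine votes for its top-k
    attributes on its local data partition (local voting); the
    attributes with the most votes across machines, at most 2k of
    them, are kept for the final global check (global voting)."""
    votes = Counter()
    for top_k in local_candidates:   # one list of top-k attributes per machine
        votes.update(top_k[:k])
    return [attr for attr, _ in votes.most_common(2 * k)]

# Three machines each nominate their top-2 attributes.
select_split_attributes([["a", "b"], ["a", "c"], ["a", "b"]], k=2)
```

Only the short candidate lists and the final histograms of the winning attributes need to be communicated, which is what makes the algorithm communication-efficient.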
