1 code implementation • 31 May 2024 • Yiwen Sun, Wenye Li
OpenTensor is a reproduction of AlphaTensor, which used Deep Reinforcement Learning (DRL) to discover a new matrix multiplication algorithm that outperforms the state-of-the-art methods.
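The kind of object AlphaTensor searches for can be illustrated with a classical example (a sketch for intuition, not part of OpenTensor itself): Strassen's algorithm multiplies two 2×2 matrices with 7 scalar multiplications instead of the naive 8, which is exactly a rank-7 decomposition of the matrix multiplication tensor.

```python
import numpy as np

def strassen_2x2(A, B):
    """Multiply two 2x2 matrices with 7 scalar multiplications
    (Strassen, 1969) instead of the naive 8 -- the kind of
    low-rank algorithm AlphaTensor searches for automatically."""
    a, b, c, d = A[0, 0], A[0, 1], A[1, 0], A[1, 1]
    e, f, g, h = B[0, 0], B[0, 1], B[1, 0], B[1, 1]
    m1 = (a + d) * (e + h)
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)
    return np.array([[m1 + m4 - m5 + m7, m3 + m5],
                     [m2 + m4, m1 - m2 + m3 + m6]])
```

Each of the seven products corresponds to one rank-1 term of the decomposition; finding such decompositions for larger shapes is the search problem the DRL agent solves.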
no code implementations • 4 Apr 2024 • Jiacai Liu, Wenye Li, Ke Wei
The projected policy gradient under the simplex parameterization, as well as the policy gradient and natural policy gradient under the softmax parameterization, are fundamental algorithms in reinforcement learning.
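As a small illustration of the softmax parameterization (a generic textbook sketch, not the paper's analysis), the exact policy gradient on a one-step bandit with action rewards r is ∇θ J = diag(π)r − π(πᵀr):

```python
import numpy as np

def softmax(theta):
    z = np.exp(theta - theta.max())   # shift for numerical stability
    return z / z.sum()

def policy_gradient_step(theta, rewards, lr=0.1):
    """One exact policy-gradient step under the softmax
    parameterization on a one-step bandit:
    dJ/dtheta_b = pi_b * r_b - pi_b * (pi . r)."""
    pi = softmax(theta)
    grad = pi * rewards - pi * (pi @ rewards)
    return theta + lr * grad
```

Iterating this update drives the policy toward the highest-reward action, though the convergence rate depends on the parameterization — the contrast that motivates the comparison in the paper.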
no code implementations • 3 Feb 2024 • Yurui Chen, Junge Zhang, Ziyang Xie, Wenye Li, Feihu Zhang, Jiachen Lu, Li Zhang
Autonomous driving simulation systems play a crucial role in enhancing self-driving data and simulating complex and rare traffic scenarios, ensuring navigation safety.
3 code implementations • Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV-2024) 2023 • Fangchen Yu, Yina Xie, Lei Wu, Yafei Wen, Guozhi Wang, Shuai Ren, Xiaoxin Chen, Jianfeng Mao, Wenye Li
Document image dewarping is a crucial task in computer vision with numerous practical applications.
1 code implementation • 37th Conference on Neural Information Processing Systems (NeurIPS 2023) 2023 • Fangchen Yu, Runze Zhao, Zhan Shi, Yiwen Lu, Jicong Fan, Yicheng Zeng, Jianfeng Mao, Wenye Li
Secondly, we develop a series of affinity learning methods that equip the self-expressive framework with the ℓp-norm to construct an intrinsic affinity matrix with an adaptive extension.
2 code implementations • 26th European Conference on Artificial Intelligence 2023 • Fangchen Yu, Rui Bao, Jianfeng Mao, Wenye Li
Phylogenetic trees are essential in studying evolutionary relationships, and the Robinson-Foulds (RF) distance is a widely used metric to calculate pairwise dissimilarities between phylogenetic trees, with various applications in both the biology and computing communities.
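For intuition (a minimal sketch, not the paper's algorithm), the RF distance counts the splits that appear in exactly one of the two trees; representing each rooted tree by its set of non-trivial clades reduces the computation to a symmetric set difference:

```python
def rf_distance(clades_a, clades_b):
    """Robinson-Foulds distance between two rooted trees, each
    given as a set of non-trivial clades (frozensets of leaves):
    the number of clades found in exactly one of the trees."""
    return len(clades_a ^ clades_b)

# Two toy trees over leaves {A, B, C}: ((A,B),C) vs ((A,C),B)
t1 = {frozenset({'A', 'B'}), frozenset({'A', 'B', 'C'})}
t2 = {frozenset({'A', 'C'}), frozenset({'A', 'B', 'C'})}
```

Here `rf_distance(t1, t2)` is 2, since the clades {A,B} and {A,C} each occur in only one tree.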
2 code implementations • Thirty-Ninth Conference on Uncertainty in Artificial Intelligence (UAI) 2023 • Fangchen Yu, Yicheng Zeng, Jianfeng Mao, Wenye Li
To address this challenge, we propose matrix correction algorithms that leverage the positive semi-definiteness (PSD) of the similarity matrix to improve similarity estimation in both offline and online scenarios.
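A common baseline for PSD-based correction (a generic sketch; the paper's algorithms go beyond it) projects the symmetrized similarity matrix onto the PSD cone by clipping its negative eigenvalues:

```python
import numpy as np

def psd_correction(S):
    """Project a noisy symmetric similarity matrix onto the PSD
    cone by zeroing negative eigenvalues -- the nearest PSD matrix
    in Frobenius norm for a symmetric input."""
    S = (S + S.T) / 2                      # enforce symmetry first
    w, V = np.linalg.eigh(S)
    return V @ np.diag(np.clip(w, 0, None)) @ V.T
```

The output is guaranteed PSD, so downstream kernel or clustering methods that require a valid similarity matrix can be applied safely.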
1 code implementation • The Thirty-Seventh AAAI Conference on Artificial Intelligence 2023 • Wenye Li, Fangchen Yu, Zichen Ma
The first stage computes a fast yet high-quality approximate solution from a set of isometrically embeddable metrics, further improved by an effective heuristic.
no code implementations • 1 Mar 2023 • Ziyang Xie, Junge Zhang, Wenye Li, Feihu Zhang, Li Zhang
Specifically, we improve the scene parameterization function and the camera poses for learning better neural representations from street views.
no code implementations • 10 Nov 2022 • Haoning Zhang, Junwei Bao, Haipeng Sun, Youzheng Wu, Wenye Li, Shuguang Cui, Xiaodong He
Then, the noised previous state is used as the input to learn to predict the current state, improving the model's ability to update and correct slot values.
no code implementations • 11 Oct 2022 • Haoning Zhang, Junwei Bao, Haipeng Sun, Huaishao Luo, Wenye Li, Shuguang Cui
The unlabeled data of the DST task is incorporated into the self-training iterations, where the pseudo labels are predicted by a DST model trained on limited labeled data in advance.
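The generic self-training loop behind this setup can be sketched with any base classifier; here a nearest-centroid model stands in for the DST model (purely illustrative, not the paper's architecture):

```python
import numpy as np

def self_training(X_lab, y_lab, X_unlab, rounds=3):
    """Generic self-training: fit a simple nearest-centroid
    classifier on labelled data, pseudo-label the unlabelled pool,
    then refit on the union of gold and pseudo labels."""
    X, y = X_lab, y_lab
    for _ in range(rounds):
        classes = np.unique(y)
        centroids = np.array([X[y == c].mean(axis=0) for c in classes])
        dists = np.linalg.norm(X_unlab[:, None, :] - centroids[None, :, :], axis=2)
        pseudo = classes[dists.argmin(axis=1)]      # pseudo labels
        X = np.vstack([X_lab, X_unlab])             # gold + pseudo data
        y = np.concatenate([y_lab, pseudo])
    return pseudo, centroids, classes
```

The paper adds noise to the pseudo labels and states within this loop; the sketch only shows the vanilla iterate-and-refit skeleton.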
no code implementations • 12 Aug 2022 • Zichen Ma, Yu Lu, Wenye Li, Shuguang Cui
This dynamically personalized FL technique incentivizes clients to participate in personalizing local models while allowing the adoption of the global model when it performs better.
no code implementations • 17 May 2022 • Xinyu Chen, Renjie Li, Yueyao Yu, Yuanwen Shen, Wenye Li, Zhaoyu Zhang, Yin Zhang
In this work, we propose the first-ever Transformer model (POViT) to efficiently design and simulate semiconductor photonic devices with multiple objectives.
no code implementations • 10 Dec 2021 • Zichen Ma, Zihan Lu, Yu Lu, Wenye Li, JinFeng Yi, Shuguang Cui
In this paper, we design a federated two-stage learning framework that augments prototypical federated learning with a cut layer on devices and uses sign-based stochastic gradient descent with the majority vote method on model updates.
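The sign-based aggregation step can be sketched as follows (a minimal illustration of signSGD with majority vote, not the paper's full two-stage framework):

```python
import numpy as np

def sign_sgd_majority_vote(grads, lr=0.01):
    """signSGD with majority vote: each client transmits only the
    sign of its gradient; the server takes an elementwise majority
    vote and applies the voted sign as the update direction."""
    signs = np.sign(grads)               # shape: (n_clients, dim)
    vote = np.sign(signs.sum(axis=0))    # elementwise majority vote
    return -lr * vote                    # descent step
```

Transmitting one sign bit per coordinate is what makes this aggregation communication-efficient on devices.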
no code implementations • 17 Jun 2021 • Zichen Ma, Yu Lu, Zihan Lu, Wenye Li, JinFeng Yi, Shuguang Cui
Training in heterogeneous and potentially massive networks introduces bias into the system, which originates from the non-IID data and the low participation rate in practice.
no code implementations • 1 Jan 2021 • Yueyao Yu, Jie Wang, Wenye Li, Yin Zhang
The stochastic gradient descent (SGD) method, first proposed in the 1950s, has been the foundation for deep-neural-network (DNN) training, with numerous enhancements including adding momentum, adaptively selecting learning rates, or combining both strategies and more.
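For reference, the classical heavy-ball (momentum) enhancement mentioned here can be written in a few lines (generic textbook form, not a method from this paper):

```python
import numpy as np

def sgd_momentum(grad_fn, x0, lr=0.1, beta=0.9, steps=200):
    """SGD with heavy-ball momentum:
    v <- beta * v + grad(x);  x <- x - lr * v."""
    x = np.asarray(x0, dtype=float)
    v = np.zeros_like(x)
    for _ in range(steps):
        v = beta * v + grad_fn(x)
        x = x - lr * v
    return x
```

On a simple quadratic the momentum term accumulates consistent gradient directions and damps oscillation, which is the intuition behind the enhancements the abstract alludes to.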
no code implementations • 29 Jun 2020 • Wenye Li, Shuzhong Zhang
Random projection is often used to project higher-dimensional vectors onto a lower-dimensional space, while approximately preserving their pairwise distances.
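The standard Gaussian construction (a sketch of the classical technique, as in the Johnson-Lindenstrauss lemma, not this paper's contribution) looks like this:

```python
import numpy as np

def random_projection(X, k, seed=0):
    """Project rows of X from d to k dimensions with a Gaussian
    random matrix scaled by 1/sqrt(k); pairwise distances are
    approximately preserved (Johnson-Lindenstrauss lemma)."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    R = rng.standard_normal((d, k)) / np.sqrt(k)
    return X @ R
```

The distortion of any fixed pairwise distance concentrates around 1 at rate roughly 1/√k, so moderate k already preserves geometry well.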
no code implementations • 5 Nov 2019 • Wenye Li, Senyue Hao
As the first step in automated natural language processing, representing words and sentences is of central importance and has attracted significant research attention.
no code implementations • 30 Oct 2019 • Hailiang Li, Adele Y. C. Wang, Yang Liu, Du Tang, Zhibin Lei, Wenye Li
Transformer-based neural networks have shown significant advantages on most evaluations of various natural language processing and other sequence-to-sequence tasks, owing to their inherent architectural superiority.
no code implementations • 27 Jul 2019 • Wenye Li
Inspired by the advances in biological science, the study of sparse binary projection models has attracted considerable recent research attention.
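The fly-olfaction-inspired scheme behind such models can be sketched as follows (an illustrative construction with made-up parameter values, not the paper's learned projection): a sparse random binary matrix expands the input, and a winner-take-all step keeps only the strongest activations as a sparse binary code.

```python
import numpy as np

def sparse_binary_projection(x, m=2000, k=6, top=16, seed=0):
    """Fly-inspired hashing: expand x to m dimensions with a random
    binary matrix having k ones per row, then keep the `top` largest
    activations (winner-take-all) as a sparse binary code."""
    rng = np.random.default_rng(seed)
    d = x.shape[0]
    W = np.zeros((m, d))
    for i in range(m):
        W[i, rng.choice(d, size=k, replace=False)] = 1.0
    act = W @ x
    code = np.zeros(m)
    code[np.argsort(act)[-top:]] = 1.0   # winner-take-all
    return code
```

The resulting codes are high-dimensional but extremely sparse, which is what makes them cheap to store and compare for similarity search.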
no code implementations • 18 Feb 2019 • Yueyao Yu, Pengfei Yu, Wenye Li
Deep learning models are vulnerable to adversarial examples, which poses an indisputable threat to their applications.
no code implementations • NeurIPS 2018 • Wenye Li, Jingwei Mao, Yin Zhang, Shuguang Cui
Similarity search is a fundamental problem in computer science with various applications and has attracted significant research attention, especially in large-scale search with high dimensions.
no code implementations • NeurIPS 2015 • Wenye Li
The Jaccard index is a standard statistic for comparing the pairwise similarity between data samples.
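Concretely, the statistic itself is a two-line computation (with the empty-sets case conventionally defined as 1):

```python
def jaccard(a, b):
    """Jaccard index J(A, B) = |A intersect B| / |A union B|."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if (a | b) else 1.0
```

Estimating this quantity efficiently at scale (e.g. via sketches such as MinHash) is what makes it practical for large sample collections.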