Search Results for author: Weihao Gao

Found 21 papers, 7 papers with code

Machine Learning Force Fields with Data Cost Aware Training

1 code implementation • 5 Jun 2023 • Alexander Bukharin, Tianyi Liu, Shengjie Wang, Simiao Zuo, Weihao Gao, Wen Yan, Tuo Zhao

To address this issue, we propose a multi-stage computational framework -- ASTEROID, which lowers the data cost of MLFFs by leveraging a combination of cheap inaccurate data and expensive accurate data.

Learning Regularized Positional Encoding for Molecular Prediction

no code implementations • 23 Nov 2022 • Xiang Gao, Weihao Gao, Wenzhi Xiao, Zhirui Wang, Chong Wang, Liang Xiang

To model the complex nonlinearity in predicting molecular properties in a more end-to-end approach, we propose to encode the positional quantities with a learnable embedding that is continuous and differentiable.

Supervised Pretraining for Molecular Force Fields and Properties Prediction

no code implementations • 23 Nov 2022 • Xiang Gao, Weihao Gao, Wenzhi Xiao, Zhirui Wang, Chong Wang, Liang Xiang

Experiments show that, compared to training from scratch, fine-tuning the pretrained model can significantly improve the performance for seven molecular property prediction tasks and two force field tasks.

Molecular Property Prediction • Property Prediction

Learning to Simulate Unseen Physical Systems with Graph Neural Networks

no code implementations • NeurIPS Workshop AI4Science 2021 • Ce Yang, Weihao Gao, Di Wu, Chong Wang

Simulation of the dynamics of physical systems is essential to the development of both science and engineering.

Learning Large-Time-Step Molecular Dynamics with Graph Neural Networks

no code implementations • NeurIPS Workshop AI4Science 2021 • Tianze Zheng, Weihao Gao, Chong Wang

Molecular dynamics (MD) simulation predicts the trajectory of atoms by solving Newton's equation of motion with a numeric integrator.
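A standard numeric integrator for Newton's equation of motion is velocity Verlet. As a minimal sketch of the kind of step a large-time-step learned model would replace (the harmonic force below is a hypothetical stand-in for a real force field, not the paper's method):

```python
import numpy as np

def velocity_verlet(x, v, force, dt, n_steps, mass=1.0):
    """Integrate Newton's equation of motion with the velocity Verlet scheme."""
    traj = [x]
    f = force(x)
    for _ in range(n_steps):
        x = x + v * dt + 0.5 * (f / mass) * dt**2   # position half of the update
        f_new = force(x)
        v = v + 0.5 * (f + f_new) / mass * dt        # velocity update with averaged force
        f = f_new
        traj.append(x)
    return np.array(traj)

# Illustrative example: 1-D harmonic oscillator with force = -k * x.
k = 1.0
traj = velocity_verlet(x=1.0, v=0.0, force=lambda x: -k * x, dt=0.01, n_steps=1000)
```

The time step `dt` must stay small for stability, which is what motivates learning models that remain accurate at much larger steps.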

Defending against Reconstruction Attack in Vertical Federated Learning

no code implementations • 21 Jul 2021 • Jiankai Sun, Yuanshun Yao, Weihao Gao, Junyuan Xie, Chong Wang

Recently, researchers have studied input leakage problems in Federated Learning (FL), where a malicious party can reconstruct sensitive training inputs provided by users from shared gradients.

Privacy Preserving • Reconstruction Attack • +1

Vertical Federated Learning without Revealing Intersection Membership

no code implementations • 10 Jun 2021 • Jiankai Sun, Xin Yang, Yuanshun Yao, Aonan Zhang, Weihao Gao, Junyuan Xie, Chong Wang

In this paper, we propose a vFL framework based on Private Set Union (PSU) that allows each party to keep sensitive membership information to itself.

Vertical Federated Learning

One Backward from Ten Forward, Subsampling for Large-Scale Deep Learning

no code implementations • 27 Apr 2021 • Chaosheng Dong, Xiaojie Jin, Weihao Gao, Yijia Wang, Hongyi Zhang, Xiang Wu, Jianchao Yang, Xiaobing Liu

Deep learning models in large-scale machine learning systems are often continuously trained with enormous data from production environments.

Deep Retrieval: An End-to-End Structure Model for Large-Scale Recommendations

1 code implementation • 1 Jan 2021 • Weihao Gao, Xiangjun Fan, Jiankai Sun, Kai Jia, Wenzhi Xiao, Chong Wang, Xiaobing Liu

With the model learnt, a beam search over the latent codes is performed to retrieve the top candidates.

Retrieval
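A beam search over layered discrete codes keeps only the highest-scoring prefixes at each layer. As a generic sketch (the scorer, names, and toy setup below are illustrative, not the paper's actual Deep Retrieval implementation):

```python
def beam_search(score_fn, depth, vocab, beam_width):
    """Keep the beam_width highest-scoring code prefixes after each layer."""
    beams = [((), 0.0)]  # (prefix of latent codes, cumulative log-score)
    for d in range(depth):
        candidates = []
        for prefix, score in beams:
            for code in range(vocab):
                candidates.append((prefix + (code,), score + score_fn(d, prefix, code)))
        # prune to the top beam_width candidates before the next layer
        beams = sorted(candidates, key=lambda t: t[1], reverse=True)[:beam_width]
    return beams

# Toy scorer that prefers code d at layer d (purely illustrative).
toy_scorer = lambda d, prefix, code: 0.0 if code == d else -1.0
top = beam_search(toy_scorer, depth=3, vocab=3, beam_width=2)
```

Pruning at every layer keeps the cost at `depth * beam_width * vocab` score evaluations instead of `vocab ** depth`.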

Information-Theoretic Understanding of Population Risk Improvement with Model Compression

1 code implementation • 27 Jan 2019 • Yuheng Bu, Weihao Gao, Shaofeng Zou, Venugopal V. Veeravalli

We show that model compression can improve the population risk of a pre-trained model, by studying the tradeoff between the decrease in the generalization error and the increase in the empirical risk with model compression.

Clustering • Model Compression

Rate Distortion For Model Compression: From Theory To Practice

no code implementations • 9 Oct 2018 • Weihao Gao, Yu-Han Liu, Chong Wang, Sewoong Oh

Theoretically, we prove that the proposed scheme is optimal for compressing one-hidden-layer ReLU neural networks.

Data Compression • Model Compression • +1

Learning One-hidden-layer Neural Networks under General Input Distributions

no code implementations • 9 Oct 2018 • Weihao Gao, Ashok Vardhan Makkuva, Sewoong Oh, Pramod Viswanath

Significant advances have been made recently on training neural networks, where the main challenge is in solving an optimization problem with abundant critical points.

The Nearest Neighbor Information Estimator is Adaptively Near Minimax Rate-Optimal

no code implementations • NeurIPS 2018 • Jiantao Jiao, Weihao Gao, Yanjun Han

We analyze the Kozachenko--Leonenko (KL) nearest neighbor estimator for the differential entropy.
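The KL estimator turns the distance from each sample to its k-th nearest neighbor into an entropy estimate. A minimal 1-D sketch with fixed k = 1 (function names and the uniform test case are illustrative, and only standard-library Python is used):

```python
import math, random

def digamma(m):
    """Digamma at a positive integer: psi(m) = -gamma + sum_{j=1}^{m-1} 1/j."""
    return -0.5772156649015329 + sum(1.0 / j for j in range(1, m))

def kl_entropy_1d(samples, k=1):
    """Kozachenko-Leonenko differential entropy estimate for 1-D data, in nats."""
    n = len(samples)
    total = 0.0
    for i, x in enumerate(samples):
        # distance from sample i to its k-th nearest neighbor
        dists = sorted(abs(x - samples[j]) for j in range(n) if j != i)
        total += math.log(dists[k - 1])
    # log(2) is the log-volume of the unit "ball" (an interval) in one dimension
    return digamma(n) - digamma(k) + math.log(2.0) + total / n

random.seed(0)
h = kl_entropy_1d([random.uniform(0.0, 1.0) for _ in range(500)])
# true differential entropy of Uniform(0, 1) is 0
```

The estimate should land near 0 for this uniform sample; the paper's analysis concerns how fast the bias of exactly this kind of estimator vanishes with the sample size.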

Estimating Mutual Information for Discrete-Continuous Mixtures

1 code implementation • NeurIPS 2017 • Weihao Gao, Sreeram Kannan, Sewoong Oh, Pramod Viswanath

We provide numerical experiments suggesting that the proposed estimator is superior to two common heuristics: adding small continuous noise to all the samples and applying standard estimators tailored for purely continuous variables, or quantizing the samples and applying standard estimators tailored for purely discrete variables.

Clustering • Mutual Information Estimation

Breaking the Bandwidth Barrier: Geometrical Adaptive Entropy Estimation

no code implementations • NeurIPS 2016 • Weihao Gao, Sewoong Oh, Pramod Viswanath

In this paper, we combine both of these approaches to design new estimators of entropy and mutual information that outperform state-of-the-art methods.

Demystifying Fixed k-Nearest Neighbor Information Estimators

1 code implementation • 11 Apr 2016 • Weihao Gao, Sewoong Oh, Pramod Viswanath

In this paper, we demonstrate that the estimator is consistent and also identify an upper bound on the rate of convergence of the bias as a function of the number of samples.

Conditional Dependence via Shannon Capacity: Axioms, Estimators and Applications

no code implementations • 10 Feb 2016 • Weihao Gao, Sreeram Kannan, Sewoong Oh, Pramod Viswanath

We conduct an axiomatic study of the problem of estimating the strength of a known causal relationship between a pair of variables.
