Search Results for author: Lixue Cheng

Found 8 papers, 5 papers with code

Quantum approximate optimization via learning-based adaptive optimization

1 code implementation · 27 Mar 2023 · Lixue Cheng, Yu-Qin Chen, Shi-Xin Zhang, Shengyu Zhang

Quantum approximate optimization algorithm (QAOA), one of the most representative quantum-classical hybrid algorithms, is designed to solve combinatorial optimization problems by transforming the discrete optimization problem into a classical optimization problem over continuous circuit parameters.

Bayesian Optimization · Combinatorial Optimization
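The abstract above frames QAOA as classical optimization over continuous circuit parameters. As a hedged illustration of that view only (not the learning-based adaptive optimizer proposed in the paper), the sketch below simulates depth-1 QAOA for a 4-node MaxCut instance with dense NumPy statevectors and optimizes the two angles with SciPy; the graph, depth, and optimizer are placeholder choices.

```python
import numpy as np
from scipy.optimize import minimize

# Tiny MaxCut instance: a 4-node ring graph (placeholder problem).
n = 4
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
dim = 2 ** n

# Diagonal of the MaxCut cost operator: C(z) = number of edges cut by bitstring z.
costs = np.zeros(dim)
for idx in range(dim):
    bits = [(idx >> q) & 1 for q in range(n)]
    costs[idx] = sum(bits[i] != bits[j] for i, j in edges)

def rx(beta):
    """Single-qubit mixer exp(-i * beta * X)."""
    return np.array([[np.cos(beta), -1j * np.sin(beta)],
                     [-1j * np.sin(beta), np.cos(beta)]])

def qaoa_state(gamma, beta):
    """Depth-1 QAOA state: mixer layer applied to the cost-phase layer on |+...+>."""
    psi = np.full(dim, 1 / np.sqrt(dim), dtype=complex)  # uniform superposition
    psi = np.exp(-1j * gamma * costs) * psi               # exp(-i*gamma*C) is diagonal
    mixer = rx(beta)
    for _ in range(n - 1):                                # exp(-i*beta*X) on every qubit
        mixer = np.kron(mixer, rx(beta))
    return mixer @ psi

def objective(params):
    """Negative expected cut value, as a classical function of the circuit angles."""
    psi = qaoa_state(*params)
    return -np.real(np.vdot(psi, costs * psi))

res = minimize(objective, x0=[0.3, 0.3], method="Nelder-Mead")
print("optimal (gamma, beta):", res.x, "expected cut value:", -res.fun)
```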

Molecular-orbital-based Machine Learning for Open-shell and Multi-reference Systems with Kernel Addition Gaussian Process Regression

1 code implementation · 17 Jul 2022 · Lixue Cheng, Jiace Sun, J. Emiliano Deustua, Vignesh C. Bhethanabotla, Thomas F. Miller III

We introduce kernel addition Gaussian process regression (KA-GPR), a machine learning strategy within molecular-orbital-based machine learning (MOB-ML), to learn the total correlation energies of general electronic structure theories for both closed- and open-shell systems.

BIG-bench Machine Learning · GPR
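As a rough illustration of the kernel-addition idea (a GP whose covariance is a sum of simpler kernels), here is a minimal scikit-learn sketch; the random features and targets stand in for MOB-ML molecular-orbital features and reference correlation energies, and the particular kernels chosen are assumptions, not those used in KA-GPR.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, Matern, WhiteKernel

rng = np.random.default_rng(0)

# Placeholder data: in MOB-ML these would be molecular-orbital features
# and reference correlation energies.
X = rng.normal(size=(200, 5))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)

# Additive kernel: a smooth RBF term plus a rougher Matern term,
# with a white-noise term for regularization.
kernel = RBF(length_scale=1.0) + Matern(length_scale=1.0, nu=1.5) + WhiteKernel(noise_level=1e-2)

gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)
mean, std = gpr.predict(rng.normal(size=(5, 5)), return_std=True)
print(mean, std)
```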

Molecular Dipole Moment Learning via Rotationally Equivariant Gaussian Process Regression with Derivatives in Molecular-orbital-based Machine Learning

2 code implementations · 31 May 2022 · Jiace Sun, Lixue Cheng, Thomas F. Miller III

To demonstrate the ability of MOB-ML to function as generalized density-matrix functionals for molecular dipole moments and energies of organic molecules, we further apply the proposed MOB-ML approach to train and test the molecules from the QM9 dataset.

GPR
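Since a molecular dipole moment is the derivative of the energy with respect to an applied electric field, learning dipoles alongside energies amounts to Gaussian process regression with derivative observations. The one-dimensional NumPy sketch below illustrates that general idea for an RBF kernel; it is not the rotationally equivariant, MOB-feature-based model of the paper, and the toy function and length scale are assumptions.

```python
import numpy as np

l = 1.0  # RBF length scale (assumed)

def k(a, b):
    """RBF kernel k(x, x') = exp(-(x - x')^2 / (2 l^2))."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * d**2 / l**2)

def k_d2(a, b):
    """Derivative of k with respect to its second argument."""
    d = a[:, None] - b[None, :]
    return (d / l**2) * np.exp(-0.5 * d**2 / l**2)

def k_d1d2(a, b):
    """Mixed second derivative of k with respect to both arguments."""
    d = a[:, None] - b[None, :]
    return (1.0 / l**2 - d**2 / l**4) * np.exp(-0.5 * d**2 / l**2)

# Toy observations of f(x) = sin(x): values at xv and derivatives cos(x) at xd.
xv = np.linspace(0.0, 3.0, 5)
xd = np.linspace(0.0, 3.0, 5)
y = np.concatenate([np.sin(xv), np.cos(xd)])

# Joint covariance over (values, derivatives), plus a small jitter.
K = np.block([[k(xv, xv),      k_d2(xv, xd)],
              [k_d2(xv, xd).T, k_d1d2(xd, xd)]]) + 1e-8 * np.eye(len(y))
alpha = np.linalg.solve(K, y)

# Posterior mean of f at test points, informed by both values and derivatives.
xs = np.linspace(0.0, 3.0, 50)
mean = np.hstack([k(xs, xv), k_d2(xs, xd)]) @ alpha
print("max error vs. sin(x):", np.max(np.abs(mean - np.sin(xs))))
```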

ODBO: Bayesian Optimization with Search Space Prescreening for Directed Protein Evolution

2 code implementations · 19 May 2022 · Lixue Cheng, ZiYi Yang, ChangYu Hsieh, Benben Liao, Shengyu Zhang

Directed evolution is a versatile technique in protein engineering that mimics the process of natural selection by iteratively alternating between mutagenesis and screening in order to search for sequences that optimize a given property of interest, such as catalytic activity and binding affinity to a specified target.

Bayesian Optimization · Experimental Design · +1
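Below is a compact sketch of the two-stage recipe named in the title, under placeholder assumptions: candidates are prescreened with a cheap proximity score (ODBO itself uses an outlier-detection model for prescreening), and a Gaussian-process surrogate with expected improvement then selects the next candidate from the surviving pool. The candidate featurization and fitness function here are synthetic stand-ins for encoded protein sequences and experimental measurements.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)

# Placeholder candidate library and fitness landscape.
candidates = rng.normal(size=(5000, 8))
def fitness(x):
    return -np.sum((x - 0.5) ** 2, axis=-1)

# Initial measured candidates.
X_obs = candidates[rng.choice(len(candidates), 20, replace=False)]
y_obs = fitness(X_obs)

# Stage 1: search-space prescreening with a cheap score (here: proximity to the
# best measured points; ODBO uses an outlier-detection model instead).
top = X_obs[np.argsort(y_obs)[-5:]]
dists = np.linalg.norm(candidates[:, None, :] - top[None, :, :], axis=-1)
pool = candidates[np.argsort(dists.min(axis=1))[:500]]   # keep the 500 closest

# Stage 2: GP-based Bayesian optimization with expected improvement over the pool.
gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(X_obs, y_obs)
mu, sigma = gp.predict(pool, return_std=True)
best = y_obs.max()
z = (mu - best) / np.maximum(sigma, 1e-9)
ei = (mu - best) * norm.cdf(z) + sigma * norm.pdf(z)
print("suggested candidate to measure next:", pool[np.argmax(ei)])
```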

Accurate Molecular-Orbital-Based Machine Learning Energies via Unsupervised Clustering of Chemical Space

no code implementations · 21 Apr 2022 · Lixue Cheng, Jiace Sun, Thomas F. Miller III

The resulting clusters from supervised or unsupervised clustering are then combined with scalable Gaussian process regression (GPR) or linear regression (LR) to learn molecular energies accurately by fitting a local regression model in each cluster.

BIG-bench Machine Learning · Clustering · +2
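A minimal sketch of the cluster-then-regress recipe described above: unsupervised K-means over the features, an independent linear regression fit in each cluster, and cluster-routed prediction. The synthetic features and targets stand in for MOB features and molecular energies, and plain K-means with linear regression is a simplifying assumption relative to the paper's clustering and GPR models.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Placeholder features/targets; MOB-ML would use molecular-orbital features
# and correlation energies here.
X = rng.normal(size=(1000, 6))
y = np.where(X[:, 0] > 0, 2.0 * X[:, 1], -3.0 * X[:, 2]) + 0.05 * rng.normal(size=1000)

# Unsupervised clustering of the feature space.
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)

# One local regression model per cluster.
models = {c: LinearRegression().fit(X[kmeans.labels_ == c], y[kmeans.labels_ == c])
          for c in range(kmeans.n_clusters)}

def predict(X_new):
    """Route each point to its cluster and apply that cluster's local model."""
    c_new = kmeans.predict(X_new)
    out = np.empty(len(X_new))
    for c, model in models.items():
        mask = c_new == c
        if mask.any():
            out[mask] = model.predict(X_new[mask])
    return out

print(predict(rng.normal(size=(5, 6))))
```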

Molecular Energy Learning Using Alternative Blackbox Matrix-Matrix Multiplication Algorithm for Exact Gaussian Process

2 code implementations · NeurIPS Workshop AI4Science 2021 · Jiace Sun, Lixue Cheng, Thomas F. Miller III

The training of MOB-ML was limited to 220 molecules, and BBMM and AltBBMM scale the training of MOB-ML up by over 30 times to 6500 molecules (more than a million pair energies).

BIG-bench Machine Learning
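BBMM-style exact GP inference replaces Cholesky factorization with iterative solvers that touch the kernel matrix only through matrix multiplications. The NumPy sketch below shows that core idea, solving (K + sigma^2 I) alpha = y by conjugate gradients using only matrix-vector products; it omits the paper's alternative-BBMM specifics, preconditioning, and GPU blocking, and the data are synthetic stand-ins for MOB features and pair energies.

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=1.0):
    """Dense RBF kernel matrix between rows of A and B."""
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-0.5 * d2 / lengthscale**2)

def cg_solve(matvec, b, tol=1e-8, max_iter=1000):
    """Conjugate gradients: solve A x = b using only products A @ v."""
    x = np.zeros_like(b)
    r = b - matvec(x)
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Ap = matvec(p)
        step = rs / (p @ Ap)
        x += step * p
        r -= step * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = np.sin(X[:, 0]) + 0.01 * rng.normal(size=500)

K = rbf_kernel(X, X)
noise = 1e-2
alpha = cg_solve(lambda v: K @ v + noise * v, y)     # (K + sigma^2 I)^-1 y, no Cholesky

X_test = rng.normal(size=(5, 4))
print(rbf_kernel(X_test, X) @ alpha)                 # GP posterior mean at test points
```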

Regression-clustering for Improved Accuracy and Training Cost with Molecular-Orbital-Based Machine Learning

no code implementations · 4 Sep 2019 · Lixue Cheng, Nikola B. Kovachki, Matthew Welborn, Thomas F. Miller III

Machine learning (ML) in the representation of molecular-orbital-based (MOB) features has been shown to be an accurate and transferable approach to the prediction of post-Hartree-Fock correlation energies.

BIG-bench Machine Learning · Clustering · +2

A Universal Density Matrix Functional from Molecular Orbital-Based Machine Learning: Transferability across Organic Molecules

no code implementations · 10 Jan 2019 · Lixue Cheng, Matthew Welborn, Anders S. Christensen, Thomas F. Miller III

Finally, a transferability test in which models trained for seven-heavy-atom systems are used to predict energies for thirteen-heavy-atom systems reveals that MOB-ML reaches chemical accuracy with 36-fold fewer training calculations than $\Delta$-ML (140 versus 5000 training calculations).

BIG-bench Machine Learning
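For context on the Δ-ML baseline used in the comparison above: Δ-ML learns the difference between a cheap reference method and the expensive target method, so a prediction is the cheap energy plus a learned correction. A schematic sketch with entirely synthetic energies and features follows; the choice of kernel ridge regression here is an assumption, not the model used in the cited comparison.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(0)

# Entirely synthetic stand-ins for molecular features and energies.
X = rng.normal(size=(300, 10))
w_cheap = rng.normal(size=10)
E_cheap = X @ w_cheap                              # e.g. a low-level method
E_expensive = E_cheap + 0.1 * np.sin(X[:, 0])      # e.g. a correlated method

# Delta-ML: learn only the correction E_expensive - E_cheap.
model = KernelRidge(kernel="rbf", alpha=1e-3, gamma=0.5).fit(X, E_expensive - E_cheap)

# Prediction on new molecules: cheap energy plus the learned correction.
X_new = rng.normal(size=(5, 10))
E_pred = X_new @ w_cheap + model.predict(X_new)
print(E_pred)
```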
