Search Results for author: Chai Wah Wu

Found 10 papers, 1 paper with code

Active Learning of Quantum System Hamiltonians yields Query Advantage

no code implementations • 29 Dec 2021 • Arkopal Dutt, Edwin Pednault, Chai Wah Wu, Sarah Sheldon, John Smolin, Lev Bishop, Isaac L. Chuang

Hamiltonian learning is an important procedure in quantum system identification, calibration, and successful operation of quantum computers.

Active Learning

Dither computing: a hybrid deterministic-stochastic computing framework

no code implementations • 22 Feb 2021 • Chai Wah Wu

We propose an alternative framework, called dither computing, that combines aspects of stochastic computing and its deterministic variants; it performs computation with similar efficiency, is unbiased, and achieves variance and MSE on the optimal order of $\Theta(\frac{1}{N^2})$.
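The listing does not include the dither computing construction itself, so the sketch below only illustrates the baseline it improves on: unipolar stochastic computing, where a value in [0, 1] is encoded as a Bernoulli bitstream and multiplication reduces to an AND of two streams. This baseline is unbiased but has variance $\Theta(\frac{1}{N})$; the paper's hybrid scheme keeps the unbiasedness while bringing variance and MSE down to $\Theta(\frac{1}{N^2})$. Function and parameter names here are illustrative.

    import numpy as np

    rng = np.random.default_rng(0)

    def sc_multiply(x, y, N=1024):
        # Unipolar stochastic computing: encode x and y as Bernoulli
        # bitstreams and AND them; the mean of the output stream is an
        # unbiased estimate of x*y with variance Theta(1/N).
        bx = rng.random(N) < x
        by = rng.random(N) < y
        return np.mean(bx & by)

    x, y = 0.3, 0.7
    estimates = [sc_multiply(x, y) for _ in range(200)]
    print(np.mean(estimates), x * y)   # averages close to 0.21 (unbiased)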

Synchronization in dynamical systems coupled via multiple directed networks

no code implementations • 14 Nov 2020 • Chai Wah Wu

We study synchronization and consensus in a group of dynamical systems coupled via multiple directed networks.
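As a toy illustration of coupling through more than one directed network (not the paper's general model class), the sketch below runs a linear consensus protocol in which the first state coordinate of each agent is coupled through one directed ring and the second coordinate through the ring oriented the other way; both graphs and the step size are hypothetical choices.

    import numpy as np

    def laplacian(A):
        # Out-degree (row-sum) graph Laplacian of a directed adjacency matrix.
        return np.diag(A.sum(axis=1)) - A

    A1 = np.array([[0, 1, 0],
                   [0, 0, 1],
                   [1, 0, 0]], float)   # directed ring 1 -> 2 -> 3 -> 1
    A2 = A1.T                           # the same ring, reversed
    L1, L2 = laplacian(A1), laplacian(A2)

    x = np.random.default_rng(1).random((3, 2))   # 3 agents, 2-dim states
    eps = 0.1
    for _ in range(500):
        x[:, 0] -= eps * (L1 @ x[:, 0])   # coordinate 0 coupled via network 1
        x[:, 1] -= eps * (L2 @ x[:, 1])   # coordinate 1 coupled via network 2
    print(x)   # each column converges toward a common consensus value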

A Family of Robust Stochastic Operators for Reinforcement Learning

no code implementations • NeurIPS 2019 • Yingdong Lu, Mark Squillante, Chai Wah Wu

We consider a new family of stochastic operators for reinforcement learning with the goal of alleviating the negative effects of, and becoming more robust to, approximation or estimation errors.

reinforcement-learning, Reinforcement Learning (RL)
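The listing states only the goal of the operator family, not its definition, so the sketch below shows just the standard Bellman optimality operator on a tabular MDP, i.e. the fixed-point map that operator families of this kind typically modify or perturb. It is a point of reference, not the paper's robust stochastic operators; the toy MDP and all names are illustrative.

    import numpy as np

    def bellman_operator(Q, P, R, gamma):
        # Standard Bellman optimality operator:
        # (TQ)(s, a) = R(s, a) + gamma * sum_s' P(s'|s, a) * max_a' Q(s', a').
        S, A = R.shape
        TQ = np.empty_like(Q)
        for a in range(A):
            TQ[:, a] = R[:, a] + gamma * P[a] @ Q.max(axis=1)
        return TQ

    # Tiny 2-state, 2-action MDP; repeated application converges to Q*.
    P = np.array([[[0.9, 0.1], [0.2, 0.8]],
                  [[0.5, 0.5], [0.6, 0.4]]])
    R = np.array([[1.0, 0.0], [0.0, 2.0]])
    Q = np.zeros((2, 2))
    for _ in range(200):
        Q = bellman_operator(Q, P, R, gamma=0.9)
    print(Q)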

A General Markov Decision Process Framework for Directly Learning Optimal Control Policies

no code implementations • 28 May 2019 • Yingdong Lu, Mark S. Squillante, Chai Wah Wu

We consider a new form of reinforcement learning (RL) that is based on opportunities to directly learn the optimal control policy and a general Markov decision process (MDP) framework devised to support these opportunities.

Q-Learning, Reinforcement Learning (RL)

TableNet: a multiplier-less implementation of neural networks for inferencing

no code implementations • 25 May 2019 • Chai Wah Wu

We consider the use of look-up tables (LUT) to simplify the hardware implementation of a deep learning network for inferencing after weights have been successfully trained.
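A minimal sketch of the general idea in this snippet, under illustrative assumptions that are not necessarily TableNet's: after training, both activations and weights are quantized to 4-bit indices, so every multiplication in a dot product can be replaced by a read from a precomputed 16 x 16 look-up table.

    import numpy as np

    act_levels = np.linspace(0.0, 1.0, 16)    # dequantized activation values
    wgt_levels = np.linspace(-1.0, 1.0, 16)   # dequantized weight values
    LUT = np.outer(act_levels, wgt_levels)    # LUT[i, j] = act_i * wgt_j

    def quantize(x, levels):
        # Index of the nearest quantization level for each entry of x.
        return np.abs(x[..., None] - levels).argmin(axis=-1)

    def lut_matvec(a_idx, w_idx):
        # Multiplier-less dot products: each product is a table look-up.
        return LUT[a_idx[None, :], w_idx].sum(axis=1)

    rng = np.random.default_rng(2)
    a = rng.random(8)                         # activations
    W = rng.uniform(-1, 1, (4, 8))            # trained weights
    print(lut_matvec(quantize(a, act_levels), quantize(W, wgt_levels)))
    print(W @ a)                              # close to the LUT result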

ProdSumNet: reducing model parameters in deep neural networks via product-of-sums matrix decompositions

1 code implementation • 6 Sep 2018 • Chai Wah Wu

We show that good accuracy on MNIST and Fashion MNIST can be obtained using a relatively small number of trainable parameters.
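The snippet reports only the accuracy result, so the sketch below just illustrates one product-of-sums style parameter reduction, a Kronecker product factorization of a dense layer's weight matrix; whether this matches the specific decompositions used in the paper is not stated in the listing.

    import numpy as np

    # A dense 784 -> 100 layer has 78,400 weights.  Parameterizing the
    # weight matrix as W = kron(A, B), with A and B both 28 x 10, uses
    # only 280 + 280 = 560 trainable parameters.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((28, 10)) * 0.1
    B = rng.standard_normal((28, 10)) * 0.1
    W = np.kron(A, B)                    # shape (784, 100)

    x = rng.standard_normal(784)         # a flattened 28 x 28 input
    y = W.T @ x                          # layer output, 100 units
    # The same product without materializing W (Kronecker identity):
    y_fast = (A.T @ x.reshape(28, 28) @ B).reshape(-1)
    print(np.allclose(y, y_fast), W.shape, A.size + B.size)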

A General Family of Robust Stochastic Operators for Reinforcement Learning

no code implementations • 21 May 2018 • Yingdong Lu, Mark S. Squillante, Chai Wah Wu

We consider a new family of operators for reinforcement learning with the goal of alleviating the negative effects of, and becoming more robust to, approximation or estimation errors.

reinforcement-learning, Reinforcement Learning (RL)

Designing communication systems via iterative improvement: error correction coding with Bayes decoder and codebook optimized for source symbol error

no code implementations • 18 May 2018 • Chai Wah Wu

For this metric, the positions of the bits are not relevant to the decoding, and in many noise models, not relevant to the BER either.

Decoder
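As a small illustration of the decoding side only: with 0/1 loss on the source symbol and a uniform prior, the Bayes decoder reduces to picking the symbol with the highest posterior probability given the received word. The codebook, channel, and parameters below are illustrative; the paper additionally optimizes the codebook for the source symbol error metric.

    import numpy as np

    # 4 source symbols, each mapped to a 5-bit codeword, sent over a binary
    # symmetric channel with crossover probability p (illustrative values).
    p = 0.1
    codebook = np.array([[0, 0, 0, 0, 0],
                         [1, 1, 1, 0, 0],
                         [0, 0, 1, 1, 1],
                         [1, 1, 0, 1, 1]])
    prior = np.full(4, 0.25)              # uniform source symbols

    def bayes_decode(y):
        dist = (codebook != y).sum(axis=1)                    # Hamming distances
        lik = p**dist * (1 - p)**(codebook.shape[1] - dist)   # channel likelihoods
        return int(np.argmax(prior * lik))                    # MAP symbol estimate

    # Monte Carlo estimate of the source symbol error rate.
    rng = np.random.default_rng(0)
    trials, errors = 20000, 0
    for _ in range(trials):
        s = rng.integers(4)
        y = codebook[s] ^ (rng.random(5) < p)                 # BSC noise
        errors += bayes_decode(y) != s
    print(errors / trials)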

Can machine learning identify interesting mathematics? An exploration using empirically observed laws

no code implementations • 18 May 2018 • Chai Wah Wu

We explore the possibility of using machine learning to identify interesting mathematical structures by using certain quantities that serve as fingerprints.

BIG-bench Machine Learning
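The snippet does not say what the fingerprint quantities are, so the toy sketch below only shows the general recipe: compute a small numeric feature vector for each integer sequence and flag sequences whose fingerprints sit far from the others. The features and the distance-based criterion are hypothetical, not the ones used in the paper.

    import numpy as np

    def fingerprint(seq):
        # Hypothetical fingerprint: a few simple statistics of the first terms.
        a = np.array(seq[:20], dtype=float)
        ratios = a[1:] / np.maximum(a[:-1], 1)
        return np.array([np.log1p(a).mean(),   # typical magnitude
                         ratios.mean(),        # average growth factor
                         np.mean(a % 2)])      # parity balance

    sequences = {
        "squares":  [n * n for n in range(1, 21)],
        "primes":   [2, 3, 5, 7, 11, 13, 17, 19, 23, 29,
                     31, 37, 41, 43, 47, 53, 59, 61, 67, 71],
        "powers_2": [2 ** n for n in range(1, 21)],
        "constant": [7] * 20,
    }
    F = np.array([fingerprint(s) for s in sequences.values()])
    F = (F - F.mean(axis=0)) / F.std(axis=0)          # standardize features
    dist = np.linalg.norm(F[:, None] - F[None, :], axis=-1).mean(axis=1)
    for name, d in zip(sequences, dist):
        print(f"{name:10s} mean fingerprint distance to others: {d:.2f}")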
