Search Results for author: Daniel Huang

Found 8 papers, 5 papers with code

High-Dimensional Gaussian Process Regression with Soft Kernel Interpolation

1 code implementation • 28 Oct 2024 • Chris Camaño, Daniel Huang

We introduce Soft Kernel Interpolation (SoftKI), designed for scalable Gaussian Process (GP) regression on high-dimensional datasets; see the illustrative sketch after this entry.

regression
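
The entry above only names the method, so as rough orientation the sketch below shows a generic inducing-point, interpolation-style GP regression in NumPy: a small set of inducing points carries the kernel, and every data point is softly interpolated onto them. The RBF kernel, the softmax weights, and all sizes here are assumptions for illustration, not the SoftKI algorithm from the paper.

```python
import numpy as np

def rbf(A, B, lengthscale=1.0):
    """Squared-exponential kernel between the rows of A and the rows of B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale ** 2)

def softmax_weights(X, Z, temperature=1.0):
    """Soft interpolation weights from each row of X onto the inducing points Z."""
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    logits = -d2 / temperature
    logits -= logits.max(axis=1, keepdims=True)
    W = np.exp(logits)
    return W / W.sum(axis=1, keepdims=True)

def interp_gp_predict(Xtr, ytr, Xte, Z, noise=0.1):
    """GP predictive mean with the kernel approximated as W K_zz W^T."""
    Kzz = rbf(Z, Z) + 1e-6 * np.eye(len(Z))
    Wtr = softmax_weights(Xtr, Z)                  # (n, m)
    Wte = softmax_weights(Xte, Z)                  # (t, m)
    Ktr = Wtr @ Kzz @ Wtr.T                        # approximate train covariance
    alpha = np.linalg.solve(Ktr + noise ** 2 * np.eye(len(Xtr)), ytr)
    return Wte @ Kzz @ Wtr.T @ alpha               # approximate predictive mean

rng = np.random.default_rng(0)
Xtr = rng.uniform(-3, 3, size=(200, 5))            # 200 training points in 5 dimensions
ytr = np.sin(Xtr.sum(axis=1)) + 0.1 * rng.normal(size=200)
Z = rng.uniform(-3, 3, size=(20, 5))               # 20 inducing points (could be learned)
Xte = rng.uniform(-3, 3, size=(5, 5))
print(interp_gp_predict(Xtr, ytr, Xte, Z))
```

Note that this sketch still solves a dense n-by-n system, so it does not demonstrate the scalability that motivates the paper; it only illustrates the interpolation idea.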

On Training Derivative-Constrained Neural Networks

2 code implementations • 2 Oct 2023 • KaiChieh Lo, Daniel Huang

We use the term derivative-constrained (DC) NN for the setting in which the (partial) derivatives of a neural network's (NN's) predictions with respect to its inputs are used as an additional training signal.
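
As an illustration of that setting (not the paper's training method), the PyTorch sketch below fits a 1-D regression target and adds a penalty that pushes the network's input derivative, obtained with autograd, toward a known target derivative. The architecture, the sin/cos targets, and the weight `lam` are assumptions made up for the example.

```python
import torch
import torch.nn as nn

# Hypothetical 1-D target with a known derivative: y = sin(x), dy/dx = cos(x).
x = torch.linspace(-3.0, 3.0, 256).unsqueeze(1)
y = torch.sin(x)
dy = torch.cos(x)                        # derivative targets: the extra training signal

net = nn.Sequential(nn.Linear(1, 64), nn.Tanh(), nn.Linear(64, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
lam = 1.0                                # weight on the derivative term (assumed)

for step in range(2000):
    opt.zero_grad()
    x_in = x.clone().requires_grad_(True)
    pred = net(x_in)
    # d(pred)/dx via autograd; create_graph=True lets us backprop through the derivative.
    dpred = torch.autograd.grad(pred.sum(), x_in, create_graph=True)[0]
    loss = ((pred - y) ** 2).mean() + lam * ((dpred - dy) ** 2).mean()
    loss.backward()
    opt.step()
```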

ExpeL: LLM Agents Are Experiential Learners

1 code implementation • 20 Aug 2023 • Andrew Zhao, Daniel Huang, Quentin Xu, Matthieu Lin, Yong-Jin Liu, Gao Huang

Research interest in applying large language models (LLMs) to decision-making tasks has surged recently, driven by the extensive world knowledge embedded in LLMs.

Decision Making, Transfer Learning, +1

Push: Concurrent Probabilistic Programming for Bayesian Deep Learning

1 code implementation • 10 Jun 2023 • Daniel Huang, Chris Camaño, Jonathan Tsegaye, Jonathan Austin Gale

We introduce a library called Push that takes a probabilistic programming approach to Bayesian deep learning (BDL); see the illustrative sketch after this entry.

Deep Learning, Probabilistic Programming
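
The entry describes Push only at a high level, so the sketch below shows one standard Bayesian-deep-learning baseline instead: a small deep ensemble whose independently trained members play a role loosely analogous to particles, with the spread of their predictions serving as a crude uncertainty estimate. This is a generic PyTorch illustration, not Push's API or its concurrent execution model.

```python
import torch
import torch.nn as nn

def make_net():
    return nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))

# Toy 1-D regression data.
x = torch.linspace(-2.0, 2.0, 128).unsqueeze(1)
y = x.pow(3) + 0.1 * torch.randn_like(x)

# Train several independently initialized networks ("particles" only by analogy).
particles = [make_net() for _ in range(5)]
for net in particles:
    opt = torch.optim.Adam(net.parameters(), lr=1e-2)
    for _ in range(500):
        opt.zero_grad()
        loss = ((net(x) - y) ** 2).mean()
        loss.backward()
        opt.step()

# The ensemble mean is the prediction; the spread is a crude uncertainty estimate.
x_test = torch.linspace(-3.0, 3.0, 10).unsqueeze(1)
with torch.no_grad():
    preds = torch.stack([net(x_test) for net in particles])   # (5, 10, 1)
mean, std = preds.mean(dim=0), preds.std(dim=0)
print(torch.cat([mean, std], dim=1))
```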

On Learning to Prove

no code implementations • 24 Apr 2019 • Daniel Huang

In this paper, we consider the problem of learning a first-order theorem prover that uses a representation of beliefs in mathematical claims to construct proofs.

Model Selection

GamePad: A Learning Environment for Theorem Proving

1 code implementation • ICLR 2019 • Daniel Huang, Prafulla Dhariwal, Dawn Song, Ilya Sutskever

In this paper, we introduce a system called GamePad that can be used to explore the application of machine learning methods to theorem proving in the Coq proof assistant.

Automated Theorem Proving, Position

Augur: Data-Parallel Probabilistic Modeling

no code implementations • NeurIPS 2014 • Jean-Baptiste Tristan, Daniel Huang, Joseph Tassarotti, Adam C. Pocock, Stephen Green, Guy L. Steele

We show that the compiler can generate data-parallel inference code that scales to thousands of GPU cores by exploiting the conditional independence relationships in the Bayesian network; see the illustrative sketch after this entry.

Probabilistic Programming
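
To make the conditional-independence point concrete, the NumPy sketch below runs Gibbs sampling on a toy two-component Gaussian mixture: given the cluster means, every assignment is conditionally independent of the others, so one vectorized step resamples all of them at once, which is the kind of update that maps onto one GPU thread per data point. The model and all constants are illustrative; this is not code produced by the Augur compiler.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data from two 1-D Gaussian clusters with known, shared sigma.
x = np.concatenate([rng.normal(-2, 0.5, 300), rng.normal(3, 0.5, 300)])
K, sigma = 2, 0.5
mu = rng.normal(0.0, 1.0, K)              # cluster means to be inferred

for sweep in range(100):
    # 1) Given mu, all assignments z_i are conditionally independent, so one
    #    vectorized step resamples every z_i (one GPU thread per point, in spirit).
    logp = -0.5 * ((x[:, None] - mu[None, :]) / sigma) ** 2
    p = np.exp(logp - logp.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    z = (rng.random(len(x))[:, None] > np.cumsum(p, axis=1)).sum(axis=1)
    z = np.minimum(z, K - 1)              # guard against floating-point round-off
    # 2) Given the assignments, resample each cluster mean (flat prior, known sigma).
    for k in range(K):
        xs = x[z == k]
        n = max(len(xs), 1)
        mu[k] = rng.normal(xs.mean() if len(xs) else 0.0, sigma / np.sqrt(n))

print(np.sort(mu))                        # should end up near [-2, 3]
```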

Augur: a Modeling Language for Data-Parallel Probabilistic Inference

no code implementations • 12 Dec 2013 • Jean-Baptiste Tristan, Daniel Huang, Joseph Tassarotti, Adam Pocock, Stephen J. Green, Guy L. Steele Jr

In this paper, we present a probabilistic programming language and compiler for Bayesian networks designed to make effective use of data-parallel architectures such as GPUs.

Code Completion, Probabilistic Programming
