1 code implementation • 28 Oct 2024 • Chris Camaño, Daniel Huang
We introduce Soft Kernel Interpolation (SoftKI), designed for scalable Gaussian Process (GP) regression on high-dimensional datasets.
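A minimal sketch of the kernel-interpolation idea behind methods in this family (not the SoftKI implementation itself): the exact n x n kernel matrix is approximated through a small set of interpolation points, here with softmax-normalized weights standing in for a learned interpolation scheme. All names, sizes, and the choice of softmax weights are illustrative assumptions.

```python
import numpy as np

def rbf(a, b, ls=1.0):
    # Squared-exponential kernel between the row vectors of a and b.
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls**2)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))   # training inputs
Z = rng.normal(size=(20, 3))    # interpolation (inducing) points, m << n

# Softmax interpolation weights from each input to the interpolation points.
logits = -0.5 * ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
W = np.exp(logits - logits.max(axis=1, keepdims=True))
W /= W.sum(axis=1, keepdims=True)

# Structured approximation K ~ W Kzz W^T: O(n m) weights plus an m x m
# kernel, instead of materializing the full n x n kernel matrix.
K_approx = W @ rbf(Z, Z) @ W.T
```

The payoff is that downstream GP algebra (solves, log-determinants) can exploit the low-rank-plus-structure form rather than the dense n x n matrix.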
2 code implementations • 2 Oct 2023 • KaiChieh Lo, Daniel Huang
We refer to the setting where the (partial) derivatives of a neural network's (NN's) predictions with respect to its inputs are used as an additional training signal as a derivative-constrained (DC) NN.
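The derivative-constrained setup can be sketched with a tiny one-hidden-layer network whose input derivative is available in closed form, so the loss can penalize both value error and derivative error. The network, targets, and penalty weight `lam` are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
w1 = rng.normal(size=(1, 8))
w2 = rng.normal(size=(8, 1))

def forward(x):
    # y = tanh(x @ w1) @ w2 for scalar inputs x of shape (n, 1).
    return np.tanh(x @ w1) @ w2

def dforward(x):
    # Analytic dy/dx via the chain rule: d tanh(u)/du = 1 - tanh(u)^2.
    h = np.tanh(x @ w1)
    return ((1 - h**2) * w1) @ w2

x = np.linspace(-1.0, 1.0, 16)[:, None]
y_true, dy_true = np.sin(x), np.cos(x)  # target function and its derivative

lam = 0.5  # weight on the derivative constraint (illustrative)
value_loss = np.mean((forward(x) - y_true) ** 2)
deriv_loss = np.mean((dforward(x) - dy_true) ** 2)
dc_loss = value_loss + lam * deriv_loss  # the combined DC training objective
```

In practice the input derivative would come from automatic differentiation rather than a hand-derived formula, but the structure of the objective is the same.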
1 code implementation • 20 Aug 2023 • Andrew Zhao, Daniel Huang, Quentin Xu, Matthieu Lin, Yong-Jin Liu, Gao Huang
Research interest in applying large language models (LLMs) to decision-making tasks has recently surged, driven by the extensive world knowledge embedded in LLMs.
1 code implementation • 10 Jun 2023 • Daniel Huang, Chris Camaño, Jonathan Tsegaye, Jonathan Austin Gale
We introduce a library called Push that takes a probabilistic programming approach to Bayesian deep learning (BDL).
no code implementations • 24 Apr 2019 • Daniel Huang
In this paper, we consider the problem of learning a first-order theorem prover that uses a representation of beliefs in mathematical claims to construct proofs.
1 code implementation • ICLR 2019 • Daniel Huang, Prafulla Dhariwal, Dawn Song, Ilya Sutskever
In this paper, we introduce a system called GamePad that can be used to explore the application of machine learning methods to theorem proving in the Coq proof assistant.
no code implementations • NeurIPS 2014 • Jean-Baptiste Tristan, Daniel Huang, Joseph Tassarotti, Adam C. Pocock, Stephen Green, Guy L. Steele
We show that the compiler can generate data-parallel inference code scalable to thousands of GPU cores by making use of the conditional independence relationships in the Bayesian network.
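The role of conditional independence can be illustrated (well outside the compiler itself) with an even-odd Gibbs sweep over an Ising-style chain: given the odd sites, every even site is conditionally independent of the others, so an entire sublattice can be resampled in one vectorized, data-parallel step. The chain model and inverse temperature `beta` are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, beta = 1000, 0.5
s = rng.choice([-1.0, 1.0], size=n)  # spin chain state

def gibbs_halfstep(s, idx):
    # All sites in `idx` are conditionally independent given their neighbors,
    # so the whole block is resampled at once (the data-parallel step that a
    # GPU backend would run across thousands of cores).
    left = np.roll(s, 1)[idx]
    right = np.roll(s, -1)[idx]
    p_up = 1.0 / (1.0 + np.exp(-2.0 * beta * (left + right)))
    s[idx] = np.where(rng.random(idx.size) < p_up, 1.0, -1.0)
    return s

even, odd = np.arange(0, n, 2), np.arange(1, n, 2)
for _ in range(100):
    s = gibbs_halfstep(s, even)  # even sublattice updated in parallel
    s = gibbs_halfstep(s, odd)   # odd sublattice updated in parallel
```

The same checkerboard decomposition generalizes: any set of variables that a Bayesian network's structure proves conditionally independent can be sampled as one parallel batch.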
no code implementations • 12 Dec 2013 • Jean-Baptiste Tristan, Daniel Huang, Joseph Tassarotti, Adam Pocock, Stephen J. Green, Guy L. Steele Jr.
In this paper, we present a probabilistic programming language and compiler for Bayesian networks designed to make effective use of data-parallel architectures such as GPUs.