no code implementations • ICML 2020 • Geon-Hyeong Kim, Youngsoo Jang, Hongseok Yang, Kee-Eung Kim
The estimated future likelihoods form the core of our new low-variance gradient estimator.
no code implementations • 6 Dec 2023 • TaeYoung Kim, Hongseok Yang
The recent theoretical analysis of deep neural networks in their infinite-width limits has deepened our understanding of initialisation, feature learning, and training of those networks, and brought new practical techniques for finding appropriate hyperparameters, learning network weights, and performing inference.
1 code implementation • 13 Nov 2023 • Tien Dat Nguyen, Jinwoo Kim, Hongseok Yang, Seunghoon Hong
We present a general framework for symmetrizing an arbitrary neural-network architecture and making it equivariant with respect to a given group.
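One standard ingredient of such frameworks is group averaging: averaging a base network's outputs over the group action yields an equivariant map. A minimal sketch under illustrative assumptions (a toy linear-plus-tanh network and the cyclic group of shifts; not the paper's actual construction or API):

```python
import numpy as np

def mlp(x, W):
    # A toy non-equivariant base network: a fixed linear layer + nonlinearity.
    return np.tanh(W @ x)

def shift(x, k):
    # Group action: cyclic shift by k positions.
    return np.roll(x, k)

def symmetrize(f, x, W):
    # Average f over the cyclic group C_n acting on inputs and outputs:
    # g(x) = (1/|G|) * sum_k shift(f(shift(x, -k)), k).
    # The result is equivariant by construction.
    n = len(x)
    return sum(shift(f(shift(x, -k), W), k) for k in range(n)) / n

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4))
x = rng.normal(size=4)

# Equivariance check: the symmetrized network commutes with the shift.
lhs = symmetrize(mlp, shift(x, 1), W)
rhs = shift(symmetrize(mlp, x, W), 1)
print(np.allclose(lhs, rhs))  # True
```

Exact averaging costs a factor of |G|, which is why practical symmetrization schemes for large or continuous groups resort to sampling or canonicalization.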
no code implementations • 1 Jun 2023 • Hyunsu Kim, Hyungi Lee, Hongseok Yang, Juho Lee
The key component of our method is what we call the equivariance regularizer for a given type of symmetry, which measures how equivariant a model is with respect to symmetries of that type.
1 code implementation • 2 Feb 2023 • Francois Caron, Fadhel Ayed, Paul Jung, Hoil Lee, Juho Lee, Hongseok Yang
We consider the optimisation of large and shallow neural networks via gradient flow, where the output of each hidden node is scaled by some positive parameter.
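A concrete toy instance of this setting, with illustrative names and an explicit-Euler step standing in for the continuous-time gradient flow (not the paper's analysis):

```python
import numpy as np

# Shallow network whose hidden-node outputs are scaled by positive
# parameters a_j: f(x) = sum_j a_j * tanh(w_j . x), with a_j > 0.
def forward(x, W, a):
    return a @ np.tanh(W @ x)

# One explicit-Euler step of gradient flow on the squared loss
# L = 0.5 * (f(x) - y)^2 (a discrete stand-in for the continuous flow).
def grad_step(x, y, W, a, lr=0.01):
    h = np.tanh(W @ x)
    err = a @ h - y
    grad_a = err * h
    grad_W = err * np.outer(a * (1 - h**2), x)
    return W - lr * grad_W, a - lr * grad_a

rng = np.random.default_rng(0)
x = rng.normal(size=2)
y = 1.0
W = rng.normal(size=(3, 2))
a = np.abs(rng.normal(size=3)) + 0.1  # positive scaling parameters

loss0 = 0.5 * (forward(x, W, a) - y) ** 2
W, a = grad_step(x, y, W, a)
loss1 = 0.5 * (forward(x, W, a) - y) ** 2
print(loss1 < loss0)  # True: a small step along the flow decreases the loss
```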
1 code implementation • 22 Aug 2022 • Wonyeol Lee, Xavier Rival, Hongseok Yang
We present a static analysis for discovering differentiable or more generally smooth parts of a given probabilistic program, and show how the analysis can be used to improve the pathwise gradient estimator, one of the most popular methods for posterior inference and model learning.
1 code implementation • 28 Jun 2022 • Sangho Lim, Eun-Gyeol Oh, Hongseok Yang
SATNet is a differentiable constraint solver with a custom backpropagation algorithm, which can be used as a layer in a deep-learning system.
1 code implementation • 17 May 2022 • Hoil Lee, Fadhel Ayed, Paul Jung, Juho Lee, Hongseok Yang, François Caron
Under this model, we show that each layer of the infinite-width neural network can be characterised by two simple quantities: a non-negative scalar parameter and a Lévy measure on the positive reals.
2 code implementations • 28 Feb 2022 • Geon-Hyeong Kim, Jongmin Lee, Youngsoo Jang, Hongseok Yang, Kee-Eung Kim
We consider the problem of learning from observation (LfO), in which the agent aims to mimic an expert's behavior from state-only expert demonstrations.
no code implementations • ICLR 2022 • Geon-Hyeong Kim, Seokin Seo, Jongmin Lee, Wonseok Jeon, HyeongJoo Hwang, Hongseok Yang, Kee-Eung Kim
We consider offline imitation learning (IL), which aims to mimic the expert's behavior from its demonstration without further interaction with the environment.
1 code implementation • ICLR 2022 • Hyungi Lee, Eunggu Yun, Hongseok Yang, Juho Lee
We show that simply introducing a scale prior on the last-layer parameters can turn infinitely-wide neural networks of any architecture into a richer class of stochastic processes.
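The mechanism can be illustrated in one dimension: in the infinite-width limit a network's output at a fixed input is Gaussian (an NNGP), and multiplying it by a random last-layer scale produces a scale mixture of Gaussians with heavier tails. A sketch under illustrative assumptions (a Gaussian stands in for the wide-network output; the inverse-gamma-type scale prior is our choice, not necessarily the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)

n = 200_000
f = rng.normal(size=n)                     # stand-in for NNGP outputs
s = 1.0 / np.sqrt(rng.gamma(5.0, 1.0, n))  # random scale, inverse-gamma-type prior
g = s * f                                  # scale mixture: a richer process

def excess_kurtosis(x):
    x = x - x.mean()
    return (x**4).mean() / (x**2).mean()**2 - 3.0

print(excess_kurtosis(f))  # ≈ 0 (Gaussian)
print(excess_kurtosis(g))  # > 0 (heavier tails than any Gaussian)
```

With this particular prior the marginal of `g` is Student-t-like, so the resulting stochastic process is no longer a Gaussian process.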
no code implementations • 18 Jun 2021 • Paul Jung, Hoil Lee, Jiho Lee, Hongseok Yang
We consider infinitely-wide multi-layer perceptrons (MLPs) which are limits of standard deep feed-forward neural networks.
no code implementations • NeurIPS Workshop AIPLANS 2021 • Gwonsoo Che, Hongseok Yang
The parameters of the networks are learnt from a training set by our meta-algorithm.
no code implementations • AABI Symposium 2021 • Hyunsu Kim, Juho Lee, Hongseok Yang
The non-stationary kernel problem refers to degraded performance caused by the transition kernel of the chain changing constantly throughout the run of the algorithm.
no code implementations • 1 Oct 2020 • David Tolpin, Yuan Zhou, Hongseok Yang
In this work, we cast policy search in stochastic domains as a Bayesian inference problem and provide a scheme for encoding such problems as nested probabilistic programs.
1 code implementation • 1 Oct 2020 • David Tolpin, Yuan Zhou, Tom Rainforth, Hongseok Yang
We tackle the problem of conditioning probabilistic programs on distributions of observable variables.
no code implementations • NeurIPS 2020 • Wonyeol Lee, Hangyeol Yu, Xavier Rival, Hongseok Yang
For these PAP functions, we propose a new type of derivatives, called intensional derivatives, and prove that these derivatives always exist and coincide with standard derivatives for almost all inputs.
no code implementations • 2 Mar 2020 • David Tolpin, Yuan Zhou, Hongseok Yang
Probabilistic programs with mixed support (both continuous and discrete latent random variables) commonly appear in many probabilistic programming systems (PPSs).
no code implementations • 22 Nov 2019 • Hyoungjin Lim, Gwonsoo Che, Wonyeol Lee, Hongseok Yang
We present an algorithm for marginalising changepoints in time-series models that assume a fixed number of unknown changepoints.
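For a single changepoint the idea of marginalisation can be shown directly: instead of sampling the changepoint position, sum it out against its prior. A toy sketch with a Gaussian mean-shift model and illustrative parameter choices (not the paper's algorithm, which handles general time-series models and multiple changepoints):

```python
import numpy as np

def log_norm(x, mu):
    # Elementwise log density of N(mu, 1).
    return -0.5 * ((x - mu) ** 2 + np.log(2 * np.pi))

def log_marginal(x, mu0=0.0, mu1=3.0):
    # log p(x) = log sum_tau p(tau) p(x | tau), with a uniform prior
    # over changepoint positions tau = 1..T-1 (log-sum-exp for stability).
    T = len(x)
    logps = [log_norm(x[:tau], mu0).sum() + log_norm(x[tau:], mu1).sum()
             - np.log(T - 1) for tau in range(1, T)]
    m = max(logps)
    return m + np.log(np.sum(np.exp(np.array(logps) - m)))

def map_changepoint(x, mu0=0.0, mu1=3.0):
    # Most likely changepoint position under the same model.
    T = len(x)
    lls = [log_norm(x[:tau], mu0).sum() + log_norm(x[tau:], mu1).sum()
           for tau in range(1, T)]
    return 1 + int(np.argmax(lls))

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0, 1, 10), rng.normal(3, 1, 10)])
print(log_marginal(x), map_changepoint(x))  # changepoint recovered near 10
```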
no code implementations • ICML 2020 • Yuan Zhou, Hongseok Yang, Yee Whye Teh, Tom Rainforth
Universal probabilistic programming systems (PPSs) provide a powerful framework for specifying rich probabilistic models.
1 code implementation • 20 Jul 2019 • Wonyeol Lee, Hangyeol Yu, Xavier Rival, Hongseok Yang
In this paper, we analyse one of the most fundamental and versatile variational inference algorithms, the score estimator, using tools from denotational semantics and program analysis.
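The score estimator itself is simple to state: for a gradient of an expectation, sample from the model and weight by the score function. A minimal sketch on a toy objective (our illustration, not the paper's semantic analysis):

```python
import numpy as np

rng = np.random.default_rng(0)

# Score estimator (REINFORCE) for d/d_theta E_{x ~ N(theta, 1)}[x^2]:
# grad = E[ f(x) * d/d_theta log p(x; theta) ], and for a unit-variance
# Gaussian the score is (x - theta). The true gradient is 2 * theta.
def score_grad(theta, n=1_000_000):
    x = rng.normal(theta, 1.0, n)
    return np.mean(x**2 * (x - theta))

theta = 1.5
print(score_grad(theta))  # ≈ 2 * theta = 3.0
```

Its appeal is that it needs no derivative of `f` or of the sampling path, which is exactly why its correctness can be studied for a very broad class of programs.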
1 code implementation • 6 Mar 2019 • Yuan Zhou, Bradley J. Gram-Hansen, Tobias Kohn, Tom Rainforth, Hongseok Yang, Frank Wood
We develop a new Low-level, First-order Probabilistic Programming Language (LF-PPL) suited for models containing a mix of continuous, discrete, and/or piecewise-continuous variables.
3 code implementations • 27 Sep 2018 • Jan-Willem van de Meent, Brooks Paige, Hongseok Yang, Frank Wood
We start with a discussion of model-based reasoning and explain why conditioning is a foundational computation central to the fields of probabilistic machine learning and artificial intelligence.
no code implementations • 25 Jun 2018 • Tom Rainforth, Yuan Zhou, Xiaoyu Lu, Yee Whye Teh, Frank Wood, Hongseok Yang, Jan-Willem van de Meent
We introduce inference trees (ITs), a new class of inference methods that build on ideas from Monte Carlo tree search to perform adaptive sampling in a manner that balances exploration with exploitation, ensures consistency, and alleviates pathologies in existing adaptive methods.
1 code implementation • NeurIPS 2018 • Wonyeol Lee, Hangyeol Yu, Hongseok Yang
We tackle the challenge by generalizing the reparameterization trick, one of the most effective techniques for addressing the variance issue for differentiable models, so that the trick works for non-differentiable models as well.
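For reference, the basic reparameterization trick that the paper generalizes, on the same kind of toy objective (a sketch of the standard trick, not of the paper's extension to non-differentiable models):

```python
import numpy as np

rng = np.random.default_rng(0)

# Reparameterization: write x ~ N(theta, 1) as x = theta + eps with
# eps ~ N(0, 1), and differentiate through the sample. For f(x) = x^2
# the pathwise gradient is 2 * (theta + eps); its variance is far below
# that of the score estimator on the same problem.
def reparam_grad(theta, n=10_000):
    eps = rng.normal(size=n)
    return np.mean(2.0 * (theta + eps))

theta = 1.5
print(reparam_grad(theta))  # ≈ 2 * theta = 3.0
```

Note that the pathwise gradient differentiates `f` along the sample path, which is precisely what breaks when the model is non-differentiable.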
1 code implementation • 7 Apr 2018 • Bradley Gram-Hansen, Yuan Zhou, Tobias Kohn, Tom Rainforth, Hongseok Yang, Frank Wood
Hamiltonian Monte Carlo (HMC) is arguably the dominant statistical inference algorithm used in most popular "first-order differentiable" Probabilistic Programming Languages (PPLs).
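A minimal HMC kernel for a standard-normal target, assuming gradients of the log density are available ("first-order differentiable"); this is an illustrative sketch, not any PPL's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def hmc_step(q, log_p_grad, step=0.2, n_leapfrog=10):
    p = rng.normal()          # resample momentum
    q_new, p_new = q, p
    # Leapfrog integration of Hamiltonian dynamics.
    p_new += 0.5 * step * log_p_grad(q_new)
    for _ in range(n_leapfrog - 1):
        q_new += step * p_new
        p_new += step * log_p_grad(q_new)
    q_new += step * p_new
    p_new += 0.5 * step * log_p_grad(q_new)
    # Metropolis correction; H(q, p) = -log p(q) + p^2 / 2 for target N(0, 1).
    h_old = 0.5 * q**2 + 0.5 * p**2
    h_new = 0.5 * q_new**2 + 0.5 * p_new**2
    return q_new if rng.random() < np.exp(h_old - h_new) else q

grad = lambda q: -q  # d/dq log N(q; 0, 1)
samples, q = [], 0.0
for _ in range(5000):
    q = hmc_step(q, grad)
    samples.append(q)
print(np.mean(samples), np.var(samples))  # ≈ 0.0 and ≈ 1.0
```

The dependence on `log_p_grad` is what restricts vanilla HMC to differentiable densities, motivating the discontinuous extension the entry describes.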
no code implementations • ICLR 2018 • Robert Cornish, Hongseok Yang, Frank Wood
We consider the question of how to assess generative adversarial networks, in particular with respect to whether or not they generalise beyond memorising the training data.
no code implementations • ICML 2018 • Tom Rainforth, Robert Cornish, Hongseok Yang, Andrew Warrington, Frank Wood
Many problems in machine learning and statistics involve nested expectations and thus do not permit conventional Monte Carlo (MC) estimation.
no code implementations • 10 Jan 2017 • Chris Heunen, Ohad Kammar, Sam Staton, Hongseok Yang
Higher-order probabilistic programming languages allow programmers to write sophisticated models in machine learning and statistics in a succinct and structured way, but step outside the standard measure-theoretic formalization of probability theory.
no code implementations • 3 Dec 2016 • Tom Rainforth, Robert Cornish, Hongseok Yang, Frank Wood
In this paper, we analyse the behaviour of nested Monte Carlo (NMC) schemes, for which classical convergence proofs are insufficient.
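The basic NMC setup can be made concrete on a toy problem (our illustration of the estimator being analysed, with arbitrarily chosen distributions): estimating gamma = E_y[ f( E_z[ g(y, z) ] ) ] with a nonlinear outer f requires growing both the outer and inner sample sizes, because a fixed inner budget leaves a bias.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy instance: y, z ~ N(0, 1), g(y, z) = y + z, f = square.
# True value: E_y[(E_z[y + z])^2] = E[y^2] = 1.
def nmc(N, M):
    ys = rng.normal(size=N)
    # Inner expectations estimated with M samples each.
    inner = np.array([np.mean(ys[i] + rng.normal(size=M)) for i in range(N)])
    return np.mean(inner**2)

print(nmc(2000, 1))     # biased upward by the inner variance 1/M = 1
print(nmc(2000, 1000))  # ≈ 1
```

The bias here is exactly 1/M, matching the general picture that NMC converges only when the inner sample size also tends to infinity.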
no code implementations • 14 Jun 2016 • Mike Wu, Yura Perov, Frank Wood, Hongseok Yang
We demonstrate this by developing a native Excel implementation of both a particle Markov Chain Monte Carlo variant and black-box variational inference for spreadsheet probabilistic programming.
no code implementations • 19 Jan 2016 • Sam Staton, Hongseok Yang, Chris Heunen, Ohad Kammar, Frank Wood
We study the semantic foundation of expressive probabilistic programming languages that support higher-order functions, continuous distributions, and soft constraints (such as Anglican, Church, and Venture).
no code implementations • 5 Nov 2015 • Radu Grigore, Hongseok Yang
Our approach applies to parametric static analyses implemented in Datalog, and is based on counterexample-guided abstraction refinement.
no code implementations • 27 Jan 2015 • Jan-Willem van de Meent, Hongseok Yang, Vikash Mansinghka, Frank Wood
Particle Markov chain Monte Carlo techniques rank among current state-of-the-art methods for probabilistic program inference.