Search Results for author: Hongseok Yang

Found 34 papers, 13 papers with code

An Infinite-Width Analysis on the Jacobian-Regularised Training of a Neural Network

no code implementations • 6 Dec 2023 • TaeYoung Kim, Hongseok Yang

The recent theoretical analysis of deep neural networks in their infinite-width limits has deepened our understanding of initialisation, feature learning, and training of those networks, and brought new practical techniques for finding appropriate hyperparameters, learning network weights, and performing inference.
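
As a hedged illustration of the training scheme in the title, here is a minimal PyTorch sketch of Jacobian regularisation: the task loss is augmented with the squared Frobenius norm of the network's input-output Jacobian. The network, data, and weight 0.01 are placeholders, not the paper's setup.

```python
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(4, 64), nn.Tanh(), nn.Linear(64, 3))
x = torch.randn(8, 4, requires_grad=True)
y = net(x)

# Squared Frobenius norm of dy/dx, one backward pass per output dimension.
jac_penalty = torch.zeros(())
for k in range(y.shape[1]):
    (g,) = torch.autograd.grad(y[:, k].sum(), x, create_graph=True)
    jac_penalty = jac_penalty + (g ** 2).sum()

loss = y.pow(2).mean() + 0.01 * jac_penalty  # placeholder task loss + penalty
loss.backward()
```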

Learning Symmetrization for Equivariance with Orbit Distance Minimization

1 code implementation • 13 Nov 2023 • Tien Dat Nguyen, Jinwoo Kim, Hongseok Yang, Seunghoon Hong

We present a general framework for symmetrizing an arbitrary neural-network architecture and making it equivariant with respect to a given group.

Image Classification
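
For a flavour of what symmetrizing an architecture means, here is a minimal sketch, not the paper's method (which learns the symmetrization via orbit distance minimization): averaging an arbitrary predictor over the orbit of the input under the rotation group C4 makes it C4-invariant.

```python
import torch

def c4_orbit(x):
    # x: (B, C, H, W); the four 90-degree rotations form the group C4.
    return [torch.rot90(x, k, dims=(2, 3)) for k in range(4)]

def symmetrize(f, x):
    # Average f over the orbit of x; the result is invariant under C4.
    return torch.stack([f(gx) for gx in c4_orbit(x)]).mean(dim=0)
```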

Regularizing Towards Soft Equivariance Under Mixed Symmetries

no code implementations • 1 Jun 2023 • Hyunsu Kim, Hyungi Lee, Hongseok Yang, Juho Lee

The key component of our method is what we call the equivariance regularizer for a given type of symmetry, which measures how equivariant a model is with respect to symmetries of that type.

Motion Forecasting
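
A hedged sketch of what an equivariance regularizer can look like (the group action and the squared-error penalty are illustrative assumptions, not the paper's definition): penalise the distance between f(g·x) and g·f(x), which is zero iff f is exactly equivariant and small when f is softly equivariant.

```python
import torch

def equivariance_penalty(f, x):
    # For an image-to-image f, compare f(rot(x)) against rot(f(x)).
    gx = torch.rot90(x, 1, dims=(2, 3))
    return ((f(gx) - torch.rot90(f(x), 1, dims=(2, 3))) ** 2).mean()
```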

Over-parameterised Shallow Neural Networks with Asymmetrical Node Scaling: Global Convergence Guarantees and Feature Learning

1 code implementation • 2 Feb 2023 • Francois Caron, Fadhel Ayed, Paul Jung, Hoil Lee, Juho Lee, Hongseok Yang

We consider the optimisation of large and shallow neural networks via gradient flow, where the output of each hidden node is scaled by some positive parameter.

Transfer Learning
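
A minimal sketch of the setup described above, assuming a PyTorch implementation: each hidden node's output is multiplied by a fixed positive scale before the output layer. The particular asymmetrical choice of scales below is an illustrative assumption.

```python
import torch
import torch.nn as nn

class ScaledShallowNet(nn.Module):
    def __init__(self, d_in, width, lambdas):
        super().__init__()
        self.hidden = nn.Linear(d_in, width)
        self.out = nn.Linear(width, 1, bias=False)
        self.register_buffer("lambdas", lambdas)  # positive per-node scales

    def forward(self, x):
        return self.out(self.lambdas * torch.relu(self.hidden(x)))

# Asymmetrical scaling: a few large nodes, many small ones (illustrative).
width = 256
lambdas = torch.cat([torch.ones(8), torch.full((width - 8,), 0.01)])
net = ScaledShallowNet(d_in=4, width=width, lambdas=lambdas)
```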

Smoothness Analysis for Probabilistic Programs with Application to Optimised Variational Inference

1 code implementation • 22 Aug 2022 • Wonyeol Lee, Xavier Rival, Hongseok Yang

We present a static analysis for discovering differentiable or more generally smooth parts of a given probabilistic program, and show how the analysis can be used to improve the pathwise gradient estimator, one of the most popular methods for posterior inference and model learning.

Variational Inference
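
For reference, the pathwise (reparameterisation) gradient estimator mentioned above, in its simplest Gaussian form with a stand-in objective: sample via z = mu + sigma * eps and differentiate through the sample.

```python
import torch

mu = torch.tensor(0.5, requires_grad=True)
log_sigma = torch.tensor(0.0, requires_grad=True)
eps = torch.randn(10_000)
z = mu + log_sigma.exp() * eps   # reparameterised samples of N(mu, sigma^2)
(z ** 2).mean().backward()       # gradients flow through the sampler
print(mu.grad, log_sigma.grad)   # estimates of d E[z^2] / d(mu, log_sigma)
```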

Learning Symmetric Rules with SATNet

1 code implementation • 28 Jun 2022 • Sangho Lim, Eun-Gyeol Oh, Hongseok Yang

SATNet is a differentiable constraint solver with a custom backpropagation algorithm, which can be used as a layer in a deep-learning system.

Logical Reasoning • Rubik's Cube

Deep neural networks with dependent weights: Gaussian Process mixture limit, heavy tails, sparsity and compressibility

1 code implementation • 17 May 2022 • Hoil Lee, Fadhel Ayed, Paul Jung, Juho Lee, Hongseok Yang, François Caron

Under this model, we show that each layer of the infinite-width neural network can be characterised by two simple quantities: a non-negative scalar parameter and a Lévy measure on the positive reals.

Gaussian Processes • Representation Learning
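
A hedged illustration of dependent weights of this kind (the particular gamma mixing distribution is an assumption, not the paper's general Lévy-measure construction): all weights feeding into a hidden node share a random scale, which makes them dependent and gives heavy-tailed Student-t marginals.

```python
import torch

width, d_in = 512, 10
# One random precision per hidden node, shared by that node's weights.
precisions = torch.distributions.Gamma(0.5, 0.5).sample((width, 1))
W = torch.randn(width, d_in) / precisions.sqrt() / d_in ** 0.5
```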

LobsDICE: Offline Learning from Observation via Stationary Distribution Correction Estimation

2 code implementations • 28 Feb 2022 • Geon-Hyeong Kim, Jongmin Lee, Youngsoo Jang, Hongseok Yang, Kee-Eung Kim

We consider the problem of learning from observation (LfO), in which the agent aims to mimic the expert's behavior from state-only expert demonstrations.

Imitation Learning

DemoDICE: Offline Imitation Learning with Supplementary Imperfect Demonstrations

no code implementations • ICLR 2022 • Geon-Hyeong Kim, Seokin Seo, Jongmin Lee, Wonseok Jeon, HyeongJoo Hwang, Hongseok Yang, Kee-Eung Kim

We consider offline imitation learning (IL), which aims to mimic the expert's behavior from its demonstration without further interaction with the environment.

Imitation Learning

Scale Mixtures of Neural Network Gaussian Processes

1 code implementation • ICLR 2022 • Hyungi Lee, Eunggu Yun, Hongseok Yang, Juho Lee

We show that simply introducing a scale prior on the last-layer parameters can turn infinitely-wide neural networks of any architecture into a richer class of stochastic processes.

Gaussian Processes
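
A minimal sketch of the scale-mixture idea, with an RBF kernel standing in for an NNGP kernel and an inverse-gamma scale prior (both illustrative assumptions): a GP draw whose overall scale is itself random, which marginally yields a heavier-tailed, Student-t-like process.

```python
import torch

n = 50
x = torch.linspace(-2, 2, n)
K = torch.exp(-0.5 * (x[:, None] - x[None, :]) ** 2) + 1e-6 * torch.eye(n)
sigma2 = 1.0 / torch.distributions.Gamma(2.0, 2.0).sample()  # inverse-gamma scale
f = torch.distributions.MultivariateNormal(torch.zeros(n), sigma2 * K).sample()
```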

α-Stable convergence of heavy-tailed infinitely-wide neural networks

no code implementations • 18 Jun 2021 • Paul Jung, Hoil Lee, Jiho Lee, Hongseok Yang

We consider infinitely-wide multi-layer perceptrons (MLPs) which are limits of standard deep feed-forward neural networks.

Adaptive Strategy for Resetting a Non-stationary Markov Chain during Learning via Joint Stochastic Approximation

no code implementations • AABI Symposium 2021 • Hyunsu Kim, Juho Lee, Hongseok Yang

The non-stationary kernel problem refers to the algorithm's degraded performance caused by the transition kernel of the chain changing constantly throughout the run.

Bayesian Policy Search for Stochastic Domains

no code implementations • 1 Oct 2020 • David Tolpin, Yuan Zhou, Hongseok Yang

In this work, we cast policy search in stochastic domains as a Bayesian inference problem and provide a scheme for encoding such problems as nested probabilistic programs.

Probabilistic Programming • Variational Inference

Probabilistic Programs with Stochastic Conditioning

1 code implementation • 1 Oct 2020 • David Tolpin, Yuan Zhou, Tom Rainforth, Hongseok Yang

We tackle the problem of conditioning probabilistic programs on distributions of observable variables.

Probabilistic Programming

On Correctness of Automatic Differentiation for Non-Differentiable Functions

no code implementations • NeurIPS 2020 • Wonyeol Lee, Hangyeol Yu, Xavier Rival, Hongseok Yang

For these PAP functions, we propose a new type of derivatives, called intensional derivatives, and prove that these derivatives always exist and coincide with standard derivatives for almost all inputs.
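
The phenomenon the paper formalises can be seen in one line of PyTorch: at the kink of ReLU, autograd must return some value, and it returns 0, which agrees with the standard derivative everywhere except at the single point x = 0.

```python
import torch

x = torch.tensor(0.0, requires_grad=True)
torch.relu(x).backward()
print(x.grad)  # tensor(0.): a choice at the non-differentiable point x = 0
```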

Stochastically Differentiable Probabilistic Programs

no code implementations • 2 Mar 2020 • David Tolpin, Yuan Zhou, Hongseok Yang

Probabilistic programs with mixed support (both continuous and discrete latent random variables) commonly appear in many probabilistic programming systems (PPSs).

Probabilistic Programming

Differentiable Algorithm for Marginalising Changepoints

no code implementations • 22 Nov 2019 • Hyoungjin Lim, Gwonsoo Che, Wonyeol Lee, Hongseok Yang

We present an algorithm for marginalising changepoints in time-series models that assume a fixed number of unknown changepoints.

Time Series • Time Series Analysis

Towards Verified Stochastic Variational Inference for Probabilistic Programs

1 code implementation • 20 Jul 2019 • Wonyeol Lee, Hangyeol Yu, Xavier Rival, Hongseok Yang

In this paper, we analyse one of the most fundamental and versatile variational inference algorithms, called score estimator, using tools from denotational semantics and program analysis.

Probabilistic Programming • Variational Inference
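
The score estimator named above, in its textbook form with a Gaussian q and a stand-in target f(z) = z²: it uses the identity grad E_q[f] = E_q[f(z) grad log q(z)], implemented below via a surrogate loss.

```python
import torch

mu = torch.tensor(0.0, requires_grad=True)
q = torch.distributions.Normal(mu, 1.0)
z = q.sample((10_000,))                      # sampling does not track gradients
surrogate = (q.log_prob(z) * z ** 2).mean()  # E[f(z) * log q(z)] surrogate
surrogate.backward()                         # mu.grad estimates grad E_q[z^2]
```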

LF-PPL: A Low-Level First Order Probabilistic Programming Language for Non-Differentiable Models

1 code implementation • 6 Mar 2019 • Yuan Zhou, Bradley J. Gram-Hansen, Tobias Kohn, Tom Rainforth, Hongseok Yang, Frank Wood

We develop a new Low-level, First-order Probabilistic Programming Language (LF-PPL) suited for models containing a mix of continuous, discrete, and/or piecewise-continuous variables.

Probabilistic Programming

An Introduction to Probabilistic Programming

3 code implementations • 27 Sep 2018 • Jan-Willem van de Meent, Brooks Paige, Hongseok Yang, Frank Wood

We start with a discussion of model-based reasoning and explain why conditioning is a foundational computation central to the fields of probabilistic machine learning and artificial intelligence.

Probabilistic Programming
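
As a toy instance of conditioning, the computation the tutorial centres on, here is likelihood weighting for inferring a coin's bias after observing a single head (a self-contained sketch, not code from the tutorial):

```python
import random

def posterior_mean(n_samples=100_000):
    num = den = 0.0
    for _ in range(n_samples):
        p = random.random()   # prior: p ~ Uniform(0, 1)
        w = p                 # likelihood of observing one head
        num += w * p
        den += w
    return num / den          # ~ 2/3, the mean of the Beta(2, 1) posterior

print(posterior_mean())
```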

Inference Trees: Adaptive Inference with Exploration

no code implementations • 25 Jun 2018 • Tom Rainforth, Yuan Zhou, Xiaoyu Lu, Yee Whye Teh, Frank Wood, Hongseok Yang, Jan-Willem van de Meent

We introduce inference trees (ITs), a new class of inference methods that build on ideas from Monte Carlo tree search to perform adaptive sampling in a manner that balances exploration with exploitation, ensures consistency, and alleviates pathologies in existing adaptive methods.

Reparameterization Gradient for Non-differentiable Models

1 code implementation • NeurIPS 2018 • Wonyeol Lee, Hangyeol Yu, Hongseok Yang

We tackle the challenge by generalizing the reparameterization trick, one of the most effective techniques for addressing the variance issue for differentiable models, so that the trick works for non-differentiable models as well.

Variational Inference

Hamiltonian Monte Carlo for Probabilistic Programs with Discontinuities

1 code implementation • 7 Apr 2018 • Bradley Gram-Hansen, Yuan Zhou, Tobias Kohn, Tom Rainforth, Hongseok Yang, Frank Wood

Hamiltonian Monte Carlo (HMC) is arguably the dominant statistical inference algorithm used in most popular "first-order differentiable" Probabilistic Programming Languages (PPLs).

Probabilistic Programming
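
For context, the core of standard HMC is the leapfrog integrator sketched below (a generic sketch; the paper's contribution is extending HMC to the discontinuous densities that first-order differentiable PPLs otherwise rule out):

```python
def leapfrog(q, p, grad_U, step, n_steps):
    # Simulate Hamiltonian dynamics for a potential U with gradient grad_U.
    p = p - 0.5 * step * grad_U(q)        # half step for momentum
    for _ in range(n_steps - 1):
        q = q + step * p                  # full step for position
        p = p - step * grad_U(q)          # full step for momentum
    q = q + step * p
    p = p - 0.5 * step * grad_U(q)        # final half step for momentum
    return q, p
```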

Towards a Testable Notion of Generalization for Generative Adversarial Networks

no code implementations • ICLR 2018 • Robert Cornish, Hongseok Yang, Frank Wood

We consider the question of how to assess generative adversarial networks, in particular with respect to whether or not they generalise beyond memorising the training data.

Generative Adversarial Network

On Nesting Monte Carlo Estimators

no code implementations • ICML 2018 • Tom Rainforth, Robert Cornish, Hongseok Yang, Andrew Warrington, Frank Wood

Many problems in machine learning and statistics involve nested expectations and thus do not permit conventional Monte Carlo (MC) estimation.

Experimental Design
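
A nested Monte Carlo estimator in its simplest form (the integrand sin(x*y) and the nonlinearity g(t) = t² are placeholders): an inner average is computed for every outer sample, and the nonlinear outer map is what breaks conventional MC analysis.

```python
import math
import random

def nested_mc(n_outer=1000, n_inner=1000):
    total = 0.0
    for _ in range(n_outer):
        x = random.gauss(0, 1)
        inner = sum(math.sin(x * random.gauss(0, 1))
                    for _ in range(n_inner)) / n_inner
        total += inner ** 2      # g(t) = t^2 applied to the inner estimate
    return total / n_outer       # estimates E_x[(E_y[sin(x*y)])^2]

print(nested_mc())
```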

A Convenient Category for Higher-Order Probability Theory

no code implementations • 10 Jan 2017 • Chris Heunen, Ohad Kammar, Sam Staton, Hongseok Yang

Higher-order probabilistic programming languages allow programmers to write sophisticated models in machine learning and statistics in a succinct and structured way, but step outside the standard measure-theoretic formalization of probability theory.

Probabilistic Programming

On the Pitfalls of Nested Monte Carlo

no code implementations • 3 Dec 2016 • Tom Rainforth, Robert Cornish, Hongseok Yang, Frank Wood

In this paper, we analyse the behaviour of nested Monte Carlo (NMC) schemes, for which classical convergence proofs are insufficient.

Spreadsheet Probabilistic Programming

no code implementations • 14 Jun 2016 • Mike Wu, Yura Perov, Frank Wood, Hongseok Yang

We demonstrate this by developing a native Excel implementation of both a particle Markov Chain Monte Carlo variant and black-box variational inference for spreadsheet probabilistic programming.

Decision Making • Decision Making Under Uncertainty • +2

Semantics for probabilistic programming: higher-order functions, continuous distributions, and soft constraints

no code implementations • 19 Jan 2016 • Sam Staton, Hongseok Yang, Chris Heunen, Ohad Kammar, Frank Wood

We study the semantic foundation of expressive probabilistic programming languages that support higher-order functions, continuous distributions, and soft constraints (such as Anglican, Church, and Venture).

Probabilistic Programming

Abstraction Refinement Guided by a Learnt Probabilistic Model

no code implementations • 5 Nov 2015 • Radu Grigore, Hongseok Yang

Our approach applies to parametric static analyses implemented in Datalog, and is based on counterexample-guided abstraction refinement.

Programming Languages • Software Engineering • D.2.4

Particle Gibbs with Ancestor Sampling for Probabilistic Programs

no code implementations • 27 Jan 2015 • Jan-Willem van de Meent, Hongseok Yang, Vikash Mansinghka, Frank Wood

Particle Markov chain Monte Carlo techniques rank among current state-of-the-art methods for probabilistic program inference.

Probabilistic Programming
