Search Results for author: Hyunjik Kim

Found 18 papers, 10 papers with code

C3: High-performance and low-complexity neural compression from a single image or video

no code implementations · 5 Dec 2023 Hyunjik Kim, Matthias Bauer, Lucas Theis, Jonathan Richard Schwarz, Emilien Dupont

On the UVG video benchmark, we match the RD performance of the Video Compression Transformer (Mentzer et al.), a well-established neural video codec, with less than 5k MACs/pixel for decoding.

Video Compression

Finding Increasingly Large Extremal Graphs with AlphaZero and Tabu Search

no code implementations · 6 Nov 2023 Abbas Mehrabian, Ankit Anand, Hyunjik Kim, Nicolas Sonnerat, Matej Balog, Gheorghe Comanici, Tudor Berariu, Andrew Lee, Anian Ruoss, Anna Bulanova, Daniel Toyama, Sam Blackwell, Bernardino Romera Paredes, Petar Veličković, Laurent Orseau, Joonkyung Lee, Anurag Murty Naredla, Doina Precup, Adam Zsolt Wagner

This work studies a central extremal graph theory problem inspired by a 1975 conjecture of Erdős, which aims to find graphs of a given size (number of nodes) that maximize the number of edges without containing 3- or 4-cycles.

Decision Making · Graph Generation
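The objective in this entry can be verified with a short brute-force routine — a toy sketch unrelated to the paper's AlphaZero/tabu-search pipeline, with `has_3_or_4_cycle` an illustrative helper name. It uses two standard facts: a graph contains a triangle iff some edge's endpoints share a neighbour, and a 4-cycle iff some vertex pair has at least two common neighbours.

```python
from itertools import combinations

def has_3_or_4_cycle(n, edges):
    """Check whether an undirected graph on nodes 0..n-1 has a 3- or 4-cycle."""
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    # Triangle: an edge whose endpoints share a common neighbour.
    for u, v in edges:
        if adj[u] & adj[v]:
            return True
    # 4-cycle: two vertices with at least two common neighbours.
    for u, v in combinations(range(n), 2):
        if len(adj[u] & adj[v]) >= 2:
            return True
    return False

# The 5-cycle C5 has 5 edges and no 3- or 4-cycle;
# adding any chord creates a triangle.
c5 = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
print(has_3_or_4_cycle(5, c5))             # False
print(has_3_or_4_cycle(5, c5 + [(0, 2)]))  # True
```

The search problem the paper tackles is finding the densest such graphs at much larger sizes, where brute force is hopeless.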

Spatial Functa: Scaling Functa to ImageNet Classification and Generation

no code implementations · 6 Feb 2023 Matthias Bauer, Emilien Dupont, Andy Brock, Dan Rosenbaum, Jonathan Richard Schwarz, Hyunjik Kim

Neural fields, also known as implicit neural representations, have emerged as a powerful means to represent complex signals of various modalities.

Classification · Image Generation

When Does Re-initialization Work?

no code implementations · 20 Jun 2022 Sheheryar Zaidi, Tudor Berariu, Hyunjik Kim, Jörg Bornschein, Claudia Clopath, Yee Whye Teh, Razvan Pascanu

However, when deployed alongside other carefully tuned regularization techniques, re-initialization methods offer little to no added benefit for generalization, although optimal generalization performance becomes less sensitive to the choice of learning rate and weight decay hyperparameters.

Data Augmentation · Image Classification

Pre-training via Denoising for Molecular Property Prediction

1 code implementation · 31 May 2022 Sheheryar Zaidi, Michael Schaarschmidt, James Martens, Hyunjik Kim, Yee Whye Teh, Alvaro Sanchez-Gonzalez, Peter Battaglia, Razvan Pascanu, Jonathan Godwin

Many important problems involving molecular property prediction from 3D structures have limited data, posing a generalization challenge for neural networks.

Denoising · Molecular Property Prediction +1

From data to functa: Your data point is a function and you can treat it like one

1 code implementation · 28 Jan 2022 Emilien Dupont, Hyunjik Kim, S. M. Ali Eslami, Danilo Rezende, Dan Rosenbaum

A powerful continuous alternative is then to represent these measurements using an implicit neural representation, a neural function trained to output the appropriate measurement value for any input spatial location.

Imputation · Novel View Synthesis
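The idea in this abstract — a function trained to output the measurement value at any input coordinate — can be sketched on a toy 1-D signal. This is a hedged stand-in, not the paper's functa pipeline: random Fourier features with a least-squares readout replace an SGD-trained network, and all names and constants (`B`, feature count, frequency scale) are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Measurements": samples of sin(2*pi*x) on a coarse grid.
x_train = np.linspace(0.0, 1.0, 32)
y_train = np.sin(2 * np.pi * x_train)

# Tiny coordinate network: fixed random Fourier features plus a linear
# readout fitted by least squares (a stand-in for gradient training).
B = rng.normal(scale=4.0, size=(1, 64))  # random frequencies

def features(x):
    proj = 2 * np.pi * x[:, None] @ B
    return np.concatenate([np.sin(proj), np.cos(proj)], axis=1)

w, *_ = np.linalg.lstsq(features(x_train), y_train, rcond=None)

# The fitted field can be queried at ANY coordinate, not just grid points.
x_test = rng.uniform(0.0, 1.0, size=8)
err = np.max(np.abs(features(x_test) @ w - np.sin(2 * np.pi * x_test)))
print(err)  # typically small on this smooth target
```

The continuity the snippet highlights is exactly this: once the field is fitted, spatial resolution is no longer baked into the representation.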

Group Equivariant Subsampling

1 code implementation · NeurIPS 2021 Jin Xu, Hyunjik Kim, Tom Rainforth, Yee Whye Teh

We use these layers to construct group equivariant autoencoders (GAEs) that allow us to learn low-dimensional equivariant representations.

Translation
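The motivation behind equivariant subsampling can be seen in a few lines: ordinary strided subsampling commutes with shifts only when the shift is a multiple of the stride. A minimal 1-D sketch (not the paper's construction, which chooses the sampling grid in an input-dependent, equivariant way):

```python
import numpy as np

x = np.arange(8)                      # a toy 1-D feature map
shift = lambda a, s: np.roll(a, s)
sub = lambda a: a[::2]                # ordinary stride-2 subsampling

# Shift by 1 (not a multiple of the stride): subsampling does not commute.
print(sub(shift(x, 1)))               # [7 1 3 5]
print(shift(sub(x), 1))               # [6 0 2 4]

# Shift by the stride (2): equivariance holds, with the output shifted by 1.
print(np.array_equal(sub(shift(x, 2)), shift(sub(x), 1)))  # True
```

So plain stride-2 layers are equivariant only to the subgroup of even shifts; the paper's subsampling layers recover equivariance to the full group.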

LieTransformer: Equivariant self-attention for Lie Groups

1 code implementation · 20 Dec 2020 Michael Hutchinson, Charline Le Lan, Sheheryar Zaidi, Emilien Dupont, Yee Whye Teh, Hyunjik Kim

Group equivariant neural networks are used as building blocks of group invariant neural networks, which have been shown to improve generalisation performance and data efficiency through principled parameter sharing.

Regression

The Lipschitz Constant of Self-Attention

no code implementations · 8 Jun 2020 Hyunjik Kim, George Papamakarios, Andriy Mnih

Lipschitz constants of neural networks have been explored in various contexts in deep learning, such as provable adversarial robustness, estimating Wasserstein distance, stabilising training of GANs, and formulating invertible neural networks.

Adversarial Robustness · Language Modelling
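As background for the contexts this abstract lists: for a linear map x → Wx, the Lipschitz constant in the 2-norm is the spectral norm of W, which power iteration estimates cheaply. This is a standard background sketch, not the paper's analysis of self-attention (whose Lipschitz behaviour is precisely what makes it harder than the linear case); `spectral_norm` is an illustrative name.

```python
import numpy as np

def spectral_norm(W, iters=100, seed=0):
    """Estimate ||W||_2, the 2-norm Lipschitz constant of x -> W x."""
    rng = np.random.default_rng(seed)
    u = rng.normal(size=W.shape[0])
    for _ in range(iters):
        v = W.T @ u
        v /= np.linalg.norm(v)
        u = W @ v
        u /= np.linalg.norm(u)
    return u @ W @ v  # Rayleigh-style estimate of the top singular value

W = np.diag([3.0, 1.0, 0.5])
print(spectral_norm(W))  # ≈ 3.0, the largest singular value
```

The same estimate underlies spectral normalisation for stabilising GAN training, one of the applications the abstract mentions.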

MetaFun: Meta-Learning with Iterative Functional Updates

1 code implementation · ICML 2020 Jin Xu, Jean-Francois Ton, Hyunjik Kim, Adam R. Kosiorek, Yee Whye Teh

We develop a functional encoder-decoder approach to supervised meta-learning, where labeled data is encoded into an infinite-dimensional functional representation rather than a finite-dimensional one.

Few-Shot Image Classification · Meta-Learning

Meta-Learning surrogate models for sequential decision making

no code implementations · 28 Mar 2019 Alexandre Galashov, Jonathan Schwarz, Hyunjik Kim, Marta Garnelo, David Saxton, Pushmeet Kohli, S. M. Ali Eslami, Yee Whye Teh

We introduce a unified probabilistic framework for solving sequential decision making problems ranging from Bayesian optimisation to contextual bandits and reinforcement learning.

Bayesian Optimisation · Decision Making +4

Attentive Neural Processes

7 code implementations · ICLR 2019 Hyunjik Kim, Andriy Mnih, Jonathan Schwarz, Marta Garnelo, Ali Eslami, Dan Rosenbaum, Oriol Vinyals, Yee Whye Teh

Neural Processes (NPs) (Garnelo et al., 2018a;b) approach regression by learning to map a context set of observed input-output pairs to a distribution over regression functions.

Regression
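The context-set-to-function mapping in this abstract can be sketched as a deterministic, mean-aggregation forward pass with untrained random weights — only the structure is the point. This is a heavily simplified stand-in: real NPs output a distribution, ANPs replace mean aggregation with attention over the context, and every name here (`np_mean_predict`, the layer sizes) is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
relu = lambda z: np.maximum(z, 0.0)

# Untrained random weights -- only the forward-pass structure matters here.
W_enc = rng.normal(size=(2, 16))
W_dec = rng.normal(size=(17, 1))

def np_mean_predict(ctx_x, ctx_y, tgt_x):
    """Encode (x, y) context pairs, mean-aggregate, decode at target inputs."""
    pairs = np.stack([ctx_x, ctx_y], axis=1)          # (n_ctx, 2)
    r = relu(pairs @ W_enc).mean(axis=0)              # order-invariant summary
    dec_in = np.concatenate([tgt_x[:, None],
                             np.tile(r, (len(tgt_x), 1))], axis=1)
    return dec_in @ W_dec                             # (n_tgt, 1) predictions

ctx_x, ctx_y = np.array([0.0, 0.5, 1.0]), np.array([0.0, 0.8, 0.9])
tgt_x = np.array([0.25, 0.75])
out = np_mean_predict(ctx_x, ctx_y, tgt_x)

# Reordering the context set leaves the prediction unchanged.
perm = [2, 0, 1]
print(np.allclose(out, np_mean_predict(ctx_x[perm], ctx_y[perm], tgt_x)))  # True
```

The mean aggregation is what makes the prediction a function of the context *set* rather than the context *sequence*; attention in ANPs keeps this permutation invariance while letting each target query weight context points differently.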

Sequential Attend, Infer, Repeat: Generative Modelling of Moving Objects

1 code implementation · NeurIPS 2018 Adam R. Kosiorek, Hyunjik Kim, Ingmar Posner, Yee Whye Teh

The model can reliably discover and track objects throughout the sequence of frames, and can also generate future frames conditioned on the current frame, thereby simulating the expected motion of objects.

Disentangling by Factorising

17 code implementations · ICML 2018 Hyunjik Kim, Andriy Mnih

We define and address the problem of unsupervised learning of disentangled representations on data generated from independent factors of variation.

Disentanglement

Collaborative Filtering with Side Information: a Gaussian Process Perspective

no code implementations · 23 May 2016 Hyunjik Kim, Xiaoyu Lu, Seth Flaxman, Yee Whye Teh

We tackle the problem of collaborative filtering (CF) with side information, through the lens of Gaussian Process (GP) regression.

Collaborative Filtering · Regression
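The GP regression lens this abstract mentions can be illustrated with the textbook posterior-mean formula on a toy 1-D problem — not the paper's collaborative-filtering model, and the RBF kernel, lengthscale, and noise level are arbitrary illustrative choices.

```python
import numpy as np

def rbf(a, b, lengthscale=1.0):
    """Squared-exponential kernel between 1-D input arrays."""
    d2 = (a[:, None] - b[None, :]) ** 2
    return np.exp(-0.5 * d2 / lengthscale**2)

def gp_posterior_mean(x_train, y_train, x_test, noise=1e-2):
    """Posterior mean K(test, train) @ (K(train, train) + noise*I)^-1 @ y."""
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    return rbf(x_test, x_train) @ np.linalg.solve(K, y_train)

x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.sin(x)
pred = gp_posterior_mean(x, y, np.array([1.5]))
print(pred)  # close to sin(1.5)
```

In the paper's setting the kernel is built over users and items (with side information entering through the kernel), but the posterior-mean machinery is the same.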
