Search Results for author: Christian Walder

Found 30 papers, 9 papers with code

SynthNet: Learning synthesizers end-to-end

no code implementations · ICLR 2019 · Florin Schimbinschi, Christian Walder, Sarah Erfani, James Bailey

Learning synthesizers and generating music in the raw audio domain is a challenging task.

Learning k-Determinantal Point Processes for Personalized Ranking

no code implementations · 23 Jun 2024 · Yuli Liu, Christian Walder, Lexing Xie

There are many personalized ranking approaches for item recommendation from implicit feedback, such as Bayesian Personalized Ranking (BPR) and listwise ranking.

Diversity · Point Processes · +1
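For orientation, the BPR baseline named in the abstract optimises a pairwise objective: for a user $u$, an observed item $i$ and an unobserved item $j$, it minimises $-\log \sigma(\hat{x}_{ui} - \hat{x}_{uj})$. A minimal NumPy sketch of that loss, as a point of reference rather than this paper's k-DPP method:

```python
import numpy as np

def bpr_loss(score_pos, score_neg):
    # BPR: maximise the log-probability that the observed item
    # outranks the unobserved one, i.e. minimise -log sigmoid(x_ui - x_uj).
    return -np.log(1.0 / (1.0 + np.exp(-(score_pos - score_neg))))

print(bpr_loss(2.1, 0.4))  # small loss: the positive item is already ranked higher
```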

BAIT: Benchmarking (Embedding) Architectures for Interactive Theorem-Proving

no code implementations · 6 Mar 2024 · Sean Lamont, Michael Norrish, Amir Dezfouli, Christian Walder, Paul Montague

We also provide a qualitative analysis, illustrating that improved performance is associated with more semantically aware embeddings.

Automated Theorem Proving · Benchmarking

Latent Optimal Paths by Gumbel Propagation for Variational Bayesian Dynamic Programming

2 code implementations · 5 Jun 2023 · Xinlei Niu, Christian Walder, Jing Zhang, Charles Patrick Martin

Using properties of the Gumbel distribution, we show the equivalence of the Gibbs distribution to a message-passing algorithm and give all the ingredients required for variational Bayesian inference of a latent path, namely Bayesian dynamic programming (BDP).

Bayesian Inference · Singing Voice Synthesis · +1
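The basic building block behind Gumbel propagation is the classic Gumbel-max trick: adding independent Gumbel noise to logits and taking the argmax yields an exact categorical sample. A NumPy sketch of just that trick (the paper's contribution, propagating Gumbel messages through a dynamic program to obtain latent optimal paths, is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)
logits = np.log(np.array([0.5, 0.3, 0.2]))  # log-probabilities of three options

# Gumbel-max trick: argmax over Gumbel-perturbed logits is an exact categorical sample.
g = rng.gumbel(size=(100_000, 3))
draws = np.argmax(logits + g, axis=1)
print(np.bincount(draws) / len(draws))  # approximately [0.5, 0.3, 0.2]
```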

DualVAE: Controlling Colours of Generated and Real Images

no code implementations · 30 May 2023 · Keerth Rathakumar, David Liebowitz, Christian Walder, Kristen Moore, Salil S. Kanhere

The disentangled representation is obtained by two novel mechanisms: (i) a dual branch architecture that separates image colour attributes from geometric attributes, and (ii) a new ELBO that trains the combined colour and geometry representations.

Image Generation
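A minimal PyTorch sketch of the dual-branch idea described above: two encoder branches produce separate colour and geometry latents, and the negative ELBO carries a KL term for each branch. The class name, layer sizes, and Gaussian reconstruction term are illustrative assumptions, not the authors' architecture:

```python
import torch
import torch.nn as nn

class DualBranchVAE(nn.Module):  # hypothetical name, for illustration
    def __init__(self, d_in=784, d_colour=8, d_geom=8, d_hid=256):
        super().__init__()
        self.enc_colour = nn.Sequential(nn.Linear(d_in, d_hid), nn.ReLU(),
                                        nn.Linear(d_hid, 2 * d_colour))
        self.enc_geom = nn.Sequential(nn.Linear(d_in, d_hid), nn.ReLU(),
                                      nn.Linear(d_hid, 2 * d_geom))
        self.dec = nn.Sequential(nn.Linear(d_colour + d_geom, d_hid), nn.ReLU(),
                                 nn.Linear(d_hid, d_in))

    @staticmethod
    def reparam(stats):
        # Gaussian reparameterisation plus analytic KL to a standard normal prior.
        mu, logvar = stats.chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
        kl = 0.5 * (mu.pow(2) + logvar.exp() - 1.0 - logvar).sum(-1)
        return z, kl

    def forward(self, x):
        z_c, kl_c = self.reparam(self.enc_colour(x))  # colour branch
        z_g, kl_g = self.reparam(self.enc_geom(x))    # geometry branch
        recon = self.dec(torch.cat([z_c, z_g], dim=-1))
        # Negative ELBO: reconstruction plus one KL term per latent branch.
        return (((recon - x) ** 2).sum(-1) + kl_c + kl_g).mean()

loss = DualBranchVAE()(torch.rand(32, 784))
```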

R-U-SURE? Uncertainty-Aware Code Suggestions By Maximizing Utility Across Random User Intents

1 code implementation · 1 Mar 2023 · Daniel D. Johnson, Daniel Tarlow, Christian Walder

Large language models show impressive results at predicting structured text such as code, but also commonly introduce errors and hallucinations in their output.

Sampled Transformer for Point Sets

no code implementations · 28 Feb 2023 · Shidi Li, Christian Walder, Alexander Soen, Lexing Xie, Miaomiao Liu

The sparse transformer can reduce the computational complexity of the self-attention layers to $O(n)$, whilst still being a universal approximator of continuous sequence-to-sequence functions.

Inductive Bias
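An illustrative NumPy sketch of how subsampling brings attention below quadratic cost: each point attends to a random subset of $k$ others, so the total cost is $O(nk)$ rather than $O(n^2)$. The uniform sampling here is purely illustrative and is not the paper's sampled self-attention mechanism:

```python
import numpy as np

def subsampled_attention(X, k, rng):
    # Each of the n points attends to k sampled points: O(n * k) overall.
    n, d = X.shape
    out = np.empty_like(X)
    for i in range(n):
        idx = rng.choice(n, size=k, replace=False)   # sampled key/value set
        scores = X[idx] @ X[i] / np.sqrt(d)
        w = np.exp(scores - scores.max())            # stable softmax weights
        out[i] = (w / w.sum()) @ X[idx]
    return out

rng = np.random.default_rng(0)
Y = subsampled_attention(rng.normal(size=(1024, 64)), k=32, rng=rng)
```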

LegendreTron: Uprising Proper Multiclass Loss Learning

no code implementations · 27 Jan 2023 · Kevin Lam, Christian Walder, Spiridon Penev, Richard Nock

Existing methods do this by fitting an inverse canonical link function, which monotonically maps $\mathbb{R}$ to $[0, 1]$, to estimate probabilities for binary problems.
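For binary problems, the canonical example of such an inverse link is the sigmoid, which monotonically maps $\mathbb{R}$ to $(0, 1)$ and pairs with the log loss. A small NumPy illustration of that fixed link (the paper's contribution, learning proper links for the multiclass case, is not shown):

```python
import numpy as np

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))  # inverse canonical link of the log loss

s = np.array([-2.0, 0.0, 3.0])  # real-valued scores
p = sigmoid(s)                  # monotone map into (0, 1): probabilities
y = np.array([0.0, 1.0, 1.0])
print(-(y * np.log(p) + (1 - y) * np.log(1 - p)).mean())  # binary log loss
```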

Determinantal Point Process Likelihoods for Sequential Recommendation

1 code implementation · 25 Apr 2022 · Yuli Liu, Christian Walder, Lexing Xie

Sequential recommendation is a popular task in academic research that is close to real-world application scenarios; the goal is to predict the user's next action(s) based on their previous sequence of actions.

Diversity · Sequential Recommendation
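For reference, the standard DPP likelihood of an item subset $S$ under a kernel $L$ is $P(S) = \det(L_S) / \det(L + I)$, which rewards both item quality and diversity. A NumPy sketch with a random low-rank kernel, illustrative only (the paper adapts such likelihoods to the sequential setting):

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.normal(size=(6, 3))
L = B @ B.T + 1e-6 * np.eye(6)  # PSD kernel over a catalogue of 6 items

def dpp_log_likelihood(L, subset):
    # log P(S) = log det(L_S) - log det(L + I)
    _, logdet_S = np.linalg.slogdet(L[np.ix_(subset, subset)])
    _, logdet_Z = np.linalg.slogdet(L + np.eye(len(L)))
    return logdet_S - logdet_Z

print(dpp_log_likelihood(L, [0, 2, 5]))
```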

SPA-VAE: Similar-Parts-Assignment for Unsupervised 3D Point Cloud Generation

no code implementations · 15 Mar 2022 · Shidi Li, Christian Walder, Miaomiao Liu

This paper addresses the problem of unsupervised parts-aware point cloud generation with learned parts-based self-similarity.

Point Cloud Generation · Single Particle Analysis

EditVAE: Unsupervised Part-Aware Controllable 3D Point Cloud Shape Generation

no code implementations · 13 Oct 2021 · Shidi Li, Miaomiao Liu, Christian Walder

We achieve this with a simple modification of the Variational Auto-Encoder which yields a joint model of the point cloud itself along with a schematic representation of it as a combination of shape primitives.

Inductive Bias · Point Cloud Generation

Dense Uncertainty Estimation

1 code implementation · 13 Oct 2021 · Jing Zhang, Yuchao Dai, Mochu Xiang, Deng-Ping Fan, Peyman Moghadam, Mingyi He, Christian Walder, Kaihao Zhang, Mehrtash Harandi, Nick Barnes

Deep neural networks can be roughly divided into deterministic and stochastic neural networks. The former are usually trained to learn a mapping from input space to output space via maximum likelihood estimation of the weights, which leads to deterministic predictions at test time.

Decision Making
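One standard way of turning a deterministic network into a stochastic one, in the sense used above, is to keep dropout active at test time and read predictive uncertainty off repeated forward passes. A toy NumPy sketch of that generic recipe (not a specific method from the survey):

```python
import numpy as np

rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(16, 4)), rng.normal(size=(1, 16))  # a tiny fixed MLP

def stochastic_forward(x, p_drop=0.2):
    h = np.maximum(W1 @ x, 0.0)              # ReLU hidden layer
    h = h * (rng.random(h.shape) > p_drop)   # dropout mask kept on at test time
    return (W2 @ h).item()

x = rng.normal(size=4)
samples = np.array([stochastic_forward(x) for _ in range(200)])
print(samples.mean(), samples.std())  # predictive mean, spread as uncertainty
```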

Humanly Certifying Superhuman Classifiers

no code implementations · 16 Sep 2021 · Qiongkai Xu, Christian Walder, Chenchen Xu

In this paper, we first raise the challenge of evaluating the performance of both humans and models with respect to an oracle which is unobserved.
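A toy illustration of why the oracle need not be observed: with binary labels and independent errors, the human-model agreement rate satisfies $P(\text{agree}) = a_h a_m + (1 - a_h)(1 - a_m)$, where $a_h$ and $a_m$ are the oracle-relative accuracies, so agreement statistics carry information about accuracy. The simulation below checks that identity; it illustrates the setting, not the paper's estimator:

```python
import numpy as np

rng = np.random.default_rng(0)
n, acc_h, acc_m = 100_000, 0.85, 0.92
oracle = rng.integers(0, 2, size=n)                           # unobserved ground truth
human = np.where(rng.random(n) < acc_h, oracle, 1 - oracle)   # noisy human labels
model = np.where(rng.random(n) < acc_m, oracle, 1 - oracle)   # noisy model labels

print((human == model).mean())                    # observed agreement
print(acc_h * acc_m + (1 - acc_h) * (1 - acc_m))  # predicted under independence
```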

Learning to Continually Learn Rapidly from Few and Noisy Data

1 code implementation · 6 Mar 2021 · Nicholas I-Hsien Kuo, Mehrtash Harandi, Nicolas Fourrier, Christian Walder, Gabriela Ferraro, Hanna Suominen

Neural networks suffer from catastrophic forgetting and are unable to learn new tasks sequentially without guaranteed stationarity in the data distribution.

Continual Learning · Meta-Learning

Highway-Connection Classifier Networks for Plastic yet Stable Continual Learning

no code implementations · 1 Jan 2021 · Nicholas I-Hsien Kuo, Mehrtash Harandi, Nicolas Fourrier, Christian Walder, Gabriela Ferraro, Hanna Suominen

Catastrophic forgetting occurs when a neural network is trained sequentially on multiple tasks: its weights are continuously modified and, as a result, the network loses its ability to solve previously learned tasks.

Continual Learning

MTL2L: A Context Aware Neural Optimiser

1 code implementation · 18 Jul 2020 · Nicholas I-Hsien Kuo, Mehrtash Harandi, Nicolas Fourrier, Christian Walder, Gabriela Ferraro, Hanna Suominen

Learning to learn (L2L) trains a meta-learner to assist the learning of a task-specific base learner.

Multi-Task Learning
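A toy sketch of the L2L loop: the meta-learner's parameters define the base learner's update rule, and the meta-objective is the base learner's final task loss. The parametrisation below (learned log step sizes on gradient and momentum) is a hypothetical stand-in for the paper's neural optimiser:

```python
import numpy as np

def run_optimiser(theta, steps=20):
    # Base learner: minimise a toy quadratic 0.5 * ||w||^2 from a fixed start.
    w, m = np.array([3.0, -2.0]), np.zeros(2)
    for _ in range(steps):
        g = w                    # gradient of the quadratic
        m = 0.9 * m + g          # momentum accumulator
        w = w - np.exp(theta[0]) * g - np.exp(theta[1]) * m
    return 0.5 * np.sum(w ** 2)  # meta-objective: final task loss

# The meta-learner would tune theta across tasks; here we just compare two settings.
print(run_optimiser(np.array([-3.0, -3.0])), run_optimiser(np.array([-1.5, -4.0])))
```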

All your loss are belong to Bayes

1 code implementation · NeurIPS 2020 · Christian Walder, Richard Nock

Loss functions are a cornerstone of machine learning and the starting point of most algorithms.

Gaussian Processes

EINS: Long Short-Term Memory with Extrapolated Input Network Simplification

no code implementations · 25 Sep 2019 · Nicholas I-Hsien Kuo, Mehrtash T. Harandi, Nicolas Fourrier, Gabriela Ferraro, Christian Walder, Hanna Suominen

This paper contrasts two canonical recurrent neural networks (RNNs), the long short-term memory (LSTM) and the gated recurrent unit (GRU), to propose our novel lightweight RNN, Extrapolated Input for Network Simplification (EINS).

Image Generation · Imputation · +2
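For reference, the two architectures being contrasted, written in one common convention (with $\odot$ the element-wise product):

$$f_t = \sigma(W_f x_t + U_f h_{t-1} + b_f), \quad i_t = \sigma(W_i x_t + U_i h_{t-1} + b_i), \quad o_t = \sigma(W_o x_t + U_o h_{t-1} + b_o)$$
$$c_t = f_t \odot c_{t-1} + i_t \odot \tanh(W_c x_t + U_c h_{t-1} + b_c), \qquad h_t = o_t \odot \tanh(c_t) \qquad \text{(LSTM)}$$

$$z_t = \sigma(W_z x_t + U_z h_{t-1}), \quad r_t = \sigma(W_r x_t + U_r h_{t-1})$$
$$h_t = (1 - z_t) \odot h_{t-1} + z_t \odot \tanh(W_h x_t + U_h (r_t \odot h_{t-1})) \qquad \text{(GRU)}$$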

Variational Inference for Sparse Gaussian Process Modulated Hawkes Process

1 code implementation · 25 May 2019 · Rui Zhang, Christian Walder, Marian-Andrei Rizoiu

We validate the efficiency of our accelerated variational inference scheme and the practical utility of our tighter ELBO for model selection.

Model Optimization · Model Selection · +1
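For context, a Hawkes process is a self-exciting point process whose conditional intensity takes the standard form

$$\lambda(t) = \mu + \sum_{t_i < t} \phi(t - t_i),$$

where $\mu$ is the background rate and $\phi$ the triggering kernel; past events raise the rate of future ones. As the title indicates, this work modulates the intensity with a sparse Gaussian process rather than fixing a parametric form.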

Efficient Non-parametric Bayesian Hawkes Processes

1 code implementation · 8 Oct 2018 · Rui Zhang, Christian Walder, Marian-Andrei Rizoiu, Lexing Xie

On two large-scale Twitter diffusion datasets, we show that our methods outperform the current state-of-the-art in goodness-of-fit and that the time complexity is linear in the size of the dataset.

DecayNet: A Study on the Cell States of Long Short Term Memories

no code implementations · 27 Sep 2018 · Nicholas I.H. Kuo, Mehrtash T. Harandi, Hanna Suominen, Nicolas Fourrier, Christian Walder, Gabriela Ferraro

It is unclear whether the extensively applied long short-term memory (LSTM) is an optimised architecture for recurrent neural networks.

Monge blunts Bayes: Hardness Results for Adversarial Training

no code implementations · 8 Jun 2018 · Zac Cranko, Aditya Krishna Menon, Richard Nock, Cheng Soon Ong, Zhan Shi, Christian Walder

A key feature of our result is that it holds for all proper losses; for a popular subset of these, the optimisation of this central measure appears to be independent of the loss.

Self-Bounded Prediction Suffix Tree via Approximate String Matching

no code implementations · ICML 2018 · Dongwoo Kim, Christian Walder

Prediction suffix trees (PST) provide an effective tool for sequence modelling and prediction.
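Concretely, a PST conditions the next-symbol distribution on the longest context it has stored, backing off to shorter suffixes when the full context is unseen. A minimal dictionary-based Python sketch of that idea, with a fixed maximum depth and hypothetical names (the paper's contribution, self-bounded growth via approximate string matching, is not shown):

```python
from collections import defaultdict

class SuffixPredictor:  # illustrative name
    def __init__(self, max_context=3):
        self.max_context = max_context
        self.counts = defaultdict(lambda: defaultdict(int))

    def train(self, seq):
        # Record next-symbol counts for every suffix up to max_context.
        for i, sym in enumerate(seq):
            for k in range(min(self.max_context, i) + 1):
                self.counts[tuple(seq[i - k:i])][sym] += 1

    def predict(self, history):
        # Back off from the longest stored suffix to shorter ones.
        for k in range(min(self.max_context, len(history)), -1, -1):
            ctx = tuple(history[len(history) - k:])
            if ctx in self.counts:
                dist = self.counts[ctx]
                total = sum(dist.values())
                return {s: c / total for s, c in dist.items()}
        return {}

m = SuffixPredictor()
m.train("abracadabra")
print(m.predict("abr"))  # next-symbol distribution after context "abr"
```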

Computer Assisted Composition with Recurrent Neural Networks

no code implementations · 1 Dec 2016 · Christian Walder, Dongwoo Kim

For the case of highly permissive constraint sets, we find that sampling is to be preferred due to the overly regular nature of the optimisation-based results.

Symbolic Music Data Version 1.0

no code implementations · 8 Jun 2016 · Christian Walder

We also define training, testing and validation splits for the new dataset, based on a clustering scheme which we also describe.

BIG-bench Machine Learning · Clustering

Modelling Symbolic Music: Beyond the Piano Roll

no code implementations · 4 Jun 2016 · Christian Walder

In this paper, we consider the problem of probabilistically modelling symbolic music data.

Sound

Diffeomorphic Dimensionality Reduction

no code implementations · NeurIPS 2008 · Christian Walder, Bernhard Schölkopf

This paper introduces a new approach to constructing meaningful lower-dimensional representations of sets of data points.

Dimensionality Reduction
