no code implementations • ICLR 2019 • Florin Schimbinschi, Christian Walder, Sarah Erfani, James Bailey
Learning synthesizers and generating music in the raw audio domain is a challenging task.
no code implementations • 23 Jun 2024 • Yuli Liu, Christian Walder, Lexing Xie
There are many personalized ranking approaches for item recommendation from implicit feedback, such as Bayesian Personalized Ranking (BPR) and listwise ranking.
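As an aside, the BPR objective mentioned above has a simple form: for a user, the score of an observed (positive) item should exceed that of an unobserved (negative) item. A minimal illustrative sketch (the function name and scores are assumptions, not from the paper):

```python
import numpy as np

def bpr_loss(x_ui, x_uj):
    """BPR pairwise objective: -ln sigma(x_ui - x_uj), where x_ui is the
    predicted score of a positive (observed) item and x_uj that of a
    sampled negative item."""
    return -np.log(1.0 / (1.0 + np.exp(-(x_ui - x_uj))))

# Ranking the positive item further above the negative one lowers the loss.
assert bpr_loss(2.0, 0.0) < bpr_loss(0.5, 0.0)
```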
no code implementations • 6 Mar 2024 • Sean Lamont, Michael Norrish, Amir Dezfouli, Christian Walder, Paul Montague
We also provide a qualitative analysis, illustrating that improved performance is associated with more semantically-aware embeddings.
2 code implementations • 5 Jun 2023 • Xinlei Niu, Christian Walder, Jing Zhang, Charles Patrick Martin
Using properties of the Gumbel distribution, we show the equivalence of the Gibbs distribution to a message-passing algorithm, and we give all the ingredients required for variational Bayesian inference of a latent path, namely Bayesian dynamic programming (BDP).
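The Gumbel property underlying arguments of this kind is the classic Gumbel-max trick: perturbing logits with i.i.d. Gumbel(0, 1) noise and taking the argmax yields an exact sample from the corresponding Gibbs (softmax) distribution. A small self-contained check (the specific logits are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
logits = np.array([0.5, 1.5, -0.2])
p = np.exp(logits) / np.exp(logits).sum()  # Gibbs / softmax distribution

# Gumbel-max trick: argmax(logits + Gumbel noise) ~ Categorical(p).
n = 200_000
gumbel = -np.log(-np.log(rng.random((n, logits.size))))
samples = np.argmax(logits + gumbel, axis=1)
freq = np.bincount(samples, minlength=logits.size) / n
assert np.allclose(freq, p, atol=0.01)  # empirical frequencies match p
```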
no code implementations • 30 May 2023 • Keerth Rathakumar, David Liebowitz, Christian Walder, Kristen Moore, Salil S. Kanhere
The disentangled representation is obtained by two novel mechanisms: (i) a dual branch architecture that separates image colour attributes from geometric attributes, and (ii) a new ELBO that trains the combined colour and geometry representations.
1 code implementation • 1 Mar 2023 • Daniel D. Johnson, Daniel Tarlow, Christian Walder
Large language models show impressive results at predicting structured text such as code, but also commonly introduce errors and hallucinations in their output.
no code implementations • 28 Feb 2023 • Shidi Li, Christian Walder, Alexander Soen, Lexing Xie, Miaomiao Liu
The sparse transformer can reduce the computational complexity of the self-attention layers to $O(n)$, whilst still being a universal approximator of continuous sequence-to-sequence functions.
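The O(n) claim comes from restricting each position's attention to a fixed-size neighbourhood rather than all n positions. A minimal sliding-window sketch of that idea (this is a generic illustration, not the paper's architecture; all names are assumptions):

```python
import numpy as np

def local_attention(q, k, v, w=2):
    """Self-attention restricted to a window of w neighbours on each side,
    so the cost is O(n * w) rather than the O(n^2) of dense attention."""
    n, d = q.shape
    out = np.zeros_like(v)
    for i in range(n):
        lo, hi = max(0, i - w), min(n, i + w + 1)
        scores = q[i] @ k[lo:hi].T / np.sqrt(d)
        weights = np.exp(scores - scores.max())
        weights /= weights.sum()          # softmax over the local window only
        out[i] = weights @ v[lo:hi]
    return out

rng = np.random.default_rng(0)
x = rng.standard_normal((6, 4))
y = local_attention(x, x, x)
assert y.shape == x.shape
```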
no code implementations • 27 Jan 2023 • Kevin Lam, Christian Walder, Spiridon Penev, Richard Nock
Existing methods do this by fitting an inverse canonical link function which monotonically maps $\mathbb{R}$ to $[0, 1]$ to estimate probabilities for binary problems.
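For binary problems the canonical example of such a function is the sigmoid, the inverse canonical link of the logistic model, which monotonically maps the real line to [0, 1]. A brief sketch (function name is illustrative):

```python
import numpy as np

def inverse_canonical_link(z):
    """Sigmoid: monotonically maps real-valued scores to probabilities."""
    return 1.0 / (1.0 + np.exp(-z))

z = np.linspace(-6, 6, 101)
p = inverse_canonical_link(z)
assert np.all((p >= 0) & (p <= 1))   # outputs stay in [0, 1]
assert np.all(np.diff(p) > 0)        # strictly monotone increasing
```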
1 code implementation • 25 Apr 2022 • Yuli Liu, Christian Walder, Lexing Xie
Sequential recommendation is a popular task in academic research that is close to real-world application scenarios: the goal is to predict a user's next action(s) based on their previous sequence of actions.
no code implementations • 15 Mar 2022 • Shidi Li, Christian Walder, Miaomiao Liu
This paper addresses the problem of unsupervised parts-aware point cloud generation with learned parts-based self-similarity.
no code implementations • 13 Oct 2021 • Shidi Li, Miaomiao Liu, Christian Walder
We achieve this with a simple modification of the Variational Auto-Encoder which yields a joint model of the point cloud itself along with a schematic representation of it as a combination of shape primitives.
1 code implementation • 13 Oct 2021 • Jing Zhang, Yuchao Dai, Mochu Xiang, Deng-Ping Fan, Peyman Moghadam, Mingyi He, Christian Walder, Kaihao Zhang, Mehrtash Harandi, Nick Barnes
Deep neural networks can be roughly divided into deterministic neural networks and stochastic neural networks. The former is usually trained to achieve a mapping from input space to output space via maximum likelihood estimation for the weights, which leads to deterministic predictions during testing.
no code implementations • 29 Sep 2021 • Nicholas I-Hsien Kuo, Mehrtash Harandi, Nicolas Fourrier, Gabriela Ferraro, Christian Walder, Hanna Suominen
Neural networks usually excel in learning a single task.
no code implementations • 16 Sep 2021 • Qiongkai Xu, Christian Walder, Chenchen Xu
In this paper, we first raise the challenge of evaluating the performance of both humans and models with respect to an unobserved oracle.
1 code implementation • 6 Mar 2021 • Nicholas I-Hsien Kuo, Mehrtash Harandi, Nicolas Fourrier, Christian Walder, Gabriela Ferraro, Hanna Suominen
Neural networks suffer from catastrophic forgetting and are unable to sequentially learn new tasks without guaranteed stationarity in the data distribution.
no code implementations • NeurIPS 2021 • Minchao Wu, Michael Norrish, Christian Walder, Amir Dezfouli
We propose a novel approach to interactive theorem-proving (ITP) using deep reinforcement learning.
no code implementations • 1 Jan 2021 • Nicholas I-Hsien Kuo, Mehrtash Harandi, Nicolas Fourrier, Christian Walder, Gabriela Ferraro, Hanna Suominen
Catastrophic forgetting occurs when a neural network is trained sequentially on multiple tasks – its weights will be continuously modified and, as a result, the network will lose its ability to solve a previously learned task.
1 code implementation • 18 Jul 2020 • Nicholas I-Hsien Kuo, Mehrtash Harandi, Nicolas Fourrier, Christian Walder, Gabriela Ferraro, Hanna Suominen
Learning to learn (L2L) trains a meta-learner to assist the learning of a task-specific base learner.
1 code implementation • NeurIPS 2020 • Christian Walder, Richard Nock
Loss functions are a cornerstone of machine learning and the starting point of most algorithms.
no code implementations • 25 Sep 2019 • Nicholas I-Hsien Kuo, Mehrtash T. Harandi, Nicolas Fourrier, Gabriela Ferraro, Christian Walder, Hanna Suominen
This paper contrasts the two canonical recurrent neural networks (RNNs), long short-term memory (LSTM) and the gated recurrent unit (GRU), to propose our novel lightweight RNN, Extrapolated Input for Network Simplification (EINS).
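The weight of a gated RNN cell is dominated by its gate count: an LSTM carries four gate-sized weight blocks and a GRU three, which is why the GRU is already the lighter of the two canonical cells. A quick parameter-count sketch (dimensions are illustrative, not from the paper):

```python
def rnn_param_count(n_in, n_hidden, n_gates):
    """Parameters of a gated RNN cell: each gate has an input weight matrix,
    a recurrent weight matrix and a bias vector."""
    return n_gates * (n_in * n_hidden + n_hidden * n_hidden + n_hidden)

lstm = rnn_param_count(100, 128, 4)  # LSTM: input, forget, output, cell gates
gru = rnn_param_count(100, 128, 3)   # GRU: update, reset, candidate gates
assert gru < lstm                    # GRU uses 3/4 of the LSTM's parameters
```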
no code implementations • 10 Sep 2019 • Chamin Hewa Koneputugodage, Rhys Healy, Sean Lamont, Ian Mallett, Matt Brown, Matt Walters, Ushini Attanayake, Libo Zhang, Roger T. Dean, Alexander Hunter, Charles Gretton, Christian Walder
We address the problem of combining sequence models of symbolic music with user-defined constraints.
1 code implementation • 25 May 2019 • Rui Zhang, Christian Walder, Marian-Andrei Rizoiu
We validate the efficiency of our accelerated variational inference scheme and the practical utility of our tighter ELBO for model selection.
1 code implementation • 8 Oct 2018 • Rui Zhang, Christian Walder, Marian-Andrei Rizoiu, Lexing Xie
On two large-scale Twitter diffusion datasets, we show that our methods outperform the current state-of-the-art in goodness-of-fit and that the time complexity is linear in the size of the dataset.
no code implementations • 27 Sep 2018 • Nicholas I.H. Kuo, Mehrtash T. Harandi, Hanna Suominen, Nicolas Fourrier, Christian Walder, Gabriela Ferraro
It is unclear whether the extensively applied long short-term memory (LSTM) is an optimised architecture for recurrent neural networks.
no code implementations • 8 Jun 2018 • Zac Cranko, Aditya Krishna Menon, Richard Nock, Cheng Soon Ong, Zhan Shi, Christian Walder
A key feature of our result is that it holds for all proper losses, and for a popular subset of these, the optimisation of this central measure appears to be independent of the loss.
no code implementations • ICML 2018 • Dongwoo Kim, Christian Walder
Prediction suffix trees (PST) provide an effective tool for sequence modelling and prediction.
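A PST predicts the next symbol from the longest context (suffix of the history) observed during training. A minimal dictionary-based sketch of that idea (a generic illustration, not the paper's model; all names are assumptions):

```python
from collections import defaultdict, Counter

def train_pst(seq, max_depth=3):
    """Collect next-symbol counts for every context (suffix) up to max_depth."""
    counts = defaultdict(Counter)
    for i in range(len(seq)):
        for d in range(max_depth + 1):
            if i - d < 0:
                break
            counts[seq[i - d:i]][seq[i]] += 1
    return counts

def predict(counts, history, max_depth=3):
    """Predict the next symbol using the longest matching trained context."""
    for d in range(min(max_depth, len(history)), -1, -1):
        ctx = history[len(history) - d:]
        if ctx in counts:
            return counts[ctx].most_common(1)[0][0]

model = train_pst("abcabcabd")
assert predict(model, "ab") == "c"  # "ab" was followed by "c" twice, "d" once
```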
no code implementations • 1 Dec 2016 • Christian Walder, Dongwoo Kim
For the case of highly permissive constraint sets, we find that sampling is to be preferred due to the overly regular nature of the optimisation-based results.
no code implementations • 8 Jun 2016 • Christian Walder
We also define training, testing and validation splits for the new dataset, based on a clustering scheme which we also describe.
no code implementations • 4 Jun 2016 • Christian Walder
In this paper, we consider the problem of probabilistically modelling symbolic music data.
no code implementations • NeurIPS 2008 • Christian Walder, Bernhard Schölkopf
This paper introduces a new approach to constructing meaningful lower dimensional representations of sets of data points.