2 code implementations • 15 Feb 2021 • Ilia Sucholutsky, Nam-Hwui Kim, Ryan P. Browne, Matthias Schonlau
We propose a novel, modular method for generating soft-label prototypical lines that still maintains representational accuracy even when there are fewer prototypes than the number of classes in the data.
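The core idea can be sketched in a few lines: a prototypical line carries a soft label at each endpoint, and the label distribution at any point on the line is a linear blend of the two. The function name, the scalar projection, and the specific label mixtures below are illustrative assumptions, not the authors' exact formulation; they only show how one line can represent more classes than it has endpoints.

```python
# Hedged sketch of a soft-label prototypical line: endpoint label
# distributions are linearly interpolated along the segment, and a query
# is assigned the arg-max class of the blended distribution.

def classify_on_line(x, a, b, soft_a, soft_b):
    """Project scalar query x onto segment [a, b] and blend endpoint labels."""
    t = min(max((x - a) / (b - a), 0.0), 1.0)   # clamped position along segment
    blended = {lab: (1 - t) * soft_a.get(lab, 0.0) + t * soft_b.get(lab, 0.0)
               for lab in set(soft_a) | set(soft_b)}
    return max(blended, key=blended.get)

# One line, two endpoints, three decision regions:
soft_a = {"a": 0.55, "c": 0.45}
soft_b = {"b": 0.55, "c": 0.45}
print(classify_on_line(0.0, 0.0, 1.0, soft_a, soft_b))  # "a" near one end
print(classify_on_line(0.5, 0.0, 1.0, soft_a, soft_b))  # "c" in the middle
print(classify_on_line(1.0, 0.0, 1.0, soft_a, soft_b))  # "b" near the other end
```

Because the shared "c" mass stays constant while the endpoint-specific masses decay linearly, a third class emerges in the middle of the segment, which is how fewer prototypes than classes can still partition the space.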
2 code implementations • 31 Oct 2020 • Ilia Sucholutsky, Matthias Schonlau
Using prototype methods to reduce the size of training datasets can drastically reduce the computational cost of classification with instance-based learning algorithms like the k-Nearest Neighbour classifier.
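For context on what a prototype method does here, the sketch below implements Hart's classic Condensed Nearest Neighbour rule, one standard way to shrink a training set while preserving 1-NN behaviour. It is shown as general background, not as the authors' specific method; the function and variable names are illustrative.

```python
# Hedged sketch: Hart's Condensed Nearest Neighbour (CNN) keeps only the
# training points that a 1-NN classifier built on the kept set would
# misclassify, iterating until the kept set stabilises.
import math

def condense(points, labels):
    kept = [0]                                   # seed with the first point
    changed = True
    while changed:
        changed = False
        for i, (x, y) in enumerate(zip(points, labels)):
            if i in kept:
                continue
            nearest = min(kept, key=lambda j: math.dist(x, points[j]))
            if labels[nearest] != y:             # misclassified -> must keep it
                kept.append(i)
                changed = True
    return kept

pts = [(0.0, 0.0), (0.1, 0.0), (0.2, 0.1), (5.0, 5.0), (5.1, 5.0)]
labs = ["a", "a", "a", "b", "b"]
print(condense(pts, labs))   # [0, 3] -- one prototype per cluster suffices
```

Classifying against the 2 kept prototypes instead of all 5 training points gives the computational saving the entry describes, at the cost of a (usually small) change in the decision boundary.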
1 code implementation • 19 Sep 2020 • Ilia Sucholutsky, Matthias Schonlau
We leverage what are typically considered the worst qualities of deep learning algorithms - high computational cost, the need for large amounts of data, lack of explainability, strong dependence on hyper-parameter choice, overfitting, and vulnerability to adversarial perturbations - to create a method for the secure and efficient training of remotely deployed neural networks over unsecured channels.
4 code implementations • 17 Sep 2020 • Ilia Sucholutsky, Matthias Schonlau
We propose the 'less than one'-shot learning task, where models must learn $N$ new classes given only $M<N$ examples, and we show that this is achievable with the help of soft labels.
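The $M<N$ claim can be made concrete with a tiny sketch: under a distance-weighted nearest-neighbour rule, two soft-label prototypes can carve out three decision regions. The inverse-distance weighting and the specific label mixtures are illustrative assumptions, not values from the paper.

```python
# Hedged sketch of the 'less than one'-shot idea: M = 2 prototypes with soft
# labels yield N = 3 classes under an inverse-distance-weighted vote.
import math

def soft_knn(x, prototypes):
    """prototypes: list of (location, soft_label_dict) pairs."""
    scores = {}
    for loc, soft in prototypes:
        w = 1.0 / (math.dist(x, loc) + 1e-9)     # inverse-distance weight
        for lab, p in soft.items():
            scores[lab] = scores.get(lab, 0.0) + w * p
    return max(scores, key=scores.get)

protos = [((0.0, 0.0), {"a": 0.6, "c": 0.4}),
          ((4.0, 0.0), {"b": 0.6, "c": 0.4})]
print(soft_knn((0.5, 0.0), protos))  # "a": the near prototype dominates
print(soft_knn((2.0, 0.0), protos))  # "c": the shared soft mass wins midway
print(soft_knn((3.5, 0.0), protos))  # "b"
```

With hard (one-hot) labels the same two prototypes could never produce more than two classes, which is why soft labels are the key ingredient of the task.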
4 code implementations • 6 Oct 2019 • Ilia Sucholutsky, Matthias Schonlau
We propose to simultaneously distill both images and their labels, thus assigning each synthetic sample a 'soft' label (a distribution over labels).
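To illustrate what a soft label changes during training, the sketch below compares the cross-entropy loss under a one-hot target and under a distilled label distribution; the soft target is minimised exactly when the model's prediction matches the whole mixture, so one synthetic sample teaches the model about several classes at once. The class names and probability values are illustrative assumptions.

```python
# Hedged sketch: cross-entropy against a soft target vs. a one-hot target.
import math

def cross_entropy(soft_target, predicted_probs):
    """-sum_c p(c) * log q(c) over the classes the target puts mass on."""
    return -sum(p * math.log(predicted_probs[lab])
                for lab, p in soft_target.items())

hard = {"3": 1.0}                    # classic one-hot label
soft = {"3": 0.7, "8": 0.3}          # distilled label: a "3" that looks like an "8"
pred = {"3": 0.6, "8": 0.3, "5": 0.1}

print(cross_entropy(hard, pred))     # penalises only the "3" probability
print(cross_entropy(soft, pred))     # also rewards mass on the lookalike "8"
```

The gradient of the soft-target loss pushes probability toward both classes in proportion to their label mass, which is how a distilled dataset with few samples can encode inter-class structure.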
no code implementations • 10 Apr 2019 • Ilia Sucholutsky, Apurva Narayan, Matthias Schonlau, Sebastian Fischmeister
The model's output is a close reconstruction of the true data and can be fed to algorithms that rely on clean data.
no code implementations • 15 Feb 2017 • Hyukjun Gweon, Matthias Schonlau, Stefan Steiner
In this paper, we propose a novel approach, Nearest Labelset using Double Distances (NLDD), which predicts the labelset observed in the training data that minimizes a weighted sum of the distances to the new instance in both the feature space and the label space.
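The decision rule described above can be sketched as follows. The distance metrics (Euclidean in both spaces), the fixed weights, and all names are illustrative assumptions; the actual NLDD fits the weighting from data and uses per-label probability estimates from a base classifier.

```python
# Hedged sketch of an NLDD-style decision rule: among labelsets seen in
# training, pick the one minimising w_feat * d(features) + w_lab * d(labels).
import math

def nldd_predict(x_new, y_prob, train, w_feat=1.0, w_lab=1.0):
    """train: list of (features, labelset); y_prob: per-label marginal predictions."""
    best, best_cost = None, float("inf")
    for feats, labelset in train:
        d_x = math.dist(x_new, feats)                          # feature-space distance
        d_y = math.dist(y_prob, [float(v) for v in labelset])  # label-space distance
        cost = w_feat * d_x + w_lab * d_y
        if cost < best_cost:
            best, best_cost = labelset, cost
    return best

train = [((0.0, 0.0), (1, 0, 1)),
         ((3.0, 3.0), (0, 1, 0))]
print(nldd_predict((0.2, 0.1), (0.9, 0.1, 0.8), train))  # (1, 0, 1)
```

Restricting predictions to labelsets actually observed in training is what distinguishes this family of methods from independent per-label thresholding, which can emit label combinations that never co-occur.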