Search Results for author: Ilia Sucholutsky

Found 15 papers, 5 papers with code

Around the world in 60 words: A generative vocabulary test for online research

no code implementations • 3 Feb 2023 • Pol van Rijn, Yue Sun, Harin Lee, Raja Marjieh, Ilia Sucholutsky, Francesca Lanzarini, Elisabeth André, Nori Jacoby

Six behavioral experiments (N=236) in six countries and eight languages show that (a) our test can distinguish between native speakers of closely related languages, (b) the test is reliable ($r=0.82$), and (c) performance strongly correlates with existing tests (LexTale) and self-reports.

What Language Reveals about Perception: Distilling Psychophysical Knowledge from Large Language Models

no code implementations • 2 Feb 2023 • Raja Marjieh, Ilia Sucholutsky, Pol van Rijn, Nori Jacoby, Thomas L. Griffiths

We reformulate this problem as that of distilling psychophysical information from text and show how this can be done by combining large language models (LLMs) with a classic psychophysical method based on similarity judgments.
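A minimal sketch of the general recipe described above, assuming a hypothetical `query_llm` helper in place of a real API client: elicit pairwise similarity ratings from the model, then embed the resulting similarity matrix with classical MDS. The prompt wording and rating scale are illustrative, not the paper's.

```python
import itertools
import numpy as np

def query_llm(prompt: str) -> float:
    # Hypothetical stand-in: plug in your LLM client here and parse a number.
    raise NotImplementedError

def similarity_matrix(words):
    # Elicit a similarity rating for every unordered pair of words.
    n = len(words)
    S = np.eye(n)
    for i, j in itertools.combinations(range(n), 2):
        prompt = (f"On a scale from 0 (unrelated) to 1 (identical), how "
                  f"similar are '{words[i]}' and '{words[j]}'? "
                  "Answer with a single number.")
        S[i, j] = S[j, i] = query_llm(prompt)
    return S

def classical_mds(S, dims=2):
    # Treat (1 - similarity) as a distance, double-center, eigendecompose.
    D2 = (1.0 - S) ** 2
    n = len(S)
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ D2 @ J
    vals, vecs = np.linalg.eigh(B)
    order = np.argsort(vals)[::-1][:dims]
    return vecs[:, order] * np.sqrt(np.maximum(vals[order], 0.0))
```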

Alignment with human representations supports robust few-shot learning

no code implementations • 27 Jan 2023 • Ilia Sucholutsky, Thomas L. Griffiths

Should we care whether AI systems have representations of the world that are similar to those of humans?

Few-Shot Learning

Human-in-the-Loop Mixup

no code implementations • 2 Nov 2022 • Katherine M. Collins, Umang Bhatt, Weiyang Liu, Vihari Piratla, Ilia Sucholutsky, Bradley Love, Adrian Weller

We focus on the synthetic data used in mixup: a powerful regularizer shown to improve model robustness, generalization, and calibration.
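For reference, a minimal sketch of standard mixup (Zhang et al., 2018), the augmentation this entry studies; the human-in-the-loop relabeling of the mixed examples is the paper's contribution and is not shown here.

```python
import numpy as np

def mixup_batch(x, y_onehot, alpha=0.2, rng=np.random.default_rng()):
    # Draw a mixing coefficient and a random partner for each example, then
    # take convex combinations of both inputs and one-hot labels.
    lam = rng.beta(alpha, alpha)
    perm = rng.permutation(len(x))
    x_mixed = lam * x + (1 - lam) * x[perm]
    y_mixed = lam * y_onehot + (1 - lam) * y_onehot[perm]
    return x_mixed, y_mixed
```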

On the Informativeness of Supervision Signals

no code implementations • 2 Nov 2022 • Ilia Sucholutsky, Raja Marjieh, Nori Jacoby, Thomas L. Griffiths

Learning transferable representations by training a classifier is a well-established technique in deep learning (e.g., ImageNet pretraining), but it remains an open theoretical question why this kind of task-specific pre-training should result in "good" representations that actually capture the underlying structure of the data.

Contrastive Learning • Informativeness +1
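A toy sketch of the setup this entry analyzes, assuming PyTorch and synthetic stand-in data: pretrain a classifier, then reuse its penultimate-layer activations as a transferable representation.

```python
import torch
import torch.nn as nn

# Backbone produces the representation; the head is task-specific.
backbone = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 16), nn.ReLU())
head = nn.Linear(16, 10)
model = nn.Sequential(backbone, head)

x = torch.randn(256, 32)                    # toy "pretraining" data
y = torch.randint(0, 10, (256,))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(100):                        # supervised pretraining
    opt.zero_grad()
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    opt.step()

with torch.no_grad():
    features = backbone(torch.randn(8, 32))  # representation for downstream tasks
```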

Analyzing Diffusion as Serial Reproduction

no code implementations • 29 Sep 2022 • Raja Marjieh, Ilia Sucholutsky, Thomas A. Langlois, Nori Jacoby, Thomas L. Griffiths

Diffusion models are a class of generative models that learn to synthesize samples by inverting a diffusion process that gradually maps data into noise.

Scheduling
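For context, the forward (noising) half of that process in its standard DDPM closed form, $x_t = \sqrt{\bar\alpha_t}\,x_0 + \sqrt{1-\bar\alpha_t}\,\epsilon$; the schedule below is the common linear one, and the paper's serial-reproduction analysis is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 1000
betas = np.linspace(1e-4, 0.02, T)        # standard linear noise schedule
alpha_bar = np.cumprod(1.0 - betas)       # cumulative signal retention

def noise(x0, t):
    # Closed form for t steps of the forward diffusion process.
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps

x0 = np.ones(5)
print(noise(x0, 0))       # nearly unchanged data
print(noise(x0, T - 1))   # approximately pure Gaussian noise
```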

Words are all you need? Language as an approximation for human similarity judgments

no code implementations • 8 Jun 2022 • Raja Marjieh, Pol van Rijn, Ilia Sucholutsky, Theodore R. Sumers, Harin Lee, Thomas L. Griffiths, Nori Jacoby

Based on the results of this comprehensive study, we provide a concise guide for researchers interested in collecting or approximating human similarity data.

Contrastive Learning • Information Retrieval +2

Predicting Human Similarity Judgments Using Large Language Models

no code implementations • 9 Feb 2022 • Raja Marjieh, Ilia Sucholutsky, Theodore R. Sumers, Nori Jacoby, Thomas L. Griffiths

Similarity judgments provide a well-established method for accessing mental representations, with applications in psychology, neuroscience and machine learning.

Can Humans Do Less-Than-One-Shot Learning?

no code implementations • 9 Feb 2022 • Maya Malaviya, Ilia Sucholutsky, Kerem Oktar, Thomas L. Griffiths

Being able to learn from small amounts of data is a key characteristic of human intelligence, but exactly how small?

One-Shot Learning

One Line To Rule Them All: Generating LO-Shot Soft-Label Prototypes

2 code implementations • 15 Feb 2021 • Ilia Sucholutsky, Nam-Hwui Kim, Ryan P. Browne, Matthias Schonlau

We propose a novel, modular method for generating soft-label prototypical lines that still maintains representational accuracy even when there are fewer prototypes than the number of classes in the data.

One-Shot Learning

Optimal 1-NN Prototypes for Pathological Geometries

2 code implementations • 31 Oct 2020 • Ilia Sucholutsky, Matthias Schonlau

Using prototype methods to reduce the size of training datasets can drastically reduce the computational cost of classification with instance-based learning algorithms like the k-Nearest Neighbour classifier.
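A minimal sketch of the baseline idea, using class means as prototypes (a simple illustrative choice; the paper concerns where prototypes should optimally be placed):

```python
import numpy as np

def fit_prototypes(X, y):
    # One prototype per class: the class mean (illustrative choice).
    classes = np.unique(y)
    protos = np.stack([X[y == c].mean(axis=0) for c in classes])
    return protos, classes

def predict_1nn(protos, classes, X_new):
    # Classify each point by its nearest prototype.
    d = np.linalg.norm(X_new[:, None, :] - protos[None, :, :], axis=2)
    return classes[np.argmin(d, axis=1)]

X = np.vstack([np.random.randn(50, 2), np.random.randn(50, 2) + 3])
y = np.array([0] * 50 + [1] * 50)
protos, classes = fit_prototypes(X, y)        # 100 training points -> 2 prototypes
print(predict_1nn(protos, classes, np.array([[0.0, 0.0], [3.0, 3.0]])))
```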

SecDD: Efficient and Secure Method for Remotely Training Neural Networks

1 code implementation • 19 Sep 2020 • Ilia Sucholutsky, Matthias Schonlau

We leverage what are typically considered the worst qualities of deep learning algorithms - high computational cost, requirement for large data, no explainability, high dependence on hyper-parameter choice, overfitting, and vulnerability to adversarial perturbations - in order to create a method for the secure and efficient training of remotely deployed neural networks over unsecured channels.

'Less Than One'-Shot Learning: Learning N Classes From M<N Samples

4 code implementations • 17 Sep 2020 • Ilia Sucholutsky, Matthias Schonlau

We propose the 'less than one'-shot learning task where models must learn $N$ new classes given only $M<N$ examples and we show that this is achievable with the help of soft labels.

One-Shot Learning
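A toy illustration of why soft labels make $M<N$ achievable: two prototypes whose labels are distributions over three classes, classified with distance-weighted soft-label kNN, produce three distinct class regions. The label values below are illustrative, not taken from the paper.

```python
import numpy as np

protos = np.array([-1.0, 1.0])                 # M = 2 prototype locations in 1-D
soft = np.array([[0.55, 0.00, 0.45],           # soft labels over N = 3 classes
                 [0.00, 0.55, 0.45]])

def predict(x):
    # Inverse-distance weights over prototypes, then argmax of the blended labels.
    w = 1.0 / np.maximum(np.abs(x - protos), 1e-9)
    return np.argmax(w @ soft)

for x in (-0.9, 0.0, 0.9):
    print(x, "->", predict(x))   # three distinct classes from two prototypes
```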

Soft-Label Dataset Distillation and Text Dataset Distillation

4 code implementations • 6 Oct 2019 • Ilia Sucholutsky, Matthias Schonlau

We propose to simultaneously distill both images and their labels, thus assigning each synthetic sample a 'soft' label (a distribution of labels).

Data Summarization • Image Classification +1
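A sketch of how distilled samples with soft labels are consumed at training time, assuming PyTorch: the loss is cross-entropy against each sample's label distribution. The distilled data here are random placeholders; producing them is the distillation procedure the paper proposes.

```python
import torch
import torch.nn as nn

x_syn = torch.randn(10, 784)                       # 10 distilled "images" (placeholders)
y_syn = torch.softmax(torch.randn(10, 10), dim=1)  # soft labels: each row sums to 1

model = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
for _ in range(50):
    opt.zero_grad()
    log_probs = torch.log_softmax(model(x_syn), dim=1)
    loss = -(y_syn * log_probs).sum(dim=1).mean()  # cross-entropy with soft targets
    loss.backward()
    opt.step()
```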

Deep Learning for System Trace Restoration

no code implementations • 10 Apr 2019 • Ilia Sucholutsky, Apurva Narayan, Matthias Schonlau, Sebastian Fischmeister

The output of the model will be a close reconstruction of the true data, and can be fed to algorithms that rely on clean data.

Anomaly Detection
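A hedged sketch of the restoration setup, assuming PyTorch and toy data: a sequence model is trained to map corrupted traces back to clean ones, so its output can feed downstream algorithms that expect clean data. The LSTM architecture here is a plausible stand-in, not necessarily the paper's exact model.

```python
import torch
import torch.nn as nn

class TraceRestorer(nn.Module):
    def __init__(self, dim=8, hidden=32):
        super().__init__()
        self.rnn = nn.LSTM(dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, dim)

    def forward(self, x):
        h, _ = self.rnn(x)          # per-timestep hidden states
        return self.out(h)          # reconstructed trace

clean = torch.randn(16, 50, 8)                 # toy clean traces
corrupted = clean + 0.3 * torch.randn_like(clean)
model = TraceRestorer()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(100):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(corrupted), clean)
    loss.backward()
    opt.step()
```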
