Search Results for author: Ilia Sucholutsky

Found 26 papers, 6 papers with code

Soft-Label Dataset Distillation and Text Dataset Distillation

3 code implementations · 6 Oct 2019 · Ilia Sucholutsky, Matthias Schonlau

We propose to simultaneously distill both images and their labels, thus assigning each synthetic sample a 'soft' label (a distribution of labels).

Data Summarization · Image Classification +1
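
A minimal sketch of the soft-label idea in the abstract above: each synthetic sample carries a label distribution rather than a one-hot class, and a student model is trained with cross-entropy against those soft targets. The two distilled points, the three-class setup, and the linear student below are illustrative assumptions, not the paper's actual distillation procedure (which learns the synthetic images and labels jointly).

```python
import numpy as np

# Two synthetic "distilled" samples in 2-D, each with a SOFT label:
# a probability distribution over 3 classes (hypothetical values).
X = np.array([[0.0, 1.0],
              [1.0, 0.0]])
Y = np.array([[0.7, 0.2, 0.1],    # mostly class 0, some class 1
              [0.1, 0.2, 0.7]])   # mostly class 2, some class 1

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(2, 3))  # toy linear student model

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Train with cross-entropy against the soft targets (not one-hot labels).
for _ in range(500):
    P = softmax(X @ W)
    grad = X.T @ (P - Y) / len(X)   # gradient of soft cross-entropy
    W -= 0.5 * grad

print(softmax(X @ W))  # student approximately reproduces the soft labels
```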

'Less Than One'-Shot Learning: Learning N Classes From M<N Samples

3 code implementations · 17 Sep 2020 · Ilia Sucholutsky, Matthias Schonlau

We propose the 'less than one'-shot learning task, where models must learn $N$ new classes given only $M<N$ examples, and we show that this is achievable with the help of soft labels.

One-Shot Learning
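
A toy illustration of the claim: with soft labels, a distance-weighted soft-label prototype rule (in the spirit of the paper's SLaPkNN classifier) can separate $N=3$ classes from $M=2$ prototypes. The prototype positions and label distributions below are made up for illustration; the paper analyzes when such configurations exist.

```python
import numpy as np

# Two prototypes (hypothetical), each carrying a distribution over 3 classes.
protos = np.array([[0.0], [1.0]])            # 1-D positions
soft   = np.array([[0.6, 0.4, 0.0],          # near x=0: class 0, some class 1
                   [0.0, 0.4, 0.6]])         # near x=1: class 2, some class 1

def predict(x):
    # Inverse-distance-weighted sum of the prototypes' soft labels.
    d = np.abs(protos[:, 0] - x) + 1e-9
    w = 1.0 / d
    p = (w[:, None] * soft).sum(axis=0)
    return int(np.argmax(p))

for x in [0.0, 0.5, 1.0]:
    print(x, "->", predict(x))
# 0.0 -> 0, 0.5 -> 1, 1.0 -> 2 : three classes from two prototypes
```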

Optimal 1-NN Prototypes for Pathological Geometries

1 code implementation · 31 Oct 2020 · Ilia Sucholutsky, Matthias Schonlau

Using prototype methods to shrink training datasets can drastically reduce the computational cost of classification with instance-based learning algorithms like the k-Nearest Neighbour classifier.
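
The simplest instance of that idea: replace each class by a single prototype (here the class centroid, a common but not necessarily optimal choice; the paper studies geometries where such choices fail) and classify with 1-NN against the prototypes only. The toy blobs below are made up.

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy training set: two Gaussian blobs (illustrative data).
X0 = rng.normal([0.0, 0.0], 0.5, size=(100, 2))
X1 = rng.normal([3.0, 3.0], 0.5, size=(100, 2))

# Reduce 200 training points to 2 prototypes: the per-class centroids.
protos = np.stack([X0.mean(axis=0), X1.mean(axis=0)])

def predict_1nn(x):
    # 1-NN against the prototypes instead of the full training set.
    return int(np.argmin(((protos - x) ** 2).sum(axis=1)))

print(predict_1nn(np.array([0.2, -0.1])))  # -> 0
print(predict_1nn(np.array([2.8, 3.1])))   # -> 1
```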

One Line To Rule Them All: Generating LO-Shot Soft-Label Prototypes

1 code implementation · 15 Feb 2021 · Ilia Sucholutsky, Nam-Hwui Kim, Ryan P. Browne, Matthias Schonlau

We propose a novel, modular method for generating soft-label prototypical lines that maintains representational accuracy even when there are fewer prototypes than classes in the data.

One-Shot Learning
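
One way to picture a soft-label prototypical line (a heavily hedged sketch, not the paper's algorithm): place a line segment in feature space, attach a soft label to each endpoint, and read off a query's label distribution at its projection onto the segment. With extra label mass for a third class concentrated mid-line (a hypothetical choice below), one line can represent more classes than it has endpoints.

```python
import numpy as np

# One prototypical line segment from a to b (positions are illustrative).
a, b = np.array([0.0, 0.0]), np.array([4.0, 0.0])
# Endpoint soft labels over 3 classes; class 1 lives mid-line by construction.
la, lb = np.array([1.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0])

def soft_label(x):
    # Project x onto the segment, then interpolate the endpoint labels.
    t = np.clip((x - a) @ (b - a) / ((b - a) @ (b - a)), 0.0, 1.0)
    p = (1 - t) * la + t * lb
    # Hypothetical "middle bump" giving class 1 its mass near the centre.
    p = p + np.array([0.0, 1.0, 0.0]) * (1 - abs(2 * t - 1))
    return p / p.sum()

for x in [np.array([0.0, 0.5]), np.array([2.0, 0.5]), np.array([4.0, 0.5])]:
    print(x, np.round(soft_label(x), 2))
# argmax moves 0 -> 1 -> 2 along a single line: 3 classes, 1 line
```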

SecDD: Efficient and Secure Method for Remotely Training Neural Networks

1 code implementation · 19 Sep 2020 · Ilia Sucholutsky, Matthias Schonlau

We leverage what are typically considered the worst qualities of deep learning algorithms (high computational cost, the need for large datasets, lack of explainability, strong dependence on hyper-parameter choice, overfitting, and vulnerability to adversarial perturbations) to create a method for the secure and efficient training of remotely deployed neural networks over unsecured channels.

Human-in-the-Loop Mixup

1 code implementation · 2 Nov 2022 · Katherine M. Collins, Umang Bhatt, Weiyang Liu, Vihari Piratla, Ilia Sucholutsky, Bradley Love, Adrian Weller

We focus on the synthetic data used in mixup: a powerful regularizer shown to improve model robustness, generalization, and calibration.
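
For reference, the mixup operation the paper studies, in its standard form (Zhang et al.): convex combinations of input pairs and their one-hot labels, with the mixing weight drawn from a Beta distribution. The human-in-the-loop relabeling is the paper's contribution and is not shown here.

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.2, rng=np.random.default_rng()):
    """Standard mixup: blend two examples and their one-hot labels."""
    lam = rng.beta(alpha, alpha)
    return lam * x1 + (1 - lam) * x2, lam * y1 + (1 - lam) * y2

x_mix, y_mix = mixup(np.ones(4), np.array([1.0, 0.0]),
                     np.zeros(4), np.array([0.0, 1.0]))
print(x_mix, y_mix)  # synthetic point with a soft, interpolated label
```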

Deep Learning for System Trace Restoration

no code implementations · 10 Apr 2019 · Ilia Sucholutsky, Apurva Narayan, Matthias Schonlau, Sebastian Fischmeister

The model's output is a close reconstruction of the true data and can be fed to algorithms that rely on clean data.

Anomaly Detection

Can Humans Do Less-Than-One-Shot Learning?

no code implementations · 9 Feb 2022 · Maya Malaviya, Ilia Sucholutsky, Kerem Oktar, Thomas L. Griffiths

Being able to learn from small amounts of data is a key characteristic of human intelligence, but exactly how small?

One-Shot Learning

Predicting Human Similarity Judgments Using Large Language Models

no code implementations · 9 Feb 2022 · Raja Marjieh, Ilia Sucholutsky, Theodore R. Sumers, Nori Jacoby, Thomas L. Griffiths

Similarity judgments provide a well-established method for accessing mental representations, with applications in psychology, neuroscience and machine learning.
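
A hedged sketch of one way to obtain model-based similarity predictions, using cosine similarity over embedding vectors. The paper itself elicits judgments from large language models directly, so treat this as a simpler stand-in rather than its method; the vectors below are made up.

```python
import numpy as np

# Hypothetical embedding vectors for three stimuli (in practice these would
# come from a language model; the numbers here are invented).
emb = {
    "dog": np.array([0.9, 0.1, 0.3]),
    "wolf": np.array([0.8, 0.2, 0.35]),
    "banana": np.array([0.1, 0.9, 0.2]),
}

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

print(cosine(emb["dog"], emb["wolf"]))    # high: predicted "similar"
print(cosine(emb["dog"], emb["banana"]))  # lower: predicted "dissimilar"
```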

Words are all you need? Language as an approximation for human similarity judgments

no code implementations · 8 Jun 2022 · Raja Marjieh, Pol van Rijn, Ilia Sucholutsky, Theodore R. Sumers, Harin Lee, Thomas L. Griffiths, Nori Jacoby

Based on the results of this comprehensive study, we provide a concise guide for researchers interested in collecting or approximating human similarity data.

Contrastive Learning · Information Retrieval +2

Analyzing Diffusion as Serial Reproduction

no code implementations · 29 Sep 2022 · Raja Marjieh, Ilia Sucholutsky, Thomas A. Langlois, Nori Jacoby, Thomas L. Griffiths

Diffusion models are a class of generative models that learn to synthesize samples by inverting a diffusion process that gradually maps data into noise.

Scheduling
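
The forward process referred to in the abstract, in its standard DDPM closed form: x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * eps, where the generative model is trained to invert this mapping. The paper's serial-reproduction analysis is not shown; this is only the standard forward process for orientation.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 1000
betas = np.linspace(1e-4, 0.02, T)       # standard linear noise schedule
alpha_bar = np.cumprod(1.0 - betas)      # cumulative signal retention

def q_sample(x0, t):
    """Forward diffusion: sample x_t given clean data x0 (closed form)."""
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps

x0 = np.array([1.0, -1.0])
for t in [0, 500, 999]:
    print(t, q_sample(x0, t))  # signal fades toward pure noise as t grows
```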

Large language models predict human sensory judgments across six modalities

no code implementations · 2 Feb 2023 · Raja Marjieh, Ilia Sucholutsky, Pol van Rijn, Nori Jacoby, Thomas L. Griffiths

Determining the extent to which the perceptual world can be recovered from language is a longstanding problem in philosophy and cognitive science.

Philosophy

Around the world in 60 words: A generative vocabulary test for online research

no code implementations · 3 Feb 2023 · Pol van Rijn, Yue Sun, Harin Lee, Raja Marjieh, Ilia Sucholutsky, Francesca Lanzarini, Elisabeth André, Nori Jacoby

Six behavioral experiments (N=236) in six countries and eight languages show that (a) our test can distinguish between native speakers of closely related languages, (b) the test is reliable ($r=0.82$), and (c) performance strongly correlates with existing tests (LexTale) and self-reports.

Cultural Vocal Bursts Intensity Prediction

Human Uncertainty in Concept-Based AI Systems

no code implementations · 22 Mar 2023 · Katherine M. Collins, Matthew Barker, Mateo Espinosa Zarlenga, Naveen Raman, Umang Bhatt, Mateja Jamnik, Ilia Sucholutsky, Adrian Weller, Krishnamurthy Dvijotham

We study how existing concept-based models deal with uncertain interventions from humans using two novel datasets: UMNIST, a visual dataset with controlled simulated uncertainty based on the MNIST dataset, and CUB-S, a relabeling of the popular CUB concept dataset with rich, densely-annotated soft labels from humans.

Decision Making
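
A hedged sketch of the uncertain-intervention setting: in a concept-bottleneck model, a human can overwrite a predicted concept with a soft probability reflecting their own confidence rather than a hard 0/1 value. The linear head and all numbers below are hypothetical, not the paper's models or datasets.

```python
import numpy as np

# Concept-bottleneck sketch: concept probabilities -> class label,
# via a made-up linear head over 2 concepts and 2 classes.
W = np.array([[2.0, -1.0],    # concept 0 pushes toward class 0
              [-1.0, 2.0]])   # concept 1 pushes toward class 1

def predict(concepts):
    logits = concepts @ W
    e = np.exp(logits - logits.max())
    return e / e.sum()

c_hat = np.array([0.9, 0.2])  # model's predicted concept probabilities
print(predict(c_hat))

# Uncertain human intervention: replace concept 1 with a SOFT value (0.6),
# not a hard 1.0, encoding the annotator's uncertainty.
c_int = c_hat.copy()
c_int[1] = 0.6
print(predict(c_int))
```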

Dimensions of Disagreement: Unpacking Divergence and Misalignment in Cognitive Science and Artificial Intelligence

no code implementations · 3 Oct 2023 · Kerem Oktar, Ilia Sucholutsky, Tania Lombrozo, Thomas L. Griffiths

The increasing prevalence of artificial agents creates a correspondingly increasing need to manage disagreements between humans and artificial agents, as well as between artificial agents themselves.

Concept Alignment as a Prerequisite for Value Alignment

no code implementations · 30 Oct 2023 · Sunayana Rane, Mark Ho, Ilia Sucholutsky, Thomas L. Griffiths

Value alignment is essential for building AI systems that can safely and reliably interact with people.

Concept Alignment

Learning Human-like Representations to Enable Learning Human Values

no code implementations · 21 Dec 2023 · Andrea Wynn, Ilia Sucholutsky, Thomas L. Griffiths

We propose that this kind of representational alignment between machine learning (ML) models and humans can also support value alignment, allowing ML systems to conform to human values and societal norms.

Ethics · Few-Shot Learning +1

Concept Alignment

no code implementations · 9 Jan 2024 · Sunayana Rane, Polyphony J. Bruna, Ilia Sucholutsky, Christopher Kello, Thomas L. Griffiths

Discussion of AI alignment (alignment between humans and AI systems) has focused on value alignment, broadly referring to creating AI systems that share human values.

Concept Alignment · Philosophy

Measuring Implicit Bias in Explicitly Unbiased Large Language Models

no code implementations · 6 Feb 2024 · Xuechunzi Bai, Angelina Wang, Ilia Sucholutsky, Thomas L. Griffiths

Large language models (LLMs) can pass explicit bias tests but still harbor implicit biases, similar to humans who endorse egalitarian beliefs yet exhibit subtle biases.

Decision Making

A Rational Analysis of the Speech-to-Song Illusion

no code implementations · 10 Feb 2024 · Raja Marjieh, Pol van Rijn, Ilia Sucholutsky, Harin Lee, Thomas L. Griffiths, Nori Jacoby

Here we provide a formal account of this phenomenon, by recasting it as a statistical inference whereby a rational agent attempts to decide whether a sequence of utterances is more likely to have been produced in a song or speech.

Sentence
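
The inference described in the abstract reduces to Bayes' rule: compare P(song | utterance) with P(speech | utterance). A toy version with made-up Gaussian likelihoods over a single hypothetical acoustic feature (how stable the pitch intervals are); none of these numbers come from the paper.

```python
import numpy as np

# Hypothetical likelihoods of an observed pitch-stability score under each
# hypothesis, plus a prior; all parameters are illustrative.
def posterior_song(x, prior_song=0.5):
    like_song   = np.exp(-((x - 0.8) ** 2) / 0.02)   # song: stable pitch
    like_speech = np.exp(-((x - 0.3) ** 2) / 0.08)   # speech: variable pitch
    num = like_song * prior_song
    return num / (num + like_speech * (1 - prior_song))

# Repetition pushes observed stability up, flipping the percept to "song".
for x in [0.3, 0.55, 0.8]:
    print(x, round(posterior_song(x), 3))
```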
