no code implementations • 10 Apr 2019 • Ilia Sucholutsky, Apurva Narayan, Matthias Schonlau, Sebastian Fischmeister
The output of the model will be a close reconstruction of the true data, and can be fed to algorithms that rely on clean data.
3 code implementations • 6 Oct 2019 • Ilia Sucholutsky, Matthias Schonlau
We propose to simultaneously distill both images and their labels, thus assigning each synthetic sample a 'soft' label (a distribution of labels).
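The idea in the sentence above can be made concrete with a tiny sketch: a hard label puts all its probability mass on one class, while a soft label is any normalized distribution over the classes. The function names below are illustrative, not from the paper's released code.

```python
import numpy as np

def hard_label(class_idx, n_classes):
    """One-hot encoding: all probability mass on a single class."""
    y = np.zeros(n_classes)
    y[class_idx] = 1.0
    return y

def soft_label(probs):
    """A soft label is any normalized distribution over the classes."""
    p = np.asarray(probs, dtype=float)
    return p / p.sum()

# A distilled digit that is mostly a "3" but partly an "8" (illustrative):
y_soft = soft_label([0.0, 0.0, 0.0, 0.7, 0.0, 0.0, 0.0, 0.0, 0.3, 0.0])
```

Because the whole distribution is distilled alongside the image, one synthetic sample can carry information about several classes at once.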
3 code implementations • 17 Sep 2020 • Ilia Sucholutsky, Matthias Schonlau
We propose the `less than one'-shot learning task where models must learn $N$ new classes given only $M<N$ examples and we show that this is achievable with the help of soft labels.
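A minimal sketch of how $M$ soft-label prototypes can separate $N>M$ classes, in the spirit of the paper's soft-label prototype k-NN: a query sums the label distributions of the prototypes, weighted by inverse distance, and takes the argmax. The prototype positions and label distributions below are invented for illustration.

```python
import numpy as np

# Two prototypes (M = 2) carrying distributions over three classes (N = 3).
prototypes = np.array([[0.0, 0.0], [4.0, 0.0]])
soft_labels = np.array([[0.6, 0.0, 0.4],   # mostly class 0, some class 2
                        [0.0, 0.6, 0.4]])  # mostly class 1, some class 2

def classify(x, eps=1e-9):
    d = np.linalg.norm(prototypes - x, axis=1)
    w = 1.0 / (d + eps)            # inverse-distance weights
    scores = w @ soft_labels       # blend the prototypes' label distributions
    return int(np.argmax(scores))
```

Near either prototype its dominant class wins, but midway between them the shared mass on class 2 dominates, so a third class appears with only two examples.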
1 code implementation • 19 Sep 2020 • Ilia Sucholutsky, Matthias Schonlau
We leverage what are typically considered the worst qualities of deep learning algorithms - high computational cost, large data requirements, lack of explainability, strong dependence on hyper-parameter choice, overfitting, and vulnerability to adversarial perturbations - to create a method for the secure and efficient training of remotely deployed neural networks over unsecured channels.
1 code implementation • 31 Oct 2020 • Ilia Sucholutsky, Matthias Schonlau
Using prototype methods to reduce the size of training datasets can drastically reduce the computational cost of classification with instance-based learning algorithms like the k-Nearest Neighbour classifier.
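A minimal sketch of the kind of reduction the sentence above describes: replace each class's training points with a single centroid prototype and classify against the prototypes with 1-NN. This is generic prototype selection under assumed toy data, not the paper's specific method.

```python
import numpy as np

def class_centroids(X, y):
    """Reduce a training set to one centroid prototype per class."""
    classes = np.unique(y)
    return classes, np.array([X[y == c].mean(axis=0) for c in classes])

def nn_predict(x, protos, labels):
    """1-NN against the (much smaller) prototype set."""
    return labels[np.argmin(np.linalg.norm(protos - x, axis=1))]

# Four training points collapse to two prototypes (illustrative data):
X = np.array([[0.0, 0.0], [1.0, 0.0], [10.0, 0.0], [11.0, 0.0]])
y = np.array([0, 0, 1, 1])
labels, protos = class_centroids(X, y)
```

Distance computations at query time now scale with the number of prototypes rather than the full training set.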
1 code implementation • 15 Feb 2021 • Ilia Sucholutsky, Nam-Hwui Kim, Ryan P. Browne, Matthias Schonlau
We propose a novel, modular method for generating soft-label prototypical lines that still maintains representational accuracy even when there are fewer prototypes than the number of classes in the data.
no code implementations • 9 Feb 2022 • Maya Malaviya, Ilia Sucholutsky, Kerem Oktar, Thomas L. Griffiths
Being able to learn from small amounts of data is a key characteristic of human intelligence, but exactly how small?
no code implementations • 9 Feb 2022 • Raja Marjieh, Ilia Sucholutsky, Theodore R. Sumers, Nori Jacoby, Thomas L. Griffiths
Similarity judgments provide a well-established method for accessing mental representations, with applications in psychology, neuroscience and machine learning.
no code implementations • 8 Jun 2022 • Raja Marjieh, Pol van Rijn, Ilia Sucholutsky, Theodore R. Sumers, Harin Lee, Thomas L. Griffiths, Nori Jacoby
Based on the results of this comprehensive study, we provide a concise guide for researchers interested in collecting or approximating human similarity data.
no code implementations • 29 Sep 2022 • Raja Marjieh, Ilia Sucholutsky, Thomas A. Langlois, Nori Jacoby, Thomas L. Griffiths
Diffusion models are a class of generative models that learn to synthesize samples by inverting a diffusion process that gradually maps data into noise.
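The forward process mentioned in the sentence above can be sketched in a few lines: data is gradually mapped into noise by repeated small Gaussian corruptions, and the generative model is trained to invert these steps. The linear noise schedule below is a common illustrative choice, not any specific paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward_diffuse(x0, betas):
    """Apply x_t = sqrt(1 - beta_t) * x_{t-1} + sqrt(beta_t) * noise."""
    x = np.asarray(x0, dtype=float)
    for beta in betas:
        x = np.sqrt(1.0 - beta) * x + np.sqrt(beta) * rng.standard_normal(x.shape)
    return x

betas = np.linspace(1e-4, 0.02, 1000)      # illustrative linear schedule
x_T = forward_diffuse(np.ones(8), betas)   # original signal is almost gone
```

After enough steps the signal coefficient (the product of the sqrt(1 - beta) factors) is near zero, so x_T is approximately standard Gaussian noise; sampling then runs this chain in reverse.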
no code implementations • 2 Nov 2022 • Ilia Sucholutsky, Ruairidh M. Battleday, Katherine M. Collins, Raja Marjieh, Joshua C. Peterson, Pulkit Singh, Umang Bhatt, Nori Jacoby, Adrian Weller, Thomas L. Griffiths
Supervised learning typically focuses on learning transferable representations from training examples annotated by humans.
1 code implementation • 2 Nov 2022 • Katherine M. Collins, Umang Bhatt, Weiyang Liu, Vihari Piratla, Ilia Sucholutsky, Bradley Love, Adrian Weller
We focus on the synthetic data used in mixup: a powerful regularizer shown to improve model robustness, generalization, and calibration.
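A hedged sketch of the mixup synthetic data the sentence above refers to: each training pair is a convex combination of two examples and their labels, with the mixing weight drawn from a Beta distribution. The alpha value below is a common default, not necessarily the one used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def mixup(x1, y1, x2, y2, alpha=0.2):
    """Blend two (input, one-hot label) pairs with weight lam ~ Beta(alpha, alpha)."""
    lam = rng.beta(alpha, alpha)
    return lam * x1 + (1 - lam) * x2, lam * y1 + (1 - lam) * y2

# Mixing a class-0 and a class-1 example (2-D toy inputs, illustrative):
x_mix, y_mix = mixup(np.array([1.0, 0.0]), np.array([1.0, 0.0]),
                     np.array([0.0, 1.0]), np.array([0.0, 1.0]))
```

The blended label stays a valid distribution, which is what makes mixup's synthetic points act as a regularizer rather than as mislabeled data.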
no code implementations • 2 Feb 2023 • Raja Marjieh, Ilia Sucholutsky, Pol van Rijn, Nori Jacoby, Thomas L. Griffiths
Determining the extent to which the perceptual world can be recovered from language is a longstanding problem in philosophy and cognitive science.
no code implementations • 3 Feb 2023 • Pol van Rijn, Yue Sun, Harin Lee, Raja Marjieh, Ilia Sucholutsky, Francesca Lanzarini, Elisabeth André, Nori Jacoby
Six behavioral experiments (N=236) in six countries and eight languages show that (a) our test can distinguish between native speakers of closely related languages, (b) the test is reliable ($r=0.82$), and (c) performance strongly correlates with existing tests (LexTale) and self-reports.
no code implementations • 22 Mar 2023 • Katherine M. Collins, Matthew Barker, Mateo Espinosa Zarlenga, Naveen Raman, Umang Bhatt, Mateja Jamnik, Ilia Sucholutsky, Adrian Weller, Krishnamurthy Dvijotham
We study how existing concept-based models deal with uncertain interventions from humans using two novel datasets: UMNIST, a visual dataset with controlled simulated uncertainty based on the MNIST dataset, and CUB-S, a relabeling of the popular CUB concept dataset with rich, densely-annotated soft labels from humans.
no code implementations • 3 Oct 2023 • Kerem Oktar, Ilia Sucholutsky, Tania Lombrozo, Thomas L. Griffiths
The increasing prevalence of artificial agents creates a correspondingly increasing need to manage disagreements between humans and artificial agents, as well as between artificial agents themselves.
no code implementations • 18 Oct 2023 • Ilia Sucholutsky, Lukas Muttenthaler, Adrian Weller, Andi Peng, Andreea Bobu, Been Kim, Bradley C. Love, Erin Grant, Iris Groen, Jascha Achterberg, Joshua B. Tenenbaum, Katherine M. Collins, Katherine L. Hermann, Kerem Oktar, Klaus Greff, Martin N. Hebart, Nori Jacoby, Qiuyi Zhang, Raja Marjieh, Robert Geirhos, Sherol Chen, Simon Kornblith, Sunayana Rane, Talia Konkle, Thomas P. O'Connell, Thomas Unterthiner, Andrew K. Lampinen, Klaus-Robert Müller, Mariya Toneva, Thomas L. Griffiths
Finally, we lay out open problems in representational alignment where progress can benefit all three of these fields.
no code implementations • 30 Oct 2023 • Sunayana Rane, Mark Ho, Ilia Sucholutsky, Thomas L. Griffiths
Value alignment is essential for building AI systems that can safely and reliably interact with people.
no code implementations • 21 Dec 2023 • Andrea Wynn, Ilia Sucholutsky, Thomas L. Griffiths
We propose that this kind of representational alignment between machine learning (ML) models and humans can also support value alignment, allowing ML systems to conform to human values and societal norms.
no code implementations • 9 Jan 2024 • Sunayana Rane, Polyphony J. Bruna, Ilia Sucholutsky, Christopher Kello, Thomas L. Griffiths
Discussion of AI alignment (alignment between humans and AI systems) has focused on value alignment, broadly referring to creating AI systems that share human values.
no code implementations • 5 Feb 2024 • Andi Peng, Andreea Bobu, Belinda Z. Li, Theodore R. Sumers, Ilia Sucholutsky, Nishanth Kumar, Thomas L. Griffiths, Julie A. Shah
We observe that how humans behave reveals how they see the world.
no code implementations • 6 Feb 2024 • Xuechunzi Bai, Angelina Wang, Ilia Sucholutsky, Thomas L. Griffiths
Large language models (LLMs) can pass explicit bias tests but still harbor implicit biases, similar to humans who endorse egalitarian beliefs yet exhibit subtle biases.
no code implementations • 10 Feb 2024 • Raja Marjieh, Pol van Rijn, Ilia Sucholutsky, Harin Lee, Thomas L. Griffiths, Nori Jacoby
Here we provide a formal account of this phenomenon, by recasting it as a statistical inference whereby a rational agent attempts to decide whether a sequence of utterances is more likely to have been produced in a song or speech.
no code implementations • 15 Feb 2024 • Allison Chen, Ilia Sucholutsky, Olga Russakovsky, Thomas L. Griffiths
Does language help make sense of the visual world?
no code implementations • 28 Feb 2024 • Andi Peng, Ilia Sucholutsky, Belinda Z. Li, Theodore R. Sumers, Thomas L. Griffiths, Jacob Andreas, Julie A. Shah
We describe a framework for using natural language to design state abstractions for imitation learning.
no code implementations • 4 Mar 2024 • Xiaoliang Luo, Akilles Rechardt, Guangzhi Sun, Kevin K. Nejad, Felipe Yáñez, Bati Yilmaz, Kangjoo Lee, Alexandra O. Cohen, Valentina Borghesani, Anton Pashkov, Daniele Marinazzo, Jonathan Nicholas, Alessandro Salatiello, Ilia Sucholutsky, Pasquale Minervini, Sepehr Razavi, Roberta Rocca, Elkhan Yusifov, Tereza Okalova, Nianlong Gu, Martin Ferianc, Mikail Khona, Kaustubh R. Patil, Pui-Shee Lee, Rui Mata, Nicholas E. Myers, Jennifer K Bizley, Sebastian Musslick, Isil Poyraz Bilgin, Guiomar Niso, Justin M. Ales, Michael Gaebler, N Apurva Ratan Murty, Leyla Loued-Khenissi, Anna Behler, Chloe M. Hall, Jessica Dafflon, Sherry Dongqi Bao, Bradley C. Love
LLMs trained on the vast scientific literature could potentially integrate noisy yet interrelated findings to forecast novel results better than human experts.