Search Results for author: A. Emin Orhan

Found 14 papers, 13 papers with code

Self-supervised learning of video representations from a child's perspective

1 code implementation • 1 Feb 2024 • A. Emin Orhan, Wentao Wang, Alex N. Wang, Mengye Ren, Brenden M. Lake

These results suggest that important temporal aspects of a child's internal model of the world may be learnable from their visual experience using highly generic learning algorithms and without strong inductive biases.

Object Recognition • Self-Supervised Learning

Scaling may be all you need for achieving human-level object recognition capacity with human-like visual experience

1 code implementation • 7 Aug 2023 • A. Emin Orhan

We find that it is feasible to reach human-level object recognition capacity at sub-human scales of model size, data size, and image size, if these factors are scaled up simultaneously.

Object Recognition • Self-Supervised Learning

Recognition, recall, and retention of few-shot memories in large language models

1 code implementation • 30 Mar 2023 • A. Emin Orhan

In recognition experiments, we ask if the model can distinguish the seen example from a novel example; in recall experiments, we ask if the model can correctly recall the seen example when cued by a part of it; and in retention experiments, we periodically probe the model's memory for the original examples as the model is trained continuously with new examples.
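The three probe types described above can be sketched as follows. This is an illustrative toy, not the paper's code: `model_logprob` is a stand-in scorer (a real study would use the language model's log-likelihood), and all function names here are hypothetical.

```python
# Hypothetical sketch of the recognition and recall probes described above.
# "model_logprob" is a toy stand-in for an LM's log-likelihood; the retention
# probe would simply repeat these checks at intervals during continued training.

def model_logprob(seq, memorized):
    # Toy scorer: sequences the model has "memorized" score higher.
    return 0.0 if seq in memorized else -10.0

def recognition_probe(seen, novel, memorized):
    # Recognition: does the model assign the seen example a higher
    # score than a novel example?
    return model_logprob(seen, memorized) > model_logprob(novel, memorized)

def recall_probe(seen, cue, memorized):
    # Recall: cued by a prefix of the seen example, can the model
    # complete it? Here, retrieve memorized sequences matching the cue.
    candidates = [m for m in memorized if m.startswith(cue)]
    return seen in candidates

memorized = {"the cat sat on the mat"}
print(recognition_probe("the cat sat on the mat", "a dog ran in the park", memorized))  # True
print(recall_probe("the cat sat on the mat", "the cat", memorized))  # True
```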

Can deep learning match the efficiency of human visual long-term memory in storing object details?

1 code implementation • 27 Apr 2022 • A. Emin Orhan

Humans have a remarkably large capacity to store detailed visual information in long-term memory even after a single exposure, as demonstrated by classic experiments in psychology.

Compositional generalization in semantic parsing with pretrained transformers

1 code implementation • 30 Sep 2021 • A. Emin Orhan

Finally, we show that larger models are harder to train from scratch and their generalization accuracy is lower when trained up to convergence on the relatively small SCAN and COGS datasets, but the benefits of large-scale pretraining become much clearer with larger models.

Out-of-Distribution Generalization • Semantic Parsing

How much human-like visual experience do current self-supervised learning algorithms need in order to achieve human-level object recognition?

1 code implementation • 23 Sep 2021 • A. Emin Orhan

The exact values of these estimates are sensitive to some underlying assumptions; however, even in the most optimistic scenarios they remain orders of magnitude larger than a human lifetime.

Object Recognition • Representation Learning • +1

Robustness properties of Facebook's ResNeXt WSL models

1 code implementation • 17 Jul 2019 • A. Emin Orhan

We show that these models display an unprecedented degree of robustness against common image corruptions and perturbations, as measured by the ImageNet-C and ImageNet-P benchmarks.

Adversarial Robustness

Improving the robustness of ImageNet classifiers using elements of human visual cognition

1 code implementation • 20 Jun 2019 • A. Emin Orhan, Brenden M. Lake

Consistent with previous work, we show that an explicit episodic memory improves the robustness of image recognition models against small-norm adversarial perturbations under some threat models.

Clustering • Retrieval

Improved memory in recurrent neural networks with sequential non-normal dynamics

1 code implementation • ICLR 2020 • A. Emin Orhan, Xaq Pitkow

In the presence of a non-linearity, orthogonal transformations no longer preserve norms, suggesting that alternative transformations might be better suited to non-linear networks.
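The observation above is easy to verify numerically: an orthogonal transition matrix preserves the norm of the hidden state exactly in a linear recurrence, but once a non-linearity such as tanh is applied, the norm is no longer preserved. A minimal NumPy check (illustrative only, not the paper's code):

```python
import numpy as np

rng = np.random.default_rng(0)

# Random orthogonal matrix via QR decomposition of a Gaussian matrix.
Q, _ = np.linalg.qr(rng.standard_normal((8, 8)))
x = rng.standard_normal(8)

# Linear recurrence: an orthogonal transition preserves the norm exactly.
print(np.isclose(np.linalg.norm(Q @ x), np.linalg.norm(x)))  # True

# With a tanh non-linearity applied after the transition, the norm
# is no longer preserved (tanh contracts every non-zero component).
print(np.isclose(np.linalg.norm(np.tanh(Q @ x)), np.linalg.norm(x)))  # False
```

This is the motivation the snippet hints at: since orthogonality loses its norm-preserving guarantee under a non-linearity, other (e.g. non-normal) transition structures become candidates for preserving memory in non-linear networks.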

A Simple Cache Model for Image Recognition

1 code implementation • NeurIPS 2018 • A. Emin Orhan

We propose to extract this extra class-relevant information using a simple key-value cache memory to improve the classification performance of the model at test time.

General Classification • Language Modelling
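A key-value cache classifier of the kind this snippet describes can be sketched in a few lines. This is an illustrative sketch, not the paper's implementation: it assumes keys are feature vectors extracted by a trained network and values are their class labels, and the dot-product similarity and softmax weighting used here are assumptions for the example.

```python
# Minimal sketch of a key-value cache classifier: keys are cached feature
# vectors from training items, values are their labels. At test time, a
# query feature retrieves a similarity-weighted vote over cached labels.
import math

def cache_predict(query, keys, values, temperature=1.0):
    # Similarity between the query feature and each cached key (dot product).
    sims = [sum(q * k for q, k in zip(query, key)) for key in keys]
    # Softmax weights over cache entries.
    exps = [math.exp(s / temperature) for s in sims]
    total = sum(exps)
    weights = [e / total for e in exps]
    # Accumulate weighted votes per class label; predict the top label.
    scores = {}
    for w, label in zip(weights, values):
        scores[label] = scores.get(label, 0.0) + w
    return max(scores, key=scores.get)

# Cached training features (keys) and their labels (values).
keys = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]]
values = ["cat", "cat", "dog"]
print(cache_predict([0.95, 0.05], keys, values))  # cat
```

In practice such a cache output would typically be combined with the base classifier's prediction at test time rather than replace it.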

Skip Connections Eliminate Singularities

no code implementations • ICLR 2018 • A. Emin Orhan, Xaq Pitkow

Here, we present a novel explanation for the benefits of skip connections in training very deep networks.

Efficient Probabilistic Inference in Generic Neural Networks Trained with Non-Probabilistic Feedback

1 code implementation • 12 Jan 2016 • A. Emin Orhan, Wei Ji Ma

We show that generic neural networks trained with a simple error-based learning rule perform near-optimal probabilistic inference in nine common psychophysical tasks.
