Search Results for author: Gamaleldin F. Elsayed

Found 11 papers, 7 papers with code

Invariant Slot Attention: Object Discovery with Slot-Centric Reference Frames

1 code implementation · 9 Feb 2023 · Ondrej Biza, Sjoerd van Steenkiste, Mehdi S. M. Sajjadi, Gamaleldin F. Elsayed, Aravindh Mahendran, Thomas Kipf

Automatically discovering composable abstractions from raw perceptual data is a long-standing challenge in machine learning.

Object · Object Discovery
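
The core trick named in the title above, sketched minimally below in JAX: positional encodings are re-expressed in each slot's own reference frame (its estimated position and scale), so the positional signal a slot sees does not change when its object translates or rescales. The function name and exact parameterization are illustrative; the official implementation also re-estimates slot positions and scales from the attention maps at every iteration.

```python
import jax.numpy as jnp

def relative_grids(slot_pos, slot_scale, height, width):
    """Compute one positional grid per slot, expressed in that slot's frame.

    slot_pos:   (num_slots, 2) slot centers in [-1, 1] image coordinates.
    slot_scale: (num_slots, 2) per-slot spatial scales.
    Returns:    (num_slots, height * width, 2) relative coordinates.
    """
    ys = jnp.linspace(-1.0, 1.0, height)
    xs = jnp.linspace(-1.0, 1.0, width)
    grid = jnp.stack(jnp.meshgrid(ys, xs, indexing="ij"), axis=-1)  # (H, W, 2)
    grid = grid.reshape(-1, 2)                                      # (H*W, 2)
    # Shift and rescale the shared grid into each slot's reference frame,
    # making the per-slot positional encoding invariant to where the
    # object sits in the image and how large it is.
    return (grid[None, :, :] - slot_pos[:, None, :]) / slot_scale[:, None, :]
```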

Teacher-Generated Spatial-Attention Labels Boost Robustness and Accuracy of Contrastive Models

no code implementations · CVPR 2023 · Yushi Yao, Chang Ye, Junfeng He, Gamaleldin F. Elsayed

We then train a model with a primary contrastive objective; to this standard configuration, we add a simple output head trained to predict the attentional map for each image, guided by the pseudo labels from the teacher model.

Image Retrieval · Retrieval
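
A minimal sketch of the objective the abstract describes: a primary contrastive loss plus an auxiliary head supervised by teacher-generated attention maps as pseudo labels. The cross-entropy form of the map loss and the `aux_weight` knob are assumptions for illustration, not values from the paper.

```python
import jax
import jax.numpy as jnp

def combined_loss(contrastive_loss, pred_attn, teacher_attn, aux_weight=1.0):
    """Primary contrastive loss plus an auxiliary attention-prediction term.

    pred_attn, teacher_attn: (batch, H, W) attention maps; the teacher map
    serves as the pseudo label. `aux_weight` is an illustrative knob.
    """
    # Normalize both maps to distributions over spatial positions and
    # penalize their cross-entropy (one reasonable choice of map loss).
    p = teacher_attn.reshape(teacher_attn.shape[0], -1)
    p = p / (p.sum(axis=-1, keepdims=True) + 1e-8)
    log_q = jax.nn.log_softmax(pred_attn.reshape(pred_attn.shape[0], -1), axis=-1)
    attn_loss = -(p * log_q).sum(axis=-1).mean()
    return contrastive_loss + aux_weight * attn_loss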

Conditional Object-Centric Learning from Video

3 code implementations · ICLR 2022 · Thomas Kipf, Gamaleldin F. Elsayed, Aravindh Mahendran, Austin Stone, Sara Sabour, Georg Heigold, Rico Jonschkowski, Alexey Dosovitskiy, Klaus Greff

Object-centric representations are a promising path toward more systematic generalization by providing flexible abstractions upon which compositional world models can be built.

Instance Segmentation · Object · +3
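
This paper's model (SAVi) processes video with a predictor-corrector loop over slots, with the initial slots optionally conditioned on first-frame cues such as object locations. A heavily simplified JAX sketch of that rollout; the real model uses learned projections, GRU slot updates, and a transformer predictor, so the identity predictor below is only a placeholder.

```python
import jax
import jax.numpy as jnp

def slot_attention_step(slots, inputs):
    """One simplified corrector step: slots compete (softmax over the slot
    axis) for input features, then each slot takes the weighted mean of
    the features it won."""
    attn = jax.nn.softmax(slots @ inputs.T, axis=0)          # (S, N)
    attn = attn / (attn.sum(axis=1, keepdims=True) + 1e-8)   # per-slot mean weights
    return attn @ inputs                                     # (S, D)

def savi_rollout(init_slots, video_features):
    """Predictor-corrector rollout over frames, in the spirit of SAVi.
    `init_slots` would come from first-frame conditioning; here it is given.
    video_features: (T, N, D) flattened per-frame feature maps."""
    slots = init_slots
    per_frame = []
    for frame in video_features:
        slots = slot_attention_step(slots, frame)  # corrector
        # A learned transition (predictor) would be applied here;
        # identity is a placeholder.
        per_frame.append(slots)
    return jnp.stack(per_frame)
```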

Addressing the Real-world Class Imbalance Problem in Dermatology

no code implementations · 9 Oct 2020 · Wei-Hung Weng, Jonathan Deaton, Vivek Natarajan, Gamaleldin F. Elsayed, Yuan Liu

Class imbalance is a common problem in medical diagnosis, causing a standard classifier to be biased towards the common classes and perform poorly on the rare classes.

Benchmarking · Few-Shot Learning · +1
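
The abstract states the problem rather than a fix. For illustration only, one standard mitigation of the bias it describes is to reweight the loss by inverse class frequency; the paper benchmarks several imbalance strategies, not necessarily this one.

```python
import jax
import jax.numpy as jnp

def weighted_cross_entropy(logits, labels, class_counts):
    """Cross-entropy with inverse-frequency class weights, so rare classes
    contribute as much to the loss as common ones.

    logits:       (batch, num_classes)
    labels:       (batch,) integer class ids
    class_counts: (num_classes,) training-set frequency of each class
    """
    weights = class_counts.sum() / (len(class_counts) * class_counts)
    log_probs = jax.nn.log_softmax(logits, axis=-1)
    nll = -log_probs[jnp.arange(labels.shape[0]), labels]
    return (weights[labels] * nll).mean()
```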

Revisiting Spatial Invariance with Low-Rank Local Connectivity

no code implementations · ICML 2020 · Gamaleldin F. Elsayed, Prajit Ramachandran, Jonathon Shlens, Simon Kornblith

Convolutional neural networks are among the most successful architectures in deep learning with this success at least partially attributable to the efficacy of spatial invariance as an inductive bias.

Inductive Bias
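
The mechanism behind the title: instead of fully sharing one filter across space (convolution) or learning a free filter at every position (local connectivity), the paper's low-rank locally connected layers build each position's filter as a combination of a small shared basis. A shape-level sketch with illustrative names:

```python
import jax.numpy as jnp

def low_rank_local_filters(basis, coeffs):
    """Spatially varying filters as low-rank combinations of a shared basis.

    basis:  (K, kh, kw, cin, cout)  K shared basis filters.
    coeffs: (H, W, K)               per-position combining weights.
    Returns (H, W, kh, kw, cin, cout): one filter per output position.
    With K == 1 and constant coeffs this recovers ordinary convolutional
    weight sharing; as K grows toward the number of positions it
    approaches a fully locally connected layer.
    """
    return jnp.einsum("hwk,kabcd->hwabcd", coeffs, basis)
```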

Saccader: Improving Accuracy of Hard Attention Models for Vision

2 code implementations · NeurIPS 2019 · Gamaleldin F. Elsayed, Simon Kornblith, Quoc V. Le

Although deep convolutional neural networks achieve state-of-the-art performance across nearly all image classification tasks, their decisions are difficult to interpret.

Hard Attention · Image Classification
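
Hard attention means the model commits to a few discrete glimpses and classifies from those patches alone, which is what makes its decisions inspectable. A schematic sketch of that setting; the actual Saccader model learns where to look with a dedicated policy, whereas the locations here are simply given.

```python
import jax
import jax.numpy as jnp

def hard_attention_classify(image, locations, patch, logits_fn):
    """Classify an image from a few discrete glimpses.

    image:     (H, W, C)
    locations: list of (row, col) top-left patch corners chosen by a policy.
    logits_fn: maps a (patch, patch, C) glimpse to class logits.
    """
    glimpse_logits = [
        logits_fn(jax.lax.dynamic_slice(image, (r, c, 0),
                                        (patch, patch, image.shape[-1])))
        for r, c in locations
    ]
    # Only the attended patches contribute to the decision: average the
    # per-glimpse logits into one prediction.
    return jnp.mean(jnp.stack(glimpse_logits), axis=0)
```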

Adversarial Reprogramming of Neural Networks

6 code implementations · ICLR 2019 · Gamaleldin F. Elsayed, Ian Goodfellow, Jascha Sohl-Dickstein

Previous adversarial attacks have been designed to degrade performance of models or cause machine learning models to produce specific outputs chosen ahead of time by the attacker.

BIG-bench Machine Learning · General Classification
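
The attack the abstract alludes to: a learned "program" of border pixels is wrapped around an embedded target-task input and fed to a frozen ImageNet classifier, whose output labels are then hard-mapped to the new task's labels. A sketch of the input construction (sizes illustrative):

```python
import jax.numpy as jnp

def reprogram_input(small_image, theta, out_size=224):
    """Embed a target-task input inside a learned adversarial program.

    small_image: (h, w, 3) target-task input (e.g. an MNIST digit), in [-1, 1].
    theta:       (out_size, out_size, 3) learnable program parameters.
    """
    h, w = small_image.shape[:2]
    top, left = (out_size - h) // 2, (out_size - w) // 2
    # Mask out the center so the program occupies only the border.
    mask = jnp.ones((out_size, out_size, 3))
    mask = mask.at[top:top + h, left:left + w, :].set(0.0)
    program = jnp.tanh(theta) * mask
    # Place the small target-task image in the center of the program.
    center = jnp.zeros((out_size, out_size, 3))
    center = center.at[top:top + h, left:left + w, :].set(small_image)
    return program + center  # fed unchanged to the frozen ImageNet model
```

Only `theta` is trained, by gradient descent on the reprogrammed task's loss; the victim network's own weights never change.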

Adversarial Examples that Fool both Computer Vision and Time-Limited Humans

no code implementations · NeurIPS 2018 · Gamaleldin F. Elsayed, Shreya Shankar, Brian Cheung, Nicolas Papernot, Alex Kurakin, Ian Goodfellow, Jascha Sohl-Dickstein

Machine learning models are vulnerable to adversarial examples: small changes to images can cause computer vision models to make mistakes such as identifying a school bus as an ostrich.

BIG-bench Machine Learning · Open-Ended Question Answering
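
For background on the "small changes to images" the abstract mentions, the canonical construction is the fast gradient sign method (Goodfellow et al., 2015), sketched below. The paper itself builds stronger ensemble-based adversarial examples to transfer to time-limited humans, so this is the standard illustration rather than the paper's method.

```python
import jax
import jax.numpy as jnp

def fgsm(image, label, loss_fn, epsilon=0.03):
    """Fast gradient sign method: one small step that increases the loss.

    loss_fn: (image, label) -> scalar classification loss of the model,
             differentiable with respect to the image.
    """
    grads = jax.grad(loss_fn)(image, label)
    # Nudging every pixel by epsilon in the direction that raises the loss
    # can flip the model's decision while staying visually near-imperceptible.
    return jnp.clip(image + epsilon * jnp.sign(grads), 0.0, 1.0)
```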
