Search Results for author: Zeyad Emam

Found 5 papers, 3 papers with code

End-to-end Algorithm Synthesis with Recurrent Networks: Logical Extrapolation Without Overthinking

1 code implementation · 11 Feb 2022 · Arpit Bansal, Avi Schwarzschild, Eitan Borgnia, Zeyad Emam, Furong Huang, Micah Goldblum, Tom Goldstein

Algorithmic extrapolation can be achieved through recurrent systems, which can be iterated many times to solve difficult reasoning problems.
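The idea of iterating a recurrent system more times to tackle harder inputs can be sketched as follows. This is a minimal, hypothetical NumPy illustration, not the paper's actual architecture: a single weight-tied update is applied repeatedly, and "extrapolation" corresponds to running more iterations at test time than were used in training.

```python
import numpy as np

def recurrent_step(state, x, W, U):
    # One application of the shared (weight-tied) update; tanh keeps the state bounded.
    return np.tanh(state @ W + x @ U)

def run_recurrent(x, W, U, n_iters):
    # Iterate the SAME step n_iters times. Because the weights are shared across
    # iterations, n_iters can be chosen freely at inference time.
    state = np.zeros_like(x)
    for _ in range(n_iters):
        state = recurrent_step(state, x, W, U)
    return state

rng = np.random.default_rng(0)
d = 8  # hypothetical state dimension for illustration
W = rng.normal(size=(d, d)) * 0.1
U = rng.normal(size=(d, d)) * 0.1
x = rng.normal(size=(1, d))

shallow = run_recurrent(x, W, U, n_iters=5)   # depth used during training
deep = run_recurrent(x, W, U, n_iters=50)     # test time: iterate longer for harder problems
```

Since the same parameters are reused at every step, increasing `n_iters` adds effective depth without adding parameters, which is what lets a recurrent network "think longer" on harder instances.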

Logical Reasoning

Thinking Deeper With Recurrent Networks: Logical Extrapolation Without Overthinking

no code implementations · 29 Sep 2021 · Arpit Bansal, Avi Schwarzschild, Eitan Borgnia, Zeyad Emam, Furong Huang, Micah Goldblum, Tom Goldstein

Classical machine learning systems perform best when they are trained and tested on the same distribution, and they lack a mechanism to increase model power after training is complete.

On The State of Data In Computer Vision: Human Annotations Remain Indispensable for Developing Deep Learning Models

no code implementations · 31 Jul 2021 · Zeyad Emam, Andrew Kondrich, Sasha Harrison, Felix Lau, Yushi Wang, Aerin Kim, Elliot Branson

High-quality labeled datasets play a crucial role in fueling the development of machine learning (ML), and in particular the development of deep learning (DL).

Continual Learning

Understanding Generalization through Visualizations

2 code implementations · NeurIPS Workshop ICBINB 2020 · W. Ronny Huang, Zeyad Emam, Micah Goldblum, Liam Fowl, Justin K. Terry, Furong Huang, Tom Goldstein

The power of neural networks lies in their ability to generalize to unseen data, yet the underlying reasons for this phenomenon remain elusive.
