no code implementations • 12 Feb 2020 • Mohammad Saeed Abrishami, Amir Erfan Eshratifar, David Eigen, Yanzhi Wang, Shahin Nazarian, Massoud Pedram
However, fine-tuning a transfer model with data augmentation in the raw input space is computationally expensive, since the full network must be run for every augmented input.
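The cost argument can be sketched as a simple count of forward passes; the function and numbers below are illustrative assumptions, not from the paper:

```python
def forward_passes(n_samples, n_augmentations, augment_in_input_space):
    """Count backbone forward passes needed during fine-tuning.

    Augmenting in the raw input space forces a full forward pass for
    every augmented copy of every sample; augmenting cached intermediate
    features only needs one backbone pass per original sample.
    """
    if augment_in_input_space:
        return n_samples * n_augmentations  # full network per augmented input
    return n_samples  # backbone runs once; augmentation applied downstream


# e.g. 100 samples with 10 augmentations each:
# 1000 backbone passes in input space vs. 100 with cached features
```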
no code implementations • 6 Sep 2019 • Amir Erfan Eshratifar, David Eigen, Michael Gormish, Massoud Pedram
Small inter-class and large intra-class variations are the main challenges in fine-grained visual classification.
1 code implementation • CVPR 2019 • Hongyang Li, David Eigen, Samuel Dodge, Matthew Zeiler, Xiaogang Wang
Few-shot learning is an important area of research.
no code implementations • 18 Oct 2018 • Amir Erfan Eshratifar, David Eigen, Massoud Pedram
We therefore control the degree to which each task contributes to the parameter updates by introducing a set of weights on the tasks' loss functions.
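A weighted combination of per-task losses can be sketched minimally as follows; the function name and signature are hypothetical, not taken from the paper:

```python
def weighted_multitask_loss(task_losses, task_weights):
    """Combine per-task losses into a single training objective.

    Each weight scales how strongly its task's gradient contributes
    to the shared parameter updates; in practice the losses would be
    framework tensors and the weights could themselves be learned.
    """
    assert len(task_losses) == len(task_weights)
    return sum(w * l for w, l in zip(task_weights, task_losses))
```

Because the combination is a plain weighted sum, the gradient of the total loss with respect to shared parameters is the same weighted sum of per-task gradients.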
no code implementations • 21 Sep 2018 • Amir Erfan Eshratifar, Mohammad Saeed Abrishami, David Eigen, Massoud Pedram
Transfer-learning and meta-learning are two effective methods to apply knowledge learned from large data sources to new tasks.
no code implementations • CVPR 2015 • Li Wan, David Eigen, Rob Fergus
In this paper, we propose a new model that combines these two approaches, obtaining the advantages of each.
no code implementations • ICCV 2015 • Ross Goroshin, Joan Bruna, Jonathan Tompson, David Eigen, Yann LeCun
Current state-of-the-art classification and detection algorithms rely on supervised training.
no code implementations • 19 Nov 2014 • Li Wan, David Eigen, Rob Fergus
In this paper, we propose a new model that combines these two approaches, obtaining the advantages of each.
4 code implementations • ICCV 2015 • David Eigen, Rob Fergus
In this paper we address three different computer vision tasks using a single basic architecture: depth prediction, surface normal estimation, and semantic labeling.
Ranked #71 on Monocular Depth Estimation on NYU-Depth V2
10 code implementations • NeurIPS 2014 • David Eigen, Christian Puhrsch, Rob Fergus
Predicting depth is an essential component in understanding the 3D geometry of a scene.
4 code implementations • 21 Dec 2013 • Pierre Sermanet, David Eigen, Xiang Zhang, Michael Mathieu, Rob Fergus, Yann LeCun
This integrated framework is the winner of the localization task of the ImageNet Large Scale Visual Recognition Challenge 2013 (ILSVRC2013) and obtained very competitive results for the detection and classification tasks.
no code implementations • 16 Dec 2013 • David Eigen, Marc'Aurelio Ranzato, Ilya Sutskever
In addition, we see that the different combinations are used when the model is applied to a dataset of speech monophones.
no code implementations • 6 Dec 2013 • David Eigen, Jason Rolfe, Rob Fergus, Yann Lecun
A key challenge in designing convolutional network models is sizing them appropriately.