1 code implementation • 25 Apr 2022 • Avi Gazneli, Gadi Zimerman, Tal Ridnik, Gilad Sharir, Asaf Noy
While efficient architectures and a plethora of augmentations for end-to-end image classification tasks have been suggested and heavily investigated, state-of-the-art techniques for audio classification still rely on numerous representations of the audio signal together with large architectures, fine-tuned from large datasets.
Ranked #4 on Keyword Spotting on Google Speech Commands (Google Speech Commands V2 35 metric)
no code implementations • 19 Apr 2022 • Niv Nayman, Avram Golbert, Asaf Noy, Tan Ping, Lihi Zelnik-Manor
Encouraged by the recent transferability results of self-supervised models, we propose a method that combines self-supervised and supervised pretraining to generate models with both high diversity and high accuracy, and, as a result, high transferability.
2 code implementations • 7 Apr 2022 • Tal Ridnik, Hussam Lawen, Emanuel Ben-Baruch, Asaf Noy
The scheme, named USI (Unified Scheme for ImageNet), is based on knowledge distillation and modern tricks.
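As a rough illustration of the distillation component, here is a generic knowledge-distillation loss in PyTorch; the temperature, loss weighting, and random inputs below are illustrative assumptions, not USI's actual recipe.

import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=4.0, alpha=0.5):
    """Generic knowledge distillation: cross-entropy on the hard labels
    blended with a KL term on temperature-softened teacher logits.
    (temperature and alpha are illustrative defaults, not USI's values.)"""
    hard = F.cross_entropy(student_logits, labels)
    soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.log_softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
        log_target=True,
    ) * temperature ** 2  # standard gradient-scale correction
    return alpha * hard + (1.0 - alpha) * soft

# usage: the teacher is frozen; only the student receives gradients
student_logits = torch.randn(8, 1000, requires_grad=True)
teacher_logits = torch.randn(8, 1000)
labels = torch.randint(0, 1000, (8,))
distillation_loss(student_logits, teacher_logits, labels).backward()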
1 code implementation • 25 Nov 2021 • Tal Ridnik, Gilad Sharir, Avi Ben-Cohen, Emanuel Ben-Baruch, Asaf Noy
In this paper, we introduce ML-Decoder, a new attention-based classification head.
Ranked #2 on Multi-Label Classification on OpenImages-v6
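A minimal sketch of an attention-based classification head in this spirit, assuming learned per-class queries that cross-attend over the backbone's spatial tokens; ML-Decoder's group-decoding trick is omitted, and all sizes are illustrative.

import torch
import torch.nn as nn

class CrossAttentionHead(nn.Module):
    """Simplified attention-based classification head: learned per-class
    queries cross-attend over the spatial feature map, and each attended
    vector is projected to a single logit for its class."""
    def __init__(self, num_classes, dim, num_heads=8):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(num_classes, dim) * 0.02)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.proj = nn.Linear(dim, 1)

    def forward(self, feats):                      # feats: (B, HW, dim) tokens
        q = self.queries.unsqueeze(0).expand(feats.size(0), -1, -1)
        attended, _ = self.attn(q, feats, feats)   # (B, num_classes, dim)
        return self.proj(attended).squeeze(-1)     # (B, num_classes) logits

head = CrossAttentionHead(num_classes=80, dim=256)
logits = head(torch.randn(2, 49, 256))             # e.g. a 7x7 CNN feature map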
1 code implementation • 24 Oct 2021 • Niv Nayman, Yonathan Aflalo, Asaf Noy, Rong Jin, Lihi Zelnik-Manor
Practical use of neural networks often involves requirements on latency, energy, and memory, among others.
1 code implementation • CVPR 2022 • Emanuel Ben-Baruch, Tal Ridnik, Itamar Friedman, Avi Ben-Cohen, Nadav Zamir, Asaf Noy, Lihi Zelnik-Manor
We propose to estimate the class distribution using a dedicated temporary model, and we show its improved efficiency over a naive estimation computed using the dataset's partial annotations.
Ranked #1 on Multi-Label Classification on OpenImages-v6
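A hedged sketch of the estimation idea: fill the un-annotated entries of the label matrix with the temporary model's predicted probabilities, then average per class. The paper's exact estimator may differ.

import torch

def estimate_class_distribution(probs, annotated_mask, labels):
    """probs: (N, C) sigmoid outputs of the temporary model
    annotated_mask: (N, C) bool, True where a label was annotated
    labels: (N, C) 0/1 annotations, valid only where the mask is True."""
    filled = torch.where(annotated_mask, labels.float(), probs)
    return filled.mean(dim=0)          # (C,) estimated positive rate per class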
1 code implementation • 26 Sep 2021 • Tamar Glaser, Emanuel Ben-Baruch, Gilad Sharir, Nadav Zamir, Asaf Noy, Lihi Zelnik-Manor
We address this gap with a tailor-made solution, combining the power of CNNs for image representation and transformers for album representation to perform global reasoning on the image collection, offering a practical and efficient solution for photo album event recognition.
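A minimal sketch of that combination, assuming a ResNet backbone and a small transformer encoder (both choices, and all sizes, are illustrative): the CNN embeds each photo, the transformer reasons globally over the album, and the mean-pooled tokens are classified.

import torch
import torch.nn as nn
from torchvision.models import resnet50

class AlbumEventModel(nn.Module):
    def __init__(self, num_events, dim=512, depth=2):
        super().__init__()
        cnn = resnet50(weights=None)
        cnn.fc = nn.Linear(cnn.fc.in_features, dim)    # per-image embedding
        self.cnn = cnn
        layer = nn.TransformerEncoderLayer(dim, nhead=8, batch_first=True)
        self.album_encoder = nn.TransformerEncoder(layer, num_layers=depth)
        self.classifier = nn.Linear(dim, num_events)

    def forward(self, album):                  # album: (B, num_photos, 3, H, W)
        b, n = album.shape[:2]
        tokens = self.cnn(album.flatten(0, 1)).view(b, n, -1)
        tokens = self.album_encoder(tokens)    # global reasoning across photos
        return self.classifier(tokens.mean(dim=1))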
5 code implementations • 22 Apr 2021 • Tal Ridnik, Emanuel Ben-Baruch, Asaf Noy, Lihi Zelnik-Manor
ImageNet-1K serves as the primary dataset for pretraining deep learning models for computer vision tasks.
Ranked #2 on Image Classification on Stanford Cars
2 code implementations • 25 Mar 2021 • Gilad Sharir, Asaf Noy, Lihi Zelnik-Manor
Methods that reach state-of-the-art (SotA) accuracy usually make use of 3D convolution layers to abstract the temporal information from video frames, as illustrated in the sketch below.
Ranked #25 on Action Recognition on UCF101 (using extra training data)
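For reference, a minimal example of the 3D-convolution approach the snippet refers to: the kernel spans time as well as space, so temporal information is abstracted inside the convolution itself (all sizes here are arbitrary).

import torch
import torch.nn as nn

block = nn.Sequential(
    nn.Conv3d(3, 64, kernel_size=(3, 7, 7), stride=(1, 2, 2), padding=(1, 3, 3)),
    nn.BatchNorm3d(64),
    nn.ReLU(inplace=True),
)
clip = torch.randn(2, 3, 16, 112, 112)   # (batch, channels, frames, H, W)
print(block(clip).shape)                  # the temporal axis is mixed into features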
2 code implementations • 23 Feb 2021 • Niv Nayman, Yonathan Aflalo, Asaf Noy, Lihi Zelnik-Manor
Realistic use of neural networks often requires adhering to multiple constraints, such as latency, energy, and memory.
Ranked #21 on Neural Architecture Search on ImageNet
no code implementations • 12 Jan 2021 • Asaf Noy, Yi Xu, Yonathan Aflalo, Lihi Zelnik-Manor, Rong Jin
We show that convergence to a global minimum is guaranteed for networks with widths quadratic in the sample size and linear in their depth, in time logarithmic in both.
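Stated in hedged notation of my own (the paper's symbols may differ): let $n$ denote the sample size, $d$ the network depth, $m$ the layer width, and $T$ the training time. The claim then reads

$m = \Omega(n^{2} d), \qquad T = O(\log(n d)) \;\Longrightarrow\; \text{convergence to a global minimum.}$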
no code implementations • 3 Oct 2020 • Yi Xu, Asaf Noy, Ming Lin, Qi Qian, Hao Li, Rong Jin
To this end, we develop two novel algorithms, termed "AugDrop" and "MixLoss", to correct the data bias introduced by data augmentation.
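A heavily hedged sketch of what a MixLoss-style objective could look like: blend the loss on original samples with the loss on their augmented versions so the augmented distribution does not dominate training. Only the name comes from the paper; the weighting scheme below is an assumption.

import torch.nn.functional as F

def mixloss(logits_clean, logits_aug, labels, lam=0.5):
    """Weighted combination of the losses on clean and augmented views of
    the same batch (lam is an assumed, illustrative mixing weight)."""
    return (1 - lam) * F.cross_entropy(logits_clean, labels) + \
           lam * F.cross_entropy(logits_aug, labels)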
5 code implementations • ICCV 2021 • Emanuel Ben-Baruch, Tal Ridnik, Nadav Zamir, Asaf Noy, Itamar Friedman, Matan Protter, Lihi Zelnik-Manor
In this paper, we introduce a novel asymmetric loss ("ASL"), which operates differently on positive and negative samples.
Ranked #4 on Multi-Label Classification on NUS-WIDE
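A compact PyTorch sketch of the loss as described, with separate focusing powers for positives and negatives and a probability margin that shifts negatives; the hyperparameter defaults below are illustrative, not necessarily the paper's.

import torch

def asymmetric_loss(logits, targets, gamma_pos=1.0, gamma_neg=4.0, margin=0.05):
    """Asymmetric loss ("ASL") sketch for multi-label classification:
    positives and negatives get different focusing powers, and negative
    probabilities are shifted down so easy negatives contribute little.
    logits, targets: (batch, num_classes), targets in {0, 1}."""
    p = torch.sigmoid(logits)
    p_neg = (p - margin).clamp(min=0)                  # probability shifting
    loss_pos = targets * (1 - p).pow(gamma_pos) * torch.log(p.clamp(min=1e-8))
    loss_neg = (1 - targets) * p_neg.pow(gamma_neg) * \
               torch.log((1 - p_neg).clamp(min=1e-8))
    return -(loss_pos + loss_neg).sum(dim=-1).mean()

loss = asymmetric_loss(torch.randn(4, 20), torch.randint(0, 2, (4, 20)).float())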
3 code implementations • 30 Mar 2020 • Tal Ridnik, Hussam Lawen, Asaf Noy, Emanuel Ben Baruch, Gilad Sharir, Itamar Friedman
In this work, we introduce a series of architecture modifications that aim to boost neural networks' accuracy while retaining their GPU training and inference efficiency.
Ranked #6 on Fine-Grained Image Classification on Oxford 102 Flowers (using extra training data)
1 code implementation • 19 Feb 2020 • Yonathan Aflalo, Asaf Noy, Ming Lin, Itamar Friedman, Lihi Zelnik
Through this, we produce compact architectures with the same FLOPs as EfficientNet-B0 and MobileNetV3 but with higher accuracy, by $1\%$ and $0.3\%$ respectively on ImageNet, and faster runtime on GPU.
Ranked #3 on Network Pruning on ImageNet
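A hedged sketch of the knapsack view of pruning: keep the channels that maximize total importance subject to a FLOPs budget. A greedy importance-per-FLOP heuristic stands in for the paper's actual solver, and the inner-distillation component is omitted.

def knapsack_prune(importances, flops, budget):
    """Greedily keep channels by importance-per-FLOP ratio until the
    FLOPs budget is exhausted (a simple stand-in for an exact solver)."""
    order = sorted(range(len(importances)),
                   key=lambda i: importances[i] / flops[i], reverse=True)
    kept, used = [], 0
    for i in order:
        if used + flops[i] <= budget:
            kept.append(i)
            used += flops[i]
    return sorted(kept)

# usage: keep the best channels within roughly 60% of the original FLOPs
keep = knapsack_prune([0.9, 0.2, 0.7, 0.4], [100, 100, 150, 50], budget=240)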
2 code implementations • NeurIPS 2019 • Niv Nayman, Asaf Noy, Tal Ridnik, Itamar Friedman, Rong Jin, Lihi Zelnik-Manor
This paper introduces a novel optimization method for differentiable neural architecture search, based on the theory of prediction with expert advice.
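A minimal sketch of the multiplicative, exponentiated-gradient update at the heart of prediction with expert advice, treating each candidate operation as an expert; the paper's exact update rule and its elimination of weak experts are not reproduced here.

import torch

def exponentiated_update(weights, grads, lr=1.0):
    """Each expert's weight shrinks exponentially with its observed loss
    gradient, then the weights are renormalized to a distribution."""
    new_w = weights * torch.exp(-lr * grads)
    return new_w / new_w.sum()

w = torch.full((5,), 0.2)                        # uniform over 5 candidate ops
w = exponentiated_update(w, torch.tensor([0.1, 0.5, -0.2, 0.0, 0.3]))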
1 code implementation • 8 Apr 2019 • Asaf Noy, Niv Nayman, Tal Ridnik, Nadav Zamir, Sivan Doveh, Itamar Friedman, Raja Giryes, Lihi Zelnik-Manor
In this paper, we propose a differentiable search space that allows the annealing of architecture weights, while gradually pruning inferior operations.
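A hedged sketch of annealing-and-pruning over architecture weights, assuming a temperature-annealed softmax and a fixed pruning threshold (the schedule and threshold are both assumptions, not the paper's values).

import torch

def annealed_arch_weights(alphas, step, t0=1.0, decay=0.99, prune_below=0.01):
    """Architecture parameters pass through a softmax whose temperature
    decays over time, sharpening the distribution; operations whose weight
    drops below a threshold are marked as pruned."""
    temp = t0 * decay ** step                    # annealed temperature
    w = torch.softmax(alphas / temp, dim=-1)
    keep = w >= prune_below                      # inferior ops get pruned
    return w, keep

w, keep = annealed_arch_weights(torch.randn(8), step=500)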