2 code implementations • 16 Jan 2024 • Tal Ridnik, Dedy Kredo, Itamar Friedman
Hence, many of the optimizations and tricks that have been successful in natural language generation may not be effective for code tasks.
1 code implementation • 25 Apr 2022 • Avi Gazneli, Gadi Zimerman, Tal Ridnik, Gilad Sharir, Asaf Noy
While efficient architectures and a plethora of augmentations have been proposed and heavily investigated for end-to-end image classification, state-of-the-art audio classification techniques still rely on multiple representations of the audio signal, together with large architectures fine-tuned on large datasets.
Ranked #4 on Keyword Spotting on Google Speech Commands (Google Speech Commands V2 35 metric)
2 code implementations • 7 Apr 2022 • Tal Ridnik, Hussam Lawen, Emanuel Ben-Baruch, Asaf Noy
The scheme, named USI (Unified Scheme for ImageNet), is based on knowledge distillation and modern tricks.
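The core ingredient named here is knowledge distillation: a student is trained against both the ground-truth labels and a teacher's softened predictions. A minimal sketch of that idea follows; the temperature and blending values are illustrative defaults, not the paper's settings.

```python
import math

def softmax(logits, temp=1.0):
    """Temperature-scaled softmax over a list of logits."""
    scaled = [z / temp for z in logits]
    m = max(scaled)
    exps = [math.exp(z - m) for z in scaled]
    s = sum(exps)
    return [e / s for e in exps]

def distillation_loss(student_logits, teacher_logits, label, temp=2.0, alpha=0.5):
    """Blend hard-label cross-entropy with a soft KL term against the teacher.

    `temp` and `alpha` are illustrative hyperparameters, not USI's values.
    """
    p_student = softmax(student_logits)
    hard = -math.log(p_student[label])          # cross-entropy vs. ground truth
    p_s = softmax(student_logits, temp)         # softened student distribution
    p_t = softmax(teacher_logits, temp)         # softened teacher distribution
    soft = sum(t * math.log(t / s) for t, s in zip(p_t, p_s))  # KL(teacher || student)
    return alpha * hard + (1 - alpha) * (temp ** 2) * soft
```

When student and teacher agree, the soft term vanishes and only the hard-label loss remains.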
1 code implementation • 25 Nov 2021 • Tal Ridnik, Gilad Sharir, Avi Ben-Cohen, Emanuel Ben-Baruch, Asaf Noy
In this paper, we introduce ML-Decoder, a new attention-based classification head.
Ranked #2 on Multi-Label Classification on OpenImages-v6
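An attention-based classification head of this kind can be sketched as cross-attention from learned per-class queries to spatial features, with each pooled query projected to a single logit. The shapes and projection names below are illustrative assumptions, not ML-Decoder's exact configuration.

```python
import numpy as np

def attention_classification_head(features, class_queries, w_k, w_v, w_out):
    """Cross-attention from learned class queries to spatial features (sketch).

    Illustrative shapes:
      features:      (L, D)  flattened spatial tokens from a backbone
      class_queries: (C, D)  one learned query per class
      w_k, w_v:      (D, D)  key/value projections
      w_out:         (C, D)  per-class projection to one logit
    """
    keys = features @ w_k                                      # (L, D)
    values = features @ w_v                                    # (L, D)
    scores = class_queries @ keys.T / np.sqrt(keys.shape[1])   # (C, L)
    attn = np.exp(scores - scores.max(axis=1, keepdims=True))
    attn /= attn.sum(axis=1, keepdims=True)                    # softmax over positions
    pooled = attn @ values                                     # (C, D) per-class features
    return (pooled * w_out).sum(axis=1)                        # (C,) one logit per class
```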
1 code implementation • CVPR 2022 • Emanuel Ben-Baruch, Tal Ridnik, Itamar Friedman, Avi Ben-Cohen, Nadav Zamir, Asaf Noy, Lihi Zelnik-Manor
We propose to estimate the class distribution using a dedicated temporary model, and we show its improved efficiency over a naive estimation computed using the dataset's partial annotations.
Ranked #1 on Multi-Label Classification on OpenImages-v6
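One way to picture the estimation step: trust the observed annotations where they exist, and fall back on a temporary model's predicted probabilities for the unannotated entries. This is a hypothetical illustration of the idea only, not the paper's implementation.

```python
def estimate_class_distribution(temp_model_probs, annotations):
    """Estimate per-class frequency under partial annotation (hypothetical sketch).

    temp_model_probs: per-sample probability vectors from a temporary model
                      (a stand-in name, not the paper's API).
    annotations:      matching vectors with 1 (positive), 0 (negative),
                      or None (unannotated).
    """
    n_classes = len(temp_model_probs[0])
    totals = [0.0] * n_classes
    for probs, anns in zip(temp_model_probs, annotations):
        for c in range(n_classes):
            # Observed labels are trusted; gaps use the temporary model's estimate.
            totals[c] += anns[c] if anns[c] is not None else probs[c]
    n = len(temp_model_probs)
    return [t / n for t in totals]
```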
5 code implementations • 22 Apr 2021 • Tal Ridnik, Emanuel Ben-Baruch, Asaf Noy, Lihi Zelnik-Manor
ImageNet-1K serves as the primary dataset for pretraining deep learning models for computer vision tasks.
Ranked #2 on Image Classification on Stanford Cars
5 code implementations • ICCV 2021 • Emanuel Ben-Baruch, Tal Ridnik, Nadav Zamir, Asaf Noy, Itamar Friedman, Matan Protter, Lihi Zelnik-Manor
In this paper, we introduce a novel asymmetric loss ("ASL"), which operates differently on positive and negative samples.
Ranked #4 on Multi-Label Classification on NUS-WIDE
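The asymmetry works in two ways: negatives get a much stronger focusing exponent than positives, plus a probability margin that zeroes out easy negatives entirely. A minimal sketch of that loss on per-class probabilities; the default hyperparameters follow commonly cited ASL settings but should be treated as illustrative.

```python
import math

def asymmetric_loss(probs, targets, gamma_pos=0.0, gamma_neg=4.0, margin=0.05):
    """Asymmetric multi-label loss (sketch of the ASL idea).

    Positives keep a low focusing exponent; negatives get a high exponent
    and a probability shift that discards very easy negatives.
    """
    total = 0.0
    for p, y in zip(probs, targets):
        if y == 1:
            total += ((1 - p) ** gamma_pos) * math.log(max(p, 1e-8))
        else:
            pm = max(p - margin, 0.0)   # shifted probability for negatives
            total += (pm ** gamma_neg) * math.log(max(1 - pm, 1e-8))
    return -total
```

With these defaults, a negative predicted at p = 0.01 falls entirely inside the margin and contributes zero loss.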
3 code implementations • 30 Mar 2020 • Tal Ridnik, Hussam Lawen, Asaf Noy, Emanuel Ben Baruch, Gilad Sharir, Itamar Friedman
In this work, we introduce a series of architecture modifications that aim to boost neural networks' accuracy, while retaining their GPU training and inference efficiency.
Ranked #6 on Fine-Grained Image Classification on Oxford 102 Flowers (using extra training data)
2 code implementations • NeurIPS 2019 • Niv Nayman, Asaf Noy, Tal Ridnik, Itamar Friedman, Rong Jin, Lihi Zelnik-Manor
This paper introduces a novel optimization method for differentiable neural architecture search, based on the theory of prediction with expert advice.
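In the prediction-with-expert-advice setting, candidate operations play the role of experts whose weights are updated multiplicatively according to their losses. A minimal exponentiated-gradient step in that spirit; the learning rate and update form are generic to the framework, not the paper's exact rule.

```python
import math

def exponentiated_gradient_step(weights, losses, eta=0.5):
    """Multiplicative-weights update from prediction with expert advice (sketch).

    Each expert's weight is scaled by exp(-eta * loss), then renormalized,
    so low-loss experts gain probability mass over time.
    """
    updated = [w * math.exp(-eta * l) for w, l in zip(weights, losses)]
    s = sum(updated)
    return [u / s for u in updated]
```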
1 code implementation • 8 Apr 2019 • Asaf Noy, Niv Nayman, Tal Ridnik, Nadav Zamir, Sivan Doveh, Itamar Friedman, Raja Giryes, Lihi Zelnik-Manor
In this paper, we propose a differentiable search space that allows the annealing of architecture weights, while gradually pruning inferior operations.
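The annealing-then-pruning idea can be sketched as a temperature-scaled softmax over architecture weights: as the temperature decreases, the distribution sharpens, and operations whose probability falls below a threshold are pruned. The schedule, threshold, and function below are illustrative assumptions, not the paper's procedure.

```python
import math

def anneal_and_prune(arch_weights, temperature, threshold=0.05):
    """One annealing step over architecture weights (hypothetical sketch).

    Returns the temperature-scaled softmax probabilities and the indices of
    operations that survive the pruning threshold.
    """
    exps = [math.exp(w / temperature) for w in arch_weights]
    s = sum(exps)
    probs = [e / s for e in exps]
    kept = [i for i, p in enumerate(probs) if p >= threshold]
    return probs, kept
```

Lowering the temperature concentrates mass on the strongest operation, so more of the inferior ones drop below the threshold.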