no code implementations • 25 Sep 2023 • Klara Janouskova, Tamir Shor, Chaim Baskin, Jiri Matas
Test-Time Adaptation (TTA) methods improve the robustness of deep neural networks to domain shift on a variety of tasks such as image classification or segmentation.
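As a rough illustration of the TTA setting (not the method proposed in this paper), a common baseline adapts a deployed model on unlabeled test batches by minimizing prediction entropy, in the spirit of TENT:

```python
import torch
import torch.nn.functional as F

def entropy_minimization_step(model, x, optimizer):
    """Adapt the model on an unlabeled test batch x by minimizing prediction entropy."""
    logits = model(x)
    probs = F.softmax(logits, dim=1)
    entropy = -(probs * torch.log(probs + 1e-8)).sum(dim=1).mean()
    optimizer.zero_grad()
    entropy.backward()
    optimizer.step()
    return entropy.item()
```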
1 code implementation • 26 Aug 2023 • Moshe Kimhi, Shai Kimhi, Evgenii Zheltonozhskii, Or Litany, Chaim Baskin
We present a novel confidence refinement scheme that enhances pseudo-labels in semi-supervised semantic segmentation.
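A minimal sketch of the generic confidence-thresholding step that such refinement schemes build on (the paper's actual scheme is more elaborate); `threshold` and `ignore_index` are illustrative choices:

```python
import torch.nn.functional as F

def pseudo_labels_with_confidence(logits, threshold=0.9, ignore_index=255):
    """Turn per-pixel logits (B, C, H, W) into pseudo-labels, masking low-confidence pixels."""
    probs = F.softmax(logits, dim=1)
    confidence, labels = probs.max(dim=1)          # (B, H, W)
    labels[confidence < threshold] = ignore_index  # ignored by the segmentation loss
    return labels
```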
1 code implementation • ICCV 2023 • Mitchell Keren Taraday, Chaim Baskin
Traditional methods for learning in the presence of noisy labels have successfully handled datasets with artificially injected noise but still fall short of adequately handling real-world noise.
1 code implementation • 27 Mar 2023 • Tsachi Blau, Roy Ganz, Chaim Baskin, Michael Elad, Alex Bronstein
We show that the proposed method achieves state-of-the-art results and validate our claim through extensive experiments on a variety of defense methods, classifier architectures, and datasets.
1 code implementation • 11 Jul 2022 • Yaniv Nemcovsky, Matan Jacoby, Alex M. Bronstein, Chaim Baskin
While such perturbations are usually discussed as tailored to a specific input, a universal perturbation can be constructed to alter the model's output on a set of inputs.
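A hedged sketch of how a single universal perturbation can be crafted for a whole batch with PGD-style updates; the attack studied in the paper may differ in its objective and constraints:

```python
import torch
import torch.nn.functional as F

def universal_perturbation(model, inputs, labels, eps=8/255, steps=40, lr=0.01):
    """Craft one perturbation delta that degrades the model on an entire batch of inputs."""
    delta = torch.zeros_like(inputs[:1], requires_grad=True)  # single shared perturbation
    for _ in range(steps):
        loss = F.cross_entropy(model(inputs + delta), labels)
        loss.backward()
        with torch.no_grad():
            delta += lr * delta.grad.sign()   # ascend the loss
            delta.clamp_(-eps, eps)           # stay within the L_inf ball
        delta.grad.zero_()
    return delta.detach()
```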
no code implementations • 13 Jun 2022 • Tom Avrech, Evgenii Zheltonozhskii, Chaim Baskin, Ehud Rivlin
In this work, we present a novel method for real-time environment exploration, whose only requirements are a visually similar dataset for pre-training, enough lighting in the scene, and an on-board forward-looking RGB camera for environmental sensing.
1 code implementation • 31 May 2022 • Itay Eilat, Ben Finkelshtein, Chaim Baskin, Nir Rosenfeld
Strategic classification studies learning in settings where users can modify their features to obtain favorable predictions.
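A toy best-response model (not the paper's learning algorithm): a user facing a known linear classifier moves along its weight vector just far enough to cross the decision boundary, provided the movement cost fits their budget:

```python
import numpy as np

def best_response(x, w, b, cost_budget=2.0):
    """User's strategic feature modification against a linear classifier w.x + b."""
    score = w @ x + b
    if score >= 0:                      # already classified favorably
        return x
    needed = -score / (w @ w)           # minimal step along w to reach the boundary
    if needed ** 2 * (w @ w) <= cost_budget:
        return x + needed * w
    return x                            # gaming the classifier is too expensive
```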
no code implementations • 30 May 2022 • Moshe Kimhi, Tal Rozen, Tal Kopetz, Olya Sirkin, Avi Mendelson, Chaim Baskin
Quantized neural networks are well known for reducing latency, power consumption, and model size without significant degradation in accuracy, making them highly applicable for systems with limited resources and low power requirements.
1 code implementation • 5 Apr 2022 • Tal Rozen, Moshe Kimhi, Brian Chmiel, Avi Mendelson, Chaim Baskin
The proposed method consists of a training scheme that we call Weight Distribution Mimicking (WDM), which efficiently imitates the full-precision network's weight distribution in its binary counterpart.
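A speculative sketch of what such distribution mimicking could look like: sign binarization with a straight-through estimator plus a moment-matching penalty between full-precision and binary weights; the actual WDM objective may be defined differently.

```python
import torch

def binarize_ste(w):
    """Sign binarization with a straight-through estimator for gradients."""
    wb = torch.sign(w)
    return w + (wb - w).detach()        # forward: sign(w); backward: identity

def distribution_mimicking_loss(w_fp, w_bin):
    """Hypothetical moment-matching penalty encouraging the binary weights' statistics
    to track the full-precision distribution (mean and std)."""
    return (w_fp.mean() - w_bin.mean()).pow(2) + (w_fp.std() - w_bin.std()).pow(2)
```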
1 code implementation • 2 Mar 2022 • Ben Finkelshtein, Chaim Baskin, Haggai Maron, Nadav Dym
Equivariance to permutations and rigid motions is an important inductive bias for various 3D learning problems.
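A minimal example of such an inductive bias: a DeepSets-style layer that is equivariant to permutations of the points and, via centering, invariant to translations (full rotation equivariance needs additional machinery):

```python
import torch
import torch.nn as nn

class PermEquivariantLayer(nn.Module):
    """Point-wise term plus a pooled (global) term: permuting the input points
    permutes the output in the same way."""
    def __init__(self, d_in, d_out):
        super().__init__()
        self.point = nn.Linear(d_in, d_out)
        self.pool = nn.Linear(d_in, d_out)

    def forward(self, x):                              # x: (N, d_in) point cloud
        x = x - x.mean(dim=0, keepdim=True)            # remove translations
        return self.point(x) + self.pool(x.mean(dim=0, keepdim=True))
```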
1 code implementation • 31 Jan 2022 • Or Feldman, Amit Boyarski, Shai Feldman, Dani Kogan, Avi Mendelson, Chaim Baskin
Two popular alternatives that offer a good trade-off between expressive power and computational efficiency are combinatorial invariants (i.e., obtained via the Weisfeiler-Leman (WL) test) and spectral invariants.
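For concreteness, a short sketch of the combinatorial side, 1-dimensional WL colour refinement, whose final colour histogram is a graph invariant:

```python
from collections import Counter

def wl_colors(adj, rounds=3):
    """1-WL colour refinement on an adjacency-list dict {node: [neighbors]}."""
    colors = {v: 0 for v in adj}                       # uniform initial colouring
    for _ in range(rounds):
        signatures = {
            v: (colors[v], tuple(sorted(colors[u] for u in adj[v]))) for v in adj
        }
        palette = {sig: i for i, sig in enumerate(sorted(set(signatures.values())))}
        colors = {v: palette[signatures[v]] for v in adj}
    return Counter(colors.values())                    # multiset of final colours
```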
2 code implementations • 30 Jan 2022 • Maxim Fishman, Chaim Baskin, Evgenii Zheltonozhskii, Almog David, Ron Banner, Avi Mendelson
Graph neural networks (GNNs) have become a powerful tool for processing graph-structured data but still face challenges in effectively aggregating and propagating information between layers, which limits their performance.
2 code implementations • CVPR 2022 • Adam Botach, Evgenii Zheltonozhskii, Chaim Baskin
Due to the complex nature of this multimodal task, which combines text reasoning, video understanding, instance segmentation and tracking, existing approaches typically rely on sophisticated pipelines in order to tackle it.
Ranked #4 on Referring Video Object Segmentation on MeViS
1 code implementation • 25 Mar 2021 • Evgenii Zheltonozhskii, Chaim Baskin, Avi Mendelson, Alex M. Bronstein, Or Litany
In this paper, we identify a "warm-up obstacle": the inability of standard warm-up stages to train high quality feature extractors and avert memorization of noisy labels.
Ranked #1 on Image Classification on CIFAR-10 (with noisy labels)
no code implementations • 22 Mar 2021 • Ameen Ali, Tomer Galanti, Evgeniy Zheltonozhskiy, Chaim Baskin, Lior Wolf
We consider the problem of the extraction of semantic attributes, supervised only with classification labels.
1 code implementation • 6 Nov 2020 • Ben Finkelshtein, Chaim Baskin, Evgenii Zheltonozhskii, Uri Alon
Graph neural networks (GNNs) have shown broad applicability in a variety of domains.
1 code implementation • 24 Aug 2020 • Evgenii Zheltonozhskii, Chaim Baskin, Alex M. Bronstein, Avi Mendelson
Unsupervised learning has always been appealing to machine learning researchers and practitioners, allowing them to avoid the expensive and complicated process of labeling data.
Ranked #1 on Unsupervised Image Classification on ObjectNet
no code implementations • 19 Apr 2020 • Alex Karbachevsky, Chaim Baskin, Evgenii Zheltonozhskii, Yevgeny Yermolin, Freddy Gabbay, Alex M. Bronstein, Avi Mendelson
Convolutional Neural Networks (CNNs) have become common in many fields including computer vision, speech recognition, and natural language processing.
no code implementations • 4 Mar 2020 • Evgenii Zheltonozhskii, Chaim Baskin, Yaniv Nemcovsky, Brian Chmiel, Avi Mendelson, Alex M. Bronstein
Even though deep learning has shown unmatched performance on various tasks, neural networks remain vulnerable to small adversarial perturbations of the input that lead to significant performance degradation.
2 code implementations • 17 Nov 2019 • Yury Nahshan, Brian Chmiel, Chaim Baskin, Evgenii Zheltonozhskii, Ron Banner, Alex M. Bronstein, Avi Mendelson
We show that with more aggressive quantization, the loss landscape becomes highly non-separable with steep curvature, making the selection of quantization parameters more challenging.
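A simplified stand-in for such parameter selection: a grid search over the clipping scale of a uniform quantizer that minimizes quantization MSE (the paper's loss-aware procedure is more involved):

```python
import torch

def quantize(x, scale, n_bits=4):
    """Uniform symmetric quantization with clipping threshold `scale`."""
    q_max = 2 ** (n_bits - 1) - 1
    return torch.clamp(torch.round(x / scale * q_max), -q_max, q_max) * scale / q_max

def search_scale(x, n_bits=4, grid=100):
    """Pick the clipping scale that minimizes quantization error on tensor x."""
    best_scale, best_err = None, float("inf")
    for alpha in torch.linspace(0.1, 1.0, grid):
        scale = alpha * x.abs().max()
        err = (x - quantize(x, scale, n_bits)).pow(2).mean()
        if err < best_err:
            best_scale, best_err = scale, err
    return best_scale
```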
2 code implementations • 17 Nov 2019 • Yaniv Nemcovsky, Evgenii Zheltonozhskii, Chaim Baskin, Brian Chmiel, Maxim Fishman, Alex M. Bronstein, Avi Mendelson
In this work, we study the application of randomized smoothing as a way to improve performance on unperturbed data as well as to increase robustness to adversarial attacks.
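A minimal sketch of randomized smoothing at inference time: average the classifier's outputs over Gaussian perturbations of the input and predict the most probable class.

```python
import torch

def smoothed_predict(model, x, sigma=0.25, n_samples=100):
    """Predict with a smoothed classifier for a single input x of shape (C, H, W)."""
    with torch.no_grad():
        noisy = x.unsqueeze(0) + sigma * torch.randn(n_samples, *x.shape)
        probs = torch.softmax(model(noisy), dim=1).mean(dim=0)
    return probs.argmax().item()
```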
1 code implementation • 25 Sep 2019 • Chaim Baskin, Brian Chmiel, Evgenii Zheltonozhskii, Ron Banner, Alex M. Bronstein, Avi Mendelson
Our method trains the model to achieve low-entropy feature maps, which enables efficient compression at inference time using classical transform coding methods.
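A rough sketch of the inference-time pipeline, with zlib standing in for the classical transform/entropy coding used in the paper; lower-entropy feature maps compress to fewer bytes:

```python
import zlib
import numpy as np

def compress_feature_map(fmap, n_bits=4):
    """Quantize a feature map and losslessly entropy-code it; returns payload, scale, ratio."""
    fmap = np.asarray(fmap, dtype=np.float32)
    scale = np.abs(fmap).max() / (2 ** (n_bits - 1) - 1) + 1e-12
    q = np.round(fmap / scale).astype(np.int8)
    payload = zlib.compress(q.tobytes())
    return payload, scale, len(payload) / q.nbytes
```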
1 code implementation • 26 May 2019 • Brian Chmiel, Chaim Baskin, Ron Banner, Evgenii Zheltonozhskii, Yevgeny Yermolin, Alex Karbachevsky, Alex M. Bronstein, Avi Mendelson
We analyze the performance of our approach on a variety of CNN architectures and demonstrate that an FPGA implementation of ResNet-18 with our approach reduces the memory energy footprint by around 40% compared to a quantized network, with negligible impact on accuracy.
2 code implementations • 22 Apr 2019 • Yochai Zur, Chaim Baskin, Evgenii Zheltonozhskii, Brian Chmiel, Itay Evron, Alex M. Bronstein, Avi Mendelson
While mainstream deep learning methods train the neural network's weights while keeping the network architecture fixed, the emerging neural architecture search (NAS) techniques make the latter also amenable to training.
2 code implementations • 7 Feb 2019 • Nir Diamant, Dean Zadok, Chaim Baskin, Eli Schwartz, Alex M. Bronstein
Beauty is in the eye of the beholder.
no code implementations • 27 Nov 2018 • Natan Liss, Chaim Baskin, Avi Mendelson, Alex M. Bronstein, Raja Giryes
While most works use uniform quantizers for both parameters and activations, uniform quantization is not always optimal, and non-uniform quantizers need to be considered.
1 code implementation • ICLR 2019 • Chaim Baskin, Natan Liss, Yoav Chai, Evgenii Zheltonozhskii, Eli Schwartz, Raja Giryes, Avi Mendelson, Alexander M. Bronstein
Convolutional Neural Networks (CNNs) are very popular in many fields, including computer vision, speech recognition, and natural language processing.
no code implementations • 29 Apr 2018 • Chaim Baskin, Eli Schwartz, Evgenii Zheltonozhskii, Natan Liss, Raja Giryes, Alex M. Bronstein, Avi Mendelson
We present a novel method for neural network quantization that emulates a non-uniform $k$-quantile quantizer, which adapts to the distribution of the quantized parameters.
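A schematic k-quantile quantizer, assuming k levels placed at the empirical quantiles of the tensor so each bin holds roughly the same number of parameters; the paper's trainable emulation differs in detail:

```python
import torch

def quantile_quantize(w, k=16):
    """Quantize tensor w to k levels located at the empirical quantiles of its values."""
    qs = torch.linspace(0, 1, k + 1, device=w.device)
    edges = torch.quantile(w.flatten(), qs)            # bin boundaries
    centers = 0.5 * (edges[:-1] + edges[1:])           # representative value per bin
    idx = torch.bucketize(w, edges[1:-1])              # assign each weight to a bin
    return centers[idx]
```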
no code implementations • 31 Jul 2017 • Chaim Baskin, Natan Liss, Evgenii Zheltonozhskii, Alex M. Bronshtein, Avi Mendelson
Using quantized values enables the use of FPGAs to run NNs, since FPGAs are well suited to these primitives; e.g., FPGAs provide efficient support for bitwise operations and can work with arbitrary-precision representations of numbers.