1 code implementation • 25 Nov 2024 • Yaniv Nemcovsky, Avi Mendelson, Chaim Baskin
Our approach enables simultaneous optimization of the locations and perturbations of multiple sparse patches, for any given number of patches and any patch shape.
1 code implementation • 15 Nov 2024 • Moshe Kimhi, Idan Kashani, Avi Mendelson, Chaim Baskin
The widely used ReLU is favored for its hardware efficiency, as its implementation at inference reduces to a one-bit sign check, yet it suffers from issues such as the "dying ReLU" problem, where neurons fail to activate during training and remain constantly at zero, as highlighted by Lu et al.
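Since this entry hinges on the dying-ReLU failure mode, a minimal PyTorch sketch (an illustration, not code from the paper) makes the mechanism concrete: once a neuron's pre-activations are all negative, ReLU outputs zero and the weights receive zero gradient, so the neuron cannot recover.

```python
import torch

# Minimal dying-ReLU demonstration (illustrative, not from the paper).
x = torch.rand(8, 4)                        # non-negative inputs
w = -torch.ones(4, 1, requires_grad=True)   # weights driven negative, e.g. by a bad update
pre = x @ w                                 # pre-activations are all <= 0
loss = torch.relu(pre).sum()                # ReLU clamps every output to 0
loss.backward()
print(w.grad.abs().sum())                   # tensor(0.): the neuron is "dead"
```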
no code implementations • 22 Oct 2024 • Tsachi Blau, Moshe Kimhi, Yonatan Belinkov, Alexander Bronstein, Chaim Baskin
A more parameter-efficient approach is Prompt Tuning (PT), which updates only a few learnable tokens; In-Context Learning (ICL), by contrast, adapts the model to a new task simply by including examples in the input, without any training.
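A minimal sketch of Prompt Tuning, assuming a frozen backbone that accepts input embeddings directly (the class and parameter names are illustrative, not the paper's API): the backbone stays frozen, and only a few prepended prompt embeddings are trained.

```python
import torch
import torch.nn as nn

class PromptTuned(nn.Module):
    """Hypothetical PT wrapper: freezes the backbone, trains only the prompts."""
    def __init__(self, backbone: nn.Module, embed_dim: int, n_prompts: int = 8):
        super().__init__()
        self.backbone = backbone
        for p in self.backbone.parameters():
            p.requires_grad_(False)          # the full model stays frozen
        self.prompts = nn.Parameter(0.02 * torch.randn(n_prompts, embed_dim))

    def forward(self, input_embeds: torch.Tensor) -> torch.Tensor:
        # input_embeds: (batch, seq_len, embed_dim)
        prompts = self.prompts.unsqueeze(0).expand(input_embeds.size(0), -1, -1)
        return self.backbone(torch.cat([prompts, input_embeds], dim=1))
```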
no code implementations • 28 Sep 2024 • Mitchell Keren Taraday, Almog David, Chaim Baskin
Message Passing Graph Neural Networks (MPGNNs) have emerged as the preferred method for modeling complex interactions across diverse graph entities.
no code implementations • 19 Sep 2024 • Tamir Shor, Chaim Baskin, Alex Bronstein
Dynamic Magnetic Resonance Imaging (MRI) is a crucial non-invasive method used to capture the movement of internal organs and tissues, making it a key tool for medical diagnosis.
1 code implementation • 1 Jul 2024 • Moshe Kimhi, David Vainshtein, Chaim Baskin, Dotan Di Castro
The ability of robots to manipulate objects relies heavily on their aptitude for visual perception.
Ranked #1 on Instance Segmentation on ARMBench
1 code implementation • 16 Jun 2024 • Eden Grad, Moshe Kimhi, Lion Halika, Chaim Baskin
Obtaining accurate labels for instance segmentation is particularly challenging due to the complex nature of the task.
Ranked #1 on Instance Segmentation on COCO-N Medium
1 code implementation • 13 Jun 2024 • Maor Dikter, Tsachi Blau, Chaim Baskin
The code for our experiments is available at https://github.com/clearProject/CLEAR/tree/main
1 code implementation • 13 May 2024 • Zachary Bamberger, Ofek Glick, Chaim Baskin, Yonatan Belinkov
Language Models (LMs) often struggle with linguistic understanding at the discourse level, even though discourse patterns such as coherence, cohesion, and narrative flow are prevalent in their pre-training data.
1 code implementation • 9 Apr 2024 • Tamir Shor, Chaim Baskin, Alex Bronstein
In this work, we present a novel algorithm for both breast cancer classification and segmentation.
1 code implementation • 27 Feb 2024 • Gabriele Serussi, Tamir Shor, Tom Hirshberg, Chaim Baskin, Alex Bronstein
Multi-rotor aerial autonomous vehicles (MAVs) primarily rely on vision for navigation purposes.
no code implementations • 4 Oct 2023 • Or Feldman, Chaim Baskin
Modern approaches for learning on dynamic graphs have adopted the use of batches instead of applying updates one by one.
no code implementations • 25 Sep 2023 • Klara Janouskova, Tamir Shor, Chaim Baskin, Jiri Matas
Test-Time Adaptation (TTA) methods improve the robustness of deep neural networks to domain shift on a variety of tasks such as image classification or segmentation.
1 code implementation • 26 Aug 2023 • Moshe Kimhi, Shai Kimhi, Evgenii Zheltonozhskii, Or Litany, Chaim Baskin
We present a novel confidence refinement scheme that enhances pseudo labels in semi-supervised semantic segmentation.
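For context, a generic confidence-thresholding baseline for pseudo labels looks like the sketch below (the paper's refinement scheme goes beyond this; the function and argument names are illustrative): pixels predicted with low confidence are simply masked out of the unsupervised loss.

```python
import torch
import torch.nn.functional as F

def pseudo_label_loss(logits_weak, logits_strong, threshold=0.95):
    probs = logits_weak.softmax(dim=1)           # (B, C, H, W)
    conf, pseudo = probs.max(dim=1)              # per-pixel confidence and label
    loss = F.cross_entropy(logits_strong, pseudo, reduction="none")
    mask = (conf >= threshold).float()           # keep only confident pixels
    return (loss * mask).sum() / mask.sum().clamp(min=1.0)
```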
1 code implementation • ICCV 2023 • Mitchell Keren Taraday, Chaim Baskin
Traditional methods for learning with the presence of noisy labels have successfully handled datasets with artificially injected noise but still fall short of adequately handling real-world noise.
1 code implementation • 27 Mar 2023 • Tsachi Blau, Roy Ganz, Chaim Baskin, Michael Elad, Alex M. Bronstein
Robust classification methods predominantly concentrate on algorithms that address a specific threat model, resulting in ineffective defenses against other threat models.
1 code implementation • 11 Jul 2022 • Yaniv Nemcovsky, Matan Jacoby, Alex M. Bronstein, Chaim Baskin
While such perturbations are usually discussed as tailored to a specific input, a universal perturbation can be constructed to alter the model's output on a set of inputs.
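A minimal sketch of crafting a universal perturbation, assuming an image classifier with 224x224 inputs (a generic gradient-ascent formulation, not the paper's exact procedure): a single delta is optimized to raise the loss across a whole set of inputs.

```python
import torch

def universal_perturbation(model, loader, eps=8 / 255, steps=100, lr=0.01):
    delta = torch.zeros(1, 3, 224, 224, requires_grad=True)  # shared across inputs
    opt = torch.optim.Adam([delta], lr=lr)
    loss_fn = torch.nn.CrossEntropyLoss()
    for _ in range(steps):
        for x, y in loader:
            opt.zero_grad()
            loss = -loss_fn(model(x + delta), y)  # ascend the loss on every input
            loss.backward()
            opt.step()
            with torch.no_grad():
                delta.clamp_(-eps, eps)           # keep the perturbation bounded
    return delta.detach()
```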
no code implementations • 13 Jun 2022 • Tom Avrech, Evgenii Zheltonozhskii, Chaim Baskin, Ehud Rivlin
In this work, we present a novel method for real-time environment exploration, whose only requirements are a visually similar dataset for pre-training, enough lighting in the scene, and an on-board forward-looking RGB camera for environmental sensing.
1 code implementation • 31 May 2022 • Itay Eilat, Ben Finkelshtein, Chaim Baskin, Nir Rosenfeld
Strategic classification studies learning in settings where users can modify their features to obtain favorable predictions.
1 code implementation • 30 May 2022 • Moshe Kimhi, Tal Rozen, Avi Mendelson, Chaim Baskin
Challenging this assumption, we argue that the optimal minimum shifts as the precision changes; it is therefore better to view quantization as a random process. This view lays the foundation for a different approach to quantizing neural networks: during training, the model is quantized to varying precisions, bit allocation is cast as a Markov Decision Process, and an optimal bitwidth allocation is then found by measuring specified behaviors on a specific device via direct signals from the particular hardware architecture.
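A minimal sketch of the underlying idea, not the paper's algorithm: weights are fake-quantized with a straight-through estimator, and the bitwidth is re-drawn on each forward pass, so training treats precision as a random variable rather than a fixed choice (QuantLinear and the bit choices here are illustrative).

```python
import torch

def fake_quantize(w: torch.Tensor, bits: int) -> torch.Tensor:
    qmax = 2 ** (bits - 1) - 1                   # symmetric signed range
    scale = w.detach().abs().max() / qmax
    q = (w / scale).round().clamp(-qmax, qmax) * scale
    return w + (q - w).detach()                  # straight-through estimator

class QuantLinear(torch.nn.Linear):
    def forward(self, x, bit_choices=(2, 3, 4, 8)):
        bits = bit_choices[torch.randint(len(bit_choices), (1,)).item()]
        return torch.nn.functional.linear(x, fake_quantize(self.weight, bits), self.bias)
```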
1 code implementation • 5 Apr 2022 • Tal Rozen, Moshe Kimhi, Brian Chmiel, Avi Mendelson, Chaim Baskin
The proposed method consists of a training scheme that we call Weight Distribution Mimicking (WDM), which efficiently mimics the weight distribution of the full-precision network in its binary counterpart.
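The exact WDM objective is defined in the paper; the sketch below is only an illustrative moment-matching stand-in for the idea, nudging the scaled binary weights toward the statistics of the latent full-precision weights.

```python
import torch

def wdm_like_loss(w_fp: torch.Tensor) -> torch.Tensor:
    # Illustrative stand-in for WDM: match the first two moments of the
    # binarized weights to the full-precision ones (not the paper's loss).
    w_bin = w_fp.sign() * w_fp.abs().mean()      # scaled binary counterpart
    return (w_fp.mean() - w_bin.mean()) ** 2 + (w_fp.std() - w_bin.std()) ** 2
```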
1 code implementation • 2 Mar 2022 • Ben Finkelshtein, Chaim Baskin, Haggai Maron, Nadav Dym
Equivariance to permutations and rigid motions is an important inductive bias for various 3D learning problems.
1 code implementation • 31 Jan 2022 • Or Feldman, Amit Boyarski, Shai Feldman, Dani Kogan, Avi Mendelson, Chaim Baskin
Two popular alternatives that offer a good trade-off between expressive power and computational efficiency are combinatorial (i.e., obtained via the Weisfeiler-Leman (WL) test) and spectral invariants.
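As a refresher on the combinatorial side, one round of 1-WL color refinement fits in a few lines (an illustrative helper, not the paper's code): each node's color is re-hashed from its own color and the multiset of its neighbors' colors.

```python
def wl_refine(colors, adj_list):
    # One round of 1-Weisfeiler-Leman color refinement.
    new_colors, table = [], {}
    for v, neigh in enumerate(adj_list):
        sig = (colors[v], tuple(sorted(colors[u] for u in neigh)))
        new_colors.append(table.setdefault(sig, len(table)))  # fresh id per signature
    return new_colors
```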
2 code implementations • 30 Jan 2022 • Maxim Fishman, Chaim Baskin, Evgenii Zheltonozhskii, Almog David, Ron Banner, Avi Mendelson
Graph neural networks (GNNs) have become a powerful tool for processing graph-structured data but still face challenges in effectively aggregating and propagating information between layers, which limits their performance.
2 code implementations • CVPR 2022 • Adam Botach, Evgenii Zheltonozhskii, Chaim Baskin
Due to the complex nature of this multimodal task, which combines text reasoning, video understanding, instance segmentation and tracking, existing approaches typically rely on sophisticated pipelines in order to tackle it.
Ranked #5 on Referring Expression Segmentation on J-HMDB
1 code implementation • 25 Mar 2021 • Evgenii Zheltonozhskii, Chaim Baskin, Avi Mendelson, Alex M. Bronstein, Or Litany
In this paper, we identify a "warm-up obstacle": the inability of standard warm-up stages to train high quality feature extractors and avert memorization of noisy labels.
Ranked #1 on Image Classification on CIFAR-10 (with noisy labels)
no code implementations • 22 Mar 2021 • Ameen Ali, Tomer Galanti, Evgeniy Zheltonozhskiy, Chaim Baskin, Lior Wolf
We consider the problem of the extraction of semantic attributes, supervised only with classification labels.
1 code implementation • 6 Nov 2020 • Ben Finkelshtein, Chaim Baskin, Evgenii Zheltonozhskii, Uri Alon
Graph neural networks (GNNs) have shown broad applicability in a variety of domains.
1 code implementation • 24 Aug 2020 • Evgenii Zheltonozhskii, Chaim Baskin, Alex M. Bronstein, Avi Mendelson
Unsupervised learning has always been appealing to machine learning researchers and practitioners, allowing them to avoid the expensive and complicated process of labeling data.
Ranked #1 on Unsupervised Image Classification on ObjectNet
no code implementations • 19 Apr 2020 • Alex Karbachevsky, Chaim Baskin, Evgenii Zheltonozhskii, Yevgeny Yermolin, Freddy Gabbay, Alex M. Bronstein, Avi Mendelson
Convolutional Neural Networks (CNNs) have become common in many fields including computer vision, speech recognition, and natural language processing.
no code implementations • 4 Mar 2020 • Evgenii Zheltonozhskii, Chaim Baskin, Yaniv Nemcovsky, Brian Chmiel, Avi Mendelson, Alex M. Bronstein
Even though deep learning achieves unmatched performance on various tasks, neural networks have been shown to be vulnerable to small adversarial perturbations of the input that lead to significant performance degradation.
2 code implementations • 17 Nov 2019 • Yury Nahshan, Brian Chmiel, Chaim Baskin, Evgenii Zheltonozhskii, Ron Banner, Alex M. Bronstein, Avi Mendelson
We show that with more aggressive quantization, the loss landscape becomes highly non-separable with steep curvature, making the selection of quantization parameters more challenging.
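This observation can be probed with a simple sweep, sketched below (an illustration of the phenomenon, not the paper's method; names are hypothetical): vary a layer's quantization step and record the loss to see how sharp the minimum becomes at aggressive precisions.

```python
import torch

def sweep_quant_step(model, layer, x, y, loss_fn, steps):
    losses, w = [], layer.weight.detach().clone()
    for s in steps:                               # candidate quantization steps
        layer.weight.data = (w / s).round() * s   # uniform quantization with step s
        with torch.no_grad():
            losses.append(loss_fn(model(x), y).item())
    layer.weight.data = w                         # restore the original weights
    return losses
```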
2 code implementations • 17 Nov 2019 • Yaniv Nemcovsky, Evgenii Zheltonozhskii, Chaim Baskin, Brian Chmiel, Maxim Fishman, Alex M. Bronstein, Avi Mendelson
In this work, we study the application of randomized smoothing as a way to improve performance on unperturbed data as well as to increase robustness to adversarial attacks.
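A minimal sketch of randomized smoothing at prediction time, in its standard form (the paper studies this mechanism for clean performance as well as robustness): classify many Gaussian-noised copies of the input and take a majority vote.

```python
import torch

def smoothed_predict(model, x, sigma=0.25, n=100):
    noisy = x.unsqueeze(0) + sigma * torch.randn(n, *x.shape)  # n noisy copies
    with torch.no_grad():
        votes = model(noisy).argmax(dim=1)
    return votes.mode().values.item()            # majority-vote class
```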
1 code implementation • 25 Sep 2019 • Chaim Baskin, Brian Chmiel, Evgenii Zheltonozhskii, Ron Banner, Alex M. Bronstein, Avi Mendelson
Our method trains the model to achieve low-entropy feature maps, which enables efficient compression at inference time using classical transform coding methods.
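The link between low entropy and compressibility can be made concrete with a small helper (illustrative, not the paper's training loss): the empirical entropy of a quantized feature map lower-bounds the bits per value that any classical entropy coder needs at inference time.

```python
import torch

def empirical_entropy_bits(a_quantized: torch.Tensor) -> float:
    _, counts = a_quantized.flatten().unique(return_counts=True)
    p = counts.float() / counts.sum()
    return float(-(p * p.log2()).sum())          # average bits per activation
```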
1 code implementation • 26 May 2019 • Brian Chmiel, Chaim Baskin, Ron Banner, Evgenii Zheltonozhskii, Yevgeny Yermolin, Alex Karbachevsky, Alex M. Bronstein, Avi Mendelson
We analyze the performance of our approach on a variety of CNN architectures and demonstrate that an FPGA implementation of ResNet-18 with our approach reduces the memory energy footprint by around 40% compared to a quantized network, with negligible impact on accuracy.
2 code implementations • 22 Apr 2019 • Yochai Zur, Chaim Baskin, Evgenii Zheltonozhskii, Brian Chmiel, Itay Evron, Alex M. Bronstein, Avi Mendelson
While mainstream deep learning methods train the weights of a neural network while keeping its architecture fixed, emerging neural architecture search (NAS) techniques make the latter amenable to training as well.
2 code implementations • 7 Feb 2019 • Nir Diamant, Dean Zadok, Chaim Baskin, Eli Schwartz, Alex M. Bronstein
Beauty is in the eye of the beholder.
no code implementations • 27 Nov 2018 • Natan Liss, Chaim Baskin, Avi Mendelson, Alex M. Bronstein, Raja Giryes
While most works use uniform quantizers for both parameters and activations, uniform quantization is not always optimal, and non-uniform quantizers need to be considered.
1 code implementation • ICLR 2019 • Chaim Baskin, Natan Liss, Yoav Chai, Evgenii Zheltonozhskii, Eli Schwartz, Raja Giryes, Avi Mendelson, Alexander M. Bronstein
Convolutional Neural Networks (CNNs) are very popular in many fields, including computer vision, speech recognition, and natural language processing.
no code implementations • 29 Apr 2018 • Chaim Baskin, Eli Schwartz, Evgenii Zheltonozhskii, Natan Liss, Raja Giryes, Alex M. Bronstein, Avi Mendelson
We present a novel method for neural network quantization that emulates a non-uniform $k$-quantile quantizer, which adapts to the distribution of the quantized parameters.
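A minimal sketch of a k-quantile quantizer in the spirit of this description (an illustrative implementation, not the paper's code): bin edges sit at empirical quantiles of the weights, so every bin holds roughly the same number of values, adapting the levels to the weight distribution.

```python
import torch

def quantile_quantize(w: torch.Tensor, k: int = 16) -> torch.Tensor:
    edges = torch.quantile(w.flatten(), torch.linspace(0, 1, k + 1))
    centers = 0.5 * (edges[:-1] + edges[1:])     # one representative per bin
    idx = torch.bucketize(w, edges[1:-1])        # map each weight to its bin
    return centers[idx]
```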
no code implementations • 31 Jul 2017 • Chaim Baskin, Natan Liss, Evgenii Zheltonozhskii, Alex M. Bronshtein, Avi Mendelson
Using quantized values enables the use of FPGAs to run NNs, since FPGAs are well suited to these primitives; e.g., FPGAs provide efficient support for bitwise operations and can work with arbitrary-precision representations of numbers.
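The bitwise primitive alluded to here is the classic XNOR-popcount dot product: with {-1, +1} values packed into machine words, a dot product reduces to operations FPGA fabric handles natively. A minimal sketch in plain Python (illustrative only):

```python
def xnor_dot(a_bits: int, w_bits: int, n: int) -> int:
    # a_bits / w_bits encode n values, bit 1 meaning +1 and bit 0 meaning -1.
    agree = ~(a_bits ^ w_bits) & ((1 << n) - 1)  # XNOR: 1 where signs match
    pop = bin(agree).count("1")                  # popcount of matching positions
    return 2 * pop - n                           # sum of the +1/-1 products
```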