Search Results for author: Chaim Baskin

Found 32 papers, 22 papers with code

End-to-End Referring Video Object Segmentation with Multimodal Transformers

2 code implementations • CVPR 2022 • Adam Botach, Evgenii Zheltonozhskii, Chaim Baskin

Due to the complex nature of this multimodal task, which combines text reasoning, video understanding, instance segmentation and tracking, existing approaches typically rely on sophisticated pipelines in order to tackle it.

Inductive Bias • Referring Expression Segmentation • +5

Contrast to Divide: Self-Supervised Pre-Training for Learning with Noisy Labels

1 code implementation • 25 Mar 2021 • Evgenii Zheltonozhskii, Chaim Baskin, Avi Mendelson, Alex M. Bronstein, Or Litany

In this paper, we identify a "warm-up obstacle": the inability of standard warm-up stages to train high quality feature extractors and avert memorization of noisy labels.

Learning with noisy labels • Memorization

Loss Aware Post-training Quantization

2 code implementations • 17 Nov 2019 • Yury Nahshan, Brian Chmiel, Chaim Baskin, Evgenii Zheltonozhskii, Ron Banner, Alex M. Bronstein, Avi Mendelson

We show that with more aggressive quantization, the loss landscape becomes highly non-separable with steep curvature, making the selection of quantization parameters more challenging.

Quantization
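
The difficulty of picking quantization parameters described in the abstract can be illustrated with a minimal sketch (not the paper's LAPQ method): a uniform quantizer whose step size is chosen by sweeping candidates and minimizing quantization error on toy weights. The function names, the 4-bit default, and the MSE proxy for the task loss are all illustrative assumptions.

```python
import numpy as np

def quantize(x, step, n_bits=4):
    # Uniform symmetric quantizer: round to the nearest multiple of `step`
    # and clip to the signed range representable with n_bits.
    q_max = 2 ** (n_bits - 1) - 1
    q = np.clip(np.round(x / step), -q_max - 1, q_max)
    return q * step

def sweep_step(x, steps, n_bits=4):
    # Choose the step size minimizing quantization MSE -- a crude proxy for
    # the task loss the paper actually studies.
    errors = [float(np.mean((x - quantize(x, s, n_bits)) ** 2)) for s in steps]
    return steps[int(np.argmin(errors))], errors

rng = np.random.default_rng(0)
w = rng.normal(0.0, 1.0, size=10_000)      # toy "layer weights"
steps = np.linspace(0.05, 1.0, 50)
best_step, errs = sweep_step(w, steps)
print(best_step)
```

At low bit widths the error curve becomes sharp around its minimum, which is the intuition behind the "steep curvature" observation in the abstract.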

Self-Supervised Learning for Large-Scale Unsupervised Image Clustering

1 code implementation • 24 Aug 2020 • Evgenii Zheltonozhskii, Chaim Baskin, Alex M. Bronstein, Avi Mendelson

Unsupervised learning has always been appealing to machine learning researchers and practitioners, allowing them to avoid an expensive and complicated process of labeling the data.

Clustering • General Classification • +4

A Simple and Universal Rotation Equivariant Point-cloud Network

1 code implementation • 2 Mar 2022 • Ben Finkelshtein, Chaim Baskin, Haggai Maron, Nadav Dym

Equivariance to permutations and rigid motions is an important inductive bias for various 3D learning problems.

Inductive Bias

Feature Map Transform Coding for Energy-Efficient CNN Inference

1 code implementation • 26 May 2019 • Brian Chmiel, Chaim Baskin, Ron Banner, Evgenii Zheltonozhskii, Yevgeny Yermolin, Alex Karbachevsky, Alex M. Bronstein, Avi Mendelson

We analyze the performance of our approach on a variety of CNN architectures and demonstrate that an FPGA implementation of ResNet-18 with our approach reduces the memory energy footprint by around 40% compared to a quantized network, with negligible impact on accuracy.

Video Compression

Graph Representation Learning via Aggregation Enhancement

2 code implementations • 30 Jan 2022 • Maxim Fishman, Chaim Baskin, Evgenii Zheltonozhskii, Almog David, Ron Banner, Avi Mendelson

Graph neural networks (GNNs) have become a powerful tool for processing graph-structured data but still face challenges in effectively aggregating and propagating information between layers, which limits their performance.

Data Augmentation • Graph Representation Learning • +3

Towards Learning of Filter-Level Heterogeneous Compression of Convolutional Neural Networks

2 code implementations • 22 Apr 2019 • Yochai Zur, Chaim Baskin, Evgenii Zheltonozhskii, Brian Chmiel, Itay Evron, Alex M. Bronstein, Avi Mendelson

While mainstream deep learning methods train the neural network's weights while keeping the network architecture fixed, emerging neural architecture search (NAS) techniques make the latter amenable to training as well.

Network Pruning • Neural Architecture Search • +1

Single-Node Attacks for Fooling Graph Neural Networks

1 code implementation • 6 Nov 2020 • Ben Finkelshtein, Chaim Baskin, Evgenii Zheltonozhskii, Uri Alon

Graph neural networks (GNNs) have shown broad applicability in a variety of domains.

Adversarial Attack

Enhanced Meta Label Correction for Coping with Label Corruption

1 code implementation • ICCV 2023 • Mitchell Keren Taraday, Chaim Baskin

Traditional methods for learning with the presence of noisy labels have successfully handled datasets with artificially injected noise but still fall short of adequately handling real-world noise.

Learning with noisy labels

CAT: Compression-Aware Training for bandwidth reduction

1 code implementation • 25 Sep 2019 • Chaim Baskin, Brian Chmiel, Evgenii Zheltonozhskii, Ron Banner, Alex M. Bronstein, Avi Mendelson

Our method trains the model to achieve low-entropy feature maps, which enables efficient compression at inference time using classical transform coding methods.

Quantization
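
The "low-entropy feature maps" idea in the abstract above can be sketched with a simple histogram-based entropy estimate: activations concentrated in a few bins have low entropy and compress well with classical entropy coders. The estimator, bin count, and toy data below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def activation_entropy(feature_map, n_bins=16):
    # Empirical entropy (in bits) of a binned feature map; lower entropy
    # means classical transform/entropy coders can compress the map further.
    hist, _ = np.histogram(feature_map, bins=n_bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(0)
spread = rng.uniform(-1, 1, 10_000)          # high-entropy activations
peaked = np.zeros(10_000)
peaked[:500] = rng.uniform(-1, 1, 500)       # mostly-zero, low-entropy activations
print(activation_entropy(spread), activation_entropy(peaked))
```

A differentiable surrogate of such an entropy term could be added to the training loss to steer the network toward compressible activations, which is the spirit of the method described.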

Bimodal Distributed Binarized Neural Networks

1 code implementation • 5 Apr 2022 • Tal Rozen, Moshe Kimhi, Brian Chmiel, Avi Mendelson, Chaim Baskin

The proposed method consists of a training scheme that we call Weight Distribution Mimicking (WDM), which efficiently imitates the full-precision network's weight distribution in its binary counterpart.

Binarization • Quantization
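
To make the "imitate the weight distribution" idea concrete, here is a hypothetical regularizer (not the paper's WDM) that rewards a bimodal, binarization-friendly weight distribution by penalizing each weight's distance from the nearest of two modes ±alpha; taking alpha as the mean absolute weight is a common scaling choice in binarized networks.

```python
import numpy as np

def bimodal_penalty(w):
    # Hypothetical regularizer: penalize each weight's distance from the
    # nearest of the two modes +/- alpha, nudging the weight distribution
    # toward a bimodal shape that binarizes with little error.
    alpha = np.mean(np.abs(w))
    return float(np.mean((np.abs(w) - alpha) ** 2))

rng = np.random.default_rng(0)
print(bimodal_penalty(rng.normal(size=1_000)))
```

Weights already clustered at ±alpha incur zero penalty, while unimodal weights centered at zero incur a large one.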

Physical Passive Patch Adversarial Attacks on Visual Odometry Systems

1 code implementation • 11 Jul 2022 • Yaniv Nemcovsky, Matan Jacoby, Alex M. Bronstein, Chaim Baskin

While such perturbations are usually discussed as tailored to a specific input, a universal perturbation can be constructed to alter the model's output on a set of inputs.

Autonomous Navigation • Drone navigation • +1
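
The universal-perturbation idea from the abstract above can be sketched on a toy linear scorer: a single shared perturbation, constrained to an L-infinity budget, is optimized to alter the model's output on an entire set of inputs at once. The linear model, names, and constants are illustrative assumptions, not the paper's patch attack on visual odometry.

```python
import numpy as np

def universal_perturbation(X, w, eps=0.5, steps=10, lr=0.2):
    # Toy sketch of a *universal* perturbation: one shared `delta` lowers the
    # score f(x) = w @ (x + delta) for every input in X, under an L-infinity
    # budget `eps`. Signed gradient descent on the summed score.
    delta = np.zeros_like(w)
    for _ in range(steps):
        grad = len(X) * w            # d/d(delta) of sum_i w @ (x_i + delta)
        delta = np.clip(delta - lr * np.sign(grad), -eps, eps)
    return delta

rng = np.random.default_rng(0)
w = rng.normal(size=8)
X = np.array([rng.normal(size=8) + 2 * np.sign(w) for _ in range(5)])
delta = universal_perturbation(X, w)
before = [float(w @ x) for x in X]
after = [float(w @ (x + delta)) for x in X]
```

The same delta reduces the score of every input, which is what distinguishes a universal perturbation from a per-input one.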

Smoothed Inference for Adversarially-Trained Models

2 code implementations • 17 Nov 2019 • Yaniv Nemcovsky, Evgenii Zheltonozhskii, Chaim Baskin, Brian Chmiel, Maxim Fishman, Alex M. Bronstein, Avi Mendelson

In this work, we study the application of randomized smoothing as a way to improve performance on unperturbed data as well as to increase robustness to adversarial attacks.

Adversarial Defense
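
Randomized smoothing, the technique studied in the abstract above, classifies by majority vote over Gaussian-perturbed copies of the input. A minimal sketch, assuming a toy binary base classifier (the function names and constants are illustrative, not the paper's code):

```python
import numpy as np

def smoothed_predict(classify, x, sigma=0.25, n=100, seed=0):
    # Randomized smoothing at inference: classify many Gaussian-perturbed
    # copies of the input and return the majority vote. Here `classify`
    # returns a class index in {0, 1}.
    rng = np.random.default_rng(seed)
    votes = np.zeros(2, dtype=int)
    for _ in range(n):
        votes[classify(x + rng.normal(0.0, sigma, size=x.shape))] += 1
    return int(np.argmax(votes))

# Toy base classifier: positive class iff the mean coordinate is positive.
classify = lambda x: int(np.mean(x) > 0)
print(smoothed_predict(classify, np.full(16, 0.3)))
```

Averaging over noise makes the smoothed decision stable under small input perturbations, which is the source of the certified-robustness guarantees usually associated with this technique.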

Weisfeiler and Leman Go Infinite: Spectral and Combinatorial Pre-Colorings

1 code implementation • 31 Jan 2022 • Or Feldman, Amit Boyarski, Shai Feldman, Dani Kogan, Avi Mendelson, Chaim Baskin

Two popular alternatives that offer a good trade-off between expressive power and computational efficiency are combinatorial (i.e., obtained via the Weisfeiler-Leman (WL) test) and spectral invariants.

Computational Efficiency • Isomorphism Testing • +1

Classifier Robustness Enhancement Via Test-Time Transformation

1 code implementation • 27 Mar 2023 • Tsachi Blau, Roy Ganz, Chaim Baskin, Michael Elad, Alex Bronstein

We show that the proposed method achieves state-of-the-art results and validate our claim through extensive experiments on a variety of defense methods, classifier architectures, and datasets.

Adversarial Attack

UNIQ: Uniform Noise Injection for Non-Uniform Quantization of Neural Networks

no code implementations • 29 Apr 2018 • Chaim Baskin, Eli Schwartz, Evgenii Zheltonozhskii, Natan Liss, Raja Giryes, Alex M. Bronstein, Avi Mendelson

We present a novel method for neural network quantization that emulates a non-uniform $k$-quantile quantizer, which adapts to the distribution of the quantized parameters.

Quantization
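
The abstract above describes two ingredients: a non-uniform $k$-quantile quantizer and its emulation with uniform noise during training. A hedged sketch of both (function names, the bin-median representative, and the noise matching are assumptions, not the authors' exact scheme):

```python
import numpy as np

def quantile_quantizer(w, k=4):
    # Non-uniform k-quantile quantizer: bin edges are the k-quantiles of the
    # weights, so every bin holds roughly the same number of values; each
    # weight is mapped to its bin's representative (here, the bin median).
    edges = np.quantile(w, np.linspace(0, 1, k + 1))
    idx = np.clip(np.searchsorted(edges, w, side="right") - 1, 0, k - 1)
    reps = np.array([np.median(w[idx == b]) for b in range(k)])
    return reps[idx]

def noisy_surrogate(w, k=4, rng=None):
    # Training-time emulation in the spirit of the abstract: replace the hard
    # quantizer with additive uniform noise matched to each bin's width, so
    # gradients stay informative instead of vanishing at the rounding steps.
    rng = rng or np.random.default_rng(0)
    edges = np.quantile(w, np.linspace(0, 1, k + 1))
    widths = np.diff(edges)
    idx = np.clip(np.searchsorted(edges, w, side="right") - 1, 0, k - 1)
    return w + rng.uniform(-0.5, 0.5, size=w.shape) * widths[idx]

rng = np.random.default_rng(0)
w = rng.normal(size=10_000)
q = quantile_quantizer(w)
s = noisy_surrogate(w)
```

Because the bin edges track the empirical weight distribution, the quantizer adapts to it, unlike a uniform grid.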

Streaming Architecture for Large-Scale Quantized Neural Networks on an FPGA-Based Dataflow Platform

no code implementations • 31 Jul 2017 • Chaim Baskin, Natan Liss, Evgenii Zheltonozhskii, Alex M. Bronshtein, Avi Mendelson

Using quantized values enables the use of FPGAs to run NNs, since FPGAs are well suited to these primitives; e.g., FPGAs provide efficient support for bitwise operations and can work with arbitrary-precision representations of numbers.

General Classification
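
The bitwise-operation advantage mentioned in the abstract above is easiest to see in the XNOR-popcount trick used by binarized networks, where a dot product of two {-1, +1} vectors reduces to an XOR and a bit count. This is a generic illustration in Python, not the paper's FPGA design.

```python
def binary_dot(a_bits: int, b_bits: int, n: int) -> int:
    # XNOR-popcount dot product of two {-1, +1} vectors packed as bitmasks
    # (bit i set iff element i is +1): matching bits contribute +1 and
    # differing bits -1, so dot = n - 2 * popcount(a XOR b).
    return n - 2 * bin((a_bits ^ b_bits) & ((1 << n) - 1)).count("1")

# a = [+1, -1, +1, +1] -> bits set at positions 0, 2, 3 -> 0b1101
# b = [+1, +1, -1, +1] -> bits set at positions 0, 1, 3 -> 0b1011
print(binary_dot(0b1101, 0b1011, 4))  # -> 0
```

On an FPGA the XOR and popcount each map to a handful of LUTs, which is why quantized and binarized primitives fit the platform so well.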

Efficient non-uniform quantizer for quantized neural network targeting reconfigurable hardware

no code implementations • 27 Nov 2018 • Natan Liss, Chaim Baskin, Avi Mendelson, Alex M. Bronstein, Raja Giryes

While most works use uniform quantizers for both parameters and activations, these are not always optimal, and a non-uniform quantizer needs to be considered.

Image Classification • speech-recognition • +1

Colored Noise Injection for Training Adversarially Robust Neural Networks

no code implementations • 4 Mar 2020 • Evgenii Zheltonozhskii, Chaim Baskin, Yaniv Nemcovsky, Brian Chmiel, Avi Mendelson, Alex M. Bronstein

Even though deep learning has shown unmatched performance on various tasks, neural networks have been shown to be vulnerable to small adversarial perturbations of the input that lead to significant performance degradation.

HCM: Hardware-Aware Complexity Metric for Neural Network Architectures

no code implementations • 19 Apr 2020 • Alex Karbachevsky, Chaim Baskin, Evgenii Zheltonozhskii, Yevgeny Yermolin, Freddy Gabbay, Alex M. Bronstein, Avi Mendelson

Convolutional Neural Networks (CNNs) have become common in many fields including computer vision, speech recognition, and natural language processing.

Quantization • speech-recognition

Weakly Supervised Recovery of Semantic Attributes

no code implementations • 22 Mar 2021 • Ameen Ali, Tomer Galanti, Evgeniy Zheltonozhskiy, Chaim Baskin, Lior Wolf

We consider the problem of the extraction of semantic attributes, supervised only with classification labels.

Strategic Classification with Graph Neural Networks

1 code implementation • 31 May 2022 • Itay Eilat, Ben Finkelshtein, Chaim Baskin, Nir Rosenfeld

Strategic classification studies learning in settings where users can modify their features to obtain favorable predictions.

Classification

FBM: Fast-Bit Allocation for Mixed-Precision Quantization

no code implementations • 30 May 2022 • Moshe Kimhi, Tal Rozen, Tal Kopetz, Olya Sirkin, Avi Mendelson, Chaim Baskin

Quantized neural networks are well known for reducing latency, power consumption, and model size without significant degradation in accuracy, making them highly applicable for systems with limited resources and low power requirements.

Quantization

GoToNet: Fast Monocular Scene Exposure and Exploration

no code implementations • 13 Jun 2022 • Tom Avrech, Evgenii Zheltonozhskii, Chaim Baskin, Ehud Rivlin

In this work, we present a novel method for real-time environment exploration, whose only requirements are a visually similar dataset for pre-training, enough lighting in the scene, and an on-board forward-looking RGB camera for environmental sensing.

Single Image Test-Time Adaptation for Segmentation

no code implementations • 25 Sep 2023 • Klara Janouskova, Tamir Shor, Chaim Baskin, Jiri Matas

Test-Time Adaptation (TTA) methods improve the robustness of deep neural networks to domain shift on a variety of tasks such as image classification or segmentation.

Image Classification • Segmentation • +1

Leveraging Temporal Graph Networks Using Module Decoupling

no code implementations • 4 Oct 2023 • Or Feldman, Chaim Baskin

Modern approaches for learning on dynamic graphs have adopted the use of batches instead of applying updates one by one.
