Search Results for author: Avi Mendelson

Found 17 papers, 12 papers with code

AMED: Automatic Mixed-Precision Quantization for Edge Devices

1 code implementation • 30 May 2022 • Moshe Kimhi, Tal Rozen, Avi Mendelson, Chaim Baskin

Challenging this assumption, we argue that the optimal minimum changes as the precision changes; it is therefore better to view quantization as a random process. This lays the foundation for a different approach to quantizing neural networks: during training, the model is quantized to different precisions, bit allocation is treated as a Markov Decision Process, and an optimal bitwidth allocation is then found by measuring specified behaviors on a specific device via direct signals from the particular hardware architecture.
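
A minimal sketch of treating per-layer bit allocation as a sequential decision process, assuming toy layer names, a made-up hardware-cost table, and a simple accuracy proxy (none of which come from the paper); the actual AMED policy and hardware feedback signals are not reproduced here.

```python
import random

# Hypothetical per-layer candidate bitwidths and a toy hardware-cost table.
BITWIDTHS = [2, 4, 8]
LAYERS = ["conv1", "conv2", "conv3", "fc"]

def hardware_cost(layer, bits):
    # Placeholder: cost grows with bitwidth; a real setup would measure
    # latency/energy directly on the target edge device.
    return {"conv1": 1.0, "conv2": 2.0, "conv3": 2.0, "fc": 0.5}[layer] * bits

def accuracy_proxy(assignment):
    # Placeholder reward term: penalize very low precision.
    return sum(min(b, 8) / 8.0 for b in assignment.values()) / len(assignment)

def greedy_mdp_episode(alpha=0.1, epsilon=0.2):
    """Treat bit allocation as a sequential decision: visit layers in order
    (the 'state' is the partial assignment) and pick an action (a bitwidth)
    that trades off the accuracy proxy against the hardware cost."""
    assignment = {}
    for layer in LAYERS:
        if random.random() < epsilon:            # explore
            assignment[layer] = random.choice(BITWIDTHS)
        else:                                    # exploit: best immediate reward
            assignment[layer] = max(
                BITWIDTHS,
                key=lambda b: accuracy_proxy({**assignment, layer: b})
                - alpha * hardware_cost(layer, b),
            )
    return assignment

print(greedy_mdp_episode())
```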


Bimodal Distributed Binarized Neural Networks

1 code implementation • 5 Apr 2022 • Tal Rozen, Moshe Kimhi, Brian Chmiel, Avi Mendelson, Chaim Baskin

The proposed method consists of a training scheme that we call Weight Distribution Mimicking (WDM), which efficiently imitates the full-precision network's weight distribution in its binary counterpart.
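
As an illustration only, a distribution-mimicking penalty on the latent full-precision weights might look like the sketch below; the two-mode target around ±1 and the moment-matching form of the penalty are assumptions, not the paper's exact WDM loss.

```python
import torch

def wdm_penalty(w_fp, target_mode=1.0):
    """Toy distribution-mimicking penalty: encourage the latent full-precision
    weights to form two modes around +/- target_mode, so that sign(w)
    binarization loses little information."""
    pos, neg = w_fp[w_fp >= 0], w_fp[w_fp < 0]
    loss = torch.tensor(0.0)
    if pos.numel() > 1:
        loss = loss + (pos.mean() - target_mode) ** 2 + pos.var()
    if neg.numel() > 1:
        loss = loss + (neg.mean() + target_mode) ** 2 + neg.var()
    return loss

w = torch.randn(1024, requires_grad=True)
total = wdm_penalty(w)
total.backward()  # would be added to the task loss during BNN training
print(float(total))
```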

Binarization · Quantization

Weisfeiler and Leman Go Infinite: Spectral and Combinatorial Pre-Colorings

1 code implementation • 31 Jan 2022 • Or Feldman, Amit Boyarski, Shai Feldman, Dani Kogan, Avi Mendelson, Chaim Baskin

Two popular alternatives that offer a good trade-off between expressive power and computational efficiency are combinatorial (i.e., obtained via the Weisfeiler-Leman (WL) test) and spectral invariants.
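
For context, a minimal sketch of the combinatorial (1-WL) colour refinement referred to above; the graph, initial colouring, and relabelling scheme below are illustrative only.

```python
def wl_refinement(adj, colors, rounds=3):
    """One-dimensional Weisfeiler-Leman colour refinement: repeatedly replace
    each node's colour by a signature of (own colour, multiset of neighbour
    colours). `adj` maps node -> list of neighbours; `colors` is the initial
    colouring (e.g. uniform, or a spectral/combinatorial pre-colouring)."""
    for _ in range(rounds):
        signatures = {
            v: (colors[v], tuple(sorted(colors[u] for u in adj[v]))) for v in adj
        }
        # Relabel signatures with small integers so colours stay compact.
        relabel, new_colors = {}, {}
        for v, sig in signatures.items():
            if sig not in relabel:
                relabel[sig] = len(relabel)
            new_colors[v] = relabel[sig]
        if new_colors == colors:   # stable colouring reached
            break
        colors = new_colors
    return colors

# Tiny example: a 4-cycle with uniform initial colours.
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
print(wl_refinement(adj, {v: 0 for v in adj}))
```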

Computational Efficiency · Isomorphism Testing +1

Graph Representation Learning via Aggregation Enhancement

2 code implementations • 30 Jan 2022 • Maxim Fishman, Chaim Baskin, Evgenii Zheltonozhskii, Almog David, Ron Banner, Avi Mendelson

Graph neural networks (GNNs) have become a powerful tool for processing graph-structured data but still face challenges in effectively aggregating and propagating information between layers, which limits their performance.
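
To make the aggregation/propagation terminology concrete, here is a minimal mean-aggregation message-passing layer in NumPy; this is a generic GNN building block, not the aggregation-enhancement method proposed in the paper.

```python
import numpy as np

def gnn_layer(adj, h, w):
    """One message-passing step: each node averages its neighbours' features
    (aggregation), then applies a shared linear map + ReLU (propagation).
    adj: (n, n) 0/1 adjacency, h: (n, d) node features, w: (d, d_out) weights."""
    deg = adj.sum(axis=1, keepdims=True).clip(min=1)    # avoid divide-by-zero
    h_agg = adj @ h / deg                               # mean over neighbours
    return np.maximum(h_agg @ w, 0.0)

n, d = 5, 8
adj = (np.random.rand(n, n) < 0.4).astype(float)
np.fill_diagonal(adj, 1.0)                              # include self-loops
h = np.random.randn(n, d)
print(gnn_layer(adj, h, np.random.randn(d, 4)).shape)   # (5, 4)
```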

Data Augmentation · Graph Representation Learning +3

Contrast to Divide: Self-Supervised Pre-Training for Learning with Noisy Labels

1 code implementation • 25 Mar 2021 • Evgenii Zheltonozhskii, Chaim Baskin, Avi Mendelson, Alex M. Bronstein, Or Litany

In this paper, we identify a "warm-up obstacle": the inability of standard warm-up stages to train high-quality feature extractors and to avert memorization of noisy labels.

Learning with noisy labels · Memorization

Self-Supervised Learning for Large-Scale Unsupervised Image Clustering

1 code implementation • 24 Aug 2020 • Evgenii Zheltonozhskii, Chaim Baskin, Alex M. Bronstein, Avi Mendelson

Unsupervised learning has always been appealing to machine learning researchers and practitioners, allowing them to avoid the expensive and complicated process of labeling data.

Clustering · General Classification +4

HCM: Hardware-Aware Complexity Metric for Neural Network Architectures

no code implementations • 19 Apr 2020 • Alex Karbachevsky, Chaim Baskin, Evgenii Zheltonozhskii, Yevgeny Yermolin, Freddy Gabbay, Alex M. Bronstein, Avi Mendelson

Convolutional Neural Networks (CNNs) have become common in many fields including computer vision, speech recognition, and natural language processing.

Quantization · speech-recognition

Colored Noise Injection for Training Adversarially Robust Neural Networks

no code implementations • 4 Mar 2020 • Evgenii Zheltonozhskii, Chaim Baskin, Yaniv Nemcovsky, Brian Chmiel, Avi Mendelson, Alex M. Bronstein

Even though deep learning has shown unmatched performance on various tasks, neural networks have been shown to be vulnerable to small adversarial perturbations of the input that lead to significant performance degradation.
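
A rough sketch of the general noise-injection idea behind the title: the simple shared-plus-white mixture below only gestures at "colored" (correlated) noise, and the paper's actual correlation structure and injection points are not reproduced here.

```python
import torch

def colored_noise(shape, strength=0.1, corr=0.5):
    """Toy correlated ('colored') noise: mix a single shared component with
    per-element white noise. The learned correlation structure used in the
    paper is not reproduced here."""
    shared = torch.randn(1).expand(*shape)
    white = torch.randn(shape)
    return strength * (corr * shared + (1 - corr) * white)

def noisy_forward(model, x):
    # Inject noise at the input during training so the network sees a
    # smoothed neighbourhood of each example, which can improve robustness
    # to small adversarial perturbations.
    return model(x + colored_noise(x.shape))

model = torch.nn.Linear(16, 4)   # stand-in for a real network
x = torch.randn(8, 16)
print(noisy_forward(model, x).shape)
```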

Smoothed Inference for Adversarially-Trained Models

2 code implementations • 17 Nov 2019 • Yaniv Nemcovsky, Evgenii Zheltonozhskii, Chaim Baskin, Brian Chmiel, Maxim Fishman, Alex M. Bronstein, Avi Mendelson

In this work, we study the application of randomized smoothing as a way to improve performance on unperturbed data as well as to increase robustness to adversarial attacks.
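
For reference, plain randomized-smoothing inference looks roughly like the sketch below (averaging softmax outputs over Gaussian-perturbed copies of the input); the smoothed-inference variants studied in the paper build on this basic procedure, and the model and sigma here are placeholders.

```python
import torch

@torch.no_grad()
def smoothed_predict(model, x, sigma=0.25, n_samples=32):
    """Classify n_samples Gaussian-perturbed copies of x and average the
    softmax outputs; the smoothed classifier returns the argmax class."""
    probs = 0.0
    for _ in range(n_samples):
        noisy = x + sigma * torch.randn_like(x)
        probs = probs + torch.softmax(model(noisy), dim=-1)
    return (probs / n_samples).argmax(dim=-1)

model = torch.nn.Linear(32, 10)   # stand-in for a trained classifier
x = torch.randn(4, 32)
print(smoothed_predict(model, x))
```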

Adversarial Defense

Loss Aware Post-training Quantization

2 code implementations • 17 Nov 2019 • Yury Nahshan, Brian Chmiel, Chaim Baskin, Evgenii Zheltonozhskii, Ron Banner, Alex M. Bronstein, Avi Mendelson

We show that with more aggressive quantization, the loss landscape becomes highly non-separable with steep curvature, making the selection of quantization parameters more challenging.
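
A toy illustration of picking a clipping parameter by evaluating a loss directly rather than a purely local error metric; the grid search and the proxy loss below are hypothetical stand-ins, not the paper's optimization procedure.

```python
import numpy as np

def quantize(x, clip, bits=4):
    """Uniform symmetric quantization of x into [-clip, clip]."""
    levels = 2 ** (bits - 1) - 1
    step = clip / levels
    return np.clip(np.round(x / step), -levels, levels) * step

def pick_clip_loss_aware(w, loss_fn, clips):
    """Evaluate loss_fn for each candidate clipping value and keep the best.
    The real method optimizes the clipping parameters against the network
    loss; this grid search is only a sketch of that idea."""
    return min(clips, key=lambda c: loss_fn(quantize(w, c)))

def proxy_loss(w_q):
    # Stand-in only: in practice this would be the network's task loss
    # evaluated on calibration data, not a local reconstruction error.
    return float(np.mean((w_q - w) ** 2))

w = np.random.randn(1000)
clips = np.linspace(0.5, 3.0, 11)
print(pick_clip_loss_aware(w, proxy_loss, clips))
```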


CAT: Compression-Aware Training for bandwidth reduction

1 code implementation • 25 Sep 2019 • Chaim Baskin, Brian Chmiel, Evgenii Zheltonozhskii, Ron Banner, Alex M. Bronstein, Avi Mendelson

Our method trains the model to achieve low-entropy feature maps, which enables efficient compression at inference time using classical transform coding methods.
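
As a sketch of what a "low-entropy feature map" regularizer could look like, the snippet below penalizes the Shannon entropy of a soft histogram of activations; the bin count, bandwidth, and form of the penalty are assumptions, not the paper's CAT objective.

```python
import torch

def soft_entropy_penalty(fmap, n_bins=16):
    """Differentiable entropy proxy for a feature map: build a soft histogram
    of activation values and penalize its Shannon entropy, pushing the network
    toward low-entropy (easily compressible) feature maps."""
    x = fmap.flatten()
    centers = torch.linspace(x.min().item(), x.max().item(), n_bins)
    # Soft assignment of each activation to histogram bins.
    weights = torch.softmax(-(x[:, None] - centers[None, :]) ** 2 / 0.01, dim=1)
    p = weights.mean(dim=0) + 1e-8
    return -(p * p.log()).sum()

fmap = torch.relu(torch.randn(1, 8, 16, 16))
print(float(soft_entropy_penalty(fmap)))  # added to the task loss during training
```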


Feature Map Transform Coding for Energy-Efficient CNN Inference

1 code implementation • 26 May 2019 • Brian Chmiel, Chaim Baskin, Ron Banner, Evgenii Zheltonozhskii, Yevgeny Yermolin, Alex Karbachevsky, Alex M. Bronstein, Avi Mendelson

We analyze the performance of our approach on a variety of CNN architectures and demonstrate that an FPGA implementation of ResNet-18 with our approach results in a reduction of around 40% in the memory energy footprint compared to a quantized network, with negligible impact on accuracy.

Video Compression

Towards Learning of Filter-Level Heterogeneous Compression of Convolutional Neural Networks

2 code implementations • 22 Apr 2019 • Yochai Zur, Chaim Baskin, Evgenii Zheltonozhskii, Brian Chmiel, Itay Evron, Alex M. Bronstein, Avi Mendelson

While mainstream deep learning methods train a neural network's weights while keeping its architecture fixed, the emerging neural architecture search (NAS) techniques make the latter amenable to training as well.

Network Pruning · Neural Architecture Search +1

Efficient non-uniform quantizer for quantized neural network targeting reconfigurable hardware

no code implementations • 27 Nov 2018 • Natan Liss, Chaim Baskin, Avi Mendelson, Alex M. Bronstein, Raja Giryes

While most works use uniform quantizers for both parameters and activations, uniform quantization is not always optimal, and a non-uniform quantizer needs to be considered.

Image Classification · speech-recognition +1

UNIQ: Uniform Noise Injection for Non-Uniform Quantization of Neural Networks

no code implementations • 29 Apr 2018 • Chaim Baskin, Eli Schwartz, Evgenii Zheltonozhskii, Natan Liss, Raja Giryes, Alex M. Bronstein, Avi Mendelson

We present a novel method for neural network quantization that emulates a non-uniform $k$-quantile quantizer, which adapts to the distribution of the quantized parameters.
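
A rough NumPy sketch of a k-quantile quantizer and the uniform-noise training surrogate it suggests; the bin representatives, noise scaling, and value of k below are illustrative choices rather than the paper's exact scheme.

```python
import numpy as np

def kquantile_quantizer(w, k=16):
    """Non-uniform k-quantile quantizer: bin edges are the empirical quantiles
    of w, so every bin holds roughly the same number of weights, and each
    weight maps to its bin's representative (here the bin median)."""
    edges = np.quantile(w, np.linspace(0, 1, k + 1))
    idx = np.clip(np.searchsorted(edges, w, side="right") - 1, 0, k - 1)
    reps = np.array([np.median(w[idx == i]) if np.any(idx == i) else 0.0
                     for i in range(k)])
    return reps[idx]

def train_time_noise(w, k=16):
    # UNIQ-style idea (sketch): instead of hard quantization during training,
    # add uniform noise whose magnitude matches the local bin width.
    edges = np.quantile(w, np.linspace(0, 1, k + 1))
    idx = np.clip(np.searchsorted(edges, w, side="right") - 1, 0, k - 1)
    widths = (edges[1:] - edges[:-1])[idx]
    return w + np.random.uniform(-0.5, 0.5, size=w.shape) * widths

w = np.random.randn(10000)
print(np.unique(kquantile_quantizer(w)).size)   # at most k distinct values
print(train_time_noise(w).shape)
```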


Streaming Architecture for Large-Scale Quantized Neural Networks on an FPGA-Based Dataflow Platform

no code implementations • 31 Jul 2017 • Chaim Baskin, Natan Liss, Evgenii Zheltonozhskii, Alex M. Bronshtein, Avi Mendelson

Using quantized values enables the use of FPGAs to run NNs, since FPGAs are well suited to these primitives; e.g., FPGAs provide efficient support for bitwise operations and can work with arbitrary-precision representations of numbers.
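
To illustrate why bitwise primitives matter here: with binary {-1, +1} values, a dot product collapses to an XNOR followed by a popcount, which maps directly onto FPGA logic. Below is a small self-contained check of that identity (not taken from the paper's implementation).

```python
def binary_dot(a_bits, b_bits, n):
    """Dot product of two {-1, +1} vectors packed as n-bit integers
    (bit 1 encodes +1, bit 0 encodes -1). In hardware this is a single
    XNOR followed by a population count: dot = 2 * popcount(~(a ^ b)) - n."""
    xnor = ~(a_bits ^ b_bits) & ((1 << n) - 1)   # agreement mask
    return 2 * bin(xnor).count("1") - n

# Check against the naive +/-1 dot product on a small example.
a = [+1, -1, +1, +1, -1, -1, +1, -1]
b = [+1, +1, -1, +1, -1, +1, +1, -1]
pack = lambda v: sum((1 << i) for i, x in enumerate(v) if x > 0)
assert binary_dot(pack(a), pack(b), len(a)) == sum(x * y for x, y in zip(a, b))
print(binary_dot(pack(a), pack(b), len(a)))
```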

General Classification
