Search Results for author: Mark Sandler

Found 35 papers, 16 papers with code

Deep Embeddings for Robust User-Based Amateur Vocal Percussion Classification

no code implementations10 Apr 2022 Alejandro Delgado, Emir Demirel, Vinod Subramanian, Charalampos Saitis, Mark Sandler

Vocal Percussion Transcription (VPT) is concerned with the automatic detection and classification of vocal percussion sound events, allowing music creators and producers to sketch drum lines on the fly.

Classification feature selection +1

Deep Conditional Representation Learning for Drum Sample Retrieval by Vocalisation

1 code implementation10 Apr 2022 Alejandro Delgado, Charalampos Saitis, Emmanouil Benetos, Mark Sandler

Imitating musical instruments with the human voice is an efficient way of communicating ideas between music producers, from sketching melody lines to clarifying desired sonorities.

Representation Learning

Fine-tuning Image Transformers using Learnable Memory

no code implementations29 Mar 2022 Mark Sandler, Andrey Zhmoginov, Max Vladymyrov, Andrew Jackson

In this paper we propose augmenting Vision Transformer models with learnable memory tokens.

HyperTransformer: Model Generation for Supervised and Semi-Supervised Few-Shot Learning

no code implementations11 Jan 2022 Andrey Zhmoginov, Mark Sandler, Max Vladymyrov

In this work we propose a HyperTransformer, a transformer-based model for few-shot learning that generates weights of a convolutional neural network (CNN) directly from support samples.

Few-Shot Learning
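The weight-generation idea above can be sketched minimally: a task representation pooled from the support samples is mapped to the parameters of a convolutional layer. All shapes below and the single linear "generator" are illustrative assumptions; the paper uses a transformer in place of the linear map.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shapes: 5 support samples, 16-dim embeddings,
# target conv layer with 3x3 kernels, 4 input and 8 output channels.
n_support, emb_dim = 5, 16
k, c_in, c_out = 3, 4, 8

support_embeddings = rng.normal(size=(n_support, emb_dim))

# In place of the paper's transformer, a single linear "generator"
# maps the pooled support representation to a flat weight vector.
generator = rng.normal(size=(emb_dim, k * k * c_in * c_out)) * 0.01

pooled = support_embeddings.mean(axis=0)   # task representation
flat_weights = pooled @ generator          # generated parameters
conv_kernel = flat_weights.reshape(c_out, c_in, k, k)

print(conv_kernel.shape)  # (8, 4, 3, 3)
```

The point of the sketch is only the data flow: support set in, CNN weights out, with no gradient steps on the generated network at test time.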

HyperTransformer: Attention-Based CNN Model Generation from Few Samples

no code implementations29 Sep 2021 Andrey Zhmoginov, Max Vladymyrov, Mark Sandler

In this work we propose a HyperTransformer, a transformer-based model that generates all weights of a CNN model directly from the support samples.

Few-Shot Learning

Compositional Models: Multi-Task Learning and Knowledge Transfer with Modular Networks

no code implementations23 Jul 2021 Andrey Zhmoginov, Dina Bashkirova, Mark Sandler

From a practical perspective, our approach allows us to: (a) reuse existing modules for learning a new task by adjusting the computation order; (b) perform unsupervised multi-source domain adaptation, illustrating that adaptation to unseen data can be achieved by only manipulating the order of pretrained modules; and (c) increase the accuracy of existing architectures on image classification tasks such as ImageNet, without any parameter increase, by reusing the same block multiple times.

Domain Adaptation Image Classification +1

Meta-Learning Bidirectional Update Rules

1 code implementation10 Apr 2021 Mark Sandler, Max Vladymyrov, Andrey Zhmoginov, Nolan Miller, Andrew Jackson, Tom Madams, Blaise Aguera y Arcas

We show that classical gradient-based backpropagation in neural networks can be seen as a special case of a two-state network where one state is used for activations and another for gradients, with update rules derived from the chain rule.

Meta-Learning
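The two-state view described above can be illustrated with a tiny network: one state carries activations forward, the other carries gradients backward, with the backward updates derived from the chain rule. The shapes, the tanh nonlinearity, and the learning rate below are illustrative choices; the paper meta-learns the update rules rather than fixing them to the chain rule.

```python
import numpy as np

rng = np.random.default_rng(1)

# Tiny two-layer network on a single input.
x = rng.normal(size=(3,))
W1 = rng.normal(size=(4, 3)) * 0.5
W2 = rng.normal(size=(1, 4)) * 0.5
y_target = np.array([1.0])

# Forward state (activations).
h = np.tanh(W1 @ x)
y = W2 @ h
loss = 0.5 * np.sum((y - y_target) ** 2)

# Backward state (gradients), derived from the chain rule.
g_y = y - y_target                  # dL/dy
g_h = (W2.T @ g_y) * (1 - h ** 2)   # dL/dh through tanh'
dW2 = np.outer(g_y, h)
dW1 = np.outer(g_h, x)

# Classical SGD step -- the special case the paper generalises.
lr = 0.1
W1 -= lr * dW1
W2 -= lr * dW2
```

In this framing, replacing the hand-derived `g_h`/`dW1`/`dW2` formulas with learned update rules recovers the paper's bidirectional setting.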

SpotPatch: Parameter-Efficient Transfer Learning for Mobile Object Detection

no code implementations4 Jan 2021 Keren Ye, Adriana Kovashka, Mark Sandler, Menglong Zhu, Andrew Howard, Marco Fornoni

In this paper we address the question: can task-specific detectors be trained and represented as a shared set of weights, plus a very small set of additional weights for each task?

Object Detection Transfer Learning
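The shared-weights-plus-small-patch question above can be sketched as a base weight matrix stored once, plus a sparse per-task delta. The 1% density and the purely additive form are illustrative assumptions, not the paper's exact parameterisation.

```python
import numpy as np

rng = np.random.default_rng(3)

# Shared backbone weights, stored once across all tasks.
shared = rng.normal(size=(64, 64))

def make_patch(shape, density=0.01):
    """A tiny task-specific patch: a sparse additive delta touching
    roughly `density` of the entries (a simplified stand-in for the
    paper's task-specific parameterisation)."""
    mask = rng.uniform(size=shape) < density
    return rng.normal(size=shape) * mask

task_patch = make_patch(shared.shape)
task_weights = shared + task_patch

# Only the nonzero patch entries need to be stored per task.
extra = np.count_nonzero(task_patch)
```

Each additional detector then costs `extra` parameters rather than a full copy of `shared`.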

Large-Scale Generative Data-Free Distillation

no code implementations10 Dec 2020 Liangchen Luo, Mark Sandler, Zi Lin, Andrey Zhmoginov, Andrew Howard

Knowledge distillation is one of the most popular and effective techniques for knowledge transfer, model compression and semi-supervised learning.

Knowledge Distillation Model Compression +1

Image segmentation via Cellular Automata

no code implementations11 Aug 2020 Mark Sandler, Andrey Zhmoginov, Liangchen Luo, Alexander Mordvintsev, Ettore Randazzo, Blaise Agüera y Arcas

The update rule is applied repeatedly in parallel to a large random subset of cells and, after convergence, is used to produce segmentation masks; errors on these masks are then back-propagated to learn the optimal update rule using standard gradient descent methods.

Semantic Segmentation
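The repeated, stochastic application of a local update rule can be sketched with a hand-written rule in place of the learned one. The grid size, the 3x3 neighbourhood-mean update, and the 50% update probability below are illustrative; in the paper the rule itself is what gets learned.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical single-channel grid of cell states (not the paper's setup).
grid = rng.uniform(size=(16, 16))
initial_std = grid.std()

def ca_step(grid, rng, update_prob=0.5):
    """One cellular-automaton step: a 3x3 neighbourhood-mean update
    applied to a random subset of cells, standing in for the learned rule."""
    padded = np.pad(grid, 1, mode="edge")
    neigh = sum(
        padded[dy:dy + grid.shape[0], dx:dx + grid.shape[1]]
        for dy in range(3) for dx in range(3)
    ) / 9.0
    mask = rng.uniform(size=grid.shape) < update_prob  # random subset
    return np.where(mask, neigh, grid)

for _ in range(50):
    grid = ca_step(grid, rng)
```

Repeated local averaging on random subsets converges towards a smooth field; a learned rule would instead be trained so that the converged state is a segmentation mask.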

Structured Multi-Hashing for Model Compression

no code implementations CVPR 2020 Elad Eban, Yair Movshovitz-Attias, Hao Wu, Mark Sandler, Andrew Poon, Yerlan Idelbayev, Miguel A. Carreira-Perpinan

Despite the success of deep neural networks (DNNs), state-of-the-art models are too large to deploy on low-resource devices or common server configurations in which multiple models are held in memory.

Model Compression

Non-discriminative data or weak model? On the relative importance of data and model resolution

no code implementations7 Sep 2019 Mark Sandler, Jonathan Baccash, Andrey Zhmoginov, Andrew Howard

We explore the question of how the resolution of the input image ("input resolution") affects the performance of a neural network when compared to the resolution of the hidden layers ("internal resolution").

Information-Bottleneck Approach to Salient Region Discovery

no code implementations22 Jul 2019 Andrey Zhmoginov, Ian Fischer, Mark Sandler

We propose a new method for learning image attention masks in a semi-supervised setting based on the Information Bottleneck principle.

Efficient On-line Computation of Visibility Graphs

1 code implementation8 May 2019 Delia Fano Yela, Florian Thalmann, Vincenzo Nicosia, Dan Stowell, Mark Sandler

The empirical evidence suggests the proposed method offers an on-line solution for computing visibility graphs at no additional computational cost.

Data Structures and Algorithms
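For context, a natural visibility graph connects two samples of a time series when the straight line between them clears every intermediate sample. The naive quadratic construction below is the standard offline baseline, not the paper's on-line method.

```python
def visibility_edges(series):
    """Naive natural-visibility graph: samples i < j are connected
    if every intermediate sample lies strictly below the straight
    line joining (i, series[i]) and (j, series[j])."""
    n = len(series)
    edges = set()
    for i in range(n):
        for j in range(i + 1, n):
            visible = all(
                series[k] < series[i]
                + (series[j] - series[i]) * (k - i) / (j - i)
                for k in range(i + 1, j)
            )
            if visible:
                edges.add((i, j))
    return edges

# A peak blocks visibility between the samples on either side of it.
print(visibility_edges([1.0, 3.0, 1.0]))
```

An on-line method must instead update this edge set incrementally as each new sample arrives, without revisiting all earlier pairs.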

Visibility graphs for robust harmonic similarity measures between audio spectra

1 code implementation5 Mar 2019 Delia Fano Yela, Dan Stowell, Mark Sandler

We present experiments demonstrating the utility of this distance measure for real and synthesised audio data.

Sound Audio and Speech Processing

MnasNet: Platform-Aware Neural Architecture Search for Mobile

16 code implementations CVPR 2019 Mingxing Tan, Bo Chen, Ruoming Pang, Vijay Vasudevan, Mark Sandler, Andrew Howard, Quoc V. Le

In this paper, we propose an automated mobile neural architecture search (MNAS) approach, which explicitly incorporates model latency into the main objective so that the search can identify a model that achieves a good trade-off between accuracy and latency.

Ranked #8 on Real-Time Object Detection on COCO (using extra training data)

Image Classification Neural Architecture Search +1
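The latency-aware objective can be sketched as a scalar reward in the spirit of the paper's soft constraint: accuracy scaled by `(latency / target) ** w` with a small negative exponent. The target latency and the accuracy/latency numbers below are made up for illustration.

```python
def mnas_reward(accuracy, latency_ms, target_ms=80.0, w=-0.07):
    """Soft latency-aware objective: accuracy scaled by
    (latency / target)^w, so models slower than the target are
    penalised and faster ones are slightly rewarded."""
    return accuracy * (latency_ms / target_ms) ** w

fast = mnas_reward(0.74, 60.0)   # under budget: small bonus
slow = mnas_reward(0.76, 120.0)  # over budget: penalised
```

A search controller maximising this reward can therefore trade a little accuracy for meeting the latency budget, rather than treating latency as a hard filter.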

NetAdapt: Platform-Aware Neural Network Adaptation for Mobile Applications

4 code implementations ECCV 2018 Tien-Ju Yang, Andrew Howard, Bo Chen, Xiao Zhang, Alec Go, Mark Sandler, Vivienne Sze, Hartwig Adam

This work proposes an algorithm, called NetAdapt, that automatically adapts a pre-trained deep neural network to a mobile platform given a resource budget.

Image Classification

MobileNetV2: Inverted Residuals and Linear Bottlenecks

117 code implementations CVPR 2018 Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, Liang-Chieh Chen

In this paper we describe a new mobile architecture, MobileNetV2, that improves the state of the art performance of mobile models on multiple tasks and benchmarks as well as across a spectrum of different model sizes.

Image Classification Object Detection +3
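The inverted residual structure (1x1 expansion, 3x3 depthwise convolution, 1x1 linear projection, with a shortcut when shapes allow) can be summarised as channel bookkeeping. The expansion factor of 6 matches the paper's default, but the function itself is only an illustrative sketch, not an implementation of the block.

```python
def inverted_residual_shapes(c_in, expansion=6, c_out=None, stride=1):
    """Channel bookkeeping for a MobileNetV2-style block:
    1x1 expand -> 3x3 depthwise -> 1x1 linear projection,
    with a residual connection when stride == 1 and c_in == c_out."""
    c_out = c_in if c_out is None else c_out
    expanded = c_in * expansion     # pointwise expansion
    depthwise = expanded            # depthwise keeps the channel count
    has_residual = (stride == 1 and c_in == c_out)
    return {"expand": expanded, "depthwise": depthwise,
            "project": c_out, "residual": has_residual}

print(inverted_residual_shapes(24))  # expand to 144, project back to 24
```

The "inverted" part is visible in the numbers: the wide representation lives only inside the block, while the residual connects the narrow bottlenecks.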

CycleGAN, a Master of Steganography

no code implementations8 Dec 2017 Casey Chu, Andrey Zhmoginov, Mark Sandler

CycleGAN (Zhu et al. 2017) is one recent successful approach to learning a transformation between two image distributions.

A Tutorial on Deep Learning for Music Information Retrieval

2 code implementations13 Sep 2017 Keunwoo Choi, György Fazekas, Kyunghyun Cho, Mark Sandler

Following their success in Computer Vision and other areas, deep learning techniques have recently become widely adopted in Music Information Retrieval (MIR) research.

Information Retrieval Music Information Retrieval

A Comparison of Audio Signal Preprocessing Methods for Deep Neural Networks on Music Tagging

1 code implementation6 Sep 2017 Keunwoo Choi, György Fazekas, Kyunghyun Cho, Mark Sandler

In this paper, we empirically investigate the effect of audio preprocessing on music tagging with deep neural networks.

Music Tagging

Transfer learning for music classification and regression tasks

3 code implementations27 Mar 2017 Keunwoo Choi, György Fazekas, Mark Sandler, Kyunghyun Cho

In this paper, we present a transfer learning approach for music classification and regression tasks.

Classification General Classification +3

The Power of Sparsity in Convolutional Neural Networks

no code implementations21 Feb 2017 Soravit Changpinyo, Mark Sandler, Andrey Zhmoginov

Deep convolutional networks are well-known for their high computational and memory demands.

Towards Music Captioning: Generating Music Playlist Descriptions

no code implementations17 Aug 2016 Keunwoo Choi, George Fazekas, Brian McFee, Kyunghyun Cho, Mark Sandler

Descriptions are often provided alongside recommendations to aid users' discovery.

Explaining Deep Convolutional Neural Networks on Music Classification

1 code implementation8 Jul 2016 Keunwoo Choi, George Fazekas, Mark Sandler

Deep convolutional neural networks (CNNs) have been actively adopted in the field of music information retrieval, e.g. genre classification, mood detection, and chord recognition.

Chord Recognition Classification +5

Inverting face embeddings with convolutional neural networks

1 code implementation14 Jun 2016 Andrey Zhmoginov, Mark Sandler

Deep neural networks have dramatically advanced the state of the art for many areas of machine learning.

Face Transfer

Towards Playlist Generation Algorithms Using RNNs Trained on Within-Track Transitions

no code implementations7 Jun 2016 Keunwoo Choi, George Fazekas, Mark Sandler

We introduce a novel playlist generation algorithm that focuses on the quality of transitions using a recurrent neural network (RNN).

Automatic tagging using deep convolutional neural networks

10 code implementations1 Jun 2016 Keunwoo Choi, George Fazekas, Mark Sandler

We present a content-based automatic music tagging algorithm using fully convolutional neural networks (FCNs).

Music Tagging

Text-based LSTM networks for Automatic Music Composition

3 code implementations18 Apr 2016 Keunwoo Choi, George Fazekas, Mark Sandler

In this paper, we introduce new methods and discuss results of text-based LSTM (Long Short-Term Memory) networks for automatic music composition.
