Search Results for author: Pieter-Jan Kindermans

Found 21 papers, 13 papers with code

When adversarial examples are excusable

no code implementations • 25 Apr 2022 • Pieter-Jan Kindermans, Charles Staats

Qualitatively, the remaining adversarial errors are similar to test errors on difficult examples.

TabNAS: Rejection Sampling for Neural Architecture Search on Tabular Datasets

1 code implementation • 15 Apr 2022 • Chengrun Yang, Gabriel Bender, Hanxiao Liu, Pieter-Jan Kindermans, Madeleine Udell, Yifeng Lu, Quoc Le, Da Huang

The best neural architecture for a given machine learning problem depends on many factors: not only on the complexity and structure of the dataset, but also on resource constraints, including latency, compute, and energy consumption.

Image Retrieval • Neural Architecture Search +1
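
The titular rejection-sampling step can be illustrated with a minimal sketch (the search space and the `latency_of` cost model below are hypothetical stand-ins, not the paper's actual implementation): candidate architectures are sampled and any that violate the resource budget are rejected before they are ever evaluated.

```python
import random

# Hypothetical tabular search space: number of units per hidden layer.
SEARCH_SPACE = [32, 64, 128, 256]
NUM_LAYERS = 3
LATENCY_BUDGET = 1.0  # arbitrary units

def latency_of(arch):
    # Stand-in cost model: latency grows with the total unit count.
    return sum(arch) / 300.0

def sample_architecture():
    return [random.choice(SEARCH_SPACE) for _ in range(NUM_LAYERS)]

def rejection_sample(max_tries=1000):
    """Draw architectures until one satisfies the resource constraint."""
    for _ in range(max_tries):
        arch = sample_architecture()
        if latency_of(arch) <= LATENCY_BUDGET:
            return arch  # accepted: feasible under the budget
    raise RuntimeError("no feasible architecture found")

print(rejection_sample())
```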

Discovering Multi-Hardware Mobile Models via Architecture Search

no code implementations • 18 Aug 2020 • Grace Chu, Okan Arikan, Gabriel Bender, Weijun Wang, Achille Brighton, Pieter-Jan Kindermans, Hanxiao Liu, Berkin Akin, Suyog Gupta, Andrew Howard

Hardware-aware neural architecture design has predominantly focused on optimizing model performance on a single hardware platform and on reducing model development complexity, while another important factor, model deployment complexity, has been largely ignored.

Neural Architecture Search

Can weight sharing outperform random architecture search? An investigation with TuNAS

1 code implementation • CVPR 2020 • Gabriel Bender, Hanxiao Liu, Bo Chen, Grace Chu, Shuyang Cheng, Pieter-Jan Kindermans, Quoc Le

Efficient Neural Architecture Search methods based on weight sharing have shown promise in democratizing Neural Architecture Search for computer vision models.

Image Classification • Neural Architecture Search
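
The random-architecture-search baseline that the title asks about is simple to sketch; the `evaluate` function below is a stand-in for training each sampled architecture and measuring its validation accuracy, and the toy search space is hypothetical.

```python
import random

def random_search(search_space, evaluate, num_samples=100):
    """Baseline: sample architectures uniformly, keep the best one.

    `evaluate` stands in for training a sampled architecture and
    returning its validation accuracy.
    """
    best_arch, best_acc = None, float("-inf")
    for _ in range(num_samples):
        arch = {k: random.choice(v) for k, v in search_space.items()}
        acc = evaluate(arch)
        if acc > best_acc:
            best_arch, best_acc = arch, acc
    return best_arch, best_acc

# Toy usage with a synthetic objective standing in for trained accuracy.
space = {"depth": [2, 3, 4], "width": [32, 64, 128], "kernel": [3, 5]}
toy_eval = lambda a: a["depth"] * 0.1 + a["width"] / 256 - a["kernel"] * 0.01
print(random_search(space, toy_eval, num_samples=50))
```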

MobileDets: Searching for Object Detection Architectures for Mobile Accelerators

4 code implementations • CVPR 2021 • Yunyang Xiong, Hanxiao Liu, Suyog Gupta, Berkin Akin, Gabriel Bender, Yongzhe Wang, Pieter-Jan Kindermans, Mingxing Tan, Vikas Singh, Bo Chen

By incorporating regular convolutions in the search space and directly optimizing the network architectures for object detection, we obtain a family of object detection models, MobileDets, that achieve state-of-the-art results across mobile accelerators.

Neural Architecture Search • Object Detection +2

BigNAS: Scaling Up Neural Architecture Search with Big Single-Stage Models

1 code implementation • ECCV 2020 • Jiahui Yu, Pengchong Jin, Hanxiao Liu, Gabriel Bender, Pieter-Jan Kindermans, Mingxing Tan, Thomas Huang, Xiaodan Song, Ruoming Pang, Quoc Le

Without extra retraining or post-processing steps, we are able to train a single set of shared weights on ImageNet and use these weights to obtain child models whose sizes range from 200 to 1000 MFLOPs.

Neural Architecture Search
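
A rough sketch of how child models can be carved out of a single-stage model's shared weights without retraining, assuming (as is common in weight-shared NAS, and a simplification of the paper's method) that each child uses the leading channels of every layer:

```python
import numpy as np

# Shared "supernet" weights for one dense layer (max width 1024 -> 1024).
W_super = np.random.randn(1024, 1024).astype(np.float32)

def child_weights(w, in_width, out_width):
    """Slice a child's weights out of the shared tensor, no retraining.

    Assumption for this sketch: children take the leading channels of
    each layer, so smaller models are prefixes of the full weights.
    """
    return w[:in_width, :out_width]

small = child_weights(W_super, 256, 256)    # lightweight child
large = child_weights(W_super, 1024, 1024)  # full-size child
print(small.shape, large.shape)
```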

Neural Predictor for Neural Architecture Search

2 code implementations • ECCV 2020 • Wei Wen, Hanxiao Liu, Hai Li, Yiran Chen, Gabriel Bender, Pieter-Jan Kindermans

First we train N random architectures to generate N (architecture, validation accuracy) pairs and use them to train a regression model that predicts accuracy based on the architecture.

Neural Architecture Search • Regression
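
The two-step procedure maps directly to code. Below is a minimal sketch with a synthetic fixed-length architecture encoding and synthetic accuracies; the paper's actual encoding and predictor model differ.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Stand-in: each architecture is encoded as a fixed-length feature vector
# (e.g. one-hot operation choices); accuracies here are synthetic.
N = 100
archs = rng.random((N, 16))
accs = archs @ rng.random(16) + 0.05 * rng.standard_normal(N)

# Step 1: fit a regression model on the N (architecture, accuracy) pairs.
predictor = RandomForestRegressor(n_estimators=100).fit(archs, accs)

# Step 2: rank a large pool of unseen candidates by predicted accuracy,
# then actually train and validate only the most promising ones.
pool = rng.random((10_000, 16))
top = np.argsort(predictor.predict(pool))[::-1][:10]
print("candidates worth training:", top)
```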

Scaling Up Neural Architecture Search with Big Single-Stage Models

no code implementations • 25 Sep 2019 • Jiahui Yu, Pengchong Jin, Hanxiao Liu, Gabriel Bender, Pieter-Jan Kindermans, Mingxing Tan, Thomas Huang, Xiaodan Song, Quoc Le

In this work, we propose BigNAS, an approach that simplifies this workflow and scales up neural architecture search to target a wide range of model sizes simultaneously.

Neural Architecture Search

iNNvestigate neural networks!

1 code implementation • 13 Aug 2018 • Maximilian Alber, Sebastian Lapuschkin, Philipp Seegerer, Miriam Hägele, Kristof T. Schütt, Grégoire Montavon, Wojciech Samek, Klaus-Robert Müller, Sven Dähne, Pieter-Jan Kindermans

The presented library iNNvestigate addresses this by providing a common interface and out-of-the-box implementations for many analysis methods, including the reference implementations for PatternNet and PatternAttribution as well as for LRP methods.

Interpretable Machine Learning
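
Based on the library's documented common interface, usage looks roughly like this; the tiny Keras model is a stand-in for any classifier, and `"lrp.epsilon"` is one of the analyzer names the library provides.

```python
import numpy as np
import innvestigate
import innvestigate.utils as iutils
from tensorflow import keras

# Tiny stand-in classifier; any Keras classification model works the same way.
model = keras.Sequential([
    keras.layers.Dense(32, activation="relu", input_shape=(64,)),
    keras.layers.Dense(10, activation="softmax"),
])

# Most analyzers expect pre-softmax scores, so strip the softmax first.
model_wo_sm = iutils.model_wo_softmax(model)

# The common interface: name a method, get an analyzer.
analyzer = innvestigate.create_analyzer("lrp.epsilon", model_wo_sm)

x = np.random.rand(1, 64)
analysis = analyzer.analyze(x)  # relevance map with the input's shape
print(analysis.shape)
```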

Backprop Evolution

no code implementations • 8 Aug 2018 • Maximilian Alber, Irwan Bello, Barret Zoph, Pieter-Jan Kindermans, Prajit Ramachandran, Quoc Le

The back-propagation algorithm is the cornerstone of deep learning.
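
For reference, the vanilla algorithm such work starts from, plain backpropagation through a two-layer network, fits in a few lines. This is a generic textbook sketch on toy data, not the paper's evolved variants.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two-layer network with a quadratic loss on toy data.
x = rng.standard_normal((32, 4))
y = rng.standard_normal((32, 1))
W1, W2 = rng.standard_normal((4, 8)), rng.standard_normal((8, 1))

for step in range(100):
    # Forward pass.
    h = np.maximum(0.0, x @ W1)        # ReLU hidden layer
    pred = h @ W2
    loss = ((pred - y) ** 2).mean()

    # Backward pass: propagate the error signal layer by layer.
    g_pred = 2 * (pred - y) / len(y)
    g_W2 = h.T @ g_pred
    g_h = g_pred @ W2.T
    g_W1 = x.T @ (g_h * (h > 0))       # gate the gradient through the ReLU

    W1 -= 0.01 * g_W1
    W2 -= 0.01 * g_W2

print(f"final loss: {loss:.4f}")
```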

SchNet - a deep learning architecture for molecules and materials

5 code implementations • J. Chem. Phys. 2017 • Kristof T. Schütt, Huziel E. Sauceda, Pieter-Jan Kindermans, Alexandre Tkatchenko, Klaus-Robert Müller

Deep learning has led to a paradigm shift in artificial intelligence, including web, text, and image search, speech recognition, and bioinformatics, with growing impact in chemical physics.

Formation Energy • Chemical Physics • Materials Science

Don't Decay the Learning Rate, Increase the Batch Size

3 code implementations • ICLR 2018 • Samuel L. Smith, Pieter-Jan Kindermans, Chris Ying, Quoc V. Le

We can further reduce the number of parameter updates by increasing the learning rate $\epsilon$ and scaling the batch size $B \propto \epsilon$.
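
The recipe amounts to replacing each learning-rate decay step with a proportional batch-size increase. A schematic sketch with arbitrary initial values:

```python
# Instead of decaying the learning rate epsilon at each milestone, keep it
# fixed and grow the batch size B in proportion (B scales with epsilon):
# wherever a schedule would multiply epsilon by 1/k, multiply B by k instead.

epsilon = 0.1          # fixed learning rate
batch_size = 128       # initial batch size
decay_factor = 5       # would normally shrink epsilon at each milestone

for milestone in range(3):
    print(f"phase {milestone}: lr={epsilon}, batch={batch_size}")
    # Conventional schedule: epsilon /= decay_factor
    # Equivalent schedule from the paper: scale B up instead.
    batch_size *= decay_factor
```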

SchNet: A continuous-filter convolutional neural network for modeling quantum interactions

5 code implementations • NeurIPS 2017 • Kristof T. Schütt, Pieter-Jan Kindermans, Huziel E. Sauceda, Stefan Chmiela, Alexandre Tkatchenko, Klaus-Robert Müller

Deep learning has the potential to revolutionize quantum chemistry as it is ideally suited to learn representations for structured data and speed up the exploration of chemical space.

Ranked #1 on Time Series on QM9

Formation Energy • Time Series +1
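
The continuous-filter convolution named in the title can be sketched as a single simplified layer: filters are generated from interatomic distances, so atoms need not sit on a grid. Real SchNet uses learned atom embeddings and a nonlinear filter-generating network rather than the single linear map below.

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf_expand(d, centers, gamma=10.0):
    """Expand a distance into radial basis features (SchNet-style)."""
    return np.exp(-gamma * (d - centers) ** 2)

# Toy system: 4 atoms, 8 feature channels, 16 RBF centers.
n_atoms, n_feat, n_rbf = 4, 8, 16
x = rng.standard_normal((n_atoms, n_feat))          # atom features
pos = rng.standard_normal((n_atoms, 3))             # atom positions
centers = np.linspace(0.0, 3.0, n_rbf)
W_filter = rng.standard_normal((n_rbf, n_feat))     # filter generator

def cfconv(x, pos):
    """Continuous-filter convolution: per-pair filters are generated
    from interatomic distances and applied elementwise to neighbors."""
    out = np.zeros_like(x)
    for i in range(n_atoms):
        for j in range(n_atoms):
            if i == j:
                continue
            d = np.linalg.norm(pos[i] - pos[j])
            filt = rbf_expand(d, centers) @ W_filter  # per-channel filter
            out[i] += x[j] * filt                     # elementwise product
    return out

print(cfconv(x, pos).shape)
```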

A P300 BCI for the Masses: Prior Information Enables Instant Unsupervised Spelling

no code implementations • NeurIPS 2012 • Pieter-Jan Kindermans, Hannes Verschore, David Verstraeten, Benjamin Schrauwen

The usability of Brain-Computer Interfaces (BCIs) based on the P300 speller is severely hindered by the need for long training times and many repetitions of the same stimulus.

Transfer Learning
