2 code implementations • 5 Oct 2022 • Ruben Villegas, Mohammad Babaeizadeh, Pieter-Jan Kindermans, Hernan Moraldo, Han Zhang, Mohammad Taghi Saffar, Santiago Castro, Julius Kunze, Dumitru Erhan
To the best of our knowledge, this is the first paper to study generating videos from time-variable prompts.
Ranked #4 on Video Prediction on BAIR Robot Pushing
no code implementations • 25 Apr 2022 • Pieter-Jan Kindermans, Charles Staats
Qualitatively, the remaining adversarial errors are similar to test errors on difficult examples.
1 code implementation • 15 Apr 2022 • Chengrun Yang, Gabriel Bender, Hanxiao Liu, Pieter-Jan Kindermans, Madeleine Udell, Yifeng Lu, Quoc Le, Da Huang
The best neural architecture for a given machine learning problem depends on many factors: not only on the complexity and structure of the dataset, but also on resource constraints, including latency, compute, and energy consumption.
no code implementations • 18 Aug 2020 • Grace Chu, Okan Arikan, Gabriel Bender, Weijun Wang, Achille Brighton, Pieter-Jan Kindermans, Hanxiao Liu, Berkin Akin, Suyog Gupta, Andrew Howard
Hardware-aware neural architecture design has predominantly focused on optimizing model performance on a single hardware platform and on model development complexity, while another important factor, model deployment complexity, has been largely ignored.
1 code implementation • CVPR 2020 • Gabriel Bender, Hanxiao Liu, Bo Chen, Grace Chu, Shuyang Cheng, Pieter-Jan Kindermans, Quoc Le
Efficient Neural Architecture Search methods based on weight sharing have shown good promise in democratizing Neural Architecture Search for computer vision models.
4 code implementations • CVPR 2021 • Yunyang Xiong, Hanxiao Liu, Suyog Gupta, Berkin Akin, Gabriel Bender, Yongzhe Wang, Pieter-Jan Kindermans, Mingxing Tan, Vikas Singh, Bo Chen
By incorporating regular convolutions in the search space and directly optimizing the network architectures for object detection, we obtain a family of object detection models, MobileDets, that achieve state-of-the-art results across mobile accelerators.
1 code implementation • ECCV 2020 • Jiahui Yu, Pengchong Jin, Hanxiao Liu, Gabriel Bender, Pieter-Jan Kindermans, Mingxing Tan, Thomas Huang, Xiaodan Song, Ruoming Pang, Quoc Le
Without extra retraining or post-processing steps, we are able to train a single set of shared weights on ImageNet and use these weights to obtain child models whose sizes range from 200 to 1000 MFLOPs.
Ranked #30 on Neural Architecture Search on ImageNet
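The idea of training one set of shared weights and slicing out child models of different sizes can be illustrated with a toy example. This is a minimal sketch of channel-level weight sharing, assuming a simple prefix-slicing scheme; the function name and slicing rule are illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np

# Shared weights for the largest ("full-width") dense layer of the supernet.
rng = np.random.default_rng(0)
full_weight = rng.normal(size=(64, 128))  # (in_features, out_features)

def child_layer(weight, width_fraction):
    """Slice a narrower child layer out of the shared weight matrix.

    No retraining is needed: the child simply reuses a prefix of the
    parent's output channels, so all children share the same parameters.
    """
    out = int(weight.shape[1] * width_fraction)
    return weight[:, :out]

small = child_layer(full_weight, 0.25)  # shape (64, 32)
large = child_layer(full_weight, 1.0)   # shape (64, 128)
# The small child's weights are literally a sub-array of the large child's,
# which is what lets one training run serve many model sizes.
```

In practice the child models differ in depth, width, and resolution, not just one layer's width, but the parameter-sharing principle is the same.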
2 code implementations • ECCV 2020 • Wei Wen, Hanxiao Liu, Hai Li, Yiran Chen, Gabriel Bender, Pieter-Jan Kindermans
First we train N random architectures to generate N (architecture, validation accuracy) pairs and use them to train a regression model that predicts accuracy based on the architecture.
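The two-step workflow above (sample N architectures, then fit an accuracy predictor) can be sketched end to end. This is a hypothetical toy version: the three-field architecture encoding and the simulated accuracy function stand in for real training runs, and a simple least-squares fit stands in for whatever regression model is actually used:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200

# Step 1: sample N random architectures. Toy encoding per architecture:
# (depth, width multiplier, kernel size) - illustrative fields only.
X = np.column_stack([
    rng.integers(2, 9, N).astype(float),
    rng.choice([0.5, 1.0, 2.0], N),
    rng.choice([3.0, 5.0], N),
])

# Simulated validation accuracy (a stand-in for actually training each model):
# deeper and wider networks score slightly higher, plus a little noise.
y = 0.6 + 0.02 * X[:, 0] + 0.05 * X[:, 1] + rng.normal(0.0, 0.01, N)

# Step 2: fit the accuracy predictor - linear regression with a bias term.
A = np.column_stack([X, np.ones(N)])
w, *_ = np.linalg.lstsq(A, y, rcond=None)

# The predictor can now rank unseen candidate architectures cheaply,
# without training any of them.
candidates = np.array([[3, 0.5, 3], [8, 2.0, 5]], dtype=float)
preds = np.column_stack([candidates, np.ones(len(candidates))]) @ w
best = candidates[np.argmax(preds)]
```

The payoff is that evaluating a candidate costs one matrix-vector product instead of a full training run, so far more of the search space can be scored.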
no code implementations • 25 Sep 2019 • Jiahui Yu, Pengchong Jin, Hanxiao Liu, Gabriel Bender, Pieter-Jan Kindermans, Mingxing Tan, Thomas Huang, Xiaodan Song, Quoc Le
In this work, we propose BigNAS, an approach that simplifies this workflow and scales up neural architecture search to target a wide range of model sizes simultaneously.
1 code implementation • 13 Aug 2018 • Maximilian Alber, Sebastian Lapuschkin, Philipp Seegerer, Miriam Hägele, Kristof T. Schütt, Grégoire Montavon, Wojciech Samek, Klaus-Robert Müller, Sven Dähne, Pieter-Jan Kindermans
The presented library, iNNvestigate, addresses this by providing a common interface and out-of-the-box implementations for many analysis methods, including the reference implementations for PatternNet and PatternAttribution as well as for LRP methods.
no code implementations • 8 Aug 2018 • Maximilian Alber, Irwan Bello, Barret Zoph, Pieter-Jan Kindermans, Prajit Ramachandran, Quoc Le
The back-propagation algorithm is the cornerstone of deep learning.
no code implementations • ICML 2018 • Gabriel Bender, Pieter-Jan Kindermans, Barret Zoph, Vijay Vasudevan, Quoc Le
There is growing interest in automating neural network architecture design.
3 code implementations • NeurIPS 2019 • Sara Hooker, Dumitru Erhan, Pieter-Jan Kindermans, Been Kim
We propose an empirical measure of the approximate accuracy of feature importance estimates in deep neural networks.
5 code implementations • J. Chem. Phys. 2017 • Kristof T. Schütt, Huziel E. Sauceda, Pieter-Jan Kindermans, Alexandre Tkatchenko, Klaus-Robert Müller
Deep learning has led to a paradigm shift in artificial intelligence, including web, text and image search, speech recognition, as well as bioinformatics, with growing impact in chemical physics.
Ranked #6 on Formation Energy on Materials Project
no code implementations • NeurIPS 2017 • Maximilian Alber, Pieter-Jan Kindermans, Kristof Schütt, Klaus-Robert Müller, Fei Sha
Kernel machines as well as neural networks possess universal function approximation properties.
1 code implementation • ICLR 2018 • Pieter-Jan Kindermans, Sara Hooker, Julius Adebayo, Maximilian Alber, Kristof T. Schütt, Sven Dähne, Dumitru Erhan, Been Kim
Saliency methods aim to explain the predictions of deep neural networks.
3 code implementations • ICLR 2018 • Samuel L. Smith, Pieter-Jan Kindermans, Chris Ying, Quoc V. Le
We can further reduce the number of parameter updates by increasing the learning rate $\epsilon$ and scaling the batch size $B \propto \epsilon$.
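The linear scaling relation $B \propto \epsilon$ above translates directly into a small helper. A minimal sketch, with a hypothetical function name and example values not taken from the paper:

```python
def scaled_batch_size(base_batch: int, base_lr: float, new_lr: float) -> int:
    """Scale the batch size B proportionally to the learning rate epsilon,
    keeping the ratio B / epsilon constant (B proportional to epsilon)."""
    return round(base_batch * new_lr / base_lr)

# Doubling the learning rate doubles the batch size:
scaled_batch_size(256, 0.1, 0.2)  # -> 512
# Quadrupling it quadruples the batch size:
scaled_batch_size(256, 0.1, 0.4)  # -> 1024
```

Each such increase in $B$ halves (or quarters) the number of parameter updates per epoch, which is the source of the training-time savings.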
5 code implementations • NeurIPS 2017 • Kristof T. Schütt, Pieter-Jan Kindermans, Huziel E. Sauceda, Stefan Chmiela, Alexandre Tkatchenko, Klaus-Robert Müller
Deep learning has the potential to revolutionize quantum chemistry as it is ideally suited to learn representations for structured data and speed up the exploration of chemical space.
Ranked #1 on Time Series on QM9
3 code implementations • ICLR 2018 • Pieter-Jan Kindermans, Kristof T. Schütt, Maximilian Alber, Klaus-Robert Müller, Dumitru Erhan, Been Kim, Sven Dähne
We show that these methods do not produce the theoretically correct explanation for a linear model.
no code implementations • 22 Nov 2016 • Pieter-Jan Kindermans, Kristof Schütt, Klaus-Robert Müller, Sven Dähne
Understanding neural networks is becoming increasingly important.
no code implementations • NeurIPS 2012 • Pieter-Jan Kindermans, Hannes Verschore, David Verstraeten, Benjamin Schrauwen
The usability of Brain Computer Interfaces (BCI) based on the P300 speller is severely hindered by the need for long training times and many repetitions of the same stimulus.