no code implementations • ECCV 2020 • Boyang Deng, JP Lewis, Timothy Jeruzalski, Gerard Pons-Moll, Geoffrey Hinton, Mohammad Norouzi, Andrea Tagliasacchi
Efficient representation of articulated objects such as human bodies is an important problem in computer vision and graphics.
7 code implementations • NA 2022 • Geoffrey Hinton
The aim of this paper is to introduce a new learning procedure for neural networks and to demonstrate that it works well enough on a few small problems to be worth further investigation.
no code implementations • 5 Dec 2022 • Kevin Clark, Kelvin Guu, Ming-Wei Chang, Panupong Pasupat, Geoffrey Hinton, Mohammad Norouzi
Dynamic evaluation of language models (LMs) adapts model parameters at test time using gradient information from previous tokens and substantially improves LM performance.
1 code implementation • 19 Oct 2022 • Renjie Liao, Simon Kornblith, Mengye Ren, David J. Fleet, Geoffrey Hinton
We revisit the challenging problem of training Gaussian-Bernoulli restricted Boltzmann machines (GRBMs), introducing two innovations.
1 code implementation • 12 Oct 2022 • Ting Chen, Lala Li, Saurabh Saxena, Geoffrey Hinton, David J. Fleet
Panoptic segmentation assigns semantic and instance ID labels to every pixel of an image.
1 code implementation • 7 Oct 2022 • Mengye Ren, Simon Kornblith, Renjie Liao, Geoffrey Hinton
Forward gradient learning computes a noisy directional gradient and is a biologically plausible alternative to backprop for learning deep neural networks.
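As a rough illustration (not the paper's code), a forward gradient can be formed by sampling a random direction, measuring the directional derivative of the loss along it, and scaling the direction by that value; the finite difference below stands in for a true forward-mode JVP, and the helper name is an assumption:

```python
import random

def forward_gradient(f, params, rng, eps=1e-6):
    """Estimate the gradient of f at `params` without backprop:
    sample a random direction v, take the directional derivative of f
    along v (a finite difference stands in for a forward-mode JVP
    here), and return it times v. The estimate is noisy but unbiased:
    its expectation over v is the true gradient."""
    v = [rng.gauss(0.0, 1.0) for _ in params]
    shifted = [p + eps * d for p, d in zip(params, v)]
    directional = (f(shifted) - f(params)) / eps
    return [directional * d for d in v]
```

Averaging many such estimates for a simple quadratic recovers the true gradient, which is what makes the noisy single-sample version usable as a learning signal.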
5 code implementations • 8 Aug 2022 • Ting Chen, Ruixiang Zhang, Geoffrey Hinton
The main idea behind our approach is to first represent the discrete data as binary bits, and then train a continuous diffusion model to model these bits as real numbers which we call analog bits.
Ranked #5 on Image Captioning on COCO
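To illustrate the analog-bits idea, here is a minimal sketch (hypothetical helper names, not the paper's implementation) of mapping integers to real-valued bits in {-1, 1} and thresholding them back:

```python
def int_to_analog_bits(x, num_bits, scale=1.0):
    """Write a non-negative integer in binary, then shift and scale
    each bit b in {0, 1} to the real value (2*b - 1) * scale, so a
    continuous diffusion model can treat the bits as real numbers."""
    bits = [(x >> i) & 1 for i in reversed(range(num_bits))]
    return [(2 * b - 1) * scale for b in bits]

def analog_bits_to_int(analog):
    """Decode generated analog bits by thresholding each at zero."""
    out = 0
    for a in analog:
        out = (out << 1) | (1 if a > 0 else 0)
    return out
```

At sampling time the model's continuous outputs need not land exactly on ±1; the thresholding step is what recovers clean discrete data.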
1 code implementation • 15 Jun 2022 • Ting Chen, Saurabh Saxena, Lala Li, Tsung-Yi Lin, David J. Fleet, Geoffrey Hinton
Although these tasks differ significantly, by formulating the output of each task as a sequence of discrete tokens with a unified interface, we show that one can train a neural network with a single model architecture and loss function on all of them, with no task-specific customization.
1 code implementation • 19 May 2022 • Shekoofeh Azizi, Laura Culp, Jan Freyberg, Basil Mustafa, Sebastien Baur, Simon Kornblith, Ting Chen, Patricia MacWilliams, S. Sara Mahdavi, Ellery Wulczyn, Boris Babenko, Megan Wilson, Aaron Loh, Po-Hsuan Cameron Chen, YuAn Liu, Pinal Bavishi, Scott Mayer McKinney, Jim Winkens, Abhijit Guha Roy, Zach Beaver, Fiona Ryan, Justin Krogue, Mozziyar Etemadi, Umesh Telang, Yun Liu, Lily Peng, Greg S. Corrado, Dale R. Webster, David Fleet, Geoffrey Hinton, Neil Houlsby, Alan Karthikesalingam, Mohammad Norouzi, Vivek Natarajan
These results suggest that REMEDIS can significantly accelerate the life-cycle of medical imaging AI development, presenting an important step forward for medical imaging AI to deliver broad impact.
6 code implementations • ICLR 2022 • Ting Chen, Saurabh Saxena, Lala Li, David J. Fleet, Geoffrey Hinton
We present Pix2Seq, a simple and generic framework for object detection.
Ranked #61 on Object Detection on COCO minival (using extra training data)
5 code implementations • 25 Feb 2021 • Geoffrey Hinton
This paper does not describe a working system; instead, it presents a single idea about representation which allows advances made by several different groups to be combined into an imaginary system called GLOM.
1 code implementation • NeurIPS 2021 • Weiwei Sun, Andrea Tagliasacchi, Boyang Deng, Sara Sabour, Soroosh Yazdani, Geoffrey Hinton, Kwang Moo Yi
We propose a self-supervised capsule architecture for 3D point clouds.
1 code implementation • ICLR 2021 • Aniruddh Raghu, Maithra Raghu, Simon Kornblith, David Duvenaud, Geoffrey Hinton
We find that commentaries can improve training speed and/or performance, and provide insights about the dataset and training process.
8 code implementations • NeurIPS 2020 • Ting Chen, Simon Kornblith, Kevin Swersky, Mohammad Norouzi, Geoffrey Hinton
The proposed semi-supervised learning algorithm can be summarized in three steps: unsupervised pretraining of a big ResNet model using SimCLRv2, supervised fine-tuning on a few labeled examples, and distillation with unlabeled examples for refining and transferring the task-specific knowledge.
6 code implementations • NeurIPS 2021 • Rishabh Agarwal, Levi Melnick, Nicholas Frosst, Xuezhou Zhang, Ben Lengerich, Rich Caruana, Geoffrey Hinton
They perform similarly to existing state-of-the-art generalized additive models in accuracy, but are more flexible because they are based on neural nets instead of boosted trees.
1 code implementation • ICML 2020 • William Chan, Chitwan Saharia, Geoffrey Hinton, Mohammad Norouzi, Navdeep Jaitly
This paper presents the Imputer, a neural sequence model that generates output sequences iteratively via imputations.
no code implementations • 18 Feb 2020 • Yao Qin, Nicholas Frosst, Colin Raffel, Garrison Cottrell, Geoffrey Hinton
There has been an ongoing cycle where stronger defenses against adversarial attacks are subsequently broken by a more advanced defense-aware attack.
78 code implementations • ICML 2020 • Ting Chen, Simon Kornblith, Mohammad Norouzi, Geoffrey Hinton
This paper presents SimCLR: a simple framework for contrastive learning of visual representations.
Ranked #4 on Contrastive Learning on imagenet-1k
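A minimal, pure-Python sketch of the NT-Xent contrastive objective used in SimCLR (illustrative only; the `nt_xent_loss` helper and loop structure are assumptions, not the released code):

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) *
                  math.sqrt(sum(b * b for b in v)))

def nt_xent_loss(embeddings, temperature=0.5):
    """NT-Xent loss over 2N embeddings, where items 2k and 2k + 1 are
    two augmented views of the same image: each view must pick out its
    partner among the other 2N - 1 embeddings via a softmax over
    temperature-scaled cosine similarities."""
    n = len(embeddings)
    total = 0.0
    for i in range(n):
        partner = i + 1 if i % 2 == 0 else i - 1
        denom = sum(math.exp(cosine(embeddings[i], embeddings[j]) / temperature)
                    for j in range(n) if j != i)
        pos = math.exp(cosine(embeddings[i], embeddings[partner]) / temperature)
        total += -math.log(pos / denom)
    return total / n
```

When paired views agree and other pairs are dissimilar, the loss is low; shuffling the pairing raises it, which is the signal the encoder is trained on.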
no code implementations • 10 Feb 2020 • Rafael Müller, Simon Kornblith, Geoffrey Hinton
By training a small "student" model to match these probabilities, it is possible to transfer most of the generalization ability of the teacher to the student, often producing a much better small model than directly training the student on the training data.
no code implementations • CVPR 2020 • Boyang Deng, Kyle Genova, Soroosh Yazdani, Sofien Bouaziz, Geoffrey Hinton, Andrea Tagliasacchi
We introduce a network architecture to represent a low dimensional family of convexes.
18 code implementations • NeurIPS 2019 • Michael R. Zhang, James Lucas, Geoffrey Hinton, Jimmy Ba
The vast majority of successful deep neural networks are trained using variants of stochastic gradient descent (SGD) algorithms.
no code implementations • ICLR 2020 • Yao Qin, Nicholas Frosst, Sara Sabour, Colin Raffel, Garrison Cottrell, Geoffrey Hinton
Then, we diagnose the adversarial examples for CapsNets and find that the success of the reconstructive attack is highly related to the visual similarity between the source and target class.
3 code implementations • NeurIPS 2019 • Rafael Müller, Simon Kornblith, Geoffrey Hinton
The generalization and learning speed of a multi-class neural network can often be significantly improved by using soft targets that are a weighted average of the hard targets and the uniform distribution over labels.
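The soft targets described above can be sketched as follows (an illustrative helper, not the paper's code):

```python
def smooth_targets(label, num_classes, alpha=0.1):
    """Soft targets as a weighted average of the one-hot ('hard')
    target and the uniform distribution over labels: the true class
    gets 1 - alpha + alpha/K, every other class gets alpha/K."""
    uniform = alpha / num_classes
    return [(1 - alpha if k == label else 0.0) + uniform
            for k in range(num_classes)]
```

Training against these targets instead of the one-hot vector is ordinary label smoothing; the smoothed distribution still sums to one.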
no code implementations • 28 May 2019 • Boyang Deng, Simon Kornblith, Geoffrey Hinton
To generalize to novel visual scenes with new viewpoints and new object poses, a visual system needs representations of the shapes of the parts of an object that are invariant to changes in viewpoint or pose.
8 code implementations • ICML 2019 • Simon Kornblith, Mohammad Norouzi, Honglak Lee, Geoffrey Hinton
We introduce a similarity index that measures the relationship between representational similarity matrices and does not suffer from this limitation.
3 code implementations • 5 Feb 2019 • Nicholas Frosst, Nicolas Papernot, Geoffrey Hinton
We explore and expand the Soft Nearest Neighbor Loss to measure the entanglement of class manifolds in representation space, i.e., how close pairs of points from the same class are relative to pairs of points from different classes.
no code implementations • 16 Nov 2018 • Nicholas Frosst, Sara Sabour, Geoffrey Hinton
In addition to being trained to classify images, the capsule model is trained to reconstruct the images from the pose parameters and identity of the correct top-level capsule.
no code implementations • ACL 2018 • Jamie Kiros, William Chan, Geoffrey Hinton
We introduce Picturebook, a large-scale lookup operation to ground language via 'snapshots' of our physical world accessed through image search.
6 code implementations • 27 Nov 2017 • Nicholas Frosst, Geoffrey Hinton
They excel when the input data is high dimensional, the relationship between the input and the output is complicated, and the number of labeled training examples is large.
2 code implementations • 23 Jan 2017 • Gabriel Pereyra, George Tucker, Jan Chorowski, Łukasz Kaiser, Geoffrey Hinton
We systematically explore regularizing neural networks by penalizing low entropy output distributions.
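The confidence penalty can be sketched as cross-entropy minus a scaled entropy bonus (illustrative helper names, assuming the model's output probabilities are given directly):

```python
import math

def confidence_penalty_loss(probs, label, beta=0.1):
    """Cross-entropy on the true label, minus beta times the entropy
    of the output distribution: low-entropy (over-confident) outputs
    are penalized, acting as a regularizer."""
    nll = -math.log(probs[label])
    entropy = -sum(p * math.log(p) for p in probs if p > 0)
    return nll - beta * entropy
```

With beta = 0 the loss reduces to the ordinary negative log-likelihood; with beta > 0, spreading some probability mass lowers the loss relative to plain NLL.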
4 code implementations • 23 Jan 2017 • Noam Shazeer, Azalia Mirhoseini, Krzysztof Maziarz, Andy Davis, Quoc Le, Geoffrey Hinton, Jeff Dean
In this work, we address these challenges and finally realize the promise of conditional computation, achieving greater than 1000x improvements in model capacity with only minor losses in computational efficiency on modern GPU clusters.
Ranked #14 on Language Modelling on One Billion Word
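The sparse gating at the heart of this conditional-computation scheme can be sketched as a top-k softmax (a simplified illustration; the paper's gating network also adds noise and load-balancing terms):

```python
import math

def top_k_gating(gate_scores, k=2):
    """Keep only the k largest gate scores, softmax-normalize them,
    and zero out the rest, so only k experts are evaluated per
    example regardless of how many experts exist in total."""
    top = sorted(range(len(gate_scores)),
                 key=lambda i: gate_scores[i], reverse=True)[:k]
    exps = {i: math.exp(gate_scores[i]) for i in top}
    z = sum(exps.values())
    return [exps[i] / z if i in exps else 0.0
            for i in range(len(gate_scores))]
```

Because the zeroed experts are never run, capacity (number of experts) can grow enormously while per-example compute stays roughly constant.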
4 code implementations • NeurIPS 2016 • Jimmy Ba, Geoffrey Hinton, Volodymyr Mnih, Joel Z. Leibo, Catalin Ionescu
Until recently, research on artificial neural networks was largely restricted to systems with only two types of variables: neural activities that represent the current or recent input, and weights that learn to capture regularities among inputs, outputs and payoffs.
57 code implementations • 9 Mar 2015 • Geoffrey Hinton, Oriol Vinyals, Jeff Dean
A very simple way to improve the performance of almost any machine learning algorithm is to train many different models on the same data and then to average their predictions.
Ranked #4 on Knowledge Distillation on ImageNet
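Temperature-based softening of the teacher's predictions, the core of this distillation recipe, can be sketched as follows (illustrative helpers, not the authors' code):

```python
import math

def softmax(logits, temperature=1.0):
    """Softmax over logits divided by a temperature."""
    exps = [math.exp(l / temperature) for l in logits]
    z = sum(exps)
    return [e / z for e in exps]

def distillation_targets(teacher_logits, temperature=2.0):
    """Soften the teacher's predictions with a temperature > 1, so
    the student also sees the relative probabilities the teacher
    assigns to wrong classes, which carry information about how the
    teacher generalizes."""
    return softmax(teacher_logits, temperature)
```

The student is then trained to match these softened targets (typically alongside a standard loss on the true labels, with the student using the same temperature).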
8 code implementations • NeurIPS 2015 • Oriol Vinyals, Lukasz Kaiser, Terry Koo, Slav Petrov, Ilya Sutskever, Geoffrey Hinton
Syntactic constituency parsing is a fundamental problem in natural language processing and has been the subject of intensive research and engineering for decades.
Ranked #22 on Constituency Parsing on Penn Treebank
no code implementations • Journal of Machine Learning Research 2014 • Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, Ruslan Salakhutdinov
The key idea is to randomly drop units (along with their connections) from the neural network during training.
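The key idea can be sketched as "inverted" dropout, where surviving units are rescaled during training so nothing changes at test time (an illustrative sketch; the paper's original formulation instead scales the weights by the keep probability at test time):

```python
import random

def dropout(activations, rate, rng=None):
    """Zero each unit with probability `rate` during training and
    scale survivors by 1 / (1 - rate), so the expected activation
    matches test time, when no units are dropped."""
    rng = rng or random.Random()
    keep = 1.0 - rate
    return [a / keep if rng.random() < keep else 0.0
            for a in activations]
```

Sampling a fresh mask on every forward pass is what trains the implicit ensemble of thinned networks the abstract alludes to.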
no code implementations • Proceedings of the 30th International Conference on Machine Learning 2013 • Ilya Sutskever, James Martens, George Dahl, Geoffrey Hinton
Deep and recurrent neural networks (DNNs and RNNs respectively) are powerful models that were considered to be almost impossible to train using stochastic gradient descent with momentum.
5 code implementations • 22 Mar 2013 • Alex Graves, Abdel-rahman Mohamed, Geoffrey Hinton
Recurrent neural networks (RNNs) are a powerful model for sequential data.
Ranked #18 on Speech Recognition on TIMIT
no code implementations • Signal Processing Magazine 2012 • Geoffrey Hinton, Li Deng, Dong Yu, George Dahl, Abdel-rahman Mohamed, Navdeep Jaitly, Andrew Senior, Vincent Vanhoucke, Patrick Nguyen, Tara Sainath, Brian Kingsbury
Most current speech recognition systems use hidden Markov models (HMMs) to deal with the temporal variability of speech and Gaussian mixture models to determine how well each state of each HMM fits a frame or a short window of frames of coefficients that represents the acoustic input.
1 code implementation • ICML: Proceedings of the 24th International Conference on Machine Learning 2007 • Ruslan Salakhutdinov, Andriy Mnih, Geoffrey Hinton
Most of the existing approaches to collaborative filtering cannot handle very large data sets.