Search Results for author: Ruslan Salakhutdinov

Found 197 papers, 105 papers with code

DIME: Fine-grained Interpretations of Multimodal Models via Disentangled Local Explanations

1 code implementation 3 Mar 2022 Yiwei Lyu, Paul Pu Liang, Zihao Deng, Ruslan Salakhutdinov, Louis-Philippe Morency

The ability for a human to understand an Artificial Intelligence (AI) model's decision-making process is critical in enabling stakeholders to visualize model behavior, perform model debugging, promote trust in AI models, and assist in collaborative human-AI decision-making.

Decision Making Disentanglement +1

HighMMT: Towards Modality and Task Generalization for High-Modality Representation Learning

1 code implementation 2 Mar 2022 Paul Pu Liang, Yiwei Lyu, Xiang Fan, Shentong Mo, Dani Yogatama, Louis-Philippe Morency, Ruslan Salakhutdinov

Learning multimodal representations involves discovering correspondences and integrating information from multiple heterogeneous sources of data.

Representation Learning Time Series +1

Conditional Contrastive Learning with Kernel

1 code implementation ICLR 2022 Yao-Hung Hubert Tsai, Tianqin Li, Martin Q. Ma, Han Zhao, Kun Zhang, Louis-Philippe Morency, Ruslan Salakhutdinov

Conditional contrastive learning frameworks consider the conditional sampling procedure that constructs positive or negative data pairs conditioned on specific variables.

Contrastive Learning

C-Planning: An Automatic Curriculum for Learning Goal-Reaching Tasks

1 code implementation ICLR 2022 Tianjun Zhang, Benjamin Eysenbach, Ruslan Salakhutdinov, Sergey Levine, Joseph E. Gonzalez

Goal-conditioned reinforcement learning (RL) can solve tasks in a wide range of domains, including navigation and manipulation, but learning to reach distant goals remains a central challenge to the field.

ConditionalQA: A Complex Reading Comprehension Dataset with Conditional Answers

1 code implementation ACL 2022 Haitian Sun, William W. Cohen, Ruslan Salakhutdinov

In addition to conditional answers, the dataset also features: (1) long context documents with information that is related in logically complex ways; (2) multi-hop questions that require compositional logical reasoning; (3) a combination of extractive questions, yes/no questions, questions with multiple answers, and not-answerable questions; (4) questions asked without knowing the answers.

Question Answering Reading Comprehension

FILM: Following Instructions in Language with Modular Methods

1 code implementation ICLR 2022 So Yeon Min, Devendra Singh Chaplot, Pradeep Ravikumar, Yonatan Bisk, Ruslan Salakhutdinov

In contrast, we propose a modular method with structured representations that (1) builds a semantic map of the scene and (2) performs exploration with a semantic search policy, to achieve the natural language goal.

Imitation Learning

Recurrent Model-Free RL can be a Strong Baseline for Many POMDPs

1 code implementation 11 Oct 2021 Tianwei Ni, Benjamin Eysenbach, Ruslan Salakhutdinov

However, prior work has found that such recurrent model-free RL methods tend to perform worse than more specialized algorithms that are designed for specific types of POMDPs.

Mismatched No More: Joint Model-Policy Optimization for Model-Based RL

1 code implementation 6 Oct 2021 Benjamin Eysenbach, Alexander Khazatsky, Sergey Levine, Ruslan Salakhutdinov

As noted in prior work, there is an objective mismatch: models are useful if they yield good policies, but they are trained to maximize their accuracy, rather than the performance of the policies that result from them.

Model-based Reinforcement Learning

The Information Geometry of Unsupervised Reinforcement Learning

1 code implementation ICLR 2022 Benjamin Eysenbach, Ruslan Salakhutdinov, Sergey Levine

In this work, we show that unsupervised skill discovery algorithms based on mutual information maximization do not learn skills that are optimal for every possible reward function.

Contrastive Learning reinforcement-learning +2

Learning Visual-Linguistic Adequacy, Fidelity, and Fluency for Novel Object Captioning

no code implementations 29 Sep 2021 Cheng-Fu Yang, Yao-Hung Hubert Tsai, Wan-Cyuan Fan, Yu-Chiang Frank Wang, Louis-Philippe Morency, Ruslan Salakhutdinov

Novel object captioning (NOC) learns image captioning models for describing objects or visual concepts which are unseen (i.e., novel) in the training captions.

Image Captioning

Robust Predictable Control

1 code implementation NeurIPS 2021 Benjamin Eysenbach, Ruslan Salakhutdinov, Sergey Levine

Many of the challenges facing today's reinforcement learning (RL) algorithms, such as robustness, generalization, transfer, and computational efficiency are closely related to compression.

Decision Making

MultiBench: Multiscale Benchmarks for Multimodal Representation Learning

2 code implementations 15 Jul 2021 Paul Pu Liang, Yiwei Lyu, Xiang Fan, Zetian Wu, Yun Cheng, Jason Wu, Leslie Chen, Peter Wu, Michelle A. Lee, Yuke Zhu, Ruslan Salakhutdinov, Louis-Philippe Morency

In order to accelerate progress towards understudied modalities and tasks while ensuring real-world robustness, we release MultiBench, a systematic and unified large-scale benchmark spanning 15 datasets, 10 modalities, 20 prediction tasks, and 6 research areas.

Representation Learning

Learning Language and Multimodal Privacy-Preserving Markers of Mood from Mobile Data

no code implementations ACL 2021 Paul Pu Liang, Terrance Liu, Anna Cai, Michal Muszynski, Ryo Ishii, Nicholas Allen, Randy Auerbach, David Brent, Ruslan Salakhutdinov, Louis-Philippe Morency

Using computational models, we find that language and multimodal representations of mobile typed text (spanning typed characters, words, keystroke timings, and app usage) are predictive of daily mood.

Towards Understanding and Mitigating Social Biases in Language Models

1 code implementation 24 Jun 2021 Paul Pu Liang, Chiyu Wu, Louis-Philippe Morency, Ruslan Salakhutdinov

As machine learning methods are deployed in real-world settings such as healthcare, legal systems, and social science, it is crucial to recognize how they shape social biases and stereotypes in these sensitive decision-making processes.

Decision Making Fairness +2

Online Sub-Sampling for Reinforcement Learning with General Function Approximation

no code implementations 14 Jun 2021 Dingwen Kong, Ruslan Salakhutdinov, Ruosong Wang, Lin F. Yang

In this paper, by applying online sub-sampling techniques, we develop an algorithm that takes $\widetilde{O}(\mathrm{poly}(dH))$ computation time per round on average, and enjoys nearly the same regret bound.

reinforcement-learning

HuBERT: Self-Supervised Speech Representation Learning by Masked Prediction of Hidden Units

4 code implementations 14 Jun 2021 Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, Abdelrahman Mohamed

Self-supervised approaches for speech representation learning are challenged by three unique problems: (1) there are multiple sound units in each input utterance, (2) there is no lexicon of input sound units during the pre-training phase, and (3) sound units have variable lengths with no explicit segmentation.

Ranked #3 on Speech Recognition on LibriSpeech test-other (using extra training data)

Representation Learning Speech Recognition

Integrating Auxiliary Information in Self-supervised Learning

no code implementations 5 Jun 2021 Yao-Hung Hubert Tsai, Tianqin Li, Weixin Liu, Peiyuan Liao, Ruslan Salakhutdinov, Louis-Philippe Morency

Our approach contributes as follows: 1) Compared to conventional self-supervised representations, the auxiliary-information-infused self-supervised representations bring the performance closer to that of supervised representations; 2) The presented Cl-InfoNCE can also work with clusters constructed without supervision (e.g., k-means clusters) and outperforms strong clustering-based self-supervised learning approaches, such as the Prototypical Contrastive Learning (PCL) method; 3) We show that Cl-InfoNCE may be a better approach to leveraging the data clustering information, by comparing it to the baseline approach of learning to predict the clustering assignments with a cross-entropy loss.

Contrastive Learning Self-Supervised Learning

Iterative Hierarchical Attention for Answering Complex Questions over Long Documents

no code implementations 1 Jun 2021 Haitian Sun, William W. Cohen, Ruslan Salakhutdinov

We propose a new model, DocHopper, that iteratively attends to different parts of long, hierarchically structured documents to answer complex questions.

Multi-hop Question Answering Question Answering +1

Uncertainty Weighted Actor-Critic for Offline Reinforcement Learning

2 code implementations 17 May 2021 Yue Wu, Shuangfei Zhai, Nitish Srivastava, Joshua Susskind, Jian Zhang, Ruslan Salakhutdinov, Hanlin Goh

Offline Reinforcement Learning promises to learn effective policies from previously-collected, static datasets without the need for exploration.

Offline RL Q-Learning +1

A Note on Connecting Barlow Twins with Negative-Sample-Free Contrastive Learning

2 code implementations 28 Apr 2021 Yao-Hung Hubert Tsai, Shaojie Bai, Louis-Philippe Morency, Ruslan Salakhutdinov

In this report, we relate the algorithmic design of Barlow Twins' method to the Hilbert-Schmidt Independence Criterion (HSIC), thus establishing it as a contrastive learning approach that is free of negative samples.

Contrastive Learning Self-Supervised Learning
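The Hilbert-Schmidt Independence Criterion referenced above has a simple biased empirical estimator built from centered Gram matrices. A minimal NumPy sketch with Gaussian kernels (function names and the bandwidth choice are illustrative, not the paper's code):

```python
import numpy as np

def hsic(x, y, sigma=1.0):
    """Biased empirical HSIC: trace(K H L H) / (n - 1)^2, where K and L
    are Gaussian Gram matrices of x and y and H is the centering matrix."""
    n = x.shape[0]

    def gram(z):
        # Pairwise squared distances, then Gaussian kernel.
        sq = np.sum(z ** 2, axis=1)
        d2 = sq[:, None] + sq[None, :] - 2.0 * z @ z.T
        return np.exp(-d2 / (2.0 * sigma ** 2))

    K, L = gram(x), gram(y)
    H = np.eye(n) - np.ones((n, n)) / n  # centering matrix
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2
```

Strongly dependent pairs (e.g., y equal to x plus small noise) yield a much larger HSIC value than independent draws, which is the sense in which the criterion measures dependence without negative samples.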

StylePTB: A Compositional Benchmark for Fine-grained Controllable Text Style Transfer

2 code implementations NAACL 2021 Yiwei Lyu, Paul Pu Liang, Hai Pham, Eduard Hovy, Barnabás Póczos, Ruslan Salakhutdinov, Louis-Philippe Morency

Many of the existing style transfer benchmarks primarily focus on individual high-level semantic changes (e.g., positive to negative), which enable controllability at a high level but do not offer fine-grained control involving sentence structure, emphasis, and content of the sentence.

Style Transfer Text Style Transfer

Efficient Transformers in Reinforcement Learning using Actor-Learner Distillation

no code implementations ICLR 2021 Emilio Parisotto, Ruslan Salakhutdinov

Many real-world applications such as robotics provide hard constraints on power and compute that limit the viable model complexity of Reinforcement Learning (RL) agents.

reinforcement-learning

Replacing Rewards with Examples: Example-Based Policy Search via Recursive Classification

1 code implementation NeurIPS 2021 Benjamin Eysenbach, Sergey Levine, Ruslan Salakhutdinov

Can we devise RL algorithms that instead enable users to specify tasks simply by providing examples of successful outcomes?

General Classification

Self-supervised Representation Learning with Relative Predictive Coding

1 code implementation ICLR 2021 Yao-Hung Hubert Tsai, Martin Q. Ma, Muqiao Yang, Han Zhao, Louis-Philippe Morency, Ruslan Salakhutdinov

This paper introduces Relative Predictive Coding (RPC), a new contrastive representation learning objective that maintains a good balance among training stability, minibatch size sensitivity, and downstream task performance.

Representation Learning Self-Supervised Learning

Instabilities of Offline RL with Pre-Trained Neural Representation

no code implementations 8 Mar 2021 Ruosong Wang, Yifan Wu, Ruslan Salakhutdinov, Sham M. Kakade

In offline reinforcement learning (RL), we seek to utilize offline data to evaluate (or learn) policies in scenarios where the data are collected from a distribution that substantially differs from that of the target policy to be evaluated.

Offline RL

On Proximal Policy Optimization's Heavy-tailed Gradients

no code implementations 20 Feb 2021 Saurabh Garg, Joshua Zhanson, Emilio Parisotto, Adarsh Prasad, J. Zico Kolter, Zachary C. Lipton, Sivaraman Balakrishnan, Ruslan Salakhutdinov, Pradeep Ravikumar

In this paper, we present a detailed empirical study to characterize the heavy-tailed nature of the gradients of the PPO surrogate reward function.

Continuous Control

Reasoning Over Virtual Knowledge Bases With Open Predicate Relations

no code implementations 14 Feb 2021 Haitian Sun, Pat Verga, Bhuwan Dhingra, Ruslan Salakhutdinov, William W. Cohen

We present the Open Predicate Query Language (OPQL), a method for constructing a virtual KB (VKB) trained entirely from text.

Language Modelling Open-Domain Question Answering

The MineRL 2020 Competition on Sample Efficient Reinforcement Learning using Human Priors

no code implementations 26 Jan 2021 William H. Guss, Mario Ynocente Castro, Sam Devlin, Brandon Houghton, Noboru Sean Kuno, Crissman Loomis, Stephanie Milani, Sharada Mohanty, Keisuke Nakata, Ruslan Salakhutdinov, John Schulman, Shinya Shiroshita, Nicholay Topin, Avinash Ummadisingu, Oriol Vinyals

Although deep reinforcement learning has led to breakthroughs in many difficult domains, these successes have required an ever-increasing number of samples, affording only a shrinking segment of the AI community access to their development.

Decision Making Efficient Exploration +1

Uncertainty Weighted Offline Reinforcement Learning

no code implementations 1 Jan 2021 Yue Wu, Shuangfei Zhai, Nitish Srivastava, Joshua M. Susskind, Jian Zhang, Ruslan Salakhutdinov, Hanlin Goh

Offline Reinforcement Learning promises to learn effective policies from previously-collected, static datasets without the need for exploration.

Offline RL Q-Learning +1

Feature-Robust Optimal Transport for High-Dimensional Data

no code implementations 1 Jan 2021 Mathis Petrovich, Chao Liang, Ryoma Sato, Yanbin Liu, Yao-Hung Hubert Tsai, Linchao Zhu, Yi Yang, Ruslan Salakhutdinov, Makoto Yamada

To show the effectiveness of FROT, we propose using the FROT algorithm for the layer selection problem in deep neural networks for semantic correspondence.

Semantic correspondence

Cross-Modal Generalization: Learning in Low Resource Modalities via Meta-Alignment

1 code implementation 4 Dec 2020 Paul Pu Liang, Peter Wu, Liu Ziyin, Louis-Philippe Morency, Ruslan Salakhutdinov

In this work, we propose algorithms for cross-modal generalization: a learning paradigm to train a model that can (1) quickly perform new tasks in a target modality (i.e., meta-learning) and (2) do so while being trained on a different source modality.

Meta-Learning

C-Learning: Learning to Achieve Goals via Recursive Classification

no code implementations ICLR 2021 Benjamin Eysenbach, Ruslan Salakhutdinov, Sergey Levine

This problem, which can be viewed as a reframing of goal-conditioned reinforcement learning (RL), is centered around learning a conditional probability density function over future states.

Classification Density Estimation +2

Planning with Submodular Objective Functions

no code implementations 22 Oct 2020 Ruosong Wang, Hanrui Zhang, Devendra Singh Chaplot, Denis Garagić, Ruslan Salakhutdinov

We study planning with submodular objective functions, where instead of maximizing the cumulative reward, the goal is to maximize the objective value induced by a submodular function.

Case Study: Deontological Ethics in NLP

no code implementations NAACL 2021 Shrimai Prabhumoye, Brendon Boldt, Ruslan Salakhutdinov, Alan W Black

Recent work in natural language processing (NLP) has focused on ethical challenges such as understanding and mitigating bias in data and algorithms; identifying objectionable content like hate speech, stereotypes and offensive language; and building frameworks for better system design and data handling practices.

Information Obfuscation of Graph Neural Networks

1 code implementation 28 Sep 2020 Peiyuan Liao, Han Zhao, Keyulu Xu, Tommi Jaakkola, Geoffrey Gordon, Stefanie Jegelka, Ruslan Salakhutdinov

While the advent of Graph Neural Networks (GNNs) has greatly improved node and graph representation learning in many applications, the neighborhood aggregation scheme exposes additional vulnerabilities to adversaries seeking to extract node-level information about sensitive attributes.

Adversarial Defense Graph Representation Learning +2

Few-Shot Learning with Intra-Class Knowledge Transfer

no code implementations 22 Aug 2020 Vivek Roy, Yan Xu, Yu-Xiong Wang, Kris Kitani, Ruslan Salakhutdinov, Martial Hebert

Recent works have proposed to solve this task by augmenting the training data of the few-shot classes using generative models with the few-shot training samples as the seeds.

Few-Shot Learning Transfer Learning

Towards Debiasing Sentence Representations

1 code implementation ACL 2020 Paul Pu Liang, Irene Mengze Li, Emily Zheng, Yao Chong Lim, Ruslan Salakhutdinov, Louis-Philippe Morency

As natural language processing methods are increasingly deployed in real-world scenarios such as healthcare, legal systems, and social science, it becomes necessary to recognize the role they potentially play in shaping social biases and stereotypes.

Linguistic Acceptability Natural Language Understanding +2

Object Goal Navigation using Goal-Oriented Semantic Exploration

2 code implementations NeurIPS 2020 Devendra Singh Chaplot, Dhiraj Gandhi, Abhinav Gupta, Ruslan Salakhutdinov

We propose a modular system called 'Goal-Oriented Semantic Exploration', which builds an episodic semantic map and uses it to explore the environment efficiently based on the goal object category.

Robot Navigation

Off-Dynamics Reinforcement Learning: Training for Transfer with Domain Classifiers

1 code implementation ICLR 2021 Benjamin Eysenbach, Swapnil Asawa, Shreyas Chaudhari, Sergey Levine, Ruslan Salakhutdinov

Building off of a probabilistic view of RL, we formally show that we can achieve this goal by compensating for the difference in dynamics by modifying the reward function.

Continuous Control Domain Adaptation +1
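The reward modification described above can be sketched as a classifier-based log-ratio correction. A hedged illustration of the idea, assuming two binary classifiers that output the probability a transition came from the target domain, one conditioned on (s, a, s') and one on (s, a) only; the function name is illustrative, not the paper's code:

```python
import numpy as np

def dynamics_reward_correction(p_target_sas, p_target_sa):
    """Sketch of a reward correction of the form
    Δr = log p_target(s'|s,a) - log p_source(s'|s,a),
    recovered as a difference of classifier log-odds: the (s,a,s')
    classifier's logit minus the (s,a) classifier's logit, which
    cancels the marginal term."""
    logit = lambda p: np.log(p) - np.log(1.0 - p)
    return logit(p_target_sas) - logit(p_target_sa)
```

When both classifiers are at chance (probability 0.5), the correction is zero, i.e., the two domains' dynamics look indistinguishable and the reward is left unmodified.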

On Reward-Free Reinforcement Learning with Linear Function Approximation

no code implementations NeurIPS 2020 Ruosong Wang, Simon S. Du, Lin F. Yang, Ruslan Salakhutdinov

The sample complexity of our algorithm is polynomial in the feature dimension and the planning horizon, and is completely independent of the number of states and actions.

reinforcement-learning

Self-supervised Learning from a Multi-view Perspective

1 code implementation ICLR 2021 Yao-Hung Hubert Tsai, Yue Wu, Ruslan Salakhutdinov, Louis-Philippe Morency

In particular, we propose a composite objective that bridges the gap between prior contrastive and predictive learning objectives, and introduce an additional objective term to discard task-irrelevant information.

Image Captioning Language Modelling +3

Neural Methods for Point-wise Dependency Estimation

1 code implementation NeurIPS 2020 Yao-Hung Hubert Tsai, Han Zhao, Makoto Yamada, Louis-Philippe Morency, Ruslan Salakhutdinov

Since its inception, the neural estimation of mutual information (MI) has demonstrated the empirical success of modeling expected dependency between high-dimensional random variables.

Cross-Modal Retrieval Representation Learning

Feature Robust Optimal Transport for High-dimensional Data

no code implementations 25 May 2020 Mathis Petrovich, Chao Liang, Ryoma Sato, Yanbin Liu, Yao-Hung Hubert Tsai, Linchao Zhu, Yi Yang, Ruslan Salakhutdinov, Makoto Yamada

To show the effectiveness of FROT, we propose using the FROT algorithm for the layer selection problem in deep neural networks for semantic correspondence.

Semantic correspondence

Neural Topological SLAM for Visual Navigation

no code implementations CVPR 2020 Devendra Singh Chaplot, Ruslan Salakhutdinov, Abhinav Gupta, Saurabh Gupta

This paper studies the problem of image-goal navigation which involves navigating to the location indicated by a goal image in a novel previously unseen environment.

Visual Navigation

Guaranteeing Reproducibility in Deep Learning Competitions

no code implementations 12 May 2020 Brandon Houghton, Stephanie Milani, Nicholay Topin, William Guss, Katja Hofmann, Diego Perez-Liebana, Manuela Veloso, Ruslan Salakhutdinov

To encourage the development of methods with reproducible and robust training behavior, we propose a challenge paradigm where competitors are evaluated directly on the performance of their learning procedures rather than pre-trained agents.

Exploring Controllable Text Generation Techniques

no code implementations COLING 2020 Shrimai Prabhumoye, Alan W. Black, Ruslan Salakhutdinov

In this work, we provide a new schema of the pipeline of the generation process by classifying it into five modules.

Text Generation

Topological Sort for Sentence Ordering

2 code implementations ACL 2020 Shrimai Prabhumoye, Ruslan Salakhutdinov, Alan W. Black

Sentence ordering is the task of arranging the sentences of a given text in the correct order.

Sentence Ordering
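The ordering idea named in the title can be illustrated with Kahn's topological sort over pairwise precedence constraints. A schematic sketch: in practice the "i precedes j" constraints would come from a learned pairwise classifier, which this snippet does not model:

```python
from collections import deque

def order_sentences(n, before_pairs):
    """Kahn's topological sort: order n sentences given pairwise
    constraints (i, j) meaning sentence i should precede sentence j."""
    adj = {i: [] for i in range(n)}
    indeg = [0] * n
    for i, j in before_pairs:
        adj[i].append(j)
        indeg[j] += 1
    # Start from sentences with no predecessors.
    queue = deque(i for i in range(n) if indeg[i] == 0)
    order = []
    while queue:
        u = queue.popleft()
        order.append(u)
        for v in adj[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    return order  # partial if the constraint graph contains a cycle
```

For example, the constraints {2 before 0, 0 before 1, 2 before 1} recover the order [2, 0, 1].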

Politeness Transfer: A Tag and Generate Approach

1 code implementation ACL 2020 Aman Madaan, Amrith Setlur, Tanmay Parekh, Barnabas Poczos, Graham Neubig, Yiming Yang, Ruslan Salakhutdinov, Alan W. Black, Shrimai Prabhumoye

This paper introduces a new task of politeness transfer which involves converting non-polite sentences to polite sentences while preserving the meaning.

Style Transfer TAG

Learning to Explore using Active Neural SLAM

2 code implementations ICLR 2020 Devendra Singh Chaplot, Dhiraj Gandhi, Saurabh Gupta, Abhinav Gupta, Ruslan Salakhutdinov

The use of learning provides flexibility with respect to input modalities (in the SLAM module), leverages structural regularities of the world (in global policies), and provides robustness to errors in state estimation (in local policies).

PointGoal Navigation

A Closer Look at Accuracy vs. Robustness

1 code implementation NeurIPS 2020 Yao-Yuan Yang, Cyrus Rashtchian, Hongyang Zhang, Ruslan Salakhutdinov, Kamalika Chaudhuri

Current methods for training robust networks lead to a drop in test accuracy, which has led prior works to posit that a robustness-accuracy tradeoff may be inevitable in deep learning.

On Emergent Communication in Competitive Multi-Agent Teams

1 code implementation 4 Mar 2020 Paul Pu Liang, Jeffrey Chen, Ruslan Salakhutdinov, Louis-Philippe Morency, Satwik Kottur

Several recent works have found the emergence of grounded compositional language in the communication protocols developed by mostly cooperative multi-agent systems when learned end-to-end to maximize performance on a downstream task.

Rewriting History with Inverse RL: Hindsight Inference for Policy Improvement

1 code implementation NeurIPS 2020 Benjamin Eysenbach, Xinyang Geng, Sergey Levine, Ruslan Salakhutdinov

In this paper, we show that hindsight relabeling is inverse RL, an observation that suggests that we can use inverse RL in tandem with RL algorithms to efficiently solve many tasks.

reinforcement-learning

Differentiable Reasoning over a Virtual Knowledge Base

1 code implementation ICLR 2020 Bhuwan Dhingra, Manzil Zaheer, Vidhisha Balachandran, Graham Neubig, Ruslan Salakhutdinov, William W. Cohen

In particular, we describe a neural module, DrKIT, that traverses textual data like a KB, softly following paths of relations between mentions of entities in the corpus.

Re-Ranking

Learning Not to Learn in the Presence of Noisy Labels

no code implementations 16 Feb 2020 Liu Ziyin, Blair Chen, Ru Wang, Paul Pu Liang, Ruslan Salakhutdinov, Louis-Philippe Morency, Masahito Ueda

Learning in the presence of label noise is a challenging yet important task: it is crucial to design models that are robust in the presence of mislabeled datasets.

Text Classification

Capsules with Inverted Dot-Product Attention Routing

3 code implementations ICLR 2020 Yao-Hung Hubert Tsai, Nitish Srivastava, Hanlin Goh, Ruslan Salakhutdinov

We introduce a new routing algorithm for capsule networks, in which a child capsule is routed to a parent based only on agreement between the parent's state and the child's vote.

Image Classification

Think Locally, Act Globally: Federated Learning with Local and Global Representations

1 code implementation 6 Jan 2020 Paul Pu Liang, Terrance Liu, Liu Ziyin, Nicholas B. Allen, Randy P. Auerbach, David Brent, Ruslan Salakhutdinov, Louis-Philippe Morency

To this end, we propose a new federated learning algorithm that jointly learns compact local representations on each device and a global model across all devices.

Federated Learning Representation Learning +1

Geometric Capsule Autoencoders for 3D Point Clouds

no code implementations 6 Dec 2019 Nitish Srivastava, Hanlin Goh, Ruslan Salakhutdinov

The pose encodes where the entity is, while the feature encodes what it is.

Worst Cases Policy Gradients

1 code implementation 9 Nov 2019 Yichuan Charlie Tang, Jian Zhang, Ruslan Salakhutdinov

Recent advances in deep reinforcement learning have demonstrated the capability of learning complex control policies from many types of environments.

reinforcement-learning

Multiple Futures Prediction

1 code implementation 4 Nov 2019 Yichuan Charlie Tang, Ruslan Salakhutdinov

Towards these goals, we introduce a probabilistic framework that efficiently learns latent variables to jointly model the multi-step future motions of agents in a scene.

motion prediction

Enhanced Convolutional Neural Tangent Kernels

no code implementations 3 Nov 2019 Zhiyuan Li, Ruosong Wang, Dingli Yu, Simon S. Du, Wei Hu, Ruslan Salakhutdinov, Sanjeev Arora

An exact algorithm to compute CNTK (Arora et al., 2019) yielded the finding that the classification accuracy of CNTK on CIFAR-10 is within 6-7% of that of the corresponding CNN architecture (the best figure being around 78%), which is notable performance for a fixed kernel.

Data Augmentation

Transformer Dissection: An Unified Understanding for Transformer's Attention via the Lens of Kernel

no code implementations IJCNLP 2019 Yao-Hung Hubert Tsai, Shaojie Bai, Makoto Yamada, Louis-Philippe Morency, Ruslan Salakhutdinov

This new formulation gives us a better way to understand the individual components of the Transformer's attention, such as how to better integrate the positional embedding.

Machine Translation Translation

Learning Data Manipulation for Augmentation and Weighting

1 code implementation NeurIPS 2019 Zhiting Hu, Bowen Tan, Ruslan Salakhutdinov, Tom Mitchell, Eric P. Xing

In this work, we propose a new method that supports learning different manipulation schemes with the same gradient-based algorithm.

Data Augmentation reinforcement-learning +1

Complex Transformer: A Framework for Modeling Complex-Valued Sequence

1 code implementation 22 Oct 2019 Muqiao Yang, Martin Q. Ma, Dongyu Li, Yao-Hung Hubert Tsai, Ruslan Salakhutdinov

While deep learning has received a surge of interest in a variety of fields in recent years, major deep learning models barely use complex numbers.

Music Transcription

Harnessing the Power of Infinitely Wide Deep Nets on Small-data Tasks

3 code implementations ICLR 2020 Sanjeev Arora, Simon S. Du, Zhiyuan Li, Ruslan Salakhutdinov, Ruosong Wang, Dingli Yu

On VOC07 testbed for few-shot image classification tasks on ImageNet with transfer learning (Goyal et al., 2019), replacing the linear SVM currently used with a Convolutional NTK SVM consistently improves performance.

Few-Shot Image Classification General Classification +2

On Universal Approximation by Neural Networks with Uniform Guarantees on Approximation of Infinite Dimensional Maps

no code implementations 3 Oct 2019 William H. Guss, Ruslan Salakhutdinov

Additionally, we provide the first lower-bound on the minimal number of input and output units required by a finite approximation to an infinite neural network to guarantee that it can uniformly approximate any nonlinear operator using samples from its inputs and outputs.

LSMI-Sinkhorn: Semi-supervised Mutual Information Estimation with Optimal Transport

1 code implementation 5 Sep 2019 Yanbin Liu, Makoto Yamada, Yao-Hung Hubert Tsai, Tam Le, Ruslan Salakhutdinov, Yi Yang

To estimate the mutual information from data, a common practice is preparing a set of paired samples $\{(\mathbf{x}_i,\mathbf{y}_i)\}_{i=1}^n \stackrel{\mathrm{i.i.d.}}{\sim} p(\mathbf{x},\mathbf{y})$.

Mutual Information Estimation
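As a point of reference for the paired-sample setting described above, mutual information has a closed form for bivariate Gaussians, which yields a trivial plug-in estimate. A sketch with illustrative names; this is not the LSMI-Sinkhorn estimator, which targets the harder semi-paired case:

```python
import numpy as np

def gaussian_mi(rho):
    """Closed-form MI (in nats) of a bivariate Gaussian with correlation rho:
    I(X; Y) = -0.5 * log(1 - rho^2)."""
    return -0.5 * np.log(1.0 - rho ** 2)

def plugin_mi(x, y):
    """Plug-in estimate for paired 1-D samples: fit the empirical
    correlation, then apply the Gaussian formula."""
    rho = np.corrcoef(x, y)[0, 1]
    return gaussian_mi(rho)
```

Independent variables (rho = 0) give zero MI, and MI grows monotonically as |rho| approaches 1, matching the intuition that stronger dependence carries more shared information.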

Transformer Dissection: A Unified Understanding of Transformer's Attention via the Lens of Kernel

1 code implementation EMNLP 2019 Yao-Hung Hubert Tsai, Shaojie Bai, Makoto Yamada, Louis-Philippe Morency, Ruslan Salakhutdinov

This new formulation gives us a better way to understand the individual components of the Transformer's attention, such as how to better integrate the positional embedding.

Machine Translation Translation

Learning Neural Networks with Adaptive Regularization

1 code implementation NeurIPS 2019 Han Zhao, Yao-Hung Hubert Tsai, Ruslan Salakhutdinov, Geoffrey J. Gordon

Feed-forward neural networks can be understood as a combination of an intermediate representation and a linear hypothesis.

Learning Representations from Imperfect Time Series Data via Tensor Rank Regularization

no code implementations ACL 2019 Paul Pu Liang, Zhun Liu, Yao-Hung Hubert Tsai, Qibin Zhao, Ruslan Salakhutdinov, Louis-Philippe Morency

Our method is based on the observation that high-dimensional multimodal time series data often exhibit correlations across time and modalities which leads to low-rank tensor representations.

Question Answering Sentiment Analysis +2

Deep Gamblers: Learning to Abstain with Portfolio Theory

2 code implementations NeurIPS 2019 Liu Ziyin, Zhikang Wang, Paul Pu Liang, Ruslan Salakhutdinov, Louis-Philippe Morency, Masahito Ueda

We deal with the \textit{selective classification} problem (supervised-learning problem with a rejection option), where we want to achieve the best performance at a certain level of coverage of the data.

Classification General Classification
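Selective classification with a rejection option, as described above, is commonly illustrated by a confidence-thresholding baseline. A generic sketch of that baseline, not the paper's portfolio-theoretic loss; names and the threshold value are illustrative:

```python
import numpy as np

def selective_predict(probs, threshold=0.7):
    """Predict the argmax class only when the max class probability
    exceeds the threshold; otherwise abstain (encoded as -1).
    Raising the threshold lowers coverage but raises accuracy
    on the covered examples."""
    conf = probs.max(axis=1)          # per-example confidence
    preds = probs.argmax(axis=1)      # tentative class predictions
    preds[conf < threshold] = -1      # abstain on low-confidence inputs
    return preds
```

For example, with class probabilities [0.9, 0.1] the model predicts class 0, while [0.55, 0.45] falls below the 0.7 threshold and is rejected.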

XLNet: Generalized Autoregressive Pretraining for Language Understanding

23 code implementations NeurIPS 2019 Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, Quoc V. Le

With the capability of modeling bidirectional contexts, denoising autoencoding based pretraining like BERT achieves better performance than pretraining approaches based on autoregressive language modeling.

Audio Question Answering Chinese Reading Comprehension +9

"My Way of Telling a Story": Persona based Grounded Story Generation

no code implementations 14 Jun 2019 Shrimai Prabhumoye, Khyathi Raghavi Chandu, Ruslan Salakhutdinov, Alan W. Black

To this end, we propose five models which are incremental extensions to the baseline model to perform the task at hand.

Story Generation Visual Storytelling

Search on the Replay Buffer: Bridging Planning and Reinforcement Learning

1 code implementation NeurIPS 2019 Benjamin Eysenbach, Ruslan Salakhutdinov, Sergey Levine

We introduce a general control algorithm that combines the strengths of planning and reinforcement learning to effectively solve these tasks.

reinforcement-learning

Efficient Exploration via State Marginal Matching

1 code implementation 12 Jun 2019 Lisa Lee, Benjamin Eysenbach, Emilio Parisotto, Eric Xing, Sergey Levine, Ruslan Salakhutdinov

The SMM objective can be viewed as a two-player, zero-sum game between a state density model and a parametric policy, an idea that we use to build an algorithm for optimizing the SMM objective.

Efficient Exploration Unsupervised Reinforcement Learning

Graph Neural Tangent Kernel: Fusing Graph Neural Networks with Graph Kernels

1 code implementation NeurIPS 2019 Simon S. Du, Kangcheng Hou, Barnabás Póczos, Ruslan Salakhutdinov, Ruosong Wang, Keyulu Xu

While graph kernels (GKs) are easy to train and enjoy provable theoretical guarantees, their practical performances are limited by their expressive power, as the kernel function often depends on hand-crafted combinatorial features of graphs.

Graph Classification

Strong and Simple Baselines for Multimodal Utterance Embeddings

1 code implementation NAACL 2019 Paul Pu Liang, Yao Chong Lim, Yao-Hung Hubert Tsai, Ruslan Salakhutdinov, Louis-Philippe Morency

Human language is a rich multimodal signal consisting of spoken words, facial expressions, body gestures, and vocal intonations.

Cross-Task Knowledge Transfer for Visually-Grounded Navigation

no code implementations ICLR 2019 Devendra Singh Chaplot, Lisa Lee, Ruslan Salakhutdinov, Devi Parikh, Dhruv Batra

Recent efforts on training visual navigation agents conditioned on language using deep reinforcement learning have been successful in learning policies for two different tasks: learning to follow navigational instructions and embodied question answering.

Disentanglement Embodied Question Answering +3

On Exact Computation with an Infinitely Wide Neural Net

2 code implementations NeurIPS 2019 Sanjeev Arora, Simon S. Du, Wei Hu, Zhiyuan Li, Ruslan Salakhutdinov, Ruosong Wang

An attraction of such ideas is that a pure kernel-based method is used to capture the power of a fully-trained deep net of infinite width.

Gaussian Processes

The MineRL 2019 Competition on Sample Efficient Reinforcement Learning using Human Priors

1 code implementation22 Apr 2019 William H. Guss, Cayden Codel, Katja Hofmann, Brandon Houghton, Noboru Kuno, Stephanie Milani, Sharada Mohanty, Diego Perez Liebana, Ruslan Salakhutdinov, Nicholay Topin, Manuela Veloso, Phillip Wang

To that end, we introduce: (1) the Minecraft ObtainDiamond task, a sequential decision making environment requiring long-term planning, hierarchical control, and efficient exploration methods; and (2) the MineRL-v0 dataset, a large-scale collection of over 60 million state-action pairs of human demonstrations that can be resimulated into embodied trajectories with arbitrary modifications to game state and visuals.

Decision Making Efficient Exploration +1

Video Relationship Reasoning using Gated Spatio-Temporal Energy Graph

1 code implementation CVPR 2019 Yao-Hung Hubert Tsai, Santosh Divvala, Louis-Philippe Morency, Ruslan Salakhutdinov, Ali Farhadi

Visual relationship reasoning is a crucial yet challenging task for understanding rich interactions across visual concepts.

Concurrent Meta Reinforcement Learning

1 code implementation7 Mar 2019 Emilio Parisotto, Soham Ghosh, Sai Bhargav Yalamanchi, Varsha Chinnaobireddy, Yuhuai Wu, Ruslan Salakhutdinov

In this multi-agent setting, a set of parallel agents are executed in the same environment and each of these "rollout" agents is given the means to communicate with the others.

Efficient Exploration Meta-Learning +3

The Omniglot challenge: a 3-year progress report

7 code implementations9 Feb 2019 Brenden M. Lake, Ruslan Salakhutdinov, Joshua B. Tenenbaum

Three years ago, we released the Omniglot dataset for one-shot learning, along with five challenge tasks and a computational model that addresses these tasks.

General Classification One-Shot Learning

Embodied Multimodal Multitask Learning

no code implementations4 Feb 2019 Devendra Singh Chaplot, Lisa Lee, Ruslan Salakhutdinov, Devi Parikh, Dhruv Batra

In this paper, we propose a multitask model capable of jointly learning these multimodal tasks, and transferring knowledge of words and their grounding in visual objects across the tasks.

Disentanglement Embodied Question Answering +3

Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context

27 code implementations ACL 2019 Zihang Dai, Zhilin Yang, Yiming Yang, Jaime Carbonell, Quoc V. Le, Ruslan Salakhutdinov

Transformers have a potential of learning longer-term dependency, but are limited by a fixed-length context in the setting of language modeling.

Language Modelling
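As a rough illustration of the segment-level recurrence described above, the sketch below caches the previous segment's hidden states and lets the current segment attend over the cache plus itself (memory length and dimensions are illustrative assumptions, not the paper's settings; in training, gradients are stopped through the cached states).

```python
import numpy as np

def update_memory(mem, hidden, mem_len):
    """Concatenate cached states with the new segment's states and keep
    only the most recent mem_len steps (gradients would be stopped
    through `mem` during training)."""
    cat = np.concatenate([mem, hidden], axis=0)
    return cat[-mem_len:]

def attention_context(mem, hidden):
    """Keys/values span the cached memory plus the current segment,
    while queries come only from the current segment."""
    return np.concatenate([mem, hidden], axis=0)

mem = np.zeros((0, 4))               # empty cache, model dim 4
for step in range(3):
    h = np.random.randn(5, 4)        # one segment of 5 hidden states
    ctx = attention_context(mem, h)  # attend over memory + segment
    mem = update_memory(mem, h, mem_len=8)
print(mem.shape)  # (8, 4) once enough segments have been seen
```

The effective context thus grows beyond the fixed segment length without recomputing earlier segments.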

Connecting the Dots Between MLE and RL for Sequence Prediction

no code implementations24 Nov 2018 Bowen Tan, Zhiting Hu, Zichao Yang, Ruslan Salakhutdinov, Eric Xing

Reinforcement learning such as policy gradient addresses the issue but can have prohibitively poor exploration efficiency.

Imitation Learning Machine Translation +2

On the Complexity of Exploration in Goal-Driven Navigation

1 code implementation16 Nov 2018 Maruan Al-Shedivat, Lisa Lee, Ruslan Salakhutdinov, Eric Xing

Next, we propose to measure the complexity of each environment by constructing dependency graphs between the goals and analytically computing \emph{hitting times} of a random walk in the graph.
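Expected hitting times of a random walk on a small graph can be computed analytically by solving a linear system; the sketch below is a minimal version of that computation (the 3-node chain and the solve-based formulation are my own illustrative choices, not necessarily the paper's construction).

```python
import numpy as np

def hitting_times(P, target):
    """Expected number of steps for a random walk with transition
    matrix P to first reach `target`, from every start node.

    With Q = P restricted to non-target nodes, the hitting times h
    satisfy h = 1 + Q h, i.e. (I - Q) h = 1.
    """
    n = P.shape[0]
    others = [i for i in range(n) if i != target]
    Q = P[np.ix_(others, others)]
    h_others = np.linalg.solve(np.eye(len(others)) - Q, np.ones(len(others)))
    h = np.zeros(n)
    h[others] = h_others
    return h

# Simple 3-node chain 0 <-> 1 <-> 2; the walker moves to a uniform neighbor.
P = np.array([[0.0, 1.0, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 1.0, 0.0]])
print(hitting_times(P, target=2))  # [4. 3. 0.]
```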

Point Cloud GAN

1 code implementation13 Oct 2018 Chun-Liang Li, Manzil Zaheer, Yang Zhang, Barnabas Poczos, Ruslan Salakhutdinov

In this paper, we first show a straightforward extension of existing GAN algorithm is not applicable to point clouds, because the constraint required for discriminators is undefined for set data.

Object Recognition

AutoLoss: Learning Discrete Schedules for Alternate Optimization

1 code implementation4 Oct 2018 Haowen Xu, Hao Zhang, Zhiting Hu, Xiaodan Liang, Ruslan Salakhutdinov, Eric Xing

Many machine learning problems involve iteratively and alternately optimizing different task objectives with respect to different sets of parameters.

Image Generation Machine Translation +2

AutoLoss: Learning Discrete Schedule for Alternate Optimization

no code implementations ICLR 2019 Haowen Xu, Hao Zhang, Zhiting Hu, Xiaodan Liang, Ruslan Salakhutdinov, Eric Xing

Many machine learning problems involve iteratively and alternately optimizing different task objectives with respect to different sets of parameters.

Image Generation Machine Translation +1

Connecting the Dots Between MLE and RL for Sequence Generation

no code implementations ICLR Workshop drlStructPred 2019 Bowen Tan*, Zhiting Hu*, Zichao Yang, Ruslan Salakhutdinov, Eric P. Xing

We present a generalized entropy regularized policy optimization formulation, and show that the apparently divergent algorithms can all be reformulated as special instances of the framework, with the only difference being the configurations of reward function and a couple of hyperparameters.

Machine Translation Text Summarization +1

Style Transfer Through Multilingual and Feedback-Based Back-Translation

no code implementations17 Sep 2018 Shrimai Prabhumoye, Yulia Tsvetkov, Alan W. Black, Ruslan Salakhutdinov

Style transfer is the task of transferring an attribute of a sentence (e. g., formality) while maintaining its semantic content.

Style Transfer Translation

Open Domain Question Answering Using Early Fusion of Knowledge Bases and Text

1 code implementation EMNLP 2018 Haitian Sun, Bhuwan Dhingra, Manzil Zaheer, Kathryn Mazaitis, Ruslan Salakhutdinov, William W. Cohen

In this paper we look at a more practical setting, namely QA over the combination of a KB and entity-linked text, which is appropriate when an incomplete KB is available with a large text corpus.

Graph Representation Learning Open-Domain Question Answering

Learning Cognitive Models using Neural Networks

no code implementations21 Jun 2018 Devendra Singh Chaplot, Christopher MacLellan, Ruslan Salakhutdinov, Kenneth Koedinger

Secondly, for domains where a cognitive model is available, we show that representations learned through CogRL can be used to get accurate estimates of skill difficulty and learning rate parameters without using any student performance data.

Model Discovery

Gated Path Planning Networks

2 code implementations ICML 2018 Lisa Lee, Emilio Parisotto, Devendra Singh Chaplot, Eric Xing, Ruslan Salakhutdinov

Value Iteration Networks (VINs) are effective differentiable path planning modules that can be used by agents to perform navigation while still maintaining end-to-end differentiability of the entire architecture.

Learning Factorized Multimodal Representations

2 code implementations ICLR 2019 Yao-Hung Hubert Tsai, Paul Pu Liang, Amir Zadeh, Louis-Philippe Morency, Ruslan Salakhutdinov

Multimodal discriminative factors are shared across all modalities and contain joint multimodal features required for discriminative tasks such as sentiment prediction.

Representation Learning

GLoMo: Unsupervisedly Learned Relational Graphs as Transferable Representations

1 code implementation14 Jun 2018 Zhilin Yang, Jake Zhao, Bhuwan Dhingra, Kaiming He, William W. Cohen, Ruslan Salakhutdinov, Yann Lecun

We also show that the learned graphs are generic enough to be transferred to different embeddings on which the graphs have not been trained (including GloVe embeddings, ELMo embeddings, and task-specific RNN hidden unit), or embedding-free units such as image pixels.

Image Classification Natural Language Inference +4

Deep Neural Networks with Multi-Branch Architectures Are Less Non-Convex

1 code implementation6 Jun 2018 Hongyang Zhang, Junru Shao, Ruslan Salakhutdinov

We show that one cause of such success is that the multi-branch architecture is less non-convex in terms of duality gap.

How Many Samples are Needed to Estimate a Convolutional or Recurrent Neural Network?

no code implementations NeurIPS 2018 Simon S. Du, Yining Wang, Xiyu Zhai, Sivaraman Balakrishnan, Ruslan Salakhutdinov, Aarti Singh

It is widely believed that the practical success of Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs) owes to the fact that CNNs and RNNs use a more compact parametric representation than their Fully-Connected Neural Network (FNN) counterparts, and consequently require fewer training examples to accurately estimate their parameters.

Style Transfer Through Back-Translation

3 code implementations ACL 2018 Shrimai Prabhumoye, Yulia Tsvetkov, Ruslan Salakhutdinov, Alan W. Black

We first learn a latent representation of the input sentence which is grounded in a language translation model in order to better preserve the meaning of the sentence while reducing stylistic properties.

Style Transfer Text Style Transfer +1

Structured Control Nets for Deep Reinforcement Learning

1 code implementation ICML 2018 Mario Srouji, Jian Zhang, Ruslan Salakhutdinov

The proposed Structured Control Net (SCN) splits the generic MLP into two separate sub-modules: a nonlinear control module and a linear control module.

Decision Making Reinforcement Learning
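The split described in the abstract can be sketched in a few lines: the action is the sum of a linear control term and a nonlinear MLP term (the layer sizes and single-hidden-layer MLP here are my own assumptions for illustration).

```python
import numpy as np

rng = np.random.default_rng(0)

def scn_policy(state, params):
    """Structured Control Net sketch: action = linear(state) + MLP(state)."""
    K, W1, b1, W2, b2 = params
    linear_term = K @ state            # linear control module
    hidden = np.tanh(W1 @ state + b1)  # nonlinear control module
    nonlinear_term = W2 @ hidden + b2
    return linear_term + nonlinear_term

obs_dim, hid, act_dim = 8, 16, 2
params = (rng.normal(size=(act_dim, obs_dim)),
          rng.normal(size=(hid, obs_dim)), np.zeros(hid),
          rng.normal(size=(act_dim, hid)), np.zeros(act_dim))
action = scn_policy(rng.normal(size=obs_dim), params)
print(action.shape)  # (2,)
```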

Post Selection Inference with Incomplete Maximum Mean Discrepancy Estimator

no code implementations ICLR 2019 Makoto Yamada, Denny Wu, Yao-Hung Hubert Tsai, Ichiro Takeuchi, Ruslan Salakhutdinov, Kenji Fukumizu

In the paper, we propose a post selection inference (PSI) framework for divergence measure, which can select a set of statistically significant features that discriminate two distributions.

Change Point Detection

On Characterizing the Capacity of Neural Networks using Algebraic Topology

no code implementations ICLR 2018 William H. Guss, Ruslan Salakhutdinov

The learnability of different neural architectures can be characterized directly by computable measures of data complexity.

Transformation Autoregressive Networks

no code implementations ICML 2018 Junier B. Oliva, Avinava Dubey, Manzil Zaheer, Barnabás Póczos, Ruslan Salakhutdinov, Eric P. Xing, Jeff Schneider

Further, through a comprehensive study over both real-world and synthetic data, we show that jointly leveraging transformations of variables and autoregressive conditional models results in a considerable improvement in performance.

Density Estimation Outlier Detection

Active Neural Localization

1 code implementation ICLR 2018 Devendra Singh Chaplot, Emilio Parisotto, Ruslan Salakhutdinov

The results on the 2D environments show the effectiveness of the learned policy in an idealistic setting while results on the 3D environments demonstrate the model's capability of learning the policy and perceptual model jointly from raw-pixel based RGB observations.

Game of Doom

Knowledge-based Word Sense Disambiguation using Topic Models

no code implementations5 Jan 2018 Devendra Singh Chaplot, Ruslan Salakhutdinov

In this paper, we leverage the formalism of topic models to design a WSD system that scales linearly with the number of words in the context.

Topic Models Word Sense Disambiguation

Learning Deep Generative Models With Discrete Latent Variables

no code implementations ICLR 2018 Hengyuan Hu, Ruslan Salakhutdinov

There have been numerous recent advancements on learning deep generative models with latent variables, thanks to the reparameterization trick that allows deep directed models to be trained effectively.

Density Estimation

Discovering Order in Unordered Datasets: Generative Markov Networks

no code implementations ICLR 2018 Yao-Hung Hubert Tsai, Han Zhao, Nebojsa Jojic, Ruslan Salakhutdinov

The assumption that data samples are independently identically distributed is the backbone of many learning algorithms.

Breaking the Softmax Bottleneck: A High-Rank RNN Language Model

9 code implementations ICLR 2018 Zhilin Yang, Zihang Dai, Ruslan Salakhutdinov, William W. Cohen

We formulate language modeling as a matrix factorization problem, and show that the expressiveness of Softmax-based models (including the majority of neural language models) is limited by a Softmax bottleneck.

Language Modelling Word Embeddings
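The matrix-factorization view above has a concrete numerical consequence: the matrix of log-probabilities produced by a single softmax over d-dimensional context embeddings has rank at most d + 1, regardless of vocabulary size. A minimal numpy check (random embeddings; sizes are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
n_contexts, vocab, d = 50, 200, 10

H = rng.normal(size=(n_contexts, d))  # context embeddings
W = rng.normal(size=(vocab, d))       # word embeddings
logits = H @ W.T                      # rank at most d
log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))

# Row-wise normalization is a rank-1 correction, so rank(log_probs) <= d + 1,
# far below min(n_contexts, vocab) -- the "Softmax bottleneck".
rank = np.linalg.matrix_rank(log_probs)
print(rank)
```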

Learning Markov Chain in Unordered Dataset

no code implementations ICLR 2018 Yao-Hung Hubert Tsai, Han Zhao, Ruslan Salakhutdinov, Nebojsa Jojic

In this technical report, we introduce OrderNet that can be used to extract the order of data instances in an unsupervised way.

Improving One-Shot Learning through Fusing Side Information

no code implementations23 Oct 2017 Yao-Hung Hubert Tsai, Ruslan Salakhutdinov

We introduce two statistical approaches for fusing side information into data representation learning to improve one-shot learning.

One-Shot Learning Representation Learning

A Generic Approach for Escaping Saddle points

no code implementations5 Sep 2017 Sashank J. Reddi, Manzil Zaheer, Suvrit Sra, Barnabas Poczos, Francis Bach, Ruslan Salakhutdinov, Alexander J. Smola

A central challenge to using first-order methods for optimizing nonconvex problems is the presence of saddle points.

Second-order methods

Block-Normalized Gradient Method: An Empirical Study for Training Deep Neural Network

2 code implementations ICLR 2018 Adams Wei Yu, Lei Huang, Qihang Lin, Ruslan Salakhutdinov, Jaime Carbonell

In this paper, we propose a generic and simple strategy for utilizing stochastic gradient information in optimization.

Gated-Attention Architectures for Task-Oriented Language Grounding

1 code implementation22 Jun 2017 Devendra Singh Chaplot, Kanthashree Mysore Sathyendra, Rama Kumar Pasumarthi, Dheeraj Rajagopal, Ruslan Salakhutdinov

To perform tasks specified by natural language instructions, autonomous agents need to extract semantically meaningful representations of language and map them to visual elements and actions in the environment.

Imitation Learning
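The gated-attention idea can be sketched as a Hadamard-product gate: a sigmoid attention vector computed from the instruction embedding scales the image feature maps channel-wise. Shapes and the linear projection below are illustrative assumptions, not the paper's exact architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def gated_attention(image_feat, instr_embedding, W):
    """Hadamard-product gating: one sigmoid gate per feature channel,
    derived from the instruction, scales that channel's feature map."""
    gate = 1.0 / (1.0 + np.exp(-(W @ instr_embedding)))  # shape (C,)
    return image_feat * gate[:, None, None]              # broadcast over H, W

C, H, Wd, d_text = 16, 7, 7, 32
feat = rng.normal(size=(C, H, Wd))      # convolutional image features
instr = rng.normal(size=d_text)         # instruction embedding
W = rng.normal(size=(C, d_text))        # assumed linear projection
out = gated_attention(feat, instr, W)
print(out.shape)  # (16, 7, 7)
```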

On Unifying Deep Generative Models

no code implementations ICLR 2018 Zhiting Hu, Zichao Yang, Ruslan Salakhutdinov, Eric P. Xing

Generative Adversarial Networks (GANs) and Variational Autoencoders (VAEs), as emerging families for generative model learning, have largely been considered two distinct paradigms and have received extensive independent study.

Good Semi-supervised Learning that Requires a Bad GAN

1 code implementation NeurIPS 2017 Zihang Dai, Zhilin Yang, Fan Yang, William W. Cohen, Ruslan Salakhutdinov

Semi-supervised learning methods based on generative adversarial networks (GANs) obtained strong empirical results, but it is not clear 1) how the discriminator benefits from joint training with a generator, and 2) why good semi-supervised classification performance and a good generator cannot be obtained at the same time.

General Classification Semi-Supervised Image Classification

Geometry of Optimization and Implicit Regularization in Deep Learning

1 code implementation8 May 2017 Behnam Neyshabur, Ryota Tomioka, Ruslan Salakhutdinov, Nathan Srebro

We argue that the optimization plays a crucial role in generalization of deep learning models through implicit regularization.

Question Answering from Unstructured Text by Retrieval and Comprehension

no code implementations26 Mar 2017 Yusuke Watanabe, Bhuwan Dhingra, Ruslan Salakhutdinov

Open domain Question Answering (QA) systems must interact with external knowledge sources, such as web pages, to find relevant information.

Open-Domain Question Answering

Learning Robust Visual-Semantic Embeddings

no code implementations ICCV 2017 Yao-Hung Hubert Tsai, Liang-Kang Huang, Ruslan Salakhutdinov

Many of the existing methods for learning joint embeddings of images and text use only supervised information from paired images and their textual attributes.

Generalized Few-Shot Learning Representation Learning

Deep Sets

2 code implementations NeurIPS 2017 Manzil Zaheer, Satwik Kottur, Siamak Ravanbakhsh, Barnabas Poczos, Ruslan Salakhutdinov, Alexander Smola

Our main theorem characterizes the permutation invariant functions and provides a family of functions to which any permutation invariant objective function must belong.

Anomaly Detection Outlier Detection +1
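The characterization above says any permutation-invariant function of a set can be written in the form rho(sum_i phi(x_i)). A minimal numpy sketch, with phi and rho as assumed toy (fixed, random-weight) networks:

```python
import numpy as np

rng = np.random.default_rng(0)
W_phi = rng.normal(size=(3, 5))  # per-element encoder phi (toy: linear + tanh)
w_rho = rng.normal(size=5)       # set-level decoder rho (toy: linear)

def deep_set(X):
    """rho(sum_i phi(x_i)): permutation-invariant by construction."""
    phi = np.tanh(X @ W_phi)     # encode each element independently
    pooled = phi.sum(axis=0)     # sum pooling discards element order
    return float(pooled @ w_rho)

X = rng.normal(size=(4, 3))      # a set of 4 elements in R^3
perm = rng.permutation(4)
print(np.isclose(deep_set(X), deep_set(X[perm])))  # order does not matter
```

Sum pooling is what makes the output invariant; swapping it for max or mean pooling preserves invariance but changes which set functions are representable.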

Linguistic Knowledge as Memory for Recurrent Neural Networks

no code implementations7 Mar 2017 Bhuwan Dhingra, Zhilin Yang, William W. Cohen, Ruslan Salakhutdinov

We introduce a model that encodes such graphs as explicit memory in recurrent neural networks, and use it to model coreference relations in text.

Reading Comprehension

Toward Controlled Generation of Text

3 code implementations ICML 2017 Zhiting Hu, Zichao Yang, Xiaodan Liang, Ruslan Salakhutdinov, Eric P. Xing

Generic generation and manipulation of text is challenging and has had limited success compared to recent deep generative modeling in the visual domain.

A Comparative Study of Word Embeddings for Reading Comprehension

no code implementations2 Mar 2017 Bhuwan Dhingra, Hanxiao Liu, Ruslan Salakhutdinov, William W. Cohen

The focus of past machine learning research for Reading Comprehension tasks has been primarily on the design of novel deep learning architectures.

Reading Comprehension Word Embeddings

Improved Variational Autoencoders for Text Modeling using Dilated Convolutions

3 code implementations ICML 2017 Zichao Yang, Zhiting Hu, Ruslan Salakhutdinov, Taylor Berg-Kirkpatrick

Recent work on generative modeling of text has found that variational auto-encoders (VAE) incorporating LSTM decoders perform worse than simpler LSTM language models (Bowman et al., 2015).

Text Generation

Neural Map: Structured Memory for Deep Reinforcement Learning

1 code implementation ICLR 2018 Emilio Parisotto, Ruslan Salakhutdinov

In this paper, we develop a memory system with an adaptable write operator that is customized to the sorts of 3D environments that DRL agents typically interact with.

Reinforcement Learning

Semi-Supervised QA with Generative Domain-Adaptive Nets

no code implementations ACL 2017 Zhilin Yang, Junjie Hu, Ruslan Salakhutdinov, William W. Cohen

In this framework, we train a generative model to generate questions based on the unlabeled text, and combine model-generated questions with human-generated questions for training question answering models.

Domain Adaptation Question Answering +1

The More You Know: Using Knowledge Graphs for Image Classification

no code implementations CVPR 2017 Kenneth Marino, Ruslan Salakhutdinov, Abhinav Gupta

One characteristic that sets humans apart from modern learning-based computer vision algorithms is the ability to acquire knowledge about the world and use that knowledge to reason about the visual world.

Classification General Classification +3

Spatially Adaptive Computation Time for Residual Networks

1 code implementation CVPR 2017 Michael Figurnov, Maxwell D. Collins, Yukun Zhu, Li Zhang, Jonathan Huang, Dmitry Vetrov, Ruslan Salakhutdinov

This paper proposes a deep learning architecture based on Residual Network that dynamically adjusts the number of executed layers for the regions of the image.

Classification General Classification +3

On the Quantitative Analysis of Decoder-Based Generative Models

2 code implementations14 Nov 2016 Yuhuai Wu, Yuri Burda, Ruslan Salakhutdinov, Roger Grosse

The past several years have seen remarkable progress in generative models which produce convincing samples of images and other modalities.

Words or Characters? Fine-grained Gating for Reading Comprehension

1 code implementation6 Nov 2016 Zhilin Yang, Bhuwan Dhingra, Ye Yuan, Junjie Hu, William W. Cohen, Ruslan Salakhutdinov

Previous work combines word-level and character-level representations using concatenation or scalar weighting, which is suboptimal for high-level tasks like reading comprehension.

Question Answering Reading Comprehension +1
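In place of concatenation or a scalar weight, a fine-grained gate mixes the word- and character-level vectors element-wise, conditioned on token features. The sketch below is an assumed minimal form (the feature vector, projection, and dimensions are illustrative, not the paper's exact setup).

```python
import numpy as np

rng = np.random.default_rng(0)
d = 6
Wg = rng.normal(size=(d, d))  # assumed gate projection

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fine_grained_gate(word_vec, char_vec, feat):
    """Element-wise gate g in (0,1)^d decides, per dimension, how much
    to take from the word-level vs the character-level representation."""
    g = sigmoid(Wg @ feat)
    return g * word_vec + (1.0 - g) * char_vec

word = rng.normal(size=d)
char = rng.normal(size=d)
feat = rng.normal(size=d)  # token features (e.g. POS/NER/frequency embeddings)
mixed = fine_grained_gate(word, char, feat)
print(mixed.shape)  # (6,)
```

Each output dimension is a convex combination of the two representations, so rare words can lean on character evidence while frequent words keep their word embedding.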

Stochastic Variational Deep Kernel Learning

no code implementations NeurIPS 2016 Andrew Gordon Wilson, Zhiting Hu, Ruslan Salakhutdinov, Eric P. Xing

We propose a novel deep kernel learning model and stochastic variational inference procedure which generalizes deep kernel learning approaches to enable classification, multi-task learning, additive covariance structures, and stochastic gradient training.

Gaussian Processes General Classification +2

On Multiplicative Integration with Recurrent Neural Networks

no code implementations NeurIPS 2016 Yuhuai Wu, Saizheng Zhang, Ying Zhang, Yoshua Bengio, Ruslan Salakhutdinov

We introduce a general and simple structural design called Multiplicative Integration (MI) to improve recurrent neural networks (RNNs).

Language Modelling
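Multiplicative Integration replaces the additive W x + U h inside an RNN cell with a Hadamard-product interaction; its general form is phi(alpha * Wx * Uh + beta1 * Uh + beta2 * Wx + b), where * is element-wise. A minimal sketch with assumed dimensions and untrained weights:

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_h = 4, 8
W = rng.normal(size=(d_h, d_in))
U = rng.normal(size=(d_h, d_h))
alpha, beta1, beta2, b = np.ones(d_h), np.ones(d_h), np.ones(d_h), np.zeros(d_h)

def mi_rnn_step(x, h):
    """One MI-RNN step: the Wx and Uh terms interact multiplicatively
    (Hadamard product) instead of being merely summed."""
    wx, uh = W @ x, U @ h
    return np.tanh(alpha * wx * uh + beta1 * uh + beta2 * wx + b)

h = np.zeros(d_h)
for x in rng.normal(size=(5, d_in)):  # run over a length-5 input sequence
    h = mi_rnn_step(x, h)
print(h.shape)  # (8,)
```

Setting alpha to zero recovers the ordinary additive RNN update, which is why MI is a drop-in structural change rather than a new architecture.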