Search Results for author: Prajit Ramachandran

Found 12 papers, 7 papers with code

Seq-NMS for Video Object Detection

1 code implementation • 26 Feb 2016 • Wei Han, Pooya Khorrami, Tom Le Paine, Prajit Ramachandran, Mohammad Babaeizadeh, Honghui Shi, Jianan Li, Shuicheng Yan, Thomas S. Huang

Video object detection is challenging because objects that are easily detected in one frame may be difficult to detect in another frame within the same clip.

General Classification • Object +4
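The paper's core idea is to link per-frame detections into temporal sequences via dynamic programming and then rescore the linked boxes. A minimal sketch of that linking step, assuming axis-aligned `[x1, y1, x2, y2]` boxes, an IoU linking threshold of 0.5, and mean rescoring (function and variable names are illustrative, not the authors' code):

```python
def iou(a, b):
    # intersection-over-union for axis-aligned boxes [x1, y1, x2, y2]
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def best_sequence(frames, iou_thr=0.5):
    # frames: list of (boxes, scores) per frame.
    # dp[t][i]: best cumulative score of a sequence ending at box i of frame t
    dp, back = [], []
    for t, (boxes, scores) in enumerate(frames):
        dp_t, back_t = list(scores), [None] * len(boxes)
        if t > 0:
            for i, box in enumerate(boxes):
                for j, prev in enumerate(frames[t - 1][0]):
                    if iou(box, prev) >= iou_thr and dp[-1][j] + scores[i] > dp_t[i]:
                        dp_t[i] = dp[-1][j] + scores[i]
                        back_t[i] = j
        dp.append(dp_t)
        back.append(back_t)
    # best endpoint anywhere, then walk the back-pointers
    t, i = max(((t, i) for t in range(len(dp)) for i in range(len(dp[t]))),
               key=lambda ti: dp[ti[0]][ti[1]])
    seq = [(t, i)]
    while back[t][i] is not None:
        t, i = t - 1, back[t][i]
        seq.append((t, i))
    seq.reverse()
    rescore = sum(frames[t][1][i] for t, i in seq) / len(seq)  # mean rescoring
    return seq, rescore
```

Linking across adjacent frames lets a box that scores poorly in one frame be rescued by strong detections of the same object in neighboring frames, which is exactly the failure mode the abstract describes.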

Unsupervised Pretraining for Sequence to Sequence Learning

no code implementations • EMNLP 2017 • Prajit Ramachandran, Peter J. Liu, Quoc V. Le

We apply this method to challenging benchmarks in machine translation and abstractive summarization and find that it significantly improves the subsequent supervised models.

Abstractive Text Summarization • Machine Translation +1

Fast Wavenet Generation Algorithm

6 code implementations • 29 Nov 2016 • Tom Le Paine, Pooya Khorrami, Shiyu Chang, Yang Zhang, Prajit Ramachandran, Mark A. Hasegawa-Johnson, Thomas S. Huang

This paper presents Fast Wavenet, an efficient implementation of the Wavenet generation process.
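Fast Wavenet avoids recomputing the whole dilated stack for every generated sample by caching each layer's past activations in per-layer queues, so each new sample costs one pass through the layers rather than a pass over the full history. A toy scalar-channel sketch, assuming plain tanh layers with kernel size 2 (the real model uses gated units, residual connections, and many channels; all names here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
dilations = [1, 2, 4]
# each layer: kernel-size-2 causal conv with weights w = (w_past, w_now)
weights = [rng.standard_normal(2) for _ in dilations]

def naive_step(history):
    # naive generation: rerun every layer over the entire history
    x = np.asarray(history, dtype=float)
    for w, d in zip(weights, dilations):
        pad = np.concatenate([np.zeros(d), x])
        x = np.tanh(w[0] * pad[:-d] + w[1] * pad[d:])
    return x[-1]

class FastGenerator:
    # per-layer ring buffers ("queues") hold each layer's input from d steps back
    def __init__(self):
        self.queues = [np.zeros(d) for d in dilations]
        self.pos = [0] * len(dilations)

    def step(self, sample):
        x = float(sample)
        for i, (w, d) in enumerate(zip(weights, dilations)):
            old = self.queues[i][self.pos[i]]   # layer input from d steps ago
            self.queues[i][self.pos[i]] = x     # cache current input for later
            self.pos[i] = (self.pos[i] + 1) % d
            x = np.tanh(w[0] * old + w[1] * x)
        return x
```

Sample by sample, the queued generator produces exactly the same outputs as the naive full recomputation while touching only one cached value per layer.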

Stein Variational Policy Gradient

no code implementations • 7 Apr 2017 • Yang Liu, Prajit Ramachandran, Qiang Liu, Jian Peng

Policy gradient methods have been successfully applied to many complex reinforcement learning problems.

Bayesian Inference • Continuous Control +3
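SVPG applies Stein variational gradient descent (SVGD) to a set of policy-parameter particles: each particle follows a kernel-weighted gradient term plus a repulsive term that keeps the particles diverse. A toy 1-D sketch of the SVGD update on a standard-normal target, assuming a fixed-bandwidth RBF kernel (the bandwidth choice and all names are illustrative):

```python
import numpy as np

def rbf_kernel(theta, h=1.0):
    # pairwise RBF kernel K[j, i] = k(theta_j, theta_i) and its gradient
    # dK[j, i] = d k(theta_j, theta_i) / d theta_j
    diff = theta[:, None] - theta[None, :]
    K = np.exp(-diff**2 / (2 * h**2))
    dK = -diff / h**2 * K
    return K, dK

def svgd_step(theta, grad_logp, stepsize=0.1):
    # phi(theta_i) = (1/n) sum_j [ k(theta_j, theta_i) * grad log p(theta_j)
    #                              + grad_{theta_j} k(theta_j, theta_i) ]
    n = len(theta)
    K, dK = rbf_kernel(theta)
    phi = (K @ grad_logp + dK.sum(axis=0)) / n
    return theta + stepsize * phi
```

The first term drags particles toward high-probability regions; the second pushes nearby particles apart, which in the policy setting encourages a diverse ensemble of policies rather than a single point estimate.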

Searching for Activation Functions

21 code implementations • ICLR 2018 • Prajit Ramachandran, Barret Zoph, Quoc V. Le

The simplicity of Swish and its similarity to ReLU make it easy for practitioners to replace ReLUs with Swish units in any neural network.

Image Classification
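The activation the search discovered, Swish, is f(x) = x · sigmoid(βx); with β = 1 the ReLU-to-Swish swap the abstract mentions is a one-liner. A minimal sketch:

```python
import numpy as np

def swish(x, beta=1.0):
    # Swish: x * sigmoid(beta * x); beta = 1 gives the commonly used form
    return x * (1.0 / (1.0 + np.exp(-beta * x)))
```

Like ReLU, Swish is unbounded above and roughly linear for large positive inputs, but it is smooth and non-monotonic near zero, which the paper credits for its gains.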

Backprop Evolution

no code implementations • 8 Aug 2018 • Maximilian Alber, Irwan Bello, Barret Zoph, Pieter-Jan Kindermans, Prajit Ramachandran, Quoc Le

The back-propagation algorithm is the cornerstone of deep learning.

Diversity and Depth in Per-Example Routing Models

no code implementations • ICLR 2019 • Prajit Ramachandran, Quoc V. Le

Both architectural diversity and routing depth can increase the representational power of a routing network.

Multi-Task Learning
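In a per-example routing model, a router scores a set of candidate operations and sends each input through its chosen op, so different examples traverse different computation paths. A hard (argmax) routing sketch, with the gating form and all names assumed for illustration:

```python
import numpy as np

def route(x, gate_w, ops):
    # x: (N, D) batch; gate_w: (D, num_ops) gating weights;
    # ops: list of callables, one candidate operation per routing choice
    logits = x @ gate_w              # per-example gating scores
    choice = logits.argmax(axis=1)   # hard per-example routing decision
    out = np.empty_like(x)
    for i, c in enumerate(choice):
        out[i] = ops[c](x[i])        # each example runs only its chosen op
    return out, choice
```

Architectural diversity corresponds to the `ops` list containing heterogeneous operations, and routing depth to stacking several such routed layers.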

Stand-Alone Self-Attention in Vision Models

8 code implementations • NeurIPS 2019 • Prajit Ramachandran, Niki Parmar, Ashish Vaswani, Irwan Bello, Anselm Levskaya, Jonathon Shlens

The natural question that arises is whether attention can be a stand-alone primitive for vision models instead of serving as just an augmentation on top of convolutions.

Object Detection
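A stand-alone local self-attention layer replaces convolution by letting each pixel attend over a small spatial neighborhood instead of applying fixed filter weights. A toy single-head sketch over k×k windows, assuming shared projection matrices and omitting the relative position embeddings and multi-head structure the paper uses:

```python
import numpy as np

def local_self_attention_2d(x, wq, wk, wv, k=3):
    # x: (H, W, C) feature map; wq/wk/wv: (C, C) projections; k: odd window size
    H, W, C = x.shape
    r = k // 2
    pad = np.pad(x, ((r, r), (r, r), (0, 0)))   # zero-pad spatial borders
    q = x @ wq
    out = np.zeros_like(x)
    for i in range(H):
        for j in range(W):
            window = pad[i:i + k, j:j + k].reshape(-1, C)  # k*k neighbors
            keys = window @ wk
            vals = window @ wv
            logits = keys @ q[i, j] / np.sqrt(C)
            attn = np.exp(logits - logits.max())
            attn /= attn.sum()                   # softmax over the neighborhood
            out[i, j] = attn @ vals
    return out
```

Unlike a convolution, the mixing weights here depend on the content of the window (through the query-key dot products), not only on relative position.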

Revisiting Spatial Invariance with Low-Rank Local Connectivity

no code implementations • ICML 2020 • Gamaleldin F. Elsayed, Prajit Ramachandran, Jonathon Shlens, Simon Kornblith

Convolutional neural networks are among the most successful architectures in deep learning, a success at least partially attributable to the efficacy of spatial invariance as an inductive bias.

Inductive Bias
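The paper's low-rank locally connected layers relax strict weight sharing: K shared filter banks are mixed with spatially varying combining weights, so each position gets its own kernel drawn from a low-rank basis. A 1-D single-channel sketch (names illustrative); with K = 1 and constant combining weights it reduces to an ordinary convolution:

```python
import numpy as np

def low_rank_local_conv1d(x, banks, combine):
    # x: (T,) signal; banks: (K, k) shared 1-D filter banks;
    # combine: (T, K) per-position mixing weights over the banks
    T, = x.shape
    K, k = banks.shape
    r = k // 2
    pad = np.concatenate([np.zeros(r), x, np.zeros(r)])  # zero-pad ends
    out = np.empty(T)
    for t in range(T):
        kern = combine[t] @ banks    # position-specific low-rank kernel (k,)
        out[t] = kern @ pad[t:t + k]
    return out
```

Varying `combine` across positions moves the layer away from spatial invariance toward full local connectivity, which is the knob the paper studies.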

Revisiting Fundamentals of Experience Replay

2 code implementations • ICML 2020 • William Fedus, Prajit Ramachandran, Rishabh Agarwal, Yoshua Bengio, Hugo Larochelle, Mark Rowland, Will Dabney

Experience replay is central to off-policy algorithms in deep reinforcement learning (RL), but there remain significant gaps in our understanding.

DQN Replay Dataset • Q-Learning +1
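The study varies knobs such as replay capacity and the ratio of gradient updates to collected data on top of a standard uniform replay buffer. A minimal sketch of such a buffer (not the authors' implementation; a bounded FIFO store with uniform sampling):

```python
import random
from collections import deque

class ReplayBuffer:
    # fixed-capacity FIFO buffer: oldest transitions are evicted first
    def __init__(self, capacity):
        self.buffer = deque(maxlen=capacity)

    def add(self, transition):
        self.buffer.append(transition)

    def sample(self, batch_size):
        # uniform sampling without replacement from the stored transitions
        return random.sample(list(self.buffer), batch_size)

    def __len__(self):
        return len(self.buffer)
```

Changing `capacity` controls how old the sampled experience can be, one of the understudied design choices the paper examines.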

Scaling Local Self-Attention for Parameter Efficient Visual Backbones

7 code implementations • CVPR 2021 • Ashish Vaswani, Prajit Ramachandran, Aravind Srinivas, Niki Parmar, Blake Hechtman, Jonathon Shlens

Self-attention models have recently been shown to have encouraging improvements on accuracy-parameter trade-offs compared to baseline convolutional models such as ResNet-50.

Image Classification • Instance Segmentation +4
