Search Results for author: Aditya Ramesh

Found 18 papers, 10 papers with code

Exploring through Random Curiosity with General Value Functions

no code implementations · 18 Nov 2022 · Aditya Ramesh, Louis Kirsch, Sjoerd van Steenkiste, Jürgen Schmidhuber

Furthermore, RC-GVF significantly outperforms previous methods in the absence of ground-truth episodic counts in the partially observable MiniGrid environments.

Efficient Exploration

The Benefits of Model-Based Generalization in Reinforcement Learning

no code implementations · 4 Nov 2022 · Kenny Young, Aditya Ramesh, Louis Kirsch, Jürgen Schmidhuber

Model-Based Reinforcement Learning (RL) is widely believed to have the potential to improve sample efficiency by allowing an agent to synthesize large amounts of imagined experience.

Model-based Reinforcement Learning, Reinforcement Learning (+1)

General Policy Evaluation and Improvement by Learning to Identify Few But Crucial States

1 code implementation · 4 Jul 2022 · Francesco Faccio, Aditya Ramesh, Vincent Herrmann, Jean Harb, Jürgen Schmidhuber

In continuous control problems with infinitely many states, our value function minimizes its prediction error by simultaneously learning a small set of "probing states" and a mapping from actions produced in probing states to the policy's return.

Continuous Control, Zero-Shot Learning

Hierarchical Text-Conditional Image Generation with CLIP Latents

2 code implementations · 13 Apr 2022 · Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, Mark Chen

Contrastive models like CLIP have been shown to learn robust representations of images that capture both semantics and style.

Ranked #20 on Text-to-Image Generation on COCO (using extra training data)

Conditional Image Generation, Zero-Shot Text-to-Image Generation

GLIDE: Towards Photorealistic Image Generation and Editing with Text-Guided Diffusion Models

1 code implementation · 20 Dec 2021 · Alex Nichol, Prafulla Dhariwal, Aditya Ramesh, Pranav Shyam, Pamela Mishkin, Bob McGrew, Ilya Sutskever, Mark Chen

Diffusion models have recently been shown to generate high-quality synthetic images, especially when paired with a guidance technique to trade off diversity for fidelity.
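The guidance trade-off between diversity and fidelity mentioned in the abstract can be illustrated with classifier-free guidance, where the sampler interpolates between unconditional and conditional noise predictions. This is a minimal sketch, not GLIDE's implementation; the function name and toy inputs are illustrative:

```python
import numpy as np

def guided_noise_estimate(eps_uncond, eps_cond, guidance_scale):
    """Blend unconditional and conditional noise predictions.

    guidance_scale = 1.0 recovers the conditional model; larger values
    push samples harder toward the condition, trading diversity for fidelity.
    """
    return eps_uncond + guidance_scale * (eps_cond - eps_uncond)

# Toy stand-ins for the two model predictions at one denoising step
eps_u = np.zeros(4)
eps_c = np.ones(4)
print(guided_noise_estimate(eps_u, eps_c, 3.0))  # [3. 3. 3. 3.]
```

In practice the two predictions come from the same diffusion network evaluated with and without the text conditioning, and the blended estimate replaces the conditional one at every sampling step.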

Ranked #23 on Text-to-Image Generation on COCO (using extra training data)

Image Inpainting, Zero-Shot Text-to-Image Generation

Zero-Shot Text-to-Image Generation

9 code implementations · 24 Feb 2021 · Aditya Ramesh, Mikhail Pavlov, Gabriel Goh, Scott Gray, Chelsea Voss, Alec Radford, Mark Chen, Ilya Sutskever

Text-to-image generation has traditionally focused on finding better modeling assumptions for training on a fixed dataset.

Ranked #35 on Text-to-Image Generation on COCO (using extra training data)

Text-to-Image Generation, Zero-Shot Text-to-Image Generation

Recurrent Neural-Linear Posterior Sampling for Non-Stationary Contextual Bandits

1 code implementation · 9 Jul 2020 · Aditya Ramesh, Paulo Rauber, Jürgen Schmidhuber

An agent in a non-stationary contextual bandit problem should balance exploration with the exploitation of (periodic or structured) patterns present in its previous experiences.
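The exploration-exploitation principle underlying posterior sampling can be sketched on a plain stationary Bernoulli bandit; the paper extends this idea with recurrent neural-linear features for non-stationary, contextual settings. The arm means and seed below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Posterior (Thompson) sampling: draw a reward estimate from each arm's
# Beta posterior and pull the arm with the highest sample. Uncertain arms
# occasionally win the draw, which yields exploration for free.
true_means = [0.3, 0.7]   # hidden reward probabilities (illustrative)
successes = np.ones(2)    # Beta(1, 1) priors
failures = np.ones(2)

for _ in range(1000):
    samples = rng.beta(successes, failures)
    arm = int(np.argmax(samples))
    reward = rng.random() < true_means[arm]
    successes[arm] += reward
    failures[arm] += 1 - reward

print(successes / (successes + failures))  # per-arm posterior mean estimates
```

Over time the posterior of the better arm concentrates and it is pulled almost exclusively, while the worse arm is sampled only as long as its posterior remains plausibly competitive.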

Multi-Armed Bandits

CompressNet: Generative Compression at Extremely Low Bitrates

no code implementations · 14 Jun 2020 · Suraj Kiran Raman, Aditya Ramesh, Vijayakrishna Naganoor, Shubham Dash, Giridharan Kumaravelu, Honglak Lee

Compressing images at extremely low bitrates (< 0.1 bpp) has always been a challenging task, since reconstruction quality degrades significantly under the strong constraint on the number of bits allocated to the compressed data.

A Spectral Regularizer for Unsupervised Disentanglement

no code implementations · 4 Dec 2018 · Aditya Ramesh, Youngduck Choi, Yann LeCun

A generative model with a disentangled representation allows for independent control over different aspects of the output.

Disentanglement

Backpropagation for Implicit Spectral Densities

1 code implementation · 1 Jun 2018 · Aditya Ramesh, Yann LeCun

We introduce a tool that allows us to do this even when the likelihood is not explicitly set, by instead using the implicit likelihood of the model.

Disentangling factors of variation in deep representation using adversarial training

no code implementations · NeurIPS 2016 · Michael F. Mathieu, Junbo Jake Zhao, Aditya Ramesh, Pablo Sprechmann, Yann LeCun

The only available source of supervision during the training process comes from our ability to distinguish among different observations belonging to the same category.

Disentangling factors of variation in deep representations using adversarial training

3 code implementations · 10 Nov 2016 · Michael Mathieu, Junbo Zhao, Pablo Sprechmann, Aditya Ramesh, Yann LeCun

During training, the only available source of supervision comes from our ability to distinguish among different observations belonging to the same class.

Disentanglement
