no code implementations • CVPR 2023 • Ajay Jain, Amber Xie, Pieter Abbeel
We show that a text-conditioned diffusion model trained on pixel representations of images can be used to generate SVG-exportable vector graphics.
4 code implementations • 29 Sep 2022 • Ben Poole, Ajay Jain, Jonathan T. Barron, Ben Mildenhall
Using this loss (which lets a pretrained 2D text-to-image diffusion model act as a prior) in a DeepDream-like procedure, we optimize a randomly initialized 3D model (a Neural Radiance Field, or NeRF) via gradient descent so that its 2D renderings from random angles achieve a low loss; a toy sketch of this loop follows the ranking note below.
Ranked #5 on Text to 3D on T$^3$Bench
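A minimal sketch of that optimization loop, under heavy simplification: a toy module stands in for the NeRF, a frozen toy network stands in for the pretrained 2D diffusion model, and renderings from random angles are updated with a score-distillation-style gradient. The architectures, resolution, and noise schedule below are illustrative assumptions, not the paper's.

```python
import torch
import torch.nn as nn

class TinyNeRF(nn.Module):
    """Toy stand-in for a neural radiance field: maps a camera angle to a rendering."""
    def __init__(self, res=32):
        super().__init__()
        self.res = res
        self.mlp = nn.Sequential(nn.Linear(1, 256), nn.ReLU(), nn.Linear(256, 3 * res * res))

    def forward(self, angle):
        return self.mlp(angle).view(-1, 3, self.res, self.res).sigmoid()

nerf = TinyNeRF()                                   # randomly initialized 3D model
denoiser = nn.Conv2d(3, 3, 3, padding=1)            # stand-in for a frozen, pretrained 2D diffusion model
opt = torch.optim.Adam(nerf.parameters(), lr=1e-3)

for step in range(100):
    angle = torch.rand(1, 1) * 6.283                # random camera angle
    image = nerf(angle)                             # 2D rendering of the 3D model
    noise = torch.randn_like(image)
    t = torch.rand(1, 1, 1, 1)                      # random noise level (simplified schedule)
    noisy = (1 - t) * image + t * noise             # corrupt the rendering
    with torch.no_grad():
        pred_noise = denoiser(noisy)                # frozen model's guess of the added noise
    # Score-distillation-style update: the gradient of this surrogate loss w.r.t. the
    # NeRF parameters is (pred_noise - noise) * d(image)/d(params).
    loss = ((pred_noise - noise) * image).sum()
    opt.zero_grad()
    loss.backward()
    opt.step()
```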
1 code implementation • 3 Aug 2022 • Qiyang Li, Ajay Jain, Pieter Abbeel
Autoregressive generative models can estimate complex continuous data distributions, such as trajectory rollouts in an RL environment, image intensities, and audio.
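As a generic illustration of that autoregressive factorization over continuous values (not the specific method proposed in this paper), the sketch below predicts each step of a sequence with a Gaussian conditioned on the previous steps; the GRU backbone, sequence length, and Gaussian output head are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ARGaussian(nn.Module):
    """p(x) = prod_i p(x_i | x_<i), with a Gaussian density for each continuous value."""
    def __init__(self, hidden=64):
        super().__init__()
        self.rnn = nn.GRU(1, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)              # per-step mean and log-variance

    def log_prob(self, x):                            # x: (batch, seq_len)
        # Shift the sequence right so step i is predicted only from steps < i.
        prev = torch.cat([torch.zeros_like(x[:, :1]), x[:, :-1]], dim=1).unsqueeze(-1)
        h, _ = self.rnn(prev)
        mean, log_var = self.head(h).unbind(-1)
        # Gaussian log-density, summed over the sequence (up to an additive constant).
        return (-0.5 * ((x - mean) ** 2 / log_var.exp() + log_var)).sum(dim=1)

model = ARGaussian()
x = torch.randn(4, 10)                                # e.g. short trajectory rollouts
nll = -model.log_prob(x).mean()                       # maximum-likelihood training objective
nll.backward()
```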
4 code implementations • CVPR 2022 • Ajay Jain, Ben Mildenhall, Jonathan T. Barron, Pieter Abbeel, Ben Poole
Our method, Dream Fields, can generate the geometry and color of a wide range of objects without 3D supervision.
2 code implementations • ICCV 2021 • Ajay Jain, Matthew Tancik, Pieter Abbeel
We present DietNeRF, a 3D neural scene representation estimated from a few images.
1 code implementation • EMNLP 2021 • Paras Jain, Ajay Jain, Tianjun Zhang, Pieter Abbeel, Joseph E. Gonzalez, Ion Stoica
Recent work learns contextual representations of source code by reconstructing tokens from their context; a toy sketch of this objective follows the ranking note below.
Ranked #1 on Method name prediction on CodeSearchNet
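A toy sketch of the "reconstruct tokens from their context" pretraining objective that sentence refers to (masked token prediction over a code snippet). The mini vocabulary, mask position, and Transformer sizes are illustrative assumptions, and this is the baseline objective described in the sentence rather than the contrastive method the paper proposes.

```python
import torch
import torch.nn as nn

vocab = ["def", "f", "(", "x", ")", ":", "return", "+", "1", "<mask>"]
tok2id = {t: i for i, t in enumerate(vocab)}
tokens = ["def", "f", "(", "x", ")", ":", "return", "x", "+", "1"]

ids = torch.tensor([[tok2id[t] for t in tokens]])
target = ids.clone()
mask_pos = 7                                      # hide the identifier "x"
ids[0, mask_pos] = tok2id["<mask>"]

# Tiny Transformer encoder over code tokens (positional embeddings omitted for brevity).
embed = nn.Embedding(len(vocab), 32)
encoder = nn.TransformerEncoder(nn.TransformerEncoderLayer(32, 4, 64, batch_first=True), 2)
head = nn.Linear(32, len(vocab))

logits = head(encoder(embed(ids)))                # contextual representations -> token logits
loss = nn.functional.cross_entropy(logits[0, mask_pos][None], target[0, mask_pos][None])
loss.backward()
```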
1 code implementation • 22 Jun 2020 • Ajay Jain, Pieter Abbeel, Deepak Pathak
For tasks such as image completion, these models (autoregressive models that generate pixels in a fixed raster-scan order) are unable to use much of the observed context; a toy sketch of that fixed masking follows the ranking note below.
Ranked #1 on Image Generation on MNIST
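To make the limitation concrete, here is a toy version of the fixed raster-scan masked convolution used by standard autoregressive image models, under which each pixel conditions only on pixels above and to its left; the kernel size and channel counts are illustrative, and the paper's locally masked convolutions generalize this fixed mask to arbitrary generation orders.

```python
import torch
import torch.nn as nn

class RasterMaskedConv(nn.Conv2d):
    """Convolution whose kernel is masked so each output sees only its raster-order past."""
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        k = self.kernel_size[0]
        mask = torch.ones(k, k)
        mask[k // 2, k // 2:] = 0                 # block the current pixel and everything to its right
        mask[k // 2 + 1:, :] = 0                  # block all rows below
        self.register_buffer("mask", mask)

    def forward(self, x):
        return nn.functional.conv2d(x, self.weight * self.mask, self.bias,
                                    self.stride, self.padding)

conv = RasterMaskedConv(1, 8, kernel_size=3, padding=1)
image = torch.randn(1, 1, 28, 28)
features = conv(image)                            # each location sees only pixels above / to its left
```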
69 code implementations • NeurIPS 2020 • Jonathan Ho, Ajay Jain, Pieter Abbeel
We present high-quality image synthesis results using diffusion probabilistic models, a class of latent variable models inspired by considerations from nonequilibrium thermodynamics; a minimal sketch of the training objective follows the ranking note below.
Ranked #2 on Image Generation on LSUN Bedroom
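A minimal sketch of the denoising training objective behind these models: diffuse a clean image with Gaussian noise at a random timestep using the closed-form forward process and regress the added noise. The tiny conv net stands in for the paper's U-Net, timestep conditioning is omitted, and the batch of random tensors stands in for real training images.

```python
import torch
import torch.nn as nn

T = 1000
betas = torch.linspace(1e-4, 0.02, T)
alphas_bar = torch.cumprod(1.0 - betas, dim=0)    # cumulative products from the forward process

# Stand-in for the time-conditioned U-Net noise predictor (conditioning omitted for brevity).
model = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.Conv2d(32, 3, 3, padding=1))
opt = torch.optim.Adam(model.parameters(), lr=2e-4)

x0 = torch.rand(8, 3, 32, 32)                     # a batch of training images in [0, 1]
t = torch.randint(0, T, (8,))                     # random timestep per image
a = alphas_bar[t].view(-1, 1, 1, 1)

noise = torch.randn_like(x0)
xt = a.sqrt() * x0 + (1 - a).sqrt() * noise       # closed-form forward diffusion q(x_t | x_0)
loss = ((model(xt) - noise) ** 2).mean()          # simple epsilon-prediction objective
opt.zero_grad()
loss.backward()
opt.step()
```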
1 code implementation • NeurIPS 2020 • Scott Emmons, Ajay Jain, Michael Laskin, Thanard Kurutach, Pieter Abbeel, Deepak Pathak
To operate effectively in the real world, agents should be able to act from high-dimensional raw sensory input such as images and achieve diverse goals across long time horizons.
no code implementations • 17 Oct 2019 • Ajay Jain, Sergio Casas, Renjie Liao, Yuwen Xiong, Song Feng, Sean Segal, Raquel Urtasun
Particularly difficult is the prediction of human behavior.
2 code implementations • 7 Oct 2019 • Paras Jain, Ajay Jain, Aniruddha Nrusimha, Amir Gholami, Pieter Abbeel, Kurt Keutzer, Ion Stoica, Joseph E. Gonzalez
We formalize the problem of trading off DNN training time and memory requirements as the tensor rematerialization optimization problem, a generalization of prior checkpointing strategies.
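For intuition about the trade-off being formalized, the sketch below uses PyTorch's built-in gradient checkpointing, which discards intermediate activations in the forward pass and recomputes them during backpropagation, spending extra compute to reduce memory. The paper instead solves for an optimal rematerialization schedule; the block of linear layers here is only an illustrative workload.

```python
import torch
import torch.nn as nn
from torch.utils.checkpoint import checkpoint

# An illustrative stack of layers whose intermediate activations would normally be stored.
block = nn.Sequential(*[nn.Linear(1024, 1024) for _ in range(8)])
x = torch.randn(64, 1024, requires_grad=True)

out = checkpoint(block, x, use_reentrant=False)   # activations are recomputed during backward
out.sum().backward()                              # extra forward compute, lower peak memory
```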
no code implementations • ICML Workshop on Deep Learning Phenomena 2019 • Kavya Ravichandran, Ajay Jain, Alexander Rakhlin
In a typical deep learning approach to a computer vision task, Convolutional Neural Networks (CNNs) extract features at varying levels of abstraction from an image, compressing a high-dimensional input into a lower-dimensional decision space through a series of transformations.
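A generic illustration of that pipeline (not specific to this paper): two convolutional stages extract progressively more abstract features before the high-dimensional input is flattened into a 10-way decision space; all layer sizes here are arbitrary choices for the example.

```python
import torch
import torch.nn as nn

cnn = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # low-level features, 32x32 -> 16x16
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # higher-level features, 16x16 -> 8x8
    nn.Flatten(),
    nn.Linear(32 * 8 * 8, 10),                                    # compress into a 10-way decision space
)
logits = cnn(torch.randn(1, 3, 32, 32))
```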
no code implementations • 28 Jan 2019 • Paras Jain, Xiangxi Mo, Ajay Jain, Alexey Tumanov, Joseph E. Gonzalez, Ion Stoica
Current trends in Machine Learning (ML) inference on hardware-accelerated devices (e.g., GPUs, TPUs) point to alarmingly low utilization.