Search Results for author: Jonathan Lorraine

Found 14 papers, 2 papers with code

LATTE3D: Large-scale Amortized Text-To-Enhanced3D Synthesis

no code implementations • 22 Mar 2024 • Kevin Xie, Jonathan Lorraine, Tianshi Cao, Jun Gao, James Lucas, Antonio Torralba, Sanja Fidler, Xiaohui Zeng

Recent text-to-3D generation approaches produce impressive 3D results but require time-consuming optimization that can take up to an hour per prompt.

Text to 3D

Using Large Language Models for Hyperparameter Optimization

no code implementations • 7 Dec 2023 • Michael R. Zhang, Nishkrit Desai, Juhan Bae, Jonathan Lorraine, Jimmy Ba

This paper studies using foundational large language models (LLMs) to make decisions during hyperparameter optimization (HPO).

Bayesian Optimization • Decision Making • +1
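
The entry above only gestures at the idea of letting an LLM drive hyperparameter search, so here is a minimal illustrative loop. `query_llm` and `train_and_evaluate` are hypothetical placeholders standing in for a model API and a training run; they are not the paper's interface.

```python
# Minimal sketch of an LLM-in-the-loop hyperparameter search. query_llm and
# train_and_evaluate are hypothetical placeholders, not the paper's interface.
import json
import math
import random


def query_llm(prompt):
    # Placeholder: a real system would send `prompt` to a language-model API
    # and parse its completion; here we fake a JSON suggestion.
    return json.dumps({"lr": 10 ** random.uniform(-4, -1)})


def train_and_evaluate(config):
    # Placeholder objective: pretend the best learning rate is 1e-2.
    return abs(math.log10(config["lr"]) + 2.0)


history = []
for trial in range(5):
    prompt = (
        "You are tuning a neural network. Past trials (config, val_loss): "
        f"{history}. Reply with the next config as JSON with key 'lr'."
    )
    config = json.loads(query_llm(prompt))    # the LLM proposes the next trial
    loss = train_and_evaluate(config)         # run the trial
    history.append((config, round(loss, 3)))  # results feed back into the prompt

print("best:", min(history, key=lambda t: t[1]))
```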

Graph Metanetworks for Processing Diverse Neural Architectures

no code implementations • 7 Dec 2023 • Derek Lim, Haggai Maron, Marc T. Law, Jonathan Lorraine, James Lucas

However, those works developed architectures tailored to specific networks such as MLPs and CNNs without normalization layers, and generalizing such architectures to other types of networks can be challenging.

ATT3D: Amortized Text-to-3D Object Synthesis

no code implementations • ICCV 2023 • Jonathan Lorraine, Kevin Xie, Xiaohui Zeng, Chen-Hsuan Lin, Towaki Takikawa, Nicholas Sharp, Tsung-Yi Lin, Ming-Yu Liu, Sanja Fidler, James Lucas

Text-to-3D modelling has seen exciting progress by combining generative text-to-image models with image-to-3D methods like Neural Radiance Fields.

Image to 3D • Object • +1

Lyapunov Exponents for Diversity in Differentiable Games

no code implementations • 24 Dec 2021 • Jonathan Lorraine, Paul Vicol, Jack Parker-Holder, Tal Kachman, Luke Metz, Jakob Foerster

We generalize this idea to non-conservative, multi-agent gradient systems by proposing a method - denoted Generalized Ridge Rider (GRR) - for finding arbitrary bifurcation points.
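
A toy sketch of the branching idea described above: compute the Jacobian of a game's joint gradient field at a fixed point and step along its eigenvectors to seed diverse branches. The two-player game, step size, and branching rule are illustrative assumptions, not the published GRR algorithm.

```python
# Toy sketch: branch along eigenvectors of the Jacobian of a game's joint
# gradient field (illustrative assumptions, not the paper's algorithm).
import numpy as np


def joint_grad(z, eps=0.1):
    # Two-player game: player 1 minimizes x*y + eps/2*x^2 over x,
    # player 2 minimizes -x*y + eps/2*y^2 over y.
    x, y = z
    return np.array([y + eps * x, -x + eps * y])


def jacobian(f, z, h=1e-5):
    # Central-difference Jacobian of the joint gradient field.
    n = len(z)
    J = np.zeros((n, n))
    for i in range(n):
        dz = np.zeros(n)
        dz[i] = h
        J[:, i] = (f(z + dz) - f(z - dz)) / (2 * h)
    return J


z0 = np.zeros(2)  # fixed point of the dynamics
eigvals, eigvecs = np.linalg.eig(jacobian(joint_grad, z0))
for lam, v in zip(eigvals, eigvecs.T):
    # Step along the real part of each eigenvector to seed a new branch.
    branch = z0 + 0.1 * np.real(v)
    print("eigenvalue", np.round(lam, 2), "-> branch start", np.round(branch, 3))
```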

Input Convex Gradient Networks

no code implementations • 23 Nov 2021 • Jack Richter-Powell, Jonathan Lorraine, Brandon Amos

The gradients of convex functions are expressive models of non-trivial vector fields.

Meta-Learning to Improve Pre-Training

no code implementations • NeurIPS 2021 • Aniruddh Raghu, Jonathan Lorraine, Simon Kornblith, Matthew McDermott, David Duvenaud

Pre-training (PT) followed by fine-tuning (FT) is an effective method for training neural networks, and has led to significant performance improvements in many domains.

Data Augmentation • Hyperparameter Optimization • +1

Complex Momentum for Optimization in Games

no code implementations • 16 Feb 2021 • Jonathan Lorraine, David Acuna, Paul Vicol, David Duvenaud

We generalize gradient descent with momentum for optimization in differentiable games to have complex-valued momentum.
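
A minimal sketch of a complex-valued momentum update on a toy bilinear game, written from the one-line description above. The exact update form, beta, and step size are assumptions for illustration, not the paper's recommended settings.

```python
# Minimal sketch of momentum with a complex-valued coefficient on the toy
# bilinear game min_x max_y x*y. Constants are illustrative assumptions.
import numpy as np


def simultaneous_grad(z):
    # Player 1 descends on x*y w.r.t. x; player 2 ascends w.r.t. y.
    x, y = z
    return np.array([y, -x])


alpha = 0.1                           # step size (kept real for simplicity)
beta = 0.8 * np.exp(1j * np.pi / 4)   # complex momentum coefficient
z = np.array([1.0, 1.0])              # real game parameters
mu = np.zeros(2, dtype=complex)       # complex momentum buffer

for t in range(200):
    mu = beta * mu - simultaneous_grad(z)
    z = z + alpha * np.real(mu)       # only the real part touches the parameters
    if t % 50 == 0:
        print(t, np.linalg.norm(z))   # track distance to the equilibrium at 0
```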

Optimizing Millions of Hyperparameters by Implicit Differentiation

9 code implementations • 6 Nov 2019 • Jonathan Lorraine, Paul Vicol, David Duvenaud

We propose an algorithm for inexpensive gradient-based hyperparameter optimization that combines the implicit function theorem (IFT) with efficient inverse Hessian approximations.

Data Augmentation • Hyperparameter Optimization
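
A small sketch of the IFT hypergradient with a truncated Neumann series standing in for the exact inverse training Hessian, on a tiny ridge-regression bilevel problem. The toy losses, data, and constants are assumptions for illustration only; the paper works with Hessian-vector products rather than an explicit Hessian.

```python
# Sketch: IFT hypergradient with a truncated Neumann series approximating the
# inverse training Hessian, on a toy ridge-regression bilevel problem.
import numpy as np

rng = np.random.default_rng(0)
X_tr, y_tr = rng.normal(size=(20, 5)), rng.normal(size=20)
X_val, y_val = rng.normal(size=(20, 5)), rng.normal(size=20)
lam = 0.5  # regularization hyperparameter

# Inner problem: w*(lam) = argmin_w 0.5*||X_tr w - y_tr||^2 + 0.5*lam*||w||^2
H = X_tr.T @ X_tr + lam * np.eye(5)          # training Hessian d2L_T/dw2
w_star = np.linalg.solve(H, X_tr.T @ y_tr)   # exact inner solution for the toy

# Outer-gradient pieces evaluated at w*(lam):
dLval_dw = X_val.T @ (X_val @ w_star - y_val)  # dL_V/dw
d2LT_dwdlam = w_star                           # d2L_T/(dw dlam) for L2 reg


def neumann_inverse_hvp(H, v, lr=0.01, steps=500):
    # Approximates H^{-1} v via lr * sum_j (I - lr*H)^j v (truncated Neumann series).
    p = v.copy()
    acc = v.copy()
    for _ in range(steps):
        p = p - lr * (H @ p)
        acc = acc + p
    return lr * acc


# IFT: dL_V/dlam = -(dL_V/dw)^T H^{-1} d2L_T/(dw dlam)
approx = -neumann_inverse_hvp(H, dLval_dw) @ d2LT_dwdlam
exact = -dLval_dw @ np.linalg.solve(H, d2LT_dwdlam)
print(approx, exact)  # the approximate and exact hypergradients should roughly agree
```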

JacNet: Learning Functions with Structured Jacobians

no code implementations • 1 Jul 2019 • Jonathan Lorraine, Safwan Hossain

Neural networks are trained to learn an approximate mapping from an input domain to a target domain.

Understanding Neural Architecture Search Techniques

no code implementations • 31 Mar 2019 • George Adam, Jonathan Lorraine

This reduction in computation is enabled via weight sharing such as in Efficient Neural Architecture Search (ENAS).

Computational Efficiency • Graph Similarity • +1

Stochastic Hyperparameter Optimization through Hypernetworks

1 code implementation • ICLR 2018 • Jonathan Lorraine, David Duvenaud

Machine learning models are often tuned by nesting optimization of model weights inside the optimization of hyperparameters.

BIG-bench Machine Learning • Hyperparameter Optimization • +1
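
A minimal sketch of the nested setup described above, amortized with a tiny linear hypernetwork w_phi(lam) that predicts weights from the hyperparameter. The linear form, alternating updates, and all constants are assumptions for illustration, not the paper's architecture.

```python
# Sketch: tune an L2 penalty through a tiny linear hypernetwork
# w_phi(lam) = phi0 + phi1*lam, alternating hypernetwork steps on the training
# loss (at perturbed lam) with hyperparameter steps on the validation loss.
import numpy as np

rng = np.random.default_rng(0)
X_tr, y_tr = rng.normal(size=(40, 5)), rng.normal(size=40)
X_val, y_val = rng.normal(size=(40, 5)), rng.normal(size=40)

phi0, phi1 = np.zeros(5), np.zeros(5)  # hypernetwork parameters
lam = 0.0                              # log of the L2 penalty

for step in range(2000):
    # Hypernetwork step: fit w_phi to the inner problem near the current lam.
    lam_pert = lam + 0.1 * rng.normal()
    w = phi0 + phi1 * lam_pert
    dLtr_dw = X_tr.T @ (X_tr @ w - y_tr) + np.exp(lam_pert) * w
    phi0 -= 1e-3 * dLtr_dw
    phi1 -= 1e-3 * dLtr_dw * lam_pert

    # Hyperparameter step: descend the validation loss through w_phi(lam).
    w = phi0 + phi1 * lam
    dLval_dw = X_val.T @ (X_val @ w - y_val)
    lam -= 1e-3 * dLval_dw @ phi1

print("tuned log-penalty:", lam)
```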
