no code implementations • 3 Nov 2023 • Natalie Maus, Zhiyuan Jerry Lin, Maximilian Balandat, Eytan Bakshy
To effectively tackle these challenges, we introduce Joint Composite Latent Space Bayesian Optimization (JoCo), a novel framework that jointly trains neural network encoders and probabilistic models to adaptively compress high-dimensional input and output spaces into manageable latent representations.
no code implementations • 25 May 2023 • Natalie Maus, Yimeng Zeng, Daniel Allen Anderson, Phillip Maffettone, Aaron Solomon, Peyton Greenside, Osbert Bastani, Jacob R. Gardner
Furthermore, it is challenging to adapt pure generative approaches to other settings, e.g., when constraints exist.
1 code implementation • 8 Feb 2023 • Natalie Maus, Patrick Chao, Eric Wong, Jacob Gardner
Prompting interfaces allow users to quickly adjust the output of generative models in both vision and language.
1 code implementation • 20 Oct 2022 • Natalie Maus, Kaiwen Wu, David Eriksson, Jacob Gardner
Bayesian optimization (BO) is a popular approach for sample-efficient optimization of black-box objective functions.
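The BO loop described here can be illustrated with a minimal sketch: fit a Gaussian-process surrogate to the points observed so far, score candidates with the expected-improvement acquisition, and query the maximizer. This is a generic textbook sketch, not the paper's implementation; the objective `f`, the RBF lengthscale, and the grid of candidates are all illustrative assumptions.

```python
import math
import numpy as np

def rbf_kernel(a, b, lengthscale=0.2):
    # Squared-exponential kernel between two sets of 1-D points.
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

def gp_posterior(x_train, y_train, x_test, noise=1e-6):
    # Standard GP regression posterior mean and variance.
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf_kernel(x_train, x_test)
    mu = Ks.T @ np.linalg.solve(K, y_train)
    # k(x, x) = 1 for the RBF kernel, so the prior variance is 1.
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks), axis=0)
    return mu, np.maximum(var, 1e-12)

def expected_improvement(mu, var, best):
    # EI acquisition for maximization under a Gaussian posterior.
    sigma = np.sqrt(var)
    z = (mu - best) / sigma
    pdf = np.exp(-0.5 * z ** 2) / math.sqrt(2 * math.pi)
    cdf = 0.5 * (1.0 + np.array([math.erf(zi / math.sqrt(2)) for zi in z]))
    return (mu - best) * cdf + sigma * pdf

def f(x):
    # Hypothetical black-box objective, unknown to the optimizer.
    return -(x - 0.3) ** 2

rng = np.random.default_rng(0)
x_obs = rng.uniform(0, 1, 4)          # a few random initial queries
y_obs = f(x_obs)
grid = np.linspace(0, 1, 201)         # candidate points to score
for _ in range(5):
    mu, var = gp_posterior(x_obs, y_obs, grid)
    ei = expected_improvement(mu, var, y_obs.max())
    x_next = grid[np.argmax(ei)]      # query the most promising candidate
    x_obs = np.append(x_obs, x_next)
    y_obs = np.append(y_obs, f(x_next))
x_best = x_obs[np.argmax(y_obs)]
```

The sample efficiency comes from the surrogate: each new query is chosen where the model predicts either a high mean or high uncertainty, rather than at random.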
1 code implementation • 28 Jan 2022 • Natalie Maus, Haydn T. Jones, Juston S. Moore, Matt J. Kusner, John Bradshaw, Jacob R. Gardner
By reformulating the encoder to serve both as the global encoder for the DAE and as a deep kernel for the surrogate model within a trust region, we better align the notion of local optimization in the latent space with local optimization in the input space.
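The trust-region idea behind this alignment can be sketched independently of the deep models: keep a box around the best latent point, propose candidates only inside it, and expand or shrink the box on success or failure. This is a toy sketch under loud assumptions; the linear `decode` map and the quadratic `objective` are hypothetical stand-ins for the paper's deep autoencoder and black-box function, and the expand/shrink rule is generic trust-region bookkeeping, not the paper's exact schedule.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical linear "decoder" standing in for a deep autoencoder:
# it maps a 2-D latent point to a 10-D input point.
A = rng.normal(size=(2, 10))

def decode(z):
    return z @ A

def objective(x):
    # Toy black-box objective on the 10-D input space (maximum at x = 0).
    return -np.sum(x ** 2)

z_best = rng.normal(size=2)
y_best = objective(decode(z_best))
side = 1.0                                # trust-region side length
for _ in range(30):
    # Propose candidates only inside the box around z_best, so a
    # "local" move in latent space stays local in input space too.
    cand = z_best + rng.uniform(-side / 2, side / 2, size=(20, 2))
    ys = np.array([objective(decode(z)) for z in cand])
    if ys.max() > y_best:
        z_best, y_best = cand[np.argmax(ys)], ys.max()
        side = min(side * 2.0, 4.0)       # success: expand the region
    else:
        side *= 0.5                       # failure: shrink and refine
```

Shrinking on failure is what makes the search genuinely local: as the box contracts around the incumbent, latent-space proposals decode to increasingly nearby inputs.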