1 code implementation • CVPR 2024 • Bram Wallace, Meihua Dang, Rafael Rafailov, Linqi Zhou, Aaron Lou, Senthil Purushwalkam, Stefano Ermon, Caiming Xiong, Shafiq Joty, Nikhil Naik
Large language models (LLMs) are fine-tuned using human comparison data with Reinforcement Learning from Human Feedback (RLHF) methods to make them better aligned with users' preferences.
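The preference-alignment idea behind this line of work can be illustrated with a Direct Preference Optimization (DPO)-style loss on a single (preferred, dispreferred) pair. This is a minimal toy sketch, not the paper's implementation: the log-probabilities are scalar placeholders, and `beta` is an illustrative temperature.

```python
import numpy as np

def dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    # DPO-style objective on one preference pair: increase the policy's
    # log-probability margin over a frozen reference model in favor of
    # the human-preferred ("winner") sample.
    margin = (logp_w - ref_logp_w) - (logp_l - ref_logp_l)
    # Negative log-sigmoid of the scaled margin.
    return -np.log(1.0 / (1.0 + np.exp(-beta * margin)))
```

When the policy assigns the preferred sample a larger margin over the reference than the dispreferred one, the loss drops below its zero-margin value of log 2, so gradient descent on this loss directly encodes the comparison data without training a separate reward model.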
1 code implementation • ICCV 2023 • Bram Wallace, Akash Gokul, Stefano Ermon, Nikhil Naik
Classifier guidance -- using the gradients of an image classifier to steer the generations of a diffusion model -- has the potential to dramatically expand the creative control over image generation and editing.
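The core mechanic of classifier guidance can be sketched in one dimension: add the gradient of a classifier's log-probability to the sampler's drift at each denoising step. Everything here is a toy stand-in, not the paper's method: the "diffusion model" score is simply `-x`, and the "classifier" is a quadratic log-probability peaked at a chosen target.

```python
import numpy as np

def classifier_grad(x, target=2.0):
    # Gradient of a toy classifier's log-probability, which peaks at `target`.
    return -(x - target)

def guided_sample(guidance_scale=2.0, steps=50, seed=0):
    """Toy 1-D reverse-diffusion-style sampler with classifier guidance.

    The unconditional score (here just -x, a stand-in for a trained
    diffusion model) is augmented with the classifier gradient, which
    steers samples toward the classifier's preferred region.
    """
    rng = np.random.default_rng(seed)
    x = rng.normal()  # start from pure noise
    for t in range(steps):
        noise_scale = 0.1 * (1 - t / steps)  # anneal the injected noise
        uncond_score = -x
        x += 0.1 * (uncond_score + guidance_scale * classifier_grad(x))
        x += noise_scale * rng.normal()
    return x
```

Raising `guidance_scale` pulls the final sample closer to the classifier's mode at the cost of drifting away from the unconditional distribution, which is exactly the creative-control trade-off the abstract refers to.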
2 code implementations • CVPR 2023 • Bram Wallace, Akash Gokul, Nikhil Naik
EDICT enables mathematically exact inversion of real and model-generated images by maintaining two coupled noise vectors which are used to invert each other in an alternating fashion.
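The coupled-sequence structure described above can be sketched with a toy pair of affine updates. Exactness comes from the alternating affine form, not from the network, so a placeholder `eps` (here `tanh`) suffices; the coefficients `A`, `B`, `P` are illustrative, not EDICT's actual schedule.

```python
import numpy as np

def eps(z):
    # Stand-in for the diffusion model's noise prediction; any function
    # works because invertibility comes from the coupled affine updates.
    return np.tanh(z)

A, B, P = 0.9, 0.1, 0.93  # illustrative step and mixing coefficients

def forward_step(x, y):
    # Each sequence is updated using a model evaluation on the *other*
    # sequence, so every operation is an invertible affine map.
    x = A * x + B * eps(y)
    y = A * y + B * eps(x)
    # Mixing layer keeps the two tracks from drifting apart.
    x = P * x + (1 - P) * y
    y = P * y + (1 - P) * x
    return x, y

def inverse_step(x, y):
    # Undo the forward step exactly, in reverse order.
    y = (y - (1 - P) * x) / P
    x = (x - (1 - P) * y) / P
    y = (y - B * eps(x)) / A
    x = (x - B * eps(y)) / A
    return x, y
```

Running `forward_step` any number of times and then `inverse_step` the same number of times recovers the starting vectors to floating-point precision, which is the "mathematically exact inversion" property the abstract describes.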
1 code implementation • 14 Apr 2022 • Samar Khanna, Bram Wallace, Kavita Bala, Bharath Hariharan
Geographic variance in satellite imagery impacts the ability of machine learning models to generalise to new regions.
no code implementations • 19 Oct 2021 • Bram Wallace, Devansh Arpit, Huan Wang, Caiming Xiong
Pretraining convolutional neural networks via self-supervision and applying them in transfer learning is a fast-growing field, with performance improving rapidly across practically all image domains.
1 code implementation • CVPR 2021 • Bram Wallace, Ziyang Wu, Bharath Hariharan
The problem of expert model selection deals with choosing the appropriate pretrained network ("expert") to transfer to a target task.
1 code implementation • ECCV 2020 • Bram Wallace, Bharath Hariharan
There has been little to no work with these methods on other smaller domains, such as satellite, textural, or biological imagery.
no code implementations • ICCV 2019 • Bram Wallace, Bharath Hariharan
To address this problem, we present a new model architecture that reframes single-view 3D reconstruction as learnt, category agnostic refinement of a provided, category-specific prior.