Search Results for author: Yuanzhen Li

Found 24 papers, 10 papers with code

Probing the 3D Awareness of Visual Foundation Models

1 code implementation • 12 Apr 2024 • Mohamed El Banani, Amit Raj, Kevis-Kokitsi Maninis, Abhishek Kar, Yuanzhen Li, Michael Rubinstein, Deqing Sun, Leonidas Guibas, Justin Johnson, Varun Jampani

Given that such models can classify, delineate, and localize objects in 2D, we ask whether they also represent their 3D structure.

3D Congealing: 3D-Aware Image Alignment in the Wild

no code implementations • 2 Apr 2024 • Yunzhi Zhang, Zizhang Li, Amit Raj, Andreas Engelhardt, Yuanzhen Li, Tingbo Hou, Jiajun Wu, Varun Jampani

The framework jointly optimizes the canonical representation, a pose for each input image, and a per-image coordinate map that warps 2D pixel coordinates into the 3D canonical frame to account for shape matching.

Pose Estimation

SHINOBI: Shape and Illumination using Neural Object Decomposition via BRDF Optimization In-the-wild

no code implementations • 18 Jan 2024 • Andreas Engelhardt, Amit Raj, Mark Boss, Yunzhi Zhang, Abhishek Kar, Yuanzhen Li, Deqing Sun, Ricardo Martin Brualla, Jonathan T. Barron, Hendrik P. A. Lensch, Varun Jampani

We present SHINOBI, an end-to-end framework for the reconstruction of shape, material, and illumination from object images captured with varying lighting, pose, and background.

Inverse Rendering, Object

Alchemist: Parametric Control of Material Properties with Diffusion Models

no code implementations • 5 Dec 2023 • Prafull Sharma, Varun Jampani, Yuanzhen Li, Xuhui Jia, Dmitry Lagun, Fredo Durand, William T. Freeman, Mark Matthews

We propose a method to control material attributes of objects like roughness, metallic, albedo, and transparency in real images.

ZipLoRA: Any Subject in Any Style by Effectively Merging LoRAs

1 code implementation • 22 Nov 2023 • Viraj Shah, Nataniel Ruiz, Forrester Cole, Erika Lu, Svetlana Lazebnik, Yuanzhen Li, Varun Jampani

Experiments on a wide range of subject and style combinations show that ZipLoRA can generate compelling results with meaningful improvements over baselines in subject and style fidelity while preserving the ability to recontextualize.
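The merging step behind these results can be stated compactly: each LoRA contributes a weight delta, and ZipLoRA learns per-column merger coefficients chosen so the two deltas interfere as little as possible. The sketch below is plain NumPy with toy shapes and fixed (not learned) coefficients, illustrating the merge and the column-cosine interference quantity the method drives down; it is not the paper's code.

```python
import numpy as np

def merge_loras(dw_subject, dw_style, m1, m2):
    """Merge two LoRA weight deltas of shape (out_dim, in_dim)
    with per-column coefficients (ZipLoRA-style merging sketch)."""
    return m1[None, :] * dw_subject + m2[None, :] * dw_style

def column_interference(dw1, dw2):
    """Mean absolute cosine similarity between corresponding columns --
    the interference that ZipLoRA's objective pushes toward zero."""
    n1 = dw1 / (np.linalg.norm(dw1, axis=0, keepdims=True) + 1e-8)
    n2 = dw2 / (np.linalg.norm(dw2, axis=0, keepdims=True) + 1e-8)
    return float(np.mean(np.abs(np.sum(n1 * n2, axis=0))))

rng = np.random.default_rng(0)
dw_subject = rng.normal(size=(8, 4))   # toy subject-LoRA delta
dw_style = rng.normal(size=(8, 4))     # toy style-LoRA delta
m1, m2 = np.ones(4), np.ones(4)        # fixed stand-ins for learned coefficients
merged = merge_loras(dw_subject, dw_style, m1, m2)
print(merged.shape)                    # (8, 4)
print(column_interference(dw_subject, dw_style))
```

In the actual method, `m1` and `m2` are optimized so the merged delta reproduces each LoRA's outputs while keeping the interference term small.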

RealFill: Reference-Driven Generation for Authentic Image Completion

no code implementations • 28 Sep 2023 • Luming Tang, Nataniel Ruiz, Qinghao Chu, Yuanzhen Li, Aleksander Holynski, David E. Jacobs, Bharath Hariharan, Yael Pritch, Neal Wadhwa, Kfir Aberman, Michael Rubinstein

Once personalized, RealFill is able to complete a target image with visually compelling contents that are faithful to the original scene.

HyperDreamBooth: HyperNetworks for Fast Personalization of Text-to-Image Models

2 code implementations • 13 Jul 2023 • Nataniel Ruiz, Yuanzhen Li, Varun Jampani, Wei Wei, Tingbo Hou, Yael Pritch, Neal Wadhwa, Michael Rubinstein, Kfir Aberman

By composing these weights into the diffusion model, coupled with fast finetuning, HyperDreamBooth can generate a person's face in various contexts and styles, with high subject details while also preserving the model's crucial knowledge of diverse styles and semantic modifications.

Diffusion Personalization Tuning Free

Background Prompting for Improved Object Depth

no code implementations • 8 Jun 2023 • Manel Baradad, Yuanzhen Li, Forrester Cole, Michael Rubinstein, Antonio Torralba, William T. Freeman, Varun Jampani

To infer object depth on a real image, we place the segmented object into the learned background prompt and run off-the-shelf depth networks.
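The inference step described here amounts to mask-compositing the segmented object over the learned background image before calling the depth network. A toy NumPy sketch, where a constant image stands in for the learned background prompt:

```python
import numpy as np

def composite_on_prompt(image, mask, background_prompt):
    """Place a segmented object onto a background-prompt image.
    `mask` is 1 where the object is, 0 elsewhere."""
    return mask[..., None] * image + (1.0 - mask[..., None]) * background_prompt

# Toy 4x4 RGB example.
image = np.ones((4, 4, 3)) * 0.8          # the photographed object
background_prompt = np.zeros((4, 4, 3))   # placeholder for the learned prompt
mask = np.zeros((4, 4))
mask[1:3, 1:3] = 1.0                      # object occupies the center
out = composite_on_prompt(image, mask, background_prompt)
# `out` would then be fed to an off-the-shelf monocular depth network.
```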


StraIT: Non-autoregressive Generation with Stratified Image Transformer

no code implementations • 1 Mar 2023 • Shengju Qian, Huiwen Chang, Yuanzhen Li, Zizhao Zhang, Jiaya Jia, Han Zhang

We propose Stratified Image Transformer (StraIT), a pure non-autoregressive (NAR) generative model that demonstrates superiority in high-quality image synthesis over existing autoregressive (AR) and diffusion models (DMs).

Image Generation

Debiasing Vision-Language Models via Biased Prompts

1 code implementation • 31 Jan 2023 • Ching-Yao Chuang, Varun Jampani, Yuanzhen Li, Antonio Torralba, Stefanie Jegelka

Machine learning models have been shown to inherit biases from their training datasets.
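One way to read the approach: embed a set of biased prompts, then project their directions out of the text embeddings before computing image-text similarities. The sketch below uses a plain orthogonal projection in NumPy; the paper's calibrated projection matrix is more involved than this.

```python
import numpy as np

def debias_projection(bias_dirs):
    """I - B B^+ projects embeddings onto the orthogonal complement of
    the span of the biased prompt directions (simplified version)."""
    B = np.atleast_2d(np.asarray(bias_dirs, dtype=float)).T   # (d, k)
    return np.eye(B.shape[0]) - B @ np.linalg.pinv(B)

rng = np.random.default_rng(1)
d = 6
bias = rng.normal(size=(1, d))       # one biased prompt direction
P = debias_projection(bias)
z = rng.normal(size=d)               # a text embedding to debias
z_debiased = P @ z
print(abs(z_debiased @ bias[0]))     # bias component is numerically zero
```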

Muse: Text-To-Image Generation via Masked Generative Transformers

4 code implementations • 2 Jan 2023 • Huiwen Chang, Han Zhang, Jarred Barber, AJ Maschinot, Jose Lezama, Lu Jiang, Ming-Hsuan Yang, Kevin Murphy, William T. Freeman, Michael Rubinstein, Yuanzhen Li, Dilip Krishnan

Compared to pixel-space diffusion models, such as Imagen and DALL-E 2, Muse is significantly more efficient due to the use of discrete tokens and requiring fewer sampling iterations; compared to autoregressive models, such as Parti, Muse is more efficient due to the use of parallel decoding.

Ranked #1 on Text-to-Image Generation on MS-COCO (FID metric)

Language Modelling, Large Language Model, +1
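The parallel-decoding claim is the one worth unpacking: instead of emitting one token per forward pass, the model predicts all masked tokens at once, keeps only the most confident, and re-masks the rest on a shrinking schedule. A toy sketch of that loop, with random scores standing in for the transformer (the real model and masking schedule differ):

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB, LENGTH, STEPS = 16, 12, 4
MASK = -1

tokens = np.full(LENGTH, MASK)
for step in range(STEPS):
    # A real model would predict logits for every masked position in
    # parallel; random scores and tokens stand in for it here.
    confidence = rng.random(LENGTH)
    predictions = rng.integers(0, VOCAB, LENGTH)
    # Cosine schedule: how many tokens stay masked after this step.
    keep_masked = int(LENGTH * np.cos((step + 1) / STEPS * np.pi / 2))
    masked = tokens == MASK
    confidence[~masked] = np.inf          # already-decoded tokens stay fixed
    order = np.argsort(confidence)        # least confident first
    tokens[masked] = predictions[masked]  # decode everything in parallel...
    tokens[order[:keep_masked]] = MASK    # ...then re-mask the least confident
print((tokens != MASK).all())             # fully decoded after STEPS passes
```

The sequence is fully decoded in `STEPS` forward passes rather than `LENGTH`, which is the source of the efficiency gain over autoregressive decoding.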

Hi-LASSIE: High-Fidelity Articulated Shape and Skeleton Discovery from Sparse Image Ensemble

1 code implementation • CVPR 2023 • Chun-Han Yao, Wei-Chih Hung, Yuanzhen Li, Michael Rubinstein, Ming-Hsuan Yang, Varun Jampani

Automatically estimating 3D skeleton, shape, camera viewpoints, and part articulation from sparse in-the-wild image ensembles is a severely under-constrained and challenging problem.

DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation

10 code implementations • CVPR 2023 • Nataniel Ruiz, Yuanzhen Li, Varun Jampani, Yael Pritch, Michael Rubinstein, Kfir Aberman

Once the subject is embedded in the output domain of the model, the unique identifier can be used to synthesize novel photorealistic images of the subject contextualized in different scenes.

Diffusion Personalization, Image Generation

Simplified Transfer Learning for Chest Radiography Models Using Less Data

1 code implementation • Radiology 2022 • Andrew B. Sellergren, Christina Chen, Zaid Nabulsi, Yuanzhen Li, Aaron Maschinot, Aaron Sarna, Jenny Huang, Charles Lau, Sreenivasa Raju Kalidindi, Mozziyar Etemadi, Florencia Garcia-Vicente, David Melnick, Yun Liu, Krish Eswaran, Daniel Tse, Neeral Beladia, Dilip Krishnan, Shravya Shetty

Supervised contrastive learning enabled performance comparable to state-of-the-art deep learning models on multiple clinical tasks using as few as 45 images, making it a promising method for predictive modeling with small data sets and for predicting outcomes in shifting patient populations.

Contrastive Learning, Transfer Learning
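The pretraining objective behind this result is supervised contrastive learning, which pulls together embeddings that share a label and pushes apart the rest. A compact NumPy sketch of the standard SupCon loss on toy features (not the paper's training setup):

```python
import numpy as np

def supcon_loss(features, labels, tau=0.1):
    """Supervised contrastive (SupCon) loss on L2-normalized features.
    Positives for each anchor are all other samples with the same label."""
    z = np.asarray(features, dtype=float)
    z /= np.linalg.norm(z, axis=1, keepdims=True)
    sim = z @ z.T / tau                      # pairwise cosine similarities
    n = len(labels)
    total = 0.0
    for i in range(n):
        positives = [j for j in range(n) if j != i and labels[j] == labels[i]]
        if not positives:
            continue
        log_denom = np.log(sum(np.exp(sim[i, k]) for k in range(n) if k != i))
        total -= np.mean([sim[i, j] - log_denom for j in positives])
    return total / n

labels = [0, 0, 1, 1]
aligned = [[1, 0], [1, 0], [0, 1], [0, 1]]    # same-label features match
shuffled = [[1, 0], [0, 1], [1, 0], [0, 1]]   # same-label features clash
print(supcon_loss(aligned, labels) < supcon_loss(shuffled, labels))  # True
```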

LASSIE: Learning Articulated Shapes from Sparse Image Ensemble via 3D Part Discovery

no code implementations • 7 Jul 2022 • Chun-Han Yao, Wei-Chih Hung, Yuanzhen Li, Michael Rubinstein, Ming-Hsuan Yang, Varun Jampani

In this work, we propose a practical problem setting to estimate 3D pose and shape of animals given only a few (10-30) in-the-wild images of a particular animal species (say, horse).

Deep Image-based Illumination Harmonization

no code implementations • CVPR 2022 • Zhongyun Bao, Chengjiang Long, Gang Fu, Daquan Liu, Yuanzhen Li, Jiaming Wu, Chunxia Xiao

Specifically, we first apply a physically based rendering method to construct a large-scale, high-quality dataset (named IH) for our task, which contains various types of foreground objects and background scenes under different lighting conditions.


A Many-Objective Evolutionary Algorithm Based on Decomposition and Local Dominance

no code implementations • 13 Jul 2018 • Yingyu Zhang, Yuanzhen Li, Quan-Ke Pan, P. N. Suganthan

Recent studies show that a well-designed combination of the decomposition method and the domination method can improve the performance, i.e., convergence and diversity, of an MOEA.

Evolutionary Algorithms
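For readers new to the area, the two ingredients being combined are easy to state: domination compares solutions by Pareto dominance, while decomposition scalarizes the objectives along weight vectors (e.g., the Tchebycheff form). Minimal illustrations of both in their standard textbook forms, not this paper's specific algorithm:

```python
import numpy as np

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization):
    no worse in every objective and strictly better in at least one."""
    a, b = np.asarray(a), np.asarray(b)
    return bool(np.all(a <= b) and np.any(a < b))

def tchebycheff(f, weight, ideal):
    """Tchebycheff scalarization used by decomposition-based MOEAs:
    max_i w_i * (f_i - z*_i) for ideal point z*."""
    return float(np.max(weight * (np.asarray(f) - np.asarray(ideal))))

print(dominates([1, 2], [2, 2]))                 # True
print(dominates([1, 3], [2, 2]))                 # False: incomparable
print(tchebycheff([2, 4], [0.5, 0.5], [0, 0]))   # 2.0
```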

A multi- or many-objective evolutionary algorithm with global loop update

no code implementations • 25 Jan 2018 • Yingyu Zhang, Bing Zeng, Yuanzhen Li, Junqing Li

Decomposition-based MOEAs emphasize convergence and diversity within a simple model and have achieved great success on theoretical and practical multi- and many-objective optimization problems.

Evolutionary Algorithms
