Search Results for author: Haoyang Yang

Found 3 papers, 2 papers with code

Interactive Visual Learning for Stable Diffusion

no code implementations • 22 Apr 2024 • Seongmin Lee, Benjamin Hoover, Hendrik Strobelt, Zijie J. Wang, Shengyun Peng, Austin Wright, Kevin Li, Haekyu Park, Haoyang Yang, Polo Chau

Diffusion-based generative models' impressive ability to create convincing images has garnered global attention.

Diffusion Explainer: Visual Explanation for Text-to-image Stable Diffusion

1 code implementation • 4 May 2023 • Seongmin Lee, Benjamin Hoover, Hendrik Strobelt, Zijie J. Wang, Shengyun Peng, Austin Wright, Kevin Li, Haekyu Park, Haoyang Yang, Duen Horng Chau

Diffusion Explainer tightly integrates a visual overview of Stable Diffusion's complex components with detailed explanations of their underlying operations, enabling users to fluidly transition between multiple levels of abstraction through animations and interactive elements.

Image Generation

DiffusionDB: A Large-scale Prompt Gallery Dataset for Text-to-Image Generative Models

2 code implementations • 26 Oct 2022 • Zijie J. Wang, Evan Montoya, David Munechika, Haoyang Yang, Benjamin Hoover, Duen Horng Chau

With recent advancements in diffusion models, users can generate high-quality images simply by writing text prompts in natural language.

Misinformation
