Search Results for author: Tong-Yee Lee

Found 15 papers, 8 papers with code

Interactive Visual Assessment for Text-to-Image Generation Models

no code implementations • 23 Nov 2024 Xiaoyue Mi, Fan Tang, Juan Cao, Qiang Sheng, Ziyao Huang, Peng Li, Yang Liu, Tong-Yee Lee

To address these limitations, we propose DyEval, an LLM-powered dynamic interactive visual assessment framework that facilitates collaborative evaluation between humans and generative models for text-to-image systems.

Logical Reasoning Text-to-Image Generation

Computer-aided Colorization State-of-the-science: A Survey

1 code implementation • 3 Oct 2024 Yu Cao, Xin Duan, Xiangqiao Meng, P. Y. Mok, Ping Li, Tong-Yee Lee

This paper reviews published research in the field of computer-aided colorization technology.

Colorization Survey

Break-for-Make: Modular Low-Rank Adaptations for Composable Content-Style Customization

no code implementations • 28 Mar 2024 Yu Xu, Fan Tang, Juan Cao, Yuxin Zhang, Oliver Deussen, WeiMing Dong, Jintao Li, Tong-Yee Lee

With the adapters broken apart for separately training content and style, we then build the entity parameter space by reconstructing the content and style PLP matrices, followed by fine-tuning the combined adapter to generate the target object with the desired appearance.
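The core idea of composing separately trained low-rank adapters can be illustrated with a minimal sketch. This is not the paper's implementation: the shapes, the random "trained" factors, and the simple additive merge are all illustrative assumptions, showing only how two rank-r updates (one for content, one for style) combine onto a frozen base weight.

```python
import numpy as np

def lora_delta(A, B):
    """Low-rank weight update: delta W = B @ A, with rank r << d."""
    return B @ A

d, r = 8, 2
rng = np.random.default_rng(0)
W = rng.standard_normal((d, d))  # frozen base weight (illustrative)

# Two hypothetical adapters, trained separately: one for content, one for style.
A_content, B_content = rng.standard_normal((r, d)), rng.standard_normal((d, r))
A_style, B_style = rng.standard_normal((r, d)), rng.standard_normal((d, r))

# Compose by adding both low-rank updates onto the frozen base weight.
W_combined = W + lora_delta(A_content, B_content) + lora_delta(A_style, B_style)
```

Each `lora_delta` is at most rank r, so the adapters stay cheap to store and swap, while the combined weight carries both customizations.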

Make-Your-Anchor: A Diffusion-based 2D Avatar Generation Framework

1 code implementation • CVPR 2024 Ziyao Huang, Fan Tang, Yong Zhang, Xiaodong Cun, Juan Cao, Jintao Li, Tong-Yee Lee

We adopt a two-stage training strategy for the diffusion model, effectively binding movements with specific appearances.

Denoising

Image Collage on Arbitrary Shape via Shape-Aware Slicing and Optimization

no code implementations • 17 Nov 2023 Dong-Yi Wu, Thi-Ngoc-Hanh Le, Sheng-Yi Yao, Yun-Chen Lin, Tong-Yee Lee

In this paper, we present a shape slicing algorithm and an optimization scheme that can create image collages of arbitrary shapes in an informative and visually pleasing manner given an input shape and an image collection.

Regenerating Arbitrary Video Sequences with Distillation Path-Finding

1 code implementation • 13 Nov 2023 Thi-Ngoc-Hanh Le, Sheng-Yi Yao, Chun-Te Wu, Tong-Yee Lee

The critical contrast between our approach and prior work or existing commercial applications is that our system produces novel sequences with an arbitrary starting frame while remaining consistent in both content and motion direction.

Feature Correlation

Retargeting video with an end-to-end framework

no code implementations • 8 Nov 2023 Thi-Ngoc-Hanh Le, HuiGuang Huang, Yi-Ru Chen, Tong-Yee Lee

Moreover, tolerance to diverse video content, preventing important objects from shrinking, and support for arbitrary aspect ratios remain unresolved limitations of these systems that require investigation.

Image Retargeting

ProSpect: Prompt Spectrum for Attribute-Aware Personalization of Diffusion Models

3 code implementations • 25 May 2023 Yuxin Zhang, WeiMing Dong, Fan Tang, Nisha Huang, Haibin Huang, Chongyang Ma, Tong-Yee Lee, Oliver Deussen, Changsheng Xu

We apply ProSpect in various personalized attribute-aware image generation applications, such as image-guided or text-driven manipulations of materials, style, and layout, achieving previously unattainable results from a single image input without fine-tuning the diffusion models.

Attribute Disentanglement +1

AnimeDiffusion: Anime Face Line Drawing Colorization via Diffusion Models

1 code implementation • 20 Mar 2023 Yu Cao, Xiangqiao Meng, P. Y. Mok, Xueting Liu, Tong-Yee Lee, Ping Li

Through multiple quantitative metrics evaluated on our dataset and a user study, we demonstrate that AnimeDiffusion outperforms state-of-the-art GAN-based models for anime face line drawing colorization.

Colorization Image Reconstruction

A Unified Arbitrary Style Transfer Framework via Adaptive Contrastive Learning

1 code implementation • 9 Mar 2023 Yuxin Zhang, Fan Tang, WeiMing Dong, Haibin Huang, Chongyang Ma, Tong-Yee Lee, Changsheng Xu

Our framework consists of three key components, i.e., a parallel contrastive learning scheme for style representation and style transfer, a domain enhancement module for effective learning of style distribution, and a generative network for style transfer.

Contrastive Learning Representation Learning +1

Region-Aware Diffusion for Zero-shot Text-driven Image Editing

1 code implementation • 23 Feb 2023 Nisha Huang, Fan Tang, WeiMing Dong, Tong-Yee Lee, Changsheng Xu

Different from current mask-based image editing methods, we propose a novel region-aware diffusion model (RDM) for entity-level image editing, which could automatically locate the region of interest and replace it following given text prompts.
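The region-replacement step described above can be sketched in a few lines. This is a generic mask-blending illustration, not RDM itself: the automatically located region is stood in for by a hand-built binary mask, and the diffusion model's edited output by a constant image, both purely hypothetical.

```python
import numpy as np

def masked_blend(edited, original, mask):
    # Inside the mask: take the text-driven edit; outside: keep the source image.
    return mask * edited + (1.0 - mask) * original

original = np.zeros((4, 4))          # stand-in for the source image
edited = np.ones((4, 4))             # stand-in for the model's edited output
mask = np.zeros((4, 4))
mask[1:3, 1:3] = 1.0                 # hypothetical "located" region of interest

result = masked_blend(edited, original, mask)
```

In an actual diffusion editor, a blend of this form is typically applied to the latents at each denoising step, so that only the masked region follows the text prompt while the rest of the image is preserved.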

Image Manipulation

Domain Enhanced Arbitrary Image Style Transfer via Contrastive Learning

1 code implementation • 19 May 2022 Yuxin Zhang, Fan Tang, WeiMing Dong, Haibin Huang, Chongyang Ma, Tong-Yee Lee, Changsheng Xu

Our framework consists of three key components, i.e., a multi-layer style projector for style code encoding, a domain enhancement module for effective learning of style distribution, and a generative network for image style transfer.

Contrastive Learning Image Stylization +1

Image Retargetability

no code implementations • 12 Feb 2018 Fan Tang, Wei-Ming Dong, Yiping Meng, Chongyang Ma, Fuzhang Wu, Xinrui Li, Tong-Yee Lee

In this work, we introduce the notion of image retargetability to describe how well a particular image can be handled by content-aware image retargeting.

Image Retargeting

Image Retargeting by Content-Aware Synthesis

no code implementations • 26 Mar 2014 Weiming Dong, Fuzhang Wu, Yan Kong, Xing Mei, Tong-Yee Lee, Xiaopeng Zhang

We propose to retarget the textural regions by content-aware synthesis and non-textural regions by fast multi-operators.

Image Retargeting
