no code implementations • 18 Jan 2025 • Cheuk Hang Leung, Yiyan Huang, Yijun Li, Qi Wu
The proposed distributionally robust estimators are established using the Inverse Probability Weighting (IPW) method extended from the discrete one for policy evaluation and learning under continuous treatments.
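As a point of reference for the IPW starting point mentioned above, here is a minimal sketch of the standard discrete-treatment IPW policy-value estimator, not the paper's distributionally robust, continuous-treatment extension; the variable names and the toy logging setup are illustrative assumptions.

```python
import numpy as np

def ipw_policy_value(actions, outcomes, propensities, policy_actions):
    """Standard discrete-treatment IPW estimate of a policy's value:
    V_hat = mean( 1{A_i == pi(X_i)} / e(A_i | X_i) * Y_i )."""
    match = (actions == policy_actions).astype(float)
    return np.mean(match / propensities * outcomes)

# Toy usage with synthetic logged data.
rng = np.random.default_rng(0)
n = 1000
actions = rng.integers(0, 2, size=n)           # logged binary treatments
propensities = np.full(n, 0.5)                 # known logging probabilities
outcomes = rng.normal(loc=actions, scale=1.0)  # treatment 1 yields higher outcomes
policy_actions = np.ones(n, dtype=int)         # candidate policy that always treats
print(ipw_policy_value(actions, outcomes, propensities, policy_actions))
```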
1 code implementation • 6 Jan 2025 • Luozhou Wang, Yijun Li, Zhifei Chen, Jui-Hsien Wang, Zhifei Zhang, He Zhang, Zhe Lin, Yingcong Chen
Text-to-video generative models have made significant strides, enabling diverse applications in entertainment, advertising, and education.
no code implementations • 27 Dec 2024 • Shaoteng Liu, Tianyu Wang, Jui-Hsien Wang, Qing Liu, Zhifei Zhang, Joon-Young Lee, Yijun Li, Bei Yu, Zhe Lin, Soo Ye Kim, Jiaya Jia
Large-scale video generation models have the inherent ability to realistically model natural scenes.
no code implementations • 10 Dec 2024 • Xi Chen, Zhifei Zhang, He Zhang, Yuqian Zhou, Soo Ye Kim, Qing Liu, Yijun Li, Jianming Zhang, Nanxuan Zhao, Yilin Wang, Hui Ding, Zhe Lin, Hengshuang Zhao
We introduce UniReal, a unified framework designed to address various image generation and editing tasks.
no code implementations • 5 Dec 2024 • Yusuf Dalva, Yijun Li, Qing Liu, Nanxuan Zhao, Jianming Zhang, Zhe Lin, Pinar Yanardag
In this paper, we propose a novel image generation pipeline based on Latent Diffusion Models (LDMs) that generates images with two layers: a foreground layer (RGBA) with transparency information and a background layer (RGB).
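To illustrate the layered output format described above, the following is a minimal alpha-compositing sketch showing how an RGBA foreground layer combines with an RGB background; it is a generic compositing step, not the paper's LDM pipeline, and the array shapes are assumptions.

```python
import numpy as np

def composite(foreground_rgba, background_rgb):
    """Alpha-composite an RGBA foreground layer over an RGB background.
    Both inputs are float arrays in [0, 1]; the foreground has shape (H, W, 4)."""
    rgb, alpha = foreground_rgba[..., :3], foreground_rgba[..., 3:4]
    return alpha * rgb + (1.0 - alpha) * background_rgb

# Toy usage: a half-transparent red square over a gray background.
fg = np.zeros((64, 64, 4)); fg[16:48, 16:48] = [1.0, 0.0, 0.0, 0.5]
bg = np.full((64, 64, 3), 0.5)
out = composite(fg, bg)
print(out.shape, out[32, 32])  # (64, 64, 3) [0.75 0.25 0.25]
```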
no code implementations • 26 Nov 2024 • Jinrui Yang, Qing Liu, Yijun Li, Soo Ye Kim, Daniil Pakhomov, Mengwei Ren, Jianming Zhang, Zhe Lin, Cihang Xie, Yuyin Zhou
Layered representations, which allow for independent editing of image components, are essential for user-driven content creation, yet existing approaches often struggle to decompose an image into plausible layers with accurately retained transparent visual effects such as shadows and reflections.
no code implementations • 18 Oct 2024 • Yuhan Liang, Yijun Li, Yumeng Niu, Qianhe Shen, Hangyu Liu
The neural network model achieved a high accuracy of 98% in these challenging classification tasks, while the XGBoost model reached a success rate of 85.26% in prediction tasks.
no code implementations • 1 Oct 2024 • Yuheng Li, Haotian Liu, Mu Cai, Yijun Li, Eli Shechtman, Zhe Lin, Yong Jae Lee, Krishna Kumar Singh
In this paper, we introduce a model designed to improve the prediction of image-text alignment, targeting the challenge of compositional understanding in current visual-language models.
no code implementations • 24 May 2024 • Guibao Shen, Luozhou Wang, Jiantao Lin, Wenhang Ge, Chaozhe Zhang, Xin Tao, Yuan Zhang, Pengfei Wan, Zhongyuan Wang, Guangyong Chen, Yijun Li, Ying-Cong Chen
In this paper, we introduce the Scene Graph Adapter (SG-Adapter), leveraging the structured representation of scene graphs to rectify inaccuracies in the original text embeddings.
no code implementations • CVPR 2024 • Hongjie Wang, Difan Liu, Yan Kang, Yijun Li, Zhe Lin, Niraj K. Jha, Yuchen Liu
Specifically, for single-denoising-step pruning, we develop a novel ranking algorithm, Generalized Weighted Page Rank (G-WPR), to identify redundant tokens, and a similarity-based recovery method to restore tokens for the convolution operation.
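A hedged sketch of the general prune-then-recover pattern described above: tokens are ranked by an importance score, the lowest are dropped, and each dropped position is refilled from its most similar kept token so the convolution still sees a full grid. The scoring here is a placeholder (token norm), not the paper's G-WPR, and all names are hypothetical.

```python
import torch

def prune_and_recover(tokens, scores, keep_ratio=0.5):
    """Drop the lowest-scoring tokens, then fill each dropped position with its
    most similar kept token (a generic stand-in for similarity-based recovery).
    tokens: (N, C), scores: (N,)."""
    n = tokens.shape[0]
    k = max(1, int(n * keep_ratio))
    keep_idx = scores.topk(k).indices
    drop_mask = torch.ones(n, dtype=torch.bool)
    drop_mask[keep_idx] = False

    kept = tokens[keep_idx]                                 # (k, C)
    # Cosine similarity between dropped tokens and kept tokens.
    sim = torch.nn.functional.normalize(tokens[drop_mask], dim=-1) @ \
          torch.nn.functional.normalize(kept, dim=-1).T     # (n-k, k)
    nearest = sim.argmax(dim=-1)

    recovered = tokens.clone()
    recovered[drop_mask] = kept[nearest]                    # copy nearest kept token
    return kept, recovered

# Toy usage: 16 tokens of dim 8, scored by their L2 norm as a placeholder importance.
tokens = torch.randn(16, 8)
kept, recovered = prune_and_recover(tokens, tokens.norm(dim=-1), keep_ratio=0.25)
print(kept.shape, recovered.shape)  # torch.Size([4, 8]) torch.Size([16, 8])
```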
1 code implementation • 29 Mar 2024 • Luozhou Wang, Ziyang Mai, Guibao Shen, Yixun Liang, Xin Tao, Pengfei Wan, Di Zhang, Yijun Li, Yingcong Chen
In this work, we present a novel approach for motion customization in video generation, addressing the widespread gap in the exploration of motion representation within video generative models.
1 code implementation • 28 Feb 2024 • Yiyan Huang, Cheuk Hang Leung, Siyi Wang, Yijun Li, Qi Wu
To address these challenges, this paper introduces a Distributionally Robust Metric (DRM) for CATE estimator selection.
no code implementations • CVPR 2024 • Ryan D. Burgert, Brian L. Price, Jason Kuen, Yijun Li, Michael S. Ryoo
Using our method, we generate a dataset of 150,000 objects with alpha.
1 code implementation • 16 Dec 2023 • Yijun Li, Cheuk Hang Leung, Xiangqian Sun, Chaoqun Wang, Yiyan Huang, Xing Yan, Qi Wu, Dongdong Wang, Zhixiang Huang
Consumer credit services offered by e-commerce platforms provide customers with convenient loan access during shopping and have the potential to stimulate sales.
no code implementations • 10 Dec 2023 • Zhipeng Bao, Yijun Li, Krishna Kumar Singh, Yu-Xiong Wang, Martial Hebert
Despite recent significant strides achieved by diffusion-based Text-to-Image (T2I) models, current systems are still less capable of ensuring decent compositional generation aligned with text prompts, particularly for multi-object generation.
no code implementations • 22 Sep 2023 • Yijun Li, Mengzhuo Guo, Miłosz Kadziński, Qingpeng Zhang
This study presents novel preference learning approaches to multiple criteria sorting problems in the presence of temporal criteria.
no code implementations • 26 Aug 2023 • Chaoqun Wang, Yijun Li, Xiangqian Sun, Qi Wu, Dongdong Wang, Zhixiang Huang
The tensorized LSTM assigns each variable a unique hidden state, which together form a matrix $\mathbf{H}_t$, whereas the standard LSTM models all the variables with a single shared hidden state $\mathbf{h}_t$.
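A minimal shape-level illustration of this distinction, assuming a PyTorch setting: a shared LSTM keeps one hidden vector for all variables, while a per-variable arrangement stacks one hidden state per variable into a matrix. This is only a sketch of the shapes involved, not the paper's tensorized LSTM cell.

```python
import torch
import torch.nn as nn

num_vars, hidden_dim, T = 5, 16, 10
series = torch.randn(T, num_vars)  # one multivariate sequence of length T

# Standard LSTM: all variables share a single hidden state vector of size hidden_dim.
shared_lstm = nn.LSTM(input_size=num_vars, hidden_size=hidden_dim)
_, (h_t, _) = shared_lstm(series.unsqueeze(1))           # h_t: (1, 1, hidden_dim)

# Per-variable view (shape illustration only): each variable gets its own LSTM,
# so the per-step hidden states stack into a matrix of shape (num_vars, hidden_dim).
per_var_lstms = nn.ModuleList(nn.LSTM(1, hidden_dim) for _ in range(num_vars))
H_t = torch.cat([per_var_lstms[i](series[:, i:i+1].unsqueeze(1))[1][0]
                 for i in range(num_vars)], dim=1)        # (1, num_vars, hidden_dim)

print(h_t.shape, H_t.shape)
```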
no code implementations • 4 Jul 2023 • Zhen Zhu, Yijun Li, Weijie Lyu, Krishna Kumar Singh, Zhixin Shu, Soeren Pirk, Derek Hoiem
We investigate how to generate multimodal image outputs, such as RGB, depth, and surface normals, with a single generative model.
1 code implementation • 26 Jun 2023 • Luozhou Wang, Guibao Shen, Wenhang Ge, Guangyong Chen, Yijun Li, Ying-Cong Chen
However, these models are learned based on the premise of perfect alignment between the text and extra conditions.
1 code implementation • 15 Jun 2023 • Yijun Li, Cheuk Hang Leung, Qi Wu
Multivariate sequential data collected in practice often exhibit temporal irregularities, including nonuniform time intervals and component misalignment.
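For concreteness, one common way to represent such data, shown as a small sketch under assumed names, is a list of (time, variable, value) observation triplets rather than a dense matrix; it makes both the nonuniform intervals and the per-variable misalignment explicit.

```python
import numpy as np

# Irregular, misaligned multivariate series as (time, variable_id, value) triplets.
observations = np.array([
    (0.0, 0, 1.3),   # variable 0 observed at t = 0.0
    (0.7, 2, 0.4),   # variable 2 observed at t = 0.7 (variables 0, 1 unobserved)
    (1.9, 0, 1.1),
    (2.0, 1, 5.2),
], dtype=[("t", float), ("var", int), ("value", float)])

# Nonuniform time intervals between successive observations:
print(np.diff(observations["t"]))       # [0.7 1.2 0.1]
# Component misalignment: each variable has its own observation times.
for v in range(3):
    print(v, observations["t"][observations["var"] == v])
```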
no code implementations • 10 Mar 2023 • Ziqian Wu, Xingzhe He, Yijun Li, Cheng Yang, Rui Liu, Shiying Xiong, Bo Zhu
We present a lightweight neural PDE representation to discover the hidden structure and predict the solutions of different nonlinear PDEs.
2 code implementations • 6 Feb 2023 • Gaurav Parmar, Krishna Kumar Singh, Richard Zhang, Yijun Li, Jingwan Lu, Jun-Yan Zhu
However, it is still challenging to directly apply these models for editing real images for two reasons.
Ranked #16 on Text-based Image Editing on PIE-Bench
no code implementations • ICCV 2023 • Manuel Ladron De Guevara, Jose Echevarria, Yijun Li, Yannick Hold-Geoffroy, Cameron Smith, Daichi Ito
We present a novel method for automatic vectorized avatar generation from a single portrait image.
no code implementations • 21 Nov 2022 • Lana X. Garmire, Yijun Li, Qianhui Huang, Chuan Xu, Sarah Teichmann, Naftali Kaminski, Matteo Pellegrini, Quan Nguyen, Andrew E. Teschendorff
Deciphering cell type heterogeneity is crucial for systematically understanding tissue homeostasis and its dysregulation in diseases.
no code implementations • 4 Nov 2022 • Yuheng Li, Yijun Li, Jingwan Lu, Eli Shechtman, Yong Jae Lee, Krishna Kumar Singh
We introduce a new method for diverse foreground generation with explicit control over various factors.
no code implementations • 24 Aug 2022 • Yuchen Liu, Zhixin Shu, Yijun Li, Zhe Lin, Richard Zhang, S. Y. Kung
While concatenating GAN inversion and a 3D-aware, noise-to-image GAN is a straightforward solution, it is inefficient and may lead to a noticeable drop in editing quality.
no code implementations • 22 Jun 2022 • Marian Lupascu, Ryan Murdock, Ionut Mironică, Yijun Li
In this work, we propose a complete framework that generates visual art.
1 code implementation • CVPR 2022 • Gaurav Parmar, Yijun Li, Jingwan Lu, Richard Zhang, Jun-Yan Zhu, Krishna Kumar Singh
We propose a new method to invert and edit such complex images in the latent space of GANs, such as StyleGAN2.
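As background for readers unfamiliar with GAN inversion, below is a generic optimization-based inversion sketch: find a latent code whose generated image reconstructs the target under an L2 loss. The tiny MLP stands in for a pretrained StyleGAN2 and the loss is deliberately simplified, so this is not the paper's method.

```python
import torch

# A tiny stand-in generator; in practice this would be a pretrained StyleGAN2.
generator = torch.nn.Sequential(torch.nn.Linear(64, 256), torch.nn.Tanh(),
                                torch.nn.Linear(256, 3 * 32 * 32))

def invert(target_image, steps=200, lr=0.05):
    """Optimization-based GAN inversion: optimize a latent code so that the
    generated image reconstructs the target (plain L2 loss here)."""
    z = torch.zeros(1, 64, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = torch.nn.functional.mse_loss(generator(z), target_image)
        loss.backward()
        opt.step()
    return z.detach(), loss.item()

target = torch.rand(1, 3 * 32 * 32)   # a flattened "real" image to invert
z_hat, final_loss = invert(target)
print(z_hat.shape, round(final_loss, 4))
```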
no code implementations • 18 Mar 2022 • Yijun Li, Stefan Stanojevic, Lana X. Garmire
Spatial transcriptomics (ST) has advanced significantly in the last few years.
no code implementations • 18 Jan 2022 • Stefan Stanojevic, Yijun Li, Lana X. Garmire
Recently developed technologies to generate single-cell genomic data have made a revolutionary impact in the field of biology.
no code implementations • ICCV 2021 • Yuheng Li, Yijun Li, Jingwan Lu, Eli Shechtman, Yong Jae Lee, Krishna Kumar Singh
We propose a new approach for high resolution semantic image synthesis.
1 code implementation • 14 Sep 2021 • Bing He, Yao Xiao, Haodong Liang, Qianhui Huang, Yuheng Du, Yijun Li, David Garmire, Duxin Sun, Lana X. Garmire
Intercellular heterogeneity is a major obstacle to successful precision medicine.
3 code implementations • CVPR 2021 • Utkarsh Ojha, Yijun Li, Jingwan Lu, Alexei A. Efros, Yong Jae Lee, Eli Shechtman, Richard Zhang
Training generative models, such as GANs, on a target domain containing limited examples (e.g., 10) can easily result in overfitting.
Ranked #3 on 10-shot image generation on Babies
no code implementations • CVPR 2021 • Pei Wang, Yijun Li, Krishna Kumar Singh, Jingwan Lu, Nuno Vasconcelos
We introduce an inversion based method, denoted as IMAge-Guided model INvErsion (IMAGINE), to generate high-quality and diverse images from only a single training sample.
1 code implementation • CVPR 2021 • Pei Wang, Yijun Li, Nuno Vasconcelos
Extensive research in neural style transfer methods has shown that the correlation between features extracted by a pre-trained VGG network has a remarkable ability to capture the visual style of an image.
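A minimal sketch of the Gram (correlation) matrix computation referred to above, with randomly generated tensors standing in for VGG activations.

```python
import torch

def gram_matrix(features):
    """Channel-wise Gram (correlation) matrix of a feature map.
    features: (C, H, W) activations from a pretrained VGG layer."""
    c, h, w = features.shape
    flat = features.reshape(c, h * w)
    return flat @ flat.T / (c * h * w)   # (C, C), normalized

# Toy usage: a style distance can be defined as the MSE between Gram matrices.
f_style, f_output = torch.randn(64, 32, 32), torch.randn(64, 32, 32)
style_loss = torch.nn.functional.mse_loss(gram_matrix(f_output), gram_matrix(f_style))
print(style_loss.item())
```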
1 code implementation • CVPR 2021 • Yuchen Liu, Zhixin Shu, Yijun Li, Zhe Lin, Federico Perazzi, S. Y. Kung
We then propose a novel content-aware method to guide the processes of both pruning and distillation.
no code implementations • NeurIPS 2020 • Yijun Li, Richard Zhang, Jingwan Lu, Eli Shechtman
Few-shot image generation seeks to generate more data of a given domain, with only few available training examples.
Ranked #4 on 10-shot image generation on Babies
1 code implementation • 12 Aug 2020 • Wenqing Chu, Wei-Chih Hung, Yi-Hsuan Tsai, Yu-Ting Chang, Yijun Li, Deng Cai, Ming-Hsuan Yang
Caricature is an artistic drawing created to abstract or exaggerate facial features of a person.
1 code implementation • ECCV 2020 • Hung-Yu Tseng, Matthew Fisher, Jingwan Lu, Yijun Li, Vladimir Kim, Ming-Hsuan Yang
People often create art by following an artistic workflow involving multiple stages that inform the overall design.
1 code implementation • CVPR 2020 • Huan Wang, Yijun Li, Yuehai Wang, Haoji Hu, Ming-Hsuan Yang
In this work, we present a new knowledge distillation method (named Collaborative Distillation) for encoder-decoder based neural style transfer to reduce the convolutional filters.
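The block below is a generic feature-level distillation sketch (student features regressed toward a frozen teacher through a 1x1 adapter), included only to illustrate the distillation setting; it is not the Collaborative Distillation objective itself, and the teacher/student modules are hypothetical stand-ins.

```python
import torch
import torch.nn as nn

# Hypothetical stand-ins: a wide teacher encoder and a slimmer student encoder.
teacher = nn.Sequential(nn.Conv2d(3, 64, 3, padding=1), nn.ReLU())
student = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())
adapter = nn.Conv2d(16, 64, 1)   # 1x1 conv to match channel counts

def distill_loss(x):
    """Generic feature-level distillation: the (adapted) student features are
    regressed toward the frozen teacher's features."""
    with torch.no_grad():
        t_feat = teacher(x)
    s_feat = adapter(student(x))
    return nn.functional.mse_loss(s_feat, t_feat)

x = torch.randn(2, 3, 64, 64)
print(distill_loss(x).item())
```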
no code implementations • 25 Dec 2019 • Yijun Li, Lu Jiang, Ming-Hsuan Yang
Image extrapolation aims at expanding the narrow field of view of a given image patch.
1 code implementation • CVPR 2019 • Yijun Li, Chen Fang, Aaron Hertzmann, Eli Shechtman, Ming-Hsuan Yang
We propose a high-quality photo-to-pencil translation method with fine-grained control over the drawing style.
1 code implementation • ECCV 2018 • Yijun Li, Chen Fang, Jimei Yang, Zhaowen Wang, Xin Lu, Ming-Hsuan Yang
Existing video prediction methods mainly rely on observing multiple historical frames or focus on predicting only the next frame.
12 code implementations • ECCV 2018 • Yijun Li, Ming-Yu Liu, Xueting Li, Ming-Hsuan Yang, Jan Kautz
Photorealistic image stylization concerns transferring the style of a reference photo to a content photo with the constraint that the stylized photo should remain photorealistic.
no code implementations • 11 Oct 2017 • Yijun Li, Jia-Bin Huang, Narendra Ahuja, Ming-Hsuan Yang
In contrast to existing methods that consider only the guidance image, the proposed algorithm can selectively transfer salient structures that are consistent with both guidance and target images.
15 code implementations • NeurIPS 2017 • Yijun Li, Chen Fang, Jimei Yang, Zhaowen Wang, Xin Lu, Ming-Hsuan Yang
The whitening and coloring transforms reflect a direct matching of feature covariance of the content image to a given style image, which shares similar spirits with the optimization of Gram matrix based cost in neural style transfer.
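A compact sketch of a whitening-and-coloring transform of this kind, assuming (C, H*W) feature matrices from the same layer; the SVD-based formulation and the epsilon regularizer are implementation choices, not necessarily those of the paper.

```python
import torch

def wct(content_feat, style_feat, eps=1e-5):
    """Whitening-and-coloring transform: whiten the content features, then
    color them so their covariance matches the style features.
    Inputs are (C, H*W) feature matrices from the same network layer."""
    def center(f):
        mean = f.mean(dim=1, keepdim=True)
        return f - mean, mean

    fc, _ = center(content_feat)
    fs, mu_s = center(style_feat)

    # Whitening: remove the content covariance.
    Uc, Sc, _ = torch.linalg.svd(fc @ fc.T / (fc.shape[1] - 1) + eps * torch.eye(fc.shape[0]))
    whitened = Uc @ torch.diag(Sc.rsqrt()) @ Uc.T @ fc

    # Coloring: impose the style covariance, then restore the style mean.
    Us, Ss, _ = torch.linalg.svd(fs @ fs.T / (fs.shape[1] - 1) + eps * torch.eye(fs.shape[0]))
    return Us @ torch.diag(Ss.sqrt()) @ Us.T @ whitened + mu_s

content, style = torch.randn(32, 400), torch.randn(32, 400) * 2.0
out = wct(content, style)
print(out.shape, out.std().item())  # (32, 400); spread now close to the style's
```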
2 code implementations • CVPR 2017 • Yijun Li, Sifei Liu, Jimei Yang, Ming-Hsuan Yang
In this paper, we propose an effective face completion algorithm using a deep generative model.
no code implementations • CVPR 2017 • Yijun Li, Chen Fang, Jimei Yang, Zhaowen Wang, Xin Lu, Ming-Hsuan Yang
Recent progress on deep discriminative and generative modeling has shown promising results on texture synthesis.
no code implementations • 17 Jun 2015 • Wei Liu, Yijun Li, Xiaogang Chen, Jie Yang, Qiang Wu, Jingyi Yu
A popular solution is to upsample the obtained noisy low-resolution depth map with the guidance of the companion high-resolution color image.
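One classic instance of such guided upsampling is joint bilateral upsampling; the sketch below (a naive nearest-neighbor upsample followed by color-guided bilateral smoothing) is a simplified illustration of that idea rather than the method proposed in the paper.

```python
import numpy as np

def joint_bilateral_upsample(depth_lr, color_hr, scale, sigma_s=2.0, sigma_r=0.1, radius=3):
    """Minimal joint bilateral upsampling: nearest-neighbor upsample the
    low-resolution depth, then smooth it with weights combining spatial
    distance and similarity in the high-resolution color guidance."""
    depth = np.kron(depth_lr, np.ones((scale, scale)))       # naive upsample
    H, W = depth.shape
    out = np.zeros_like(depth)
    for y in range(H):
        for x in range(W):
            y0, y1 = max(0, y - radius), min(H, y + radius + 1)
            x0, x1 = max(0, x - radius), min(W, x + radius + 1)
            dy, dx = np.mgrid[y0 - y:y1 - y, x0 - x:x1 - x]
            w_s = np.exp(-(dy**2 + dx**2) / (2 * sigma_s**2))
            diff = color_hr[y0:y1, x0:x1] - color_hr[y, x]
            w_r = np.exp(-np.sum(diff**2, axis=-1) / (2 * sigma_r**2))
            w = w_s * w_r
            out[y, x] = np.sum(w * depth[y0:y1, x0:x1]) / np.sum(w)
    return out

# Toy usage: an 8x8 noisy depth map upsampled to 16x16 with a color guide.
depth_lr = np.random.rand(8, 8)
color_hr = np.random.rand(16, 16, 3)
print(joint_bilateral_upsample(depth_lr, color_hr, scale=2).shape)  # (16, 16)
```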