Search Results for author: Zhengyang Liang

Found 4 papers, 2 papers with code

AnimateDiff: Animate Your Personalized Text-to-Image Diffusion Models without Specific Tuning

4 code implementations • 10 Jul 2023 • Yuwei Guo, Ceyuan Yang, Anyi Rao, Zhengyang Liang, Yaohui Wang, Yu Qiao, Maneesh Agrawala, Dahua Lin, Bo Dai

Once trained, the motion module can be inserted into a personalized T2I model to form a personalized animation generator.

Image Animation
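
The snippet below is a minimal sketch of what "inserting a pretrained motion module into a personalized T2I model" can look like in practice, using the Hugging Face diffusers integration of AnimateDiff rather than the paper's official repository; the checkpoint and adapter names are illustrative assumptions.

```python
# Minimal sketch (assumes the diffusers AnimateDiff integration; checkpoint names are illustrative).
import torch
from diffusers import AnimateDiffPipeline, MotionAdapter, DDIMScheduler
from diffusers.utils import export_to_gif

# Motion module trained once on video data.
adapter = MotionAdapter.from_pretrained("guoyww/animatediff-motion-adapter-v1-5-2")

# Any personalized Stable Diffusion 1.5 checkpoint can serve as the T2I backbone.
pipe = AnimateDiffPipeline.from_pretrained(
    "SG161222/Realistic_Vision_V5.1_noVAE",  # illustrative personalized T2I model
    motion_adapter=adapter,
    torch_dtype=torch.float16,
).to("cuda")
pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)

# The combined pipeline generates a short animation instead of a single image.
frames = pipe(prompt="a corgi running on the beach", num_frames=16).frames[0]
export_to_gif(frames, "animation.gif")
```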

SwinGNN: Rethinking Permutation Invariance in Diffusion Models for Graph Generation

1 code implementation • 4 Jul 2023 • Qi Yan, Zhengyang Liang, Yang Song, Renjie Liao, Lele Wang

Diffusion models based on permutation-equivariant networks can learn permutation-invariant distributions for graph data.

Denoising • Graph Generation
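
As a quick illustration of the permutation symmetry this paper revisits, the sketch below checks numerically that a single graph-convolution step H' = relu(A H W) is permutation-equivariant: relabeling the nodes of the input permutes the output rows in exactly the same way. The layer is a generic toy example, not the paper's architecture.

```python
# Toy check of permutation equivariance for a graph layer H' = relu(A @ H @ W).
# (Generic illustration of the symmetry discussed in the paper, not SwinGNN itself.)
import numpy as np

rng = np.random.default_rng(0)
n, d = 5, 4
A = rng.integers(0, 2, size=(n, n)).astype(float)   # random adjacency
A = np.triu(A, 1); A = A + A.T                       # symmetric, no self-loops
H = rng.normal(size=(n, d))                          # node features
W = rng.normal(size=(d, d))                          # layer weights

def layer(A, H, W):
    return np.maximum(A @ H @ W, 0.0)                # one message-passing step

P = np.eye(n)[rng.permutation(n)]                    # random permutation matrix

out_then_permute = P @ layer(A, H, W)                # permute the layer's output
permute_then_out = layer(P @ A @ P.T, P @ H, W)      # permute the graph, then apply the layer

print(np.allclose(out_then_permute, permute_then_out))  # True: equivariance holds
```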

A Hypothesis for the Cognitive Difficulty of Images

no code implementations • 29 Sep 2021 • Xu Cheng, Xin Wang, Haotian Xue, Zhengyang Liang, Xin Jin, Quanshi Zhang

This paper proposes a hypothesis to analyze the underlying reason for the cognitive difficulty of an image from two perspectives: a cognitive image usually makes a DNN strongly activated by cognitive concepts, and discarding massive non-cognitive concepts may also help the DNN focus on cognitive concepts.

A Hypothesis for the Aesthetic Appreciation in Neural Networks

no code implementations • 31 Jul 2021 • Xu Cheng, Xin Wang, Haotian Xue, Zhengyang Liang, Quanshi Zhang

This paper proposes a hypothesis for aesthetic appreciation: aesthetic images make a neural network strengthen salient concepts and discard inessential concepts.
