Search Results for author: Yiyang Ma

Found 10 papers, 5 papers with code

JanusFlow: Harmonizing Autoregression and Rectified Flow for Unified Multimodal Understanding and Generation

1 code implementation • 12 Nov 2024 • Yiyang Ma, Xingchao Liu, Xiaokang Chen, Wen Liu, Chengyue Wu, Zhiyu Wu, Zizheng Pan, Zhenda Xie, Haowei Zhang, Xingkai Yu, Liang Zhao, Yisong Wang, Jiaying Liu, Chong Ruan

To further improve the performance of our unified model, we adopt two key strategies: (i) decoupling the understanding and generation encoders, and (ii) aligning their representations during unified training.

Language Modelling · Large Language Model
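The second strategy above, aligning the representations of the two encoders during unified training, can be illustrated with a minimal sketch. This is not JanusFlow's actual training code; the feature shapes and the cosine-based alignment loss are assumptions chosen for illustration only.

```python
import numpy as np

def alignment_loss(und_feats, gen_feats, eps=1e-8):
    """Mean (1 - cosine similarity) between paired feature rows.

    und_feats, gen_feats: arrays of shape (N, D), one row per token/patch,
    from the understanding and generation encoders respectively.
    The loss is 0 when paired rows point in the same direction.
    """
    u = und_feats / (np.linalg.norm(und_feats, axis=1, keepdims=True) + eps)
    g = gen_feats / (np.linalg.norm(gen_feats, axis=1, keepdims=True) + eps)
    cos = np.sum(u * g, axis=1)          # per-row cosine similarity
    return float(np.mean(1.0 - cos))     # averaged alignment penalty

# Identical features are perfectly aligned (loss ~ 0);
# orthogonal features give the maximum penalty of 1.
f = np.random.rand(4, 8)
print(alignment_loss(f, f))
```

In practice such a term would be added to the main training objective with a weighting coefficient, so the two encoders learn mutually compatible representations without sharing parameters.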

Correcting Diffusion-Based Perceptual Image Compression with Privileged End-to-End Decoder

no code implementations • 7 Apr 2024 • Yiyang Ma, Wenhan Yang, Jiaying Liu

We build a diffusion model and design a novel paradigm that combines it with an end-to-end decoder; the latter is responsible for transmitting the privileged information extracted at the encoder side.

Decoder · Image Compression

Diffusion Enhancement for Cloud Removal in Ultra-Resolution Remote Sensing Imagery

1 code implementation • 25 Jan 2024 • Jialu Sui, Yiyang Ma, Wenhan Yang, Xiaokang Zhang, Man-on Pun, Jiaying Liu

The presence of cloud layers severely compromises the quality and effectiveness of optical remote sensing (RS) images.

Cloud Removal · Image Generation

Solving Diffusion ODEs with Optimal Boundary Conditions for Better Image Super-Resolution

no code implementations • 24 May 2023 • Yiyang Ma, Huan Yang, Wenhan Yang, Jianlong Fu, Jiaying Liu

Diffusion models, a powerful class of generative models, have achieved impressive results on image super-resolution (SR) tasks.

Efficient Exploration · Image Super-Resolution

AI Illustrator: Translating Raw Descriptions into Images by Prompt-based Cross-Modal Generation

1 code implementation • 7 Sep 2022 • Yiyang Ma, Huan Yang, Bei Liu, Jianlong Fu, Jiaying Liu

To address this issue, we propose a Prompt-based Cross-Modal Generation Framework (PCM-Frame) that leverages two powerful pre-trained models, CLIP and StyleGAN.

Image Generation

Meta-Interpolation: Time-Arbitrary Frame Interpolation via Dual Meta-Learning

no code implementations • 27 Jul 2022 • Shixing Yu, Yiyang Ma, Wenhan Yang, Wei Xiang, Jiaying Liu

Extensive qualitative and quantitative evaluations, along with ablation studies, demonstrate that introducing meta-learning into our framework in this well-designed way not only yields performance superior to state-of-the-art frame interpolation approaches but also extends the model's capacity to interpolate at an arbitrary time step.

Meta-Learning · Optical Flow Estimation +1
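For context, "time-arbitrary" interpolation means synthesizing a frame at any continuous time step t between two inputs, rather than only at t = 0.5. The sketch below is the naive linear-blend baseline that such methods improve upon, not the paper's dual meta-learning approach; the frame shapes are assumptions for illustration.

```python
import numpy as np

def blend_frames(frame0, frame1, t):
    """Naive time-arbitrary interpolation: pixelwise linear blend at t in [0, 1].

    t = 0 returns frame0, t = 1 returns frame1, and intermediate values
    cross-fade between them. Learned methods instead warp pixels along
    estimated motion, which avoids the ghosting this baseline produces.
    """
    if not 0.0 <= t <= 1.0:
        raise ValueError("t must lie in [0, 1]")
    return (1.0 - t) * frame0 + t * frame1

# Two tiny 2x2 "frames"; the midpoint blend averages them.
f0 = np.zeros((2, 2))
f1 = np.ones((2, 2))
print(blend_frames(f0, f1, 0.25))
```

A learned, flow-based interpolator replaces the fixed blend with motion-compensated warping, which is where meta-learning over the time step t becomes useful.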
