Search Results for author: Pan Xie

Found 12 papers, 3 papers with code

Hyper-SD: Trajectory Segmented Consistency Model for Efficient Image Synthesis

no code implementations • 21 Apr 2024 • Yuxi Ren, Xin Xia, Yanzuo Lu, Jiacheng Zhang, Jie Wu, Pan Xie, Xing Wang, Xuefeng Xiao

Current distillation techniques generally fall into two distinct categories: i) ODE Trajectory Preservation; and ii) ODE Trajectory Reformulation.

Image Generation

UniFL: Improve Stable Diffusion via Unified Feedback Learning

no code implementations • 8 Apr 2024 • Jiacheng Zhang, Jie Wu, Yuxi Ren, Xin Xia, Huafeng Kuang, Pan Xie, Jiashi Li, Xuefeng Xiao, Weilin Huang, Min Zheng, Lean Fu, Guanbin Li

Diffusion models have revolutionized the field of image generation, leading to the proliferation of high-quality models and diverse downstream applications.

Image Generation

ByteEdit: Boost, Comply and Accelerate Generative Image Editing

no code implementations • 7 Apr 2024 • Yuxi Ren, Jie Wu, Yanzuo Lu, Huafeng Kuang, Xin Xia, Xionghui Wang, Qianqian Wang, Yixing Zhu, Pan Xie, Shiyin Wang, Xuefeng Xiao, Yitong Wang, Min Zheng, Lean Fu

Recent advancements in diffusion-based generative image editing have sparked a profound revolution, reshaping the landscape of image outpainting and inpainting tasks.

Image Outpainting

ResAdapter: Domain Consistent Resolution Adapter for Diffusion Models

1 code implementation • 4 Mar 2024 • Jiaxiang Cheng, Pan Xie, Xin Xia, Jiashi Li, Jie Wu, Yuxi Ren, Huixia Li, Xuefeng Xiao, Min Zheng, Lean Fu

In particular, after learning pure resolution priors, ResAdapter trained on a general dataset generates resolution-free images with personalized diffusion models while preserving their original style domain.

Image Generation

Modeling Balanced Explicit and Implicit Relations with Contrastive Learning for Knowledge Concept Recommendation in MOOCs

no code implementations • 13 Feb 2024 • Hengnian Gu, Zhiyi Duan, Pan Xie, Dongdai Zhou

To address this issue, we propose a novel framework based on contrastive learning, which can represent and balance the explicit and implicit relations for knowledge concept recommendation in MOOCs (CL-KCRec).

Contrastive Learning • Implicit Relations

Control-A-Video: Controllable Text-to-Video Generation with Diffusion Models

1 code implementation • 23 May 2023 • Weifeng Chen, Yatai Ji, Jie Wu, Hefeng Wu, Pan Xie, Jiashi Li, Xin Xia, Xuefeng Xiao, Liang Lin

Based on a pre-trained conditional text-to-image (T2I) diffusion model, our model aims to generate videos conditioned on a sequence of control signals, such as edge or depth maps.

Optical Flow Estimation • Style Transfer • +4

G2P-DDM: Generating Sign Pose Sequence from Gloss Sequence with Discrete Diffusion Model

no code implementations • 19 Aug 2022 • Pan Xie, Qipeng Zhang, Taiyi Peng, Hao Tang, Yao Du, Zexian Li

Our approach focuses on the transformation of sign gloss sequences into their corresponding sign pose sequences (G2P).

Denoising • Quantization • +1

MvSR-NAT: Multi-view Subset Regularization for Non-Autoregressive Machine Translation

no code implementations • 19 Aug 2021 • Pan Xie, Zexian Li, Xiaohui Hu

Conditional masked language models (CMLM) have shown impressive progress in non-autoregressive machine translation (NAT).

Machine Translation • Sentence • +1

Multi-Scale Local-Temporal Similarity Fusion for Continuous Sign Language Recognition

no code implementations • 27 Jul 2021 • Pan Xie, Zhi Cui, Yao Du, Mengyi Zhao, Jianwei Cui, Bin Wang, Xiaohui Hu

Continuous sign language recognition (cSLR) is a publicly significant task that transcribes a sign language video into an ordered gloss sequence.

Sign Language Recognition

PiSLTRc: Position-informed Sign Language Transformer with Content-aware Convolution

no code implementations • 27 Jul 2021 • Pan Xie, Mengyi Zhao, Xiaohui Hu

Owing to the Transformer's superiority in learning long-term dependencies, the sign language Transformer model has achieved remarkable progress in Sign Language Recognition (SLR) and Translation (SLT).

Position • Sign Language Recognition • +1

Infusing Sequential Information into Conditional Masked Translation Model with Self-Review Mechanism

1 code implementation • COLING 2020 • Pan Xie, Zhi Cui, Xiuyin Chen, Xiaohui Hu, Jianwei Cui, Bin Wang

Concretely, we insert a left-to-right mask into the same decoder of CMTM, and then induce it to autoregressively review whether each word generated by CMTM should be replaced or kept.

Knowledge Distillation • Translation
