Search Results for author: Songhua Liu

Found 24 papers, 17 papers with code

OminiControl: Minimal and Universal Control for Diffusion Transformer

2 code implementations · 22 Nov 2024 · Zhenxiong Tan, Songhua Liu, Xingyi Yang, Qiaochu Xue, Xinchao Wang

In this paper, we introduce OminiControl, a highly versatile and parameter-efficient framework that integrates image conditions into pre-trained Diffusion Transformer (DiT) models.

Teddy: Efficient Large-Scale Dataset Distillation via Taylor-Approximated Matching

no code implementations · 10 Oct 2024 · Ruonan Yu, Songhua Liu, Jingwen Ye, Xinchao Wang

Addressing these concerns, this paper introduces Teddy, a Taylor-approximated dataset distillation framework designed to handle large-scale datasets and enhance efficiency.

Dataset Distillation
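
The entry above names Taylor-approximated matching without spelling it out. Purely as a generic illustration of the idea (not necessarily the paper's exact objective), a first-order Taylor expansion around an anchor parameter state lets matching losses at nearby states be approximated from a single gradient evaluation:

```latex
% Generic first-order Taylor expansion; an illustration of the kind of
% approximation the title refers to, not the paper's exact formulation.
\mathcal{L}(\theta) \approx \mathcal{L}(\theta_0)
  + \nabla_\theta \mathcal{L}(\theta_0)^{\top} (\theta - \theta_0)
```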

LinFusion: 1 GPU, 1 Minute, 16K Image

1 code implementation · 3 Sep 2024 · Songhua Liu, Weihao Yu, Zhenxiong Tan, Xinchao Wang

Modern diffusion models, particularly those utilizing a Transformer-based UNet for denoising, rely heavily on self-attention operations to manage complex spatial relationships, thus achieving impressive generation performance.

16k · Causal Inference +1

Heavy Labels Out! Dataset Distillation with Label Space Lightening

no code implementations · 15 Aug 2024 · Ruonan Yu, Songhua Liu, Zigeng Chen, Jingwen Ye, Xinchao Wang

Extensive experiments demonstrate that with only about 0.003% of the original storage required for a complete set of soft labels, we achieve comparable performance to current state-of-the-art dataset distillation methods on large-scale datasets.

Dataset Distillation

Video-Infinity: Distributed Long Video Generation

no code implementations · 24 Jun 2024 · Zhenxiong Tan, Xingyi Yang, Songhua Liu, Xinchao Wang

Specifically, we propose two coherent mechanisms: Clip parallelism and Dual-scope attention.

Video Generation

Distilled Datamodel with Reverse Gradient Matching

no code implementations · CVPR 2024 · Jingwen Ye, Ruonan Yu, Songhua Liu, Xinchao Wang

To investigate the impact of changes in training data on a pre-trained model, a common approach is leave-one-out retraining.
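
For readers unfamiliar with the baseline the abstract mentions, here is a minimal sketch of leave-one-out retraining; `train_model` and `evaluate` are assumed user-supplied helpers, and the full procedure needs one retraining run per sample, which is exactly the cost the paper seeks to avoid:

```python
# Minimal sketch of leave-one-out retraining, the expensive baseline the
# abstract refers to (not the paper's distilled-datamodel method).
# `train_model` and `evaluate` are assumed user-supplied helpers.

def leave_one_out_influence(dataset, train_model, evaluate):
    """Estimate each sample's influence by retraining without it."""
    base_score = evaluate(train_model(dataset))
    influences = []
    for i in range(len(dataset)):
        # Retrain from scratch on the dataset minus sample i.
        reduced = dataset[:i] + dataset[i + 1:]
        score = evaluate(train_model(reduced))
        # Positive influence: removing the sample hurt performance.
        influences.append(base_score - score)
    return influences
```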

MindBridge: A Cross-Subject Brain Decoding Framework

1 code implementation · CVPR 2024 · Shizun Wang, Songhua Liu, Zhenxiong Tan, Xinchao Wang

Currently, brain decoding is confined to a per-subject-per-model paradigm, limiting its applicability to the same individual for whom the decoding model is trained.

Brain Decoding · Data Augmentation +2

Mutual-modality Adversarial Attack with Semantic Perturbation

no code implementations · 20 Dec 2023 · Jingwen Ye, Ruonan Yu, Songhua Liu, Xinchao Wang

Our approach outperforms state-of-the-art attack methods and can be readily deployed as a plug-and-play solution.

Adversarial Attack

SG-Former: Self-guided Transformer with Evolving Token Reallocation

1 code implementation · ICCV 2023 · Sucheng Ren, Xingyi Yang, Songhua Liu, Xinchao Wang

At the heart of our approach is to utilize a significance map, which is estimated through hybrid-scale self-attention and evolves itself during training, to reallocate tokens based on the significance of each region.

Distribution Shift Inversion for Out-of-Distribution Prediction

1 code implementation · CVPR 2023 · Runpeng Yu, Songhua Liu, Xingyi Yang, Xinchao Wang

The machine learning community has witnessed the emergence of a myriad of Out-of-Distribution (OoD) algorithms, which address the distribution shift between the training and testing distributions by searching for a unified predictor or invariant feature representation.

Domain Generalization

Master: Meta Style Transformer for Controllable Zero-Shot and Few-Shot Artistic Style Transfer

no code implementations · CVPR 2023 · Hao Tang, Songhua Liu, Tianwei Lin, Shaoli Huang, Fu Li, Dongliang He, Xinchao Wang

On the other hand, different from the vanilla version, we adopt a learnable scaling operation on content features before content-style feature interaction, which better preserves the original similarity between a pair of content features while ensuring the stylization quality.

Meta-Learning · Style Transfer

Any-to-Any Style Transfer: Making Picasso and Da Vinci Collaborate

1 code implementation · 19 Apr 2023 · Songhua Liu, Jingwen Ye, Xinchao Wang

Existing approaches either apply the holistic style of the style image in a global manner, or migrate local colors and textures of the style image to the content counterparts in a pre-defined way.

Style Transfer

Partial Network Cloning

1 code implementation · CVPR 2023 · Jingwen Ye, Songhua Liu, Xinchao Wang

Unlike prior methods that update all or at least part of the parameters in the target network throughout the knowledge transfer process, PNC conducts partial parametric "cloning" from a source network and then injects the cloned module to the target, without modifying its parameters.

Transfer Learning
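
As a toy illustration of the clone-and-inject idea described above (the paper's actual localization and injection procedure is more involved), one can copy a block from a frozen source network and graft it into a target; the block name here is hypothetical, and for simplicity the cloned module replaces the target's block rather than following the paper's exact injection scheme:

```python
import copy

import torch.nn as nn

# Toy sketch of "clone and inject": copy one block from a source
# network and graft it into the target, leaving the target's remaining
# parameters untouched. Not the paper's exact PNC procedure.

def clone_and_inject(source: nn.Module, target: nn.Module, block: str) -> nn.Module:
    cloned = copy.deepcopy(getattr(source, block))
    for p in cloned.parameters():
        p.requires_grad_(False)     # transferred knowledge stays fixed
    setattr(target, block, cloned)  # inject; other parameters untouched
    return target
```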

Dataset Distillation: A Comprehensive Review

1 code implementation · 17 Jan 2023 · Ruonan Yu, Songhua Liu, Xinchao Wang

Recent success of deep learning is largely attributed to the sheer amount of data used for training deep neural networks. Despite the unprecedented success, the massive data, unfortunately, significantly increases the burden on storage and transmission and further gives rise to a cumbersome model training process.

Dataset Condensation · Dataset Distillation

Few-Shot Dataset Distillation via Translative Pre-Training

1 code implementation · ICCV 2023 · Songhua Liu, Xinchao Wang

We pre-train the translator on some large datasets like ImageNet so that it requires only a limited number of adaptation steps on the target dataset.

Dataset Distillation

Slimmable Dataset Condensation

no code implementations · CVPR 2023 · Songhua Liu, Jingwen Ye, Runpeng Yu, Xinchao Wang

In this paper, we explore the problem of slimmable dataset condensation, to extract a smaller synthetic dataset given only previous condensation results.

Dataset Condensation · Dataset Distillation

Dataset Factorization for Condensation

1 code implementation · NeurIPS 2022 · Songhua Liu, Kai Wang, Xingyi Yang, Jingwen Ye, Xinchao Wang

In this paper, we study dataset distillation (DD) from a novel perspective and introduce a dataset factorization approach, termed HaBa, which is a plug-and-play strategy portable to any existing DD baseline.

Dataset Distillation · Diversity +2

Dataset Distillation via Factorization

3 code implementations · 30 Oct 2022 · Songhua Liu, Kai Wang, Xingyi Yang, Jingwen Ye, Xinchao Wang

In this paper, we study dataset distillation (DD) from a novel perspective and introduce a dataset factorization approach, termed HaBa, which is a plug-and-play strategy portable to any existing DD baseline.

Dataset Distillation · Hallucination +1

Deep Model Reassembly

1 code implementation · 24 Oct 2022 · Xingyi Yang, Daquan Zhou, Songhua Liu, Jingwen Ye, Xinchao Wang

Given a collection of heterogeneous models pre-trained from distinct sources and with diverse architectures, the goal of DeRy, as its name implies, is to first dissect each model into distinctive building blocks, and then selectively reassemble the derived blocks to produce customized networks under both the hardware resource and performance constraints.

Transfer Learning

Learning with Recoverable Forgetting

1 code implementation · 17 Jul 2022 · Jingwen Ye, Yifang Fu, Jie Song, Xingyi Yang, Songhua Liu, Xin Jin, Mingli Song, Xinchao Wang

Life-long learning aims at learning a sequence of tasks without forgetting the previously acquired knowledge.

General Knowledge · Transfer Learning

DynaST: Dynamic Sparse Transformer for Exemplar-Guided Image Generation

1 code implementation · 13 Jul 2022 · Songhua Liu, Jingwen Ye, Sucheng Ren, Xinchao Wang

Prior approaches, despite the promising results, have relied on either estimating dense attention to compute per-point matching, which is limited to only coarse scales due to the quadratic memory cost, or fixing the number of correspondences to achieve linear complexity, which lacks flexibility.

Face Generation · Style Transfer
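
To make the trade-off in the abstract concrete, here is a didactic contrast between dense per-point matching, whose affinity matrix grows quadratically with the number of tokens, and a fixed top-k variant with a constant number of correspondences per query; this is an illustrative sketch, not DynaST's dynamic sparse attention:

```python
import torch

# Didactic contrast, not DynaST's mechanism: dense matching stores an
# (N, N) affinity matrix, while a fixed top-k variant keeps only k
# correspondences per query point.

def dense_matching(q: torch.Tensor, k: torch.Tensor) -> torch.Tensor:
    # Full (N, N) attention: memory grows quadratically with N.
    return torch.softmax(q @ k.T, dim=-1)

def topk_matching(q: torch.Tensor, k: torch.Tensor, num_matches: int = 8):
    # Fixed number of correspondences: O(N * k) memory afterwards, but
    # inflexible, since k is the same for every query point. (The full
    # affinity is still materialized here; real methods avoid that too.)
    affinity = q @ k.T
    vals, idx = affinity.topk(num_matches, dim=-1)
    return torch.softmax(vals, dim=-1), idx
```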

Paint Transformer: Feed Forward Neural Painting with Stroke Prediction

2 code implementations · ICCV 2021 · Songhua Liu, Tianwei Lin, Dongliang He, Fu Li, Ruifeng Deng, Xin Li, Errui Ding, Hao Wang

Neural painting refers to the procedure of producing a series of strokes for a given image and non-photo-realistically recreating it using neural networks.

Object Detection · Reinforcement Learning (RL) +1

AdaAttN: Revisit Attention Mechanism in Arbitrary Neural Style Transfer

3 code implementations · ICCV 2021 · Songhua Liu, Tianwei Lin, Dongliang He, Fu Li, Meiling Wang, Xin Li, Zhengxing Sun, Qian Li, Errui Ding

Finally, the content feature is normalized so that it demonstrates the same local feature statistics as the calculated per-point weighted style feature statistics.

Style Transfer · Video Style Transfer
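
A minimal sketch of the per-point statistic matching that sentence describes, in the spirit of adaptive instance normalization; the shapes and names below are assumptions for illustration, not the paper's exact AdaAttN module:

```python
import torch

# Hedged sketch of per-point statistic matching (AdaIN-style), assuming
# attn: (N_c, N_s) attention weights (rows sum to 1),
# content: (N_c, C) content features, style: (N_s, C) style features.
# Illustrative only, not the paper's exact AdaAttN module.

def per_point_stat_transfer(attn, content, style, eps=1e-5):
    mean = attn @ style                      # weighted style mean, (N_c, C)
    var = attn @ style.pow(2) - mean.pow(2)  # E[x^2] - E[x]^2 per point
    std = var.clamp(min=0).add(eps).sqrt()
    # Normalize the content feature, then impose the per-point weighted
    # style statistics, as the abstract describes.
    c_norm = (content - content.mean(0)) / (content.std(0) + eps)
    return c_norm * std + mean
```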
