Search Results for author: Jingwen Ye

Found 22 papers, 17 papers with code

Ungeneralizable Examples

no code implementations • 22 Apr 2024 • Jingwen Ye, Xinchao Wang

The training of contemporary deep learning models heavily relies on publicly available data, posing a risk of unauthorized access to online data and raising concerns about data privacy.

Distilled Datamodel with Reverse Gradient Matching

no code implementations • 22 Apr 2024 • Jingwen Ye, Ruonan Yu, Songhua Liu, Xinchao Wang

To investigate the impact of changes in training data on a pre-trained model, a common approach is leave-one-out retraining.
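
For context, leave-one-out retraining estimates a training point's influence by retraining the model without that point and measuring how predictions shift; the paper's reverse gradient matching is meant to avoid exactly this cost. A minimal sketch of the baseline idea, with a toy scikit-learn classifier standing in for the pre-trained model (dataset and model choice are illustrative assumptions, not the paper's setup):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Toy dataset standing in for the training set.
X, y = make_classification(n_samples=200, n_features=10, random_state=0)
X_test = X[:20]  # points whose predictions we probe

# Model trained on the full dataset.
full_model = LogisticRegression(max_iter=1000).fit(X, y)
base = full_model.predict_proba(X_test)[:, 1]

# Leave-one-out: retrain with sample i removed and measure the shift in
# test predictions -- one full retraining per sample is what makes this
# approach expensive at deep-learning scale.
influence = np.empty(len(X))
for i in range(len(X)):
    mask = np.arange(len(X)) != i
    loo_model = LogisticRegression(max_iter=1000).fit(X[mask], y[mask])
    influence[i] = np.abs(loo_model.predict_proba(X_test)[:, 1] - base).mean()

print("most influential training index:", influence.argmax())
```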

Mutual-modality Adversarial Attack with Semantic Perturbation

no code implementations • 20 Dec 2023 • Jingwen Ye, Ruonan Yu, Songhua Liu, Xinchao Wang

Our approach outperforms state-of-the-art attack methods and can be readily deployed as a plug-and-play solution.

Adversarial Attack

Improving Expressivity of GNNs with Subgraph-specific Factor Embedded Normalization

1 code implementation31 May 2023 KaiXuan Chen, Shunyu Liu, Tongtian Zhu, Tongya Zheng, Haofei Zhang, Zunlei Feng, Jingwen Ye, Mingli Song

Graph Neural Networks (GNNs) have emerged as a powerful category of learning architecture for handling graph-structured data.

Any-to-Any Style Transfer: Making Picasso and Da Vinci Collaborate

1 code implementation • 19 Apr 2023 • Songhua Liu, Jingwen Ye, Xinchao Wang

Existing approaches either apply the holistic style of the style image in a global manner, or migrate local colors and textures of the style image to the content counterparts in a pre-defined way.

Style Transfer

Partial Network Cloning

1 code implementation • CVPR 2023 • Jingwen Ye, Songhua Liu, Xinchao Wang

Unlike prior methods that update all or at least part of the parameters in the target network throughout the knowledge transfer process, PNC conducts partial parametric "cloning" from a source network and then injects the cloned module to the target, without modifying its parameters.

Transfer Learning
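
To make the abstract's "cloning then injecting" concrete, here is a minimal PyTorch sketch of the general idea: copy a sub-module out of a frozen source network and fuse its output into the target without touching the target's parameters. The networks, the choice of which block to clone, and the fusion point are all hypothetical; the paper learns this selection rather than hard-coding it.

```python
import copy
import torch
import torch.nn as nn

source = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 10))
target = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 10))

# "Clone" a sub-module from the source network and freeze it, so the
# transferred knowledge stays fixed.
cloned = copy.deepcopy(source[2])
for p in cloned.parameters():
    p.requires_grad = False

class PartiallyCloned(nn.Module):
    """Target network with the cloned module injected; the original
    target parameters are left unmodified."""
    def __init__(self, target, cloned):
        super().__init__()
        self.backbone, self.cloned = target, cloned

    def forward(self, x):
        h = self.backbone[1](self.backbone[0](x))    # shared trunk
        return self.backbone[2](h) + self.cloned(h)  # fuse cloned head

model = PartiallyCloned(target, cloned)
print(model(torch.randn(4, 16)).shape)  # torch.Size([4, 10])
```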

Slimmable Dataset Condensation

no code implementations • CVPR 2023 • Songhua Liu, Jingwen Ye, Runpeng Yu, Xinchao Wang

In this paper, we explore the problem of slimmable dataset condensation, to extract a smaller synthetic dataset given only previous condensation results.

Dataset Condensation

Dataset Factorization for Condensation

1 code implementation • NeurIPS 2022 • Songhua Liu, Kai Wang, Xingyi Yang, Jingwen Ye, Xinchao Wang

In this paper, we study dataset distillation (DD) from a novel perspective and introduce a dataset factorization approach, termed HaBa, which is a plug-and-play strategy portable to any existing DD baseline.

Hallucination • Informativeness

Dataset Distillation via Factorization

3 code implementations • 30 Oct 2022 • Songhua Liu, Kai Wang, Xingyi Yang, Jingwen Ye, Xinchao Wang

In this paper, we study dataset distillation (DD) from a novel perspective and introduce a dataset factorization approach, termed HaBa, which is a plug-and-play strategy portable to any existing DD baseline.

Hallucination • Informativeness

Deep Model Reassembly

1 code implementation • 24 Oct 2022 • Xingyi Yang, Daquan Zhou, Songhua Liu, Jingwen Ye, Xinchao Wang

Given a collection of heterogeneous models pre-trained from distinct sources and with diverse architectures, the goal of DeRy, as its name implies, is to first dissect each model into distinctive building blocks, and then selectively reassemble the derived blocks to produce customized networks under both the hardware resource and performance constraints.

Transfer Learning

A Survey of Neural Trees

1 code implementation • 7 Sep 2022 • Haoling Li, Jie Song, Mengqi Xue, Haofei Zhang, Jingwen Ye, Lechao Cheng, Mingli Song

This survey aims to present a comprehensive review of neural trees (NTs) and to identify how they enhance model interpretability.

Learning with Recoverable Forgetting

1 code implementation • 17 Jul 2022 • Jingwen Ye, Yifang Fu, Jie Song, Xingyi Yang, Songhua Liu, Xin Jin, Mingli Song, Xinchao Wang

Life-long learning aims at learning a sequence of tasks without forgetting the previously acquired knowledge.

General Knowledge • Transfer Learning

DynaST: Dynamic Sparse Transformer for Exemplar-Guided Image Generation

1 code implementation • 13 Jul 2022 • Songhua Liu, Jingwen Ye, Sucheng Ren, Xinchao Wang

Prior approaches, despite promising results, have relied on either estimating dense attention to compute per-point matching, which is limited to coarse scales due to its quadratic memory cost, or fixing the number of correspondences to achieve linear complexity, which lacks flexibility.

Face Generation • Style Transfer

Factorizing Knowledge in Neural Networks

1 code implementation • 4 Jul 2022 • Xingyi Yang, Jingwen Ye, Xinchao Wang

The core idea of KF lies in the modularization and assemblability of knowledge: given a pretrained network model as input, KF aims to decompose it into several factor networks, each of which handles only a dedicated task and maintains task-specific knowledge factorized from the source network.

Disentanglement • Transfer Learning

Spot-adaptive Knowledge Distillation

2 code implementations • 5 May 2022 • Jie Song, Ying Chen, Jingwen Ye, Mingli Song

Knowledge distillation (KD) has become a well-established paradigm for compressing deep neural networks.

Knowledge Distillation
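
As background for this entry, the standard KD objective blends a hard-label task loss with a temperature-softened KL term between teacher and student logits. A minimal PyTorch sketch of that generic recipe follows; it is the classic formulation (Hinton et al., 2015), not the spot-adaptive variant this paper proposes, and the temperature and weighting values are illustrative defaults.

```python
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    """Generic knowledge-distillation objective."""
    # Soft targets: match the teacher's temperature-smoothed distribution.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)  # rescale so gradient magnitudes stay comparable across T
    # Hard targets: ordinary cross-entropy on the true labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

s, t = torch.randn(8, 100), torch.randn(8, 100)
print(kd_loss(s, t, torch.randint(0, 100, (8,))))
```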

Safe Distillation Box

1 code implementation • 5 Dec 2021 • Jingwen Ye, Yining Mao, Jie Song, Xinchao Wang, Cheng Jin, Mingli Song

In other words, all users may employ a model in SDB for inference, but only authorized users get access to KD from the model.

Knowledge Distillation

Online Knowledge Distillation for Efficient Pose Estimation

1 code implementation • ICCV 2021 • Zheng Li, Jingwen Ye, Mingli Song, Ying Huang, Zhigeng Pan

However, existing pose distillation works rely on a heavy pre-trained estimator to perform knowledge transfer and require a complex two-stage learning procedure.

Knowledge Distillation • Pose Estimation

DEPARA: Deep Attribution Graph for Deep Knowledge Transferability

1 code implementation • CVPR 2020 • Jie Song, Yixin Chen, Jingwen Ye, Xinchao Wang, Chengchao Shen, Feng Mao, Mingli Song

In this paper, we propose the DEeP Attribution gRAph (DEPARA) to investigate the transferability of knowledge learned from pre-trained deep neural networks (PR-DNNs).

Model Selection • Transfer Learning

Amalgamating Filtered Knowledge: Learning Task-customized Student from Multi-task Teachers

1 code implementation • 28 May 2019 • Jingwen Ye, Xinchao Wang, Yixin Ji, Kairi Ou, Mingli Song

Many well-trained Convolutional Neural Network (CNN) models have now been released online by developers for the sake of effortless reproduction.

Neural Style Transfer: A Review

8 code implementations • 11 May 2017 • Yongcheng Jing, Yezhou Yang, Zunlei Feng, Jingwen Ye, Yizhou Yu, Mingli Song

We first propose a taxonomy of current algorithms in the field of NST.

Style Transfer
