Search Results for author: Xingyi Yang

Found 24 papers, 20 papers with code

SG-Former: Self-guided Transformer with Evolving Token Reallocation

1 code implementation ICCV 2023 Sucheng Ren, Xingyi Yang, Songhua Liu, Xinchao Wang

At the heart of our approach is a significance map, estimated through hybrid-scale self-attention and evolving during training, which reallocates tokens based on the significance of each region.
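The token-reallocation idea can be sketched in a few lines. The snippet below is a hypothetical simplification: it scores tokens with a plain softmax self-attention map (standing in for the paper's hybrid-scale significance map) and keeps only the most significant ones; all names, shapes, and the keep ratio are illustrative.

```python
import numpy as np

def significance_reallocate(tokens, keep_ratio=0.5):
    """Toy sketch: score tokens with a softmax 'significance map' and keep
    the most significant ones. Not the actual SG-Former method."""
    n, d = tokens.shape
    # Plain self-attention scores as a stand-in for the hybrid-scale map.
    attn = tokens @ tokens.T / np.sqrt(d)
    attn = np.exp(attn - attn.max(axis=-1, keepdims=True))
    attn /= attn.sum(axis=-1, keepdims=True)
    significance = attn.mean(axis=0)      # how much attention each token receives
    k = max(1, int(n * keep_ratio))
    keep = np.argsort(significance)[-k:]  # indices of the k most significant tokens
    return tokens[np.sort(keep)], significance

tokens = np.random.default_rng(0).normal(size=(8, 4))
kept, sig = significance_reallocate(tokens, keep_ratio=0.5)
```

In the full method the map itself is learned and re-estimated as training progresses, so the reallocation evolves rather than being fixed as above.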

Diffusion Model as Representation Learner

1 code implementation ICCV 2023 Xingyi Yang, Xinchao Wang

In this paper, we conduct an in-depth investigation of the representation power of DPMs, and propose a novel knowledge transfer method that leverages the knowledge acquired by generative DPMs for recognition tasks.

Denoising Image Classification +3

Distribution Shift Inversion for Out-of-Distribution Prediction

1 code implementation CVPR 2023 Runpeng Yu, Songhua Liu, Xingyi Yang, Xinchao Wang

The machine learning community has witnessed the emergence of a myriad of Out-of-Distribution (OoD) algorithms, which address the distribution shift between the training and the testing distribution by searching for a unified predictor or invariant feature representation.

Domain Generalization

Anything-3D: Towards Single-view Anything Reconstruction in the Wild

1 code implementation 19 Apr 2023 Qiuhong Shen, Xingyi Yang, Xinchao Wang

3D reconstruction from a single RGB image in unconstrained real-world scenarios presents numerous challenges due to the inherent diversity and complexity of objects and environments.

3D Reconstruction Semantic Segmentation

Diffusion Probabilistic Model Made Slim

no code implementations CVPR 2023 Xingyi Yang, Daquan Zhou, Jiashi Feng, Xinchao Wang

Despite the recent visually-pleasing results achieved, the massive computational cost has been a long-standing flaw for diffusion probabilistic models (DPMs), which, in turn, greatly limits their applications on resource-limited platforms.

Image Generation Unconditional Image Generation

Dataset Factorization for Condensation

1 code implementation NeurIPS 2022 Songhua Liu, Kai Wang, Xingyi Yang, Jingwen Ye, Xinchao Wang

In this paper, we study dataset distillation (DD) from a novel perspective and introduce a dataset factorization approach, termed HaBa, which is a plug-and-play strategy portable to any existing DD baseline.

Hallucination Informativeness

Dataset Distillation via Factorization

3 code implementations 30 Oct 2022 Songhua Liu, Kai Wang, Xingyi Yang, Jingwen Ye, Xinchao Wang

In this paper, we study dataset distillation (DD) from a novel perspective and introduce a dataset factorization approach, termed HaBa, which is a plug-and-play strategy portable to any existing DD baseline.

Hallucination Informativeness
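The factorization idea behind HaBa can be illustrated with a toy example: store a few shared bases plus a few "hallucinator" transforms, and recover many distinct samples by composing them. The linear transforms and shapes below are made up for illustration; the actual method trains both parts end-to-end on the distillation objective.

```python
import numpy as np

# Toy sketch of HaBa-style factorization: a small set of shared "bases"
# plus several "hallucinator" transforms whose compositions yield many
# distinct synthetic samples. All shapes and transforms are illustrative.
rng = np.random.default_rng(0)
bases = rng.normal(size=(3, 16))                # 3 basis images, flattened to 16-d
hallucinators = [rng.normal(size=(16, 16)) for _ in range(4)]  # 4 linear transforms

# Composing every basis with every hallucinator gives 3 * 4 = 12 samples
# from only 3 + 4 stored components.
synthetic = np.stack([b @ H for b in bases for H in hallucinators])
```

The storage saving grows multiplicatively: with B bases and H hallucinators, B + H stored components yield B * H samples.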

Deep Model Reassembly

1 code implementation 24 Oct 2022 Xingyi Yang, Daquan Zhou, Songhua Liu, Jingwen Ye, Xinchao Wang

Given a collection of heterogeneous models pre-trained from distinct sources and with diverse architectures, the goal of DeRy, as its name implies, is to first dissect each model into distinctive building blocks, and then selectively reassemble the derived blocks to produce customized networks under both the hardware resource and performance constraints.

Transfer Learning
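A rough illustration of the dissect-and-reassemble mechanics, with models reduced to lists of callable blocks. The real DeRy matches blocks by functional similarity and optimizes the selection under hardware and performance constraints; everything below is illustrative.

```python
# Toy sketch of the DeRy idea: treat pretrained models as sequences of
# blocks, dissect them at a cut point, and reassemble blocks from
# different models into a new network.
model_a = [lambda x: x + 1, lambda x: x * 2, lambda x: x - 3]
model_b = [lambda x: x * 10, lambda x: x + 5]

def dissect(model, cut):
    """Split a model into (head, tail) at block index `cut`."""
    return model[:cut], model[cut:]

head_a, _ = dissect(model_a, 2)   # first two blocks of model A
_, tail_b = dissect(model_b, 1)   # last block of model B

reassembled = head_a + tail_b

def forward(model, x):
    for block in model:
        x = block(x)
    return x

y = forward(reassembled, 3)       # ((3 + 1) * 2) + 5 = 13
```

In practice the blocks are neural network stages with mismatched feature shapes, so reassembly also requires lightweight adapters between them.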

Learning with Recoverable Forgetting

1 code implementation 17 Jul 2022 Jingwen Ye, Yifang Fu, Jie Song, Xingyi Yang, Songhua Liu, Xin Jin, Mingli Song, Xinchao Wang

Life-long learning aims at learning a sequence of tasks without forgetting the previously acquired knowledge.

General Knowledge Transfer Learning

Factorizing Knowledge in Neural Networks

1 code implementation 4 Jul 2022 Xingyi Yang, Jingwen Ye, Xinchao Wang

The core idea of KF lies in the modularization and assemblability of knowledge: given a pretrained network model as input, KF aims to decompose it into several factor networks, each of which handles only a dedicated task and maintains task-specific knowledge factorized from the source network.

Disentanglement Transfer Learning

Neural Point Process for Learning Spatiotemporal Event Dynamics

1 code implementation 12 Dec 2021 Zihao Zhou, Xingyi Yang, Ryan Rossi, Handong Zhao, Rose Yu

The key construction of our approach is the nonparametric space-time intensity function, governed by a latent process.

Point Processes Variational Inference
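As a hand-crafted stand-in for the learned nonparametric intensity, the sketch below uses a constant background rate plus a Gaussian kernel around each past event. The functional form, bandwidth, and background rate are invented for illustration; the paper instead learns the intensity via a latent process.

```python
import numpy as np

def intensity(s, t, events, bandwidth=1.0, mu=0.1):
    """Toy space-time intensity: background rate mu plus a Gaussian bump
    around each past event. Not the paper's learned intensity."""
    lam = mu
    for si, ti in events:
        if ti < t:                          # only past events excite the process
            d2 = (s - si) ** 2 + (t - ti) ** 2
            lam += np.exp(-d2 / (2 * bandwidth ** 2))
    return lam

events = [(0.0, 1.0), (2.0, 1.5)]
# Intensity is highest near recent events and decays with space-time distance.
near = intensity(0.1, 1.2, events)
far = intensity(10.0, 1.2, events)
```

Given such an intensity, event likelihoods follow the standard point-process form, which is what the variational inference in the paper targets.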

Neural Point Process for Forecasting Spatiotemporal Events

no code implementations 1 Jan 2021 Zihao Zhou, Xingyi Yang, Xinyi He, Ryan Rossi, Handong Zhao, Rose Yu

To the best of our knowledge, this is the first neural point process model that can jointly predict both the space and time of events.

Density Estimation Point Processes

Stochastic Gradient Variance Reduction by Solving a Filtering Problem

1 code implementation 22 Dec 2020 Xingyi Yang

Deep neural networks (DNNs) are typically optimized using stochastic gradient descent (SGD).

Stochastic Optimization
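The gradient-filtering idea can be sketched by treating each stochastic gradient as a noisy observation of the true gradient and denoising it with a scalar Kalman-style filter before the SGD step. This is a hypothetical simplification of the paper's method; all constants and the quadratic objective are illustrative.

```python
import numpy as np

def filtered_sgd(grad_fn, w0, lr=0.1, steps=200, process_var=1e-2, noise_var=1.0):
    """Sketch: denoise noisy gradient observations with a scalar
    Kalman-style filter, then take the SGD step on the filtered estimate."""
    w = w0
    g_est, p = 0.0, 1.0                 # filter state and its variance
    for _ in range(steps):
        g_obs = grad_fn(w)              # noisy gradient observation
        p += process_var                # predict: gradient drifts between steps
        k = p / (p + noise_var)         # Kalman gain
        g_est += k * (g_obs - g_est)    # update toward the new observation
        p *= (1 - k)
        w -= lr * g_est                 # SGD step on the filtered gradient
    return w

rng = np.random.default_rng(0)
# Minimize f(w) = w^2 whose true gradient 2w is observed with unit-variance noise.
w_final = filtered_sgd(lambda w: 2 * w + rng.normal(scale=1.0), w0=5.0)
```

Because the filter averages over recent observations, the effective gradient variance entering the update is reduced at the cost of a small tracking lag.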

DSRNA: Differentiable Search of Robust Neural Architectures

no code implementations CVPR 2021 Ramtin Hosseini, Xingyi Yang, Pengtao Xie

To address this problem, we propose methods to perform differentiable search of robust neural architectures.

Transfer Learning or Self-supervised Learning? A Tale of Two Pretraining Paradigms

1 code implementation 19 Jun 2020 Xingyi Yang, Xuehai He, Yuxiao Liang, Yue Yang, Shanghang Zhang, Pengtao Xie

There has not been a clear understanding of which properties of data and tasks make one approach outperform the other.

Self-Supervised Learning Transfer Learning

XRayGAN: Consistency-preserving Generation of X-ray Images from Radiology Reports

1 code implementation 17 Jun 2020 Xingyi Yang, Nandiraju Gireesh, Eric Xing, Pengtao Xie

To address this problem, we develop methods to generate view-consistent, high-fidelity, and high-resolution X-ray images from radiology reports to facilitate radiology training of medical students.

COVID-CT-Dataset: A CT Scan Dataset about COVID-19

19 code implementations 30 Mar 2020 Xingyi Yang, Xuehai He, Jinyu Zhao, Yichen Zhang, Shanghang Zhang, Pengtao Xie

Using this dataset, we develop diagnosis methods based on multi-task learning and self-supervised learning that achieve an F1 of 0.90, an AUC of 0.98, and an accuracy of 0.89.

Computed Tomography (CT) COVID-19 Diagnosis +2
