Search Results for author: Liangliang Shi

Found 8 papers, 0 papers with code

AutoMix: Mixup Networks for Sample Interpolation via Cooperative Barycenter Learning

no code implementations • ECCV 2020 • Jianchao Zhu, Liangliang Shi, Junchi Yan, Hongyuan Zha

This paper proposes new ways of sample mixing for data augmentation by treating the process as the generation of a barycenter in a metric space.

Data Augmentation
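For orientation, below is a minimal sketch of barycenter-style sample mixing in the special case of a Euclidean metric, where the barycenter reduces to vanilla mixup-style linear interpolation; AutoMix itself learns the mixing cooperatively rather than using this fixed rule, and the function name here is illustrative.

```python
import numpy as np

def mixup_barycenter(x1, x2, y1, y2, alpha=0.2, rng=np.random.default_rng()):
    """Euclidean barycenter of two samples (vanilla mixup).

    A weighted barycenter in a metric space reduces to linear
    interpolation when the metric is Euclidean; AutoMix replaces
    this fixed rule with a learned, cooperative mixing network.
    """
    lam = rng.beta(alpha, alpha)          # barycenter weight
    x = lam * x1 + (1.0 - lam) * x2       # interpolated input
    y = lam * y1 + (1.0 - lam) * y2       # interpolated (soft) label
    return x, y

# Usage: mix two images (as float arrays) and their one-hot labels.
x1, x2 = np.random.rand(3, 32, 32), np.random.rand(3, 32, 32)
y1, y2 = np.eye(10)[3], np.eye(10)[7]
x_mix, y_mix = mixup_barycenter(x1, x2, y1, y2)
```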

Double-Bounded Optimal Transport for Advanced Clustering and Classification

no code implementations • 21 Jan 2024 • Liangliang Shi, Zhaoqi Shen, Junchi Yan

Even with vanilla Softmax-trained features, extensive experimental results show that our method achieves good results with an improved inference scheme in the testing stage.

Clustering
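As generic background (not the paper's method), entropic optimal transport solved by Sinkhorn iterations is the usual starting point for OT-based clustering; the paper's double-bounded variant relaxes the marginal equality constraints to bounds, which this sketch does not reproduce.

```python
import numpy as np

def sinkhorn(cost, r, c, eps=0.05, n_iters=200):
    """Entropic OT between marginals r (samples) and c (clusters).

    Generic Sinkhorn background only; the paper's double-bounded
    formulation replaces the equality constraints on the marginals
    with upper/lower bounds (not shown here).
    """
    K = np.exp(-cost / eps)               # Gibbs kernel
    u = np.ones_like(r)
    for _ in range(n_iters):
        v = c / (K.T @ u)                 # column scaling
        u = r / (K @ v)                   # row scaling
    return u[:, None] * K * v[None, :]    # transport plan

# Usage: assign 6 points to 2 clusters from a cost matrix.
cost = np.random.rand(6, 2)
plan = sinkhorn(cost, r=np.full(6, 1 / 6), c=np.full(2, 1 / 2))
labels = plan.argmax(axis=1)
```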

On the Expressiveness, Predictability and Interpretability of Neural Temporal Point Processes

no code implementations • 29 Sep 2021 • Liangliang Shi, Fangyu Ding, Junchi Yan, Yanjie Duan, Guangjian Tian

Despite rapid advances in neural temporal point processes (NTPPs), which enjoy high model capacity, there remain standing gaps to fill in model expressiveness, predictability, and interpretability, especially given the wide application of event sequence modeling.

Point Processes

Improving Generative Adversarial Networks via Adversarial Learning in Latent Space

no code implementations • 29 Sep 2021 • Yang Li, Yichuan Mo, Liangliang Shi, Junchi Yan, Xiaolu Zhang, Jun Zhou

Although many efforts have been made in backbone architecture design, loss functions, and training techniques, few results address how sampling in the latent space affects final performance, and existing work on the latent space mainly focuses on controllability.

Adversarial Robustness via Adaptive Label Smoothing

no code implementations • 29 Sep 2021 • Qibing Ren, Liangliang Shi, Lanjun Wang, Junchi Yan

We first show, both theoretically and empirically, that strong smoothing in adversarial training (AT) increases the local smoothness of the loss surface, which benefits robustness but sacrifices training loss and thereby the accuracy of samples near the decision boundary.

Adversarial Robustness
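For reference, fixed label smoothing (the baseline the adaptive scheme builds on) can be sketched as below; how the per-sample smoothing strength is adapted follows the paper and is not reproduced here, and `smoothed_cross_entropy` is an illustrative helper name.

```python
import torch
import torch.nn.functional as F

def smoothed_cross_entropy(logits, targets, smoothing=0.1):
    """Cross-entropy against smoothed labels.

    With smoothing s and K classes, the true class gets weight
    1 - s and every class gets an extra s / K. The paper's
    adaptive scheme varies s per sample; here s is a constant.
    """
    log_probs = F.log_softmax(logits, dim=-1)
    nll = -log_probs.gather(-1, targets.unsqueeze(-1)).squeeze(-1)
    uniform = -log_probs.mean(dim=-1)     # loss against the uniform part
    return ((1.0 - smoothing) * nll + smoothing * uniform).mean()

# Usage on a random batch of 4 examples, 10 classes.
logits = torch.randn(4, 10)
targets = torch.randint(0, 10, (4,))
loss = smoothed_cross_entropy(logits, targets)
```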

IID-GAN: an IID Sampling Perspective for Regularizing Mode Collapse

no code implementations • 1 Jun 2021 • Yang Li, Liangliang Shi, Junchi Yan

Based on this observation, and on a necessary condition of IID generation, namely that the inverse samples of target data should also be IID under the source distribution, we propose a new loss that encourages inverse samples of real data to stay close to the Gaussian source in latent space, regularizing the generation to be IID from the target distribution.
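A minimal sketch of the kind of latent-space closeness loss the abstract describes, assuming a hypothetical inverse (encoder) network and a standard Gaussian source; the paper's actual loss may differ.

```python
import torch

def gaussian_closeness_loss(inverse_net, real_batch):
    """Penalize inverted latents for deviating from N(0, I).

    Sketch of a latent-space regularizer in the spirit of the
    abstract: push the first two moments of z = inverse(x_real)
    toward those of the standard Gaussian source. The paper's
    actual loss may differ; inverse_net is hypothetical.
    """
    z = inverse_net(real_batch)                   # inverse samples
    mean_term = z.mean(dim=0).pow(2).sum()        # mean -> 0
    cov = (z.T @ z) / z.size(0)                   # second moment
    eye = torch.eye(z.size(1), device=z.device)
    cov_term = (cov - eye).pow(2).sum()           # covariance -> I
    return mean_term + cov_term

# Usage with a stand-in linear "inverse" network.
inverse_net = torch.nn.Linear(64, 16)
loss = gaussian_closeness_loss(inverse_net, torch.randn(32, 64))
```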

InvertGAN: Reducing mode collapse with multi-dimensional Gaussian Inversion

no code implementations • 1 Jan 2021 • Liangliang Shi, Yang Li, Junchi Yan

Generative adversarial networks have shown their ability to capture high-dimensional complex distributions and generate realistic data samples, e.g., images.
