no code implementations • 31 May 2023 • Zhaocheng Liu, Zhongxiang Fan, Jian Liang, Dongying Kong, Han Li
However, since the best performance is usually achieved by one-epoch training, it remains unknown whether a multi-epoch training paradigm could achieve better results.
no code implementations • CVPR 2023 • Junchi Yu, Jian Liang, Ran He
Recent works employ different graph editions to generate augmented environments and learn an invariant GNN for generalization.
1 code implementation • 27 Mar 2023 • Jian Liang, Ran He, Tieniu Tan
Test-time adaptation (TTA), an emerging paradigm, has the potential to adapt a pre-trained model to unlabeled data during testing, before making predictions.
1 code implementation • CVPR 2023 • Fangrui Lv, Jian Liang, Shuang Li, Jinming Zhang, Di Liu
A classical solution to DG is domain augmentation, the common belief of which is that diversifying source domains will be conducive to the out-of-distribution generalization.
no code implementations • 22 Mar 2023 • Puning Yang, Jian Liang, Jie Cao, Ran He
Out-of-distribution (OOD) detection is a crucial aspect of deploying machine learning models in open-world applications.
no code implementations • 19 Mar 2023 • Lijun Sheng, Jian Liang, Ran He, Zilei Wang, Tieniu Tan
To address this issue, we propose a model preprocessing framework, named AdaptGuard, to improve the security of model adaptation algorithms.
no code implementations • 17 Mar 2023 • Yuhe Ding, Jian Liang, Jie Cao, Aihua Zheng, Ran He
Briefly, MODIFY first trains a generative model in the target domain and then translates a source input to the target domain via the provided style model.
no code implementations • 17 Mar 2023 • Zhengbo Wang, Jian Liang, Zilei Wang, Tieniu Tan
To address this issue, we present a novel transductive ZSL method that produces semantic attributes of the unseen data and imposes them on the generative process.
no code implementations • 9 Feb 2023 • Yuhe Ding, Jian Liang, Bo Jiang, Aihua Zheng, Ran He
Existing cross-domain keypoint detection methods always require accessing the source data during adaptation, which may violate the data privacy law and pose serious security concerns.
1 code implementation • The Eleventh International Conference on Learning Representations (ICLR 2023) 2023 • Yifan Zhang, Xue Wang, Jian Liang, Zhang Zhang, Liang Wang, Rong Jin, Tieniu Tan
A fundamental challenge for machine learning models is how to generalize learned models for out-of-distribution (OOD) data.
Ranked #3 on Domain Adaptation on Office-Home
1 code implementation • 5 Jan 2023 • Boqiang Xu, Lingxiao He, Jian Liang, Zhenan Sun
To reduce the interference of the noise during feature matching, we mainly focus on visible regions that appear in both images and develop a visibility graph to calculate the similarity.
1 code implementation • 21 Oct 2022 • Jiyang Guan, Jian Liang, Ran He
To reduce the training time, we further develop SAC-m, which selects CutMix-augmented samples as model inputs, without the need to train surrogate models or generate adversarial examples.
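A minimal sketch of the fingerprinting idea behind this entry, assuming probe images as NumPy arrays and models as callables returning class-probability vectors (hypothetical interfaces, not the released code): CutMix-mixed probes are fed to a model, and the pairwise correlation of its outputs serves as the fingerprint for comparing a suspect model against the victim.

```python
import numpy as np

def cutmix(x1: np.ndarray, x2: np.ndarray, alpha: float = 1.0) -> np.ndarray:
    """Paste a random rectangle of x2 onto x1 (standard CutMix mixing)."""
    lam = np.random.beta(alpha, alpha)
    h, w = x1.shape[-2:]
    ch, cw = int(h * np.sqrt(1 - lam)), int(w * np.sqrt(1 - lam))
    cy, cx = np.random.randint(h), np.random.randint(w)
    top, bottom = max(cy - ch // 2, 0), min(cy + ch // 2, h)
    left, right = max(cx - cw // 2, 0), min(cx + cw // 2, w)
    mixed = x1.copy()
    mixed[..., top:bottom, left:right] = x2[..., top:bottom, left:right]
    return mixed

def fingerprint(model, probes) -> np.ndarray:
    """Pairwise correlation of a model's outputs on fixed probe samples."""
    outputs = np.stack([model(p) for p in probes])  # (n_probes, n_classes)
    return np.corrcoef(outputs)                     # (n_probes, n_probes)
```

A suspect model whose fingerprint lies close to the victim's (e.g., a small Frobenius distance between correlation matrices) would be flagged as a likely copy.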
no code implementations • 10 Oct 2022 • Kun Yan, Lei Ji, Chenfei Wu, Jian Liang, Ming Zhou, Nan Duan, Shuai Ma
Panorama synthesis aims to generate a visual scene with all 360-degree views and enables an immersive virtual world.
2 code implementations • 1 Oct 2022 • Yujun Shi, Jian Liang, Wenqing Zhang, Vincent Y. F. Tan, Song Bai
To remedy this problem caused by data heterogeneity, we propose FedDecorr, a novel method that can effectively mitigate dimensional collapse in federated learning.
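Dimensional collapse can be discouraged with a decorrelation regularizer on the batch of representations; below is a hedged sketch of that idea (the exact FedDecorr objective and weighting are not reproduced here).

```python
import torch

def feddecorr_loss(features: torch.Tensor) -> torch.Tensor:
    """Penalize off-diagonal correlations among feature dimensions,
    discouraging representations from collapsing into a low-dimensional
    subspace. A hedged sketch; the released method may differ in detail."""
    # Standardize each feature dimension over the batch.
    z = (features - features.mean(dim=0)) / (features.std(dim=0) + 1e-8)
    n, d = z.shape
    corr = (z.T @ z) / n                            # d x d correlation matrix
    off_diag = corr - torch.diag(torch.diag(corr))  # zero out the diagonal
    return (off_diag ** 2).sum() / (d * d)
```

On each client this term would be added to the local objective, e.g. `loss = task_loss + beta * feddecorr_loss(feats)` with a small weight `beta`.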
1 code implementation • 18 Aug 2022 • Yi-Fan Zhang, Jindong Wang, Jian Liang, Zhang Zhang, Baosheng Yu, Liang Wang, DaCheng Tao, Xing Xie
Our bound motivates two strategies to reduce the gap: the first ensembles multiple classifiers to enrich the hypothesis space; the second develops effective gap-estimation methods to guide the selection of a better hypothesis for the target.
1 code implementation • 20 Jul 2022 • Chenfei Wu, Jian Liang, Xiaowei Hu, Zhe Gan, JianFeng Wang, Lijuan Wang, Zicheng Liu, Yuejian Fang, Nan Duan
In this paper, we present NUWA-Infinity, a generative model for infinite visual synthesis, which is defined as the task of generating arbitrarily-sized high-resolution images or long-duration videos.
Ranked #1 on Image Outpainting on LHQC
no code implementations • 19 Jun 2022 • Junchi Yu, Jian Liang, Ran He
Extensive experiments on both node-level and graph-level benchmarks show that the proposed DPS achieves impressive performance on various graph domain generalization tasks.
no code implementations • 10 Jun 2022 • Ziming Yang, Jian Liang, Chaoyou Fu, Mandi Luo, Xiao-Yu Zhang
Secondly, we devise a face synthesis module (FSM) to generate a large number of images with stochastic combinations of disentangled identities and attributes for enriching the attribute diversity of synthetic images.
no code implementations • 6 Jun 2022 • Xingchen Liu, Yawen Li, Yingxia Shao, Ang Li, Jian Liang
Based on this, we propose ATWWM-BERT, a car-review text sentiment analysis model built on adversarial training and whole-word-mask BERT.
no code implementations • 1 Jun 2022 • Jie Shi, Chenfei Wu, Jian Liang, Xiang Liu, Nan Duan
Our work proposes a VQ-VAE architecture with a diffusion decoder (DiVAE) that serves as the reconstructing component in image synthesis.
1 code implementation • 29 May 2022 • Yuhe Ding, Lijun Sheng, Jian Liang, Aihua Zheng, Ran He
First, to avoid additional parameters and to exploit the information in the source model, ProxyMix defines the classifier weights as class prototypes and then constructs a class-balanced proxy source domain from the nearest neighbors of the prototypes, bridging the unseen source domain and the target domain.
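A minimal sketch of this prototype-based proxy construction, assuming L2-normalizable target features and a linear classifier whose rows act as class prototypes (`per_class` is an illustrative parameter, not the paper's setting):

```python
import torch
import torch.nn.functional as F

def build_proxy_source(classifier_weight: torch.Tensor,
                       target_feats: torch.Tensor,
                       per_class: int = 32):
    """Treat classifier rows as class prototypes and gather each prototype's
    nearest target neighbors into a class-balanced proxy source set."""
    protos = F.normalize(classifier_weight, dim=1)   # (C, d) prototypes
    feats = F.normalize(target_feats, dim=1)         # (N, d) target features
    sim = feats @ protos.T                           # cosine similarities
    idx, lbl = [], []
    for c in range(protos.shape[0]):
        nearest = sim[:, c].topk(per_class).indices  # proxy samples of class c
        idx.append(nearest)
        lbl.append(torch.full((per_class,), c, dtype=torch.long))
    return torch.cat(idx), torch.cat(lbl)
```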
no code implementations • 25 Apr 2022 • Xiaochen Li, Rui Zhong, Jian Liang, Xialong Liu, Yu Zhang
Rich user behavior information is of great importance for capturing and understanding user interest in click-through rate (CTR) prediction.
1 code implementation • CVPR 2022 • Fangrui Lv, Jian Liang, Shuang Li, Bin Zang, Chi Harold Liu, Ziteng Wang, Di Liu
Specifically, we assume that each input is constructed from a mix of causal factors (whose relationship with the label is invariant across domains) and non-causal factors (category-independent), and only the former cause the classification judgments.
no code implementations • 16 Dec 2021 • Jian Liang, Dapeng Hu, Jiashi Feng, Ran He
To achieve bilateral adaptation in the target domain, we further maximize localized mutual information to align known samples with the source classifier, and employ an entropic loss to push unknown samples far away from the source classification boundary.
Ranked #2 on Universal Domain Adaptation on VisDA2017
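For the universal-adaptation entry above, the known/unknown separation can be sketched with entropy terms; a minimal sketch, assuming classifier logits and a precomputed boolean mask of known-class samples (the paper's localized mutual-information term is not reproduced here):

```python
import torch

def bilateral_losses(logits: torch.Tensor, known_mask: torch.Tensor):
    """Entropy-based known/unknown separation: confident predictions for
    known samples, near-uniform predictions for unknown ones."""
    probs = logits.softmax(dim=1)
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1)
    loss_known = entropy[known_mask].mean()      # minimize entropy for known
    loss_unknown = -entropy[~known_mask].mean()  # maximize entropy for unknown
    return loss_known + loss_unknown
```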
1 code implementation • 16 Dec 2021 • Boqiang Xu, Jian Liang, Lingxiao He, Zhenan Sun
Meanwhile, META considers the relevance between an unseen target sample and the source domains via normalization statistics, and develops an aggregation module that adaptively integrates multiple experts to mimic the unseen target domain.
1 code implementation • CVPR 2022 • Yujun Shi, Kuangqi Zhou, Jian Liang, Zihang Jiang, Jiashi Feng, Philip Torr, Song Bai, Vincent Y. F. Tan
Specifically, we experimentally show that directly encouraging the CIL learner at the initial phase to output representations similar to those of a model jointly trained on all classes can greatly boost CIL performance.
1 code implementation • NeurIPS 2021 • Fangrui Lv, Jian Liang, Kaixiong Gong, Shuang Li, Chi Harold Liu, Han Li, Di Liu, Guoren Wang
Domain adaptation (DA) attempts to transfer the knowledge from a labeled source domain to an unlabeled target domain that follows a different distribution from the source.
1 code implementation • 6 Dec 2021 • Jian Liang, Fangrui Lv, Di Liu, Zehui Dai, Xu Tian, Shuang Li, Fei Wang, Han Li
Challenges of the problem include 1) how to align large-scale entities between sources to share information and 2) how to mitigate negative transfer from jointly learning multi-source data.
1 code implementation • 24 Nov 2021 • Chenfei Wu, Jian Liang, Lei Ji, Fan Yang, Yuejian Fang, Daxin Jiang, Nan Duan
To cover language, image, and video at the same time for different scenarios, a 3D transformer encoder-decoder framework is designed, which can not only deal with videos as 3D data but also adapt to texts and images as 1D and 2D data, respectively.
Ranked #1 on Text-to-Video Generation on Kinetics
no code implementations • 29 Sep 2021 • Sen Cui, Jingfeng Zhang, Jian Liang, Masashi Sugiyama, ChangShui Zhang
However, an ensemble still wastes the limited capacity of multiple models.
1 code implementation • NeurIPS 2021 • Sen Cui, Weishen Pan, Jian Liang, ChangShui Zhang, Fei Wang
In this paper, we propose an FL framework to jointly consider performance consistency and algorithmic fairness across different local clients (data sources).
1 code implementation • 18 Aug 2021 • Sen Cui, Jian Liang, Weishen Pan, Kun Chen, ChangShui Zhang, Fei Wang
Federated learning (FL) refers to the paradigm of learning models over a collaborative research network involving multiple clients without sacrificing privacy.
1 code implementation • ICCV 2021 • Shuang Li, Mixue Xie, Fangrui Lv, Chi Harold Liu, Jian Liang, Chen Qin, Wei Li
To tackle this issue, we propose Semantic Concentration for Domain Adaptation (SCDA), which encourages the model to concentrate on the most principal features via the pair-wise adversarial alignment of prediction distributions.
3 code implementations • 11 Aug 2021 • Lingxiao He, Wu Liu, Jian Liang, Kecheng Zheng, Xingyu Liao, Peng Cheng, Tao Mei
Instead, we aim to explore multiple labeled datasets to learn generalized domain-invariant representations for person re-id, which are expected to be universally effective for each new re-id scenario.
Generalizable Person Re-identification • Knowledge Distillation
1 code implementation • 5 Aug 2021 • Weijiang Yu, Jian Liang, Lei Ji, Lu Li, Yuejian Fang, Nong Xiao, Nan Duan
Firstly, we develop multi-commonsense learning for semantic-level reasoning by jointly training different commonsense types in a unified network, which encourages the interaction between the clues of multiple commonsense descriptions, event-wise captions and videos.
no code implementations • 22 Jun 2021 • Yuxi Wang, Jian Liang, Zhaoxiang Zhang
It is the first work to use negative pseudo labels during self-training for domain adaptation.
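A hedged sketch of the negative-pseudo-label idea: classes whose predicted probability falls below a threshold are treated as labels the sample does not belong to, and a complementary loss pushes those probabilities further down (the threshold and loss form here are illustrative, not the paper's exact choices).

```python
import torch

def negative_pseudo_label_loss(logits: torch.Tensor,
                               threshold: float = 0.05) -> torch.Tensor:
    """Classes predicted with probability below `threshold` are taken as
    labels the sample does NOT belong to; a complementary loss pushes
    their probabilities further toward zero."""
    probs = logits.softmax(dim=1)
    neg_mask = (probs < threshold).float()            # rejected classes
    loss = -(neg_mask * (1 - probs).clamp_min(1e-8).log()).sum(dim=1)
    return loss.mean()
```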
no code implementations • NeurIPS 2021 • Mi Luo, Fei Chen, Dapeng Hu, Yifan Zhang, Jian Liang, Jiashi Feng
Motivated by the above findings, we propose a novel and simple algorithm called Classifier Calibration with Virtual Representations (CCVR), which adjusts the classifier using virtual representations sampled from an approximated Gaussian mixture model.
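A minimal sketch of this calibration step, assuming per-class feature means and covariances aggregated from clients and a hypothetical `fit_fn` for refitting the classifier head:

```python
import numpy as np

def calibrate_classifier(class_means, class_covs, fit_fn, per_class=500):
    """Sample virtual representations from per-class Gaussians (statistics
    aggregated from clients) and refit the classifier head on them."""
    rng = np.random.default_rng(0)
    xs, ys = [], []
    for c, (mu, cov) in enumerate(zip(class_means, class_covs)):
        xs.append(rng.multivariate_normal(mu, cov, size=per_class))
        ys.append(np.full(per_class, c))
    return fit_fn(np.concatenate(xs), np.concatenate(ys))
```

Here `fit_fn` could be as simple as `sklearn.linear_model.LogisticRegression().fit`, standing in for retraining the global model's final layer.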
3 code implementations • CVPR 2022 • Jian Liang, Dapeng Hu, Jiashi Feng, Ran He
To ease the burden of labeling, unsupervised domain adaptation (UDA) aims to transfer knowledge in previous and related labeled datasets (sources) to a new unlabeled dataset (target).
no code implementations • 25 Mar 2021 • Kekai Sheng, Ke Li, Xiawu Zheng, Jian Liang, WeiMing Dong, Feiyue Huang, Rongrong Ji, Xing Sun
However, considering that the configuration of attention, i.e., the type and position of the attention module, affects performance significantly, it is more general to automatically optimize the attention configuration so that it specializes to an arbitrary UDA scenario.
Ranked #1 on Unsupervised Domain Adaptation on Duke to Market
1 code implementation • NeurIPS 2021 • Yifan Zhang, Bryan Hooi, Dapeng Hu, Jian Liang, Jiashi Feng
In this paper, we investigate whether applying contrastive learning to fine-tuning would bring further benefits, and analytically find that optimizing the contrastive loss benefits both discriminative representation learning and model optimization during fine-tuning.
no code implementations • 10 Feb 2021 • Jian Liang, Andrei Alexandru, Yu-Jiang Bi, Terrence Draper, Keh-Fei Liu, Yi-Bo Yang
We show that we can resolve the flavor content of the sea quarks and constrain their masses using the Dirac spectral density.
High Energy Physics - Lattice • High Energy Physics - Phenomenology
no code implementations • 1 Jan 2021 • Zhong Cao, Jiang Lu, Jian Liang, ChangShui Zhang
Recently, self-supervised learning (SSL) algorithms have been applied to few-shot learning (FSL).
no code implementations • ICLR 2021 • Ziang Yan, Yiwen Guo, Jian Liang, ChangShui Zhang
To craft black-box adversarial examples, adversaries need to query the victim model and take proper advantage of its feedback.
2 code implementations • 14 Dec 2020 • Jian Liang, Dapeng Hu, Yunbo Wang, Ran He, Jiashi Feng
Furthermore, we propose a new labeling transfer strategy, which separates the target data into two splits based on the confidence of predictions (labeling information) and then employs semi-supervised learning to improve the accuracy of less-confident predictions in the target domain.
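The confidence-based split can be sketched as follows, assuming target logits from the adapted model; the subsequent semi-supervised refinement step (e.g., MixMatch-style training) is left out.

```python
import torch

def split_by_confidence(logits: torch.Tensor, ratio: float = 0.5):
    """Rank target samples by prediction confidence and split them into a
    confident (pseudo-labeled) set and a less-confident (unlabeled) set."""
    probs = logits.softmax(dim=1)
    confidence, pseudo_labels = probs.max(dim=1)
    order = confidence.argsort(descending=True)
    cut = int(ratio * len(order))
    confident_idx = order[:cut]     # kept with their pseudo labels
    uncertain_idx = order[cut:]     # refined by semi-supervised learning
    return confident_idx, uncertain_idx, pseudo_labels
```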
1 code implementation • 13 Dec 2020 • Shuang Li, Fangrui Lv, Binhui Xie, Chi Harold Liu, Jian Liang, Chen Qin
Motivated by the observation that target samples cannot always be separated distinctly by the decision boundary, in the proposed BCDM we design a novel classifier determinacy disparity (CDD) metric, which formulates classifier discrepancy as the class relevance of distinct target predictions and implicitly constrains the target feature discriminability.
no code implementations • 1 Nov 2020 • Yue Zhang, Yajie Zou, Jinjun Tang, Jian Liang
To capture the stochastic time series of lane-changing behavior, this study proposes a temporal convolutional network (TCN) to predict the long-term lane-changing trajectory and behavior.
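A minimal causal TCN building block in PyTorch, of the kind one might stack for trajectory prediction; the paper's exact architecture (depth, dilation schedule, channel widths) is not reproduced here.

```python
import torch
import torch.nn as nn

class CausalTCNBlock(nn.Module):
    """A single causal temporal-convolution block with a residual
    connection; blocks with growing dilation can be stacked."""
    def __init__(self, channels: int, kernel_size: int = 3, dilation: int = 1):
        super().__init__()
        self.pad = (kernel_size - 1) * dilation  # left-pad keeps it causal
        self.conv = nn.Conv1d(channels, channels, kernel_size,
                              dilation=dilation)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, time); pad only on the left so no future leaks
        out = self.conv(nn.functional.pad(x, (self.pad, 0)))
        return torch.relu(out) + x
```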
no code implementations • 15 Oct 2020 • Guanhua Zhang, Bing Bai, Jian Liang, Kun Bai, Conghui Zhu, Tiejun Zhao
Recent studies show that crowd-sourced Natural Language Inference (NLI) datasets may suffer from significant biases like annotation artifacts.
no code implementations • 12 Oct 2020 • Jian Liang, Kun Chen, Ming Lin, ChangShui Zhang, Fei Wang
FMR is an effective scheme for handling sample heterogeneity, where a single regression model is not enough to capture the complexity of the conditional distribution of the observed samples given the features.
no code implementations • 11 Oct 2020 • Jian Liang, Yuren Cao, Shuang Li, Bing Bai, Hao Li, Fei Wang, Kun Bai
We further extend our method to a meta-learning framework to pursue more thorough domain-difference elimination.
no code implementations • 6 Sep 2020 • Chang Wang, Jian Liang, Mingkai Huang, Bing Bai, Kun Bai, Hao Li
We present HDP-VFL, the first hybrid differentially private (DP) framework for vertical federated learning (VFL) to demonstrate that it is possible to jointly learn a generalized linear model (GLM) from vertically partitioned data with only a negligible cost, w.r.t.
1 code implementation • 25 Aug 2020 • Yinghua Zhang, Yangqiu Song, Jian Liang, Kun Bai, Qiang Yang
To systematically measure the effect of both white-box and black-box attacks, we propose a new metric to evaluate how transferable the adversarial examples produced by a source model are to a target model.
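A hedged sketch of measuring cross-model transferability with one-step FGSM examples, assuming image tensors in [0, 1]; the paper's proposed metric is more refined than this plain fooling rate.

```python
import torch
import torch.nn.functional as F

def fgsm_transfer_rate(source_model, target_model, x, y, eps=8 / 255):
    """Craft one-step FGSM examples on the source model and report the
    fraction that also changes the target model's prediction."""
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(source_model(x), y)
    loss.backward()
    x_adv = (x + eps * x.grad.sign()).clamp(0, 1).detach()  # pixels in [0, 1]
    fooled = target_model(x_adv).argmax(dim=1) != y
    return fooled.float().mean().item()
```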
no code implementations • 11 Jul 2020 • Zhao Kang, Xiao Lu, Jian Liang, Kun Bai, Zenglin Xu
In this work, we propose a new representation learning method that explicitly models and leverages sample relations, which in turn is used as supervision to guide the representation learning.
2 code implementations • CVPR 2021 • Jian Liang, Dapeng Hu, Jiashi Feng
ATDOC alleviates the classifier bias by introducing an auxiliary classifier for target data only, to improve the quality of pseudo labels.
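One way to realize a target-only auxiliary "classifier" is neighborhood aggregation over a memory bank; a minimal sketch under the assumption of L2-normalized features and stored softmax scores (hypothetical names, not the released implementation):

```python
import torch

def neighborhood_pseudo_labels(feat_bank: torch.Tensor,
                               score_bank: torch.Tensor,
                               batch_feats: torch.Tensor,
                               k: int = 5) -> torch.Tensor:
    """Pseudo labels from the averaged predictions of each sample's k
    nearest neighbors in a target-only memory bank."""
    sims = batch_feats @ feat_bank.T        # cosine sims (features normalized)
    _, nn_idx = sims.topk(k, dim=1)         # k nearest target neighbors
    agg = score_bank[nn_idx].mean(dim=1)    # aggregate neighbor predictions
    return agg.argmax(dim=1)
```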
no code implementations • 10 Jun 2020 • Bing Bai, Jian Liang, Guanhua Zhang, Hao Li, Kun Bai, Fei Wang
In this paper, we demonstrate that one root cause of this phenomenon is combinatorial shortcuts: in addition to the highlighted parts, the attention weights themselves may carry extra information that can be exploited by downstream models after attention layers.
1 code implementation • 9 Jun 2020 • Jian Liang, Bing Bai, Yuren Cao, Kun Bai, Fei Wang
A popular way of performing model interpretation is Instance-wise Feature Selection (IFS), which provides an importance score of each feature representing the data samples to explain how the model generates the specific output.
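As a concrete instance of an IFS-style importance score, here is a gradient-times-input saliency sketch (one common baseline; the paper studies instance-wise feature selection more generally):

```python
import torch

def gradient_x_input(model, x: torch.Tensor) -> torch.Tensor:
    """Per-instance, per-feature importance via gradient-times-input:
    how much each input feature locally drives the predicted class."""
    x = x.clone().requires_grad_(True)
    scores = model(x)
    scores.max(dim=1).values.sum().backward()
    return (x.grad * x).abs()
```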
1 code implementation • 27 May 2020 • Junqi Zhang, Bing Bai, Ye Lin, Jian Liang, Kun Bai, Fei Wang
In this paper, we report our recent practice at Tencent for user modeling based on mobile app usage.
no code implementations • 30 Mar 2020 • Dapeng Hu, Jian Liang, Qibin Hou, Hanshu Yan, Yunpeng Chen, Shuicheng Yan, Jiashi Feng
To successfully align the multi-modal data structures across domains, the following works exploit discriminative information in the adversarial training process, e.g., using multiple class-wise discriminators and introducing conditional information in the input or output of the domain discriminator.
1 code implementation • ECCV 2020 • Jian Liang, Yunbo Wang, Dapeng Hu, Ran He, Jiashi Feng
On one hand, negative transfer results in misclassification of target samples to the classes only present in the source domain.
Ranked #2 on Partial Domain Adaptation on DomainNet
1 code implementation • ICML 2020 • Jian Liang, Dapeng Hu, Jiashi Feng
Unsupervised domain adaptation (UDA) aims to leverage the knowledge learned from a labeled source dataset to solve similar tasks in a new unlabeled domain.
Ranked #1 on Domain Adaptation on USPS-to-MNIST
no code implementations • 24 Jul 2019 • Jian Liang, Zhe Xu, Peter Li
We propose a new forward-backward stochastic differential equation solver for high-dimensional derivatives-pricing problems by combining a deep-learning solver with the least-squares regression technique widely used in the least-squares Monte Carlo method for the valuation of American options.
no code implementations • CVPR 2019 • Jian Liang, Ran He, Zhenan Sun, Tieniu Tan
Conventional domain adaptation methods usually resort to deep neural networks or subspace learning to find invariant representations across domains.
1 code implementation • CVPR 2019 • Jian Liang, Yuren Cao, Chenbin Zhang, Shiyu Chang, Kun Bai, Zenglin Xu
Authentication is a task that aims to confirm the correspondence between data instances and personal identities.
2 code implementations • ACL 2019 • Guanhua Zhang, Bing Bai, Jian Liang, Kun Bai, Shiyu Chang, Mo Yu, Conghui Zhu, Tiejun Zhao
Natural Language Sentence Matching (NLSM) has gained substantial attention from both academics and the industry, and rich public datasets contribute a lot to this process.
1 code implementation • 18 Sep 2018 • Jian Liang, Ziqi Liu, Jiayu Zhou, Xiaoqian Jiang, Chang-Shui Zhang, Fei Wang
Multi-task learning (MTL) refers to the paradigm of learning multiple related tasks together.
1 code implementation • CVPR 2018 • Lingxiao He, Jian Liang, Haiqing Li, Zhenan Sun
Experimental results on two partial person datasets demonstrate the efficiency and effectiveness of the proposed method in comparison with several state-of-the-art partial person re-id approaches.
no code implementations • 22 May 2017 • Yanbo Fan, Jian Liang, Ran He, Bao-Gang Hu, Siwei Lyu
In multi-view clustering, different views may have different confidence levels when learning a consensus representation.
no code implementations • 28 Feb 2017 • Ziang Yan, Jian Liang, Weishen Pan, Jin Li, Chang-Shui Zhang
Object detection when provided image-level labels instead of instance-level labels (i.e., bounding boxes) during training is an important problem in computer vision, since large-scale image datasets with instance-level labels are extremely costly to obtain.
no code implementations • 1 Jun 2016 • Yanbo Fan, Ran He, Jian Liang, Bao-Gang Hu
In this paper, we focus on the minimizer function and study a group of new regularizers, named self-paced implicit regularizers, which are deduced from robust loss functions.