1 code implementation • 4 Mar 2024 • Yangbo Jiang, Zhiwei Jiang, Le Han, Zenan Huang, Nenggan Zheng
In this paper, we investigate the statistical moments of feature maps within a neural network.
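For reference, a minimal sketch (PyTorch, not the paper's code) of computing the first few per-channel statistical moments of a feature map; the layer and input shapes are purely illustrative.

```python
import torch
import torch.nn as nn

# Illustrative layer and dummy batch; any intermediate feature map works the same way.
conv = nn.Conv2d(3, 16, kernel_size=3, padding=1)
x = torch.randn(8, 3, 32, 32)
feat = conv(x)                                   # feature map of shape (N, C, H, W)

# First and second moments per channel, aggregated over batch and spatial dims.
mean = feat.mean(dim=(0, 2, 3))                  # shape (C,)
var = feat.var(dim=(0, 2, 3), unbiased=False)    # shape (C,)

# Higher moments (e.g., skewness) follow the same pattern.
skew = ((feat - mean.view(1, -1, 1, 1)) ** 3).mean(dim=(0, 2, 3)) / var.clamp_min(1e-8) ** 1.5
print(mean.shape, var.shape, skew.shape)
```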
1 code implementation • 2 Feb 2024 • Chao Liu, Ting Zhao, Nenggan Zheng
Curvilinear structures, which include line-like continuous objects, are fundamental geometrical elements in image-based applications.
no code implementations • 17 May 2023 • Xiaofeng Liu, Jiaxin Gao, Yaohua Liu, Risheng Liu, Nenggan Zheng
Recently, significant progress has been made in human action recognition and behavior prediction using deep learning techniques, leading to improved vision-based semantic understanding.
1 code implementation • 14 May 2023 • Zenan Huang, Haobo Wang, Junbo Zhao, Nenggan Zheng
Understanding the dynamics of time series data typically requires identifying the unique latent factors for data generation, a.k.a. latent processes.
1 code implementation • 1 Jan 2023 • Zenan Huang, Jun Wen, Siheng Chen, Linchao Zhu, Nenggan Zheng
Domain adaptation methods typically reduce domain shift by learning domain-invariant features.
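One common way to encourage domain-invariant features is to penalize a distribution distance between source and target features; the linear-kernel MMD below is a generic illustration of this idea, not the method proposed in this paper.

```python
import torch

def mmd_linear(source_feats: torch.Tensor, target_feats: torch.Tensor) -> torch.Tensor:
    """Linear-kernel maximum mean discrepancy between two feature batches.

    Both inputs have shape (batch, dim); the returned scalar can be added to
    the task loss to pull source and target feature distributions together.
    """
    delta = source_feats.mean(dim=0) - target_feats.mean(dim=0)
    return (delta * delta).sum()

# Usage (illustrative): total_loss = task_loss + lambda_mmd * mmd_linear(f_source, f_target)
```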
1 code implementation • ICCV 2023 • Zenan Huang, Haobo Wang, Junbo Zhao, Nenggan Zheng
In this work, we first show that this failure of conventional ML models in domain generalization (DG) is attributable to an inadequate identification of causal structures.
no code implementations • 6 Feb 2022 • Weijie Liu, Chao Zhang, Nenggan Zheng, Hui Qian
In this paper, we propose structural inconsistency (SI), a novel criterion for measuring graph matching accuracy that is defined on the network's topological structure.
no code implementations • 12 Nov 2021 • Weijie Liu, Chao Zhang, Nenggan Zheng, Hui Qian
Optimal transport (OT) naturally arises in a wide range of machine learning applications but may often become the computational bottleneck.
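For context, a standard way to ease this bottleneck in practice is entropic regularization solved with Sinkhorn iterations; the sketch below shows that baseline algorithm, not the approach studied in this paper.

```python
import numpy as np

def sinkhorn(a, b, cost, reg=0.05, n_iters=200):
    """Entropic-regularized OT between histograms a and b under a cost matrix.

    a and b are 1-D probability vectors, cost has shape (len(a), len(b));
    returns the transport plan.
    """
    K = np.exp(-cost / reg)                 # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iters):
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]      # transport plan

# Example: uniform histograms over 4 and 5 points with random costs.
plan = sinkhorn(np.full(4, 0.25), np.full(5, 0.2), np.random.rand(4, 5))
```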
no code implementations • CVPR 2021 • Dongsheng Ruan, Daiyin Wang, Yuan Zheng, Nenggan Zheng, Min Zheng
These approaches commonly learn the relationship between global contexts and attention activations by using fully-connected layers or linear transformations.
no code implementations • 2 Dec 2020 • Weijie Liu, Chao Zhang, Jiahao Xie, Zebang Shen, Hui Qian, Nenggan Zheng
Graph matching finds the correspondence of nodes across two graphs and is a basic task in graph-based machine learning.
no code implementations • 7 Nov 2020 • Jun Wen, Changjian Shui, Kun Kuang, Junsong Yuan, Zenan Huang, Zhefeng Gong, Nenggan Zheng
To address this issue, we intervene in the learning of feature discriminability using unlabeled target data, guiding it to discard the domain-specific components and become safely transferable.
1 code implementation • 8 Jun 2020 • Zhedong Zheng, Nenggan Zheng, Yi Yang
To our knowledge, this is among the first attempts to conduct person re-identification in 3D space.
no code implementations • 31 Oct 2019 • Weijie Liu, Aryan Mokhtari, Asuman Ozdaglar, Sarath Pattathil, Zebang Shen, Nenggan Zheng
In this paper, we focus on solving a class of constrained non-convex non-concave saddle point problems in a decentralized manner by a group of nodes in a network.
no code implementations • 6 Sep 2019 • Dongsheng Ruan, Jun Wen, Nenggan Zheng, Min Zheng
In this work, we first revisit the SE block and then present a detailed empirical study of the relationship between global context and attention distribution; based on this study, we propose a simple yet effective module called the Linear Context Transform (LCT) block.
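A rough sketch of such a block, assembled only from the description above (global average pooling for context, normalization, then a per-channel scale-and-bias linear transform that gates the input); the normalization details are assumptions and may differ from the official LCT implementation.

```python
import torch
import torch.nn as nn

class LinearContextGate(nn.Module):
    """SE-style gate with a per-channel linear transform instead of FC layers.

    Sketch based on the LCT description; normalization details are assumed.
    """
    def __init__(self, channels: int, eps: float = 1e-5):
        super().__init__()
        self.weight = nn.Parameter(torch.ones(channels))    # per-channel scale
        self.bias = nn.Parameter(torch.zeros(channels))     # per-channel bias
        self.eps = eps

    def forward(self, x):                                    # x: (N, C, H, W)
        ctx = x.mean(dim=(2, 3))                             # global context, (N, C)
        ctx = (ctx - ctx.mean(dim=1, keepdim=True)) / (ctx.std(dim=1, keepdim=True) + self.eps)
        gate = torch.sigmoid(ctx * self.weight + self.bias)  # channel-wise linear transform
        return x * gate.unsqueeze(-1).unsqueeze(-1)
```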
no code implementations • 24 Jun 2019 • Jun Wen, Nenggan Zheng, Junsong Yuan, Zhefeng Gong, Changyou Chen
By imposing distribution matching on both features and labels (via uncertainty), the label distribution mismatch between source and target data is effectively alleviated, encouraging the classifier to produce consistent predictions across domains.
no code implementations • 12 Nov 2018 • Jun Wen, Risheng Liu, Nenggan Zheng, Qian Zheng, Zhefeng Gong, Junsong Yuan
In this paper, we present a method for learning domain-invariant local feature patterns and jointly aligning holistic and local feature statistics.
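As one generic instantiation of feature-statistics alignment (not necessarily the formulation used here), a CORAL-style penalty matches the second-order statistics of source and target features:

```python
import torch

def coral_loss(source_feats: torch.Tensor, target_feats: torch.Tensor) -> torch.Tensor:
    """Align second-order feature statistics (covariances) across domains.

    Inputs have shape (batch, dim); shown only as a generic example of
    aligning feature statistics, not this paper's loss.
    """
    def covariance(f):
        f = f - f.mean(dim=0, keepdim=True)
        return f.T @ f / (f.shape[0] - 1)

    d = source_feats.shape[1]
    diff = covariance(source_feats) - covariance(target_feats)
    return (diff * diff).sum() / (4 * d * d)
```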
no code implementations • 10 Nov 2018 • Ming Zhang, Nenggan Zheng, De Ma, Gang Pan, Zonghua Gu
A Spiking Neural Network (SNN) can be trained indirectly by first training an Artificial Neural Network (ANN) with the conventional backpropagation algorithm, then converting it into an SNN.
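A minimal sketch of this conversion idea with rate coding and integrate-and-fire neurons; the layer sizes, constant-current input, and fixed threshold are simplifying assumptions rather than the procedure of this paper.

```python
import torch
import torch.nn as nn

# Hypothetical trained ReLU ANN whose weights are reused by the converted SNN.
ann = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))

def snn_forward(x, ann, timesteps=100, threshold=1.0):
    """Run the converted SNN: copy the ANN's linear weights, integrate-and-fire
    over time, and read out the last layer's spike rates as class scores."""
    linears = [m for m in ann if isinstance(m, nn.Linear)]
    mems = [torch.zeros(x.shape[0], l.out_features) for l in linears]
    counts = torch.zeros(x.shape[0], linears[-1].out_features)
    for _ in range(timesteps):
        inp = x                                     # constant input current (rate coding)
        for i, layer in enumerate(linears):
            mems[i] = mems[i] + layer(inp)          # integrate
            spikes = (mems[i] >= threshold).float() # fire
            mems[i] = mems[i] - spikes * threshold  # soft reset
            inp = spikes                            # spikes drive the next layer
        counts += spikes                            # spikes of the last layer
    return counts / timesteps                       # approximate output firing rates

# Usage (illustrative): rates = snn_forward(torch.rand(8, 784), ann)
```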