no code implementations • 13 Aug 2024 • Hongzhou Chen, Lianghua He, Yihang Liu, Longzhen Yang
This work further explores the semantic consistency between visual and neural signals.
1 code implementation • 17 Jul 2024 • Tianpei Zou, Sanqing Qu, Zhijun Li, Alois Knoll, Lianghua He, Guang Chen, Changjun Jiang
HGL comprises three complementary modules, covering local, global, and temporal learning in a bottom-up manner. Technically, we first construct a local geometry learning module for pseudo-label generation.
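The paper does not spell out the pseudo-labeling step in this snippet, so the following is only a hedged illustration of what geometry-aware pseudo-label generation could look like: a kNN majority vote over a point cloud's spatial neighbours, with low-confidence points left unlabelled. The voting scheme and thresholds are assumptions, not the HGL module itself.

```python
import torch

def knn_vote_pseudo_labels(points, logits, k=8, conf_thresh=0.7):
    """Refine per-point predictions by majority vote over k nearest spatial
    neighbours; low-confidence points are left unlabelled (-1).
    points: (N, 3) xyz coordinates, logits: (N, C) classifier outputs."""
    probs = logits.softmax(dim=-1)                              # (N, C)
    preds = probs.argmax(dim=-1)                                # (N,)
    dists = torch.cdist(points, points)                         # pairwise distances
    knn_idx = dists.topk(k + 1, largest=False).indices[:, 1:]   # drop self-match
    voted = preds[knn_idx].mode(dim=-1).values                  # local majority label
    conf = probs.max(dim=-1).values
    return torch.where(conf > conf_thresh, voted, torch.full_like(voted, -1))
```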
no code implementations • 23 May 2024 • Yitao Peng, Lianghua He, Die Hu
To solve these problems, we propose a weakly supervised interpretable fundus disease localization method called hierarchical salient patch identification (HSPI) that can achieve interpretable disease localization using only image-level labels and a neural network classifier (NNC).
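HSPI's own patch-identification procedure is not described in this snippet; as a hedged stand-in, the sketch below shows a generic occlusion-style heuristic for locating salient patches with only an image-level classifier: each patch is scored by how much masking it reduces the target-class probability. Patch size, stride, and masking value are illustrative assumptions.

```python
import torch

def salient_patch_scores(model, image, patch=32, stride=32, target_class=1):
    """Score each patch by the drop in target-class probability when the
    patch is masked out (occlusion heuristic; not the HSPI algorithm).
    image: (1, C, H, W) tensor; model: image-level classifier."""
    model.eval()
    with torch.no_grad():
        base = model(image).softmax(dim=-1)[0, target_class]
        _, _, H, W = image.shape
        heat = torch.zeros((H - patch) // stride + 1, (W - patch) // stride + 1)
        for i, y in enumerate(range(0, H - patch + 1, stride)):
            for j, x in enumerate(range(0, W - patch + 1, stride)):
                masked = image.clone()
                masked[:, :, y:y + patch, x:x + patch] = 0
                p = model(masked).softmax(dim=-1)[0, target_class]
                heat[i, j] = base - p          # large drop => salient patch
    return heat
```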
1 code implementation • CVPR 2024 • Boyang Peng, Sanqing Qu, Yong Wu, Tianpei Zou, Lianghua He, Alois Knoll, Guang Chen, Changjun Jiang
In this paper, we target a practical setting where only a well-trained source model is available and investigate how we can realize IP protection.
2 code implementations • CVPR 2024 • Sanqing Qu, Tianpei Zou, Lianghua He, Florian Röhrbein, Alois Knoll, Guang Chen, Changjun Jiang
Besides, LEAD is also appealing in that it is complementary to most existing methods.
Ranked #1 on Universal Domain Adaptation on VisDA2017
1 code implementation • 5 Feb 2024 • Hamed Amini Amirkolaee, Miaojing Shi, Lianghua He, Mark Mulligan
Experimental results show that AdaTreeFormer significantly surpasses the state of the art; e.g., in the cross-domain setting from the Yosemite to the Jiangsu dataset, it achieves a reduction of 15.9 points in absolute counting error and an increase of 10.8% in the accuracy of detected tree locations.
no code implementations • 4 Dec 2023 • Yitao Peng, Lianghua He, Die Hu, Yihang Liu, Longzhen Yang, Shaohua Shang
Owing to the multi-instance nature of medical image learning and the difficulty of identifying decision-making regions, many previously proposed interpretability models still suffer from insufficient accuracy and interpretability in medical image disease diagnosis.
1 code implementation • 23 Jun 2023 • Chengmei Yang, Shuai Jiang, Bowei He, Chen Ma, Lianghua He
Specifically, our method consists of an entity-guided relation proto-decoder that first classifies the relations, and a relation-guided entity proto-decoder that extracts entities based on the classified relations.
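To make the two-stage flow concrete, here is a minimal sketch of a decoder that classifies the relation first and then conditions entity tagging on that prediction. The module names, pooling choice, and shapes are assumptions for illustration, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class RelationThenEntityDecoder(nn.Module):
    """Illustrative two-stage decoder: relation classification first,
    then relation-guided entity extraction."""
    def __init__(self, hidden, n_relations, n_entity_tags):
        super().__init__()
        self.relation_head = nn.Linear(hidden, n_relations)
        self.relation_embed = nn.Embedding(n_relations, hidden)
        self.entity_head = nn.Linear(2 * hidden, n_entity_tags)

    def forward(self, token_states):                 # (B, T, hidden)
        sent = token_states.mean(dim=1)              # crude sentence representation
        rel_logits = self.relation_head(sent)        # stage 1: classify the relation
        rel_vec = self.relation_embed(rel_logits.argmax(dim=-1))
        rel_vec = rel_vec.unsqueeze(1).expand_as(token_states)
        ent_logits = self.entity_head(              # stage 2: relation-guided tagging
            torch.cat([token_states, rel_vec], dim=-1))
        return rel_logits, ent_logits
```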
1 code implementation • IEEE Transactions on Multimedia 2023 • Tianli Sun, Haonan Chen, Guosheng Hu, Lianghua He, Cairong Zhao
In addition, we demonstrate the use of the visualization results in three ways: (1) we visualize attention with respect to the connectionist temporal classification (CTC) loss to train an ASR model with adversarial attention erasing regularization, which effectively decreases the word error rate (WER) of the model and improves its generalization capability (a gradient-based sketch of this idea follows below).
Automatic Speech Recognition (ASR) +2
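As a rough sketch of "attention with respect to the CTC loss", the toy example below builds a throwaway attention-weighted model, computes a CTC loss, and reads off the gradient of that loss with respect to the attention weights as an importance map. All shapes, the linear projection, and the random data are assumptions; this is not the paper's ASR model or visualization method.

```python
import torch
import torch.nn as nn

# Toy illustration: how much does each attention entry influence the CTC loss?
T_enc, T_dec, B, C = 50, 20, 1, 30                   # frames, decoder steps, batch, vocab
enc = torch.randn(B, T_enc, 64)                      # fake encoder features
attn = torch.rand(B, T_dec, T_enc, requires_grad=True)
context = attn.softmax(dim=-1) @ enc                 # (B, T_dec, 64)
log_probs = nn.Linear(64, C)(context).log_softmax(dim=-1)

ctc = nn.CTCLoss(blank=0)
targets = torch.randint(1, C, (B, 10))
loss = ctc(log_probs.transpose(0, 1),                # CTCLoss expects (T, B, C)
           targets,
           input_lengths=torch.full((B,), T_dec),
           target_lengths=torch.full((B,), 10))
loss.backward()
attention_saliency = attn.grad.abs()                 # attention importance w.r.t. CTC loss
```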
no code implementations • 12 Jan 2023 • Yitao Peng, Longzhen Yang, Yihang Liu, Lianghua He
Saliency methods, which generate visual explanatory maps representing the importance of image pixels for model classification, are a popular technique for explaining neural network decisions.
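For readers unfamiliar with the technique, a minimal vanilla-gradient saliency map (in the style of Simonyan et al.) can be computed as below; this is a generic sketch, not the method proposed in the paper above.

```python
import torch

def gradient_saliency(model, image, target_class):
    """Vanilla gradient saliency: absolute gradient of the class score
    w.r.t. each input pixel, max-pooled over channels.
    image: (1, C, H, W) tensor."""
    model.eval()
    image = image.clone().requires_grad_(True)
    score = model(image)[0, target_class]
    score.backward()
    return image.grad.abs().max(dim=1).values[0]     # (H, W) saliency map
```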
1 code implementation • CVPR 2023 • Jiafeng Li, Ying Wen, Lianghua He
The proposed SCConv consists of two units: spatial reconstruction unit (SRU) and channel reconstruction unit (CRU).
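The snippet only names the two units, so the skeleton below shows how SRU and CRU compose sequentially inside a drop-in convolution block; the unit bodies are placeholder layers, not the reconstruction operations defined in the paper, and the channel assumptions are illustrative.

```python
import torch.nn as nn

class SCConv(nn.Module):
    """Skeleton only: a spatial reconstruction unit (SRU) followed by a
    channel reconstruction unit (CRU). Placeholder layers stand in for
    the actual units; channels is assumed divisible by 4."""
    def __init__(self, channels):
        super().__init__()
        self.sru = nn.Sequential(                     # placeholder for SRU
            nn.GroupNorm(4, channels),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        self.cru = nn.Sequential(                     # placeholder for CRU
            nn.Conv2d(channels, channels // 2, 1),
            nn.Conv2d(channels // 2, channels, 1),
        )

    def forward(self, x):
        return self.cru(self.sru(x))
```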
no code implementations • 15 Oct 2022 • Yitao Peng, Yihang Liu, Longzhen Yang, Lianghua He
It decouples the inference and interpretation modules of a prototype-based network by avoiding the use of prototype activations to explain the network's decisions, thereby improving the accuracy and interpretability of the neural network at the same time.
no code implementations • 17 Jul 2022 • Yitao Peng, Longzhen Yang, Yihang Liu, Lianghua He
We applied the MDM method to the interpretable neural networks ProtoPNet and XProtoNet, improving their performance in explainable prototype search.
1 code implementation • 28 May 2022 • Longzhen Yang, Yihang Liu, Yitao Peng, Lianghua He
In this work, we show that the inferior accuracy standard drawn from human annotations (leave-one-out) is not appropriate for machine-generated captions.
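To make the leave-one-out protocol concrete, here is a small sketch that scores each human caption against the remaining human captions, using NLTK's sentence-level BLEU purely as a stand-in metric (the paper's metric and exact protocol are not reproduced here).

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def leave_one_out_human_score(human_captions):
    """Leave-one-out consensus: each human caption is scored against the
    remaining human captions (BLEU used here only as an example metric)."""
    smooth = SmoothingFunction().method1
    scores = []
    for i, cap in enumerate(human_captions):
        refs = [c.split() for j, c in enumerate(human_captions) if j != i]
        scores.append(sentence_bleu(refs, cap.split(), smoothing_function=smooth))
    return sum(scores) / len(scores)

# Note: the human "upper bound" computed this way uses fewer references per
# caption than a machine caption would get, which biases the comparison.
humans = ["a dog runs on the grass", "a brown dog is running outside",
          "the dog sprints across a lawn"]
print(leave_one_out_human_score(humans))
```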
2 code implementations • 15 Mar 2022 • Guanyu Cai, Yixiao Ge, Binjie Zhang, Alex Jinpeng Wang, Rui Yan, Xudong Lin, Ying Shan, Lianghua He, XiaoHu Qie, Jianping Wu, Mike Zheng Shou
Recent dominant methods for video-language pre-training (VLP) learn transferable representations from the raw pixels in an end-to-end manner to achieve advanced performance on downstream video-language retrieval.
no code implementations • 27 May 2021 • Guanyu Cai, Lianghua He
In the first stage, we propose the local Lipschitzness regularization as the objective function to align different domains by exploiting intra-domain knowledge, which explores a promising direction for non-adversarial adaptive semantic segmentation.
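A generic way to encourage local Lipschitzness is to penalise how much the model output changes under a small input perturbation, as in the sketch below; the perturbation scale and penalty form are assumptions and this is not necessarily the exact regularizer used for adaptive semantic segmentation in the paper.

```python
import torch

def local_lipschitz_penalty(model, x, eps=1e-2):
    """Penalise output change under a small random input perturbation,
    encouraging local Lipschitzness (generic sketch)."""
    delta = eps * torch.randn_like(x)
    out = model(x)
    out_perturbed = model(x + delta)
    return ((out - out_perturbed) ** 2).mean()
```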
1 code implementation • ICCV 2021 • Guanyu Cai, Jun Zhang, Xinyang Jiang, Yifei Gong, Lianghua He, Fufu Yu, Pai Peng, Xiaowei Guo, Feiyue Huang, Xing Sun
However, the performance of existing methods suffers in practice, since users are likely to provide an incomplete description of an image, which often leads to results filled with false positives that fit the incomplete description.
1 code implementation • 21 Nov 2019 • Ying Wen, Kai Xie, Lianghua He
Encoder-decoder networks are commonly used in medical image segmentation due to their remarkable performance in hierarchical feature fusion.
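For context, the sketch below is a minimal U-Net-style encoder-decoder with a single skip connection, illustrating the hierarchical feature fusion mentioned above; it is a generic example, not the architecture proposed in the paper.

```python
import torch
import torch.nn as nn

class TinyEncoderDecoder(nn.Module):
    """Minimal encoder-decoder with one skip connection."""
    def __init__(self, in_ch=1, n_classes=2):
        super().__init__()
        self.enc = nn.Conv2d(in_ch, 16, 3, padding=1)
        self.down = nn.MaxPool2d(2)
        self.bottleneck = nn.Conv2d(16, 32, 3, padding=1)
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec = nn.Conv2d(32, n_classes, 3, padding=1)   # 32 = 16 (skip) + 16 (up)

    def forward(self, x):
        e = torch.relu(self.enc(x))                    # high-resolution features
        b = torch.relu(self.bottleneck(self.down(e)))  # coarse features
        u = self.up(b)                                 # back to input resolution
        return self.dec(torch.cat([e, u], dim=1))      # hierarchical feature fusion
```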
no code implementations • 19 Nov 2019 • Qiang Ren, Shaohua Shang, Lianghua He
The capsule network is a recent and exciting advancement in deep learning that represents positional information by stacking features into vectors.
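The vector-stacking idea can be illustrated by reshaping a feature map into capsule vectors and applying the standard squash non-linearity, as sketched below; the grouping scheme is an assumption and routing is omitted entirely.

```python
import torch

def to_capsules(features, capsule_dim=8):
    """Reshape a (B, C, H, W) feature map into capsule vectors and apply
    the squash non-linearity so each vector's length lies in [0, 1).
    Assumes C is divisible by capsule_dim; no routing is performed."""
    B, C, H, W = features.shape
    caps = features.view(B, C // capsule_dim, capsule_dim, H * W)
    caps = caps.permute(0, 1, 3, 2).reshape(B, -1, capsule_dim)  # (B, n_caps, dim)
    norm_sq = (caps ** 2).sum(dim=-1, keepdim=True)
    return (norm_sq / (1 + norm_sq)) * caps / (norm_sq.sqrt() + 1e-8)
```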
1 code implementation • 26 May 2019 • Guanyu Cai, Lianghua He, Mengchu Zhou, Hesham Alhumade, Die Hu
When constructing a deep end-to-end model, three critical factors are considered in our proposed optimization strategy to ensure the effectiveness and stability of unsupervised domain adaptation: the number of target-domain samples, the sample dimension, and the batch size.
Ranked #1 on Domain Adaptation on SVNH-to-MNIST
1 code implementation • 25 Jan 2019 • Haifeng Shi, Guanyu Cai, Yuqin Wang, Shaohua Shang, Lianghua He
All generative paths share the same decoder network, while in each path the decoder is fed the concatenation of a different pre-computed amplified one-hot vector and the input Gaussian noise.
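The per-path decoder input described above can be built as in the short sketch below; the amplification factor and dimensions are illustrative assumptions.

```python
import torch

def decoder_input(class_id, n_classes, noise_dim, amplify=10.0):
    """Build the decoder input for one generative path: an amplified
    one-hot class vector concatenated with Gaussian noise."""
    one_hot = torch.zeros(n_classes)
    one_hot[class_id] = amplify                 # pre-computed amplified one-hot
    z = torch.randn(noise_dim)                  # Gaussian noise input
    return torch.cat([one_hot, z])              # fed to the shared decoder
```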
no code implementations • 25 Apr 2018 • Guanyu Cai, Yuqin Wang, Mengchu Zhou, Lianghua He
Domain adaptation is widely used in learning problems lacking labels.