1 code implementation • 30 Jan 2023 • Hong-Yu Zhou, Yunxiang Fu, Zhicheng Zhang, Cheng Bian, Yizhou Yu
Protein representation learning has primarily benefited from the remarkable development of language models (LMs).
1 code implementation • 30 Jan 2023 • Hong-Yu Zhou, Chenyu Lian, Liansheng Wang, Yizhou Yu
Modern studies in radiograph representation learning rely on either self-supervision to encode invariant semantics or associated radiology reports to incorporate medical expertise, while the complementarity between them is barely noticed.
no code implementations • 11 Jan 2023 • Hong-Yu Zhou, Chixiang Lu, Liansheng Wang, Yizhou Yu
Self-supervised representation learning has been extremely successful in medical image analysis, as it requires no human annotations to provide transferable representations for downstream tasks.
1 code implementation • 2 Jan 2023 • Hong-Yu Zhou, Chixiang Lu, Chaoqi Chen, Sibei Yang, Yizhou Yu
Recent advances in self-supervised learning (SSL) in computer vision are primarily comparative: their goal is to preserve invariant and discriminative semantics in latent representations by comparing siamese image views.
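The comparative objective described above can be sketched as a negative-cosine-similarity loss between embeddings of two augmented views of the same images. This is a generic minimal sketch of the comparative-SSL idea, not the paper's actual method; the function names are illustrative.

```python
import numpy as np

def l2_normalize(x, eps=1e-8):
    """Project each row onto the unit sphere."""
    return x / (np.linalg.norm(x, axis=-1, keepdims=True) + eps)

def comparative_ssl_loss(z1, z2):
    """Negative cosine similarity between embeddings of two augmented
    views of the same batch; minimizing it pulls the two views together,
    preserving view-invariant semantics in the latent space."""
    z1, z2 = l2_normalize(z1), l2_normalize(z2)
    return float(-np.mean(np.sum(z1 * z2, axis=-1)))
```

Identical views give the minimum loss of -1; orthogonal embeddings give 0.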
no code implementations • 27 Oct 2022 • Jiansen Guo, Hong-Yu Zhou, Liansheng Wang, Yizhou Yu
These phenomena indicate the potential of UNet-2022 to become the model of choice for medical image segmentation.
no code implementations • 27 Sep 2022 • Chaoqi Chen, Yushuang Wu, Qiyuan Dai, Hong-Yu Zhou, Mutian Xu, Sibei Yang, Xiaoguang Han, Yizhou Yu
Graph Neural Networks (GNNs) have gained momentum in graph representation learning and boosted the state of the art in a variety of areas, such as data mining (e.g., social network analysis and recommender systems), computer vision (e.g., object detection and point cloud learning), and natural language processing (e.g., relation extraction and sequence learning), to name a few.
1 code implementation • 1 Sep 2022 • Zhixiong Yang, Junwen Pan, Yanzhan Yang, Xiaozhou Shi, Hong-Yu Zhou, Zhicheng Zhang, Cheng Bian
The overall framework, named Prototype-aware Contrastive learning (ProCo), is unified as a single-stage, end-to-end pipeline to alleviate the class-imbalance problem in medical image classification; this is also a distinct advance over existing works, which follow the traditional two-stage pipeline.
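A prototype-aware contrastive objective of the kind named above can be sketched as a cross-entropy over similarities between sample embeddings and per-class prototypes. This is a minimal illustrative sketch under assumed names (`proto_contrastive_loss`, `tau`), not ProCo's actual formulation.

```python
import numpy as np

def l2norm(x, eps=1e-8):
    return x / (np.linalg.norm(x, axis=-1, keepdims=True) + eps)

def proto_contrastive_loss(embeddings, labels, prototypes, tau=0.1):
    """Cross-entropy over prototype similarities: each sample is pulled
    toward its own class prototype and pushed away from the others, in a
    single end-to-end step (no separate re-balancing stage)."""
    logits = l2norm(embeddings) @ l2norm(prototypes).T / tau
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.mean(log_probs[np.arange(len(labels)), labels]))
```

When embeddings sit exactly on their class prototypes the loss is near zero; swapping the labels makes it large.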
no code implementations • 6 Jun 2022 • Chaoqi Chen, Jiongcheng Li, Hong-Yu Zhou, Xiaoguang Han, Yue Huang, Xinghao Ding, Yizhou Yu
However, both the global and local alignment approaches fail to capture the topological relations among different foreground objects as the explicit dependencies and interactions between and within domains are neglected.
1 code implementation • CVPR 2022 • Yangji He, Weihan Liang, Dongyang Zhao, Hong-Yu Zhou, Weifeng Ge, Yizhou Yu, Wenqiang Zhang
To improve data efficiency, we propose hierarchically cascaded transformers that exploit intrinsic image structures through spectral tokens pooling and optimize the learnable parameters through latent attribute surrogates.
Ranked #1 on Few-Shot Learning on Mini-Imagenet 5-way (1-shot) (5-way 1~2-shot metric)
1 code implementation • 5 Jan 2022 • Shu Zhang, Zihao Li, Hong-Yu Zhou, Jiechao Ma, Yizhou Yu
The difficulties in both data acquisition and annotation substantially restrict the sample sizes of training datasets for 3D medical imaging applications.
Ranked #1 on Medical Object Detection on DeepLesion
1 code implementation • 4 Nov 2021 • Hong-Yu Zhou, Xiaoyu Chen, Yinghao Zhang, Ruibang Luo, Liansheng Wang, Yizhou Yu
Pre-training lays the foundation for recent successes in radiograph analysis supported by deep learning.
2 code implementations • ICCV 2021 • Hong-Yu Zhou, Chixiang Lu, Sibei Yang, Xiaoguang Han, Yizhou Yu
From this perspective, we introduce Preservational Learning to reconstruct diverse image contexts in order to preserve more information in learned representations.
2 code implementations • 7 Sep 2021 • Hong-Yu Zhou, Jiansen Guo, Yinghao Zhang, Lequan Yu, Liansheng Wang, Yizhou Yu
Transformer, the model of choice for natural language processing, has drawn scant attention from the medical imaging community.
Ranked #1 on Medical Image Segmentation on Synapse
no code implementations • 11 Aug 2021 • Hong-Yu Zhou, Chixiang Lu, Sibei Yang, Yizhou Yu
Vision transformers have attracted much attention from computer vision researchers as they are not restricted to the spatial inductive bias of ConvNets.
1 code implementation • CVPR 2021 • Sibei Yang, Meng Xia, Guanbin Li, Hong-Yu Zhou, Yizhou Yu
In this paper, we tackle the challenge by jointly performing compositional visual reasoning and accurate segmentation in a single stage via the proposed novel Bottom-Up Shift (BUS) and Bidirectional Attentive Refinement (BIAR) modules.
no code implementations • 3 Jun 2021 • Hong-Yu Zhou, Chengdi Wang, Haofeng Li, Gang Wang, Shu Zhang, Weimin Li, Yizhou Yu
Semi-supervised classification and segmentation methods have been widely investigated in medical image analysis.
no code implementations • 30 Mar 2021 • Hong-Yu Zhou, Hualuo Liu, Shilei Cao, Dong Wei, Chixiang Lu, Yizhou Yu, Kai Ma, Yefeng Zheng
In this paper, we show that such a process can be integrated into the one-shot segmentation task, which is a very challenging but meaningful topic.
1 code implementation • 26 Feb 2021 • Luyan Liu, Zhiwei Wen, Songwei Liu, Hong-Yu Zhou, Hongwei Zhu, Weicheng Xie, Linlin Shen, Kai Ma, Yefeng Zheng
Considering the scarcity of medical data, most datasets in medical image analysis are an order of magnitude smaller than those of natural images.
no code implementations • 29 Jul 2020 • Shuang Yu, Hong-Yu Zhou, Kai Ma, Cheng Bian, Chunyan Chu, Hanruo Liu, Yefeng Zheng
However, when used for model training, only the final ground-truth label is utilized, while the critical information contained in the raw multi-rater gradings, regarding whether the image is an easy or hard case, is discarded.
no code implementations • 20 Jul 2020 • Munan Ning, Cheng Bian, Donghuan Lu, Hong-Yu Zhou, Shuang Yu, Chenglang Yuan, Yang Guo, Yaohua Wang, Kai Ma, Yefeng Zheng
Primary angle closure glaucoma (PACG) is the leading cause of irreversible blindness among Asian people.
1 code implementation • 15 Jul 2020 • Hong-Yu Zhou, Shuang Yu, Cheng Bian, Yifan Hu, Kai Ma, Yefeng Zheng
In the deep learning era, pretrained models play an important role in medical image analysis, where ImageNet pretraining has been widely adopted as the default choice.
1 code implementation • 3 Jul 2020 • Bin-Bin Gao, Xin-Xin Liu, Hong-Yu Zhou, Jianxin Wu, Xin Geng
The effectiveness of our approach has been demonstrated on both facial age and attractiveness estimation tasks.
Ranked #1 on Age Estimation on ChaLearn 2016
1 code implementation • 3 Jul 2020 • Bin-Bin Gao, Hong-Yu Zhou
To bridge the gap between global and local streams, we propose a multi-class attentional region module which aims to make the number of attentional regions as small as possible and keep the diversity of these regions as high as possible.
Ranked #2 on Multi-Label Classification on PASCAL VOC 2012
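The stated objective for the attentional region module (few regions, high diversity) resembles a greedy diverse-selection step, which can be sketched as follows. This is an illustrative NMS-style sketch under assumed names (`select_attentional_regions`, `iou_thresh`), not the paper's actual module.

```python
def box_iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def select_attentional_regions(boxes, scores, iou_thresh=0.3, max_regions=4):
    """Greedily keep high-score regions that overlap little with the
    regions already kept: the count stays small, the diversity high."""
    order = sorted(range(len(boxes)), key=lambda i: -scores[i])
    kept = []
    for i in order:
        if len(kept) == max_regions:
            break
        if all(box_iou(boxes[i], boxes[j]) < iou_thresh for j in kept):
            kept.append(i)
    return kept
```

Two heavily overlapping candidates collapse to one kept region, while a distant candidate survives.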
no code implementations • 7 May 2020 • Codruta O. Ancuti, Cosmin Ancuti, Florin-Alexandru Vasluianu, Radu Timofte, Jing Liu, Haiyan Wu, Yuan Xie, Yanyun Qu, Lizhuang Ma, Ziling Huang, Qili Deng, Ju-Chin Chao, Tsung-Shan Yang, Peng-Wen Chen, Po-Min Hsu, Tzu-Yi Liao, Chung-En Sun, Pei-Yuan Wu, Jeonghyeok Do, Jongmin Park, Munchurl Kim, Kareem Metwaly, Xuelu Li, Tiantong Guo, Vishal Monga, Mingzhao Yu, Venkateswararao Cherukuri, Shiue-Yuan Chuang, Tsung-Nan Lin, David Lee, Jerome Chang, Zhan-Han Wang, Yu-Bang Chang, Chang-Hong Lin, Yu Dong, Hong-Yu Zhou, Xiangzhen Kong, Sourya Dipta Das, Saikat Dutta, Xuan Zhao, Bing Ouyang, Dennis Estrada, Meiqi Wang, Tianqi Su, Siyi Chen, Bangyong Sun, Vincent Whannou de Dravo, Zhe Yu, Pratik Narang, Aryan Mehra, Navaneeth Raghunath, Murari Mandal
We focus on the proposed solutions and their results evaluated on NH-Haze, a novel dataset consisting of 55 pairs of real haze free and nonhomogeneous hazy images recorded outdoor.
no code implementations • 13 Dec 2018 • Hong-Yu Zhou, Avital Oliver, Jianxin Wu, Yefeng Zheng
While practitioners have had an intuitive understanding of these observations, we conduct a comprehensive empirical analysis and demonstrate that: (1) the gains from SSL techniques over a fully-supervised baseline are smaller when training starts from a pre-trained model than from random initialization; (2) when the domain of the source data used to train the pre-trained model differs significantly from the domain of the target task, the gains from SSL are significantly higher; and (3) some SSL methods (like Pseudo-Label) are able to improve on fully-supervised baselines.
1 code implementation • 13 Jul 2018 • Bin-Bin Gao, Hong-Yu Zhou, Jianxin Wu, Xin Geng
Age estimation performance has been greatly improved by using convolutional neural networks.
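A common ingredient in CNN-based age estimation is label distribution learning: the hard age label is softened into a distribution over neighbouring ages, and the final prediction is the expectation. The sketch below illustrates that general idea under assumed names (`age_distribution`, `sigma`); it is not necessarily the formulation used in this paper.

```python
import numpy as np

def age_distribution(true_age, ages, sigma=2.0):
    """Soft label: a discretized Gaussian over candidate ages, so that
    neighbouring ages also carry supervision signal."""
    d = np.exp(-0.5 * ((ages - true_age) / sigma) ** 2)
    return d / d.sum()

def predict_age(probs, ages):
    """Final estimate: the expectation over the predicted distribution."""
    return float(np.sum(probs * ages))
```

For a label well inside the age range, the expectation of its own soft label recovers the label.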
no code implementations • 17 Apr 2018 • Chen-Wei Xie, Hong-Yu Zhou, Jianxin Wu
To be specific, our approach outperforms the previous state-of-the-art model, DeepLab v3, by 1.5% on the PASCAL VOC 2012 val set and 0.6% on the test set by replacing the Atrous Spatial Pyramid Pooling (ASPP) module in DeepLab v3 with the proposed Vortex Pooling.
2 code implementations • 22 Sep 2017 • Wenhao Zheng, Hong-Yu Zhou, Ming Li, Jianxin Wu
Appropriate comments on code snippets provide insight into code functionality and are helpful for program comprehension.
no code implementations • ICCV 2017 • Hong-Yu Zhou, Bin-Bin Gao, Jianxin Wu
In this paper, we propose Adaptive Feeding (AF) to combine a fast (but less accurate) detector and an accurate (but slow) detector, by adaptively determining whether an image is easy or hard and choosing an appropriate detector for it.
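The dispatch step described above can be sketched as a simple router: a cheap difficulty predictor decides which detector handles each image. This is a minimal sketch of the routing idea with illustrative names (`adaptive_feeding`, `is_hard`), not the paper's actual easy/hard classifier.

```python
def adaptive_feeding(images, is_hard, fast_detector, accurate_detector):
    """Route each image: easy images go to the fast (but less accurate)
    detector, hard images to the accurate (but slow) one."""
    return [accurate_detector(im) if is_hard(im) else fast_detector(im)
            for im in images]
```

With toy detectors, routing follows the difficulty predicate directly, so average latency tracks the fraction of hard images.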
no code implementations • 20 Jul 2017 • Hong-Yu Zhou, Bin-Bin Gao, Jianxin Wu
The difficulty of image recognition has gradually increased from general category recognition to fine-grained recognition and to the recognition of some subtle attributes such as temperature and geolocation.