no code implementations • 3 Jan 2025 • Wang Lituan, Zhang Lei, Wang Yan, Wang Zhenbin, Zhang Zhenwei, Zhang Yi
Specifically, we first propose the Bounded Polygon Annotation (BPAnno) by simply labeling two polygons for a lesion.
1 code implementation • 28 Dec 2024 • Shengbo Tan, Rundong Xue, Shipeng Luo, Zeyu Zhang, Xinran Wang, Lei Zhang, Daji Ergu, Zhang Yi, Yang Zhao, Ying Cai
Hepatic vessels in computed tomography scans often suffer from image fragmentation and noise interference, making it difficult to maintain vessel integrity and posing significant challenges for vessel segmentation.
1 code implementation • 9 Dec 2024 • Quansong He, Xiaojun Yao, Jun Wu, Zhang Yi, Tao He
In recent years, advanced U-like networks have demonstrated remarkable performance in medical image segmentation tasks.
1 code implementation • 30 Aug 2024 • Zeyu Zhang, Nengmin Yi, Shengbo Tan, Ying Cai, Yi Yang, Lei Xu, Qingtai Li, Zhang Yi, Daji Ergu, Yang Zhao
Additionally, we customize the second-order nmODE to improve the model's resistance to noise in MRI.
1 code implementation • 19 Jun 2024 • Kaishen Wang, Xun Xia, Jian Liu, Zhang Yi, Tao He
In this paper, we delve into the distinction between layer attention and the general attention mechanism, noting that existing layer attention methods achieve layer interaction on fixed feature maps in a static manner.
no code implementations • 15 May 2024 • Xiaolin Qin, Jiacen Liu, Qianlei Wang, Shaolin Zhang, Fei Zhu, Zhang Yi
Glass largely blurs the boundary between the real world and the reflection.
no code implementations • 14 Jul 2022 • Xia Yuan, Jianping Gou, Baosheng Yu, Jiali Yu, Zhang Yi
Specifically, we design an intra-class compactness constraint on the intermediate representations at different levels to encourage intra-class representations to move closer to each other, so that the learned representation becomes more discriminative. Unlike traditional DDL methods, during the classification stage our DDLIC performs a layer-wise greedy optimization in a similar way to the training stage.
no code implementations • 29 Sep 2021 • Liu Zhi, Xiaojie Guo, Zhang Yi
Semantic segmentation aims to map each pixel of an image into its corresponding semantic label.
14 code implementations • 3 Jan 2021 • Jing Xu, Yu Pan, Xinglin Pan, Steven Hoi, Zhang Yi, Zenglin Xu
The ResNet and its variants have achieved remarkable successes in various computer vision tasks.
Ranked #3 on Medical Image Classification on NCT-CRC-HE-100K
no code implementations • 24 Feb 2018 • Yanan Sun, Gary G. Yen, Zhang Yi
Inverted Generational Distance (IGD) has been widely considered a reliable performance indicator for concurrently quantifying the convergence and diversity of multi- and many-objective evolutionary algorithms.
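As context for this snippet, the standard IGD indicator averages, over a reference set sampled from the true Pareto front, the distance from each reference point to its nearest obtained solution. A minimal NumPy sketch (the Euclidean-distance variant; function name is illustrative):

```python
import numpy as np

def igd(reference_set, solution_set):
    """IGD of `solution_set` w.r.t. `reference_set` (rows are points).

    For each reference point, take the distance to its nearest obtained
    solution; IGD is the mean of these distances (lower is better).
    """
    # Pairwise distances: shape (n_reference, n_solutions).
    dists = np.linalg.norm(
        reference_set[:, None, :] - solution_set[None, :, :], axis=2)
    return dists.min(axis=1).mean()
```

A solution set that covers the whole reference set yields IGD = 0; a set that converges well but covers only part of the front is penalized through the uncovered reference points, which is why IGD captures both convergence and diversity.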
no code implementations • 24 Feb 2018 • Yanan Sun, Gary G. Yen, Zhang Yi
Finally, by assigning the Pareto-optimal solutions to the uniformly distributed reference vectors, a set of solutions with excellent diversity and convergence is obtained.
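The assignment step described above can be sketched as matching each Pareto-optimal solution to its closest reference vector in direction; a common choice (assumed here, since the snippet does not specify the metric) is maximum cosine similarity after normalization:

```python
import numpy as np

def assign_to_reference_vectors(solutions, ref_vectors):
    """Index of the closest (by angle) reference vector for each solution.

    Rows of both arrays are points/vectors in objective space.
    """
    # Normalize rows so the dot product equals cosine similarity.
    S = solutions / np.linalg.norm(solutions, axis=1, keepdims=True)
    V = ref_vectors / np.linalg.norm(ref_vectors, axis=1, keepdims=True)
    return np.argmax(S @ V.T, axis=1)
```

With uniformly distributed reference vectors, keeping one (or a few) solutions per vector spreads the selected set evenly along the front, which is the diversity mechanism the snippet alludes to.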
no code implementations • 13 Dec 2017 • Yanan Sun, Gary G. Yen, Zhang Yi
Specifically, the proposed algorithm consistently achieves a $1.15\%$ classification error rate on MNIST, which is a very promising result against state-of-the-art unsupervised DL algorithms.
no code implementations • 25 Sep 2017 • Xi Peng, Jiashi Feng, Shijie Xiao, Jiwen Lu, Zhang Yi, Shuicheng Yan
In this paper, we present a deep extension of Sparse Subspace Clustering, termed Deep Sparse Subspace Clustering (DSSC).
no code implementations • 29 Nov 2015 • Zhang Yi, Xiao Yanghua, Hwang Seung-won, Wang Wei
However, as such an increase in recall often invites false positives and decreases precision in return, we propose two techniques: first, we identify concepts with different relatedness to generate linear orderings and pairwise ordering constraints.
no code implementations • 26 Feb 2015 • Xi Peng, Can-Yi Lu, Zhang Yi, Huajin Tang
Many works have shown that Frobenius-norm based representation (FNR) is competitive with sparse representation and nuclear-norm based representation (NNR) in numerous tasks such as subspace clustering.
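For readers unfamiliar with the contrast: FNR replaces the l1 (sparse) or nuclear-norm penalty on the self-representation coefficients with a Frobenius-norm penalty, which turns the problem into ridge regression with a closed-form solution. A minimal sketch (columns of `X` are samples; `fnr_coefficients` is an illustrative name):

```python
import numpy as np

def fnr_coefficients(X, lam=0.1):
    """Frobenius-norm based self-representation.

    Solves min_C ||X - X C||_F^2 + lam * ||C||_F^2 in closed form:
    C = (X^T X + lam I)^{-1} X^T X.  Columns of X are samples.
    """
    n = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(n), X.T @ X)
```

Unlike sparse or nuclear-norm representations, no iterative solver is needed, which is the main source of FNR's efficiency advantage in tasks like subspace clustering.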
no code implementations • 17 Nov 2014 • Xi Peng, Jiwen Lu, Zhang Yi, Rui Yan
In this paper, we address two challenging problems in unsupervised subspace learning: 1) how to automatically identify the feature dimension of the learned subspace (i.e., automatic subspace learning), and 2) how to learn the underlying subspace in the presence of Gaussian noise (i.e., robust subspace learning).
no code implementations • 31 Oct 2014 • Jie Chen, Haixian Zhang, Hua Mao, Yongsheng Sang, Zhang Yi
We propose a symmetric low-rank representation (SLRR) method for subspace clustering, which assumes that a data set is approximately drawn from the union of multiple subspaces.
no code implementations • 22 Sep 2014 • Xi Peng, Rui Yan, Bo Zhao, Huajin Tang, Zhang Yi
Although the methods achieve a higher recognition rate than the traditional SPM, they consume more time to encode the local descriptors extracted from the image.
no code implementations • 7 Mar 2014 • Jie Chen, Hua Mao, Yongsheng Sang, Zhang Yi
In this paper, we propose a low-rank representation with symmetric constraint (LRRSC) method for robust subspace clustering.
no code implementations • 25 Sep 2013 • Xi Peng, Huajin Tang, Lei Zhang, Zhang Yi, Shijie Xiao
In this paper, we propose a unified framework which makes representation-based subspace clustering algorithms feasible to cluster both out-of-sample and large-scale data.
no code implementations • CVPR 2013 • Xi Peng, Lei Zhang, Zhang Yi
To address these problems, this paper proposes an out-of-sample extension of SSC, named Scalable Sparse Subspace Clustering (SSSC), which makes SSC feasible for clustering large-scale data sets.
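The out-of-sample idea above can be sketched as: cluster a small in-sample subset, then assign each new point to the cluster whose in-sample points reconstruct it with the smallest residual. A simplified sketch, using ridge regression in place of the paper's sparse coding (function and variable names are illustrative):

```python
import numpy as np

def assign_out_of_sample(x, in_sample, labels, lam=1e-3):
    """Assign an out-of-sample point `x` to a cluster.

    `in_sample` has samples as columns, `labels` gives their cluster ids.
    Each cluster's points code `x` (ridge here, instead of sparse coding);
    the cluster with the smallest reconstruction residual wins.
    """
    best, best_res = None, np.inf
    for k in np.unique(labels):
        D = in_sample[:, labels == k]            # this cluster's columns
        c = np.linalg.solve(D.T @ D + lam * np.eye(D.shape[1]), D.T @ x)
        res = np.linalg.norm(x - D @ c)
        if res < best_res:
            best, best_res = k, res
    return best
```

Because only the small in-sample problem is solved with the expensive clustering step, the overall cost grows linearly in the number of out-of-sample points.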
no code implementations • 24 Apr 2013 • Liangli Zhen, Zhang Yi, Xi Peng, Dezhong Peng
There are two popular schemes to construct a similarity graph, i.e., the pairwise-distance-based scheme and the linear-representation-based scheme.
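The two schemes can be illustrated side by side: the first builds affinities from a kernel of pairwise distances, the second from the coefficients of a linear self-representation. A minimal sketch, assuming a Gaussian kernel for the first scheme and a ridge-regularized self-representation for the second (both are common instantiations, not the specific choices of this paper):

```python
import numpy as np

def pairwise_graph(X, sigma=1.0):
    """Pairwise-distance scheme: Gaussian-kernel affinities (rows = samples)."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))
    np.fill_diagonal(W, 0)               # no self-loops
    return W

def representation_graph(X, lam=0.1):
    """Linear-representation scheme (rows = samples).

    Solves min_C ||X - C X||_F^2 + lam * ||C||_F^2, whose closed form is
    C = X X^T (X X^T + lam I)^{-1}, then symmetrizes |C| as the affinity.
    """
    G = X @ X.T
    C = np.linalg.solve(G + lam * np.eye(X.shape[0]), G).T
    np.fill_diagonal(C, 0)
    return (np.abs(C) + np.abs(C.T)) / 2
```

The distance-based graph depends on a locality assumption (near points are similar), while the representation-based graph connects points that help reconstruct each other, which is better matched to subspace-structured data.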
no code implementations • 4 Oct 2012 • Xi Peng, Lei Zhang, Zhang Yi, Kok Kiong Tan
The low-dimensional manifold model and sparse representation are two well-known concise models suggesting that each datum can be described by a few characteristics.
no code implementations • 5 Sep 2012 • Xi Peng, Zhiding Yu, Huajin Tang, Zhang Yi
Under the framework of graph-based learning, the key to robust subspace clustering and subspace learning is to obtain a good similarity graph that eliminates the effects of errors and retains only connections between the data points from the same subspace (i.e., intra-subspace data points).