Search Results for author: Xianhang Li

Found 13 papers, 11 papers with code

3D-TransUNet for Brain Metastases Segmentation in the BraTS2023 Challenge

no code implementations23 Mar 2024 Siwei Yang, Xianhang Li, Jieru Mei, Jieneng Chen, Cihang Xie, Yuyin Zhou

We find that the Decoder-only 3D-TransUNet model offers enhanced efficacy in the segmentation of brain metastases, as indicated by our 5-fold cross-validation on the training set.

Brain Tumor Segmentation, Segmentation +1

Revisiting Adversarial Training at Scale

1 code implementation9 Jan 2024 Zeyu Wang, Xianhang Li, Hongru Zhu, Cihang Xie

For example, by training on the DataComp-1B dataset, our AdvXL empowers a vanilla ViT-g model to substantially surpass the previous records of $l_{\infty}$-, $l_{2}$-, and $l_{1}$-robust accuracy by margins of 11.4%, 14.2% and 12.9%, respectively.
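
The $l_{\infty}$-constrained perturbations behind these robustness numbers can be illustrated with a single FGSM step on a toy logistic model. This is a generic numpy sketch, not AdvXL's actual multi-step, large-scale ViT training; every name and value here is illustrative.

```python
import numpy as np

def fgsm_perturb(x, w, b, y, eps):
    """One FGSM step under an l_inf budget eps on a toy logistic model.

    Generic illustration of l_inf-bounded adversarial perturbation;
    AdvXL itself trains ViTs at scale with multi-step attacks.
    """
    # Gradient of the logistic loss with respect to the input x.
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))
    grad_x = (p - y) * w
    # l_inf-bounded step: shift every coordinate by +/- eps along the
    # sign of the gradient, so the perturbation stays inside the budget.
    return x + eps * np.sign(grad_x)

x = np.array([0.2, -0.1, 0.5])
w = np.array([1.0, -2.0, 0.5])
x_adv = fgsm_perturb(x, w, b=0.0, y=1.0, eps=0.03)
print(np.max(np.abs(x_adv - x)))  # never exceeds the eps budget of 0.03
```

Multi-step attackers such as PGD repeat this step with a projection back into the eps-ball after each iteration.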

3D TransUNet: Advancing Medical Image Segmentation through Vision Transformers

2 code implementations11 Oct 2023 Jieneng Chen, Jieru Mei, Xianhang Li, Yongyi Lu, Qihang Yu, Qingyue Wei, Xiangde Luo, Yutong Xie, Ehsan Adeli, Yan Wang, Matthew Lungren, Lei Xing, Le Lu, Alan Yuille, Yuyin Zhou

In this paper, we extend the 2D TransUNet architecture to a 3D network by building upon the state-of-the-art nnU-Net architecture, and fully exploring Transformers' potential in both the encoder and decoder design.

Image Segmentation, Medical Image Segmentation +3

Consistency-guided Meta-Learning for Bootstrapping Semi-Supervised Medical Image Segmentation

1 code implementation21 Jul 2023 Qingyue Wei, Lequan Yu, Xianhang Li, Wei Shao, Cihang Xie, Lei Xing, Yuyin Zhou

Specifically, our approach first involves training a segmentation model on a small set of clean labeled images to generate initial labels for unlabeled data.
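
The initial-label generation step can be sketched as confidence-thresholded pseudo-labeling. This is a generic bootstrapping sketch, not the paper's consistency-guided meta-learning itself, and the threshold value is illustrative.

```python
import numpy as np

def pseudo_label(probs, threshold):
    """Turn per-pixel class probabilities from a trained segmentation
    model into initial labels for unlabeled images, keeping only
    confident predictions (generic sketch; threshold illustrative)."""
    conf = probs.max(axis=-1)
    labels = probs.argmax(axis=-1)
    labels[conf < threshold] = -1  # -1 marks predictions left unlabeled
    return labels

# Three pixels with softmax outputs over two classes.
probs = np.array([[0.9, 0.1], [0.55, 0.45], [0.2, 0.8]])
print(pseudo_label(probs, threshold=0.7))  # [ 0 -1  1]
```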

Image Segmentation, Meta-Learning +4

CLIPA-v2: Scaling CLIP Training with 81.1% Zero-shot ImageNet Accuracy within a $10,000 Budget; An Extra $4,000 Unlocks 81.8% Accuracy

2 code implementations27 Jun 2023 Xianhang Li, Zeyu Wang, Cihang Xie

The recent work CLIPA presents an inverse scaling law for CLIP training -- whereby the larger the image/text encoders used, the shorter the sequence length of image/text tokens that can be applied in training.
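
Shorter token sequences of the kind the inverse scaling law permits are typically obtained by masking or subsampling patch tokens before the encoder. Below is a minimal numpy sketch of random token subsampling; shapes and ratios are chosen for illustration rather than taken from CLIPA.

```python
import numpy as np

def subsample_tokens(tokens, keep_ratio, rng):
    """Randomly keep a fraction of patch tokens to shorten the sequence
    fed to the encoder (illustrative; not CLIPA's actual pipeline)."""
    n = tokens.shape[0]
    n_keep = max(1, int(n * keep_ratio))
    idx = rng.choice(n, size=n_keep, replace=False)
    return tokens[np.sort(idx)]

rng = np.random.default_rng(0)
tokens = rng.normal(size=(196, 8))  # 14x14 patches, toy embedding dim 8
short = subsample_tokens(tokens, keep_ratio=0.25, rng=rng)
print(short.shape)  # (49, 8): a 4x shorter sequence for the encoder
```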

An Inverse Scaling Law for CLIP Training

1 code implementation NeurIPS 2023 Xianhang Li, Zeyu Wang, Cihang Xie

However, its associated training cost is prohibitively high, imposing a significant barrier to its widespread exploration.

Unleashing the Power of Visual Prompting At the Pixel Level

1 code implementation20 Dec 2022 Junyang Wu, Xianhang Li, Chen Wei, Huiyu Wang, Alan Yuille, Yuyin Zhou, Cihang Xie

This paper presents a simple and effective visual prompting method for adapting pre-trained models to downstream recognition tasks.
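
One common pixel-level prompting setup adds a learnable perturbation on a border of the input image. The sketch below assumes that setup; the shapes, padding width, and prompt values are all chosen for illustration.

```python
import numpy as np

def apply_pixel_prompt(image, prompt, pad):
    """Add a learnable pixel-level prompt on a border of width `pad`.

    Generic sketch of border-style visual prompting; in training the
    prompt values would be optimized by backpropagation."""
    mask = np.zeros(image.shape, dtype=bool)
    mask[:pad, :] = True
    mask[-pad:, :] = True
    mask[:, :pad] = True
    mask[:, -pad:] = True
    return np.where(mask, image + prompt, image)

image = np.zeros((8, 8))
prompt = np.full((8, 8), 0.5)  # stands in for a learned parameter
out = apply_pixel_prompt(image, prompt, pad=2)
print(out[0, 0], out[4, 4])  # 0.5 on the border, 0.0 in the center
```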

Visual Prompting

In Defense of Image Pre-Training for Spatiotemporal Recognition

1 code implementation3 May 2022 Xianhang Li, Huiyu Wang, Chen Wei, Jieru Mei, Alan Yuille, Yuyin Zhou, Cihang Xie

Inspired by this observation, we hypothesize that the key to effectively leveraging image pre-training lies in the decomposition of learning spatial and temporal features, and revisiting image pre-training as the appearance prior to initializing 3D kernels.
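
Using image pre-training as an appearance prior for 3D kernels is commonly done by inflating each pre-trained 2D kernel along a new temporal axis. Here is a minimal sketch of that I3D-style initialization, with shapes chosen for illustration; the paper studies decomposed variants of this idea rather than this exact recipe.

```python
import numpy as np

def inflate_kernel(kernel_2d, t):
    """Inflate a 2D conv kernel (out_c, in_c, kh, kw) to 3D by repeating
    it t times along a new temporal axis and rescaling by 1/t, so the 3D
    conv initially matches the 2D response on a static clip."""
    k3d = np.repeat(kernel_2d[:, :, None, :, :], t, axis=2)
    return k3d / t

k2d = np.ones((4, 3, 3, 3))          # a toy pre-trained 2D kernel
k3d = inflate_kernel(k2d, t=5)
print(k3d.shape)                     # (4, 3, 5, 3, 3)
print(k3d.sum(axis=2)[0, 0, 0, 0])   # 1.0: temporal sum preserves weights
```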

STS, Video Recognition

Fast AdvProp

1 code implementation ICLR 2022 Jieru Mei, Yucheng Han, Yutong Bai, Yixiao Zhang, Yingwei Li, Xianhang Li, Alan Yuille, Cihang Xie

Specifically, our modifications in Fast AdvProp are guided by the hypothesis that disentangled learning with adversarial examples is the key to performance improvements, while other training recipes (e.g., paired clean and adversarial training samples, multi-step adversarial attackers) could be largely simplified.

Data Augmentation, object-detection +1

L2B: Learning to Bootstrap Robust Models for Combating Label Noise

1 code implementation9 Feb 2022 Yuyin Zhou, Xianhang Li, Fengze Liu, Qingyue Wei, Xuxi Chen, Lequan Yu, Cihang Xie, Matthew P. Lungren, Lei Xing

Extensive experiments demonstrate that our method effectively mitigates the challenges of noisy labels, often necessitating few to no validation samples, and is well generalized to other tasks such as image segmentation.

Ranked #8 on Image Classification on Clothing1M (using clean data, using extra training data)

Image Segmentation, Learning with noisy labels +3

CT-Net: Channel Tensorization Network for Video Classification

1 code implementation ICLR 2021 Kunchang Li, Xianhang Li, Yali Wang, Jun Wang, Yu Qiao

It can learn to exploit spatial, temporal and channel attention in a high-dimensional manner, to improve the cooperative power of all the feature dimensions in our CT-Module.
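
The tensorization itself amounts to factorizing the channel axis C into sub-dimensions such as C = C1 x C2 so that attention can operate along each. Below is a minimal reshape sketch of that idea; shapes are chosen for illustration rather than taken from CT-Net, which factorizes into more sub-dimensions.

```python
import numpy as np

def tensorize_channels(x, c1, c2):
    """Factorize the channel axis of a (N, C, H, W) feature map into
    two sub-dimensions with C = c1 * c2 (the reshaping idea behind
    channel tensorization; shapes illustrative)."""
    n, c, h, w = x.shape
    assert c == c1 * c2, "channel count must factorize as c1 * c2"
    return x.reshape(n, c1, c2, h, w)

x = np.arange(2 * 12 * 4 * 4, dtype=float).reshape(2, 12, 4, 4)
print(tensorize_channels(x, 3, 4).shape)  # (2, 3, 4, 4, 4)
```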

Action Classification, Classification +1
