no code implementations • ECCV 2020 • Sahin Olut, Zhengyang Shen, Zhenlin Xu, Samuel Gerber, Marc Niethammer
Data augmentation or semi-supervised approaches are commonly used to cope with limited labeled training data.
1 code implementation • 8 Apr 2024 • Zijia Lu, Bing Shuai, Yanbei Chen, Zhenlin Xu, Davide Modolo
In this paper, we propose a novel concept of path consistency to learn robust object matching without using manual object identity supervision.
no code implementations • 28 Jun 2023 • Zhenlin Xu, Yi Zhu, Tiffany Deng, Abhay Mittal, Yanbei Chen, Manchen Wang, Paolo Favaro, Joseph Tighe, Davide Modolo
This paper introduces innovative benchmarks to evaluate Vision-Language Models (VLMs) in real-world zero-shot recognition tasks, focusing on the granularity and specificity of prompting text.
no code implementations • CVPR 2023 • Yanbei Chen, Manchen Wang, Abhay Mittal, Zhenlin Xu, Paolo Favaro, Joseph Tighe, Davide Modolo
Our results show that ScaleDet achieves strong performance, with an mAP of 50.7 on LVIS, 58.8 on COCO, 46.8 on Objects365, 76.2 on OpenImages, and 71.8 on ODinW, surpassing state-of-the-art detectors with the same backbone.
Ranked #1 on Object Detection on OpenImages-v6 (using extra training data)
2 code implementations • ICCV 2023 • Qin Liu, Zhenlin Xu, Gedas Bertasius, Marc Niethammer
Although this design is simple and has been proven effective, it has not yet been explored for interactive image segmentation.
Ranked #2 on Interactive Segmentation on SBD
no code implementations • 11 Oct 2022 • Berk Iskender, Zhenlin Xu, Simon Kornblith, En-Hung Chu, Maryam Khademi
Many contrastive representation learning methods learn a single global representation of an entire image.
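The "single global representation of an entire image" that this work contrasts itself with can be sketched as a pooled, L2-normalized embedding trained with an InfoNCE-style loss. This is a minimal NumPy illustration under assumed shapes, not the paper's implementation; the function names are hypothetical:

```python
import numpy as np

def global_embed(feature_map):
    # feature_map: (C, H, W). Average-pool away all spatial structure,
    # producing one L2-normalized global vector per image.
    v = feature_map.mean(axis=(1, 2))
    return v / np.linalg.norm(v)

def info_nce(z1, z2, temperature=0.1):
    # z1, z2: (N, D) L2-normalized embeddings of two augmented views.
    logits = z1 @ z2.T / temperature              # (N, N) similarities
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    # Positives sit on the diagonal: view 1 of image i matches view 2 of image i.
    idx = np.arange(len(z1))
    return -np.log(probs[idx, idx]).mean()
```

Because everything is pooled into one vector, spatially local information is discarded, which is the limitation such methods face.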
no code implementations • 2 Oct 2022 • Zhenlin Xu, Marc Niethammer, Colin Raffel
In hopes of enabling compositional generalization, various unsupervised learning algorithms have been proposed with inductive biases that aim to induce compositional structure in learned representations (e.g., disentangled representations and emergent language learning).
1 code implementation • 21 Dec 2021 • Qin Liu, Zhenlin Xu, Yining Jiao, Marc Niethammer
We propose iSegFormer, a memory-efficient transformer that combines a Swin transformer with a lightweight multilayer perceptron (MLP) decoder.
no code implementations • 17 Aug 2020 • Xu Han, Zhengyang Shen, Zhenlin Xu, Spyridon Bakas, Hamed Akbari, Michel Bilello, Christos Davatzikos, Marc Niethammer
They are therefore not designed for the registration of images with strong pathologies, for example in the context of brain tumors and traumatic brain injuries.
2 code implementations • ICLR 2021 • Zhenlin Xu, Deyi Liu, Junlin Yang, Colin Raffel, Marc Niethammer
In this work, we show that the robustness of neural networks can be greatly improved through the use of random convolutions as data augmentation.
Ranked #110 on Domain Generalization on PACS
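The random-convolution augmentation described in this entry can be illustrated with a minimal sketch: a freshly sampled kernel filters the input image, distorting local texture while preserving global shapes. The function name and the He-style kernel scale are illustrative assumptions, not the paper's code:

```python
import numpy as np

def rand_conv(image, kernel_size=3, rng=None):
    """Filter an image with a freshly sampled random convolution kernel.

    image: (H, W, C) float array. One random kernel (shared across
    channels here for simplicity) is applied per call, so every training
    step sees a differently textured version of the same scene.
    """
    rng = np.random.default_rng() if rng is None else rng
    k = kernel_size
    # He-style scaling keeps output magnitudes comparable to the input.
    kernel = rng.normal(0.0, np.sqrt(2.0 / (k * k)), size=(k, k))
    pad = k // 2
    padded = np.pad(image, ((pad, pad), (pad, pad), (0, 0)), mode="reflect")
    out = np.zeros_like(image)
    H, W, _ = image.shape
    for i in range(k):
        for j in range(k):
            out += kernel[i, j] * padded[i:i + H, j:j + W, :]
    return out
```

In practice one would mix the filtered image with the original or sample the kernel size as well; this sketch only shows the core idea of convolving with random weights as augmentation.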
1 code implementation • 5 Jul 2020 • Zhengyang Shen, Zhenlin Xu, Sahin Olut, Marc Niethammer
We introduce a fluid-based image augmentation method for medical image analysis.
1 code implementation • 17 Apr 2019 • Zhenlin Xu, Marc Niethammer
Specifically, in a one-shot scenario (with only one manually labeled image), our approach increases Dice scores (%) over an unsupervised registration network by 2.7 and 1.8 on the knee and brain images, respectively.
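The Dice score reported above measures the overlap between a predicted and a reference segmentation mask. A minimal sketch (hypothetical helper, not the paper's code), expressed in percent to match the snippet:

```python
import numpy as np

def dice_score(pred, target, eps=1e-8):
    # pred, target: boolean masks of the same shape.
    # Dice = 2 * |A ∩ B| / (|A| + |B|), scaled to percent.
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return 100.0 * 2.0 * inter / (pred.sum() + target.sum() + eps)
```

A gain of 2.7 Dice points, as cited above, therefore means 2.7 percentage points more overlap with the manual label.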
2 code implementations • CVPR 2019 • Zhengyang Shen, Xu Han, Zhenlin Xu, Marc Niethammer
In contrast to existing approaches, our framework combines two registration methods: an affine registration and a vector momentum-parameterized stationary velocity field (vSVF) model.
Ranked #2 on Image Registration on Osteoarthritis Initiative