Search Results for author: Kodai Nakashima

Found 5 papers, 4 papers with code

Primitive Geometry Segment Pre-training for 3D Medical Image Segmentation

1 code implementation • 8 Jan 2024 • Ryu Tadokoro, Ryosuke Yamada, Kodai Nakashima, Ryo Nakamura, Hirokatsu Kataoka

From the experimental results, we conclude that effective pre-training can be achieved using primitive geometric objects alone.

Image Segmentation, Medical Image Segmentation, +3

SegRCDB: Semantic Segmentation via Formula-Driven Supervised Learning

1 code implementation • ICCV 2023 • Risa Shinoda, Ryo Hayamizu, Kodai Nakashima, Nakamasa Inoue, Rio Yokota, Hirokatsu Kataoka

SegRCDB has a high potential to contribute to semantic segmentation pre-training and investigation by enabling the creation of large datasets without manual annotation.

Segmentation, Semantic Segmentation

Replacing Labeled Real-image Datasets with Auto-generated Contours

no code implementations • CVPR 2022 • Hirokatsu Kataoka, Ryo Hayamizu, Ryosuke Yamada, Kodai Nakashima, Sora Takashima, Xinyu Zhang, Edgar Josafat Martinez-Noriega, Nakamasa Inoue, Rio Yokota

In the present work, we show that the performance of formula-driven supervised learning (FDSL) can match or even exceed that of ImageNet-21k without the use of real images, human supervision, or self-supervision during the pre-training of Vision Transformers (ViTs).

Describing and Localizing Multiple Changes with Transformers

2 code implementations • ICCV 2021 • Yue Qiu, Shintaro Yamamoto, Kodai Nakashima, Ryota Suzuki, Kenji Iwata, Hirokatsu Kataoka, Yutaka Satoh

Change captioning tasks aim to detect changes in an image pair observed before and after a scene change and to generate a natural language description of those changes.

Can Vision Transformers Learn without Natural Images?

1 code implementation • 24 Mar 2021 • Kodai Nakashima, Hirokatsu Kataoka, Asato Matsumoto, Kenji Iwata, Nakamasa Inoue

Moreover, although the ViT pre-trained without natural images produces some different visualizations from ImageNet pre-trained ViT, it can interpret natural image datasets to a large extent.

Fairness, Self-Supervised Learning
