Search Results for author: Ryo Nakamura

Found 6 papers, 4 papers with code

Primitive Geometry Segment Pre-training for 3D Medical Image Segmentation

1 code implementation • 8 Jan 2024 • Ryu Tadokoro, Ryosuke Yamada, Kodai Nakashima, Ryo Nakamura, Hirokatsu Kataoka

From our experimental results, we conclude that effective pre-training can be achieved using primitive geometric objects alone.

Image Segmentation • Medical Image Segmentation • +3

Traffic Incident Database with Multiple Labels Including Various Perspective Environmental Information

1 code implementation • 17 Dec 2023 • Shota Nishiyama, Takuma Saito, Ryo Nakamura, Go Ohtani, Hirokatsu Kataoka, Kensho Hara

Our proposed dataset aims to improve the performance of traffic accident recognition by annotating ten types of environmental information as supervisory labels, in addition to the presence or absence of a traffic accident.

Pre-training Vision Transformers with Very Limited Synthesized Images

1 code implementation • ICCV 2023 • Ryo Nakamura, Hirokatsu Kataoka, Sora Takashima, Edgar Josafat Martinez Noriega, Rio Yokota, Nakamasa Inoue

Prior work on FDSL has shown that pre-training vision transformers on such synthetic datasets can yield competitive accuracy on a wide range of downstream tasks.

Data Augmentation

Classifying DNS Servers based on Response Message Matrix using Machine Learning

no code implementations • 9 Nov 2021 • Keiichi Shima, Ryo Nakamura, Kazuya Okada, Tomohiro Ishihara, Daisuke Miyamoto, Yuji Sekiya

Improperly configured domain name system (DNS) servers are sometimes used as packet reflectors as part of a DoS or DDoS attack.

BIG-bench Machine Learning

Another Diversity-Promoting Objective Function for Neural Dialogue Generation

1 code implementation • 20 Nov 2018 • Ryo Nakamura, Katsuhito Sudoh, Koichiro Yoshino, Satoshi Nakamura

Although generation-based dialogue systems have been widely researched, the responses generated by most existing systems have very low diversity.

Dialogue Generation
