Search Results for author: Masato Ishii

Found 8 papers, 2 papers with code

Zero-shot Domain Adaptation Based on Attribute Information

no code implementations • 13 Mar 2019 • Masato Ishii, Takashi Takenouchi, Masashi Sugiyama

In this paper, we propose a novel domain adaptation method that can be applied without target data.

Attribute, Domain Adaptation
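The abstract does not spell out the mechanism, but the general idea behind attribute-based zero-shot adaptation can be pictured as a hypernetwork that maps a domain's attribute vector to classifier parameters, trained on attribute-annotated source domains and then applied to the target attributes alone. The PyTorch sketch below is a toy illustration under that assumption; the class name, dimensions, and usage are hypothetical, not the paper's exact formulation.

```python
import torch
import torch.nn as nn

class AttrToClassifier(nn.Module):
    """Hypothetical sketch: map a domain's attribute vector to the weights of
    a linear classifier, so a classifier for an unseen target domain can be
    instantiated from its attributes alone, without any target data."""

    def __init__(self, attr_dim, feat_dim, n_classes):
        super().__init__()
        self.hyper = nn.Sequential(
            nn.Linear(attr_dim, 128), nn.ReLU(),
            nn.Linear(128, feat_dim * n_classes),
        )
        self.feat_dim, self.n_classes = feat_dim, n_classes

    def forward(self, attr, features):
        # attr: (attr_dim,) attributes of one domain; features: (B, feat_dim)
        w = self.hyper(attr).view(self.n_classes, self.feat_dim)
        return features @ w.t()  # (B, n_classes) logits
```

Training would iterate over labeled source domains with known attributes; at test time, the target classifier is produced from the target attributes only.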

Source-free Domain Adaptation via Distributional Alignment by Matching Batch Normalization Statistics

no code implementations • 19 Jan 2021 • Masato Ishii, Masashi Sugiyama

In this setting, we cannot access the source data during adaptation; only unlabeled target data and a model pretrained on the source data are available.

Source-Free Domain Adaptation
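Matching BN statistics lends itself to a compact sketch: the pretrained model already stores per-channel source statistics in its BatchNorm layers (running_mean, running_var), so alignment can be expressed as a penalty between those and the statistics of target-batch activations. The PyTorch sketch below uses a squared-error penalty for brevity; the paper's actual divergence and optimization details may differ.

```python
import torch
import torch.nn as nn

def bn_stat_matching_loss(model, activations):
    """Sketch of distributional alignment via BN statistics: penalize the gap
    between each BN layer's stored source statistics and the statistics of
    the current target batch. Squared error is an illustrative choice."""
    loss = 0.0
    for name, m in model.named_modules():
        if isinstance(m, nn.BatchNorm2d):
            x = activations[name]          # target features entering this BN layer
            mu = x.mean(dim=(0, 2, 3))     # per-channel batch mean (NCHW input)
            var = x.var(dim=(0, 2, 3), unbiased=False)
            loss = loss + ((mu - m.running_mean) ** 2).sum() \
                        + ((var - m.running_var) ** 2).sum()
    return loss
```

Here `activations` maps BN layer names to their input tensors, which would be captured with forward hooks during a pass over an unlabeled target batch.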

Neural Network Libraries: A Deep Learning Framework Designed from Engineers' Perspectives

1 code implementation • 12 Feb 2021 • Takuya Narihira, Javier Alonso-Garcia, Fabien Cardinaux, Akio Hayakawa, Masato Ishii, Kazunori Iwaki, Thomas Kemp, Yoshiyuki Kobayashi, Lukas Mauch, Akira Nakamura, Yukio Obuchi, Andrew Shin, Kenji Suzuki, Stephen Tiedemann, Stefan Uhlich, Takuya Yashima, Kazuki Yoshiyama

While there exists a plethora of deep learning tools and frameworks, the fast-growing complexity of the field brings new demands and challenges, such as more flexible network design, speedy computation in distributed settings, and compatibility between different tools.
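The framework is available as the open-source nnabla Python package. As a minimal sketch of its API, assuming a standard installation, a small classifier can be defined and trained roughly as follows (layer sizes and names are arbitrary):

```python
import nnabla as nn
import nnabla.functions as F
import nnabla.parametric_functions as PF
import nnabla.solvers as S

# Build a small static graph: 784-d inputs, one hidden layer, 10 classes.
x = nn.Variable((32, 784))   # input batch
t = nn.Variable((32, 1))     # integer class labels
h = F.relu(PF.affine(x, 128, name="fc1"))
y = PF.affine(h, 10, name="fc2")
loss = F.mean(F.softmax_cross_entropy(y, t))

solver = S.Adam(alpha=1e-3)
solver.set_parameters(nn.get_parameters())

# One training step; x.d and t.d would be filled with real data first.
loss.forward()
solver.zero_grad()
loss.backward()
solver.update()
```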

Perspectives and Prospects on Transformer Architecture for Cross-Modal Tasks with Language and Vision

no code implementations • 6 Mar 2021 • Andrew Shin, Masato Ishii, Takuya Narihira

Transformer architectures have brought about fundamental changes to the field of computational linguistics, which had been dominated by recurrent neural networks for many years.

Semi-supervised learning by selective training with pseudo labels via confidence estimation

no code implementations • 15 Mar 2021 • Masato Ishii

Since accurate confidence estimation is crucial in our method, we also propose a new data augmentation method, called MixConf, that enables us to obtain confidence-calibrated models even when the amount of training data is small.

Data Augmentation, Pseudo Label
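The selective-training idea reduces to a simple rule: pseudo-label unlabeled samples and train only on those the model is confident about. The PyTorch sketch below illustrates that rule with a plain probability threshold; the paper's actual selection criterion and the MixConf augmentation are more involved, and the threshold value here is arbitrary.

```python
import torch
import torch.nn.functional as F

def select_pseudo_labeled(model, unlabeled_x, threshold=0.95):
    """Illustrative sketch: keep only unlabeled samples whose predicted class
    probability exceeds `threshold`, along with their pseudo labels."""
    model.eval()
    with torch.no_grad():
        probs = F.softmax(model(unlabeled_x), dim=1)
        conf, pseudo_y = probs.max(dim=1)
        keep = conf >= threshold
    return unlabeled_x[keep], pseudo_y[keep]
```

Such selection is only as reliable as the model's calibration, which is exactly what the proposed MixConf augmentation is meant to improve.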

DetOFA: Efficient Training of Once-for-All Networks for Object Detection Using Path Filter

no code implementations • 23 Mar 2023 • Yuiko Sakuma, Masato Ishii, Takuya Narihira

We address the challenge of training a large supernet for the object detection task, using a relatively small amount of training data.

Neural Architecture Search, Object Detection, +2
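The path-filter idea can be pictured as pruning the supernet's search space: candidate sub-network configurations ("paths") are scored by a cheap proxy, and only promising ones are kept for training. The sketch below is a generic illustration of such pruning; the sampling choices, the proxy, and the keep ratio are all hypothetical and not DetOFA's exact design.

```python
import random

def sample_path(depth_choices=(2, 3, 4), width_choices=(0.5, 0.75, 1.0), n_stages=5):
    """Draw one random sub-network configuration (a "path") of the supernet."""
    return [(random.choice(depth_choices), random.choice(width_choices))
            for _ in range(n_stages)]

def filter_paths(proxy_score, n_candidates=1000, keep_ratio=0.2):
    """Keep only the paths ranked highest by a cheap proxy predictor."""
    paths = [sample_path() for _ in range(n_candidates)]
    paths.sort(key=proxy_score, reverse=True)
    return paths[: int(len(paths) * keep_ratio)]
```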
