no code implementations • 13 Mar 2019 • Masato Ishii, Takashi Takenouchi, Masashi Sugiyama
In this paper, we propose a novel domain adaptation method that can be applied without target data.
no code implementations • 19 Jan 2021 • Masato Ishii, Masashi Sugiyama
In this setting, source data cannot be accessed during adaptation; only unlabeled target data and a model pretrained on the source data are available.
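The snippet describes the source-free setting but not the paper's actual objective, so the sketch below uses entropy minimization as a common stand-in loss: the source-pretrained model is updated on unlabeled target batches only. The function name and training interface are illustrative assumptions, not the paper's method.

```python
import torch
import torch.nn.functional as F

def adapt_source_free(model, target_loader, steps=1000, lr=1e-4):
    """Adapt a source-pretrained model using only unlabeled target batches.

    Entropy minimization is a placeholder objective; the paper's actual
    adaptation loss is not given in this snippet.
    """
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    done = 0
    while done < steps:
        for x in target_loader:  # unlabeled target data; no source access
            probs = F.softmax(model(x), dim=1)
            # Push predictions toward confident classes on target data.
            loss = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1).mean()
            opt.zero_grad()
            loss.backward()
            opt.step()
            done += 1
            if done >= steps:
                break
    return model
```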
1 code implementation • 12 Feb 2021 • Takuya Narihira, Javier Alonso Garcia, Fabien Cardinaux, Akio Hayakawa, Masato Ishii, Kazunori Iwaki, Thomas Kemp, Yoshiyuki Kobayashi, Lukas Mauch, Akira Nakamura, Yukio Obuchi, Andrew Shin, Kenji Suzuki, Stephen Tiedemann, Stefan Uhlich, Takuya Yashima, Kazuki Yoshiyama
While a plethora of deep learning tools and frameworks already exist, the fast-growing complexity of the field brings new demands and challenges, such as more flexible network design, fast computation in distributed settings, and compatibility between different tools.
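This entry describes Sony's Neural Network Libraries (nnabla). Below is a minimal classifier training step using the library's public API, per its documentation; it is a small illustration, not code from the paper.

```python
import numpy as np
import nnabla as nn
import nnabla.functions as F
import nnabla.parametric_functions as PF
import nnabla.solvers as S

# Define a static computation graph for a tiny MLP classifier.
x = nn.Variable((32, 1, 28, 28))            # input batch
t = nn.Variable((32, 1))                    # integer class labels
h = F.relu(PF.affine(x, 128, name="fc1"))   # hidden layer
y = PF.affine(h, 10, name="fc2")            # logits for 10 classes
loss = F.mean(F.softmax_cross_entropy(y, t))

solver = S.Adam(alpha=1e-3)
solver.set_parameters(nn.get_parameters())

# One training step on dummy data (for illustration only).
x.d = np.random.randn(32, 1, 28, 28)
t.d = np.random.randint(0, 10, (32, 1))
loss.forward()
solver.zero_grad()
loss.backward()
solver.update()
```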
no code implementations • 6 Mar 2021 • Andrew Shin, Masato Ishii, Takuya Narihira
Transformer architectures have brought about fundamental changes to the field of computational linguistics, which had been dominated by recurrent neural networks for many years.
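For context, the core operation that let Transformers displace recurrence is scaled dot-product self-attention; here is a minimal single-head NumPy sketch of that standard mechanism, not this paper's specific model.

```python
import numpy as np

def self_attention(x, Wq, Wk, Wv):
    """x: (seq_len, d_model); Wq/Wk/Wv: (d_model, d_k) projection matrices."""
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    # Pairwise token affinities, scaled by sqrt of the key dimension.
    scores = q @ k.T / np.sqrt(k.shape[-1])
    # Softmax over keys: every token attends to all tokens in one step,
    # instead of consuming the sequence one position at a time as an RNN does.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v  # attention-weighted combination of values
```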
no code implementations • 15 Mar 2021 • Masato Ishii
Since accurate confidence estimation is crucial to our method, we also propose a new data augmentation method, called MixConf, that enables us to obtain confidence-calibrated models even when the amount of training data is small.
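The snippet names MixConf but does not specify its sampling rule, so the sketch below shows a generic Mixup-style augmentation as a stand-in; the Beta(alpha, alpha) mixing ratio is an assumption borrowed from standard Mixup, not MixConf's actual scheme.

```python
import numpy as np

def mixup_batch(x, y_onehot, alpha=0.2, rng=None):
    """Blend a batch with a shuffled copy of itself (generic Mixup stand-in)."""
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)            # assumed sampling rule (Mixup)
    perm = rng.permutation(len(x))
    x_mix = lam * x + (1 - lam) * x[perm]                # blend inputs
    y_mix = lam * y_onehot + (1 - lam) * y_onehot[perm]  # blend soft labels
    return x_mix, y_mix
```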
1 code implementation • 5 Dec 2022 • Naoki Matsunaga, Masato Ishii, Akio Hayakawa, Kenji Suzuki, Takuya Narihira
Our goal is to develop fine-grained real-image editing methods suitable for real-world applications.
no code implementations • 23 Mar 2023 • Yuiko Sakuma, Masato Ishii, Takuya Narihira
We address the challenge of training a large supernet for object detection using a relatively small amount of training data.
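As background, supernet training in weight-sharing NAS typically samples one sub-network per step and updates only the shared weights it touches. The sketch below shows that common single-path scheme under assumed interfaces (`supernet(images, arch)` and the `search_space` layout are hypothetical), not the paper's exact procedure.

```python
import random
import torch

def supernet_step(supernet, search_space, batch, loss_fn, opt):
    """One weight-sharing training step with a randomly sampled sub-network.

    search_space maps each searchable layer to its candidate choices,
    e.g. {"stage1": [0, 1, 2], ...}; layer names here are hypothetical.
    """
    arch = {layer: random.choice(opts) for layer, opts in search_space.items()}
    images, targets = batch
    preds = supernet(images, arch)   # forward through the sampled sub-net only
    loss = loss_fn(preds, targets)
    opt.zero_grad()
    loss.backward()                  # gradients flow to the shared weights used
    opt.step()
    return loss.item()
```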
no code implementations • 28 Mar 2023 • Hiromichi Kamata, Yuiko Sakuma, Akio Hayakawa, Masato Ishii, Takuya Narihira
We propose a high-quality 3D-to-3D conversion method, Instruct 3D-to-3D.