no code implementations • 6 Feb 2024 • Akira Ito, Masanori Yamada, Atsutoshi Kumagai
This finding shows that permutations found by WM mainly align the directions of singular vectors associated with large singular values across models.
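In isolation, the core of weight matching (WM) can be sketched as a linear assignment problem over hidden units. The layer shapes, seed, and variable names below are illustrative assumptions, not the paper's actual setup, and the singular-vector analysis itself is not reproduced here.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)

# toy setup: model B's layer is a hidden-unit permutation of model A's layer
W_a = rng.normal(size=(6, 4))
true_perm = rng.permutation(6)
W_b = W_a[true_perm]

# weight matching: pick the permutation maximizing row-wise inner products,
# solved as a linear assignment problem over the negated similarity matrix
cost = -W_b @ W_a.T              # cost[i, j] = -<row i of W_b, row j of W_a>
rows, cols = linear_sum_assignment(cost)

# W_a[cols] is model A's layer re-ordered to align with model B's layer
```

By optimality of the assignment, the recovered alignment scores at least as high as the ground-truth permutation.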
no code implementations • 13 Dec 2023 • Tomoharu Iwata, Atsutoshi Kumagai
We propose a meta-learning method for calibrating deep kernel GPs that improves uncertainty estimation in regression when only a limited number of training samples are available.
no code implementations • 9 Nov 2023 • Tomoharu Iwata, Atsutoshi Kumagai
The proposed method embeds labeled and unlabeled data simultaneously in a task-specific space using a neural network, and the labels of the unlabeled data are estimated by adapting classification or regression models in the embedding space.
no code implementations • 14 Mar 2023 • Yasutoshi Ida, Sekitoshi Kanai, Kazuki Adachi, Atsutoshi Kumagai, Yasuhiro Fujiwara
Regularized discrete optimal transport (OT) is a powerful tool for measuring the distance between two discrete distributions constructed from data samples in two different domains.
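The paper's specific OT formulation is not reproduced here, but the standard baseline for entropy-regularized discrete OT is the Sinkhorn-Knopp iteration; the toy point sets, regularization strength `eps`, and iteration count below are illustrative assumptions.

```python
import numpy as np

def sinkhorn(a, b, C, eps=1.0, n_iters=500):
    """Entropy-regularized OT between discrete distributions a and b
    with cost matrix C, via Sinkhorn-Knopp scaling iterations."""
    K = np.exp(-C / eps)                 # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iters):
        v = b / (K.T @ u)                # scale columns to match marginal b
        u = a / (K @ v)                  # scale rows to match marginal a
    P = u[:, None] * K * v[None, :]      # transport plan
    return P, np.sum(P * C)              # plan and (unregularized) OT cost

# toy example: two 3-point distributions on the line, squared-distance cost
x = np.array([0.0, 1.0, 2.0])
y = np.array([0.5, 1.5, 2.5])
C = (x[:, None] - y[None, :]) ** 2
a = np.full(3, 1 / 3)
b = np.full(3, 1 / 3)
P, ot_cost = sinkhorn(a, b, C)
```

After convergence, the plan's row and column sums recover the two input distributions.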
no code implementations • 20 Jun 2022 • Tomoharu Iwata, Atsutoshi Kumagai
With the proposed method, the OoD detection is performed by density estimation in a latent space.
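The density-estimation step can be sketched with a simple Gaussian fit in a synthetic latent space; the encoder is omitted and the dimensionality and sample values below are illustrative assumptions, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)

# stand-in for encoder outputs: latent codes of in-distribution training data
z_train = rng.normal(0.0, 1.0, size=(500, 4))

# fit a Gaussian density in the latent space
mu = z_train.mean(axis=0)
cov = np.cov(z_train, rowvar=False) + 1e-6 * np.eye(4)
cov_inv = np.linalg.inv(cov)
_, logdet = np.linalg.slogdet(cov)

def log_density(z):
    """Log of the fitted Gaussian density; low values flag OoD inputs."""
    d = z - mu
    maha = np.einsum('ni,ij,nj->n', d, cov_inv, d)
    return -0.5 * (maha + logdet + z.shape[1] * np.log(2 * np.pi))

z_in = mu[None, :]               # a typical in-distribution latent code
z_out = np.full((1, 4), 8.0)     # a code far outside the training distribution
```

Thresholding the log-density then yields an OoD decision: the far-away code scores much lower than the typical one.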
1 code implementation • 31 May 2022 • Daiki Chijiwa, Shin'ya Yamaguchi, Atsutoshi Kumagai, Yasutoshi Ida
Few-shot learning for neural networks (NNs) is an important problem that aims to train NNs from only a small amount of data.
no code implementations • 28 Apr 2022 • Kazuki Adachi, Shin'ya Yamaguchi, Atsutoshi Kumagai
Test-time adaptation (TTA), which aims to adapt models without accessing the training dataset, is one of the settings that can address this problem.
no code implementations • 27 Apr 2022 • Shin'ya Yamaguchi, Sekitoshi Kanai, Atsutoshi Kumagai, Daiki Chijiwa, Hisashi Kashima
To transfer source knowledge without these assumptions, we propose a transfer learning method that uses deep generative models and is composed of the following two stages: pseudo pre-training (PP) and pseudo semi-supervised learning (P-SSL).
no code implementations • 5 Nov 2021 • Kentaro Ohno, Atsutoshi Kumagai
In this mechanism, the forget gate, originally introduced to control information flow in the RNN's hidden state, has recently been re-interpreted as representing the time scale of the state, i.e., a measure of how long the RNN retains information about its inputs.
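The time-scale reading of the forget gate can be illustrated with a back-of-the-envelope sketch (a standard approximation, not the paper's derivation): a gate value f in (0, 1) multiplies the state each step, so the retained fraction after t steps is f^t and the effective time scale is roughly 1 / (1 - f) steps.

```python
# forget gate f scales the hidden state every step, so information decays
# as f ** t; for f near 1, the retained fraction drops to ~1/e after about
# 1 / (1 - f) steps, which is read as the time scale of the state
def time_scale(f):
    return 1.0 / (1.0 - f)

f = 0.99
T = time_scale(f)       # about 100 steps
retained = f ** T       # fraction of the signal left after T steps (~1/e)
```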
no code implementations • NeurIPS 2021 • Atsutoshi Kumagai, Tomoharu Iwata, Yasuhiro Fujiwara
The closed-form solution enables fast and effective adaptation to a few instances, and its differentiability enables us to train our model such that the expected test error for relative DRE can be explicitly minimized after adapting to a few instances.
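For reference, the alpha-relative density ratio underlying relative DRE is r_alpha(x) = p(x) / (alpha * p(x) + (1 - alpha) * q(x)), which is bounded above by 1 / alpha. The sketch below evaluates it for two hypothetical Gaussians rather than estimating it from samples as the paper does; the densities and alpha are illustrative assumptions.

```python
import numpy as np

def gauss_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def relative_ratio(x, alpha=0.5):
    """alpha-relative density ratio p / (alpha * p + (1 - alpha) * q)."""
    p = gauss_pdf(x, 0.0, 1.0)   # numerator density (hypothetical)
    q = gauss_pdf(x, 2.0, 1.0)   # denominator density (hypothetical)
    return p / (alpha * p + (1 - alpha) * q)

x = np.linspace(-5, 5, 201)
r = relative_ratio(x, alpha=0.5)
```

The upper bound 1 / alpha is what makes the relative ratio easier to estimate than the plain ratio p / q, which can diverge.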
no code implementations • 2 Jul 2021 • Atsutoshi Kumagai, Tomoharu Iwata, Yasuhiro Fujiwara
We propose a few-shot learning method for unsupervised feature selection, the task of selecting a subset of relevant features from unlabeled data.
no code implementations • 1 Mar 2021 • Tomoharu Iwata, Atsutoshi Kumagai
In a meta-learning framework, quick adaptation to each task and efficient backpropagation through that adaptation are important, since the model is adapted to every task in each training epoch.
no code implementations • 5 Feb 2021 • Masanori Yamada, Sekitoshi Kanai, Tomoharu Iwata, Tomokatsu Takahashi, Yuki Yamanaka, Hiroshi Takahashi, Atsutoshi Kumagai
We theoretically and experimentally confirm that the weight loss landscape becomes sharper as the magnitude of the noise of adversarial training increases in the linear logistic regression model.
1 code implementation • NeurIPS 2020 • Tomoharu Iwata, Atsutoshi Kumagai
We propose a heterogeneous meta-learning method that trains a model on tasks with various attribute spaces, such that it can solve unseen tasks whose attribute spaces are different from the training tasks given a few labeled instances.
no code implementations • 30 Sep 2020 • Tomoharu Iwata, Atsutoshi Kumagai
Forecasting models are usually trained using time-series data in a specific target task.
1 code implementation • 27 Feb 2020 • Atsutoshi Kumagai, Tomoharu Iwata, Yasuhiro Fujiwara
To learn node embeddings specialized for anomaly detection, in which there is a class imbalance due to the rarity of anomalies, the parameters of a GCN are trained to minimize the volume of a hypersphere that encloses the node embeddings of normal instances while embedding anomalous ones outside the hypersphere.
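The one-class objective can be sketched with synthetic stand-ins for the GCN node embeddings; the dimensions, values, and center choice below are illustrative assumptions rather than the paper's trained model.

```python
import numpy as np

rng = np.random.default_rng(0)

# hypothetical node embeddings a GCN might produce
emb_normal = rng.normal(0.0, 0.5, size=(100, 8))     # normal nodes
emb_anom = rng.normal(0.0, 0.5, size=(5, 8)) + 4.0   # rare anomalous nodes

c = emb_normal.mean(axis=0)          # hypersphere center

def sq_dist(emb):
    """Squared distance to the center; used as the anomaly score."""
    return np.sum((emb - c) ** 2, axis=1)

# one-class objective: shrink the hypersphere enclosing normal embeddings,
# e.g. the mean squared distance of normal nodes to the center
volume_loss = sq_dist(emb_normal).mean()
```

Nodes far from the center then receive high anomaly scores, so the anomalous embeddings score well above the normal ones.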
no code implementations • NeurIPS 2019 • Atsutoshi Kumagai, Tomoharu Iwata, Yasuhiro Fujiwara
The proposed method can infer anomaly detectors for target domains without re-training by introducing latent domain vectors: latent representations of the domains from which the anomaly detectors are inferred.
no code implementations • 9 Jul 2018 • Atsutoshi Kumagai, Tomoharu Iwata
The proposed method can infer appropriate domain-specific models without any semantic descriptors by introducing latent domain vectors: latent representations of the domains from which the models are inferred.