no code implementations • ICML 2020 • Yasutoshi Ida, Sekitoshi Kanai, Yasuhiro Fujiwara, Tomoharu Iwata, Koh Takeuchi, Hisashi Kashima
This is because coordinate descent iteratively updates all the parameters in the objective until convergence.
no code implementations • 3 Oct 2024 • Yuka Hashimoto, Tomoharu Iwata
We propose deep Koopman-layered models with learnable parameters in the form of Toeplitz matrices for analyzing the dynamics of time-series data.
no code implementations • 6 Jun 2024 • Atsutoshi Kumagai, Tomoharu Iwata, Yasuhiro Fujiwara
Existing PU learning methods require large amounts of PU data, but sufficient data are often unavailable in practice.
1 code implementation • 29 May 2024 • Hiroshi Takahashi, Tomoharu Iwata, Atsutoshi Kumagai, Yuuki Yamanaka
With our approach, we can approximate the anomaly scores for normal data using the unlabeled and anomaly data.
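A minimal sketch of the positive-unlabeled idea this snippet alludes to, under assumptions of my own (the quadratic score function and the mixture ratio `pi` are illustrative, not the paper's formulation): because unlabeled data are a mixture of normal and anomalous data, an expectation over normal data can be recovered from unlabeled and anomaly samples alone.

```python
import numpy as np

rng = np.random.default_rng(0)

def score(x):
    # Illustrative anomaly score: squared distance from the origin.
    return (x ** 2).sum(axis=1)

pi = 0.2  # assumed anomaly ratio in the unlabeled data
normal = rng.normal(0.0, 1.0, size=(5000, 2))
anomaly = rng.normal(4.0, 1.0, size=(1000, 2))
# Unlabeled data: a (1 - pi)/pi mixture of normal and anomalous samples.
unlabeled = np.vstack([normal[:4000], anomaly[:1000]])

# E_normal[score] ~= (E_unlabeled[score] - pi * E_anomaly[score]) / (1 - pi)
est = (score(unlabeled).mean() - pi * score(anomaly).mean()) / (1 - pi)
direct = score(normal).mean()
print(est, direct)  # the two estimates should be close
```

The decomposition holds for any integrable score, which is what makes anomaly scores for normal data approximable without normal labels.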
no code implementations • 14 Feb 2024 • Yusuke Tanaka, Takaharu Yaguchi, Tomoharu Iwata, Naonori Ueda
Operator learning, which aims to learn a mapping between function spaces, has received significant attention in recent years.
no code implementations • 29 Jan 2024 • Yoshiaki Takimoto, Yusuke Tanaka, Tomoharu Iwata, Maya Okawa, Hideaki Kim, Hiroyuki Toda, Takeshi Kurashima
Point processes are widely used in many applications to predict events related to human activities.
no code implementations • 13 Dec 2023 • Tomoharu Iwata, Atsutoshi Kumagai
We propose a meta-learning method for calibrating deep kernel GPs to improve regression uncertainty estimation with limited training data.
no code implementations • 9 Nov 2023 • Tomoharu Iwata, Atsutoshi Kumagai
The proposed method embeds labeled and unlabeled data simultaneously in a task-specific space using a neural network, and the unlabeled data's labels are estimated by adapting classification or regression models in the embedding space.
no code implementations • 20 Oct 2023 • Tomoharu Iwata, Yusuke Tanaka, Naonori Ueda
We propose a neural network-based meta-learning method to efficiently solve partial differential equation (PDE) problems.
1 code implementation • 19 Oct 2023 • Yuya Yoshikawa, Tomoharu Iwata
To improve the faithfulness, we propose insertion/deletion metric-aware explanation-based optimization (ID-ExpO), which optimizes differentiable predictors to improve both the insertion and deletion scores of the explanations while maintaining their predictive accuracy.
no code implementations • 23 Jul 2023 • Futoshi Futami, Tomoharu Iwata
Furthermore, we extend the existing analysis of Bayesian meta-learning and, for the first time, characterize the sensitivities among tasks.
no code implementations • 19 May 2023 • Tomoharu Iwata, Yoichi Chikahara
With our formulation, we can obtain optimal task-specific parameters in a closed form that are differentiable with respect to task-shared parameters, making it possible to perform effective meta-learning.
no code implementations • 26 Dec 2022 • Tomoharu Iwata, Yoshinobu Kawahara
Inductive biases are helpful for training neural networks especially when training data are small.
1 code implementation • 2 Nov 2022 • Shuhei A. Horiguchi, Tomoharu Iwata, Taku Tsuzuki, Yosuke Ozawa
In this paper, we investigate a simple alternative approach: tackling the problem in the original high-dimensional space using the information from the learned low-dimensional structure.
no code implementations • 4 Oct 2022 • Tomoharu Iwata
In some real-world applications, due to privacy protection or the difficulty of data collection, we cannot observe individual outputs for each instance, but we can observe aggregated outputs summed over multiple instances in a set.
no code implementations • 16 Aug 2022 • Tomoharu Iwata, Yoshinobu Kawahara
With the proposed method, a policy network is trained such that the eigenvalues of a Koopman operator of controlled dynamics are close to the target eigenvalues.
1 code implementation • 7 Jul 2022 • Maya Okawa, Tomoharu Iwata
Traditionally, theoretical models of opinion dynamics have been proposed to describe the interactions between individuals (i.e., social interaction) and their impact on the evolution of collective opinions.
no code implementations • 24 Jun 2022 • Yusuke Tanaka, Toshiyuki Tanaka, Tomoharu Iwata, Takeshi Kurashima, Maya Okawa, Yasunori Akagi, Hiroyuki Toda
Since the supports may have various granularities depending on attributes (e.g., poverty rate and crime rate), modeling such data is not straightforward.
no code implementations • 20 Jun 2022 • Tomoharu Iwata, Atsutoshi Kumagai
With the proposed method, the OoD detection is performed by density estimation in a latent space.
no code implementations • 2 Jun 2022 • Futoshi Futami, Tomoharu Iwata, Naonori Ueda, Issei Sato, Masashi Sugiyama
Bayesian deep learning plays an important role especially for its ability to evaluate epistemic uncertainty (EU).
no code implementations • 14 Feb 2022 • Keisuke Kinoshita, Marc Delcroix, Tomoharu Iwata
Speaker diarization has been investigated extensively as a central task in meeting analysis.
1 code implementation • IEEE Transactions on Neural Networks and Learning Systems 2021 • Yuya Yoshikawa, Tomoharu Iwata
In the proposed model, both the prediction and explanation for each sample are performed using an easy-to-interpret locally linear model.
no code implementations • 7 Dec 2021 • Tomoharu Iwata, Yuya Yoshikawa
For improving the interpretability, reducing the number of examples in the explanation model is important.
no code implementations • 26 Nov 2021 • Hitoshi Shimizu, Hirohiko Suwa, Tomoharu Iwata, Akinori Fujino, Hiroshi Sawada, Keiichi Yasumoto
In this study, we formulate the "Evacuation Shelter Scheduling Problem," which allocates evacuees to shelters so as to minimize the movement costs of the evacuees and the operation costs of the shelters.
no code implementations • 1 Nov 2021 • Tomoharu Iwata
In experiments using three text document datasets, we demonstrate that the proposed method achieves better BO performance than the existing methods.
no code implementations • 2 Jul 2021 • Atsutoshi Kumagai, Tomoharu Iwata, Yasuhiro Fujiwara
We propose a few-shot learning method for unsupervised feature selection, which is a task to select a subset of relevant features in unlabeled data.
no code implementations • NeurIPS 2021 • Atsutoshi Kumagai, Tomoharu Iwata, Yasuhiro Fujiwara
The closed-form solution enables fast and effective adaptation to a few instances, and its differentiability enables us to train our model such that the expected test error for relative DRE can be explicitly minimized after adapting to a few instances.
no code implementations • 29 Jun 2021 • Tomoharu Iwata
The neural network is meta-learned such that the expected imputation error is minimized when the factorized matrices are adapted to each matrix by a maximum a posteriori (MAP) estimation.
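As a minimal, non-meta-learned sketch of the inner adaptation step this snippet describes (the rank, learning rate, and Gaussian-prior strength are illustrative assumptions), MAP estimation of factorized matrices on the observed entries lets the missing entries be imputed from the low-rank product:

```python
import numpy as np

rng = np.random.default_rng(0)

# Ground-truth low-rank matrix with some entries missing.
U_true = rng.normal(size=(20, 3))
V_true = rng.normal(size=(3, 15))
X = U_true @ V_true
mask = rng.random(X.shape) > 0.3  # True = observed entry

# MAP adaptation: squared error on observed entries + Gaussian prior (L2).
lam, lr = 0.01, 0.01
U = rng.normal(scale=0.1, size=(20, 3))
V = rng.normal(scale=0.1, size=(3, 15))
for _ in range(3000):
    R = mask * (U @ V - X)          # residual on observed entries only
    U -= lr * (R @ V.T + lam * U)   # gradient step on the MAP objective
    V -= lr * (U.T @ R + lam * V)

# Impute missing entries with the adapted factorization.
imputation_error = np.abs((U @ V - X)[~mask]).mean()
print(imputation_error)
```

In the paper's setting the priors come from a meta-learned neural network rather than the fixed Gaussian used here, so that this adaptation minimizes the expected imputation error across matrices.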
no code implementations • NeurIPS 2021 • Futoshi Futami, Tomoharu Iwata, Naonori Ueda, Issei Sato, Masashi Sugiyama
First, we provide a new second-order Jensen inequality, which has the repulsion term based on the loss function.
no code implementations • 24 May 2021 • Maya Okawa, Tomoharu Iwata, Yusuke Tanaka, Hiroyuki Toda, Takeshi Kurashima, Hisashi Kashima
Hawkes processes offer a central tool for modeling diffusion processes, in which the influence of past events is described by the triggering kernel.
no code implementations • 19 Apr 2021 • Tomoharu Iwata
The proposed method trains the neural networks such that the expected test likelihood is improved when topic model parameters are estimated by maximizing the posterior probability using the priors based on the EM algorithm.
1 code implementation • EACL 2021 • Makoto Morishita, Jun Suzuki, Tomoharu Iwata, Masaaki Nagata
It is crucial to provide an inter-sentence context in Neural Machine Translation (NMT) models for higher-quality translation.
no code implementations • 1 Mar 2021 • Tomoharu Iwata, Atsutoshi Kumagai
With a meta-learning framework, quick adaptation to each task and efficient backpropagation through the adaptation are important, since the model is adapted to each task in every training epoch.
no code implementations • 1 Mar 2021 • Tomoharu Iwata
We propose a meta-learning method that trains neural networks to obtain representations such that clustering performance improves when the representations are clustered by variational Bayesian (VB) inference with an infinite Gaussian mixture model.
no code implementations • 9 Feb 2021 • Tomoharu Iwata, Yoshinobu Kawahara
With the proposed method, a representation of a given short time-series is obtained by a bidirectional LSTM for extracting its properties.
no code implementations • 5 Feb 2021 • Masanori Yamada, Sekitoshi Kanai, Tomoharu Iwata, Tomokatsu Takahashi, Yuki Yamanaka, Hiroshi Takahashi, Atsutoshi Kumagai
We theoretically and experimentally confirm that the weight loss landscape becomes sharper as the magnitude of the noise of adversarial training increases in the linear logistic regression model.
no code implementations • 11 Dec 2020 • Tomoharu Iwata, Yoshinobu Kawahara
With our proposed method, the forecast error is backpropagated through the neural networks and the spectral decomposition, enabling end-to-end learning of Koopman spectral analysis.
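As a minimal, non-learned baseline for the spectral decomposition in this pipeline (the toy linear system and horizon are my own illustrative choices), dynamic mode decomposition estimates the Koopman operator by least squares and forecasts through its eigendecomposition:

```python
import numpy as np

rng = np.random.default_rng(0)

# A linear system x_{t+1} = A x_t, observed as a time series.
A = np.array([[0.9, -0.2], [0.2, 0.9]])
X = np.empty((50, 2))
X[0] = [1.0, 0.0]
for t in range(49):
    X[t + 1] = A @ X[t]

# DMD: least-squares estimate of the Koopman operator on these observables,
# i.e., solve x_{t+1} ~= K x_t over the trajectory.
K = np.linalg.lstsq(X[:-1], X[1:], rcond=None)[0].T

# Spectral forecast: propagate the initial state through the
# eigendecomposition of K (modes scaled by eigenvalue powers).
eigvals, eigvecs = np.linalg.eig(K)
coeff = np.linalg.solve(eigvecs, X[0].astype(complex))
forecast = (eigvecs @ (eigvals ** 49 * coeff)).real
print(forecast, X[49])  # spectral forecast vs. the true final state
```

The proposed method replaces the fixed observables here with neural networks and backpropagates the forecast error through this spectral decomposition.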
1 code implementation • NeurIPS 2020 • Tomoharu Iwata, Atsutoshi Kumagai
We propose a heterogeneous meta-learning method that trains a model on tasks with various attribute spaces, such that it can solve unseen tasks whose attribute spaces are different from the training tasks given a few labeled instances.
1 code implementation • EMNLP (MRL) 2021 • Takashi Wada, Tomoharu Iwata, Yuji Matsumoto, Timothy Baldwin, Jey Han Lau
We propose a new approach for learning contextualised cross-lingual word embeddings based on a small parallel corpus (e.g., a few hundred sentence pairs).
no code implementations • 12 Oct 2020 • Tomoharu Iwata
Meta-learning is an important approach to improve machine learning performance with a limited number of observations for target tasks.
no code implementations • 9 Oct 2020 • Tomoharu Iwata, Yusuke Tanaka
We propose a few-shot learning method for spatial regression.
no code implementations • 30 Sep 2020 • Tomoharu Iwata, Atsutoshi Kumagai
Forecasting models are usually trained using time-series data in a specific target task.
no code implementations • 16 Jun 2020 • Yasunori Akagi, Yusuke Tanaka, Tomoharu Iwata, Takeshi Kurashima, Hiroyuki Toda
In this study, we propose a new framework in which OT is considered as a maximum a posteriori (MAP) solution of a probabilistic generative model.
no code implementations • 13 Mar 2020 • Yuya Yoshikawa, Tomoharu Iwata
Additionally, the prediction is interpretable because it is obtained by the inner product between the simplified representations and the sparse weights, where only a small number of weights are selected by our gate module in the NGSLL.
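A hedged sketch of this gating idea (the hard top-k gate below is an illustrative stand-in for the NGSLL's differentiable gate module, and all sizes are assumptions): the prediction is an inner product between a sample's representation and weights of which only a few are kept active.

```python
import numpy as np

rng = np.random.default_rng(0)

def top_k_gate(weights, k):
    # Keep only the k largest-magnitude weights; zero out the rest.
    gate = np.zeros_like(weights)
    idx = np.argsort(np.abs(weights))[-k:]
    gate[idx] = 1.0
    return weights * gate

representation = rng.normal(size=8)   # simplified sample representation
dense_weights = rng.normal(size=8)    # sample-specific weight vector
sparse_weights = top_k_gate(dense_weights, k=2)

# Interpretable prediction: inner product with only two active weights,
# so the two surviving coordinates explain the output directly.
prediction = representation @ sparse_weights
print(np.count_nonzero(sparse_weights), prediction)
```

Sparsifying the weights this way is what makes the per-sample explanation short enough to inspect.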
1 code implementation • 27 Feb 2020 • Atsutoshi Kumagai, Tomoharu Iwata, Yasuhiro Fujiwara
To learn node embeddings specialized for anomaly detection, in which there is a class imbalance due to the rarity of anomalies, the parameters of a GCN are trained to minimize the volume of a hypersphere that encloses the node embeddings of normal instances while embedding anomalous ones outside the hypersphere.
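A minimal sketch of this one-class objective on fixed embeddings (the margin value and the synthetic embeddings are illustrative assumptions; the paper optimizes GCN parameters rather than the embeddings directly): normal embeddings are pulled toward the hypersphere center while anomalous ones are pushed outside.

```python
import numpy as np

rng = np.random.default_rng(0)

normal_emb = rng.normal(0.0, 0.5, size=(100, 4))  # normal node embeddings
anomal_emb = rng.normal(3.0, 0.5, size=(10, 4))   # anomalous node embeddings
center = normal_emb.mean(axis=0)                  # hypersphere center

def sq_dist(z):
    # Squared distance from the center, used as the anomaly score.
    return ((z - center) ** 2).sum(axis=1)

# One-class loss: shrink the sphere around normal embeddings and apply a
# hinge penalty to anomalies that fall inside an (assumed) margin.
margin = 4.0
loss = sq_dist(normal_emb).mean() + np.maximum(0.0, margin - sq_dist(anomal_emb)).mean()
print(loss, sq_dist(normal_emb).mean(), sq_dist(anomal_emb).mean())
```

Minimizing the enclosing volume for normal instances only is what handles the class imbalance: the rare anomalies enter solely through the hinge term.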
no code implementations • NeurIPS 2019 • Atsutoshi Kumagai, Tomoharu Iwata, Yasuhiro Fujiwara
The proposed method can infer the anomaly detectors for target domains without re-training by introducing the concept of latent domain vectors, which are latent representations of the domains and are used for inferring the anomaly detectors.
no code implementations • 17 Sep 2019 • Tomoharu Iwata, Takuma Otsuka
By using the neural network covariance function, we can extract nonlinear correlation among feature vectors that are shared across related tasks.
no code implementations • 11 Sep 2019 • Tomoharu Iwata, Machiko Toyoda, Shotaro Tora, Naonori Ueda
We model the anomaly score function by a neural network-based unsupervised anomaly detection method, e.g., autoencoders.
no code implementations • NeurIPS 2019 • Yusuke Tanaka, Toshiyuki Tanaka, Tomoharu Iwata, Takeshi Kurashima, Maya Okawa, Yasunori Akagi, Hiroyuki Toda
By deriving the posterior GP, we can predict the data value at any location by considering the spatial correlations and the dependencies between areal data sets simultaneously.
1 code implementation • ACL 2019 • Takashi Wada, Tomoharu Iwata, Yuji Matsumoto
Recently, a variety of unsupervised methods have been proposed that map pre-trained word embeddings of different languages into the same space without any parallel data.
no code implementations • 21 Jun 2019 • Maya Okawa, Tomoharu Iwata, Takeshi Kurashima, Yusuke Tanaka, Hiroyuki Toda, Naonori Ueda
Though many point processes have been proposed to model events in a continuous spatio-temporal space, none of them can take into account the rich contextual factors that affect event occurrence, such as weather, social activities, geographical characteristics, and traffic.
no code implementations • 12 Apr 2019 • Tomoharu Iwata, Yuki Yamanaka
We propose a supervised anomaly detection method based on neural density estimators, where the negative log likelihood is used for the anomaly score.
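A minimal sketch of using a negative log-likelihood as the anomaly score, with a fitted Gaussian standing in for the paper's neural density estimator (the 2-D data and the Gaussian family are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, size=(2000, 2))  # normal training data

# Fit a Gaussian density to the normal data.
mu = train.mean(axis=0)
cov = np.cov(train.T)
inv, logdet = np.linalg.inv(cov), np.linalg.slogdet(cov)[1]

def anomaly_score(x):
    # Negative log-likelihood under the fitted Gaussian density:
    # low density => high score.
    d = x - mu
    return 0.5 * (d @ inv * d).sum(axis=1) + 0.5 * logdet + np.log(2 * np.pi)

inliers = np.array([[0.1, -0.2]])
outliers = np.array([[5.0, 5.0]])
print(anomaly_score(inliers), anomaly_score(outliers))
```

Points in low-density regions receive large negative log-likelihoods, which is exactly the behavior a supervised objective can then sharpen on labeled anomalies.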
no code implementations • 26 Mar 2019 • Yuki Yamanaka, Tomoharu Iwata, Hiroshi Takahashi, Masanori Yamada, Sekitoshi Kanai
Since our approach learns to reconstruct normal data points accurately while failing to reconstruct both known and unknown anomalies, it can accurately discriminate known and unknown anomalies from normal data points.
no code implementations • 23 Oct 2018 • Tomoharu Iwata, Takuma Otsuka, Hitoshi Shimizu, Hiroshi Sawada, Futoshi Naya, Naonori Ueda
In this paper, we propose a method to learn a function that outputs regulation effects given the current traffic situation as inputs.
no code implementations • 9 Oct 2018 • Tomoharu Iwata, Naonori Ueda
The estimated latent vectors contain hidden structural information of each object in the given relational dataset.
no code implementations • 21 Sep 2018 • Yusuke Tanaka, Tomoharu Iwata, Toshiyuki Tanaka, Takeshi Kurashima, Maya Okawa, Hiroyuki Toda
With the proposed model, a distribution for each auxiliary data set on the continuous space is modeled using a Gaussian process, where the representation of uncertainty considers the levels of granularity.
1 code implementation • 14 Sep 2018 • Hiroshi Takahashi, Tomoharu Iwata, Yuki Yamanaka, Masanori Yamada, Satoshi Yagi
However, KL divergence with the aggregated posterior cannot be calculated in a closed form, which prevents us from using this optimal prior.
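To see why this KL term has no closed form, note that the aggregated posterior is a mixture of the per-datapoint posteriors; a Monte Carlo estimate is one workaround, sketched below in one dimension (the component means and variances are illustrative assumptions, not the paper's estimator):

```python
import numpy as np

rng = np.random.default_rng(0)

# Per-datapoint posteriors q(z|x_i) = N(mu_i, sigma_i^2); the aggregated
# posterior q(z) = (1/N) sum_i q(z|x_i) is a mixture with no closed-form KL.
mus = rng.normal(0.0, 2.0, size=200)
sigmas = np.full(200, 0.5)

def log_q(z):
    # Log-density of the aggregated posterior, via a stable log-sum-exp.
    comp = -0.5 * ((z[:, None] - mus) / sigmas) ** 2 - np.log(sigmas * np.sqrt(2 * np.pi))
    m = comp.max(axis=1, keepdims=True)
    return (m + np.log(np.exp(comp - m).mean(axis=1, keepdims=True))).ravel()

def log_p(z):
    # Standard normal prior.
    return -0.5 * z ** 2 - 0.5 * np.log(2 * np.pi)

# Monte Carlo estimate of KL(q || p) using samples drawn from the mixture.
idx = rng.integers(0, 200, size=20000)
z = rng.normal(mus[idx], sigmas[idx])
kl_estimate = (log_q(z) - log_p(z)).mean()
print(kl_estimate)
```

Because the mixture here is much wider than the standard normal prior, the estimated KL comes out clearly positive, illustrating the gap that motivates replacing the fixed prior.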
no code implementations • 7 Sep 2018 • Takashi Wada, Tomoharu Iwata
The proposed model contains bidirectional LSTMs that perform as forward and backward language models, and these networks are shared among all the languages.
no code implementations • 9 Jul 2018 • Atsutoshi Kumagai, Tomoharu Iwata
The proposed method can infer appropriate domain-specific models without any semantic descriptors by introducing the concept of latent domain vectors, which are latent representations for the domains and are used for inferring the models.
no code implementations • 8 Feb 2018 • Akisato Kimura, Zoubin Ghahramani, Koh Takeuchi, Tomoharu Iwata, Naonori Ueda
In this paper, we propose a simple but effective method for training neural networks with a limited amount of training data.
no code implementations • 19 Jul 2017 • Tomoharu Iwata, Zoubin Ghahramani
We propose a simple method that combines neural networks and Gaussian processes.
no code implementations • NeurIPS 2016 • Tomoharu Iwata, Makoto Yamada
With the proposed model, all views of a non-anomalous instance are assumed to be generated from a single latent vector.
no code implementations • 22 Mar 2016 • Makoto Yamada, Koh Takeuchi, Tomoharu Iwata, John Shawe-Taylor, Samuel Kaski
We introduce the localized Lasso, which is suited for learning models that are both interpretable and have a high predictive power in problems with high dimensionality $d$ and small sample size $n$.
no code implementations • NeurIPS 2015 • Yuya Yoshikawa, Tomoharu Iwata, Hiroshi Sawada, Takeshi Yamada
We propose a kernel-based method for finding matching between instances across different domains, such as multilingual documents and images with annotations.
no code implementations • NeurIPS 2014 • Yuya Yoshikawa, Tomoharu Iwata, Hiroshi Sawada
With the latent SMM, a latent vector is associated with each vocabulary term, and each document is represented as a distribution of the latent vectors for words appearing in the document.
no code implementations • 13 Nov 2014 • Tomoharu Iwata, Makoto Yamada
We propose a nonparametric Bayesian probabilistic latent variable model for multi-view anomaly detection, which is the task of finding instances that have inconsistent views.
1 code implementation • 9 Aug 2014 • Tomoharu Iwata, David Duvenaud, Zoubin Ghahramani
A mixture of Gaussians fit to a single curved or heavy-tailed cluster will report that the data contains many clusters.
no code implementations • NeurIPS 2010 • Katsuhiko Ishiguro, Tomoharu Iwata, Naonori Ueda, Joshua B. Tenenbaum
We propose a new probabilistic model for analyzing dynamic evolutions of relational data, such as additions, deletions and split & merge, of relation clusters like communities in social networks.
no code implementations • NeurIPS 2009 • Tomoharu Iwata, Takeshi Yamada, Naonori Ueda
We propose a probabilistic topic model for analyzing and extracting content-related annotations from noisy annotated discrete data such as web pages stored in social bookmarking services.