no code implementations • ICML 2020 • Yasutoshi Ida, Sekitoshi Kanai, Yasuhiro Fujiwara, Tomoharu Iwata, Koh Takeuchi, Hisashi Kashima
This is because coordinate descent iteratively updates all the parameters in the objective until convergence.
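For intuition, the baseline behavior described here can be sketched as plain coordinate descent on an l1-regularized least-squares (Lasso) objective, chosen purely for illustration: every sweep touches every coordinate until convergence, which is exactly the per-iteration cost a faster method would avoid. The names and tolerance below are illustrative, not the paper's implementation.

```python
import numpy as np

def soft_threshold(z, t):
    """Soft-thresholding operator used in the Lasso coordinate update."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_coordinate_descent(X, y, lam, n_sweeps=100, tol=1e-6):
    """Plain coordinate descent for (1/2)||y - X b||^2 + lam * ||b||_1:
    every sweep updates ALL coordinates, repeated until convergence."""
    n, d = X.shape
    beta = np.zeros(d)
    col_sq = (X ** 2).sum(axis=0)                    # precomputed ||X_j||^2
    for _ in range(n_sweeps):
        beta_old = beta.copy()
        for j in range(d):                           # full sweep over coordinates
            r_j = y - X @ beta + X[:, j] * beta[j]   # residual without coordinate j
            beta[j] = soft_threshold(X[:, j] @ r_j, lam) / col_sq[j]
        if np.abs(beta - beta_old).max() < tol:      # stop only at convergence
            break
    return beta
```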
1 code implementation • 11 Jun 2023 • Junya Arai, Yasuhiro Fujiwara, Makoto Onizuka
Subgraph matching, which finds subgraphs isomorphic to a query, is the key to information retrieval from data represented as a graph.
no code implementations • 14 Mar 2023 • Yasutoshi Ida, Sekitoshi Kanai, Kazuki Adachi, Atsutoshi Kumagai, Yasuhiro Fujiwara
Regularized discrete optimal transport (OT) is a powerful tool to measure the distance between two discrete distributions that have been constructed from data samples on two different domains.
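As background, entropy-regularized discrete OT is commonly computed with Sinkhorn iterations; the sketch below shows that standard scheme (the regularization strength `eps`, the iteration count, and the function name are illustrative, and the paper's own algorithm may differ).

```python
import numpy as np

def sinkhorn(a, b, C, eps=0.1, n_iters=200):
    """Standard Sinkhorn iterations for entropy-regularized OT between
    discrete distributions a and b with cost matrix C."""
    K = np.exp(-C / eps)             # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iters):
        v = b / (K.T @ u)            # scale columns to match marginal b
        u = a / (K @ v)              # scale rows to match marginal a
    P = u[:, None] * K * v[None, :]  # transport plan
    return (P * C).sum()             # transport cost under the plan
```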
1 code implementation • NeurIPS 2021 • Masahiro Nakano, Yasuhiro Fujiwara, Akisato Kimura, Takeshi Yamada, Naonori Ueda
Our main contribution is to introduce the notion of permutons into the well-known Chinese restaurant process (CRP) for sequence partitioning: a permuton is a probability measure on $[0, 1]\times [0, 1]$ and can be regarded as a geometric interpretation of the scaling limit of permutations.
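For reference, the plain CRP mentioned here has a one-line sequential sampling rule: the $i$-th item joins an existing block with probability proportional to the block's size, or opens a new block with probability proportional to a concentration parameter $\alpha$. A minimal sketch of that baseline, without the permuton construction (`alpha` and the seed are illustrative):

```python
import random

def crp_partition(n, alpha, seed=0):
    """Sample a partition of n items from the Chinese restaurant process:
    item i joins block k w.p. |block k| / (i + alpha),
    or opens a new block w.p. alpha / (i + alpha)."""
    rng = random.Random(seed)
    blocks = []
    for i in range(n):
        weights = [len(b) for b in blocks] + [alpha]
        k = rng.choices(range(len(blocks) + 1), weights=weights)[0]
        if k == len(blocks):
            blocks.append([i])       # open a new block
        else:
            blocks[k].append(i)      # join an existing block
    return blocks
```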
no code implementations • 2 Jul 2021 • Atsutoshi Kumagai, Tomoharu Iwata, Yasuhiro Fujiwara
We propose a few-shot learning method for unsupervised feature selection, the task of selecting a subset of relevant features from unlabeled data.
no code implementations • NeurIPS 2021 • Atsutoshi Kumagai, Tomoharu Iwata, Yasuhiro Fujiwara
The closed-form solution enables fast and effective adaptation to a few instances, and its differentiability enables us to train our model such that the expected test error for relative DRE can be explicitly minimized after adapting to a few instances.
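The snippet does not spell out the closed form, but relative DRE is classically fitted with RuLSIF, whose least-squares objective admits a closed-form solution; the sketch below shows that standard form as a plausible reference point. The choice of RuLSIF, the Gaussian basis, and all names are assumptions for illustration, not the paper's model.

```python
import numpy as np

def rulsif_closed_form(Xp, Xq, centers, alpha=0.5, sigma=1.0, lam=0.1):
    """RuLSIF-style closed-form fit of the relative density ratio
    r(x) = p(x) / (alpha * p(x) + (1 - alpha) * q(x))
    with Gaussian basis functions; returns the basis weights theta."""
    def phi(X):  # Gaussian kernel features against the chosen centers
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma ** 2))
    Pp, Pq = phi(Xp), phi(Xq)
    H = alpha * Pp.T @ Pp / len(Xp) + (1 - alpha) * Pq.T @ Pq / len(Xq)
    h = Pp.mean(axis=0)
    theta = np.linalg.solve(H + lam * np.eye(H.shape[0]), h)
    return theta  # estimated ratio: r_hat(x) = phi(x) @ theta
```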
no code implementations • 28 Dec 2020 • Junya Arai, Makoto Onizuka, Yasuhiro Fujiwara, Sotetsu Iwamura
That is, our algorithm generates failure patterns whenever a partial embedding is found to be unextendable to an isomorphic embedding.
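A hedged sketch of the general idea: a backtracking matcher that records a signature of each unextendable partial embedding and consults that cache before exploring a branch. The adjacency-dict representation and the signature choice are illustrative; in this fixed-order DFS the cache only pays off across repeated invocations, whereas the paper's failure patterns are defined so that pruning triggers within a single search.

```python
def consistent(q_adj, d_adj, partial, u, v):
    """Check the edges of query node u against already-mapped neighbors."""
    return all(v in d_adj[partial[w]] for w in q_adj[u] if w in partial)

def match(q_adj, d_adj, partial=None, failures=None):
    """Backtracking subgraph matching that memoizes failure patterns:
    when a partial embedding cannot be extended to an isomorphic
    embedding, its signature is cached so the branch can be pruned."""
    partial = partial or {}                       # query node -> data node
    failures = set() if failures is None else failures
    if len(partial) == len(q_adj):
        return partial                            # full isomorphic embedding
    key = frozenset(partial.items())              # illustrative signature
    if key in failures:
        return None                               # known failure pattern
    u = next(n for n in q_adj if n not in partial)
    for v in d_adj:
        if v not in partial.values() and consistent(q_adj, d_adj, partial, u, v):
            res = match(q_adj, d_adj, {**partial, u: v}, failures)
            if res is not None:
                return res
    failures.add(key)                             # record the failure pattern
    return None
```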
1 code implementation • 27 Feb 2020 • Atsutoshi Kumagai, Tomoharu Iwata, Yasuhiro Fujiwara
To learn node embeddings specialized for anomaly detection, where class imbalance arises from the rarity of anomalies, the parameters of a GCN are trained to minimize the volume of a hypersphere that encloses the node embeddings of normal instances while placing the embeddings of anomalous ones outside the hypersphere.
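A minimal sketch of such a hypersphere objective in PyTorch (the margin, the fixed center, and the equal weighting of the two terms are assumptions; the paper's loss may differ in detail):

```python
import torch

def hypersphere_loss(z_normal, z_anomal, center, margin=1.0):
    """One-class hypersphere objective on node embeddings:
    pull normal embeddings toward the center (shrinking the enclosing
    sphere) and push known anomalies outside by at least `margin`."""
    d_norm = ((z_normal - center) ** 2).sum(dim=1)
    d_anom = ((z_anomal - center) ** 2).sum(dim=1)
    loss_normal = d_norm.mean()                                 # shrink the sphere
    loss_anomal = torch.clamp(margin - d_anom, min=0.0).mean()  # push anomalies out
    return loss_normal + loss_anomal
```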
no code implementations • NeurIPS 2019 • Yasutoshi Ida, Yasuhiro Fujiwara, Hisashi Kashima
Block Coordinate Descent is a standard approach for obtaining the parameters of Sparse Group Lasso; it iteratively updates the parameters group by group.
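For reference, one standard Block Coordinate Descent sweep for the Sparse Group Lasso looks as follows: each group is first tested for exact zeroing via the group-level optimality condition, and active groups are refined with proximal steps. This is the textbook scheme (step size, inner iteration count, and names are illustrative), not the paper's accelerated variant.

```python
import numpy as np

def soft(z, t):
    """Elementwise soft-thresholding (the l1 proximal operator)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def sgl_block_step(X, y, beta, groups, lam1, lam2, lr=1e-3, inner=50):
    """One sweep of block coordinate descent for the sparse group lasso:
    each group is either zeroed out wholesale (group-level check) or
    refined by proximal gradient steps on its own coordinates."""
    for g in groups:                                  # iterate over ALL groups
        Xg = X[:, g]
        r = y - X @ beta + Xg @ beta[g]               # residual without group g
        if np.linalg.norm(soft(Xg.T @ r, lam1)) <= lam2:
            beta[g] = 0.0                             # whole group is inactive
            continue
        for _ in range(inner):                        # proximal updates within group
            grad = -Xg.T @ (r - Xg @ beta[g])
            z = soft(beta[g] - lr * grad, lr * lam1)  # l1 prox
            nz = np.linalg.norm(z)
            beta[g] = max(0.0, 1 - lr * lam2 / nz) * z if nz > 0 else z  # group prox
    return beta
```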
no code implementations • NeurIPS 2019 • Atsutoshi Kumagai, Tomoharu Iwata, Yasuhiro Fujiwara
The proposed method can infer anomaly detectors for target domains without re-training by introducing latent domain vectors: latent representations of the domains from which the detectors are inferred.
no code implementations • 19 Sep 2019 • Sekitoshi Kanai, Yasutoshi Ida, Yasuhiro Fujiwara, Masanori Yamada, Shuichi Adachi
Furthermore, we reveal that robust CNNs with Absum are more robust than those with standard regularization methods against both transferred attacks and high-frequency noise, since Absum decreases the common sensitivity.
no code implementations • 10 Jun 2019 • Yasutoshi Ida, Yasuhiro Fujiwara
Our key idea is to introduce a priority term that quantifies the importance of each layer; we can select unimportant layers according to the priority and erase them after training.
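A hypothetical sketch of the selection step this describes: rank layers by their priority score and erase the least important ones after training. The `keep_ratio` threshold and score semantics are illustrative assumptions; the paper defines the priority term within training.

```python
def prune_layers(layers, priorities, keep_ratio=0.8):
    """Keep the highest-priority layers and erase the rest after training.
    `priorities` holds one importance score per layer (illustrative)."""
    n_keep = max(1, int(len(layers) * keep_ratio))
    ranked = sorted(range(len(layers)), key=lambda i: priorities[i], reverse=True)
    keep = sorted(ranked[:n_keep])          # preserve the original layer order
    return [layers[i] for i in keep]
```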
no code implementations • NeurIPS 2018 • Sekitoshi Kanai, Yasuhiro Fujiwara, Yuki Yamanaka, Shuichi Adachi
On the basis of this analysis, we propose sigsoftmax, which is composed of the product of an exponential function and a sigmoid function.
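Concretely, that composition normalizes $\exp(x_i)\,\sigma(x_i)$ over the output classes, in place of softmax's $\exp(x_i)$. A minimal sketch (the stability shift inside the exponential is an implementation detail that cancels in the ratio):

```python
import numpy as np

def sigsoftmax(x):
    """Sigsoftmax: normalize exp(x_i) * sigmoid(x_i) over the classes.
    Unlike softmax, the extra sigmoid factor is not shift-invariant,
    so the max-shift is applied only inside the exponential, where it
    cancels between numerator and denominator."""
    sig = 0.5 * (1.0 + np.tanh(0.5 * x))   # numerically stable sigmoid
    g = np.exp(x - x.max()) * sig          # shift cancels in the ratio
    return g / g.sum()
```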
no code implementations • NeurIPS 2017 • Sekitoshi Kanai, Yasuhiro Fujiwara, Sotetsu Iwamura
This problem is caused by an abrupt change in the dynamics of the GRU due to a small variation in the parameters.
no code implementations • 31 May 2016 • Yasutoshi Ida, Yasuhiro Fujiwara, Sotetsu Iwamura
Adaptive learning rate algorithms such as RMSProp are widely used for training deep neural networks.
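For reference, the standard RMSProp update the snippet refers to: each parameter's step is scaled by a running root mean square of its recent gradients (the hyperparameter defaults shown are conventional, not from the paper).

```python
import numpy as np

def rmsprop_step(theta, grad, state, lr=1e-3, rho=0.9, eps=1e-8):
    """One RMSProp update: keep an exponential moving average of the
    squared gradient and divide the step by its square root."""
    state["v"] = rho * state.get("v", np.zeros_like(theta)) + (1 - rho) * grad ** 2
    return theta - lr * grad / (np.sqrt(state["v"]) + eps), state
```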