1 code implementation • 6 Oct 2021 • Han Bao, Yoshihiro Nagano, Kento Nozawa
Recent theoretical studies have attempted to explain the benefit of large negative sample sizes by upper-bounding the downstream classification loss with the contrastive loss.
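As an illustrative sketch (not this paper's code), the contrastive loss referred to here is typically of the InfoNCE form: an anchor embedding is scored against one positive and K negative embeddings, and the loss is the cross-entropy of picking the positive. Dot-product similarity is an assumption made for brevity.

```python
import numpy as np

def info_nce_loss(anchor, positive, negatives):
    """InfoNCE-style contrastive loss (illustrative sketch).

    anchor, positive: (d,) embedding vectors.
    negatives: (K, d) matrix of negative-sample embeddings.
    Similarity is assumed to be a plain dot product.
    """
    # Logit 0 is the positive pair; logits 1..K are the negatives.
    logits = np.concatenate([[anchor @ positive], negatives @ anchor])
    # Cross-entropy with the positive pair as the correct "class".
    return -logits[0] + np.log(np.sum(np.exp(logits)))
```

Enlarging K adds terms to the log-sum-exp, which is the quantity the upper bounds in this line of work must control.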
no code implementations • 16 May 2021 • Haruka Asanuma, Shiro Takagi, Yoshihiro Nagano, Yuki Yoshida, Yasuhiko Igarashi, Masato Okada
Teacher-student learning is a framework that introduces two neural networks: one serves as the target function in supervised learning (the teacher), and the other is trained to mimic it (the student).
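A minimal sketch of the setup, under simplifying assumptions not taken from the paper (linear networks, squared loss): a fixed teacher generates labels, and a student is trained by gradient descent to reproduce them.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 5, 200

# Teacher: a fixed target function y = x @ w_teacher.
w_teacher = rng.normal(size=d)
# Student: a learner of the same form, starting from zero weights.
w_student = np.zeros(d)

X = rng.normal(size=(n, d))
y = X @ w_teacher  # labels produced by the teacher

lr = 0.1
for _ in range(200):
    # Gradient of the mean squared error between student and teacher outputs.
    grad = X.T @ (X @ w_student - y) / n
    w_student -= lr * grad
```

After training, the student weights converge toward the teacher's, which is the regime this framework analyzes.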
no code implementations • 25 Sep 2019 • Yoshihiro Nagano, Shiro Takagi, Yuki Yoshida, Masato Okada
The local learning approach extracts semantic representations for these datasets by training the embedding model from scratch on each local neighborhood.
no code implementations • 25 Sep 2019 • Shiro Takagi, Yoshihiro Nagano, Yuki Yoshida, Masato Okada
Model-agnostic meta-learning (MAML) is known as a powerful meta-learning method.
1 code implementation • 8 Feb 2019 • Yoshihiro Nagano, Shoichiro Yamaguchi, Yasuhiro Fujita, Masanori Koyama
Hyperbolic space is a geometry known to be well-suited for representation learning of data with an underlying hierarchical structure.
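To illustrate why (this is a generic sketch, not the paper's implementation, which works in a different model of hyperbolic space): in the Poincaré ball model, distances grow rapidly toward the boundary of the unit ball, giving exponentially more "room" for the leaves of a tree-like hierarchy.

```python
import numpy as np

def poincare_distance(u, v):
    """Geodesic distance between u and v in the Poincare ball model.

    u, v: points strictly inside the unit ball (||u||, ||v|| < 1).
    """
    sq = np.sum((u - v) ** 2)
    denom = (1.0 - np.sum(u ** 2)) * (1.0 - np.sum(v ** 2))
    # Standard closed form: arccosh(1 + 2 * ||u - v||^2 / denom).
    return np.arccosh(1.0 + 2.0 * sq / denom)
```

Moving a point toward the boundary (norm approaching 1) sends its distance from the origin to infinity, unlike the Euclidean case.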
no code implementations • 12 Dec 2017 • Yoshihiro Nagano, Ryo Karakida, Masato Okada
Our study demonstrated that the transient dynamics of inference first approach a concept and then move close to a memory.