no code implementations • 27 Oct 2024 • Yusuke Sekikawa, Chingwei Hsu, Satoshi Ikehata, Rei Kawakami, Ikuro Sato
We propose Gumbel-NeRF, a mixture-of-experts (MoE) neural radiance fields (NeRF) model with a hindsight expert selection mechanism for synthesizing novel views of unseen objects.
1 code implementation • 19 Dec 2022 • Tatsukichi Shibuya, Nakamasa Inoue, Rei Kawakami, Ikuro Sato
Learning the feedforward and feedback networks is sufficient for target propagation (TP) methods to train, but are these layer-wise autoencoders a necessary condition for TP to work?
no code implementations • 18 Nov 2022 • Aoyu Li, Ikuro Sato, Kohta Ishikawa, Rei Kawakami, Rio Yokota
Among various supervised deep metric learning methods, proxy-based approaches have achieved high retrieval accuracy.
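The proxy idea can be illustrated with a Proxy-NCA-style loss, where each sample is pulled toward a learnable proxy of its own class and pushed away from all other class proxies. This is a minimal sketch, not the paper's exact formulation; `proxy_nca_loss`, the squared-Euclidean distance, and the normalization choice are illustrative assumptions:

```python
import numpy as np

def proxy_nca_loss(embeddings, labels, proxies):
    """Proxy-NCA-style loss (sketch): attract each embedding to its
    class proxy, repel it from all other proxies."""
    # L2-normalize embeddings and proxies.
    e = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    p = proxies / np.linalg.norm(proxies, axis=1, keepdims=True)
    # Squared Euclidean distance from every sample to every proxy: (N, C).
    d = ((e[:, None, :] - p[None, :, :]) ** 2).sum(-1)
    losses = []
    for i, y in enumerate(labels):
        pos = np.exp(-d[i, y])                    # similarity to own proxy
        neg = np.exp(-np.delete(d[i], y)).sum()   # similarity to the rest
        losses.append(-np.log(pos / neg))
    return float(np.mean(losses))
```

Embeddings that sit on their own class proxy yield a lower loss than embeddings that sit on a wrong proxy, which is the behavior a proxy-based retrieval model is trained toward.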
1 code implementation • 15 Nov 2022 • Hiroki Naganuma, Kartik Ahuja, Shiro Takagi, Tetsuya Motokawa, Rio Yokota, Kohta Ishikawa, Ikuro Sato, Ioannis Mitliagkas
Modern deep learning systems do not generalize well when the test data distribution is slightly different from the training data distribution.
1 code implementation • 5 Jul 2022 • Ikuro Sato, Ryota Yamada, Masayuki Tanaka, Nakamasa Inoue, Rei Kawakami
We developed a training algorithm called PoF (Post-Training of Feature Extractor) that updates the feature-extractor part of an already-trained deep model to search for a flatter minimum.
1 code implementation • 2 Jun 2022 • Shingo Yashima, Teppei Suzuki, Kohta Ishikawa, Ikuro Sato, Rei Kawakami
Ensembles of deep neural networks demonstrate improved performance over single models.
1 code implementation • 25 Mar 2022 • Pablo Cervantes, Yusuke Sekikawa, Ikuro Sato, Koichi Shinoda
We confirm that our method with a Transformer decoder outperforms all relevant methods on HumanAct12, NTU-RGBD, and UESTC datasets in terms of realism and diversity of generated motions.
no code implementations • 29 Sep 2021 • Hiroki Naganuma, Taiji Suzuki, Rio Yokota, Masahiro Nomura, Kohta Ishikawa, Ikuro Sato
Generalization measures are intensively studied in the machine learning community to better model generalization gaps.
no code implementations • 13 Nov 2019 • Teppei Suzuki, Ikuro Sato
We propose a Regularization framework based on Adversarial Transformations (RAT) for semi-supervised learning.
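The core of such a framework is a consistency regularizer: among candidate transformations of an input, penalize the one that most changes the model's prediction. The sketch below is only in the spirit of RAT and is not the paper's method; `rat_regularizer`, the KL divergence choice, and the discrete transformation set are assumptions:

```python
import numpy as np

def rat_regularizer(predict, x, transforms):
    """Adversarial-transformation consistency penalty (sketch):
    among candidate transforms, take the one whose output diverges
    most from the prediction on the untransformed input."""
    p = predict(x)
    def kl(p, q):
        eps = 1e-12  # avoid log(0)
        return float(np.sum(p * np.log((p + eps) / (q + eps))))
    return max(kl(p, predict(t(x))) for t in transforms)
```

During semi-supervised training, this penalty would be added to the supervised loss on unlabeled samples, encouraging predictions that are stable under the worst-case transformation in the set.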
no code implementations • 4 Jun 2019 • Ikuro Sato, Kohta Ishikawa, Guoqing Liu, Masayuki Tanaka
This study addresses an issue of co-adaptation between a feature extractor and a classifier in a neural network.
2 code implementations • ICCV 2019 • Mikihiro Tanaka, Takayuki Itamochi, Kenichi Narioka, Ikuro Sato, Yoshitaka Ushiku, Tatsuya Harada
Moreover, we regard easily understood sentences as those that humans comprehend correctly and quickly.
no code implementations • 13 Sep 2018 • Kent Fujiwara, Ikuro Sato, Mitsuru Ambai, Yuichi Yoshida, Yoshiaki Sakakura
We present a novel compact point cloud representation that is inherently invariant to scale, coordinate change, and point permutation.
no code implementations • 14 Sep 2017 • Ryuji Kamiya, Takayoshi Yamashita, Mitsuru Ambai, Ikuro Sato, Yuji Yamauchi, Hironobu Fujiyoshi
Our method replaces real-valued inner-product computations with binary inner-product computations in existing network models, accelerating inference and reducing model size without retraining.
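The speed-up comes from the fact that an inner product of two {+1, -1} vectors reduces to bit matching (XNOR and popcount on packed bits in a real implementation). A minimal sketch of that identity, assuming sign binarization; `binarize` and `binary_dot` are illustrative names:

```python
import numpy as np

def binarize(x):
    """Map real values to {+1, -1} by sign (0 treated as +1)."""
    return np.where(x >= 0, 1, -1).astype(np.int8)

def binary_dot(a_bits, b_bits):
    """Inner product of two {+1, -1} vectors via bit matching:
    if m positions agree out of n, the dot product is 2*m - n."""
    n = a_bits.size
    matches = int(np.sum(a_bits == b_bits))
    return 2 * matches - n
```

On hardware, the `a_bits == b_bits` comparison becomes a single XNOR over packed 0/1 bits followed by a popcount, which is where the inference acceleration comes from.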
no code implementations • 13 May 2015 • Ikuro Sato, Hiroki Nishimura, Kensuke Yokoi
Our method, named APAC (Augmented PAttern Classification), performs classification using the optimal decision rule for augmented data learning.
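One way to read such a decision rule is: instead of classifying a single test sample, aggregate the class posteriors of its augmented copies and pick the class with the highest aggregate score. The sketch below assumes a sum of log-posteriors as the aggregate; this is an illustration, not necessarily the paper's exact rule, and `apac_predict` is a hypothetical name:

```python
import numpy as np

def apac_predict(probs):
    """Aggregate-over-augmentations decision (sketch).
    probs: (n_aug, n_classes) class posteriors, one row per
    augmented copy of a single test sample."""
    eps = 1e-12  # guard against log(0)
    score = np.log(probs + eps).sum(axis=0)  # joint log-posterior per class
    return int(np.argmax(score))
```

Note that the aggregate decision can differ from the decision on any single augmented copy, which is exactly why a dedicated rule for augmented-data classification matters.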
Ranked #7 on Image Classification on MNIST
no code implementations • 29 Jan 2015 • Kohta Ishikawa, Ikuro Sato, Mitsuru Ambai
Binary hashing is widely used for efficient approximate nearest neighbor search.
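The basic mechanism: database items are encoded as short binary codes, and search reduces to ranking codes by Hamming distance to the query's code. A minimal sketch, assuming unpacked 0/1 bit arrays; `hamming_search` is an illustrative name:

```python
import numpy as np

def hamming_search(query_code, db_codes, k=1):
    """Return indices of the k database codes nearest to the query
    in Hamming distance (codes are 0/1 bit arrays)."""
    dists = np.sum(db_codes != query_code, axis=1)  # bit-mismatch counts
    return np.argsort(dists, kind="stable")[:k].tolist()
```

In production systems the codes are packed into machine words so the mismatch count becomes an XOR plus popcount, making the scan dramatically cheaper than exact distance computation in the original feature space.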