no code implementations • 18 Nov 2022 • Aoyu Li, Ikuro Sato, Kohta Ishikawa, Rei Kawakami, Rio Yokota
Among various supervised deep metric learning methods, proxy-based approaches have achieved high retrieval accuracies.
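(Illustrative note: the paper's own formulation is not shown in this snippet; the sketch below is a generic Proxy-NCA-style loss, only to make concrete what "proxy-based" means. Each class gets a learnable proxy embedding, and samples are pulled toward their class proxy and pushed away from the others. All layer sizes and the scale value are hypothetical.)

```python
# Sketch of a generic proxy-based metric learning loss (not this paper's method).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ProxyNCALoss(nn.Module):
    def __init__(self, num_classes: int, embed_dim: int, scale: float = 10.0):
        super().__init__()
        # One learnable proxy vector per class.
        self.proxies = nn.Parameter(torch.randn(num_classes, embed_dim))
        self.scale = scale

    def forward(self, embeddings: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        # Cosine similarity between L2-normalized embeddings and proxies.
        emb = F.normalize(embeddings, dim=1)
        prox = F.normalize(self.proxies, dim=1)
        sims = self.scale * (emb @ prox.t())   # (batch, num_classes)
        # Pull each sample toward its own class proxy, push from the rest.
        return F.cross_entropy(sims, labels)
```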
1 code implementation • 15 Nov 2022 • Hiroki Naganuma, Kartik Ahuja, Shiro Takagi, Tetsuya Motokawa, Rio Yokota, Kohta Ishikawa, Ikuro Sato, Ioannis Mitliagkas
Modern deep learning systems do not generalize well when the test data distribution is slightly different from the training data distribution.
1 code implementation • 2 Jun 2022 • Shingo Yashima, Teppei Suzuki, Kohta Ishikawa, Ikuro Sato, Rei Kawakami
Ensembles of deep neural networks demonstrate improved performance over single models.
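(Illustrative note: the paper's specific ensembling scheme is not described in this snippet; below is a minimal, generic sketch of deep ensembling, where the predictive distributions of independently trained models are averaged.)

```python
# Minimal sketch of deep ensembling (generic, not specific to this paper).
import torch
import torch.nn.functional as F

def ensemble_predict(models, x: torch.Tensor) -> torch.Tensor:
    """Return the mean softmax output over an ensemble of classifiers."""
    probs = [F.softmax(m(x), dim=1) for m in models]
    return torch.stack(probs, dim=0).mean(dim=0)
```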
no code implementations • 29 Sep 2021 • Hiroki Naganuma, Taiji Suzuki, Rio Yokota, Masahiro Nomura, Kohta Ishikawa, Ikuro Sato
Generalization measures are intensively studied in the machine learning community to better model the generalization gap.
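(For context, the generalization gap is conventionally defined as the difference between population risk and empirical risk; the standard definition, not anything specific to this paper, is:)

```latex
% Generalization gap of a model f with loss \ell on n training samples.
\mathrm{gap}(f) \;=\; \mathbb{E}_{(x,y)\sim\mathcal{D}}\big[\ell(f(x), y)\big]
\;-\; \frac{1}{n}\sum_{i=1}^{n} \ell(f(x_i), y_i)
```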
no code implementations • 29 Sep 2021 • Shinya Gongyo, Kohta Ishikawa
By considering the scale symmetry of the network and specific properties of the straight-through estimators (STEs), we find that the STE with clipped ReLU is superior to STEs with the identity function and vanilla ReLU.
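(Illustrative note: this is not the paper's exact setup; the sketch below only shows what the three STE variants mean in practice. The forward pass uses a hard binary activation, while the backward pass substitutes the derivative of a chosen surrogate: identity, vanilla ReLU, or clipped ReLU.)

```python
# Sketch of straight-through estimators (STEs) for a binary activation.
import torch

class BinaryActSTE(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, surrogate: str = "clipped_relu"):
        ctx.save_for_backward(x)
        ctx.surrogate = surrogate
        return (x > 0).float()                                # hard binary activation

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        if ctx.surrogate == "identity":
            grad = grad_out                                   # d(identity)/dx = 1
        elif ctx.surrogate == "relu":
            grad = grad_out * (x > 0).float()                 # d(ReLU)/dx
        else:  # "clipped_relu"
            grad = grad_out * ((x > 0) & (x < 1)).float()     # d(clip(x, 0, 1))/dx
        return grad, None

# Usage: y = BinaryActSTE.apply(x, "clipped_relu")
```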
no code implementations • 3 Feb 2020 • Akiyoshi Kurobe, Yusuke Sekikawa, Kohta Ishikawa, Hideo Saito
For comparison, we also developed a novel deep learning approach (DirectNet) that directly regresses the pose between point clouds.
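(Illustrative note: DirectNet's actual architecture is not given in this snippet; the hypothetical sketch below only conveys the idea of directly regressing a relative pose from two point clouds, assuming a shared PointNet-style encoder. All layer sizes and the quaternion parameterization are made up for illustration.)

```python
# Hypothetical sketch of direct pose regression between two point clouds
# (not the actual DirectNet architecture).
import torch
import torch.nn as nn

class PoseRegressor(nn.Module):
    def __init__(self, feat_dim: int = 256):
        super().__init__()
        self.encoder = nn.Sequential(          # per-point MLP with shared weights
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, feat_dim), nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.Linear(2 * feat_dim, 128), nn.ReLU(),
            nn.Linear(128, 7),                 # 3 translation + 4 quaternion
        )

    def forward(self, src: torch.Tensor, tgt: torch.Tensor) -> torch.Tensor:
        # src, tgt: (batch, num_points, 3); max-pooling gives a global descriptor.
        f_src = self.encoder(src).max(dim=1).values
        f_tgt = self.encoder(tgt).max(dim=1).values
        out = self.head(torch.cat([f_src, f_tgt], dim=1))
        t, q = out[:, :3], out[:, 3:]
        q = q / q.norm(dim=1, keepdim=True)    # normalize the quaternion
        return torch.cat([t, q], dim=1)
```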
no code implementations • 4 Jun 2019 • Ikuro Sato, Kohta Ishikawa, Guoqing Liu, Masayuki Tanaka
This study addresses the issue of co-adaptation between a feature extractor and a classifier in a neural network.
no code implementations • ICCV 2015 • Takahiro Hasegawa, Mitsuru Ambai, Kohta Ishikawa, Gou Koutaki, Yuji Yamauchi, Takayoshi Yamashita, Hironobu Fujiyoshi
We propose a method for estimating multiple-hypothesis affine regions from a keypoint by using an anisotropic Laplacian-of-Gaussian (LoG) filter.
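(Illustrative note: the paper's exact filter parameterization may differ; the sketch below builds a generic anisotropic LoG kernel with different scales along two rotated axes. The Laplacian is computed analytically in the rotated frame, which is valid because the Laplacian operator is rotation-invariant.)

```python
# Sketch of an anisotropic Laplacian-of-Gaussian (LoG) kernel in NumPy.
import numpy as np

def anisotropic_log_kernel(size: int, sigma_x: float, sigma_y: float,
                           theta: float) -> np.ndarray:
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    # Rotate coordinates into the filter's principal axes.
    u = np.cos(theta) * x + np.sin(theta) * y
    v = -np.sin(theta) * x + np.cos(theta) * y
    g = np.exp(-(u**2 / (2 * sigma_x**2) + v**2 / (2 * sigma_y**2)))
    g /= 2 * np.pi * sigma_x * sigma_y
    # Analytic Laplacian of the anisotropic Gaussian.
    log = g * (u**2 / sigma_x**4 + v**2 / sigma_y**4
               - 1 / sigma_x**2 - 1 / sigma_y**2)
    return log - log.mean()   # zero-mean, as is common for LoG kernels
```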
no code implementations • 29 Jan 2015 • Kohta Ishikawa, Ikuro Sato, Mitsuru Ambai
Binary hashing is widely used for effective approximate nearest neighbor search.
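(Illustrative note: the paper's own hashing function is not described in this snippet; the sketch below is generic random-hyperplane hashing, only to show the overall pipeline of encoding vectors as binary codes and ranking candidates by Hamming distance. Bit widths and sizes are arbitrary.)

```python
# Minimal sketch of binary hashing for approximate nearest neighbor search
# (generic random-hyperplane hashing, not this paper's method).
import numpy as np

rng = np.random.default_rng(0)

def binary_codes(X: np.ndarray, planes: np.ndarray) -> np.ndarray:
    """Project onto random hyperplanes and keep the sign bits."""
    return (X @ planes.T > 0).astype(np.uint8)

def hamming_search(query_code: np.ndarray, db_codes: np.ndarray, k: int = 5):
    """Return indices of the k database codes closest in Hamming distance."""
    dists = (db_codes != query_code).sum(axis=1)
    return np.argsort(dists)[:k]

# Example: 64-bit codes for a small random database.
dim, n_bits = 128, 64
planes = rng.standard_normal((n_bits, dim))
database = rng.standard_normal((1000, dim))
codes = binary_codes(database, planes)
query = rng.standard_normal((1, dim))
neighbors = hamming_search(binary_codes(query, planes)[0], codes)
```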