1 code implementation • 11 Mar 2024 • Shunsuke Yasuki, Masato Taki
The high performance of large-kernel CNNs in downstream tasks has been attributed to the large effective receptive field (ERF) produced by large kernels, but this view has not been fully tested.
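For a stack of linear convolutions, the gradient of a central output with respect to the input is just the repeated convolution of a delta with the kernel, which gives a cheap stand-in for the ERF. The sketch below is illustrative only (it assumes 1-D uniform kernels, not the paper's setup) and shows how kernel size widens the region of non-negligible gradient.

```python
import numpy as np

def erf_profile(kernel_size, depth, n=201):
    # Gradient of the centre output w.r.t. the input for a stack of
    # linear 1-D convolutions: convolve a delta with the kernel
    # `depth` times (uniform kernel is an assumption for illustration).
    k = np.ones(kernel_size) / kernel_size
    g = np.zeros(n)
    g[n // 2] = 1.0                 # delta at the centre output
    for _ in range(depth):
        g = np.convolve(g, k, mode="same")
    return g

def erf_width(g, thresh=0.01):
    # Count positions where the gradient exceeds 1% of its peak.
    return int((g > g.max() * thresh).sum())

small = erf_profile(kernel_size=3, depth=4)
large = erf_profile(kernel_size=31, depth=4)
print(erf_width(small), erf_width(large))  # large kernels -> much wider ERF
```

With the same depth, the 31-wide kernel spreads the gradient over an order of magnitude more positions than the 3-wide one, which is the intuition behind attributing downstream gains to a large ERF.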
1 code implementation • 25 Apr 2023 • Toshihiro Ota, Masato Taki
In the last few years, the success of Transformers in computer vision has stimulated the discovery of many alternative models that compete with Transformers, such as the MLP-Mixer.
1 code implementation • 7 Mar 2023 • Yuki Tatsunami, Masato Taki
Their computational complexity is quadratic in the number of pixels in the input feature maps, resulting in slow processing, especially for high-resolution images.
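A rough FLOP count makes the quadratic scaling concrete. The formula below is a simplified accounting (only the similarity matrix and the attention-weighted sum, ignoring projections), assumed here for illustration:

```python
def attention_flops(h, w, dim):
    # Self-attention over an h*w feature map forms an (h*w) x (h*w)
    # similarity matrix, so cost grows with the *square* of pixel count.
    n = h * w
    return 2 * n * n * dim  # QK^T plus the attention-weighted value sum

low = attention_flops(14, 14, 64)    # low-resolution feature map
high = attention_flops(56, 56, 64)   # 16x the pixels
print(high // low)                   # -> 256, i.e. 16^2 times the cost
```

Quadrupling each spatial dimension multiplies the pixel count by 16 and the attention cost by 256, which is why high-resolution inputs are the painful case.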
no code implementations • 3 Feb 2023 • Shin-nosuke Ishikawa, Masato Todo, Masato Taki, Yasunobu Uchiyama, Kazunari Matsunaga, Peihsuan Lin, Taiki Ogihara, Masao Yasui
We present an explainable artificial intelligence (XAI) method, "What I Know (WIK)", which provides additional information for verifying the reliability of a deep learning model by showing a training-set instance similar to the input being inferred, and demonstrate it on a remote sensing image classification task.
4 code implementations • 4 May 2022 • Yuki Tatsunami, Masato Taki
Here we propose Sequencer, a novel architecture that is competitive with ViT and provides a new perspective on these issues.
Ranked #20 on Image Classification on ImageNet V2
no code implementations • 11 Apr 2022 • Tadayoshi Matsumori, Masato Taki, Tadashi Kadowaki
Quadratic unconstrained binary optimization (QUBO) solvers can be applied to design an optimal structure to avoid resonance.
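A QUBO instance asks for a binary vector x minimising x^T Q x. The toy problem below is purely illustrative (the Q matrix is invented; the paper's resonance-avoidance objective is far larger and targets dedicated QUBO solvers), but it shows the form such a solver optimises:

```python
import itertools
import numpy as np

# Toy QUBO: minimise x^T Q x over binary x by exhaustive search.
# Q is an arbitrary small example, not taken from the paper.
Q = np.array([[-1.0,  2.0,  0.0],
              [ 0.0, -1.0,  2.0],
              [ 0.0,  0.0, -1.0]])

best_x, best_e = None, float("inf")
for bits in itertools.product([0, 1], repeat=Q.shape[0]):
    x = np.array(bits)
    e = x @ Q @ x            # QUBO energy of this assignment
    if e < best_e:
        best_x, best_e = bits, e
print(best_x, best_e)        # -> (1, 0, 1) with energy -2.0
```

Brute force is only feasible for tiny instances; the point of annealers and other QUBO hardware is to search the exponentially large binary space heuristically.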
no code implementations • 28 Oct 2021 • Teppei Matsui, Masato Taki, Trung Quang Pham, Junichi Chikazoe, Koji Jimura
A promising approach to explaining such black-box systems is counterfactual explanation.
2 code implementations • 9 Aug 2021 • Yuki Tatsunami, Masato Taki
This leaves open the possibility of incorporating a non-convolutional (or non-local) inductive bias into the architecture. We therefore use two simple ideas to incorporate inductive bias into the MLP-Mixer while retaining its ability to capture global correlations.
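The global correlations in question come from MLP-Mixer's token-mixing step, where an MLP acts across the patch axis rather than the channel axis. A minimal sketch (ReLU instead of the paper's GELU, random weights, no layer norm — all simplifications for brevity):

```python
import numpy as np

def token_mixing(x, w1, w2):
    # MLP-Mixer's token-mixing step: an MLP applied across the token
    # (patch) axis, so every patch sees every other patch globally.
    # x: (tokens, channels); w1, w2 act on the token dimension.
    h = np.maximum(0, w1 @ x)   # hidden layer (ReLU stands in for GELU)
    return x + w2 @ h           # residual connection, as in the paper

rng = np.random.default_rng(0)
tokens, channels, hidden = 16, 8, 32
x = rng.standard_normal((tokens, channels))
out = token_mixing(x,
                   rng.standard_normal((hidden, tokens)),
                   rng.standard_normal((tokens, hidden)))
print(out.shape)  # shape is preserved: (16, 8)
```

Because the weights mix along the token axis, the operation is position-dependent and fully global — exactly the non-local behaviour that a convolutional inductive bias would restrict.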
no code implementations • 9 Sep 2017 • Masato Taki
Residual Network (ResNet) is the state-of-the-art architecture that enables successful training of very deep neural networks.
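The core ResNet idea fits in a few lines: each block learns a residual F(x) and outputs x + F(x), so an identity path always exists for gradients. A minimal sketch with made-up shapes and ReLU layers (not a faithful ResNet block, which also uses convolutions and batch norm):

```python
import numpy as np

def residual_block(x, w1, w2):
    # The block outputs x + F(x); if the weights are zero, F(x) = 0
    # and the block reduces to the identity, which is what makes very
    # deep stacks trainable.
    return x + w2 @ np.maximum(0, w1 @ x)

x = np.ones(4)
w_zero = np.zeros((4, 4))
y = residual_block(x, w_zero, w_zero)
print(np.allclose(y, x))  # -> True: zero-weight block is the identity
```

This identity-at-initialisation property is why stacking many such blocks does not degrade the signal the way plain deep stacks do.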