no code implementations • 12 Jul 2023 • Hiroshi Fukui, Taiki Miyagawa, Yusuke Morishita
We propose a conceptually simple and thus fast multi-object tracking (MOT) model that does not require any attached modules, such as the Kalman filter, Hungarian algorithm, transformer blocks, or graph networks.
no code implementations • 9 May 2019 • Masahiro Mitsuhara, Hiroshi Fukui, Yusuke Sakashita, Takanori Ogata, Tsubasa Hirakawa, Takayoshi Yamashita, Hironobu Fujiyoshi
As a result, the fine-tuned network can output an attention map that takes into account human knowledge.
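The fine-tuning idea above can be sketched as an auxiliary loss that pulls the network's attention map toward a human-edited one. This is a minimal illustrative sketch, not the paper's exact objective; the function name, the L2 penalty, and the `weight` hyperparameter are assumptions.

```python
import numpy as np

def attention_finetune_loss(task_loss, att_map, human_map, weight=1.0):
    """Hypothetical combined loss for attention-map fine-tuning.

    task_loss : scalar recognition loss (e.g. cross-entropy).
    att_map   : attention map produced by the network, shape (H, W).
    human_map : manually edited attention map encoding human knowledge.
    weight    : assumed trade-off coefficient between the two terms.
    """
    # L2 penalty pulling the learned attention map toward the human one.
    att_penalty = np.mean((att_map - human_map) ** 2)
    return task_loss + weight * att_penalty

# When the maps already agree, only the task loss remains.
same = np.ones((4, 4))
print(attention_finetune_loss(0.5, same, same))        # -> 0.5
# A mismatched map adds a positive penalty.
print(attention_finetune_loss(0.5, same, np.zeros((4, 4))))  # -> 1.5
```

Minimizing this combined loss is what lets the fine-tuned network output attention maps that reflect the human edits.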
3 code implementations • CVPR 2019 • Hiroshi Fukui, Tsubasa Hirakawa, Takayoshi Yamashita, Hironobu Fujiyoshi
ABN is applicable to several image recognition tasks by introducing a branch for the attention mechanism, and it is trainable for visual explanation and image recognition in an end-to-end manner.
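The branch structure can be sketched as follows: an attention branch produces a spatial attention map from intermediate features, and the perception branch re-weights those features with it before classification. This is a minimal NumPy sketch of the mechanism only; the residual re-weighting `f' = f * (1 + M)` follows the common ABN formulation, while the tiny 1x1-convolution "branches" and all shapes here are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

def conv1x1(x, w):
    # x: (C_in, H, W), w: (C_out, C_in) -> (C_out, H, W)
    return np.einsum('oc,chw->ohw', w, x)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def abn_forward(features, w_att, w_cls):
    # Attention branch: collapse channels into one sigmoid attention map.
    att_map = sigmoid(conv1x1(features, w_att))   # (1, H, W), values in (0, 1)
    # Residual attention: f' = f * (1 + M) preserves features where M is small.
    refined = features * (1.0 + att_map)          # (C, H, W)
    # Perception branch: global average pooling, then a linear classifier.
    pooled = refined.mean(axis=(1, 2))            # (C,)
    logits = w_cls @ pooled                       # (num_classes,)
    return logits, att_map

rng = np.random.default_rng(0)
features = rng.standard_normal((8, 4, 4))   # hypothetical backbone features
w_att = rng.standard_normal((1, 8))         # attention-branch weights
w_cls = rng.standard_normal((10, 8))        # perception-branch classifier
logits, att_map = abn_forward(features, w_att, w_cls)
print(logits.shape, att_map.shape)          # (10,) (1, 4, 4)
```

Because both branches sit in one differentiable forward pass, the same gradient updates the recognition weights and the attention map, which is what "end-to-end" means here.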
no code implementations • 13 Oct 2017 • Kenji Enomoto, Ken Sakurada, Weimin Wang, Hiroshi Fukui, Masashi Matsuoka, Ryosuke Nakamura, Nobuo Kawaguchi
The networks are trained to output images close to the ground truth, taking as input images synthesized by overlaying clouds on the ground truth.
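The training pairs described above can be sketched as a simple compositing step: a synthetic cloud layer is alpha-blended over a cloud-free ground-truth patch, yielding a (cloudy input, clean target) pair. The Gaussian-blob cloud mask and the white cloud color are illustrative assumptions, not the paper's exact synthesis procedure.

```python
import numpy as np

def add_synthetic_clouds(image, cloud_alpha, cloud_color=1.0):
    # Alpha-blend a flat cloud layer over the image.
    # image, cloud_alpha: arrays in [0, 1] with the same spatial shape.
    return image * (1.0 - cloud_alpha) + cloud_color * cloud_alpha

rng = np.random.default_rng(0)
clean = rng.uniform(size=(4, 4))                              # ground-truth patch
yy, xx = np.mgrid[0:4, 0:4]
alpha = np.exp(-((yy - 1.5) ** 2 + (xx - 1.5) ** 2) / 2.0)    # assumed cloud blob
cloudy = add_synthetic_clouds(clean, alpha)                   # network input
# (cloudy, clean) is one synthetic training pair.
```

The network then learns the inverse mapping, from `cloudy` back to `clean`.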
Ranked #8 on Cloud Removal on SEN12MS-CR