2 code implementations • 20 Jul 2020 • Wonjik Kim, Asako Kanezaki, Masayuki Tanaka
This study investigates the use of convolutional neural networks (CNNs) for unsupervised image segmentation.
4 code implementations • 16 Sep 2020 • Ryo Yonetani, Tatsunori Taniai, Mohammadamin Barekatain, Mai Nishimura, Asako Kanezaki
We present Neural A*, a novel data-driven search method for path planning problems.
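Neural A* builds on the classical A* algorithm, which it augments with a learned cost map. For reference, a minimal plain A* on a 4-connected grid can be sketched as follows — this is the textbook baseline, not the paper's learned, differentiable variant:

```python
from heapq import heappush, heappop
from itertools import count

def astar(grid, start, goal):
    """Classical A* on a 4-connected grid; cells with value 1 are obstacles.

    Returns the list of cells from start to goal, or None if unreachable.
    """
    def h(p):  # Manhattan distance: admissible heuristic on a 4-connected grid
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    rows, cols = len(grid), len(grid[0])
    tie = count()  # tie-breaker so heap never compares node tuples with None
    open_set = [(h(start), next(tie), 0, start, None)]  # (f, tie, g, node, parent)
    came_from, g_cost = {}, {start: 0}
    while open_set:
        _, _, g, node, parent = heappop(open_set)
        if node in came_from:  # already expanded with a lower or equal cost
            continue
        came_from[node] = parent
        if node == goal:  # reconstruct the path by walking parents back
            path = []
            while node is not None:
                path.append(node)
                node = came_from[node]
            return path[::-1]
        r, c = node
        for nbr in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nbr
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < g_cost.get(nbr, float("inf")):
                    g_cost[nbr] = ng
                    heappush(open_set, (ng + h(nbr), next(tie), ng, nbr, node))
    return None
```

Neural A*'s contribution, roughly, is to replace the uniform step cost above with costs predicted by a network trained end-to-end through the search.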
1 code implementation • CVPR 2018 • Asako Kanezaki, Yasuyuki Matsushita, Yoshifumi Nishida
We propose a Convolutional Neural Network (CNN)-based model "RotationNet," which takes multi-view images of an object as input and jointly estimates its pose and object category.
1 code implementation • 24 Jan 2022 • Keisuke Okumura, Ryo Yonetani, Mai Nishimura, Asako Kanezaki
Multi-agent path planning (MAPP) in continuous spaces is a challenging problem with significant practical importance.
1 code implementation • 28 Feb 2019 • Nevrez Imamoglu, Guanqun Ding, Yuming Fang, Asako Kanezaki, Toru Kouyama, Ryosuke Nakamura
Various saliency detection algorithms for color images have been proposed to mimic the eye-fixation or attentive object-detection responses of human observers to the same scenes.
1 code implementation • 4 Nov 2023 • Hao Zheng, Runqi Wang, Jianzhuang Liu, Asako Kanezaki
Conventional few-shot classification aims to learn a model on a large labeled base dataset and rapidly adapt it to a target dataset drawn from the same distribution as the base dataset.
1 code implementation • 9 Sep 2021 • Hana Hoshino, Kei Ota, Asako Kanezaki, Rio Yokota
Inverse Reinforcement Learning (IRL) is attractive in scenarios where reward engineering can be tedious.
1 code implementation • 2 Aug 2023 • Nanami Kotani, Asako Kanezaki
A pointing gesture is one of the most intuitive instruction methods in robot navigation.
2 code implementations • 20 Jul 2017 • Hirokatsu Kataoka, Soma Shirakabe, Yun He, Shunya Ueta, Teppei Suzuki, Kaori Abe, Asako Kanezaki, Shin'ichiro Morita, Toshiyuki Yabe, Yoshihiro Kanehara, Hiroya Yatsuyanagi, Shinya Maruyama, Ryosuke Takasawa, Masataka Fuchida, Yudai Miyashita, Kazushige Okayasu, Yuta Matsuzaki
The paper presents futuristic challenges discussed in the cvpaper.challenge.
no code implementations • CVPR 2016 • Katsunori Ohnishi, Atsushi Kanehira, Asako Kanezaki, Tatsuya Harada
We present a novel dataset and a novel algorithm for recognizing activities of daily living (ADL) from a first-person wearable camera.
no code implementations • 4 Jul 2018 • Nevrez Imamoglu, Wataru Shimoda, Chi Zhang, Yuming Fang, Asako Kanezaki, Keiji Yanai, Yoshifumi Nishida
Bottom-up and top-down visual cues are two types of information that help visual saliency models.
no code implementations • 3 Mar 2020 • Kei Ota, Yoko SASAKI, Devesh K. Jha, Yusuke Yoshiyasu, Asako Kanezaki
Specifically, we train a deep convolutional network that can predict collision-free paths based on a map of the environment; this is then used by a reinforcement learning algorithm to learn to closely follow the path.
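The second stage of such a pipeline can be sketched with a shaped reward that penalizes the agent's distance to the nearest waypoint on the predicted path. The waypoints below are hardcoded stand-ins for the CNN's output, and the reward form is an illustrative assumption, not the paper's exact formulation:

```python
import math

def path_following_reward(agent_pos, waypoints, goal, goal_bonus=10.0):
    """Illustrative shaped reward for an RL path-follower:
    negative distance to the closest waypoint, plus a bonus at the goal."""
    d_path = min(math.dist(agent_pos, w) for w in waypoints)
    reward = -d_path
    if math.dist(agent_pos, goal) < 0.1:  # within goal tolerance
        reward += goal_bonus
    return reward

# Example: a straight-line path from (0, 0) to (2, 0),
# with the agent 0.5 units off the path
waypoints = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
print(path_following_reward((1.0, 0.5), waypoints, goal=(2.0, 0.0)))  # -0.5
```

Because the path is precomputed by the network, the RL agent only needs to learn local tracking rather than global planning, which is the division of labor the entry describes.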
no code implementations • 31 Oct 2020 • Kei Ota, Devesh K. Jha, Tadashi Onishi, Asako Kanezaki, Yusuke Yoshiyasu, Yoko SASAKI, Toshisada Mariyama, Daniel Nikovski
The main novelty of the proposed approach is that it allows a robot to learn an end-to-end policy which can adapt to changes in the environment during execution.
no code implementations • 16 Feb 2021 • Kei Ota, Devesh K. Jha, Asako Kanezaki
Previous work has shown that this is mostly due to instability during training of deep RL agents when using larger networks.
no code implementations • 24 Mar 2022 • Rui Fukushima, Kei Ota, Asako Kanezaki, Yoko SASAKI, Yusuke Yoshiyasu
This paper presents a reinforcement learning method for object goal navigation (ObjNav) where an agent navigates in 3D indoor environments to reach a target object based on long-term observations of objects and scenes.
no code implementations • 22 Oct 2022 • Kei Ota, Hsiao-Yu Tung, Kevin A. Smith, Anoop Cherian, Tim K. Marks, Alan Sullivan, Asako Kanezaki, Joshua B. Tenenbaum
The world is filled with articulated objects whose use is difficult to determine from vision alone; e.g., a door might open inwards or outwards.
no code implementations • 1 Aug 2023 • Haru Kondoh, Asako Kanezaki
However, no generalized navigation task has been proposed that combines these two types of tasks and uses both visual and auditory information when multiple sound sources serve as goals.
no code implementations • 25 Sep 2023 • Kei Ota, Devesh K. Jha, Krishna Murthy Jatavallabhula, Asako Kanezaki, Joshua B. Tenenbaum
In particular, we estimate the contact patch between a grasped object and its environment using force and tactile observations, and use it to assess the stability of the object during contact formation.
no code implementations • 21 Mar 2024 • Leyuan Sun, Asako Kanezaki, Guillaume Caron, Yusuke Yoshiyasu
In this study, we propose a data-driven, modular-based approach, trained on a dataset that incorporates common-sense knowledge of object-to-room relationships extracted from a large language model.