Search Results for author: Asako Kanezaki

Found 19 papers, 9 papers with code

Unsupervised Learning of Image Segmentation Based on Differentiable Feature Clustering

2 code implementations • 20 Jul 2020 • Wonjik Kim, Asako Kanezaki, Masayuki Tanaka

This study investigates the use of convolutional neural networks (CNNs) for unsupervised image segmentation.

Tasks: Clustering, Image Segmentation, +3
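The differentiable feature clustering in this paper assigns each pixel the label of its strongest CNN feature channel and then trains against those self-generated labels. A minimal sketch of just the assignment step (the CNN, the loss, and the spatial-continuity terms are omitted; the shapes here are illustrative) might look like:

```python
def assign_cluster_labels(features):
    """Assign each pixel the index of its maximal feature channel.

    features: nested list of shape (H, W, C) -- per-pixel responses.
    Returns an (H, W) integer label map.
    """
    return [[max(range(len(px)), key=px.__getitem__) for px in row]
            for row in features]

# toy example: a 2x2 "image" with 3 feature channels per pixel
feats = [[[0.1, 0.9, 0.0], [0.8, 0.1, 0.1]],
         [[0.2, 0.3, 0.5], [0.7, 0.2, 0.1]]]
labels = assign_cluster_labels(feats)
print(labels)  # [[1, 0], [2, 0]]
```

In the full method, this argmax labeling and the feature extractor are updated alternately, so the cluster count emerges from the data rather than being fixed in advance.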

Path Planning using Neural A* Search

4 code implementations • 16 Sep 2020 • Ryo Yonetani, Tatsunori Taniai, Mohammadamin Barekatain, Mai Nishimura, Asako Kanezaki

We present Neural A*, a novel data-driven search method for path planning problems.
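Neural A* builds on classic A* search by driving it with a learned cost map. As background, a plain A* on a 4-connected binary occupancy grid (the learned component of Neural A* is omitted; this is a generic sketch, not the paper's implementation) could look like:

```python
import heapq

def astar(grid, start, goal):
    """Plain A* on a 4-connected grid (0 = free, 1 = obstacle).
    Returns the shortest path as a list of (row, col) cells, or None."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    open_set = [(h(start), 0, start, [start])]  # (f, g, node, path)
    seen = set()
    while open_set:
        f, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < len(grid) and 0 <= nc < len(grid[0])
                    and grid[nr][nc] == 0 and (nr, nc) not in seen):
                heapq.heappush(open_set, (g + 1 + h((nr, nc)), g + 1,
                                          (nr, nc), path + [(nr, nc)]))
    return None

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = astar(grid, (0, 0), (2, 0))
print(len(path))  # 7 cells: around the obstacle wall
```

Neural A* replaces the uniform unit step cost with per-cell costs predicted by a CNN, making the number of node expansions differentiable and trainable end to end.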

RotationNet: Joint Object Categorization and Pose Estimation Using Multiviews from Unsupervised Viewpoints

1 code implementation • CVPR 2018 • Asako Kanezaki, Yasuyuki Matsushita, Yoshifumi Nishida

We propose a Convolutional Neural Network (CNN)-based model "RotationNet," which takes multi-view images of an object as input and jointly estimates its pose and object category.

Tasks: 3D Object Classification, Object, +2
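As a general illustration of multi-view fusion (a simplified stand-in, not RotationNet's actual viewpoint-alignment procedure, which additionally treats the viewpoint labels as latent variables), per-view class probabilities can be combined by summing log-probabilities across views:

```python
import math

def aggregate_multiview_scores(view_scores):
    """Fuse per-view class probabilities by summing log-probabilities
    across views, then picking the best class index."""
    n_classes = len(view_scores[0])
    totals = [sum(math.log(max(view[c], 1e-12)) for view in view_scores)
              for c in range(n_classes)]
    return totals.index(max(totals))

# toy example: 3 views of one object, 2 candidate classes
scores = [[0.6, 0.4],
          [0.7, 0.3],
          [0.5, 0.5]]
print(aggregate_multiview_scores(scores))  # 0
```

Summing log-probabilities treats the views as independent observations; RotationNet's contribution is jointly inferring which viewpoint produced each image rather than assuming a fixed view order.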

CTRMs: Learning to Construct Cooperative Timed Roadmaps for Multi-agent Path Planning in Continuous Spaces

1 code implementation • 24 Jan 2022 • Keisuke Okumura, Ryo Yonetani, Mai Nishimura, Asako Kanezaki

Multi-agent path planning (MAPP) in continuous spaces is a challenging problem with significant practical importance.

Salient object detection on hyperspectral images using features learned from unsupervised segmentation task

1 code implementation • 28 Feb 2019 • Nevrez Imamoglu, Guanqun Ding, Yuming Fang, Asako Kanezaki, Toru Kouyama, Ryosuke Nakamura

Various saliency detection algorithms for color images have been proposed to mimic the eye-fixation or attentive object-detection responses of human observers to the same scenes.

Tasks: Clustering, Image Segmentation, +7

Cross-Level Distillation and Feature Denoising for Cross-Domain Few-Shot Classification

1 code implementation • 4 Nov 2023 • Hao Zheng, Runqi Wang, Jianzhuang Liu, Asako Kanezaki

Conventional few-shot classification aims at learning a model on a large labeled base dataset and rapidly adapting it to a target dataset drawn from the same distribution as the base dataset.

Tasks: Classification, Cross-Domain Few-Shot, +2

Recognizing Activities of Daily Living with a Wrist-mounted Camera

no code implementations • CVPR 2016 • Katsunori Ohnishi, Atsushi Kanehira, Asako Kanezaki, Tatsuya Harada

We present a novel dataset and a novel algorithm for recognizing activities of daily living (ADL) from a first-person wearable camera.

Tasks: Object Detection

Efficient Exploration in Constrained Environments with Goal-Oriented Reference Path

no code implementations • 3 Mar 2020 • Kei Ota, Yoko Sasaki, Devesh K. Jha, Yusuke Yoshiyasu, Asako Kanezaki

Specifically, we train a deep convolutional network that can predict collision-free paths based on a map of the environment; this is then used by a reinforcement learning algorithm to learn to closely follow the path.

Tasks: Efficient Exploration, Navigate, +2

Deep Reactive Planning in Dynamic Environments

no code implementations • 31 Oct 2020 • Kei Ota, Devesh K. Jha, Tadashi Onishi, Asako Kanezaki, Yusuke Yoshiyasu, Yoko Sasaki, Toshisada Mariyama, Daniel Nikovski

The main novelty of the proposed approach is that it allows a robot to learn an end-to-end policy which can adapt to changes in the environment during execution.

Training Larger Networks for Deep Reinforcement Learning

no code implementations • 16 Feb 2021 • Kei Ota, Devesh K. Jha, Asako Kanezaki

Previous work has shown that this is mostly due to instability during training of deep RL agents when using larger networks.

Tasks: Reinforcement Learning (RL), +1

Object Memory Transformer for Object Goal Navigation

no code implementations • 24 Mar 2022 • Rui Fukushima, Kei Ota, Asako Kanezaki, Yoko Sasaki, Yusuke Yoshiyasu

This paper presents a reinforcement learning method for object goal navigation (ObjNav) where an agent navigates in 3D indoor environments to reach a target object based on long-term observations of objects and scenes.

Tasks: Navigate, Object

H-SAUR: Hypothesize, Simulate, Act, Update, and Repeat for Understanding Object Articulations from Interactions

no code implementations • 22 Oct 2022 • Kei Ota, Hsiao-Yu Tung, Kevin A. Smith, Anoop Cherian, Tim K. Marks, Alan Sullivan, Asako Kanezaki, Joshua B. Tenenbaum

The world is filled with articulated objects whose use is difficult to determine from vision alone, e.g., a door might open inwards or outwards.

Multi-goal Audio-visual Navigation using Sound Direction Map

no code implementations • 1 Aug 2023 • Haru Kondoh, Asako Kanezaki

However, no generalized navigation task has been proposed that combines these two task types and uses both visual and auditory information in situations where multiple sound sources serve as goals.

Tasks: Navigate, Visual Navigation

Tactile Estimation of Extrinsic Contact Patch for Stable Placement

no code implementations • 25 Sep 2023 • Kei Ota, Devesh K. Jha, Krishna Murthy Jatavallabhula, Asako Kanezaki, Joshua B. Tenenbaum

In particular, we estimate the contact patch between a grasped object and its environment from force and tactile observations in order to assess the stability of the object during contact formation.

Tasks: Object

Leveraging Large Language Model-based Room-Object Relationships Knowledge for Enhancing Multimodal-Input Object Goal Navigation

no code implementations • 21 Mar 2024 • Leyuan Sun, Asako Kanezaki, Guillaume Caron, Yusuke Yoshiyasu

In this study, we propose a data-driven, modular-based approach, trained on a dataset that incorporates common-sense knowledge of object-to-room relationships extracted from a large language model.

Tasks: Common Sense Reasoning, Language Modelling, +3
