1 code implementation • 18 Apr 2023 • Kazumi Kasaura, Shuwa Miura, Tadashi Kozuno, Ryo Yonetani, Kenta Hoshino, Yohei Hosoe
This study presents a benchmark for evaluating action-constrained reinforcement learning (RL) algorithms.
no code implementations • 2 Mar 2023 • Masafumi Endo, Tatsunori Taniai, Ryo Yonetani, Genya Ishigami
Machine learning (ML) plays a crucial role in assessing traversability for autonomous rover operations on deformable terrains but suffers from inevitable prediction errors.
1 code implementation • 24 Jan 2022 • Keisuke Okumura, Ryo Yonetani, Mai Nishimura, Asako Kanezaki
Multi-agent path planning (MAPP) in continuous spaces is a challenging problem with significant practical importance.
2 code implementations • 8 Dec 2021 • Toshinori Kitamura, Ryo Yonetani
We present ShinRL, an open-source library specialized for the evaluation of reinforcement learning (RL) algorithms from both theoretical and practical perspectives.
4 code implementations • 16 Sep 2020 • Ryo Yonetani, Tatsunori Taniai, Mohammadamin Barekatain, Mai Nishimura, Asako Kanezaki
We present Neural A*, a novel data-driven search method for path planning problems.
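The abstract above describes a data-driven search method; the core idea pairs classical A* search with a cost map over the grid. A minimal illustrative sketch of A* on a per-cell cost map follows — in Neural A* such a cost map would be predicted by an encoder network from the problem instance, whereas here it is simply hand-written (the grid, start, and goal below are hypothetical toy inputs, not from the paper):

```python
import heapq

def a_star(cost_map, start, goal):
    """A* search on a grid where each cell has a traversal cost >= 1.
    In Neural A*, the cost map would come from a learned encoder;
    here it is given directly for illustration."""
    H, W = len(cost_map), len(cost_map[0])

    def h(n):  # Manhattan distance: admissible when every step costs >= 1
        return abs(n[0] - goal[0]) + abs(n[1] - goal[1])

    open_heap = [(h(start), 0.0, start)]  # (f, g, node)
    came_from, g = {}, {start: 0.0}
    while open_heap:
        _, gc, node = heapq.heappop(open_heap)
        if node == goal:  # reconstruct the path by walking parents back
            path = [node]
            while node in came_from:
                node = came_from[node]
                path.append(node)
            return path[::-1]
        if gc > g.get(node, float("inf")):  # stale heap entry
            continue
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < H and 0 <= nc < W:
                ng = gc + cost_map[nr][nc]
                if ng < g.get((nr, nc), float("inf")):
                    g[(nr, nc)] = ng
                    came_from[(nr, nc)] = node
                    heapq.heappush(open_heap, (ng + h((nr, nc)), ng, (nr, nc)))
    return None  # goal unreachable

# Toy cost map: the middle row is expensive, so the search detours around it.
cost_map = [[1, 1, 1],
            [9, 9, 1],
            [1, 1, 1]]
path = a_star(cost_map, (0, 0), (2, 0))
```

The search trades path length for cost: it takes the seven-step detour through the cheap cells rather than the three-step route through the expensive ones.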
no code implementations • 18 Aug 2020 • Jiaxin Ma, Ryo Yonetani, Zahid Iqbal
This paper addresses the problem of decentralized learning, in which a group of clients share local models pre-trained on their own data resources in order to build a high-performance global model.

1 code implementation • 20 Mar 2020 • Mai Nishimura, Ryo Yonetani
This work presents a deep reinforcement learning framework for interactive navigation in a crowded place.
no code implementations • 22 Nov 2019 • Hiroaki Minoura, Ryo Yonetani, Mai Nishimura, Yoshitaka Ushiku
To address this task, we have developed the patch-based density forecasting network (PDFN), which forecasts a sequence of crowd density maps describing how crowded each location is in each video frame.
2 code implementations • 28 Sep 2019 • Mohammadamin Barekatain, Ryo Yonetani, Masashi Hamaya
Transfer reinforcement learning (RL) aims at improving the learning efficiency of an agent by exploiting knowledge from other source agents trained on relevant tasks.
no code implementations • 23 May 2019 • Ryo Yonetani, Tomohiro Takahashi, Atsushi Hashimoto, Yoshitaka Ushiku
This work addresses a new problem: learning generative adversarial networks (GANs) from multiple data collections that are each i) owned separately by different clients and ii) drawn from non-identical distributions comprising different classes.
no code implementations • 17 May 2019 • Naoya Yoshida, Takayuki Nishio, Masahiro Morikura, Koji Yamamoto, Ryo Yonetani
Therefore, to mitigate the degradation induced by non-IID data, we assume that a limited number of clients (e.g., less than 1%) allow their data to be uploaded to a server, and we propose a hybrid learning mechanism referred to as Hybrid-FL, in which the server updates a model using the data gathered from those clients and aggregates it with the models trained locally by the clients.
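The aggregation step described above can be sketched in a few lines. This is a hedged illustration, not the paper's exact procedure: it assumes models are flat parameter vectors and combines the server's model (trained on the small amount of uploaded data) with the clients' models by sample-count-weighted averaging, in the spirit of federated averaging; the function name and signature are hypothetical.

```python
def hybrid_fl_aggregate(client_models, client_sizes, server_model, server_size):
    """Sketch of Hybrid-FL-style aggregation (assumption, not the paper's
    exact algorithm): average the server's model, trained on the small
    uploaded dataset, together with the clients' locally trained models,
    each weighted by the number of samples it was trained on.
    Models are represented as flat lists of floats for simplicity."""
    models = client_models + [server_model]
    sizes = client_sizes + [server_size]
    total = sum(sizes)
    n_params = len(server_model)
    return [
        sum(m[i] * s for m, s in zip(models, sizes)) / total
        for i in range(n_params)
    ]

# One client model and one server model, equally weighted.
merged = hybrid_fl_aggregate([[1.0, 2.0]], [1], [3.0, 4.0], 1)
```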
1 code implementation • 4 Mar 2019 • Navyata Sanghvi, Ryo Yonetani, Kris Kitani
Toward enabling next-generation robots capable of socially intelligent interaction with humans, we present a computational model of interactions in a social environment of multiple agents and multiple groups.
1 code implementation • 23 Apr 2018 • Takayuki Nishio, Ryo Yonetani
Specifically, FedCS solves a client selection problem with resource constraints, which allows the server to aggregate as many client updates as possible and to accelerate performance improvement in ML models.
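The selection problem described above can be illustrated with a greedy heuristic. This is a hedged sketch of the idea only — FedCS's actual scheduling model accounts for wireless bandwidth and computation separately — and the function name and inputs are hypothetical: each client has an estimated time to compute and upload its update, and the server admits the fastest clients until the round deadline is filled.

```python
def select_clients(update_times, deadline):
    """Greedy sketch in the spirit of FedCS's resource-constrained client
    selection (assumption, not the paper's exact formulation): admit the
    clients with the shortest estimated update times so that as many
    updates as possible complete before the round deadline."""
    selected, elapsed = [], 0.0
    # Consider clients in order of increasing estimated time.
    for cid, t in sorted(update_times.items(), key=lambda kv: kv[1]):
        if elapsed + t <= deadline:
            selected.append(cid)
            elapsed += t
    return selected

# Three clients; only the two fastest fit within the 4-second deadline.
chosen = select_clients({"a": 1.0, "b": 5.0, "c": 2.0}, 4.0)
```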
1 code implementation • CVPR 2018 • Takuma Yagi, Karttikeya Mangalam, Ryo Yonetani, Yoichi Sato
We present a new task: predicting the future locations of people observed in first-person videos.
no code implementations • ICCV 2017 • Ryo Yonetani, Vishnu Naresh Boddeti, Kris M. Kitani, Yoichi Sato
We propose a privacy-preserving framework for learning visual classifiers by leveraging distributed private image data.
no code implementations • 15 Jun 2016 • Ryo Yonetani, Kris M. Kitani, Yoichi Sato
We envision a future time when wearable cameras are worn by the masses and recording first-person point-of-view videos of everyday life.
no code implementations • CVPR 2016 • Ryo Yonetani, Kris M. Kitani, Yoichi Sato
We aim to understand the dynamics of social interactions between two people by recognizing their actions and reactions using a head-mounted camera.
no code implementations • CVPR 2015 • Ryo Yonetani, Kris M. Kitani, Yoichi Sato
We incorporate this feature into our proposed approach that computes the motion correlation over supervoxel hierarchies to localize target instances in observer videos.