Search Results for author: Jun Takamatsu

Found 6 papers, 0 papers with code

GPT-4V(ision) for Robotics: Multimodal Task Planning from Human Demonstration

no code implementations • 20 Nov 2023 • Naoki Wake, Atsushi Kanehira, Kazuhiro Sasabuchi, Jun Takamatsu, Katsushi Ikeuchi

The computation starts by analyzing the videos with GPT-4V to convert environmental and action details into text, which is then passed to a GPT-4-empowered task planner.

Language Modelling, Object, +1
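To make the two-stage idea concrete, here is a minimal Python sketch of such a pipeline using the OpenAI SDK: a vision model first turns a demonstration frame into a textual scene description, and a text-only model then converts that description into a task plan. The model names, prompts, and single-frame simplification are assumptions for illustration, not the paper's actual implementation.

```python
# Hedged sketch of a vision-to-text, then text-to-plan pipeline.
# Model names and prompts are assumptions, not the paper's system.
import base64
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def describe_frame(image_path: str) -> str:
    """Stage 1: convert environmental and action details into text."""
    with open(image_path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode()
    resp = client.chat.completions.create(
        model="gpt-4o",  # stand-in for GPT-4V(ision)
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Describe the objects and the human action in this frame."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
            ],
        }],
    )
    return resp.choices[0].message.content

def plan_tasks(scene_description: str) -> str:
    """Stage 2: a text-only model converts the description into robot steps."""
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "Convert the scene description into a numbered robot task plan."},
            {"role": "user", "content": scene_description},
        ],
    )
    return resp.choices[0].message.content

print(plan_tasks(describe_frame("demo_frame.jpg")))  # hypothetical input file
```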

Bias in Emotion Recognition with ChatGPT

no code implementations • 18 Oct 2023 • Naoki Wake, Atsushi Kanehira, Kazuhiro Sasabuchi, Jun Takamatsu, Katsushi Ikeuchi

This technical report explores the ability of ChatGPT to recognize emotions from text, a capability that can underpin applications such as interactive chatbots, data annotation, and mental health analysis.

Emotion Recognition, Sentiment Analysis
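As a rough illustration of the setup being probed, below is a minimal sketch of prompting a chat model for a single emotion label. The label set, prompt wording, and model name are assumptions and likely differ from the report's actual protocol.

```python
# Hedged sketch of text-based emotion recognition with a chat model.
# The emotion label set and prompt are assumptions for illustration.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

EMOTIONS = ["joy", "sadness", "anger", "fear", "surprise", "neutral"]

def recognize_emotion(text: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4",
        temperature=0,  # deterministic output makes bias easier to measure
        messages=[
            {"role": "system",
             "content": "Classify the emotion of the user's text. "
                        f"Answer with exactly one of: {', '.join(EMOTIONS)}."},
            {"role": "user", "content": text},
        ],
    )
    return resp.choices[0].message.content.strip().lower()

print(recognize_emotion("I finally passed the exam!"))  # e.g. "joy"
```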

Bounding Box Annotation with Visible Status

no code implementations • 11 Apr 2023 • Takuya Kiyokawa, Naoki Shirakura, Hiroki Katayama, Keita Tomochika, Jun Takamatsu

However, because the previous method relied on moving the object within the capture range of a fixed-point camera, the collected image dataset was limited in its capture viewpoints.

Object

Robotic Waste Sorter with Agile Manipulation and Quickly Trainable Detector

no code implementations • 2 Apr 2021 • Takuya Kiyokawa, Hiroki Katayama, Yuya Tatsuta, Jun Takamatsu, Tsukasa Ogasawara

Via experiments in an indoor waste-sorting workplace, we confirm that the proposed methods enable quick collection of training image sets for three classes of waste items (i.e., aluminum cans, glass bottles, and plastic bottles) and yield detection performance higher than that of methods that do not consider the differences.

Learning-from-Observation Framework: One-Shot Robot Teaching for Grasp-Manipulation-Release Household Operations

no code implementations • 4 Aug 2020 • Naoki Wake, Riku Arakawa, Iori Yanokura, Takuya Kiyokawa, Kazuhiro Sasabuchi, Jun Takamatsu, Katsushi Ikeuchi

In the context of one-shot robot teaching, the contributions of the paper are to propose a framework that 1) covers various tasks in the grasp-manipulation-release class of household operations and 2) mimics human postures during the operations.

Robotics, Human-Computer Interaction

Multi-View Inpainting for RGB-D Sequence

no code implementations • 22 Nov 2018 • Feiran Li, Gustavo Alfonso Garcia Ricardez, Jun Takamatsu, Tsukasa Ogasawara

For the remaining holes, we employ an exemplar-based multi-view inpainting method to complete the color image and coherently use the result as guidance to complete the corresponding depth.

3D Reconstruction
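A toy sketch of this color-then-depth idea follows, with OpenCV's single-image Telea inpainting standing in for the paper's exemplar-based multi-view method and a simple nearest-similar-color rule standing in for the coherent depth guidance; array layouts and parameters are assumptions.

```python
# Toy sketch: inpaint the color image first, then fill depth holes guided
# by the inpainted colors. cv2.inpaint (Telea) is a single-image stand-in
# for the paper's exemplar-based multi-view method.
import cv2
import numpy as np

def fill_rgbd_holes(color: np.ndarray, depth: np.ndarray,
                    hole_mask: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    # Assumes color is uint8 HxWx3, depth is float HxW, hole_mask is
    # uint8 HxW with nonzero values marking holes.
    # Step 1: complete the color image inside the holes.
    color_filled = cv2.inpaint(color, hole_mask, inpaintRadius=5,
                               flags=cv2.INPAINT_TELEA)

    # Step 2: complete depth, guided by the filled colors. Each hole pixel
    # borrows the depth of the valid pixel whose color (and position)
    # matches best -- a crude proxy for coherent guidance.
    depth_filled = depth.astype(np.float32).copy()
    ys, xs = np.where(hole_mask > 0)
    vy, vx = np.where(hole_mask == 0)
    valid_colors = color_filled[vy, vx].astype(np.float32)
    for y, x in zip(ys, xs):
        score = (np.sum((valid_colors - color_filled[y, x]) ** 2, axis=1)
                 + 0.1 * ((vy - y) ** 2 + (vx - x) ** 2))
        j = np.argmin(score)
        depth_filled[y, x] = depth[vy[j], vx[j]]
    return color_filled, depth_filled
```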
