no code implementations • 26 Sep 2023 • Namiko Saito, Mayu Hiramoto, Ayuna Kubo, Kanata Suzuki, Hiroshi Ito, Shigeki SUGANO, Tetsuya OGATA
We tackled the task of cooking scrambled eggs from real ingredients, in which the robot must perceive the state of the egg and adjust its stirring movements in real time while the egg is heated and its state changes continuously.
no code implementations • 30 Aug 2023 • Kazuki Hori, Kanata Suzuki, Tetsuya OGATA
The application of Large Language Models (LLMs) to robot action planning has been actively studied.
no code implementations • 8 Mar 2022 • Minori Toyoda, Kanata Suzuki, Yoshihiko Hayashi, Tetsuya OGATA
We experimentally evaluated our method using a paired dataset consisting of motion-captured actions and descriptions.
1 code implementation • 30 Oct 2021 • Akira Sakai, Taro Sunagawa, Spandan Madan, Kanata Suzuki, Takashi Katoh, Hiromichi Kobashi, Hanspeter Pfister, Pawan Sinha, Xavier Boix, Tomotake Sasaki
While humans have a remarkable capability of recognizing objects in out-of-distribution (OoD) orientations and illuminations, Deep Neural Networks (DNNs) severely suffer in this case, even when large amounts of training examples are available.
no code implementations • 17 Apr 2021 • Minori Toyoda, Kanata Suzuki, Hiroki Mori, Yoshihiko Hayashi, Tetsuya OGATA
These embeddings allow the robot to properly generate actions from unseen words that are not paired with actions in a dataset.
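One way to picture how shared embeddings let a robot handle unpaired words is nearest-neighbor lookup in the embedding space. The sketch below is purely illustrative: the vectors, vocabulary, and action names are invented stand-ins, not the authors' model or data.

```python
import numpy as np

# Toy word embeddings (random-looking stand-ins, not a trained model).
# "grab" is deliberately close to "grasp" in this space.
emb = {
    "grasp": np.array([1.0, 0.1, 0.0]),
    "grab":  np.array([0.9, 0.2, 0.1]),  # never paired with an action
    "push":  np.array([0.0, 1.0, 0.2]),
}

# Only a subset of words was paired with actions during training.
actions = {"grasp": "close_gripper", "push": "extend_arm"}

def act_for(word):
    """Generate an action for a possibly unseen word by finding the
    closest action-paired word under cosine similarity."""
    v = emb[word]
    best = max(actions, key=lambda w: emb[w] @ v /
               (np.linalg.norm(emb[w]) * np.linalg.norm(v)))
    return actions[best]

# "grab" was not in the action-paired vocabulary, but its embedding
# neighbors "grasp", so the grasping action is selected.
print(act_for("grab"))
```

The actual paper learns the mapping rather than using a hard nearest-neighbor rule; this only conveys why proximity in a shared embedding space generalizes to unseen words.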
1 code implementation • 17 Mar 2021 • Kanata Suzuki, Momomi Kanamura, Yuki Suga, Hiroki Mori, Tetsuya OGATA
However, manually describing appropriate robot motions for every object state in advance is difficult.
no code implementations • 18 Jan 2021 • Kanata Suzuki, Tetsuya OGATA
The learning instability caused by these unstable signals is a problem that remains to be solved in DRL.
no code implementations • 31 Mar 2020 • Pin-Chu Yang, Mohammed Al-Sada, Chang-Chieh Chiu, Kevin Kuo, Tito Pradhono Tomo, Kanata Suzuki, Nelson Yalta, Kuo-Hao Shu, Tetsuya OGATA
Although numerous robots have been developed, few have focused on otaku culture or on embodying an anime character figurine.
no code implementations • 12 Mar 2020 • Yasuto Yokota, Kanata Suzuki, Yuzi Kanazawa, Tomoyoshi Takebayashi
However, the DNN used to detect grasping positions has two problems with respect to extracting feature vectors from a layer for shape classification: (1) because each layer of the grasping-position detection DNN is activated by all objects in the input image, the features must be refined for each grasping position.
no code implementations • 8 Mar 2020 • Kanata Suzuki, Yasuto Yokota, Yuzi Kanazawa, Tomoyoshi Takebayashi
We use two networks: an SSD that detects the grasping position of an object, and Siamese networks (SNs) that evaluate the trial sample using the similarity of two inputs in the feature space.
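The SN side of this pairing can be sketched minimally: two inputs pass through the same encoder, and their similarity in feature space scores the trial sample. The toy linear encoder and random weights below are illustrative assumptions, not the paper's trained network.

```python
import numpy as np

def encode(x, W):
    """Shared encoder (one linear layer + ReLU). Both inputs use the
    SAME weights W, which is the defining property of a Siamese net."""
    return np.maximum(0.0, W @ x)

def similarity(x1, x2, W):
    """Cosine similarity of the two embeddings; higher means the trial
    sample looks more like the reference in feature space."""
    z1, z2 = encode(x1, W), encode(x2, W)
    return float(z1 @ z2 /
                 (np.linalg.norm(z1) * np.linalg.norm(z2) + 1e-8))

rng = np.random.default_rng(0)
W = rng.standard_normal((16, 32))            # shared encoder weights
ref = rng.standard_normal(32)                # reference sample
near = ref + 0.01 * rng.standard_normal(32)  # slightly perturbed copy
far = rng.standard_normal(32)                # unrelated sample

# A near-duplicate scores higher than an unrelated sample.
print(similarity(ref, near, W) > similarity(ref, far, W))
```

In the paper the encoder is a trained deep network and the comparison is learned; this sketch only shows the weight-sharing-plus-similarity structure that the abstract describes.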