Search Results for author: Yoko Yamakata

Found 7 papers, 0 papers with code

Noisy Annotation Refinement for Object Detection

no code implementations 20 Oct 2021 Jiafeng Mao, Qing Yu, Yoko Yamakata, Kiyoharu Aizawa

In this study, we propose a new problem setting: training object detectors on datasets with entangled noise in both the class-label and bounding-box annotations.

Object Detection +1
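The snippet above describes the problem setting rather than a method. As a rough illustration of what entangled annotation noise means, the sketch below corrupts clean detection annotations with both label flips and box jitter; the class names and noise model are hypothetical and not taken from the paper.

```python
import random

CLASSES = ["rice", "soup", "salad"]  # hypothetical label set

def corrupt_annotation(label, box, flip_prob=0.2, jitter=0.1):
    """Return a noisy (label, box) pair: the class label may be flipped to a
    random other class, and each box coordinate/size is perturbed by up to
    `jitter` times the box size. Both noise sources hit the same annotation,
    which is the entangled situation the problem setting targets."""
    if random.random() < flip_prob:
        label = random.choice([c for c in CLASSES if c != label])
    x, y, w, h = box
    noisy_box = (x + random.uniform(-jitter, jitter) * w,
                 y + random.uniform(-jitter, jitter) * h,
                 w * (1 + random.uniform(-jitter, jitter)),
                 h * (1 + random.uniform(-jitter, jitter)))
    return label, noisy_box

print(corrupt_annotation("rice", (10.0, 20.0, 100.0, 80.0)))
```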

Visual Grounding Annotation of Recipe Flow Graph

no code implementations LREC 2020 Taichi Nishimura, Suzushi Tomori, Hayato Hashimoto, Atsushi Hashimoto, Yoko Yamakata, Jun Harashima, Yoshitaka Ushiku, Shinsuke Mori

Visual grounding is provided as bounding boxes to image sequences of recipes, and each bounding box is linked to an element of the workflow.

Visual Grounding
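A minimal sketch of what such a visually grounded annotation could look like as a data structure, assuming a simple dict-based schema; the field names, node types, and layout are hypothetical, not the corpus's actual file format.

```python
annotation = {
    "recipe_id": "r001",
    "flow_graph_nodes": [
        {"id": "n1", "text": "chop", "type": "action"},
        {"id": "n2", "text": "onion", "type": "food"},
    ],
    "images": [
        {
            "image_id": "step_03.jpg",
            # Each bounding box (x, y, width, height in pixels) is linked to
            # one element (node) of the recipe workflow.
            "bounding_boxes": [
                {"box": [120, 45, 200, 160], "node_id": "n2"},
            ],
        },
    ],
}

# Resolve which workflow element a given box is grounded to.
box = annotation["images"][0]["bounding_boxes"][0]
node = next(n for n in annotation["flow_graph_nodes"] if n["id"] == box["node_id"])
print(node["text"])  # -> "onion"
```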

English Recipe Flow Graph Corpus

no code implementations LREC 2020 Yoko Yamakata, Shinsuke Mori, John Carroll

For r-NE tagging we train a deep neural network NER tool; to compute flow graphs we train a dependency-style parsing procedure which we apply to the entire sequence of r-NEs in a recipe. In evaluations, our systems achieve 71.1 to 87.5 F1, demonstrating that our annotation scheme is learnable.

NER
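A minimal sketch of the two-stage structure described above: tag recipe named entities (r-NEs), then predict arcs over the full r-NE sequence, dependency-parser style, to build the flow graph. The tagger and parser below are stand-in stubs with a toy heuristic; the actual systems are a trained neural NER model and a trained dependency-style parser, and the tag and edge labels shown are only indicative.

```python
from typing import List, Tuple

def tag_rnes(tokens: List[str]) -> List[Tuple[str, str]]:
    """Stub r-NE tagger: returns (token, tag) pairs. A trained neural NER
    model would replace this lookup."""
    lexicon = {"chop": "Ac", "onion": "F", "pan": "T"}  # indicative tags: action, food, tool
    return [(t, lexicon.get(t, "O")) for t in tokens]

def parse_flow(rnes: List[Tuple[str, str]]) -> List[Tuple[int, int, str]]:
    """Stub dependency-style parser over the r-NE sequence: returns
    (head index, dependent index, edge label) arcs of the flow graph."""
    entities = [i for i, (_, tag) in enumerate(rnes) if tag != "O"]
    actions = [i for i in entities if rnes[i][1] == "Ac"]
    arcs = []
    # Toy heuristic for illustration only: attach each non-action entity
    # to the nearest preceding action (or the first action if none precedes).
    for i in entities:
        if rnes[i][1] == "Ac" or not actions:
            continue
        preceding = [a for a in actions if a <= i]
        head = max(preceding) if preceding else actions[0]
        arcs.append((head, i, "Targ"))  # hypothetical edge label
    return arcs

tokens = "chop the onion in a pan".split()
print(parse_flow(tag_rnes(tokens)))
```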

Recognition of Multiple Food Items in a Single Photo for Use in a Buffet-Style Restaurant

no code implementations 3 Mar 2019 Masashi Anzawa, Sosuke Amano, Yoko Yamakata, Keiko Motonaga, Akiko Kamei, Kiyoharu Aizawa

We investigate image recognition of multiple food items in a single photo, focusing on a buffet restaurant application, where the menu changes at every meal and only a few images per class are available.
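The snippet describes the setting (few images per class) rather than the method. One generic way to handle such few-shot recognition is a nearest-class-mean classifier over pretrained image embeddings; the sketch below illustrates that idea with a random stand-in embedding and is not the recognition approach used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def embed(image):
    """Stand-in embedding; a pretrained CNN feature extractor would go here."""
    return rng.normal(size=128)

def build_prototypes(support):
    """Average the few available embeddings per class into one prototype.
    `support` maps class name -> list of images."""
    return {c: np.mean([embed(img) for img in imgs], axis=0)
            for c, imgs in support.items()}

def classify(image, prototypes):
    """Assign the class whose prototype is nearest to the image embedding."""
    q = embed(image)
    return min(prototypes, key=lambda c: np.linalg.norm(q - prototypes[c]))

# Hypothetical buffet menu with only one or two images per class.
support = {"rice": [None, None], "miso_soup": [None], "salad": [None, None]}
prototypes = build_prototypes(support)
print(classify(None, prototypes))
```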
