1 code implementation • 30 Mar 2024 • Aru Maekawa, Satoshi Kosugi, Kotaro Funakoshi, Manabu Okumura
To address this issue, we propose a novel text dataset distillation approach, called Distilling Dataset into Language Model (DiLM), which trains a language model to generate informative synthetic training samples as text data, rather than directly optimizing the synthetic samples themselves.
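As a rough illustration of the idea (not the authors' implementation), the sketch below fine-tunes a small causal language model on serialized real training samples and then decodes new synthetic samples; the GPT-2 backbone and the `label ||| text` serialization are assumptions made for the example.

```python
# Hedged sketch of the DiLM idea: train a generator LM on real samples,
# then sample synthetic training data as text. Not the authors' code.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Hypothetical serialization: "<label> ||| <text>" per training sample.
samples = ["positive ||| a warm, funny, engaging film",
           "negative ||| the plot is a mess and the pacing drags"]
batch = tokenizer(samples, return_tensors="pt", padding=True)
labels = batch["input_ids"].masked_fill(batch["attention_mask"] == 0, -100)
out = model(**batch, labels=labels)  # standard LM loss on real samples
out.loss.backward()                  # one training step (optimizer omitted)

# After training, decode synthetic samples to form the distilled dataset.
gen = model.generate(max_new_tokens=32, do_sample=True,
                     pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(gen[0], skip_special_tokens=True))
```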
1 code implementation • 15 Jun 2023 • Satoshi Kosugi, Toshihiko Yamasaki
Second, to allow this model to consider the contents of images, we propose a novel training scheme in which we download images from Flickr and create pseudo pairs of input and retouched images using a degrading model.
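A minimal sketch of this pseudo-pair construction, assuming simple brightness/contrast/saturation reductions stand in for the paper's degrading model:

```python
# Hedged sketch: treat a downloaded Flickr photo as the "retouched" target
# and synthesize its degraded counterpart as the pseudo input.
# The specific degradation ops and their ranges are assumptions.
import random
from PIL import Image, ImageEnhance

def degrade(img: Image.Image) -> Image.Image:
    """Randomly weaken brightness, contrast, and saturation."""
    img = ImageEnhance.Brightness(img).enhance(random.uniform(0.5, 0.9))
    img = ImageEnhance.Contrast(img).enhance(random.uniform(0.5, 0.9))
    img = ImageEnhance.Color(img).enhance(random.uniform(0.4, 0.9))
    return img

target = Image.open("flickr_photo.jpg").convert("RGB")  # pseudo retouched output
source = degrade(target)                                # pseudo unretouched input
# (source, target) now serves as one pseudo training pair.
```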
1 code implementation • 15 Jun 2023 • Satoshi Kosugi, Toshihiko Yamasaki
Existing photo enhancement methods are either not content-aware or not local; we therefore propose a crowd-powered method for content-aware local enhancement, in which crowd workers locally optimize the parameters of image editing functions.
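The sketch below illustrates the general shape of such a parametric local edit, assuming a worker has selected a region and chosen brightness and contrast values; the mask and editing functions are illustrative, not the paper's.

```python
# Hedged sketch of applying crowd-optimized parameters within a local region.
import numpy as np

def local_enhance(img: np.ndarray, mask: np.ndarray,
                  brightness: float, contrast: float) -> np.ndarray:
    """Apply brightness/contrast only where mask == 1."""
    edited = np.clip((img - 0.5) * contrast + 0.5 + brightness, 0.0, 1.0)
    m = mask[..., None].astype(np.float32)  # broadcast mask over channels
    return m * edited + (1.0 - m) * img

img = np.random.rand(64, 64, 3).astype(np.float32)   # stand-in photo in [0, 1]
mask = np.zeros((64, 64), np.float32)
mask[16:48, 16:48] = 1.0                             # region a worker selected
out = local_enhance(img, mask, brightness=0.1, contrast=1.2)
```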
1 code implementation • 2 Oct 2020 • Nobukatsu Kajiura, Satoshi Kosugi, Xueting Wang, Toshihiko Yamasaki
In this study, we address image retargeting, the task of adjusting input images to arbitrary sizes.
no code implementations • 17 Dec 2019 • Satoshi Kosugi, Toshihiko Yamasaki
This paper tackles unpaired image enhancement, the task of learning a mapping function that transforms input images into enhanced images in the absence of input-output image pairs.
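One common way to learn such a mapping without pairs is an adversarial objective that compares enhanced outputs against an unpaired pool of high-quality photos; the toy networks below sketch that general idea and are not the paper's method.

```python
# Hedged sketch: unpaired training via an adversarial loss. The tiny
# generator/discriminator architectures here are placeholders.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(16, 3, 3, padding=1))         # enhancer (toy)
D = nn.Sequential(nn.Conv2d(3, 16, 3, stride=2), nn.ReLU(),
                  nn.Flatten(), nn.LazyLinear(1))         # real-vs-enhanced critic
bce = nn.BCEWithLogitsLoss()

low = torch.rand(4, 3, 32, 32)    # unenhanced inputs
good = torch.rand(4, 3, 32, 32)   # unpaired high-quality photos
fake = G(low)
d_loss = bce(D(good), torch.ones(4, 1)) + bce(D(fake.detach()), torch.zeros(4, 1))
g_loss = bce(D(fake), torch.ones(4, 1))   # push enhanced outputs toward "real"
```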
no code implementations • ICCV 2019 • Satoshi Kosugi, Toshihiko Yamasaki, Kiyoharu Aizawa
Weakly supervised object detection (WSOD), in which a detector is trained with only image-level annotations, has been attracting increasing attention.
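The standard multiple-instance-learning recipe behind WSOD (in the spirit of WSDDN) aggregates per-region class scores into a single image-level prediction, so only image-level labels are needed for training; the sketch below is illustrative, not this paper's specific method.

```python
# Hedged MIL sketch: two heads score region proposals, their product is
# summed over regions to give an image-level score trained with image labels.
import torch
import torch.nn.functional as F

region_feats = torch.randn(100, 512)          # features for 100 region proposals
cls_head = torch.nn.Linear(512, 20)           # 20 object classes
det_head = torch.nn.Linear(512, 20)

cls = F.softmax(cls_head(region_feats), dim=1)   # what class is each region?
det = F.softmax(det_head(region_feats), dim=0)   # which regions matter per class?
image_score = (cls * det).sum(dim=0)             # (20,) image-level class scores

image_labels = torch.zeros(20)
image_labels[[3, 7]] = 1.0                       # image-level annotation only
loss = F.binary_cross_entropy(image_score.clamp(0, 1), image_labels)
```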