Search Results for author: Satoshi Kosugi

Found 6 papers, 4 papers with code

DiLM: Distilling Dataset into Language Model for Text-level Dataset Distillation

1 code implementation • 30 Mar 2024 • Aru Maekawa, Satoshi Kosugi, Kotaro Funakoshi, Manabu Okumura

To address this issue, we propose a novel text dataset distillation approach, called Distilling dataset into Language Model (DiLM), which trains a language model to generate informative synthetic training samples as text data, instead of directly optimizing synthetic samples.
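The core idea above, generating synthetic training text from a trained generator rather than optimizing the samples directly, can be sketched with a toy stand-in. Everything here is illustrative: the per-class bigram generator and all names are assumptions for the sketch, not the paper's actual model (DiLM trains a real language model).

```python
import random
from collections import defaultdict

def fit_bigram(texts):
    """Fit a trivial bigram 'language model': map each token to its observed continuations."""
    model = defaultdict(list)
    for text in texts:
        tokens = ["<s>"] + text.split()
        for prev, nxt in zip(tokens, tokens[1:]):
            model[prev].append(nxt)
    return model

def sample(model, max_len=5, seed=0):
    """Generate one synthetic sample by walking the bigram table from <s>."""
    rng = random.Random(seed)
    token, out = "<s>", []
    while len(out) < max_len and token in model:
        token = rng.choice(model[token])
        out.append(token)
    return " ".join(out)

real_data = {
    "pos": ["great movie loved it", "great acting loved it"],
    "neg": ["boring movie hated it", "boring plot hated it"],
}
# Distilled set: a few generated samples per class stand in for the full dataset.
distilled = {label: [sample(fit_bigram(texts), seed=s) for s in range(2)]
             for label, texts in real_data.items()}
print(distilled)
```

A downstream model would then be trained on `distilled` instead of `real_data`; the point of the approach is that the generator, not the sample set, is the optimized object.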

In-Context Learning · Language Modelling · +3

Personalized Image Enhancement Featuring Masked Style Modeling

1 code implementation • 15 Jun 2023 • Satoshi Kosugi, Toshihiko Yamasaki

Second, to allow this model to consider the contents of images, we propose a novel training scheme where we download images from Flickr and create pseudo input and retouched image pairs using a degrading model.
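The pseudo-pair construction can be sketched in a few lines: treat a downloaded image as the "retouched" target and synthesize its unretouched input by degrading it. The fixed darken/de-contrast formula below is a stand-in assumption; in the paper the degrading model is learned, and images are real photos rather than a short pixel list.

```python
def degrade(image, contrast=0.6, brightness=-0.1):
    """Degrading-model stand-in: flatten contrast around the mean and darken, clamped to [0, 1]."""
    mean = sum(image) / len(image)
    return [max(0.0, min(1.0, contrast * (v - mean) + mean + brightness))
            for v in image]

retouched = [0.9, 0.2, 0.5, 0.7]   # pretend this is a downloaded Flickr photo
pseudo_input = degrade(retouched)  # synthetic "unretouched" version
pair = (pseudo_input, retouched)   # training pair: degraded input -> retouched target
print(pair)
```

An enhancement model trained on many such pairs learns to invert the degradation, i.e., to retouch, without any manually collected input/output pairs.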

Image Enhancement · Language Modelling · +1

Crowd-Powered Photo Enhancement Featuring an Active Learning Based Local Filter

1 code implementation • 15 Jun 2023 • Satoshi Kosugi, Toshihiko Yamasaki

Existing photo enhancement methods are either not content-aware or not local; therefore, we propose a crowd-powered local enhancement method for content-aware local enhancement, which is achieved by asking crowd workers to locally optimize parameters for image editing functions.
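The "local" part of the idea can be sketched as an editing function that takes per-region parameters, standing in for values crowd workers would optimize region by region. The binary masks and the simple gain function are assumptions for the sketch, not the paper's actual filter.

```python
def enhance_locally(image, masks, params):
    """Apply a per-region gain; pixels outside every mask pass through unchanged."""
    out = list(image)
    for mask, gain in zip(masks, params):
        for i, inside in enumerate(mask):
            if inside:
                out[i] = min(1.0, out[i] * gain)
    return out

image  = [0.2, 0.2, 0.8, 0.8]
masks  = [[1, 1, 0, 0], [0, 0, 1, 1]]  # two local regions of the image
params = [1.5, 0.9]                    # per-region parameters a worker might choose
result = enhance_locally(image, masks, params)
print(result)
```

The content-aware aspect comes from who sets `params`: humans (or an active-learning loop over their judgments) choose different parameters for different image regions, rather than one global setting.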

Active Learning

Unpaired Image Enhancement Featuring Reinforcement-Learning-Controlled Image Editing Software

no code implementations • 17 Dec 2019 • Satoshi Kosugi, Toshihiko Yamasaki

This paper tackles unpaired image enhancement, a task of learning a mapping function which transforms input images into enhanced images in the absence of input-output image pairs.
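The control idea can be sketched as an agent choosing discrete actions of an image-editing "software" (brighten, darken, stop) to maximize a reward. In the unpaired setting the paper's reward comes from learned feedback rather than target images; the hand-made brightness score and the greedy policy below are stand-in assumptions for the sketch, not the paper's RL algorithm.

```python
# Discrete editing actions exposed by a toy "editing software".
ACTIONS = {
    "brighten": lambda img: [min(1.0, v + 0.1) for v in img],
    "darken":   lambda img: [max(0.0, v - 0.1) for v in img],
    "stop":     lambda img: img,
}

def score(img, target_mean=0.5):
    """Stand-in reward: mean brightness close to a pleasant midpoint."""
    mean = sum(img) / len(img)
    return -abs(mean - target_mean)

def greedy_enhance(img, steps=5):
    """Greedy stand-in for the learned policy: take the best-scoring action each step."""
    for _ in range(steps):
        name, action = max(ACTIONS.items(), key=lambda kv: score(kv[1](img)))
        if name == "stop":
            break
        img = action(img)
    return img

enhanced = greedy_enhance([0.1, 0.2, 0.3])  # a dark 3-pixel "image"
print(enhanced)
```

Replacing the greedy argmax with a trained policy, and the hand-made score with a reward learned from unpaired enhanced images, recovers the shape of the approach: the agent drives existing editing operations rather than predicting pixels directly.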

Image Enhancement · reinforcement-learning · +1
