Search Results for author: Wataru Shimoda

Found 7 papers, 3 papers with code

Total Disentanglement of Font Images into Style and Character Class Features

no code implementations • 19 Mar 2024 • Daichi Haraguchi, Wataru Shimoda, Kota Yamaguchi, Seiichi Uchida

It is further demonstrated that the disentangled features produced by total disentanglement apply to a variety of tasks, including font recognition, character recognition, and one-shot font image generation.

Disentanglement • Font Recognition • +1

Image Cropping under Design Constraints

no code implementations • 13 Oct 2023 • Takumi Nishiyasu, Wataru Shimoda, Yoichi Sato

We explore two derived approaches, a proposal-based approach and a heatmap-based approach, and construct a dataset for evaluating their performance on image cropping under design constraints.

Image Cropping

Towards Diverse and Consistent Typography Generation

no code implementations • 5 Sep 2023 • Wataru Shimoda, Daichi Haraguchi, Seiichi Uchida, Kota Yamaguchi

In this work, we consider the typography generation task, which aims at producing diverse typographic styling for a given graphic document.

Attribute

Self-Supervised Difference Detection for Weakly-Supervised Semantic Segmentation

1 code implementation • ICCV 2019 • Wataru Shimoda, Keiji Yanai

In this paper, to make the most of such mapping functions, we assume that the results of the mapping function include noise, and we improve the accuracy by removing that noise (a toy sketch of this denoising idea follows this entry).

Segmentation • Weakly supervised segmentation • +2
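
As a rough illustration of the noise-removal idea in the entry above (not the paper's actual self-supervised difference detection pipeline), the Python sketch below treats disagreement between two pseudo-label maps as an estimate of noise and ignores those pixels; the function name and the agreement heuristic are illustrative assumptions.

import numpy as np

def denoise_by_agreement(labels_a, labels_b, ignore_index=255):
    # labels_a, labels_b: (H, W) integer pseudo-label maps produced by two
    # different mapping functions (e.g., before and after CRF refinement).
    # Pixels where the two maps disagree are treated as noisy and set to
    # ignore_index so a segmentation loss can skip them.
    labels_a = np.asarray(labels_a)
    labels_b = np.asarray(labels_b)
    agree = labels_a == labels_b  # pixel-wise agreement mask
    return np.where(agree, labels_a, ignore_index)

# Hypothetical usage with two small random pseudo-label maps
a = np.random.randint(0, 3, size=(4, 4))
b = np.random.randint(0, 3, size=(4, 4))
print(denoise_by_agreement(a, b))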

Saliency Detection by Forward and Backward Cues in Deep-CNNs

1 code implementation • 1 Mar 2017 • Nevrez Imamoglu, Chi Zhang, Wataru Shimoda, Yuming Fang, Boxin Shi

As prior knowledge of objects or object features helps us relate similar objects in attentional tasks, pre-trained deep convolutional neural networks (CNNs) can be used to detect salient objects in images regardless of whether the object class is within the network's knowledge (a minimal sketch of the forward-cue part follows this entry).

Object • Saliency Detection
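
The following is a minimal sketch of the forward-cue part of this idea only, assuming a torchvision VGG16 backbone; the activation-energy heuristic and the function name are illustrative assumptions, not the paper's combined forward-and-backward-cue method.

import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

def forward_cue_saliency(image_path):
    # Forward cue: average activation magnitude of the deepest conv features,
    # upsampled to image resolution. Backward cues (gradients of the top
    # class score) could be combined with this but are omitted here.
    model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).features.eval()
    preprocess = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])
    img = Image.open(image_path).convert("RGB")
    x = preprocess(img).unsqueeze(0)
    with torch.no_grad():
        feats = model(x)  # (1, 512, 7, 7) deep feature maps
    sal = feats.abs().mean(dim=1, keepdim=True)  # channel-wise activation energy
    sal = F.interpolate(sal, size=img.size[::-1], mode="bilinear", align_corners=False)
    sal = (sal - sal.min()) / (sal.max() - sal.min() + 1e-8)
    return sal.squeeze().numpy()  # (H, W) saliency map in [0, 1]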
