Search Results for author: Kuniaki Saito

Found 31 papers, 17 papers with code

COCO-FUNIT: Few-Shot Unsupervised Image Translation with a Content Conditioned Style Encoder

1 code implementation ECCV 2020 Kuniaki Saito, Kate Saenko, Ming-Yu Liu

Unsupervised image-to-image translation aims to learn a mapping from an image in a given domain to an analogous image in a different domain, without explicit supervision of the mapping.

Translation Unsupervised Image-To-Image Translation

Open Set Domain Adaptation by Backpropagation

4 code implementations ECCV 2018 Kuniaki Saito, Shohei Yamamoto, Yoshitaka Ushiku, Tatsuya Harada

Almost all of them are proposed for a closed-set scenario, where the source and the target domain completely share the classes of their samples.

Domain Adaptation
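
For reference, a minimal sketch of the open-set-by-backpropagation idea is below: a (K+1)-way classifier treats the extra logit as "unknown", the classifier pushes the unknown probability of target samples toward a fixed boundary (0.5), and a gradient reversal layer makes the feature extractor push it away, so each target sample ends up either aligned with a known class or rejected. The networks, batch sizes, and data here are placeholders, not the paper's architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    """Gradient reversal layer: identity forward, negated gradient backward."""
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -grad_output

K = 10                                  # number of known (source) classes
feature_extractor = nn.Sequential(nn.Linear(256, 128), nn.ReLU())
classifier = nn.Linear(128, K + 1)      # last logit = "unknown"

x_src = torch.randn(32, 256)            # placeholder source batch
y_src = torch.randint(0, K, (32,))
x_tgt = torch.randn(32, 256)            # placeholder (unlabeled) target batch

# Supervised loss on source samples (known classes only).
logits_src = classifier(feature_extractor(x_src))
loss_cls = F.cross_entropy(logits_src, y_src)

# Adversarial loss on target samples: the classifier pushes p(unknown)
# toward the boundary t = 0.5, while the reversed gradient drives the
# feature extractor to push it away from the boundary (toward 0 or 1).
feat_tgt = GradReverse.apply(feature_extractor(x_tgt))
p_unknown = F.softmax(classifier(feat_tgt), dim=1)[:, -1]
t = 0.5
loss_adv = F.binary_cross_entropy(p_unknown, torch.full_like(p_unknown, t))

(loss_cls + loss_adv).backward()
```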

Maximum Classifier Discrepancy for Unsupervised Domain Adaptation

8 code implementations CVPR 2018 Kuniaki Saito, Kohei Watanabe, Yoshitaka Ushiku, Tatsuya Harada

To solve these problems, we introduce a new approach that attempts to align distributions of source and target by utilizing the task-specific decision boundaries.

Image Classification Multi-Source Unsupervised Domain Adaptation +2
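
A minimal sketch of that idea follows: two classifiers sit on a shared feature extractor, and their disagreement on target samples (here the L1 distance between softmax outputs, as in the paper) is maximized with respect to the classifiers and minimized with respect to the feature extractor. The networks and data are placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def discrepancy(p1, p2):
    """L1 distance between the class-probability outputs of two classifiers."""
    return (p1 - p2).abs().mean()

feature_extractor = nn.Sequential(nn.Linear(256, 128), nn.ReLU())
classifier_1 = nn.Linear(128, 10)
classifier_2 = nn.Linear(128, 10)

x_src = torch.randn(32, 256)                 # placeholder labeled source batch
y_src = torch.randint(0, 10, (32,))
x_tgt = torch.randn(32, 256)                 # placeholder unlabeled target batch

# Step A: train feature extractor and both classifiers on labeled source data.
feat_src = feature_extractor(x_src)
loss_src = (F.cross_entropy(classifier_1(feat_src), y_src)
            + F.cross_entropy(classifier_2(feat_src), y_src))

# Step B: update the classifiers only (feature extractor frozen) to keep the
# source loss low while *maximizing* their disagreement on target samples,
# so the two decision boundaries probe ambiguous target regions.
feat_tgt = feature_extractor(x_tgt).detach()
p1 = F.softmax(classifier_1(feat_tgt), dim=1)
p2 = F.softmax(classifier_2(feat_tgt), dim=1)
loss_step_b = loss_src - discrepancy(p1, p2)     # minimized w.r.t. the classifiers

# Step C: update the feature extractor only to *minimize* the disagreement,
# pulling target features away from the task-specific decision boundaries.
feat_tgt = feature_extractor(x_tgt)
loss_step_c = discrepancy(F.softmax(classifier_1(feat_tgt), dim=1),
                          F.softmax(classifier_2(feat_tgt), dim=1))
# In practice, each of the three losses is minimized with its own optimizer step.
```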

Semi-supervised Domain Adaptation via Minimax Entropy

3 code implementations ICCV 2019 Kuniaki Saito, Donghyun Kim, Stan Sclaroff, Trevor Darrell, Kate Saenko

Contemporary domain adaptation methods are very effective at aligning feature distributions of source and target domains without any target supervision.

Domain Adaptation Semi-supervised Domain Adaptation
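
The minimax entropy method itself alternates between maximizing the entropy of unlabeled target predictions with respect to the classifier and minimizing it with respect to the feature extractor, usually via a gradient reversal layer. The sketch below assumes a cosine-similarity classifier with a temperature, placeholder sizes, and an assumed reversal weight; it illustrates the objective, not the released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    """Identity forward; multiplies the gradient by -lambda in the backward pass."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

feature_extractor = nn.Sequential(nn.Linear(256, 128), nn.ReLU())
prototypes = nn.Linear(128, 10, bias=False)    # cosine-classifier weights, one prototype per class
T = 0.05                                       # softmax temperature (assumed value)

x_unl = torch.randn(32, 256)                   # placeholder unlabeled target batch

# Reversing gradients between the feature extractor and the classifier lets a
# single entropy term do both jobs: the classifier is updated toward higher
# entropy (prototypes move toward target features), while the reversed
# gradient drives the feature extractor toward lower entropy.
feat = F.normalize(feature_extractor(x_unl), dim=1)
feat = GradReverse.apply(feat, 0.1)            # lambda = 0.1 (assumed)
w = F.normalize(prototypes.weight, dim=1)
p = F.softmax(feat @ w.t() / T, dim=1)
entropy = -(p * torch.log(p + 1e-8)).sum(dim=1).mean()

loss = -entropy                                # descending on -H maximizes entropy w.r.t. the classifier
loss.backward()                                # reversed gradient: feature extractor minimizes entropy
```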

Pic2Word: Mapping Pictures to Words for Zero-shot Composed Image Retrieval

1 code implementation CVPR 2023 Kuniaki Saito, Kihyuk Sohn, Xiang Zhang, Chun-Liang Li, Chen-Yu Lee, Kate Saenko, Tomas Pfister

Existing methods rely on supervised learning of CIR models using labeled triplets consisting of the query image, text specification, and the target image.

Attribute Retrieval +2
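
Pic2Word sidesteps such triplet supervision by learning a lightweight mapping from an image embedding to a pseudo word token that can be inserted into a text prompt of a frozen vision-language model. The sketch below illustrates that mechanism with toy stand-in encoders; the prompt handling, mapper architecture, and contrastive objective are assumptions for illustration, not the released code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Stand-ins for a frozen vision-language model (e.g. CLIP); real encoders
# would be loaded from a pretrained checkpoint and kept frozen.
class FrozenImageEncoder(nn.Module):
    def __init__(self, dim=512):
        super().__init__()
        self.proj = nn.Linear(3 * 224 * 224, dim)
    def forward(self, images):
        return F.normalize(self.proj(images.flatten(1)), dim=-1)

class FrozenTextEncoder(nn.Module):
    """Consumes a sequence of token embeddings, so a pseudo token can be inserted."""
    def __init__(self, dim=512):
        super().__init__()
        self.rnn = nn.GRU(dim, dim, batch_first=True)
    def forward(self, token_embeddings):
        _, h = self.rnn(token_embeddings)
        return F.normalize(h[-1], dim=-1)

class Pic2WordMapper(nn.Module):
    """Maps an image embedding to a single pseudo word-token embedding."""
    def __init__(self, dim=512):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
    def forward(self, image_emb):
        return self.mlp(image_emb)

dim = 512
image_encoder, text_encoder = FrozenImageEncoder(dim), FrozenTextEncoder(dim)
for p in list(image_encoder.parameters()) + list(text_encoder.parameters()):
    p.requires_grad_(False)                    # only the mapper is trained
mapper = Pic2WordMapper(dim)

images = torch.randn(8, 3, 224, 224)           # placeholder unlabeled images
prompt_tokens = torch.randn(8, 4, dim)         # embeddings of a template such as "a photo of"

# Insert the pseudo token produced from each image into the prompt, encode the
# resulting "sentence", and pull it toward the image embedding with a
# contrastive loss, so the pseudo token learns to describe the picture.
img_emb = image_encoder(images)
pseudo_token = mapper(img_emb).unsqueeze(1)
text_emb = text_encoder(torch.cat([prompt_tokens, pseudo_token], dim=1))

logits = text_emb @ img_emb.t() / 0.07         # temperature assumed
labels = torch.arange(len(images))
loss = F.cross_entropy(logits, labels)
loss.backward()
```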

Universal Domain Adaptation through Self Supervision

1 code implementation NeurIPS 2020 Kuniaki Saito, Donghyun Kim, Stan Sclaroff, Kate Saenko

While some methods address target settings with either partial or open-set categories, they assume that the particular setting is known a priori.

Clustering Partial Domain Adaptation +2
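
One ingredient commonly used for this "universal" setting is an entropy-based criterion on unlabeled target samples that works without knowing whether the shift is closed-set, partial, or open-set. The sketch below shows one plausible form of such an entropy-separation term, pushing a sample's prediction entropy away from a boundary value only when it is already clearly above or below it; the boundary, margin, and weighting are my assumptions, not the paper's reported values.

```python
import torch
import torch.nn.functional as F

def entropy_separation(logits, margin=0.5):
    """Push prediction entropy away from a boundary value rho, but only for
    samples whose entropy is already confidently above or below it.

    Assumed form: rho = log(K) / 2; samples within `margin` of rho are ignored,
    the rest are pushed further from rho (low-entropy ones toward "known",
    high-entropy ones toward "unknown")."""
    p = F.softmax(logits, dim=1)
    ent = -(p * torch.log(p + 1e-8)).sum(dim=1)
    rho = torch.log(torch.tensor(float(logits.size(1)))) / 2
    gap = (ent - rho).abs()
    mask = (gap > margin).float()               # only confident samples contribute
    # Minimizing -|H - rho| over the confident samples widens the gap.
    return -(gap * mask).sum() / mask.sum().clamp(min=1)

logits = torch.randn(32, 10, requires_grad=True)   # placeholder target logits (K = 10 classes)
loss = entropy_separation(logits)
loss.backward()
```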

Melody Generation for Pop Music via Word Representation of Musical Properties

1 code implementation 31 Oct 2017 Andrew Shin, Leopold Crestel, Hiroharu Kato, Kuniaki Saito, Katsunori Ohnishi, Masataka Yamaguchi, Masahiro Nakawaki, Yoshitaka Ushiku, Tatsuya Harada

Automatic melody generation for pop music has been a long-time aspiration for both AI researchers and musicians.

Sound Multimedia Audio and Speech Processing

OVANet: One-vs-All Network for Universal Domain Adaptation

2 code implementations ICCV 2021 Kuniaki Saito, Kate Saenko

In this paper, we propose a method to learn the threshold using source samples and to adapt it to the target domain.

Universal Domain Adaptation
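
Concretely, OVANet pairs a closed-set K-way classifier with one-vs-all (known vs. unknown) classifiers, one per class, so the rejection threshold is learned from source data instead of hand-tuned. The sketch below is a hedged reconstruction of the one-vs-all training signal (with hard-negative mining) and the 0.5 decision rule at inference; networks and data are placeholders, and the paper's target-side entropy term is omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

K = 10
feature_extractor = nn.Sequential(nn.Linear(256, 128), nn.ReLU())
closed_classifier = nn.Linear(128, K)          # standard K-way classifier
ova_classifier = nn.Linear(128, 2 * K)         # per-class (known vs. unknown) scores

x_src = torch.randn(32, 256)                   # placeholder source batch
y_src = torch.randint(0, K, (32,))

feat = feature_extractor(x_src)
loss_cls = F.cross_entropy(closed_classifier(feat), y_src)

# One-vs-all probabilities: p_ova[:, 1, k] = "x belongs to class k",
# p_ova[:, 0, k] = "x does not belong to class k".
p_ova = F.softmax(ova_classifier(feat).view(-1, 2, K), dim=1)

# Positive term for the ground-truth class; negative term for the hardest
# other class (hard-negative mining, per my reading of the paper).
idx = torch.arange(len(y_src))
pos = -torch.log(p_ova[idx, 1, y_src] + 1e-8)
neg_scores = p_ova[:, 1, :].detach().clone()
neg_scores[idx, y_src] = -1.0                  # exclude the true class
hard_neg = neg_scores.argmax(dim=1)
neg = -torch.log(p_ova[idx, 0, hard_neg] + 1e-8)
loss_ova = (pos + neg).mean()

(loss_cls + loss_ova).backward()

# Inference on a target sample: take the closed-set prediction, then reject it
# as "unknown" if its own one-vs-all "known" probability falls below 0.5 --
# the learned threshold the abstract refers to.
with torch.no_grad():
    feat_t = feature_extractor(torch.randn(5, 256))
    pred = closed_classifier(feat_t).argmax(dim=1)
    p_known = F.softmax(ova_classifier(feat_t).view(-1, 2, K), dim=1)[torch.arange(5), 1, pred]
    pred[p_known < 0.5] = K                    # index K = "unknown"
```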

OpenMatch: Open-set Consistency Regularization for Semi-supervised Learning with Outliers

1 code implementation 28 May 2021 Kuniaki Saito, Donghyun Kim, Kate Saenko

OpenMatch achieves state-of-the-art performance on three datasets, and even outperforms a fully supervised model in detecting outliers unseen in unlabeled data on CIFAR10.

Novelty Detection Outlier Detection

ERM++: An Improved Baseline for Domain Generalization

1 code implementation 4 Apr 2023 Piotr Teterwak, Kuniaki Saito, Theodoros Tsiligkaridis, Kate Saenko, Bryan A. Plummer

We also explore the relationship between DG performance and similarity to pre-training data, and find that similarity to pre-training data distributions is an important driver of performance, but that ERM++ with stronger initializations can deliver strong performance even on dissimilar datasets. Code is released at https://github.com/piotr-teterwak/erm_plusplus.

Domain Generalization

Mind the Backbone: Minimizing Backbone Distortion for Robust Object Detection

1 code implementation 26 Mar 2023 Kuniaki Saito, Donghyun Kim, Piotr Teterwak, Rogerio Feris, Kate Saenko

We propose to use Relative Gradient Norm (RGN) as a way to measure the vulnerability of a backbone to feature distortion, and show that high RGN is indeed correlated with lower OOD performance.

object-detection Robust Object Detection
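
The abstract does not spell out how Relative Gradient Norm is computed, so the snippet below uses an assumed, illustrative definition: the ratio between the total gradient norm reaching the pre-trained backbone and the gradient norm reaching the freshly initialized detection head during fine-tuning. Treat the formula as a placeholder; the point is only that a large share of the update signal landing on the backbone suggests a higher risk of feature distortion.

```python
import torch
import torch.nn as nn

# Toy "detector": a pre-trained backbone plus a freshly initialized head.
backbone = nn.Sequential(nn.Linear(256, 128), nn.ReLU(), nn.Linear(128, 128))
head = nn.Linear(128, 10)

x = torch.randn(32, 256)                       # placeholder batch
y = torch.randint(0, 10, (32,))
loss = nn.functional.cross_entropy(head(backbone(x)), y)
loss.backward()

def grad_norm(module):
    """Total L2 norm of the gradients of a module's parameters."""
    grads = [p.grad.flatten() for p in module.parameters() if p.grad is not None]
    return torch.cat(grads).norm()

# Assumed definition of a relative gradient norm: how much of the update
# signal hits the backbone compared with the head. Larger values mean the
# pre-trained features are pushed around more (more potential distortion).
rgn = grad_norm(backbone) / grad_norm(head)
print(float(rgn))
```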

Adversarial Dropout Regularization

no code implementations ICLR 2018 Kuniaki Saito, Yoshitaka Ushiku, Tatsuya Harada, Kate Saenko

However, a drawback of this approach is that the critic simply labels the generated features as in-domain or not, without considering the boundaries between classes.

General Classification Image Classification +2
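
Adversarial dropout regularization replaces that binary critic with the classifier itself: two dropout-perturbed forward passes act as two hypotheses, their disagreement is maximized with respect to the classifier and minimized with respect to the feature generator, so alignment respects class boundaries. Below is a minimal sketch of the disagreement term, assuming a symmetric KL divergence between the two dropout predictions; the divergence choice, networks, and schedule are assumptions for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

feature_extractor = nn.Sequential(nn.Linear(256, 128), nn.ReLU())
classifier = nn.Sequential(nn.Dropout(p=0.5), nn.Linear(128, 10))

def dropout_disagreement(feat):
    """Divergence between two stochastic (dropout-perturbed) predictions.
    Assumed form: symmetric KL between the two softmax outputs."""
    p1 = F.softmax(classifier(feat), dim=1)    # two forward passes, two dropout masks
    p2 = F.softmax(classifier(feat), dim=1)
    kl = lambda a, b: (a * (torch.log(a + 1e-8) - torch.log(b + 1e-8))).sum(dim=1)
    return 0.5 * (kl(p1, p2) + kl(p2, p1)).mean()

x_tgt = torch.randn(32, 256)                   # placeholder unlabeled target batch

# Critic step: update the classifier to *maximize* disagreement on target
# features (detached so the generator is untouched).
loss_critic = -dropout_disagreement(feature_extractor(x_tgt).detach())

# Generator step: update the feature extractor to *minimize* the same
# disagreement, producing target features the classifier is certain about
# regardless of the dropout mask. Each loss gets its own optimizer in practice.
loss_generator = dropout_disagreement(feature_extractor(x_tgt))
```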

Spatio-temporal Person Retrieval via Natural Language Queries

no code implementations ICCV 2017 Masataka Yamaguchi, Kuniaki Saito, Yoshitaka Ushiku, Tatsuya Harada

In this paper, we address the problem of spatio-temporal person retrieval from multiple videos using a natural language query, in which we output a tube (i.e., a sequence of bounding boxes) which encloses the person described by the query.

Human Detection Natural Language Queries +2

DualNet: Domain-Invariant Network for Visual Question Answering

no code implementations 20 Jun 2016 Kuniaki Saito, Andrew Shin, Yoshitaka Ushiku, Tatsuya Harada

The visual question answering (VQA) task not only bridges the gap between images and language, but also requires that specific contents within the image be understood as indicated by the linguistic context of the question, in order to generate accurate answers.

Question Answering Visual Question Answering

DeMIAN: Deep Modality Invariant Adversarial Network

no code implementations 23 Dec 2016 Kuniaki Saito, Yusuke Mukuta, Yoshitaka Ushiku, Tatsuya Harada

To obtain common representations in such a situation, we propose to make the distributions over different modalities similar in the learned representations, namely modality-invariant representations.

Domain Adaptation General Classification +2

Syn2Real: A New Benchmark for Synthetic-to-Real Visual Domain Adaptation

no code implementations 26 Jun 2018 Xingchao Peng, Ben Usman, Kuniaki Saito, Neela Kaushik, Judy Hoffman, Kate Saenko

In this paper, we present a new large-scale benchmark called Syn2Real, which consists of a synthetic domain rendered from 3D object models and two real-image domains containing the same object categories.

Classification Domain Adaptation +5

TWINs: Two Weighted Inconsistency-reduced Networks for Partial Domain Adaptation

no code implementations 18 Dec 2018 Toshihiko Matsuura, Kuniaki Saito, Tatsuya Harada

We utilize two classification networks to estimate the ratio of target samples in each class, and weight the classification loss with this estimate so that adaptation focuses on the classes actually present in the target domain.

General Classification Partial Domain Adaptation +2
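
As a hedged illustration of that weighting, the sketch below estimates the target class ratio by averaging the two networks' softmax predictions over a target batch and uses the estimate to re-weight the source classification loss, so classes absent from the target contribute little. The inconsistency-reduction term of the paper is omitted, and all networks and data are placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

K = 10
feature_extractor = nn.Sequential(nn.Linear(256, 128), nn.ReLU())
net1, net2 = nn.Linear(128, K), nn.Linear(128, K)

x_src = torch.randn(64, 256)                   # placeholder labeled source batch
y_src = torch.randint(0, K, (64,))
x_tgt = torch.randn(64, 256)                   # unlabeled target batch (subset of source classes)

# Estimate the class ratio in the target domain by averaging the two
# networks' predictions over the target batch.
with torch.no_grad():
    feat_t = feature_extractor(x_tgt)
    p_t = 0.5 * (F.softmax(net1(feat_t), dim=1) + F.softmax(net2(feat_t), dim=1))
    class_ratio = p_t.mean(dim=0)              # shape (K,): estimated target class frequencies

# Weight the source classification loss with the estimated ratio, so classes
# that appear absent from the target domain are down-weighted.
feat_s = feature_extractor(x_src)
loss = (F.cross_entropy(net1(feat_s), y_src, weight=class_ratio)
        + F.cross_entropy(net2(feat_s), y_src, weight=class_ratio))
loss.backward()
```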

MULE: Multimodal Universal Language Embedding

no code implementations 8 Sep 2019 Donghyun Kim, Kuniaki Saito, Kate Saenko, Stan Sclaroff, Bryan A. Plummer

In this paper, we present a modular approach which can easily be incorporated into existing vision-language methods in order to support many languages.

Data Augmentation Machine Translation +2

Cross-domain Self-supervised Learning for Domain Adaptation with Few Source Labels

no code implementations 18 Mar 2020 Donghyun Kim, Kuniaki Saito, Tae-Hyun Oh, Bryan A. Plummer, Stan Sclaroff, Kate Saenko

We show that when labeled source examples are limited, existing methods often fail to learn discriminative features applicable for both source and target domains.

Self-Supervised Learning Unsupervised Domain Adaptation

Self-supervised Visual Attribute Learning for Fashion Compatibility

no code implementations 1 Aug 2020 Donghyun Kim, Kuniaki Saito, Samarth Mishra, Stan Sclaroff, Kate Saenko, Bryan A. Plummer

Our approach consists of three self-supervised tasks, designed to capture different concepts neglected in prior work, from which we can select depending on the needs of our downstream tasks.

Attribute Object Recognition +3

CDS: Cross-Domain Self-Supervised Pre-Training

no code implementations ICCV 2021 Donghyun Kim, Kuniaki Saito, Tae-Hyun Oh, Bryan A. Plummer, Stan Sclaroff, Kate Saenko

We present a two-stage pre-training approach that improves the generalization ability of standard single-domain pre-training.

Domain Adaptation Transfer Learning

Pyramid Mini-Batching for Optimal Transport

no code implementations 29 Sep 2021 Devin Guillory, Kuniaki Saito, Eric Tzeng, Yannik Pitcan, Kate Saenko, Trevor Darrell

Optimal transport theory provides a useful tool to measure the differences between two distributions.

Domain Adaptation
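
As a quick illustration of that basic tool (not of the pyramid mini-batching scheme itself), the snippet below computes optimal-transport distances between two empirical mini-batch distributions with the POT library; the data and regularization strength are placeholders.

```python
import numpy as np
import ot  # Python Optimal Transport (POT)

rng = np.random.default_rng(0)
xs = rng.normal(loc=0.0, scale=1.0, size=(64, 16))   # mini-batch from distribution P
xt = rng.normal(loc=0.5, scale=1.0, size=(64, 16))   # mini-batch from distribution Q

# Uniform weights over the samples in each mini-batch.
a = np.full(len(xs), 1.0 / len(xs))
b = np.full(len(xt), 1.0 / len(xt))

# Pairwise squared-Euclidean cost matrix between the two batches.
M = ot.dist(xs, xt)

exact_cost = ot.emd2(a, b, M)                  # exact OT cost (linear program)
sinkhorn_cost = ot.sinkhorn2(a, b, M, reg=0.1) # entropically regularized approximation
print(exact_cost, sinkhorn_cost)
```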

Learning to Detect Every Thing in an Open World

no code implementations 3 Dec 2021 Kuniaki Saito, Ping Hu, Trevor Darrell, Kate Saenko

LDET leads to significant improvements on many datasets in the open-world instance segmentation task, outperforming baselines on cross-category generalization on COCO, as well as cross-dataset evaluation on UVO and Cityscapes.

Data Augmentation object-detection +3

Prefix Conditioning Unifies Language and Label Supervision

no code implementations CVPR 2023 Kuniaki Saito, Kihyuk Sohn, Xiang Zhang, Chun-Liang Li, Chen-Yu Lee, Kate Saenko, Tomas Pfister

In experiments, we show that this simple technique improves zero-shot image recognition accuracy and robustness to image-level distribution shift.

Classification Contrastive Learning +2
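
Prefix conditioning trains one vision-language model on both image-caption and image-classification data by prepending a data-type-specific learnable prefix to the text input, letting the text encoder disentangle the two supervision styles. The sketch below shows only that prefix mechanism with a toy text tower; the prefix length, encoder, and vocabulary are assumptions for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

dim, prefix_len = 256, 4

# One learnable prefix per data type: web captions vs. classification labels.
prefix_caption = nn.Parameter(torch.randn(prefix_len, dim) * 0.02)
prefix_label = nn.Parameter(torch.randn(prefix_len, dim) * 0.02)

token_embedding = nn.Embedding(30522, dim)           # toy vocabulary size
text_encoder = nn.GRU(dim, dim, batch_first=True)    # stand-in for a transformer text tower

def encode_text(token_ids, source):
    """Prepend the prefix matching the data source before encoding."""
    prefix = prefix_caption if source == "caption" else prefix_label
    emb = token_embedding(token_ids)                                   # (B, L, dim)
    emb = torch.cat([prefix.unsqueeze(0).expand(len(token_ids), -1, -1), emb], dim=1)
    _, h = text_encoder(emb)
    return F.normalize(h[-1], dim=-1)

caption_ids = torch.randint(0, 30522, (8, 12))   # tokenized web captions
label_ids = torch.randint(0, 30522, (8, 3))      # tokenized class-name prompts

caption_emb = encode_text(caption_ids, "caption")
label_emb = encode_text(label_ids, "label")
# During pre-training, these text embeddings would be matched to the paired
# image embeddings with a CLIP-style contrastive loss; at test time the same
# encoder is reused for zero-shot recognition with the appropriate prefix.
```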

Unsupervised LLM Adaptation for Question Answering

no code implementations 16 Feb 2024 Kuniaki Saito, Kihyuk Sohn, Chen-Yu Lee, Yoshitaka Ushiku

In this task, we leverage a pre-trained LLM, a publicly available QA dataset (source data), and unlabeled documents from the target domain.

Question Answering
