1 code implementation • 6 Feb 2023 • Kuniaki Saito, Kihyuk Sohn, Xiang Zhang, Chun-Liang Li, Chen-Yu Lee, Kate Saenko, Tomas Pfister
Existing methods rely on supervised learning of CIR models using labeled triplets consisting of the query image, text specification, and the target image.
no code implementations • 2 Jun 2022 • Kuniaki Saito, Kihyuk Sohn, Xiang Zhang, Chun-Liang Li, Chen-Yu Lee, Kate Saenko, Tomas Pfister
However, a naive unification of the real caption and the prompt sentences could lead to a complication in learning, as the distribution shift in text may not be handled properly in the language encoder.
no code implementations • 3 Dec 2021 • Kuniaki Saito, Ping Hu, Trevor Darrell, Kate Saenko
LDET leads to significant improvements on many datasets in the open-world instance segmentation task, outperforming baselines on cross-category generalization on COCO, as well as cross-dataset evaluation on UVO and Cityscapes.
1 code implementation • NeurIPS 2021 • Kuniaki Saito, Donghyun Kim, Kate Saenko
OpenMatch achieves state-of-the-art performance on three datasets, and even outperforms a fully supervised model in detecting outliers unseen in unlabeled data on CIFAR10.
no code implementations • 29 Sep 2021 • Devin Guillory, Kuniaki Saito, Eric Tzeng, Yannik Pitcan, Kate Saenko, Trevor Darrell
Optimal transport theory provides a useful tool to measure the differences between two distributions.
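As a concrete illustration of this claim (not code from the paper): in one dimension, the 1-Wasserstein distance between two equal-size empirical samples reduces to the mean absolute difference between the sorted samples. A minimal NumPy sketch:

```python
import numpy as np

def wasserstein_1d(a, b):
    # For 1-D empirical distributions with equal sample counts, the
    # 1-Wasserstein distance reduces to the mean absolute difference
    # between the sorted samples (sorting gives the optimal coupling).
    return np.mean(np.abs(np.sort(a) - np.sort(b)))
```

Identical distributions give distance 0; shifting every sample by a constant c gives distance c.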
2 code implementations • ICCV 2021 • Kuniaki Saito, Donghyun Kim, Piotr Teterwak, Stan Sclaroff, Trevor Darrell, Kate Saenko
Unsupervised domain adaptation (UDA) methods can dramatically improve generalization on unlabeled target domains.
1 code implementation • 23 Jul 2021 • Dina Bashkirova, Dan Hendrycks, Donghyun Kim, Samarth Mishra, Kate Saenko, Kuniaki Saito, Piotr Teterwak, Ben Usman
Progress in machine learning is typically measured by training and testing a model on the same distribution of data, i.e., the same domain.
2 code implementations • ICCV 2021 • Kuniaki Saito, Kate Saenko
In this paper, we propose a method to learn the threshold using source samples and to adapt it to the target domain.
no code implementations • ICCV 2021 • Donghyun Kim, Kuniaki Saito, Tae-Hyun Oh, Bryan A. Plummer, Stan Sclaroff, Kate Saenko
We present a two-stage pre-training approach that improves the generalization ability of standard single-domain pre-training.
no code implementations • 1 Aug 2020 • Donghyun Kim, Kuniaki Saito, Samarth Mishra, Stan Sclaroff, Kate Saenko, Bryan A Plummer
Our approach consists of three self-supervised tasks, designed to capture different concepts neglected in prior work, from which we can select depending on the needs of our downstream tasks.
1 code implementation • ECCV 2020 • Kuniaki Saito, Kate Saenko, Ming-Yu Liu
Unsupervised image-to-image translation aims to learn a mapping from an image in a given domain to an analogous image in a different domain, without explicit supervision of the mapping.
no code implementations • 18 Mar 2020 • Donghyun Kim, Kuniaki Saito, Tae-Hyun Oh, Bryan A. Plummer, Stan Sclaroff, Kate Saenko
We show that when labeled source examples are limited, existing methods often fail to learn discriminative features applicable for both source and target domains.
1 code implementation • NeurIPS 2020 • Kuniaki Saito, Donghyun Kim, Stan Sclaroff, Kate Saenko
While some methods address target settings with either partial or open-set categories, they assume that the particular setting is known a priori.
no code implementations • 8 Sep 2019 • Donghyun Kim, Kuniaki Saito, Kate Saenko, Stan Sclaroff, Bryan A. Plummer
In this paper, we present a modular approach which can easily be incorporated into existing vision-language methods in order to support many languages.
3 code implementations • ICCV 2019 • Kuniaki Saito, Donghyun Kim, Stan Sclaroff, Trevor Darrell, Kate Saenko
Contemporary domain adaptation methods are very effective at aligning feature distributions of source and target domains without any target supervision.
no code implementations • 18 Dec 2018 • Toshihiko Matsuura, Kuniaki Saito, Tatsuya Harada
We utilize two classification networks to estimate the ratio of target samples in each class, with which the classification loss is weighted in order to adapt to the classes present in the target domain.
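The weighting idea described above can be sketched as a cross-entropy loss scaled by estimated target class ratios; the function and variable names here are our own illustration, not the paper's code:

```python
import numpy as np

def weighted_ce(probs, labels, class_ratios):
    # Cross-entropy in which each sample's loss is scaled by the
    # estimated ratio of its class in the target domain, so classes
    # believed absent from the target contribute little or nothing.
    w = class_ratios[labels]
    nll = -np.log(probs[np.arange(len(labels)), labels] + 1e-12)
    return np.mean(w * nll)
```

A class whose estimated target ratio is zero is effectively removed from the adaptation objective.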
2 code implementations • CVPR 2019 • Kuniaki Saito, Yoshitaka Ushiku, Tatsuya Harada, Kate Saenko
This motivates us to propose a novel method for detector adaptation based on strong local alignment and weak global alignment.
Ranked #2 on Unsupervised Domain Adaptation on SIM10K to BDD100K
1 code implementation • 11 Dec 2018 • Kohei Watanabe, Kuniaki Saito, Yoshitaka Ushiku, Tatsuya Harada
The other is a multitask learning approach that uses depth images as outputs.
no code implementations • 26 Jun 2018 • Xingchao Peng, Ben Usman, Kuniaki Saito, Neela Kaushik, Judy Hoffman, Kate Saenko
In this paper, we present a new large-scale benchmark called Syn2Real, which consists of a synthetic domain rendered from 3D object models and two real-image domains containing the same object categories.
4 code implementations • ECCV 2018 • Kuniaki Saito, Shohei Yamamoto, Yoshitaka Ushiku, Tatsuya Harada
Almost all of them are proposed for a closed-set scenario, where the source and the target domain completely share the classes of their samples.
8 code implementations • CVPR 2018 • Kuniaki Saito, Kohei Watanabe, Yoshitaka Ushiku, Tatsuya Harada
To solve these problems, we introduce a new approach that attempts to align distributions of source and target by utilizing the task-specific decision boundaries.
Ranked #3 on Domain Adaptation on HMDBfull-to-UCF
Image Classification • Multi-Source Unsupervised Domain Adaptation • +2
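The discrepancy in this approach (Maximum Classifier Discrepancy) is commonly measured as the mean absolute difference between two classifiers' softmax outputs on target samples — maximized with respect to the classifiers and minimized with respect to the feature extractor. A minimal NumPy sketch of that quantity, under our own naming:

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the class axis.
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def discrepancy(logits1, logits2):
    # Mean L1 distance between the two classifiers' class-probability
    # outputs, averaged over the batch: large where the classifiers
    # disagree, i.e., near task-specific decision boundaries.
    p1, p2 = softmax(logits1), softmax(logits2)
    return np.mean(np.abs(p1 - p2))
```

Agreeing classifiers give a discrepancy of zero; target samples that fall between the two decision boundaries yield a large value.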
no code implementations • ICLR 2018 • Kuniaki Saito, Yoshitaka Ushiku, Tatsuya Harada, Kate Saenko
However, a drawback of this approach is that the critic simply labels the generated features as in-domain or not, without considering the boundaries between classes.
Ranked #2 on Synthetic-to-Real Translation on Syn2Real-C
1 code implementation • 31 Oct 2017 • Andrew Shin, Leopold Crestel, Hiroharu Kato, Kuniaki Saito, Katsunori Ohnishi, Masataka Yamaguchi, Masahiro Nakawaki, Yoshitaka Ushiku, Tatsuya Harada
Automatic melody generation for pop music has been a long-time aspiration for both AI researchers and musicians.
Sound • Multimedia • Audio and Speech Processing
no code implementations • ICCV 2017 • Masataka Yamaguchi, Kuniaki Saito, Yoshitaka Ushiku, Tatsuya Harada
In this paper, we address the problem of spatio-temporal person retrieval from multiple videos using a natural language query, in which we output a tube (i.e., a sequence of bounding boxes) which encloses the person described by the query.
1 code implementation • ICML 2017 • Kuniaki Saito, Yoshitaka Ushiku, Tatsuya Harada
Deep models trained on a large number of labeled samples boost the accuracy of many tasks.
Ranked #5 on Sentiment Analysis on Multi-Domain Sentiment Dataset
no code implementations • 23 Dec 2016 • Kuniaki Saito, Yusuke Mukuta, Yoshitaka Ushiku, Tatsuya Harada
To obtain the common representations under such a situation, we propose to make the distributions over different modalities similar in the learned representations, namely modality-invariant representations.
no code implementations • 20 Jun 2016 • Kuniaki Saito, Andrew Shin, Yoshitaka Ushiku, Tatsuya Harada
The visual question answering (VQA) task not only bridges the gap between images and language, but also requires that specific content within the image be understood as indicated by the linguistic context of the question, in order to generate accurate answers.