Search Results for author: Yi-Wen Chen

Found 8 papers, 5 papers with code

End-to-end Multi-modal Video Temporal Grounding

1 code implementation • NeurIPS 2021 • Yi-Wen Chen, Yi-Hsuan Tsai, Ming-Hsuan Yang

Specifically, we adopt RGB images for appearance, optical flow for motion, and depth maps for image structure.

Optical Flow Estimation • Self-Supervised Learning
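
The multi-modal setup described in this entry (RGB for appearance, optical flow for motion, depth maps for structure) can be pictured with a minimal PyTorch sketch. The encoders, feature dimensions, and fusion-by-concatenation below are illustrative assumptions, not the paper's architecture.

```python
# Hypothetical sketch (not the authors' code): projecting per-frame RGB, optical-flow,
# and depth features into a shared space and fusing them by concatenation.
import torch
import torch.nn as nn

class MultiModalFusion(nn.Module):
    def __init__(self, rgb_dim=2048, flow_dim=1024, depth_dim=512, hidden=256):
        super().__init__()
        # Project each modality to a common hidden dimension (dims are assumed).
        self.rgb_proj = nn.Linear(rgb_dim, hidden)
        self.flow_proj = nn.Linear(flow_dim, hidden)
        self.depth_proj = nn.Linear(depth_dim, hidden)
        self.fuse = nn.Linear(3 * hidden, hidden)

    def forward(self, rgb, flow, depth):
        # Each input: (batch, time, dim) per-frame features.
        feats = torch.cat([self.rgb_proj(rgb),
                           self.flow_proj(flow),
                           self.depth_proj(depth)], dim=-1)
        return torch.relu(self.fuse(feats))  # (batch, time, hidden)

if __name__ == "__main__":
    model = MultiModalFusion()
    out = model(torch.randn(2, 16, 2048), torch.randn(2, 16, 1024), torch.randn(2, 16, 512))
    print(out.shape)  # torch.Size([2, 16, 256])
```
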

Understanding Synonymous Referring Expressions via Contrastive Features

1 code implementation • 20 Apr 2021 • Yi-Wen Chen, Yi-Hsuan Tsai, Ming-Hsuan Yang

While prior work usually treats each sentence separately and attends it to an object on its own, we focus on learning a referring expression comprehension model that accounts for the properties shared across synonymous sentences.

Object • Referring Expression • +3
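
One way to picture the contrastive treatment of synonymous expressions in this entry is a sentence-level InfoNCE-style loss that pulls together embeddings of expressions referring to the same object and pushes apart the rest. The loss form, temperature, and the `contrastive_loss` helper below are assumptions for illustration, not the paper's objective.

```python
# Illustrative contrastive objective over sentence embeddings: synonymous expressions
# (same object id) act as positives, all other sentences act as negatives.
import torch
import torch.nn.functional as F

def contrastive_loss(sent_emb, object_ids, temperature=0.1):
    """sent_emb: (N, D) sentence embeddings; object_ids: (N,) id of the referred object."""
    n = sent_emb.size(0)
    z = F.normalize(sent_emb, dim=-1)
    sim = z @ z.t() / temperature                                  # pairwise cosine similarities
    self_mask = torch.eye(n, dtype=torch.bool)
    pos_mask = (object_ids.unsqueeze(0) == object_ids.unsqueeze(1)) & ~self_mask
    exp_sim = torch.exp(sim).masked_fill(self_mask, 0.0)           # drop self-similarity
    pos = (exp_sim * pos_mask).sum(dim=1)                          # mass on synonymous sentences
    denom = exp_sim.sum(dim=1)                                     # mass on all other sentences
    valid = pos_mask.any(dim=1)                                    # needs at least one synonym
    return -torch.log(pos[valid] / denom[valid]).mean()

# Example: six expressions, two synonymous expressions per object.
loss = contrastive_loss(torch.randn(6, 128), torch.tensor([0, 0, 1, 1, 2, 2]))
```
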

Regularizing Meta-Learning via Gradient Dropout

1 code implementation • 13 Apr 2020 • Hung-Yu Tseng, Yi-Wen Chen, Yi-Hsuan Tsai, Sifei Liu, Yen-Yu Lin, Ming-Hsuan Yang

With the growing attention on learning-to-learn new tasks using only a few examples, meta-learning has been widely used in numerous problems such as few-shot classification, reinforcement learning, and domain generalization.

Domain Generalization • Meta-Learning
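
The idea of gradient dropout can be sketched as a Bernoulli mask applied to inner-loop gradients in a MAML-style adaptation step. The masking granularity and hyper-parameters below are assumed for illustration and may differ from the paper's actual scheme.

```python
# Minimal sketch (assumed form, not the paper's exact method): randomly zeroing
# entries of the inner-loop gradient before a gradient-based adaptation step.
import torch

def inner_update_with_grad_dropout(params, loss, lr=0.01, drop_rate=0.2):
    """params: list of tensors with requires_grad=True; returns adapted copies."""
    grads = torch.autograd.grad(loss, params, create_graph=True)
    adapted = []
    for p, g in zip(params, grads):
        mask = (torch.rand_like(g) > drop_rate).float()   # Bernoulli keep-mask
        adapted.append(p - lr * g * mask)                 # masked gradient step
    return adapted

# Toy usage: one adaptation step on a linear regression task.
w = torch.zeros(3, requires_grad=True)
x, y = torch.randn(8, 3), torch.randn(8)
loss = ((x @ w - y) ** 2).mean()
w_adapted, = inner_update_with_grad_dropout([w], loss)
```
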

Referring Expression Object Segmentation with Caption-Aware Consistency

1 code implementation • 10 Oct 2019 • Yi-Wen Chen, Yi-Hsuan Tsai, Tiantian Wang, Yen-Yu Lin, Ming-Hsuan Yang

To this end, we propose an end-to-end trainable comprehension network that consists of the language and visual encoders to extract feature representations from both domains.

Caption Generation • Object • +4
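
The language-plus-visual encoder layout mentioned in this entry might look roughly like the toy module below, where GRU sentence features modulate CNN feature maps before a per-pixel head. All layer choices and dimensions are assumptions, not the paper's network.

```python
# Hypothetical two-encoder sketch: a visual CNN and a language GRU whose features
# are combined to produce per-pixel segmentation logits.
import torch
import torch.nn as nn

class ComprehensionNet(nn.Module):
    def __init__(self, vocab_size=1000, word_dim=128, hidden=256):
        super().__init__()
        self.visual = nn.Sequential(                      # toy visual encoder
            nn.Conv2d(3, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, hidden, 3, padding=1), nn.ReLU())
        self.embed = nn.Embedding(vocab_size, word_dim)
        self.language = nn.GRU(word_dim, hidden, batch_first=True)
        self.head = nn.Conv2d(hidden, 1, 1)               # per-pixel mask logits

    def forward(self, image, tokens):
        v = self.visual(image)                            # (B, hidden, H, W)
        _, h = self.language(self.embed(tokens))          # final GRU state: (1, B, hidden)
        lang = h[-1][:, :, None, None]                    # broadcast over spatial dims
        return self.head(v * lang)                        # (B, 1, H, W) logits

net = ComprehensionNet()
logits = net(torch.randn(2, 3, 32, 32), torch.randint(0, 1000, (2, 7)))
```
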

Unseen Object Segmentation in Videos via Transferable Representations

no code implementations • 8 Jan 2019 • Yi-Wen Chen, Yi-Hsuan Tsai, Chu-Ya Yang, Yen-Yu Lin, Ming-Hsuan Yang

The entire process is decomposed into two tasks: 1) solving a submodular function for selecting object-like segments, and 2) learning a CNN model with a transferable module for adapting seen categories in the source domain to the unseen target video.

Object Segmentation • +1
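
The first step in this entry, maximizing a submodular function to pick object-like segments, can be illustrated with a simple greedy selection. The coverage-minus-size objective and the `greedy_select` helper below are hypothetical stand-ins, not the paper's actual submodular function.

```python
# Illustrative greedy maximization of a simple submodular-style score over candidate
# segment masks: marginal pixel coverage minus a size penalty (objective is assumed).
import numpy as np

def greedy_select(masks, k=5, penalty=0.1):
    """masks: list of boolean arrays (H, W); greedily pick up to k segments."""
    covered = np.zeros_like(masks[0], dtype=bool)
    selected = []
    for _ in range(k):
        best, best_gain = None, 0.0
        for i, m in enumerate(masks):
            if i in selected:
                continue
            gain = np.logical_and(m, ~covered).sum() - penalty * m.sum()
            if gain > best_gain:
                best, best_gain = i, gain
        if best is None:                                  # no segment adds positive gain
            break
        selected.append(best)
        covered |= masks[best]
    return selected
```
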
