Search Results for author: Yen-Cheng Liu

Found 12 papers, 5 papers with code

Enhancing Multi-Robot Perception via Learned Data Association

no code implementations 1 Jul 2021 Nathaniel Glaser, Yen-Cheng Liu, Junjiao Tian, Zsolt Kira

In this paper, we address the multi-robot collaborative perception problem, specifically in the context of multi-view infilling for distributed semantic segmentation.

Semantic Segmentation

Overcoming Obstructions via Bandwidth-Limited Multi-Agent Spatial Handshaking

no code implementations 1 Jul 2021 Nathaniel Glaser, Yen-Cheng Liu, Junjiao Tian, Zsolt Kira

In this paper, we address bandwidth-limited and obstruction-prone collaborative perception, specifically in the context of multi-agent semantic segmentation.

Semantic Segmentation

Unbiased Teacher for Semi-Supervised Object Detection

1 code implementation ICLR 2021 Yen-Cheng Liu, Chih-Yao Ma, Zijian He, Chia-Wen Kuo, Kan Chen, Peizhao Zhang, Bichen Wu, Zsolt Kira, Peter Vajda

To address this, we introduce Unbiased Teacher, a simple yet effective approach that jointly trains a student and a gradually progressing teacher in a mutually-beneficial manner.

Image Classification • Object Detection • +1
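
The "gradually progressing teacher" described above is commonly realized as an exponential-moving-average (EMA) copy of the student that produces confidence-filtered pseudo-labels for unlabeled images. The PyTorch-style sketch below illustrates that general training loop under those assumptions; the detector interface (student.loss, teacher.predict), the confidence threshold, and the loss weight are placeholders, not the authors' released code.

```python
import torch

# Sketch of a mutually-beneficial student/teacher loop for semi-supervised
# detection (assumed EMA-based design; model methods are placeholders).

def ema_update(teacher, student, decay=0.999):
    """Move each teacher weight slowly toward the corresponding student weight."""
    with torch.no_grad():
        for t_param, s_param in zip(teacher.parameters(), student.parameters()):
            t_param.mul_(decay).add_(s_param, alpha=1.0 - decay)

def filter_pseudo_labels(detections, score_threshold=0.7):
    """Keep only high-confidence teacher detections as pseudo ground truth."""
    return [d for d in detections if d["score"] >= score_threshold]

def train_step(student, teacher, labeled_batch, unlabeled_batch, optimizer,
               unsup_weight=1.0):
    # Supervised loss on labeled data.
    sup_loss = student.loss(labeled_batch["images"], labeled_batch["targets"])

    # Teacher pseudo-labels weakly augmented unlabeled images (no gradients).
    with torch.no_grad():
        teacher_dets = teacher.predict(unlabeled_batch["weak_images"])
    pseudo_targets = [filter_pseudo_labels(d) for d in teacher_dets]

    # Student learns from those pseudo-labels on strongly augmented views.
    unsup_loss = student.loss(unlabeled_batch["strong_images"], pseudo_targets)

    loss = sup_loss + unsup_weight * unsup_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    # The teacher is never updated by gradients, only by EMA from the student.
    ema_update(teacher, student)
```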

Posterior Re-calibration for Imbalanced Datasets

no code implementations NeurIPS 2020 Junjiao Tian, Yen-Cheng Liu, Nathan Glaser, Yen-Chang Hsu, Zsolt Kira

Neural Networks can perform poorly when the training label distribution is heavily imbalanced, as well as when the testing data differs from the training distribution.

Long-tail Learning • Semantic Segmentation
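
A generic form of the post-hoc re-calibration suggested by the title rescales a trained model's softmax posterior by the ratio of a desired test-time class prior to the imbalanced training prior, then renormalizes. The snippet below is a minimal sketch of that standard Bayes-rule adjustment with a tunable strength; it illustrates the idea and is not the specific rule derived in the paper.

```python
import numpy as np

def recalibrate_posterior(probs, train_prior, target_prior=None, strength=1.0):
    """Adjust classifier posteriors for a label prior that differs from training.

    probs:        (N, C) softmax outputs of a model trained on imbalanced data
    train_prior:  (C,) empirical class frequencies of the training set
    target_prior: (C,) desired test-time prior (uniform if None)
    strength:     exponent on the prior ratio (1.0 = full Bayes-rule correction)
    """
    probs = np.asarray(probs, dtype=np.float64)
    train_prior = np.asarray(train_prior, dtype=np.float64)
    num_classes = probs.shape[1]
    if target_prior is None:
        target_prior = np.full(num_classes, 1.0 / num_classes)

    # Divide out the training prior, multiply in the target prior, renormalize.
    adjusted = probs * (target_prior / train_prior) ** strength
    return adjusted / adjusted.sum(axis=1, keepdims=True)

# Example: a 3-class model trained on a 90/9/1 split; tail classes get boosted.
p = np.array([[0.85, 0.10, 0.05]])
print(recalibrate_posterior(p, train_prior=np.array([0.90, 0.09, 0.01])))
```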

When2com: Multi-Agent Perception via Communication Graph Grouping

1 code implementation CVPR 2020 Yen-Cheng Liu, Junjiao Tian, Nathaniel Glaser, Zsolt Kira

While significant advances have been made in single-agent perception, many applications require multiple sensing agents and cross-agent communication for benefits such as coverage and robustness.

Who2com: Collaborative Perception via Learnable Handshake Communication

no code implementations 21 Mar 2020 Yen-Cheng Liu, Junjiao Tian, Chih-Yao Ma, Nathan Glaser, Chia-Wen Kuo, Zsolt Kira

In this paper, we propose the problem of collaborative perception, where robots can combine their local observations with those of neighboring agents in a learnable way to improve accuracy on a perception task.

Multi-agent Reinforcement Learning • Scene Understanding • +1
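
The "learnable handshake" named in the title can be pictured as a lightweight query/key matching step: an agent with degraded observations broadcasts a small query, each neighbor answers with a compact key, and the match scores decide whose full feature map is worth requesting over a bandwidth-limited link. The module below is a simplified, hypothetical sketch of that idea (dimensions and projections are assumptions), not the paper's exact multi-stage protocol.

```python
import torch
import torch.nn.functional as F

class Handshake(torch.nn.Module):
    """Toy query/key handshake for choosing which neighbor to ask for features."""

    def __init__(self, feat_dim=256, code_dim=32):
        super().__init__()
        self.to_query = torch.nn.Linear(feat_dim, code_dim)  # compressed request
        self.to_key = torch.nn.Linear(feat_dim, code_dim)    # compressed response

    def forward(self, own_feat, neighbor_feats):
        # own_feat:       (feat_dim,) pooled feature of the requesting agent
        # neighbor_feats: (num_neighbors, feat_dim) pooled features of the others
        query = self.to_query(own_feat)            # (code_dim,)
        keys = self.to_key(neighbor_feats)         # (num_neighbors, code_dim)
        scores = keys @ query                      # (num_neighbors,) match scores
        weights = F.softmax(scores, dim=0)
        best = torch.argmax(weights)               # hard selection at test time
        return best, weights                       # soft weights usable in training
```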

UNO: Uncertainty-aware Noisy-Or Multimodal Fusion for Unanticipated Input Degradation

no code implementations 6 Nov 2019 Junjiao Tian, Wesley Cheung, Nathan Glaser, Yen-Cheng Liu, Zsolt Kira

Specifically, we analyze a number of uncertainty measures, each of which captures a different aspect of uncertainty, and we propose a novel way to fuse degraded inputs by scaling modality-specific output softmax probabilities.

Semantic Segmentation
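
The title and the sentence above point to a Noisy-Or combination of per-modality class probabilities, with each modality's softmax scaled down when its input looks degraded. The function below is a minimal sketch of that generic weighted Noisy-Or rule; the uncertainty-derived weights are left as inputs because the paper's specific uncertainty measures are not reproduced here.

```python
import numpy as np

def noisy_or_fusion(modality_probs, modality_weights):
    """Fuse per-modality class probabilities with a weighted Noisy-Or rule.

    modality_probs:   list of (C,) softmax vectors, one per modality
    modality_weights: list of scalars in [0, 1]; a low weight (high uncertainty)
                      shrinks that modality's vote toward "no evidence".
    """
    prob_none_fires = np.ones_like(np.asarray(modality_probs[0], dtype=np.float64))
    for p, w in zip(modality_probs, modality_weights):
        scaled = w * np.asarray(p, dtype=np.float64)
        prob_none_fires *= (1.0 - scaled)     # Noisy-Or: all modalities stay silent
    fused = 1.0 - prob_none_fires
    return fused / fused.sum()                # renormalize to a distribution

# Example: a clean RGB stream dominates a heavily degraded depth stream.
rgb = [0.7, 0.2, 0.1]
depth = [0.2, 0.3, 0.5]
print(noisy_or_fusion([rgb, depth], modality_weights=[0.9, 0.2]))
```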

A Unified Feature Disentangler for Multi-Domain Image Translation and Manipulation

1 code implementation NeurIPS 2018 Alexander H. Liu, Yen-Cheng Liu, Yu-Ying Yeh, Yu-Chiang Frank Wang

We present a novel and unified deep learning framework which is capable of learning domain-invariant representation from data across multiple domains.

Unsupervised Domain Adaptation
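
The unified framework described above can be reduced, for intuition, to an encoder that maps images from any domain into a shared latent code and a decoder that reconstructs (or translates) by pairing that code with an explicit domain label. The sketch below assumes flattened images, a one-hot domain code, and arbitrary layer sizes; in the full framework, an adversarial domain classifier on the latent code (omitted here) is typically what enforces the domain invariance.

```python
import torch
import torch.nn as nn

class FeatureDisentangler(nn.Module):
    """Toy encoder/decoder pair: shared latent code + explicit domain code."""

    def __init__(self, latent_dim=128, num_domains=3, img_dim=3 * 64 * 64):
        super().__init__()
        self.encoder = nn.Sequential(              # intended to be domain-invariant
            nn.Linear(img_dim, 512), nn.ReLU(),
            nn.Linear(512, latent_dim),
        )
        self.decoder = nn.Sequential(              # conditioned on a domain code
            nn.Linear(latent_dim + num_domains, 512), nn.ReLU(),
            nn.Linear(512, img_dim), nn.Tanh(),
        )

    def forward(self, x, domain_onehot):
        z = self.encoder(x)                        # shared representation
        return self.decoder(torch.cat([z, domain_onehot], dim=1))

# Translation: encode once, then decode with a *different* domain code.
model = FeatureDisentangler()
images = torch.randn(4, 3 * 64 * 64)
target_domain = torch.eye(3)[torch.tensor([1, 1, 1, 1])]   # map everything to domain 1
translated = model(images, target_domain)
```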

Detach and Adapt: Learning Cross-Domain Disentangled Deep Representation

no code implementations CVPR 2018 Yen-Cheng Liu, Yu-Ying Yeh, Tzu-Chien Fu, Sheng-De Wang, Wei-Chen Chiu, Yu-Chiang Frank Wang

While representation learning aims to derive interpretable features for describing visual data, representation disentanglement goes further, producing features in which particular image attributes can be identified and manipulated.

Representation Learning • Unsupervised Domain Adaptation
