Search Results for author: Yung-Hsu Yang

Found 5 papers, 2 papers with code

UniDepth: Universal Monocular Metric Depth Estimation

1 code implementation · 27 Mar 2024 · Luigi Piccinelli, Yung-Hsu Yang, Christos Sakaridis, Mattia Segu, Siyuan Li, Luc van Gool, Fisher Yu

However, the remarkable accuracy of recent MMDE methods is confined to their training domains.

Ranked #1 on Monocular Depth Estimation on NYU-Depth V2 (using extra training data)

Monocular Depth Estimation

CC-3DT: Panoramic 3D Object Tracking via Cross-Camera Fusion

no code implementations · 2 Dec 2022 · Tobias Fischer, Yung-Hsu Yang, Suryansh Kumar, Min Sun, Fisher Yu

To track the 3D locations and trajectories of the other traffic participants at any given time, modern autonomous vehicles are equipped with multiple cameras that cover the vehicle's full surroundings.

3D Object Tracking · Autonomous Vehicles · +2

Dense Prediction with Attentive Feature Aggregation

no code implementations · 1 Nov 2021 · Yung-Hsu Yang, Thomas E. Huang, Min Sun, Samuel Rota Bulò, Peter Kontschieder, Fisher Yu

Our experiments show consistent and significant improvements on challenging semantic segmentation benchmarks, including Cityscapes, BDD100K, and Mapillary Vistas, at negligible computational and parameter overhead.

Boundary Detection · Semantic Segmentation

Monocular Quasi-Dense 3D Object Tracking

1 code implementation · 12 Mar 2021 · Hou-Ning Hu, Yung-Hsu Yang, Tobias Fischer, Trevor Darrell, Fisher Yu, Min Sun

Experiments on our proposed simulation data and real-world benchmarks, including the KITTI, nuScenes, and Waymo datasets, show that our tracking framework offers robust object association and tracking in urban-driving scenarios.

3D Object Tracking · Autonomous Driving · +3
