Search Results for author: Yasaman Haghighi

Found 4 papers, 1 paper with code

EgoSim: An Egocentric Multi-view Simulator and Real Dataset for Body-worn Cameras during Motion and Activity

no code implementations • 25 Feb 2025 • Dominik Hollidt, Paul Streli, Jiaxi Jiang, Yasaman Haghighi, Changlin Qian, Xintong Liu, Christian Holz

This will bring fresh perspectives to established tasks in computer vision and benefit key areas such as human motion tracking, body pose estimation, or action recognition -- particularly for the lower body, which is typically occluded.

3D Pose Estimation · Action Recognition

HEADS-UP: Head-Mounted Egocentric Dataset for Trajectory Prediction in Blind Assistance Systems

no code implementations • 30 Sep 2024 • Yasaman Haghighi, Celine Demonsant, Panagiotis Chalimourdas, Maryam Tavasoli Naeini, Jhon Kevin Munoz, Bladimir Bacca, Silvan Suter, Matthieu Gani, Alexandre Alahi

In this paper, we introduce HEADS-UP, the first egocentric dataset collected from head-mounted cameras, designed specifically for trajectory prediction in blind assistance systems.

Prediction · Trajectory Prediction

Neural Implicit Dense Semantic SLAM

no code implementations • 27 Apr 2023 • Yasaman Haghighi, Suryansh Kumar, Jean-Philippe Thiran, Luc van Gool

Visual Simultaneous Localization and Mapping (vSLAM) is a widely used technique in robotics and computer vision that enables a robot to create a map of an unfamiliar environment using a camera sensor while simultaneously tracking its position over time.

3D Geometry · Scene Understanding · +2
