no code implementations • 13 Apr 2024 • Eric Price, Aamir Ahmad
Using UAVs for wildlife observation and motion capture offers many advantages for studying animals in the wild, especially grazing herds in open terrain.
1 code implementation • 7 May 2023 • Elia Bonetto, Chenghao Xu, Aamir Ahmad
To solve this, we present a fully customizable framework for generating realistic animated dynamic environments (GRADE) for robotics research, first introduced in [1].
1 code implementation • 30 Apr 2023 • Elia Bonetto, Aamir Ahmad
Through extensive evaluations of our model with real-world data from i) the limited datasets available on the internet and ii) a new dataset collected and manually labelled by us, we show that we can detect zebras by using only synthetic data during training.
1 code implementation • 19 Feb 2023 • Eric Price, Aamir Ahmad
In this paper, we propose a new annotation method which leverages a combination of a learning-based detector (SSD) and a learning-based tracker (RE$^3$).
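The core idea of combining a detector with a tracker for annotation can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `StubDetector` and `StubTracker` classes are hypothetical stand-ins for an SSD-style detector and a Re3-style tracker, and the re-detection interval is an assumed parameter.

```python
class StubDetector:
    """Hypothetical stand-in for a learned detector (e.g. SSD-style)."""
    def detect(self, frame):
        # A real detector would return boxes from the image; here, one fixed box.
        return [(10, 10, 40, 40)]  # (x, y, w, h)

class StubTracker:
    """Hypothetical stand-in for a learned single-object tracker (e.g. Re3-style)."""
    def init(self, frame, box):
        self.box = box
    def update(self, frame):
        # A real tracker would use appearance cues; here, assume slow rightward motion.
        x, y, w, h = self.box
        self.box = (x + 1, y, w, h)
        return self.box

def annotate(frames, detector, tracker, redetect_every=5):
    """Label every frame: run the (slow) detector periodically and the (fast)
    tracker on the frames in between, re-initializing it at each detection."""
    annotations = []
    for i, frame in enumerate(frames):
        if i % redetect_every == 0:
            box = detector.detect(frame)[0]
            tracker.init(frame, box)
        else:
            box = tracker.update(frame)
        annotations.append(box)
    return annotations
```

The appeal of this combination is that the detector anchors the labels to absolute evidence at regular intervals, while the tracker propagates them cheaply through the intermediate frames, drastically reducing the number of frames a human annotator must correct.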
1 code implementation • 28 Sep 2022 • Nitin Saini, Chun-Hao P. Huang, Michael J. Black, Aamir Ahmad
Second, we learn a probability distribution of short human motion sequences ($\sim$1 sec) relative to the ground plane and leverage it to disambiguate between the camera and human motion.
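The disambiguation idea can be illustrated with a toy example. The Gaussian walking-speed prior and the two all-or-nothing hypotheses below are invented for this sketch; the paper learns a distribution over full short motion sequences, not the simple per-frame speed prior used here.

```python
import math

def log_prior(human_speeds, mean=1.4, std=0.6):
    """Log-probability of a short motion sequence under a toy Gaussian prior
    over human walking speed (m/s) relative to the ground plane."""
    const = math.log(std * math.sqrt(2 * math.pi))
    return sum(-0.5 * ((v - mean) / std) ** 2 - const for v in human_speeds)

def disambiguate(observed_relative_speeds):
    """Attribute observed camera-relative motion to either the camera or the
    human by asking which attribution makes the implied human motion more
    plausible under the prior."""
    # Hypothesis A: the camera moves, the human stands still.
    human_if_camera_moves = [0.0] * len(observed_relative_speeds)
    # Hypothesis B: the camera is static, the human produces all the motion.
    human_if_human_moves = observed_relative_speeds
    score_a = log_prior(human_if_camera_moves)
    score_b = log_prior(human_if_human_moves)
    return "camera" if score_a > score_b else "human"
```

Under this toy prior, relative motion near typical walking speed is attributed to the human, while implausibly fast motion is attributed to the camera.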
1 code implementation • 20 Jan 2022 • Nitin Saini, Elia Bonetto, Eric Price, Aamir Ahmad, Michael J. Black
In this letter, we present a novel markerless 3D human motion capture (MoCap) system for unstructured, outdoor environments that uses a team of autonomous unmanned aerial vehicles (UAVs) with on-board RGB cameras and computation.
no code implementations • 13 Jul 2020 • Rahul Tallamraju, Nitin Saini, Elia Bonetto, Michael Pabst, Yu Tang Liu, Michael J. Black, Aamir Ahmad
We focus on vision-based MoCap, where the objective is to estimate the trajectory of body pose and shape of a single moving person using multiple micro aerial vehicles.