Mixed Reality
52 papers with code • 0 benchmarks • 1 dataset
Most implemented papers
Normalized Object Coordinate Space for Category-Level 6D Object Pose and Size Estimation
The goal of this paper is to estimate the 6D pose and dimensions of unseen object instances in an RGB-D image.
Artificial Intelligence Assisted Infrastructure Assessment Using Mixed Reality Systems
Conventional methods for visual assessment of civil infrastructures have certain limitations, such as subjectivity of the collected data, long inspection time, and high cost of labor.
Learning Convolutional Transforms for Lossy Point Cloud Geometry Compression
Efficient point cloud compression is fundamental to enable the deployment of virtual and mixed reality applications, since the number of points to encode can be on the order of millions.
Improved Deep Point Cloud Geometry Compression
Point clouds have been recognized as a crucial data structure for 3D content and are essential in a number of applications such as virtual and mixed reality, autonomous driving, cultural heritage, etc.
HoloLens 2 Research Mode as a Tool for Computer Vision Research
Mixed reality headsets, such as the Microsoft HoloLens 2, are powerful sensing devices with integrated compute capabilities, which makes them ideal platforms for computer vision research.
Neural RGB-D Surface Reconstruction
Obtaining high-quality 3D reconstructions of room-scale scenes is of paramount importance for upcoming applications in AR or VR.
Virtual, Augmented, and Mixed Reality for Human-Robot Interaction: A Survey and Virtual Design Element Taxonomy
Virtual, Augmented, and Mixed Reality for Human-Robot Interaction (VAM-HRI) has been gaining considerable attention in research in recent years.
Fast and Lightweight Scene Regressor for Camera Relocalization
The proposed approach uses sparse descriptors, rather than a dense RGB image, to regress the scene coordinates (a minimal code sketch of this idea appears after this list).
Affordance segmentation of hand-occluded containers from exocentric images
To train the model, we annotated the visual affordances of an existing dataset of mixed-reality, third-person (exocentric) images of hand-held containers.
Human3.6M: Large Scale Datasets and Predictive Methods for 3D Human Sensing in Natural Environments
We introduce a new dataset, Human3.6M, of 3.6 million accurate 3D human poses, acquired by recording the performance of 5 female and 6 male subjects under 4 different viewpoints, for training realistic human sensing systems and for evaluating the next generation of human pose estimation models and algorithms.
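As referenced in the Fast and Lightweight Scene Regressor entry above, scene coordinate regression from sparse descriptors is typically paired with a PnP solver to recover the camera pose. The snippet below is a minimal sketch under assumed names and shapes, not the authors' implementation: a small MLP (here called SceneCoordRegressor, an illustrative name) maps each sparse keypoint descriptor to a 3D scene coordinate, and the resulting 2D-3D matches are passed to OpenCV's solvePnPRansac. The descriptor dimension, network size, keypoints, and intrinsics K are placeholder assumptions.

```python
# Minimal sketch (assumed code, not the paper's implementation) of sparse
# scene coordinate regression followed by PnP-based camera relocalization.
import numpy as np
import torch
import torch.nn as nn
import cv2  # opencv-python

class SceneCoordRegressor(nn.Module):
    """Hypothetical MLP mapping a sparse local descriptor to a 3D scene coordinate."""
    def __init__(self, desc_dim=256, hidden=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(desc_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),  # (x, y, z) in the scene/world frame
        )

    def forward(self, desc):      # desc: (N, desc_dim) sparse keypoint descriptors
        return self.net(desc)     # (N, 3) predicted scene coordinates

# Placeholder inputs: keypoint pixel locations, descriptors, and camera intrinsics K.
# In practice these would come from a sparse feature extractor and calibration.
keypoints = (np.random.rand(200, 2) * [640, 480]).astype(np.float32)
descriptors = torch.randn(200, 256)
K = np.array([[525.0, 0.0, 320.0],
              [0.0, 525.0, 240.0],
              [0.0, 0.0, 1.0]])

model = SceneCoordRegressor()  # would be trained with a coordinate-regression loss
with torch.no_grad():
    scene_xyz = model(descriptors).numpy().astype(np.float32)

# 2D-3D correspondences -> 6-DoF camera pose via PnP with RANSAC.
ok, rvec, tvec, inliers = cv2.solvePnPRansac(
    scene_xyz, keypoints, K, None,
    iterationsCount=1000, reprojectionError=8.0)
```

Per the summary above, operating on sparse descriptors rather than a dense RGB image keeps the regressor small and the PnP stage cheap, which is presumably what makes the approach fast and lightweight.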