Detection and Segmentation of Custom Objects using High Distraction Photorealistic Synthetic Data

28 Jul 2020 · Roey Ron, Gil Elbaz

We show a straightforward and useful methodology for performing instance segmentation using synthetic data. We apply this methodology to a basic case and derive insights through quantitative analysis. We created a new public dataset, the Expo Markers Dataset, intended for detection and segmentation tasks. This dataset contains 5,000 synthetic photorealistic images with their corresponding pixel-perfect segmentation ground truth. The goal is to achieve high performance on manually-gathered and annotated real-world data of custom objects. We do that by creating 3D models of the target objects and other possible distraction objects and placing them within a simulated environment. Expo markers were chosen for this task because they fit our requirements for a custom object: exact texture, size, and 3D shape. An additional advantage is the availability of this object in offices around the world, which makes it easy to test and validate our results. We generate the data using a domain randomization technique that also simulates other photorealistic objects in the scene, known as distraction objects. These objects provide visual complexity, occlusions, and lighting challenges that help our model gain robustness during training. We are also releasing the manually-gathered datasets used for comparison and evaluation of our synthetic dataset. This white paper provides strong evidence that photorealistic simulated data can be used in practical real-world applications as a more scalable and flexible solution than manually-captured data. Code is available at the following address: https://github.com/DataGenResearchTeam/expo_markers
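
As a rough illustration of how a synthetic dataset like this could be consumed for instance segmentation, the sketch below fine-tunes a Mask R-CNN with Detectron2. It assumes the annotations are exported in COCO format with a single "marker" class; the file paths, class count, and choice of Detectron2 are assumptions for the example, not necessarily the setup used in the authors' repository linked above.

```python
import os

from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.data.datasets import register_coco_instances
from detectron2.engine import DefaultTrainer

# Hypothetical paths: register the synthetic images and their COCO-format
# pixel-perfect annotations under a dataset name Detectron2 can look up.
register_coco_instances(
    "expo_markers_synth_train", {},
    "expo_markers/annotations/train.json",
    "expo_markers/images/train",
)

# Start from a COCO-pretrained Mask R-CNN and fine-tune on the synthetic data.
cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file(
    "COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml"))
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url(
    "COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml")
cfg.DATASETS.TRAIN = ("expo_markers_synth_train",)
cfg.DATASETS.TEST = ()
cfg.MODEL.ROI_HEADS.NUM_CLASSES = 1   # assumed: one custom "marker" class
cfg.SOLVER.IMS_PER_BATCH = 2
cfg.SOLVER.BASE_LR = 0.00025
cfg.SOLVER.MAX_ITER = 5000            # illustrative schedule, not tuned
cfg.OUTPUT_DIR = "./output_expo_markers"
os.makedirs(cfg.OUTPUT_DIR, exist_ok=True)

trainer = DefaultTrainer(cfg)
trainer.resume_or_load(resume=False)
trainer.train()
```

Evaluation against the manually-gathered real-world images would then follow the same registration pattern, pointing a test split at the real annotations instead of the synthetic ones.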


Datasets


Introduced in the Paper:

EXPO-HD

Used in the Paper:

MS COCO

