The ScanNet200 benchmark studies 3D semantic segmentation with 200 class categories, an order of magnitude more than previous 3D scene understanding benchmarks. The scene data is identical to ScanNet, but is annotated with a larger vocabulary for semantic and instance segmentation.
22 PAPERS • 3 BENCHMARKS
…Usage: 2D/3D image segmentation
Format: HDF5
Libraries to read HDF5 files: 1) silx: https://github.com/silx-kit/silx 2) h5py: https://www.h5py.org 3) pymicro: https://github.com/heprom/pymicro
Trained models to segment this dataset: https://doi.org/10.5281/zenodo.4601560
Please cite us as @ARTICLE{10.3389/fmats.2021.761229, AUTHOR={Bertoldo, João P. C. and Decencière, Etienne and Ryckelynck, David and Proudhon, Henry}, TITLE={A Modular U-Net for Automated Segmentation of X-Ray Tomography Images in Composite Materials}, JOURNAL={Frontiers in
1 PAPER • 1 BENCHMARK
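Since the entry above distributes the tomography volumes as HDF5 and recommends h5py as one of the readers, a minimal sketch of writing and reading such a file with h5py may help; the file name, dataset key, and array shape here are illustrative assumptions, not the actual layout of the released files.

```python
import h5py
import numpy as np

# Hypothetical example: the key "volume" and shape (4, 64, 64) are
# placeholders, not the real structure of the dataset's HDF5 files.
with h5py.File("example.h5", "w") as f:
    f.create_dataset("volume", data=np.zeros((4, 64, 64), dtype=np.uint8))

# Reading back: indexing with [()] loads the full array into memory;
# slicing (e.g. f["volume"][0]) reads only part of it from disk.
with h5py.File("example.h5", "r") as f:
    vol = f["volume"][()]

print(vol.shape)  # (4, 64, 64)
```

For large tomography volumes, slicing the h5py dataset object instead of materializing it with `[()]` avoids loading the whole array at once.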
1 PAPER • NO BENCHMARKS YET
Our project (STPLS3D) aims to provide a large-scale aerial photogrammetry dataset with synthetic and real annotated 3D point clouds for semantic and instance segmentation tasks.
32 PAPERS • 3 BENCHMARKS