KITTI (Karlsruhe Institute of Technology and Toyota Technological Institute) is one of the most popular datasets for use in mobile robotics and autonomous driving. It consists of hours of traffic scenarios recorded with a variety of sensor modalities, including high-resolution RGB, grayscale stereo cameras, and a 3D laser scanner. Despite its popularity, the dataset itself does not contain ground truth for semantic segmentation. However, various researchers have manually annotated parts of the dataset to fit their needs. Álvarez et al. generated ground truth for 323 images from the road detection challenge with three classes: road, vertical, and sky. Zhang et al. annotated 252 acquisitions (140 for training and 112 for testing) – RGB and Velodyne scans – from the tracking challenge for ten object categories: building, sky, road, vegetation, sidewalk, car, pedestrian, cyclist, sign/pole, and fence. Ros et al. labeled 170 training images and 46 testing images (from the visual odometry challenge).
3,167 PAPERS • 139 BENCHMARKS
ShapeNet is a large-scale repository of 3D CAD models developed by researchers from Stanford University, Princeton University and the Toyota Technological Institute at Chicago, USA. The repository contains over 3 million models, of which 220,000 are classified into 3,135 classes arranged using WordNet hypernym-hyponym relationships. The ShapeNet Parts subset contains 31,693 meshes categorised into 16 common object classes (e.g. table, chair, plane). Each shape's ground truth contains 2–5 parts (with a total of 50 part classes).
1,656 PAPERS • 13 BENCHMARKS
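For readers who want to inspect ShapeNet Parts programmatically, a minimal loading sketch in Python is shown below. It assumes the commonly distributed layout in which each shape is a `.pts` file of xyz coordinates paired with a `.seg` file of per-point part labels; the paths and file names are purely illustrative.

```python
from pathlib import Path
import numpy as np

def load_shapenet_part(points_file: Path, labels_file: Path):
    """Load one ShapeNet Parts shape: an (N, 3) point cloud and N per-point part labels."""
    points = np.loadtxt(points_file, dtype=np.float32)   # one "x y z" triple per line
    labels = np.loadtxt(labels_file, dtype=np.int64)     # one integer part label per line
    assert points.shape[0] == labels.shape[0], "point/label count mismatch"
    return points, labels

# Illustrative paths; the actual directory layout depends on the release you download.
pts, seg = load_shapenet_part(
    Path("shapenet_parts/03001627/points/1a2b3c.pts"),
    Path("shapenet_parts/03001627/points_label/1a2b3c.seg"),
)
print(pts.shape, np.unique(seg))
```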
The SYNTHIA dataset is a synthetic dataset that consists of 9,400 multi-viewpoint photo-realistic frames rendered from a virtual city and comes with pixel-level semantic annotations for 13 classes. Each frame has a resolution of 1280 × 960.
496 PAPERS • 10 BENCHMARKS
KITTI-360 is a large-scale dataset that contains rich sensory information and full annotations. It is the successor of the popular KITTI dataset, providing more comprehensive semantic/instance labels in 2D and 3D, richer 360 degree sensory information (fisheye images and pushbroom laser scans), very accurate and geo-localized vehicle and camera poses, and a series of new challenging benchmarks.
138 PAPERS • 5 BENCHMARKS
We introduce our new dataset, Spaces, to provide a more challenging shared dataset for future view synthesis research. Spaces consists of 100 indoor and outdoor scenes, captured using a 16-camera rig. For each scene, we captured image sets at 5-10 slightly different rig positions (within ~10 cm of each other). This jittering of the rig position provides a flexible dataset for view synthesis, as we can mix views from different rig positions for the same scene during training. We calibrated the intrinsics and the relative pose of the rig cameras using a standard structure-from-motion approach, using the nominal rig layout as a prior. We corrected exposure differences. For our main experiments we undistort the images and downsample them to a resolution of 800 × 480. We use 90 scenes from the dataset for training and hold out 10 for evaluation.
30 PAPERS • NO BENCHMARKS YET
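Below is a minimal sketch of the 90/10 scene split and the 800 × 480 downsampling described for Spaces, assuming one directory per scene and using OpenCV for resizing; the directory layout, image format, and random seed are assumptions, not part of the dataset release.

```python
import random
from pathlib import Path
import cv2  # assumed available for resizing; any image library would do

SCENE_ROOT = Path("spaces_dataset/scenes")   # hypothetical layout: one folder per scene
TARGET_SIZE = (800, 480)                     # (width, height) used for the main experiments

scenes = sorted(p for p in SCENE_ROOT.iterdir() if p.is_dir())
random.seed(0)
random.shuffle(scenes)
eval_scenes, train_scenes = scenes[:10], scenes[10:]   # hold out 10 of the 100 scenes

def downsample(image_path: Path):
    """Resize one (already undistorted) rig image to the 800x480 working resolution."""
    img = cv2.imread(str(image_path))
    return cv2.resize(img, TARGET_SIZE, interpolation=cv2.INTER_AREA)
```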
OmniObject3D is a large-vocabulary 3D object dataset of massive, high-quality, real-scanned 3D objects.
26 PAPERS • NO BENCHMARKS YET
The PhotoShape dataset consists of photorealistic, relightable 3D shapes produced by the method of Park et al. (2021).
18 PAPERS • 1 BENCHMARK
This dataset focuses on two blur types: camera motion blur and defocus blur. For each type of blur we synthesize 5 scenes using Blender. We manually place multi-view cameras to mimic real data capture. To render images with camera motion blur, we randomly perturb the camera pose and then linearly interpolate poses between the original and perturbed poses for each view. We render images from the interpolated poses and blend them in linear RGB space to generate the final blurry images. For defocus blur, we use Blender's built-in functionality to render depth-of-field images. We fix the aperture and randomly choose a focus plane between the nearest and furthest depth.
17 PAPERS • NO BENCHMARKS YET
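A minimal sketch of the camera-motion-blur synthesis described above: perturb the camera pose, linearly interpolate between the original and perturbed poses, render each intermediate pose, and average the renders in linear RGB. The `render` callable stands in for the actual Blender render, and the perturbation magnitude is an arbitrary assumption.

```python
import numpy as np

def synthesize_motion_blur(pose, render, n_steps=10, jitter=0.02):
    """Average renders along a linearly interpolated camera path to mimic motion blur.

    `pose` is a 4x4 camera-to-world matrix and `render(pose) -> HxWx3 linear-RGB image`
    is a placeholder for the actual renderer (e.g. a Blender render call).
    """
    rng = np.random.default_rng(0)
    perturbed = pose.copy()
    perturbed[:3, 3] += rng.normal(scale=jitter, size=3)   # randomly perturb the camera position

    frames = []
    for t in np.linspace(0.0, 1.0, n_steps):
        # Naive linear blend of the two pose matrices; a full implementation would
        # slerp the rotation part instead of interpolating it component-wise.
        p = (1.0 - t) * pose + t * perturbed
        frames.append(render(p))

    # Blend in linear RGB (simple average) to obtain the final blurry image.
    return np.mean(frames, axis=0)
```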
The shiny folder contains 8 scenes with challenging view-dependent effects used in our paper. We also provide additional scenes in the shiny_extended folder. The test images for each scene used in our paper consist of one of every eight images in alphabetical order.
15 PAPERS • 1 BENCHMARK
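A minimal sketch of reproducing that test split for the shiny scenes: sort a scene's images alphabetically and hold out every eighth one as a test view. The image extension and directory layout are assumptions.

```python
from pathlib import Path

def split_shiny_scene(scene_dir: Path):
    """Hold out one of every eight images (in alphabetical order) as test views."""
    images = sorted(scene_dir.glob("*.png"))              # assumed image format
    test = images[::8]                                     # every eighth image as a test view
    train = [p for i, p in enumerate(images) if i % 8 != 0]
    return train, test
```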
OMMO is a new benchmark for several outdoor NeRF-based tasks, such as novel view synthesis, surface reconstruction, and multi-modal NeRF. It contains complex objects and scenes with calibrated images, point clouds and prompt annotations.
4 PAPERS • NO BENCHMARKS YET
We present a large-scale dataset for 3D urban scene understanding. Compared to existing datasets, our dataset consists of 75 outdoor urban scenes with diverse backgrounds, encompassing over 15,000 images. These scenes offer 360° hemispherical views, capturing diverse foreground objects illuminated under various lighting conditions. Additionally, our dataset encompasses scenes that are not limited to forward-driving views, addressing the limitations of previous datasets such as limited overlap and coverage between camera views. The closest pre-existing dataset for generalizable evaluation is DTU [2] (80 scenes), which comprises mostly indoor objects and does not provide multiple foreground objects or background scenes.
3 PAPERS • 1 BENCHMARK
SWORD contains around 1,500 training videos and 290 test videos, with 50 frames per video on average. The dataset was obtained by processing manually captured video sequences of static real-life urban scenes. Its main property is the abundance of close objects and, consequently, the larger prevalence of occlusions. According to the introduced heuristic, the mean area of occluded image parts in SWORD is approximately five times larger than in RealEstate10k (14% vs. 3%, respectively). This motivates the collection and use of SWORD and explains why it allows training more powerful models despite its smaller size.
UASOL is an RGB-D stereo dataset that contains 160,902 frames filmed across 33 different scenes, each with between 2k and 10k frames. The frames show different paths from the perspective of a pedestrian, including sidewalks, trails, roads, etc. The images were extracted from video recorded at 15 fps at HD2K resolution (2280 × 1282 pixels). The dataset also provides a GPS geolocation tag for each second of the sequences and reflects different weather conditions. Up to four different people filmed the dataset at different times of the day.
This is the dataset for the CGF 2021 paper "DONeRF: Towards Real-Time Rendering of Compact Neural Radiance Fields using Depth Oracle Networks".
2 PAPERS • 1 BENCHMARK
The IBL-NeRF dataset contains multi-view images together with their intrinsic components.
2 PAPERS • NO BENCHMARKS YET
We present SILVR, a dataset of light field images for six-degrees-of-freedom navigation in large, fully immersive volumes. SILVR is short for "Synthetic Immersive Large-Volume Ray".
The LLNeRF dataset is a real-world benchmark for model learning and evaluation. To obtain real low-illumination images with real noise distributions, photos are taken in nighttime outdoor scenes or low-light indoor scenes containing diverse objects. Since ISP operations are device-dependent and noise distributions differ across devices, the data is collected using both a mobile phone camera and a DSLR camera to enrich the diversity of the dataset.
1 PAPER • NO BENCHMARKS YET
Replay is a collection of multi-view, multi-modal videos of humans interacting socially. Each scene is filmed in high production quality from different viewpoints, with several static cameras as well as wearable action cameras, and recorded with a large array of microphones at different positions in the room. The full Replay dataset consists of 68 scenes of social interactions between people, such as playing board games, exercising, or unwrapping presents. Each scene is about 5 minutes long and filmed with 12 cameras, static and dynamic. Audio is captured separately by 12 binaural microphones and additional near-range microphones for each actor and for each egocentric video. All sensors are temporally synchronized, undistorted, geometrically calibrated, and color calibrated.
SPARF is a large-scale ShapeNet-based synthetic dataset for novel view synthesis consisting of ~17 million images rendered from nearly 40,000 shapes at high resolution (400×400 pixels).
A synthetic dataset comprising three different environments for multi-camera dynamic novel view synthesis of soccer scenes. The dataset is compatible with Nerfstudio and includes data parsers with various settings to reproduce the setup of our paper "Dynamic NeRFs for Soccer Scenes" and more.
A dataset of paired thermal and RGB images comprising ten diverse scenes (six indoor and four outdoor) for 3D scene reconstruction and novel view synthesis (e.g. with NeRF).