RELLIS-3D is a multimodal dataset for off-road robotics, collected in an off-road environment and containing annotations for 13,556 LiDAR scans and 6,235 images. The data was collected on the Rellis Campus of Texas A&M University and presents challenges to existing algorithms related to class imbalance and environmental topography. The dataset also provides full-stack sensor data in ROS bag format, including RGB camera images, LiDAR point clouds, pairs of stereo images, high-precision GPS measurements, and IMU data.
15 PAPERS • 2 BENCHMARKS
A novel egocentric dataset collected from the social mobile manipulator JackRabbot. It includes 64 minutes of annotated multimodal sensor data: stereo cylindrical 360-degree RGB video at 15 fps, 3D point clouds from two Velodyne-16 LiDARs, line 3D point clouds from two SICK LiDARs, an audio signal, RGB-D video at 30 fps, a 360-degree spherical image from a fisheye camera, and encoder values from the robot's wheels.
11 PAPERS • NO BENCHMARKS YET
MIDGARD is an open-source simulator for autonomous robot navigation in outdoor unstructured environments. It is designed to train autonomous agents (e.g., unmanned ground vehicles) in photorealistic 3D environments and to improve the generalization of learning-based agents through variability in training scenarios.
2 PAPERS • NO BENCHMARKS YET
The LWIR DoFP Dataset of Road Scene (LDDRS) is a road detection dataset with 2,113 annotated images. It covers both day and night scenes, with multiple cars and pedestrians per image.
1 PAPER • NO BENCHMARKS YET