93 papers with code • 0 benchmarks • 11 datasets
Self-driving cars: the task of making a car that can drive itself without human guidance.
(Image credit: Learning a Driving Simulator)
As the application of deep neural networks proliferates in numerous areas such as medical imaging, video surveillance, and self-driving cars, the need to explain the decisions of these models has become a hot research topic, at both the global and the local level.
We present a generic neural network architecture that uses Composite Fields to detect and construct a spatio-temporal pose: a single, connected graph whose nodes are the semantic keypoints (e.g., a person's body joints) across multiple frames.
Ranked #4 on Multi-Person Pose Estimation on COCO
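The spatio-temporal pose described above can be pictured as a graph whose nodes are (frame, keypoint) pairs, with spatial edges linking joints within a frame and temporal edges linking the same joint across consecutive frames. The sketch below illustrates only that graph structure; the joint names, skeleton subset, and `build_pose_graph` function are hypothetical and are not the paper's Composite Fields implementation.

```python
# Illustrative spatio-temporal pose graph: nodes are (frame, joint)
# pairs; edges connect joints within a frame (skeleton) and the same
# joint across consecutive frames. Hypothetical subset of body joints.
JOINTS = ["nose", "neck", "l_shoulder", "r_shoulder"]
SKELETON = [("nose", "neck"), ("neck", "l_shoulder"), ("neck", "r_shoulder")]

def build_pose_graph(num_frames):
    """Return an adjacency list over (frame, joint) nodes."""
    graph = {(t, j): set() for t in range(num_frames) for j in JOINTS}
    for t in range(num_frames):
        # Spatial edges: skeleton connections within frame t.
        for a, b in SKELETON:
            graph[(t, a)].add((t, b))
            graph[(t, b)].add((t, a))
        # Temporal edges: same joint in consecutive frames.
        if t + 1 < num_frames:
            for j in JOINTS:
                graph[(t, j)].add((t + 1, j))
                graph[(t + 1, j)].add((t, j))
    return graph

g = build_pose_graph(3)
print(len(g))  # 12 nodes: 4 joints x 3 frames
```

Because every joint connects (directly or via the neck) to every other within a frame, and each joint links to itself in the next frame, the result is a single connected graph, matching the "single, connected graph" description in the snippet.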
Estimating scene geometry from data obtained with cost-effective sensors is key for robots and self-driving cars.
Although cameras are ubiquitous, robotic platforms typically rely on active sensors like LiDAR for direct 3D perception.
Understanding human motion behavior is critical for autonomous moving platforms (like self-driving cars and social robots) if they are to navigate human-centric environments.
Ranked #12 on Trajectory Prediction on Stanford Drone
The basis for most vision-based applications, such as robotics, self-driving cars, and potentially augmented and virtual reality, is a robust, continuous estimate of the position and orientation of a camera system w.r.t. the observed environment (scene).