no code implementations • 11 Apr 2023 • Ayon Sen, Gang Pan, Anton Mitrokhin, Ashraful Islam
Accurate camera-to-lidar calibration is a requirement for sensor data fusion in many 3D perception tasks.
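To make the fusion step concrete, here is a minimal sketch (not the paper's method) of how an extrinsic camera-to-lidar calibration is typically used: a hypothetical 4x4 rigid transform `T_cam_lidar` moves lidar points into the camera frame, and an illustrative pinhole intrinsic matrix `K` projects them into the image. All numeric values are made up for the example.

```python
import numpy as np

def project_lidar_to_image(points_lidar, T_cam_lidar, K):
    """Project Nx3 lidar points into pixel coordinates via extrinsics + intrinsics."""
    n = points_lidar.shape[0]
    homog = np.hstack([points_lidar, np.ones((n, 1))])   # Nx4 homogeneous points
    points_cam = (T_cam_lidar @ homog.T).T[:, :3]        # Nx3 in camera frame
    in_front = points_cam[:, 2] > 0                      # keep points ahead of the camera
    uvw = (K @ points_cam[in_front].T).T                 # pinhole projection
    return uvw[:, :2] / uvw[:, 2:3]                      # perspective divide -> pixels

# Hypothetical calibration: small lidar-to-camera offset, generic intrinsics.
T = np.eye(4)
T[:3, 3] = [0.1, 0.0, 0.2]
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])
pts = np.array([[1.0, 0.5, 5.0]])
print(project_lidar_to_image(pts, T, K))
```

An inaccurate `T_cam_lidar` shifts every projected point, which is why calibration accuracy directly bounds the quality of the fused sensor data.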
no code implementations • 6 May 2022 • Levi Burner, Anton Mitrokhin, Cornelia Fermüller, Yiannis Aloimonos
Depth and segmentation are provided at 60 Hz for the event cameras and 30 Hz for the classical camera.
no code implementations • CVPR 2020 • Anton Mitrokhin, Zhiyuan Hua, Cornelia Fermüller, Yiannis Aloimonos
In this work, we present a Graph Convolutional Neural Network for the task of motion segmentation in scenes observed by a moving camera.
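As a hedged illustration of the generic building block such a network uses (not the paper's specific architecture), a single graph-convolution layer aggregates each node's neighborhood features and applies a learned linear projection. `A`, `H`, and `W` below are toy values.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One generic GCN layer: mean-aggregate neighbor features, project, ReLU."""
    A_hat = A + np.eye(A.shape[0])           # add self-loops
    deg = A_hat.sum(axis=1, keepdims=True)   # node degrees (with self-loop)
    H_agg = (A_hat @ H) / deg                # mean aggregation over neighborhood
    return np.maximum(H_agg @ W, 0.0)        # linear projection + ReLU

# Toy 3-node path graph with one-hot node features and identity weights.
A = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 0.0]])
H = np.eye(3)
W = np.eye(3)
print(gcn_layer(A, H, W))
```

For motion segmentation, the graph nodes would carry motion features (e.g. flow or event statistics) and the final layer would classify each node as belonging to the static scene or to an independently moving object.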
5 code implementations • ICLR 2020 • Chengxi Ye, Matthew Evanusa, Hua He, Anton Mitrokhin, Tom Goldstein, James A. Yorke, Cornelia Fermüller, Yiannis Aloimonos
Convolution, a central operation in Convolutional Neural Networks (CNNs), applies a kernel to overlapping regions shifted across the image.
no code implementations • 18 Mar 2019 • Anton Mitrokhin, Chengxi Ye, Cornelia Fermüller, Yiannis Aloimonos, Tobi Delbruck
In addition to camera egomotion and a dense depth map, the network estimates pixel-wise independently moving object segmentation and computes per-object 3D translational velocities for moving objects.
no code implementations • 23 Sep 2018 • Chengxi Ye, Anton Mitrokhin, Cornelia Fermüller, James A. Yorke, Yiannis Aloimonos
In this work we present a lightweight, unsupervised learning pipeline for dense depth, optical flow and egomotion estimation from sparse event output of the Dynamic Vision Sensor (DVS).
no code implementations • 12 Mar 2018 • Anton Mitrokhin, Cornelia Fermüller, Chethan Parameshwara, Yiannis Aloimonos
Event-based vision sensors, such as the Dynamic Vision Sensor (DVS), are ideally suited for real-time motion analysis.