Inspired by frame-based methods, state-of-the-art event-based optical flow networks rely on the explicit computation of correlation volumes, which are expensive to compute and store on systems with limited processing budget and memory.
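To make the cost concrete, the sketch below computes the kind of all-pairs correlation volume used by RAFT-style frame-based flow networks. This is a generic illustration under stated assumptions (the function name and the 1/sqrt(C) normalization are illustrative choices, not NanoFlowNet's or any cited network's actual code); the point is that storage grows with (H·W)², which is what strains edge hardware.

```python
import numpy as np

def correlation_volume(f1, f2):
    """All-pairs feature correlation, as in RAFT-style flow networks.

    f1, f2: feature maps of shape (H, W, C).
    Returns a 4D volume of shape (H, W, H, W): every pixel in f1 is
    correlated with every pixel in f2, so memory scales with (H*W)**2.
    The 1/sqrt(C) scaling is an illustrative normalization choice.
    """
    H, W, C = f1.shape
    return np.einsum('ijc,klc->ijkl', f1, f2) / np.sqrt(C)
```

Even at a modest 64×64 feature resolution, the volume holds 64⁴ ≈ 16.8 M entries, which motivates architectures that avoid explicit correlation volumes on memory-limited platforms.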
In this work, we present NanoFlowNet, a lightweight convolutional neural network for real-time dense optical flow estimation on edge computing hardware.
Learning-based visual ego-motion estimation is promising yet not ready for navigating agile mobile robots in the real world.
no code implementations • 11 May 2022 • Sabrina M. Neuman, Brian Plancher, Bardienus P. Duisterhof, Srivatsan Krishnan, Colby Banbury, Mark Mazumder, Shvetank Prakash, Jason Jabbour, Aleksandra Faust, Guido C. H. E. de Croon, Vijay Janapa Reddi
Machine learning (ML) has become a pervasive tool across computing systems.
Enabling the capability of assessing risk and making risk-aware decisions is essential to applying reinforcement learning to safety-critical robots like drones.
Our network can detect propellers at a rate of 85.1% even when 60% of the propeller is occluded, and can run at up to 35 Hz on a 2 W power budget.
In this work we approach, for the first time, the intensity reconstruction problem from a self-supervised learning perspective.
The framework is based on the automatic extraction of two distinct models: 1) a neural network model trained to estimate the relationship between the robots' sensor readings and the global performance of the swarm, and 2) a probabilistic state transition model that explicitly models the local state transitions (i.e., transitions in observations from the perspective of a single robot in the swarm) given a policy.
MAMBPO uses a learned world model to improve sample efficiency compared to model-free Multi-Agent Soft Actor-Critic (MASAC).
In the field of visual ego-motion estimation for Micro Air Vehicles (MAVs), fast maneuvers remain challenging, mainly because of the large visual disparity and motion blur.
Accurate relative localization is an important requirement for a swarm of robots, especially when performing a cooperative task.
Automatic optimization of robotic behavior has been the long-standing goal of Evolutionary Robotics.
We present fully autonomous source seeking onboard a highly constrained nano quadcopter, contributing an application-specific system and observation feature design that enables inference of a deep-RL policy onboard the nano quadcopter.
We further show that MonoDepth's use of the vertical image position allows it to estimate the distance towards arbitrary obstacles, even those not appearing in the training set, but that it requires a strong edge at the ground contact point of the object to do so.
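The vertical-image-position cue can be illustrated with classical flat-ground pinhole geometry: for a camera at height h above a level floor, an object's ground contact row y maps to distance d = f·h / (y − y_horizon). The sketch below shows only this textbook relation, not MonoDepth's learned mapping, and all parameter names are illustrative assumptions.

```python
def ground_plane_distance(y_contact, y_horizon, focal_px, cam_height_m):
    """Distance to an obstacle from the image row of its ground contact point.

    Assumes a pinhole camera held level above a flat ground plane at
    height cam_height_m, with the horizon at row y_horizon (image rows
    grow downward). Illustrative of the vertical-position cue only.
    """
    dy = y_contact - y_horizon
    if dy <= 0:
        raise ValueError("ground contact point must lie below the horizon")
    return focal_px * cam_height_m / dy
```

This also shows why a visible edge at the ground contact point matters: without a reliable contact row y, the relation gives no distance.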
Convolutional layers with input synapses characterized by single and multiple transmission delays are employed for feature and local motion perception, respectively, while global motion selectivity emerges in a final fully connected layer.
We then formally show that these local states can only coexist when the global desired pattern is achieved and that, until this occurs, there is always a sequence of actions that will lead from the current pattern to the desired pattern.
In addition, a method for estimating the divergence from event-based optical flow is introduced, which accounts for the aperture problem.