Evaluation of Embedded Platforms for Lower Limb Prosthesis with Visual Sensing Capabilities

26 Jun 2020 · Rafael L. da Silva, Nathan Starliper, Boxuan Zhong, He Helen Huang, Edgar Lobaton

Lower limb prostheses can benefit from embedded systems that apply computer vision techniques to enhance autonomous control and context awareness for intelligent decision making. To fill a gap in the current literature on autonomous systems for prosthetic legs that employ computer vision, we evaluate the performance of two off-the-shelf platforms, the Jetson TX2 and the Raspberry Pi 3, by assessing their CPU load, memory usage, run time, and classification accuracy across different image sizes and widely used computer vision algorithms. We use a terrain-recognition dataset that we collected with a leg-mounted camera, which would enable context awareness for lower limb prostheses. We show that, given reasonably large images and an appropriate frame selection method, it is possible to identify the terrain a subject is about to step on, with the possibility of reconstructing the surface and estimating its inclination. This work is part of a proposed system equipped with an embedded camera and an inertial measurement unit to recognize different types of terrain and estimate the slope of the surface in front of the subject.
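To make the evaluation criteria concrete, the sketch below shows one way to measure per-frame run time, CPU load, and peak memory for an image classifier at different image sizes, in the spirit of the benchmarking described above. This is an illustrative assumption, not the paper's code: the nearest-mean "classifier", the image sizes, and all function names are stand-ins, and the Unix-only `resource` module is used for memory reporting.

```python
# Illustrative benchmarking sketch (not from the paper): times a toy
# terrain classifier over synthetic frames and reports run time, CPU
# load, and peak resident memory, as one might on a Jetson TX2 or
# Raspberry Pi 3. Requires a Unix-like OS for the `resource` module.
import time
import resource
import numpy as np

def classify(frame, class_means):
    """Stand-in terrain classifier: nearest class mean over raw pixels."""
    dists = [np.linalg.norm(frame.ravel() - m) for m in class_means]
    return int(np.argmin(dists))

def benchmark(image_size, n_frames=50, n_classes=4):
    rng = np.random.default_rng(0)
    # Synthetic class prototypes and frames stand in for the real dataset.
    class_means = [rng.random(image_size[0] * image_size[1])
                   for _ in range(n_classes)]
    frames = rng.random((n_frames,) + image_size)

    t0 = time.perf_counter()      # wall-clock start
    cpu0 = time.process_time()    # CPU-time start
    for f in frames:
        classify(f, class_means)
    wall = time.perf_counter() - t0
    cpu = time.process_time() - cpu0
    peak_kb = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss

    return {
        "ms_per_frame": 1000.0 * wall / n_frames,
        "cpu_load": cpu / wall if wall > 0 else 0.0,  # fraction of one core
        "peak_rss_kb": peak_kb,
    }

if __name__ == "__main__":
    # Sweep a couple of image sizes, as the evaluation varies image size.
    for size in [(120, 160), (240, 320)]:
        print(size, benchmark(size))
```

On a real platform comparison one would run the same sweep on both boards with the actual vision algorithms and record accuracy alongside these resource metrics.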
