CFORB: Circular FREAK-ORB Visual Odometry

17 Jun 2015  ·  Daniel J. Mankowitz, Ehud Rivlin

We present a novel Visual Odometry algorithm entitled Circular FREAK-ORB (CFORB). This algorithm detects features using the well-known ORB algorithm [12] and computes feature descriptors using the FREAK algorithm [14]. CFORB is invariant to both rotation and scale changes, and is suitable for use in environments with uneven terrain. Two visual geometric constraints, which have not previously been applied in a Visual Odometry algorithm, are used to remove invalid feature descriptor matches. A variation of circular matching [16] has also been implemented, allowing features to be matched between images without relying on the epipolar constraint. On the KITTI benchmark dataset, CFORB achieves a competitive average translational error of $3.73\%$ and an average rotational error of $0.0107~\mathrm{deg/m}$. In an indoor environment it achieves an average translational error of $3.70\%$. In a highly textured environment with an approximately uniform feature spread across the images, it achieves an average translational error of $2.4\%$ and an average rotational error of $0.009~\mathrm{deg/m}$.
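As a rough illustration of the detect/describe/match front end the abstract describes (ORB keypoint detection, FREAK binary descriptors, Hamming-distance matching), the sketch below uses OpenCV's implementations of ORB and FREAK. This is not the authors' CFORB implementation; the image paths and parameter choices are placeholders, and it requires the opencv-contrib-python package for the FREAK module.

```python
import cv2

def detect_and_describe(image):
    """Detect ORB keypoints, then compute FREAK descriptors on them."""
    orb = cv2.ORB_create(nfeatures=2000)        # rotation/scale-invariant detector
    keypoints = orb.detect(image, None)
    freak = cv2.xfeatures2d.FREAK_create()      # retina-inspired binary descriptor
    keypoints, descriptors = freak.compute(image, keypoints)
    return keypoints, descriptors

def match_descriptors(desc_a, desc_b):
    """Brute-force Hamming matching with cross-checking to drop one-way matches."""
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    return matcher.match(desc_a, desc_b)

if __name__ == "__main__":
    # Placeholder stereo pair; CFORB additionally matches circularly across
    # the left/right images of consecutive frames, which is omitted here.
    left = cv2.imread("left_t0.png", cv2.IMREAD_GRAYSCALE)
    right = cv2.imread("right_t0.png", cv2.IMREAD_GRAYSCALE)
    kp_l, des_l = detect_and_describe(left)
    kp_r, des_r = detect_and_describe(right)
    matches = match_descriptors(des_l, des_r)
    print(f"{len(matches)} FREAK matches between the stereo pair")
```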
