Search Results for author: Sajad Saeedi

Found 13 papers, 3 papers with code

Systematic Comparison of Path Planning Algorithms using PathBench

no code implementations • 7 Mar 2022 • Hao-Ya Hsueh, Alexandru-Iosif Toma, Hussein Ali Jaafar, Edward Stow, Riku Murai, Paul H. J. Kelly, Sajad Saeedi

A unified path planning interface that facilitates the development and benchmarking of existing and new algorithms is needed.
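The abstract argues for a single interface that many planners can share. As an illustrative sketch only (PathBench's actual API is not shown here; the class and method names below are hypothetical), such an interface might look like an abstract base class that every algorithm implements, so benchmarking code can treat all planners uniformly:

```python
from abc import ABC, abstractmethod
from collections import deque


class Planner(ABC):
    """Hypothetical common interface that every path planner implements."""

    @abstractmethod
    def plan(self, grid, start, goal):
        """Return a list of (row, col) cells from start to goal, or None."""


class BFSPlanner(Planner):
    """Breadth-first search as a minimal example implementation."""

    def plan(self, grid, start, goal):
        rows, cols = len(grid), len(grid[0])
        prev = {start: None}          # visited set doubling as parent map
        queue = deque([start])
        while queue:
            cell = queue.popleft()
            if cell == goal:
                path = []             # walk parents back to the start
                while cell is not None:
                    path.append(cell)
                    cell = prev[cell]
                return path[::-1]
            r, c = cell
            for n in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                if (0 <= n[0] < rows and 0 <= n[1] < cols
                        and grid[n[0]][n[1]] == 0 and n not in prev):
                    prev[n] = cell
                    queue.append(n)
        return None                   # goal unreachable
```

A benchmark harness could then time `planner.plan(...)` for any `Planner` subclass without knowing which algorithm is behind it.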

Pareto Frontier Approximation Network (PA-Net) to Solve Bi-objective TSP

no code implementations • 2 Mar 2022 • Ishaan Mehta, Sajad Saeedi

In this work, we present PA-Net, a network that generates good approximations of the Pareto front for the bi-objective travelling salesperson problem (BTSP).
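For a bi-objective TSP, each tour has two costs, and the Pareto front is the set of tours no other tour beats in both objectives. PA-Net approximates this front with a network; as a toy reference (not the paper's method), the exact front of a small set of cost pairs can be filtered directly:

```python
def pareto_front(points):
    """Keep the non-dominated (cost1, cost2) pairs: a point is dropped if
    some other point is <= in both objectives and strictly better in one."""
    front = []
    for p in points:
        dominated = any(q[0] <= p[0] and q[1] <= p[1] and q != p
                        for q in points)
        if not dominated:
            front.append(p)
    return front
```

This brute-force filter is quadratic in the number of tours, which is why learned approximations like PA-Net are attractive for large instances.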

RL-PGO: Reinforcement Learning-based Planar Pose-Graph Optimization

no code implementations • 26 Feb 2022 • Nikolaos Kourtzanidis, Sajad Saeedi

The objective of pose SLAM or pose-graph optimization (PGO) is to estimate the trajectory of a robot given odometric and loop closing constraints.
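The PGO objective described here minimizes the disagreement between predicted relative poses along the trajectory and the measured odometric/loop-closure constraints. A minimal 2D sketch of that error (illustrative only; the paper's RL-based optimizer is not reproduced here):

```python
import numpy as np


def relative_pose(xi, xj):
    """Relative 2D pose (dx, dy, dtheta) of pose xj expressed in xi's frame,
    with each pose given as an array [x, y, theta]."""
    dx, dy = xj[:2] - xi[:2]
    c, s = np.cos(xi[2]), np.sin(xi[2])
    return np.array([c * dx + s * dy, -s * dx + c * dy, xj[2] - xi[2]])


def pgo_error(poses, edges):
    """Sum of squared residuals between predicted and measured relative
    poses, over edges given as (i, j, measurement) triples."""
    return sum(np.sum((relative_pose(poses[i], poses[j]) - z) ** 2)
               for i, j, z in edges)
```

An optimizer (Gauss-Newton in classical solvers, a learned policy in RL-PGO) would adjust `poses` to drive this error down.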


A Robot Web for Distributed Many-Device Localisation

no code implementations • 7 Feb 2022 • Riku Murai, Joseph Ortiz, Sajad Saeedi, Paul H. J. Kelly, Andrew J. Davison

We show that a distributed network of robots or other devices which make measurements of each other can collaborate to globally localise via efficient ad hoc peer-to-peer communication.

Pareto Frontier Approximation Network (PA-Net) Applied to Multi-objective TSP

no code implementations • 29 Sep 2021 • Ishaan Mehta, Sajad Saeedi

In this work, we present PA-Net, a network that generates good approximations of the Pareto front for multi-objective optimization problems.

Waypoint Planning Networks

1 code implementation • 1 May 2021 • Alexandru-Iosif Toma, Hussein Ali Jaafar, Hao-Ya Hsueh, Stephen James, Daniel Lenton, Ronald Clark, Sajad Saeedi

We propose waypoint planning networks (WPN), a hybrid algorithm based on LSTMs that combines a local kernel (a classic algorithm such as A*) with a global kernel (a learned algorithm).

Motion Planning
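The abstract names A* as the kind of classic algorithm WPN uses for its local kernel. For reference, a minimal grid A* with a Manhattan-distance heuristic looks like this (a generic textbook sketch, not the WPN codebase):

```python
import heapq


def astar(grid, start, goal):
    """A* on a 4-connected grid (0 = free cell) with a Manhattan heuristic."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_set = [(h(start), start)]            # heap of (f-score, cell)
    came_from, g_best = {start: None}, {start: 0}
    while open_set:
        _, cell = heapq.heappop(open_set)
        if cell == goal:
            path = []                         # reconstruct via parent links
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for n in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            ng = g_best[cell] + 1             # unit step cost
            if (0 <= n[0] < len(grid) and 0 <= n[1] < len(grid[0])
                    and grid[n[0]][n[1]] == 0
                    and ng < g_best.get(n, float("inf"))):
                g_best[n], came_from[n] = ng, cell
                heapq.heappush(open_set, (ng + h(n), n))
    return None                               # goal unreachable
```

In WPN's scheme, the learned global kernel would steer where such a local search is applied, rather than running it over the entire map.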

Cain: Automatic Code Generation for Simultaneous Convolutional Kernels on Focal-plane Sensor-processors

1 code implementation • 21 Jan 2021 • Edward Stow, Riku Murai, Sajad Saeedi, Paul H. J. Kelly

Focal-plane Sensor-processors (FPSPs) are a camera technology that enables low-power, high-frame-rate computation, making them suitable for edge computation.

Code Generation, Frame

AnalogNet: Convolutional Neural Network Inference on Analog Focal Plane Sensor Processors

no code implementations • 2 Jun 2020 • Matthew Z. Wong, Benoit Guillard, Riku Murai, Sajad Saeedi, Paul H. J. Kelly

We present a high-speed, energy-efficient Convolutional Neural Network (CNN) architecture utilising the capabilities of a unique class of devices known as analog Focal Plane Sensor Processors (FPSP), in which the sensor and the processor are embedded together on the same silicon chip.


BIT-VO: Visual Odometry at 300 FPS using Binary Features from the Focal Plane

no code implementations • 23 Apr 2020 • Riku Murai, Sajad Saeedi, Paul H. J. Kelly

The Focal-plane Sensor-processor (FPSP) is a next-generation camera technology that enables every pixel on the sensor chip to perform computation in parallel, on the focal plane where the light intensity is captured.

Frame, Visual Odometry
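BIT-VO tracks binary features extracted on the focal plane. Binary descriptors are typically compared by Hamming distance and matched to their nearest neighbour; a toy sketch of that step (illustrative only, not the BIT-VO pipeline, and the function names are invented):

```python
def hamming(a, b):
    """Hamming distance between two binary descriptors stored as ints."""
    return bin(a ^ b).count("1")


def match(desc_a, desc_b, max_dist=10):
    """Greedy nearest-neighbour matching of binary descriptor lists,
    returning (index_in_a, index_in_b) pairs within max_dist bits."""
    matches = []
    for i, a in enumerate(desc_a):
        j, d = min(((j, hamming(a, b)) for j, b in enumerate(desc_b)),
                   key=lambda t: t[1])
        if d <= max_dist:
            matches.append((i, j))
    return matches
```

XOR-plus-popcount matching is what makes binary features cheap enough for the 300 FPS operation the title claims.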

InteriorNet: Mega-scale Multi-sensor Photo-realistic Indoor Scenes Dataset

no code implementations • 3 Sep 2018 • Wenbin Li, Sajad Saeedi, John McCormac, Ronald Clark, Dimos Tzoumanikas, Qing Ye, Yuzhong Huang, Rui Tang, Stefan Leutenegger

Datasets have gained an enormous amount of popularity in the computer vision community, from training and evaluation of Deep Learning-based methods to benchmarking Simultaneous Localization and Mapping (SLAM).

Frame, Simultaneous Localization and Mapping

Algorithmic Performance-Accuracy Trade-off in 3D Vision Applications Using HyperMapper

no code implementations • 2 Feb 2017 • Luigi Nardi, Bruno Bodin, Sajad Saeedi, Emanuele Vespa, Andrew J. Davison, Paul H. J. Kelly

In this paper, we investigate an emerging application, 3D scene understanding, which is likely to be significant in the mobile space in the near future.

Active Learning, Scene Understanding
