Semantic Segmentation Using Transfer Learning on Fisheye Images

While semantic segmentation has been extensively studied for regular perspective images, its application to fisheye images remains relatively unexplored. Existing literature on fisheye semantic segmentation mostly revolves around multi-task or multi-modal models, which are computationally intensive. This motivated us to assess how current segmentation methods perform specifically on fisheye images. Surprisingly, we find that these methods do not yield satisfactory results when trained directly on fisheye datasets in a fully supervised manner. This is because the models are not designed to handle fisheye distortion, and the available fisheye datasets are not large enough to train complex models effectively. To overcome these challenges, we propose a novel training method that applies Transfer Learning (TL) to existing semantic segmentation models that concentrate on a single task and modality. To this end, we investigate six different fine-tuning configurations using the WoodScape fisheye image segmentation dataset. Furthermore, we introduce a pre-training stage that learns from perspective images by applying a fisheye transformation before employing transfer learning. As a result, our proposed training pipeline demonstrates a remarkable 18.29% improvement in mean Intersection over Union (mIoU) compared to directly adopting the best existing segmentation methods for fisheye images.
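
The pre-training stage hinges on warping perspective images (and their segmentation masks) into fisheye-like geometry. The abstract does not specify the projection model used, so the sketch below is only a minimal illustration of the idea, assuming an equidistant-style radial remapping implemented with OpenCV; the function name `fisheye_warp` and the `strength` parameter are hypothetical, not taken from the paper.

```python
import cv2
import numpy as np

def fisheye_warp(image, strength=0.5, interpolation=cv2.INTER_LINEAR):
    """Warp a perspective image into fisheye-like geometry.

    Uses an equidistant-style radial remapping: the centre is magnified
    and the periphery compressed, as under a fisheye lens. `strength`
    in (0, 1) sets the simulated field of view; keep it modest so the
    image corners stay within the model's valid angle range.
    """
    h, w = image.shape[:2]
    cx, cy = (w - 1) / 2.0, (h - 1) / 2.0
    # Pixel grid normalised to roughly [-1, 1] around the principal point.
    xs, ys = np.meshgrid(np.arange(w, dtype=np.float32),
                         np.arange(h, dtype=np.float32))
    x = (xs - cx) / cx
    y = (ys - cy) / cy
    r = np.sqrt(x * x + y * y)

    # Inverse mapping for cv2.remap: for each *output* (fisheye) pixel at
    # radius r, sample the *source* (perspective) pixel at radius r_src.
    # Equidistant fisheye: r ~ theta; perspective: r_src ~ tan(theta).
    theta_max = strength * np.pi / 2.0
    r_src = np.tan(r * theta_max) / np.tan(theta_max)
    scale = r_src / np.maximum(r, 1e-8)  # safe divide; centre maps to centre

    map_x = (x * scale * cx + cx).astype(np.float32)
    map_y = (y * scale * cy + cy).astype(np.float32)
    return cv2.remap(image, map_x, map_y, interpolation,
                     borderMode=cv2.BORDER_CONSTANT)
```

The same warp would have to be applied to the ground-truth mask, but with nearest-neighbour interpolation so that integer class indices are not blended, e.g. `fisheye_warp(mask, interpolation=cv2.INTER_NEAREST)`.

Likewise, the six fine-tuning configurations are not detailed in the abstract. As one plausible instance, a configuration might freeze the pretrained encoder and fine-tune only the decoder head on WoodScape; the `model.encoder` / `model.decoder` attribute names below are assumptions for illustration only.

```python
import torch

# Hypothetical TL configuration: freeze the pretrained encoder weights
# and update only the decoder during fine-tuning on the fisheye data.
for p in model.encoder.parameters():
    p.requires_grad = False

optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4)
```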
