Deep Structure Learning using Feature Extraction in Trained Projection Space

1 Sep 2020  ·  Christoph Angermann, Markus Haltmeier ·

Over the last decade of machine learning, convolutional neural networks have produced the most striking successes in feature extraction from rich sensory and high-dimensional data. While learning data representations via convolutions is well studied and efficiently implemented in various deep learning libraries, one often faces limited memory capacity and insufficient training data, especially for high-dimensional and large-scale tasks. To overcome these limitations, we introduce a network architecture that uses a self-adjusting, data-dependent version of the Radon transform (a linear data projection, also known as x-ray projection) to enable feature extraction via convolutions in a lower-dimensional space. The resulting framework, named PiNet, can be trained end-to-end and shows promising performance on volumetric segmentation tasks. We test the proposed model on public datasets and show that our approach achieves comparable results using only a fraction of the parameters. An investigation of memory usage and processing time confirms PiNet's superior efficiency compared to other segmentation models.
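To illustrate the core idea, here is a minimal sketch of an x-ray-style projection: a weighted sum of a 3D volume along one axis yields a 2D image on which cheap 2D convolutions can then extract features. This is an assumption-laden simplification — the function name `xray_projection` and the axis-aligned, fixed-angle projection are illustrative stand-ins, not the paper's trainable Radon operator, which projects along learned directions.

```python
import numpy as np

def xray_projection(volume, weights=None, axis=0):
    """Project a 3D volume to 2D by a weighted sum along one axis.

    A simplified, axis-aligned stand-in for the self-adjusting
    Radon-type projection described in the abstract; in PiNet the
    projection itself would be learned end-to-end.
    """
    if weights is None:
        # Uniform weights reduce to a plain mean along the axis,
        # i.e. a classic (untrained) x-ray projection.
        weights = np.ones(volume.shape[axis]) / volume.shape[axis]
    # Contract the chosen axis against the weight vector: (D,H,W) -> (H,W)
    return np.tensordot(weights, np.moveaxis(volume, axis, 0), axes=1)

volume = np.random.rand(32, 64, 64)   # depth x height x width
proj = xray_projection(volume)        # 2D image, shape (64, 64)
print(proj.shape)
```

Any standard 2D convolutional network can then be applied to `proj`, which is where the memory savings over full 3D convolutions would come from.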


