ModulOM: Disseminating Deep Learning Research with Modular Output Mathematics

Solving a task with a deep neural network requires an appropriate formulation of the underlying inference problem. A formulation defines the type of variables output by the network, but also the set of variables and functions, which we denote the output mathematics, needed to turn those outputs into task-relevant predictions. Although task performance may depend heavily on the formulation, most deep learning experiment repositories offer no convenient way to explore formulation variants flexibly and incrementally. By contrast, software components for neural network creation, parameter optimization, and data augmentation offer a degree of modularity that has proved to facilitate the transfer of know-how associated with model development; no such modularity exists for output mathematics. Our paper addresses this limitation by embedding the output mathematics in a modular component as well, building on multiple-inheritance principles from object-oriented programming. The flexibility of the proposed component and its added value for knowledge dissemination are demonstrated on Panoptic-Deeplab, a representative computer vision use case.
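The abstract's core idea, composing a formulation's output mathematics from interchangeable pieces via multiple inheritance, can be illustrated with a minimal sketch. All class and method names below are hypothetical illustrations of the general pattern, not ModulOM's actual API: each mixin contributes one part of the output mathematics (how raw outputs become predictions, how a loss is computed), and a concrete formulation is assembled by inheriting from the desired combination.

```python
import math


# Hypothetical sketch of modular "output mathematics"; names are
# illustrative and do not reflect ModulOM's actual interface.
class OutputMaths:
    """Base class declaring the interface a formulation must provide."""

    def predict(self, outputs):
        raise NotImplementedError

    def loss(self, outputs, targets):
        raise NotImplementedError


class SoftmaxHead(OutputMaths):
    """Mixin: turns raw network scores into class probabilities."""

    def predict(self, outputs):
        exps = [math.exp(o) for o in outputs]
        total = sum(exps)
        return [e / total for e in exps]


class CrossEntropyLoss(OutputMaths):
    """Mixin: cross-entropy on the probabilities from `predict`."""

    def loss(self, outputs, targets):
        probs = self.predict(outputs)
        return -sum(t * math.log(p) for t, p in zip(targets, probs))


# A concrete formulation is assembled by multiple inheritance; swapping
# SoftmaxHead for another prediction mixin (or CrossEntropyLoss for
# another loss mixin) yields a formulation variant incrementally.
class ClassificationFormulation(CrossEntropyLoss, SoftmaxHead):
    pass


f = ClassificationFormulation()
probs = f.predict([1.0, 2.0, 3.0])   # probabilities summing to 1
value = f.loss([1.0, 2.0, 3.0], [0.0, 0.0, 1.0])
```

Under this pattern, exploring a formulation variant amounts to declaring a new class with a different mixin combination, which is the kind of incremental exploration the paper argues standard repositories lack for output mathematics.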
