We propose a new approach to improve inference performance in environments that can only be solved by executing a specific sequence of actions.
We construct CPN models with different backbone networks, and apply them to instance segmentation of cells in datasets from different modalities.
By solving the brain mapping problem on this graph using graph neural networks, we obtain significantly improved classification results.
Here we present a new workflow for mapping cytoarchitectonic areas in large series of cell-body stained histological sections of human postmortem brains.
Cytoarchitectonic maps provide microstructural reference parcellations of the brain, describing its organization in terms of the spatial arrangement of neuronal cell bodies as measured from histological tissue sections.
Alpha matting describes the problem of separating foreground objects from the background of an image, given only a rough sketch.
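The compositing model behind matting can be sketched as follows; the function name and the toy data are illustrative, not from the source:

```python
import numpy as np

# Alpha compositing model: each pixel of the observed image I is a
# convex combination of foreground F and background B,
#   I = alpha * F + (1 - alpha) * B,
# and matting is the inverse problem of recovering alpha (and F, B)
# from I plus a rough sketch of the regions.

def composite(alpha, F, B):
    """Blend foreground F over background B with per-pixel opacity alpha."""
    return alpha[..., None] * F + (1.0 - alpha[..., None]) * B

# Tiny example: a 2x2 image with a soft opacity ramp.
alpha = np.array([[1.0, 0.5], [0.25, 0.0]])
F = np.ones((2, 2, 3))   # white foreground
B = np.zeros((2, 2, 3))  # black background
I = composite(alpha, F, B)
# Since F = 1 and B = 0, each channel of I equals alpha exactly.
```

Solving for `alpha` given only `I` and the sketch is what makes the problem ill-posed.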
In this paper, we propose the application of conditional generative adversarial networks to solve various phase retrieval problems.
We propose a modular extension of backpropagation for the computation of block-diagonal approximations to various curvature matrices of the training objective (in particular, the Hessian, generalized Gauss-Newton, and positive-curvature Hessian).
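A minimal numerical sketch of the underlying quantity (not the paper's actual backpropagation extension): for a tiny two-layer network with squared loss, the generalized Gauss-Newton matrix is JᵀJ, and a block-diagonal approximation keeps only the per-layer blocks J_lᵀJ_l; the finite-difference Jacobian here is purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer net f(x) = W2 @ tanh(W1 @ x) with squared loss.
# For squared loss, the generalized Gauss-Newton (GGN) is J^T J, where
# J is the Jacobian of the network output w.r.t. the parameters.
# A block-diagonal approximation keeps one J_l^T J_l block per layer
# and discards the cross-layer blocks.

W1 = rng.normal(size=(4, 3))
W2 = rng.normal(size=(2, 4))
x = rng.normal(size=3)

def forward(W1, W2, x):
    return W2 @ np.tanh(W1 @ x)

def jacobian_wrt(param, rebuild, eps=1e-6):
    """Finite-difference Jacobian of the network output with respect to
    one flattened parameter block."""
    base = rebuild(param)
    J = np.zeros((base.size, param.size))
    flat = param.ravel()
    for i in range(param.size):
        p = flat.copy()
        p[i] += eps
        J[:, i] = (rebuild(p.reshape(param.shape)) - base) / eps
    return J

J1 = jacobian_wrt(W1, lambda W: forward(W, W2, x))
J2 = jacobian_wrt(W2, lambda W: forward(W1, W, x))

# Per-layer GGN blocks; the loss Hessian in the output is the identity
# for squared loss, so each block is simply J_l^T J_l.
G1 = J1.T @ J1   # (12, 12) block for W1
G2 = J2.T @ J2   # (8, 8) block for W2
```

Each block is symmetric positive semi-definite by construction, which is what makes such curvature approximations attractive for second-order optimization.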
We show that the self-supervised model has implicitly learned to distinguish several cortical brain areas -- a strong indicator that the proposed auxiliary task is appropriate for cytoarchitectonic mapping.
Its high resolution allows the study of laminar and columnar patterns of cell distributions, which form an important basis for the simulation of cortical areas and networks.
To tackle the problem, we introduce attribute-based classification: Objects are identified based on a high-level description that is phrased in terms of semantic attributes, such as the object’s color or shape.
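A minimal sketch of attribute-based classification, with made-up attribute signatures: each class is described by a binary attribute vector, a model predicts per-attribute scores for an image, and the class whose signature best matches wins.

```python
import numpy as np

# Attribute-based classification sketch: classes are described by
# semantic attributes (e.g. "striped", "brown", "four legs"); an image
# is classified by comparing predicted attribute scores to these class
# signatures. All data below is illustrative.

class_signatures = {
    "zebra": np.array([1, 0, 1]),   # striped, not brown, four legs
    "bear":  np.array([0, 1, 1]),
    "wasp":  np.array([1, 1, 0]),
}

def classify(attr_scores, signatures):
    """Pick the class whose attribute signature is closest (L2) to the
    predicted per-attribute scores in [0, 1]."""
    return min(signatures,
               key=lambda c: np.linalg.norm(signatures[c] - attr_scores))

# Suppose an attribute predictor returned these scores for an image:
scores = np.array([0.9, 0.1, 0.8])       # very striped, not brown, legs
label = classify(scores, class_signatures)  # -> "zebra"
```

Because classes are matched through attributes rather than direct labels, the same predictor can in principle recognize classes it never saw during training, as long as their attribute signatures are known.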
We establish a link between Fourier optics and a recent construction from the machine learning community termed the kernel mean map.
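As a rough illustration of the kernel mean map itself (independent of the optics link), a distribution can be embedded as the empirical mean of kernel features, and two samples compared via the resulting maximum mean discrepancy; the RBF kernel and the Gaussian data here are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)

# The kernel mean map embeds a distribution P as mu_P = E[phi(x)] in a
# reproducing kernel Hilbert space with k(x, y) = <phi(x), phi(y)>.
# The squared distance between two empirical embeddings (the MMD) needs
# only kernel evaluations:
#   MMD^2 = mean k(x, x') + mean k(y, y') - 2 mean k(x, y).

def rbf(X, Y, gamma=0.5):
    """RBF kernel matrix k(x, y) = exp(-gamma * ||x - y||^2)."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def mmd2(X, Y):
    """Squared MMD between the empirical kernel mean embeddings of X, Y."""
    return rbf(X, X).mean() + rbf(Y, Y).mean() - 2.0 * rbf(X, Y).mean()

same = mmd2(rng.normal(size=(200, 2)), rng.normal(size=(200, 2)))
diff = mmd2(rng.normal(size=(200, 2)), rng.normal(size=(200, 2)) + 3.0)
# Samples from the same distribution yield a much smaller MMD than
# samples from a shifted distribution.
```

With a characteristic kernel, the mean map is injective, so MMD is zero only when the two distributions coincide.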
In this work, we also rely on a two-step procedure, but learn the second step on a large dataset of natural images, using a neural network.
Modelling camera shake as a space-invariant convolution simplifies the problem of removing it, but often fails to capture actual motion blur, such as blur caused by camera rotation, by movement outside the sensor plane, or by scene objects at different distances from the camera.
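The space-invariant model referred to here treats the blurred image as one convolution of the sharp image with a single blur kernel; a minimal pure-NumPy sketch with an illustrative box kernel:

```python
import numpy as np

# Space-invariant camera-shake model: the blurred image is the sharp
# image convolved with a single kernel k that is identical at every
# pixel, B = k * I. Real shake (rotation, out-of-plane motion, depth
# variation) violates this, because the effective kernel then changes
# across the image.

def blur(image, kernel):
    """'Valid' 2-D convolution of a grayscale image with a blur kernel."""
    kh, kw = kernel.shape
    h, w = image.shape
    flipped = kernel[::-1, ::-1]  # convolution flips the kernel
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (image[i:i + kh, j:j + kw] * flipped).sum()
    return out

# A normalized 3x3 box kernel as a stand-in for a shake trajectory.
kernel = np.full((3, 3), 1.0 / 9.0)
image = np.arange(25, dtype=float).reshape(5, 5)
blurred = blur(image, kernel)   # shape (3, 3)
```

Because the kernel is normalized, constant regions of the image pass through unchanged; space-variant blur would instead require a different kernel per pixel.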
After building a classifier with modern machine learning tools, we typically have a black box at hand that predicts well on unseen data.