A GGLN Neuron is a type of neuron used in the Gaussian Gated Linear Network (GGLN) architecture. The key idea is that further representational power can be added to a weighted product of Gaussians via a contextual gating procedure. This is achieved by extending a weighted product of Gaussians model with an additional type of input called side information. The side information is used by a neuron to select, from a table of weight vectors, which weight vector to apply for a given example. In typical applications to regression, the side information is defined as the (normalized) input features for an input example: i.e. $z=(x-\bar{x}) / \sigma_{x}$.
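A minimal sketch of this normalization, assuming the mean $\bar{x}$ and standard deviation $\sigma_x$ are per-feature statistics computed over the dataset (the example values below are purely illustrative):

```python
import numpy as np

x = np.array([2.0, 4.0, 6.0])                # raw input features for one example
x_mean = np.array([3.0, 3.0, 3.0])           # per-feature dataset mean (assumed known)
x_std = np.array([1.0, 2.0, 3.0])            # per-feature dataset std (assumed known)
z = (x - x_mean) / x_std                     # side information: z-scored features
```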
More formally, associated with each neuron is a context function $c: \mathcal{Z} \rightarrow \mathcal{C}$, where $\mathcal{Z}$ is the set of possible side information and $\mathcal{C}=\{0, \ldots, k-1\}$ for some $k \in \mathbb{N}$ is the context space. Each neuron $i$ is now parameterized by a weight matrix $W_{i}=\left[w_{i, 0} \ldots w_{i, k-1}\right]^{\top}$ with each row vector $w_{i, j} \in \mathcal{W}$ for $0 \leq j<k$. The context function $c$ is responsible for mapping side information $z \in \mathcal{Z}$ to a particular row $w_{i, c(z)}$ of $W_{i}$, which we then use to weight the Product of Gaussians. In other words, a GGLN neuron can be defined by:
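One common choice of context function in the GLN literature is half-space gating, where the context index is built from the signs of a few random hyperplane tests on $z$. The sketch below is an assumed illustration of that scheme (hyperplanes `V`, `b`, table width, and dimensions are all hypothetical), not the paper's exact configuration:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_bits = 3, 2                  # side-info dimension; context bits, so k = 2**n_bits
V = rng.normal(size=(n_bits, d))  # hyperplane normals (fixed at initialization)
b = rng.normal(size=n_bits)       # hyperplane offsets

def context(z):
    """Map side information z to a context index c(z) in {0, ..., k-1}."""
    bits = (V @ z > b).astype(int)          # one bit per half-space test
    return int(bits @ (2 ** np.arange(n_bits)))

k = 2 ** n_bits
W = rng.normal(size=(k, 4))       # weight table W_i: one weight vector per context
z = np.array([0.5, -1.0, 2.0])
w = W[context(z)]                 # selected row w_{i, c(z)}
```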
$$ \operatorname{PoG}_{W}^{c}\left(y ; f_{1}(\cdot), \ldots, f_{m}(\cdot), z\right):=\operatorname{PoG}_{w_{c(z)}}\left(y ; f_{1}(\cdot), \ldots, f_{m}(\cdot)\right) $$
with the associated loss function $-\log \left(\operatorname{PoG}_{W}^{c}\left(y ; f_{1}(y), \ldots, f_{m}(y), z\right)\right)$ inheriting all the properties needed to apply Online Convex Programming.
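Once a weight vector is selected, the neuron's output is a weighted product of its input Gaussians, which is itself Gaussian with precision-weighted mean. A minimal sketch of this combination and the negative log-likelihood loss, assuming the standard product-of-Gaussians identities (all numeric values are illustrative):

```python
import numpy as np

def pog(mus, sigmas2, w):
    """Weighted product of Gaussians: mean and variance of the renormalized
    product prod_j N(mu_j, sigma_j^2)^{w_j}. Assumes sum(w / sigmas2) > 0
    so that the result is a valid Gaussian."""
    prec = np.sum(w / sigmas2)                # combined precision
    mu = np.sum(w * mus / sigmas2) / prec     # precision-weighted mean
    return mu, 1.0 / prec

def nll(y, mu, var):
    """Negative log-likelihood of target y under N(mu, var)."""
    return 0.5 * np.log(2 * np.pi * var) + (y - mu) ** 2 / (2 * var)

mus = np.array([0.0, 2.0])       # means of the input Gaussians f_1, f_2
sigmas2 = np.array([1.0, 1.0])   # their variances
w = np.array([0.5, 0.5])         # selected weight vector w_{i, c(z)}
mu, var = pog(mus, sigmas2, w)   # combined Gaussian: mu = 1.0, var = 1.0
loss = nll(1.5, mu, var)         # loss for target y = 1.5
```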
Source: Gaussian Gated Linear Networks
