Gated Linear Networks

G-GLN Neuron

Introduced by Budden et al. in Gaussian Gated Linear Networks

A G-GLN Neuron is the building block of the G-GLN architecture. The key idea is that further representational power can be added to a weighted product of Gaussians via a contextual gating procedure. This is achieved by extending the weighted product of Gaussians model with an additional type of input called side information. A neuron uses the side information to select, from a table of weight vectors, the weight vector to apply to a given example. In typical regression applications, the side information is defined as the (normalized) input features of an example, i.e. $z=(x-\bar{x}) / \sigma_{x}$.
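As a minimal sketch of the side-information convention above, the normalization $z=(x-\bar{x})/\sigma_{x}$ is just per-feature z-scoring of the inputs (the array names here are illustrative, not from the paper):

```python
import numpy as np

# Hypothetical example batch: rows are examples, columns are input features.
X = np.array([[1.0, 10.0],
              [3.0, 30.0],
              [5.0, 50.0]])

# Side information z = (x - mean) / std, computed per feature.
z = (X - X.mean(axis=0)) / X.std(axis=0)
```

Each column of `z` then has zero mean and unit standard deviation, which keeps the halfspace context functions well-scaled.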

More formally, associated with each neuron is a context function $c: \mathcal{Z} \rightarrow \mathcal{C}$, where $\mathcal{Z}$ is the set of possible side information and $\mathcal{C}=\{0, \ldots, k-1\}$ for some $k \in \mathbb{N}$ is the context space. Each neuron $i$ is parameterized by a weight matrix $W_{i}=\left[w_{i, 0} \ldots w_{i, k-1}\right]^{\top}$ with each row vector $w_{i, j} \in \mathcal{W}$ for $0 \leq j<k$. The context function $c$ maps side information $z \in \mathcal{Z}$ to a particular row $w_{i, c(z)}$ of $W_{i}$, which is then used to weight the product of Gaussians. In other words, a G-GLN neuron can be defined by:

$$ \operatorname{PoG}_{W}^{c}\left(y ; f_{1}(\cdot), \ldots, f_{m}(\cdot), z\right):=\operatorname{PoG}_{w_{c(z)}}\left(y ; f_{1}(\cdot), \ldots, f_{m}(\cdot)\right) $$

with the associated loss function $-\log \left(\operatorname{PoG}_{W}^{c}\left(y ; f_{1}(y), \ldots, f_{m}(y), z\right)\right)$ inheriting all the properties needed to apply Online Convex Programming.
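To make the gating and product-of-Gaussians computation concrete, here is a small sketch (not the authors' implementation) assuming the halfspace context functions used in the GLN literature, composed from random hyperplanes; the class and parameter names are hypothetical. For one-dimensional Gaussians $\mathcal{N}(\mu_i, \sigma_i^2)$, the weighted product has precision $\sum_i w_i/\sigma_i^2$ and a precision-weighted mean:

```python
import numpy as np

class GGLNNeuron:
    """Sketch of a G-GLN neuron: a halfspace-gated weighted product of Gaussians.

    The context function c(z) is a product of `num_halfspaces` halfspace
    indicators, giving k = 2**num_halfspaces contexts (an illustrative choice).
    """

    def __init__(self, num_inputs, side_dim, num_halfspaces=2, seed=0):
        rng = np.random.default_rng(seed)
        k = 2 ** num_halfspaces
        # One weight row per context; initialized uniformly over inputs.
        self.W = np.full((k, num_inputs), 1.0 / num_inputs)
        # Random hyperplanes defining the halfspace context function.
        self.hyperplanes = rng.standard_normal((num_halfspaces, side_dim))
        self.biases = rng.standard_normal(num_halfspaces)

    def context(self, z):
        # Map side information z to a context index in {0, ..., k-1}.
        bits = (self.hyperplanes @ z > self.biases).astype(int)
        return int(bits @ (2 ** np.arange(len(bits))))

    def forward(self, mus, sigmas2, z):
        # Select the weight row for this context, then form the weighted
        # product of Gaussians: precisions add, means combine precision-weighted.
        w = self.W[self.context(z)]
        precision = np.sum(w / sigmas2)
        mu = np.sum(w * mus / sigmas2) / precision
        return mu, 1.0 / precision
```

For example, with two input Gaussians $\mathcal{N}(0,1)$ and $\mathcal{N}(1,1)$ and uniform weights, the neuron outputs $\mathcal{N}(0.5, 1)$ in every context; learning under the convex log-loss above would then adapt the selected weight row per context.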





| Task | Papers | Share |
|------|--------|-------|
| Denoising | 1 | 33.33% |
| Density Estimation | 1 | 33.33% |
| Multi-Armed Bandits | 1 | 33.33% |

