Orthogonal Regularization is a regularization technique for convolutional neural networks, introduced with generative modelling as the target task. Orthogonality is argued to be a desirable quality in ConvNet filters, partly because multiplication by an orthogonal matrix leaves the norm of the original matrix unchanged: for an orthogonal $Q$, $\left\|Qx\right\|_{2}^{2} = x^{T}Q^{T}Qx = x^{T}x = \left\|x\right\|_{2}^{2}$. This property is valuable in deep or recurrent networks, where repeated matrix multiplication can cause signals to vanish or explode. To maintain orthogonality throughout training, Orthogonal Regularization encourages weights to be orthogonal by pushing them towards the nearest orthogonal manifold. The objective function is augmented with the cost:
$$ \mathcal{L}_{ortho} = \sum\left(\left|WW^{T} - I\right|\right) $$
where $\sum$ indicates a sum across all filter banks, $W$ is a filter bank reshaped into a matrix, and $I$ is the identity matrix.
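As a concrete illustration, here is a minimal sketch of this penalty in PyTorch. The helper name `orthogonal_regularization` and the coefficient `coeff` are illustrative choices, not from the paper; each `Conv2d` filter bank is flattened into a matrix $W$ before computing $\left|WW^{T} - I\right|$.

```python
import torch
import torch.nn as nn

def orthogonal_regularization(model, coeff=1e-4):
    """Sum |W W^T - I| over all Conv2d filter banks, scaled by coeff."""
    penalty = 0.0
    for module in model.modules():
        if isinstance(module, nn.Conv2d):
            # Flatten each filter bank into a (num_filters, fan_in) matrix W.
            w = module.weight.view(module.weight.size(0), -1)
            gram = w @ w.t()  # W W^T
            identity = torch.eye(gram.size(0), device=w.device, dtype=w.dtype)
            # Entrywise absolute deviation from the identity, as in the cost above.
            penalty = penalty + (gram - identity).abs().sum()
    return coeff * penalty
```

During training the term is simply added to the task loss, e.g. `loss = criterion(output, target) + orthogonal_regularization(model)`, so that gradient descent pushes each filter bank toward orthogonality.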
Source: Neural Photo Editing with Introspective Adversarial Networks
Task | Papers | Share |
---|---|---|
Image Classification | 3 | 6.67% |
Image Generation | 3 | 6.67% |
Dimensionality Reduction | 2 | 4.44% |
Model Compression | 2 | 4.44% |
Federated Learning | 2 | 4.44% |
Sentence | 2 | 4.44% |
Sentiment Analysis | 2 | 4.44% |
Speech Synthesis | 2 | 4.44% |
Video Generation | 2 | 4.44% |