BigGAN-deep is a 4× deeper version of BigGAN. The main difference is a slightly redesigned residual block: the $z$ vector is concatenated with the conditional vector without being split into chunks, and the blocks use bottlenecks. BigGAN-deep also uses a different strategy from BigGAN for preserving identity through the skip connections. In G, where the number of channels must be reduced, BigGAN-deep simply retains the first group of channels and drops the rest to produce the required number. In D, where the number of channels should be increased, BigGAN-deep passes the input channels through unperturbed and concatenates them with the remaining channels produced by a 1 × 1 convolution. As for the network configuration, the discriminator is an exact reflection of the generator.
There are two blocks at each resolution (BigGAN uses one), making BigGAN-deep four times deeper than BigGAN. Despite the increased depth, the BigGAN-deep models have significantly fewer parameters, mainly owing to the bottleneck structure of their residual blocks.
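The identity-preserving skip connections described above can be sketched in a few lines. This is a minimal numpy illustration, not the paper's implementation: `reduce_channels`, `increase_channels`, and `conv1x1` are hypothetical names, and the fixed random weight stands in for a learned 1 × 1 convolution.

```python
import numpy as np

def reduce_channels(x, out_ch):
    # G-style skip (sketch): keep the first out_ch channels, drop the rest,
    # so the retained channels pass through as an identity.
    return x[:, :out_ch]

def conv1x1(x, out_ch):
    # A 1x1 convolution is a per-pixel linear map over channels; a fixed
    # random weight here stands in for a learned layer (assumption).
    n, c, h, w = x.shape
    w_mat = np.random.default_rng(0).standard_normal((out_ch, c)) * 0.1
    return np.einsum('oc,nchw->nohw', w_mat, x)

def increase_channels(x, out_ch, conv):
    # D-style skip (sketch): pass the input channels through unperturbed and
    # concatenate the remaining channels produced by a 1x1 convolution.
    in_ch = x.shape[1]
    extra = conv(x, out_ch - in_ch)
    return np.concatenate([x, extra], axis=1)

x = np.random.default_rng(1).standard_normal((2, 64, 8, 8))
print(reduce_channels(x, 32).shape)               # (2, 32, 8, 8)
print(increase_channels(x, 96, conv1x1).shape)    # (2, 96, 8, 8)
```

Note that in both directions the original activations reach the block output unchanged, which is the sense in which identity is preserved.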
Source: Large Scale GAN Training for High Fidelity Natural Image Synthesis
Task | Papers | Share |
---|---|---|
Image Generation | 4 | 18.18% |
Conditional Image Generation | 3 | 13.64% |
Bias Detection | 2 | 9.09% |
Clustering | 2 | 9.09% |
Density Estimation | 1 | 4.55% |
Metric Learning | 1 | 4.55% |
Relational Reasoning | 1 | 4.55% |
Self-Supervised Learning | 1 | 4.55% |
Fairness | 1 | 4.55% |