Models Genesis (Generic Autodidactic Models) is a self-supervised approach for learning 3D image representations. Its objective is to learn a common image representation that is transferable and generalizable across diseases, organs, and modalities. The model is an encoder-decoder architecture with skip connections in between, trained to restore the original sub-volume $x_{i}$ (ground truth) from its transformed version $\bar{x}_{i}$ (input); the reconstruction loss is the mean squared error (MSE) computed between the model prediction $x'_{i}$ and the ground truth $x_{i}$. Once trained, the encoder alone can be fine-tuned for target classification tasks, while the encoder and decoder together can be fine-tuned for target segmentation tasks.
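The self-supervised objective above can be sketched in a few lines. Below is a minimal, hypothetical illustration (function names are my own, not from the official repository) of one of the paper's image transformations, local pixel shuffling, applied to a 3D sub-volume, together with the MSE reconstruction loss; the actual network that maps $\bar{x}_{i}$ back to $x_{i}$ is omitted.

```python
import numpy as np

def local_pixel_shuffle(volume, num_blocks=5, block_size=4, seed=None):
    """Shuffle voxels inside small random blocks of a 3D sub-volume.

    This is one of the Models Genesis transformations: the global
    appearance is preserved, but the network must learn local shape
    and texture to undo the shuffling. Names and defaults here are
    illustrative, not the paper's exact settings.
    """
    rng = np.random.default_rng(seed)
    out = volume.copy()
    d, h, w = volume.shape
    for _ in range(num_blocks):
        z = rng.integers(0, d - block_size + 1)
        y = rng.integers(0, h - block_size + 1)
        x = rng.integers(0, w - block_size + 1)
        block = out[z:z + block_size, y:y + block_size, x:x + block_size]
        flat = block.ravel()          # copy of the block's voxels
        rng.shuffle(flat)             # permute voxels within the block
        out[z:z + block_size,
            y:y + block_size,
            x:x + block_size] = flat.reshape(block.shape)
    return out

def mse_loss(pred, target):
    # Reconstruction loss between the prediction x'_i and ground truth x_i.
    return float(np.mean((pred - target) ** 2))

# Toy usage: transform a random 16x16x16 sub-volume and measure how far
# the transformed input is from the ground truth it must be restored to.
x = np.random.default_rng(0).random((16, 16, 16))   # original sub-volume x_i
x_bar = local_pixel_shuffle(x, seed=0)              # transformed input
loss_if_identity = mse_loss(x_bar, x)               # loss of a do-nothing model
```

Note that shuffling only permutes voxel values, so a perfect restoration is always possible in principle; the MSE drives the network to recover the original arrangement.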
Source: Models Genesis