Recent advances in language-image pre-training have given rise to transferable systems that can effortlessly adapt to a wide range of computer vision and multimodal tasks in the wild. This also poses a challenge for evaluating the transferability of these models, due to the lack of easy-to-use evaluation toolkits and public benchmarks. The "Segmentation in the Wild (SegInW)" Challenge, part of X-Decoder, proposes a new benchmark to evaluate the transferability of pre-trained vision models. The benchmark presents a diverse set of downstream segmentation datasets, measuring both the segmentation accuracy of pre-trained models and their transfer efficiency on a new task, in terms of training examples and trainable parameters. The SegInW Challenge consists of 25 free public segmentation datasets, crowd-sourced on roboflow.com. For more details about the challenge submission format, please refer to X-Decoder for SegInW.
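To illustrate the accuracy side of such an evaluation, below is a minimal mean intersection-over-union (mIoU) sketch, the standard segmentation accuracy metric. This is not the official SegInW evaluation code; the function name and toy label maps are illustrative assumptions:

```python
import numpy as np

def mean_iou(pred, gt, num_classes):
    """Mean IoU over classes present in prediction or ground truth.

    pred, gt: integer label maps of identical shape.
    """
    ious = []
    for c in range(num_classes):
        p, g = pred == c, gt == c
        union = np.logical_or(p, g).sum()
        if union == 0:
            continue  # skip classes absent from both masks
        inter = np.logical_and(p, g).sum()
        ious.append(inter / union)
    return float(np.mean(ious))

# Toy 2x2 example: class 0 IoU = 1/2, class 1 IoU = 2/3
pred = np.array([[0, 1], [1, 1]])
gt   = np.array([[0, 1], [0, 1]])
print(mean_iou(pred, gt, 2))  # → 0.5833...
```

A real benchmark run would compute this per dataset and also report transfer efficiency, e.g. the number of training examples and trainable parameters used for adaptation.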