Out-of-distribution detection seeks to identify novelties: samples that deviate from the training distribution. The task is particularly challenging when the normal data distribution comprises multiple semantic classes (e.g., multiple object categories). Current approaches overcome this challenge by requiring manual labels for the normal images provided during training. In this work, we tackle multi-class novelty detection without class labels. Our simple but effective solution consists of two stages: we first discover "pseudo-class" labels using unsupervised clustering; then, using these pseudo-class labels, we apply standard supervised out-of-distribution detection methods. We verify the performance of our method through a favorable comparison to the state of the art, and provide extensive analysis and ablations.
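To make the two-stage idea concrete, here is a minimal sketch, not the authors' implementation: stage 1 runs a simple k-means clustering to assign pseudo-class labels to unlabeled normal data, and stage 2 is stood in for by a nearest-pseudo-class-centroid distance as the novelty score (the paper instead trains a supervised detector on the pseudo-labels). All function names and the toy data are illustrative assumptions.

```python
import numpy as np

def kmeans(X, k, n_iter=50, seed=0):
    """Stage 1 (sketch): discover pseudo-class labels via k-means."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)].copy()
    for _ in range(n_iter):
        # assign each sample to its nearest centroid
        d = np.linalg.norm(X[:, None] - centroids[None], axis=-1)
        labels = d.argmin(axis=1)
        # recompute centroids from current assignments
        for j in range(k):
            if (labels == j).any():
                centroids[j] = X[labels == j].mean(axis=0)
    return centroids, labels

def ood_score(x, centroids):
    """Stage 2 proxy (assumption): distance to the nearest
    pseudo-class centroid; higher means more novel."""
    return np.linalg.norm(centroids - x, axis=1).min()

# Toy in-distribution data with two latent semantic classes.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.1, (50, 2)),
               rng.normal(5, 0.1, (50, 2))])
centroids, pseudo_labels = kmeans(X, k=2)

print(ood_score(np.array([0.0, 0.0]), centroids))    # small: in-distribution
print(ood_score(np.array([20.0, 20.0]), centroids))  # large: novelty
```

In the actual method, the pseudo-labels would feed a standard supervised out-of-distribution detector rather than this centroid-distance heuristic.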