Beyond Laurel/Yanny: An Autoencoder-Enabled Search for Polyperceivable Audio
The famous "laurel/yanny" phenomenon references an audio clip that elicits dramatically different responses from different listeners. For the original clip, roughly half the population hears the word "laurel," while the other half hears "yanny." How common are such "polyperceivable" audio clips? In this paper we apply ML techniques to study the prevalence of polyperceivability in spoken language. We devise a metric that correlates with polyperceivability of audio clips, use it to efficiently find new "laurel/yanny"-type examples, and validate these results with human experiments. Our results suggest that polyperceivable examples are surprisingly prevalent in natural language, existing for >2% of English words.