12 Sep 2017 • Lyan Verwimp, Joris Pelemans, Marieke Lycke, Hugo Van hamme, Patrick Wambacq
One model was trained on all available data (46M word tokens), but we also trained models on specific types of TV shows or on individual domains/topics.
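The setup above (one global model plus per-domain models) can be sketched as follows. This is a minimal illustration with a toy corpus and simple bigram counts, not the paper's actual models or data; the domain labels and sentences are hypothetical.

```python
from collections import Counter, defaultdict

def train_bigram_counts(sentences):
    # Count bigrams over whitespace-tokenized sentences,
    # with explicit start/end-of-sentence markers.
    counts = Counter()
    for sent in sentences:
        tokens = ["<s>"] + sent.split() + ["</s>"]
        counts.update(zip(tokens, tokens[1:]))
    return counts

# Hypothetical toy corpus, each sentence tagged with a domain
# (standing in for a TV-show type or topic).
corpus = [
    ("news", "the economy grew"),
    ("news", "the parliament voted"),
    ("sports", "the team won"),
]

# One model trained on all available data ...
global_model = train_bigram_counts(sent for _, sent in corpus)

# ... and one model per domain, trained only on that domain's sentences.
by_domain = defaultdict(list)
for domain, sent in corpus:
    by_domain[domain].append(sent)
domain_models = {d: train_bigram_counts(sents) for d, sents in by_domain.items()}
```

A domain-specific model sees far less data than the global one but matches the target domain's vocabulary and style more closely, which is the trade-off the abstract alludes to.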