ACL 2019 • Jeffrey Lund, Piper Armstrong, Wilson Fearn, Stephen Cowley, Courtni Byun, Jordan Boyd-Graber, Kevin Seppi
Topic models are typically evaluated with respect to the global topic distributions that they generate, using metrics such as coherence, but without regard to local (token-level) topic assignments.
NAACL 2019 • Jeffrey Lund, Piper Armstrong, Wilson Fearn, Stephen Cowley, Emily Hales, Kevin Seppi
Cross-referencing, which links passages of text to other related passages, can be a valuable study aid for facilitating comprehension of a text.
EMNLP 2018 • Jeffrey Lund, Stephen Cowley, Wilson Fearn, Emily Hales, Kevin Seppi
We propose Labeled Anchors, an interactive and supervised topic model based on the anchor words algorithm (Arora et al., 2013).
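The anchor words algorithm this builds on recovers topics from a word co-occurrence matrix by expressing each word's row as a (near-)convex combination of a few "anchor" rows. A minimal sketch of that recovery step, assuming a simple unconstrained least-squares fit with clipping in place of the paper's constrained solver (the function name and simplifications are hypothetical):

```python
import numpy as np

def recover_topics(Q, anchors):
    """Simplified sketch of anchor-based topic recovery (after Arora
    et al., 2013): express each word's row-normalized co-occurrence
    vector as an approximate convex combination of the anchor rows,
    then renormalize to obtain p(word | topic)."""
    Q_bar = Q / Q.sum(axis=1, keepdims=True)    # row-normalize co-occurrence
    A = Q_bar[anchors]                          # anchor rows, shape (K, V)
    # Least-squares coefficients, clipped to the nonnegative orthant
    # (the original algorithm solves a simplex-constrained problem).
    C, *_ = np.linalg.lstsq(A.T, Q_bar.T, rcond=None)
    C = np.clip(C.T, 0.0, None)                 # shape (V, K)
    C /= C.sum(axis=1, keepdims=True)           # rows onto the simplex
    # Convert p(topic | word) weights into p(word | topic) columns.
    p_w = Q.sum(axis=1) / Q.sum()               # word marginals
    topics = C * p_w[:, None]
    return topics / topics.sum(axis=0, keepdims=True)
```

Each column of the result is a topic's word distribution; the supervised, interactive extension described in the paper layers label information on top of this recovery step.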
EMNLP 2017 • You Lu, Jeffrey Lund, Jordan Boyd-Graber
For online topic modeling, the magnitude of gradients is very large.
ACL 2017 • Jeffrey Lund, Connor Cook, Kevin Seppi, Jordan Boyd-Graber
We propose combinations of words as anchors, going beyond existing single-word anchor algorithms, an approach we call "Tandem Anchors".
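Combining several words into one anchor amounts to merging their co-occurrence vectors into a single pseudo-anchor row. A minimal sketch, assuming an element-wise harmonic mean as the combination function (one of the options explored for tandem anchors; the function name and epsilon guard are illustrative):

```python
import numpy as np

def tandem_anchor(rows):
    """Combine several anchor words' row-normalized co-occurrence
    vectors into one pseudo-anchor via the element-wise harmonic mean.
    `rows` is an (m, V) array; a small epsilon guards zero entries."""
    rows = np.asarray(rows, dtype=float)
    eps = 1e-10
    hm = len(rows) / np.sum(1.0 / (rows + eps), axis=0)
    return hm / hm.sum()    # renormalize to a distribution
```

The harmonic mean is dominated by small entries, so the combined anchor emphasizes contexts that all of the constituent words share, which is the intuition behind multiword anchors.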
COLING 2016 • Jeffrey Lund, Paul Felt, Kevin Seppi, Eric Ringger
Probabilistic models are a useful means for analyzing large text corpora.