MTLHealth: A Deep Learning System for Detecting Disturbing Content in Student Essays

7 Mar 2021 · Joseph Valencia, Erin Yao

Essay submissions to standardized tests like the ACT occasionally include references to bullying, self-harm, violence, and other forms of disturbing content. Graders must take great care to identify such cases and decide whether to alert authorities on behalf of students who may be in danger. There is a growing need for robust computer systems that support human decision-makers by automatically flagging potential instances of disturbing content. This paper describes MTLHealth, a disturbing content detection pipeline built around recent advances in computational linguistics, particularly pre-trained Transformer language models.
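The pipeline itself is not released with this abstract. As a rough illustration of the kind of system the abstract describes, the sketch below runs sentence-level flagging with a pre-trained Transformer classifier. The checkpoint name, the label mapping, and the score threshold are assumptions for illustration only, not the authors' model or settings; the classification head would need to be fine-tuned on labeled essay data before its scores are meaningful.

```python
# Minimal sketch: flag potentially disturbing sentences with a Transformer
# classifier. The checkpoint, label mapping, and threshold are placeholders,
# not the MTLHealth pipeline itself.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "bert-base-uncased"  # stand-in checkpoint; the head is untrained here

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)
model.eval()


def flag_sentences(sentences, threshold=0.5):
    """Return (sentence, score) pairs whose 'disturbing' probability exceeds threshold."""
    inputs = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    # Assume label index 1 corresponds to "disturbing content" after fine-tuning.
    probs = torch.softmax(logits, dim=-1)[:, 1]
    return [(s, p.item()) for s, p in zip(sentences, probs) if p.item() >= threshold]


if __name__ == "__main__":
    essay_sentences = [
        "My favorite class this year was biology.",
        "Sometimes I feel like hurting myself.",
    ]
    for sentence, score in flag_sentences(essay_sentences):
        print(f"{score:.2f}  {sentence}")
```

In practice, flagged sentences would be surfaced to human graders for review rather than acted on automatically, consistent with the decision-support framing in the abstract.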
