Text-based depression detection on sparse data

8 Apr 2019 · Heinrich Dinkel, Mengyue Wu, Kai Yu

Previous text-based depression detection has commonly relied on large amounts of user-generated data; sparse scenarios such as clinical conversations are less investigated. This work proposes a text-based multi-task BGRU network with pretrained word embeddings to model patients' responses during clinical interviews. Our main approach uses a novel multi-task loss function that models both depression severity and binary health state. We independently investigate word- and sentence-level embeddings as well as the use of large-data pretraining for depression detection. To strengthen our findings, we report results averaged over many independent runs on this sparse data. First, we show that pretraining is helpful for word-level text-based depression detection. Second, our results demonstrate that sentence-level embeddings should generally be preferred over word-level ones. While the choice of pooling function is less crucial, mean and attention pooling should be preferred over last-timestep pooling. Our method outputs both a depression-presence decision and a predicted severity score, culminating in a macro F1 score of 0.84 and an MAE of 3.48 on the DAIC-WOZ development set.
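For readers who want a concrete picture of the described setup, the following is a minimal PyTorch sketch of a bidirectional GRU over pretrained sentence embeddings with mean pooling and a two-headed multi-task objective (binary cross-entropy for health state plus a regression term for severity). The layer sizes, Huber regression loss, and weighting `alpha` are illustrative assumptions, not the paper's published configuration.

```python
# Illustrative sketch only: the exact loss form, pooling choice, and
# dimensions are assumptions, not the paper's published configuration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiTaskBGRU(nn.Module):
    def __init__(self, emb_dim=300, hidden=128):
        super().__init__()
        self.bgru = nn.GRU(emb_dim, hidden, batch_first=True,
                           bidirectional=True)
        # Two heads: binary health state and depression severity score.
        self.clf_head = nn.Linear(2 * hidden, 1)  # depressed vs. healthy
        self.reg_head = nn.Linear(2 * hidden, 1)  # severity regression

    def forward(self, x):
        # x: (batch, num_sentences, emb_dim) pretrained sentence embeddings
        out, _ = self.bgru(x)
        pooled = out.mean(dim=1)  # mean pooling over time steps
        return (self.clf_head(pooled).squeeze(-1),
                self.reg_head(pooled).squeeze(-1))

def multi_task_loss(logit, severity_pred, label, severity, alpha=0.5):
    # Joint objective: classification + regression (assumed weighting).
    bce = F.binary_cross_entropy_with_logits(logit, label)
    reg = F.huber_loss(severity_pred, severity)
    return alpha * bce + (1 - alpha) * reg
```

Attention pooling, which the paper reports as comparably effective to mean pooling, would replace the `out.mean(dim=1)` step with a learned weighted sum over time steps.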

Datasets

- DAIC-WOZ
