no code implementations • NAACL (CLPsych) 2022 • Adithya V Ganesan, Vasudha Varadarajan, Juhi Mittal, Shashanka Subrahmanya, Matthew Matero, Nikita Soni, Sharath Chandra Guntuku, Johannes Eichstaedt, H. Andrew Schwartz
Psychological states unfold dynamically; to understand and measure mental health at scale we need to detect and measure these changes from sequences of online posts.
no code implementations • WASSA (ACL) 2022 • Matthew Matero, Albert Hung, H. Schwartz
Many recent works in natural language processing have demonstrated the ability to assess aspects of mental health from personal discourse.
no code implementations • 3 Feb 2024 • Gourab Dey, Adithya V Ganesan, Yash Kumar Lal, Manal Shah, Shreyashee Sinha, Matthew Matero, Salvatore Giorgi, Vivek Kulkarni, H. Andrew Schwartz
Social science NLP tasks, such as emotion or humor detection, are required to capture the semantics along with the implicit pragmatics from text, often with limited amounts of training data.
1 code implementation • Findings (ACL) 2022 • Nikita Soni, Matthew Matero, Niranjan Balasubramanian, H. Andrew Schwartz
Natural language is generated by people, yet traditional language modeling views words or documents as if generated independently.
no code implementations • 27 Dec 2021 • Matthew Matero, Albert Hung, H. Andrew Schwartz
Recent works have demonstrated the ability to assess aspects of mental health from personal discourse.
1 code implementation • Findings (EMNLP) 2021 • Matthew Matero, Nikita Soni, Niranjan Balasubramanian, H. Andrew Schwartz
Much of natural language processing is focused on leveraging large capacity language models, typically trained over single messages with a task of predicting one or more tokens.
1 code implementation • NAACL 2021 • Adithya V Ganesan, Matthew Matero, Aravind Reddy Ravula, Huy Vu, H. Andrew Schwartz
In human-level NLP tasks, such as predicting mental health, personality, or demographics, the number of observations is often smaller than the standard 768+ hidden state sizes of each layer within modern transformer-based language models, limiting the ability to effectively leverage transformers.
1 code implementation • COLING 2020 • Matthew Matero, H. Andrew Schwartz
Human natural language is expressed at a specific point in time, while human emotions change over time.
no code implementations • WS 2019 • Matthew Matero, Akash Idnani, Youngseo Son, Salvatore Giorgi, Huy Vu, Mohammad Zamani, Parth Limbachiya, Sharath Chandra Guntuku, H. Andrew Schwartz
Mental health predictive systems typically model language as if from a single context (e.g., Twitter posts, status updates, or forum posts) and are often limited to a single level of analysis (e.g., either the message-level or user-level).