Search Results for author: Matthew Matero

Found 9 papers, 4 papers with code

SOCIALITE-LLAMA: An Instruction-Tuned Model for Social Scientific Tasks

no code implementations • 3 Feb 2024 • Gourab Dey, Adithya V Ganesan, Yash Kumar Lal, Manal Shah, Shreyashee Sinha, Matthew Matero, Salvatore Giorgi, Vivek Kulkarni, H. Andrew Schwartz

Social science NLP tasks, such as emotion or humor detection, are required to capture the semantics along with the implicit pragmatics from text, often with limited amounts of training data.

Humor Detection • Reading Comprehension
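
The entry above describes an instruction-tuned model for social-science tasks. As a rough illustration of what instruction-style prompting for such a task can look like, here is a minimal sketch; the instruction template and the placeholder model id are assumptions, not the paper's actual prompt format or released checkpoint.

```python
# Minimal sketch of instruction-style prompting for a social-science task such
# as humor detection. The template and the model id are illustrative
# placeholders, not the prompt format or checkpoint from SOCIALITE-LLAMA.
from transformers import pipeline

def build_prompt(text: str) -> str:
    # Hypothetical instruction template for binary humor detection.
    return (
        "Instruction: Decide whether the following text is humorous. "
        "Answer with 'humorous' or 'not humorous'.\n"
        f"Input: {text}\n"
        "Answer:"
    )

if __name__ == "__main__":
    # Any instruction-tuned causal LM could stand in here; gpt2 is used only
    # so the example runs without gated-model access.
    generator = pipeline("text-generation", model="gpt2")
    prompt = build_prompt("I told my computer a joke, but it lost the thread.")
    out = generator(prompt, max_new_tokens=5, do_sample=False)
    print(out[0]["generated_text"])
```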

Human Language Modeling

1 code implementation • Findings (ACL) 2022 • Nikita Soni, Matthew Matero, Niranjan Balasubramanian, H. Andrew Schwartz

Natural language is generated by people, yet traditional language modeling views words or documents as if generated independently.

Age Estimation • Language Modelling +3
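
As a rough sketch of the idea that language should be modeled with its author in mind, the snippet below conditions every token prediction on a persistent per-user vector. The architecture (a GRU whose initial state is a user embedding) is an illustrative stand-in, not the human language model proposed in the paper.

```python
# Minimal sketch (NOT the paper's architecture): each token prediction is
# conditioned on a persistent user-state vector, so documents by the same
# person are no longer treated as independent.
import torch
import torch.nn as nn

class UserConditionedLM(nn.Module):
    def __init__(self, vocab_size=1000, n_users=50, d=64):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab_size, d)
        self.user_emb = nn.Embedding(n_users, d)   # persistent per-user state
        self.rnn = nn.GRU(d, d, batch_first=True)  # stand-in for a transformer
        self.head = nn.Linear(d, vocab_size)

    def forward(self, token_ids, user_ids):
        x = self.tok_emb(token_ids)                # (B, T, d)
        h0 = self.user_emb(user_ids).unsqueeze(0)  # (1, B, d) initial user state
        out, _ = self.rnn(x, h0)                   # user state conditions every step
        return self.head(out)                      # next-token logits

tokens = torch.randint(0, 1000, (2, 12))   # two documents
users = torch.tensor([3, 7])               # written by different people
logits = UserConditionedLM()(tokens, users)
print(logits.shape)                        # torch.Size([2, 12, 1000])
```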

MeLT: Message-Level Transformer with Masked Document Representations as Pre-Training for Stance Detection

1 code implementation • Findings (EMNLP) 2021 • Matthew Matero, Nikita Soni, Niranjan Balasubramanian, H. Andrew Schwartz

Much of natural language processing is focused on leveraging large capacity language models, typically trained over single messages with a task of predicting one or more tokens.

Attribute • Language Modelling +2
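
The entry above pre-trains with masked document (message) representations rather than masked tokens. Below is a rough sketch of that idea under assumed dimensions: one message vector in a user's sequence is hidden and a message-level encoder is trained to reconstruct it from its neighbours. The zero-masking and MSE objective are illustrative simplifications, not the paper's exact setup.

```python
# Rough sketch: mask one *message* vector (not a token) in a user's message
# sequence and reconstruct it from the surrounding messages.
import torch
import torch.nn as nn

d = 32                                   # message-vector dimensionality (assumed)
msgs = torch.randn(1, 10, d)             # 10 pre-computed message embeddings, one user
mask_idx = 4

inp = msgs.clone()
inp[0, mask_idx] = 0.0                   # hide the held-out message (a learned
                                         # [MASK] vector could be used instead)

encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=d, nhead=4, batch_first=True),
    num_layers=2,
)
recon = encoder(inp)                     # contextualised message vectors

# Reconstruction loss on the masked position only.
loss = nn.functional.mse_loss(recon[0, mask_idx], msgs[0, mask_idx])
loss.backward()
print(float(loss))
```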

Empirical Evaluation of Pre-trained Transformers for Human-Level NLP: The Role of Sample Size and Dimensionality

1 code implementation • NAACL 2021 • Adithya V Ganesan, Matthew Matero, Aravind Reddy Ravula, Huy Vu, H. Andrew Schwartz

In human-level NLP tasks, such as predicting mental health, personality, or demographics, the number of observations is often smaller than the standard 768+ hidden state sizes of each layer within modern transformer-based language models, limiting the ability to effectively leverage transformers.

Dimensionality Reduction
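
The abstract above points at a mismatch: far fewer labelled people than the 768+ hidden dimensions of a transformer layer. A minimal sketch of the general remedy, reducing the embedding dimensionality before fitting a small downstream model, is shown below on synthetic data; PCA plus logistic regression is an illustrative pipeline, not the paper's exact recipe or reported numbers.

```python
# Sketch of the small-n, large-d setting: reduce transformer-sized features
# before fitting a downstream model, and compare against using them directly.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n, d = 120, 768                      # few observations, BERT-sized features
X = rng.normal(size=(n, d))          # stand-in for per-person averaged embeddings
y = rng.integers(0, 2, size=n)       # stand-in for a human-level label

reduced = make_pipeline(PCA(n_components=64), LogisticRegression(max_iter=1000))
full = LogisticRegression(max_iter=1000)

print("reduced:", cross_val_score(reduced, X, y, cv=5).mean())
print("full   :", cross_val_score(full, X, y, cv=5).mean())
```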

Autoregressive Affective Language Forecasting: A Self-Supervised Task

1 code implementation • COLING 2020 • Matthew Matero, H. Andrew Schwartz

Human natural language is expressed at a specific point in time, while human emotions change over time.
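
The abstract above frames forecasting as a self-supervised task: because affect scores form a time series per person, training targets are simply future values of the same series. A toy sketch on synthetic data follows; the window size and the linear model are assumptions, not the paper's method.

```python
# Toy self-supervised forecasting: predict the next affect score from the
# previous k scores of the same series (no external labels needed).
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
affect = np.cumsum(rng.normal(size=200)) * 0.1   # stand-in weekly affect scores
k = 4                                            # history window (assumed)

# Build (history -> next value) pairs from the series itself.
X = np.stack([affect[i:i + k] for i in range(len(affect) - k)])
y = affect[k:]

model = LinearRegression().fit(X[:-20], y[:-20])  # train on the past
preds = model.predict(X[-20:])                    # forecast held-out weeks
print("MAE:", np.abs(preds - y[-20:]).mean())
```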

Suicide Risk Assessment with Multi-level Dual-Context Language and BERT

no code implementations • WS 2019 • Matthew Matero, Akash Idnani, Youngseo Son, Salvatore Giorgi, Huy Vu, Mohammad Zamani, Parth Limbachiya, Sharath Chandra Guntuku, H. Andrew Schwartz

Mental health predictive systems typically model language as if from a single context (e.g., Twitter posts, status updates, or forum posts) and are often limited to a single level of analysis (e.g., either the message level or the user level).
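
As a toy illustration of modelling more than one level of analysis, the sketch below pairs each message-level embedding with a user-level aggregate of the same user's messages; the synthetic features and the simple mean pooling are assumptions, not the paper's dual-context BERT model.

```python
# Toy multi-level features: each message is represented by its own embedding
# plus an aggregate of all messages from the same user.
import numpy as np

rng = np.random.default_rng(0)
d = 16
user_messages = rng.normal(size=(30, d))   # 30 message embeddings for one user

message_level = user_messages              # per-message representation
user_level = user_messages.mean(axis=0)    # user-level context vector

# Combine the two levels for each message.
combined = np.concatenate(
    [message_level, np.repeat(user_level[None, :], len(message_level), axis=0)],
    axis=1,
)
print(combined.shape)  # (30, 32): message-level and user-level views side by side
```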
