Search Results for author: Sandra Matz

Found 4 papers, 1 paper with code

Large Language Models Can Infer Psychological Dispositions of Social Media Users

no code implementations · 13 Sep 2023 · Heinrich Peters, Sandra Matz

As Large Language Models (LLMs) demonstrate increasingly human-like abilities in various natural language processing (NLP) tasks that are bound to become integral to personalized technologies, understanding their capabilities and inherent biases is crucial.

Zero-Shot Learning

The Managerial Effects of Algorithmic Fairness Activism

no code implementations · 4 Dec 2020 · Bo Cowgill, Fabrizio Dell'Acqua, Sandra Matz

We randomly expose business decision-makers to arguments used in AI fairness activism.

Ethics · Fairness

Correcting Sociodemographic Selection Biases for Population Prediction from Social Media

1 code implementation · 10 Nov 2019 · Salvatore Giorgi, Veronica Lynn, Keshav Gupta, Farhan Ahmed, Sandra Matz, Lyle Ungar, H. Andrew Schwartz

However, social media users are not typically a representative sample of the intended population -- a "selection bias".

Selection bias
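The selection bias described above is commonly addressed by reweighting the sample toward known population demographics. Below is a minimal, hypothetical sketch of post-stratification reweighting; all group names and numbers are invented for illustration, and the paper's actual correction method may differ.

```python
# Hypothetical post-stratification sketch: reweight a non-representative
# social media sample so group shares match the target population.
# All figures below are invented for illustration.

# Share of each age bracket in the social media sample vs. the population.
sample_share = {"18-29": 0.55, "30-49": 0.35, "50+": 0.10}
population_share = {"18-29": 0.20, "30-49": 0.35, "50+": 0.45}

# Weight each group so the weighted sample matches the population.
weights = {g: population_share[g] / sample_share[g] for g in sample_share}

# Illustrative per-group predicted outcome (e.g., a mean trait score).
group_pred = {"18-29": 0.62, "30-49": 0.48, "50+": 0.41}

# Naive estimate averages over the biased sample; the weighted estimate
# averages over the population composition instead.
naive = sum(sample_share[g] * group_pred[g] for g in group_pred)
corrected = sum(sample_share[g] * weights[g] * group_pred[g] for g in group_pred)
```

Because older users are underrepresented in the hypothetical sample, the naive estimate overweights the younger groups; the corrected estimate shifts toward the older groups' value.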

Latent Human Traits in the Language of Social Media: An Open-Vocabulary Approach

no code implementations · 22 May 2017 · Vivek Kulkarni, Margaret L. Kern, David Stillwell, Michal Kosinski, Sandra Matz, Lyle Ungar, Steven Skiena, H. Andrew Schwartz

Taking advantage of linguistic information available through Facebook, we study the process of inferring a new set of potential human traits based on unprompted language use.
