Search Results for author: Sarah Masud Preum

Found 6 papers, 2 papers with code

Do LLMs Find Human Answers To Fact-Driven Questions Perplexing? A Case Study on Reddit

no code implementations • 1 Apr 2024 • Parker Seegmiller, Joseph Gatto, Omar Sharif, Madhusudan Basak, Sarah Masud Preum

Large language models (LLMs) have been shown to be proficient in correctly answering questions in the context of online discourse.
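
As context for the title's question: perplexity is the standard measure of how "surprising" a model finds a piece of text, computed as the exponentiated mean negative log-likelihood of its tokens. A minimal sketch of scoring a question-answer pair under a causal LM follows; the model (gpt2) and the Q/A prompt format are illustrative assumptions, not the paper's setup.

```python
# Minimal sketch: perplexity of a human-written answer under a causal LM.
# Model choice (gpt2) and the example texts are assumptions for illustration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Perplexity = exp(mean negative log-likelihood of the tokens)."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # With labels supplied, the model returns the mean cross-entropy loss.
        loss = model(**enc, labels=enc["input_ids"]).loss
    return float(torch.exp(loss))

question = "What year did the Apollo 11 mission land on the Moon?"
human_answer = "Pretty sure it was 1969, the summer if I remember right."

print(perplexity(f"Q: {question}\nA: {human_answer}"))
```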

Chain-of-Thought Embeddings for Stance Detection on Social Media

1 code implementation • 30 Oct 2023 • Joseph Gatto, Omar Sharif, Sarah Masud Preum

Chain-of-Thought (COT) prompting has recently been shown to improve performance on stance detection tasks.

Stance Detection
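
The title suggests that rationales elicited by CoT prompting are embedded and fed to a downstream stance classifier. A minimal sketch of that idea, assuming a generic CoT prompt, the all-MiniLM-L6-v2 encoder, and a logistic-regression head; none of these are confirmed as the paper's pipeline.

```python
# Minimal sketch of the chain-of-thought-embedding idea: embed an LLM's
# step-by-step rationale and train a light stance classifier on top.
# The prompt wording, encoder, classifier, and toy data are illustrative
# assumptions, not the paper's exact pipeline.
from sentence_transformers import SentenceTransformer
from sklearn.linear_model import LogisticRegression

COT_PROMPT = (
    "Post: {post}\nTarget: {target}\n"
    "Think step by step about whether the post is FAVOR, AGAINST, or "
    "NEUTRAL toward the target, then state the stance."
)

def cot_rationale(post: str, target: str) -> str:
    # Placeholder: call an LLM of your choice with COT_PROMPT here.
    raise NotImplementedError

# Toy rationales standing in for real LLM outputs, with gold stance labels.
rationales = [
    "The post praises the policy's benefits, so it supports the target.",
    "The post mocks the policy as useless, so it opposes the target.",
]
labels = ["FAVOR", "AGAINST"]

encoder = SentenceTransformer("all-MiniLM-L6-v2")
X = encoder.encode(rationales)  # one embedding per rationale
clf = LogisticRegression(max_iter=1000).fit(X, labels)
```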

Statistical Depth for Ranking and Characterizing Transformer-Based Text Embeddings

1 code implementation • 23 Oct 2023 • Parker Seegmiller, Sarah Masud Preum

We adopt a statistical depth for measuring distributions of transformer-based text embeddings, which we call transformer-based text embedding (TTE) depth, and introduce its practical use for both modeling and distributional inference in NLP pipelines.

Data Augmentation · In-Context Learning · +2
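
For intuition: a statistical depth scores how central a point is within a distribution, so deeper embeddings correspond to more "typical" texts in a corpus. The sketch below uses classical spatial (L1) depth as a stand-in; the paper's TTE depth formulation may differ, and the embedding model here is an assumption.

```python
# Minimal sketch: rank texts by a statistical depth over their embeddings.
# Spatial (L1) depth is one classical depth notion, used here for
# illustration only; the paper's TTE depth may be defined differently.
import numpy as np
from sentence_transformers import SentenceTransformer

def spatial_depth(z: np.ndarray, corpus: np.ndarray) -> float:
    """1 - ||mean of unit vectors from z to corpus points||.
    Values near 1: z is central; near 0: z is an outlier."""
    diffs = corpus - z
    norms = np.linalg.norm(diffs, axis=1, keepdims=True)
    units = diffs / np.clip(norms, 1e-12, None)  # z's own zero diff stays zero
    return 1.0 - float(np.linalg.norm(units.mean(axis=0)))

texts = [
    "The committee approved the budget on Tuesday.",
    "Lawmakers passed the spending bill this week.",
    "Bananas are an excellent source of potassium.",
]
emb = SentenceTransformer("all-MiniLM-L6-v2").encode(texts)
depths = [spatial_depth(e, emb) for e in emb]

# Rank texts from most central to most outlying.
for d, t in sorted(zip(depths, texts), reverse=True):
    print(f"{d:.3f}  {t}")
```

The off-topic third sentence should receive the lowest depth, illustrating how depth can rank corpus members by typicality.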

Text Encoders Lack Knowledge: Leveraging Generative LLMs for Domain-Specific Semantic Textual Similarity

no code implementations • 12 Sep 2023 • Joseph Gatto, Omar Sharif, Parker Seegmiller, Philip Bohlman, Sarah Masud Preum

Additionally, we show that generative LLMs significantly outperform existing encoder-based STS models when characterizing the semantic similarity between two texts with complex semantic relationships dependent on world knowledge.

Memorization · Semantic Similarity · +4
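
One common way to use a generative LLM for STS is to prompt it directly for a similarity rating. A minimal sketch, assuming an OpenAI chat model and a 0-5 rating prompt; the prompt wording and model are illustrative assumptions, not necessarily the paper's method.

```python
# Minimal sketch: score semantic textual similarity with a generative LLM
# by direct prompting. Prompt wording and model are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

STS_PROMPT = (
    "On a scale from 0 (unrelated) to 5 (equivalent in meaning), rate the "
    "semantic similarity of these two texts. Reply with only the number.\n"
    "Text 1: {a}\nText 2: {b}"
)

def llm_sts(a: str, b: str) -> float:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": STS_PROMPT.format(a=a, b=b)}],
        temperature=0,
    )
    return float(resp.choices[0].message.content.strip())

# Domain-specific pair whose similarity depends on world knowledge
# (that metformin treats type 2 diabetes), not surface overlap.
print(llm_sts(
    "Metformin is a first-line treatment for type 2 diabetes.",
    "Doctors commonly prescribe metformin to manage T2D.",
))
```

Parsing the raw completion as a float is brittle; a production version would validate the reply before converting it.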
