Search Results for author: James R. Foulds

Found 9 papers, 2 papers with code

Polling Latent Opinions: A Method for Computational Sociolinguistics Using Transformer Language Models

1 code implementation • 15 Apr 2022 • Philip Feldman, Aaron Dant, James R. Foulds, Shimei Pan

Text analysis of social media for sentiment, topics, and other properties depends initially on the selection of keywords and phrases that will be used to create the research corpora.

Memorization

Dense Distributions from Sparse Samples: Improved Gibbs Sampling Parameter Estimators for LDA

1 code implementation • 8 May 2015 • Yannis Papanikolaou, James R. Foulds, Timothy N. Rubin, Grigorios Tsoumakas

We introduce a novel approach for estimating Latent Dirichlet Allocation (LDA) parameters from collapsed Gibbs samples (CGS), by leveraging the full conditional distributions over the latent variable assignments to efficiently average over multiple samples, for little more computational cost than drawing a single additional collapsed Gibbs sample.

Clustering • Multi-Label Classification
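The abstract above describes averaging over the full conditional distributions of the latent topic assignments, rather than using the hard counts of a single collapsed Gibbs sample. A minimal sketch of that idea (not the authors' code; function and variable names are illustrative) might look like:

```python
import numpy as np

def soft_count_estimates(docs, z, K, V, alpha, beta):
    """Sketch: estimate LDA's theta and phi from one collapsed Gibbs
    sample by replacing each token's hard topic count with its full
    conditional distribution p(z_i = k | rest), then normalizing the
    resulting expected counts."""
    D = len(docs)
    ndk = np.zeros((D, K))   # doc-topic counts
    nkw = np.zeros((K, V))   # topic-word counts
    for d, doc in enumerate(docs):
        for i, w in enumerate(doc):
            ndk[d, z[d][i]] += 1
            nkw[z[d][i], w] += 1
    nk = nkw.sum(axis=1)

    endk = np.zeros((D, K))  # expected (soft) doc-topic counts
    enkw = np.zeros((K, V))  # expected (soft) topic-word counts
    for d, doc in enumerate(docs):
        for i, w in enumerate(doc):
            k = z[d][i]
            # exclude the current token, as in collapsed Gibbs sampling
            ndk[d, k] -= 1; nkw[k, w] -= 1; nk[k] -= 1
            p = (ndk[d] + alpha) * (nkw[:, w] + beta) / (nk + V * beta)
            p /= p.sum()
            endk[d] += p
            enkw[:, w] += p
            ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1

    theta = (endk + alpha) / (endk.sum(axis=1, keepdims=True) + K * alpha)
    phi = (enkw + beta) / (enkw.sum(axis=1, keepdims=True) + V * beta)
    return theta, phi
```

The extra cost is roughly one more sweep over the tokens, which matches the abstract's claim of "little more computational cost than drawing a single additional collapsed Gibbs sample."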

Neural Embedding Allocation: Distributed Representations of Topic Models

no code implementations • 10 Sep 2019 • Kamrun Naher Keya, Yannis Papanikolaou, James R. Foulds

Word embedding models such as the skip-gram learn vector representations of words' semantic relationships, and document embedding models learn similar representations for documents.

Document Embedding • Topic Models
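For readers unfamiliar with the skip-gram model the abstract mentions, its training data is simply (center, context) word pairs drawn from a sliding window. A small illustrative sketch (names are mine, not from the paper):

```python
def skipgram_pairs(tokens, window=2):
    """Yield (center, context) training pairs for a skip-gram-style
    model, using a symmetric window around each token."""
    pairs = []
    for i, center in enumerate(tokens):
        lo = max(0, i - window)
        hi = min(len(tokens), i + window + 1)
        for j in range(lo, hi):
            if j != i:
                pairs.append((center, tokens[j]))
    return pairs
```

A model trained to predict the context word from the center word (or vice versa) learns the vector representations the abstract describes.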

Equitable Allocation of Healthcare Resources with Fair Cox Models

no code implementations • 14 Oct 2020 • Kamrun Naher Keya, Rashidul Islam, Shimei Pan, Ian Stockwell, James R. Foulds

Healthcare programs such as Medicaid provide crucial services to vulnerable populations, but due to limited resources, many of the individuals who need these services the most languish on waiting lists.

Fairness

Analyzing COVID-19 Tweets with Transformer-based Language Models

no code implementations • 20 Apr 2021 • Philip Feldman, Sim Tiwari, Charissa S. L. Cheah, James R. Foulds, Shimei Pan

This paper describes a method for using Transformer-based Language Models (TLMs) to understand public opinion from social media posts.

Learning User Embeddings from Temporal Social Media Data: A Survey

no code implementations • 17 May 2021 • Fatema Hasan, Kevin S. Xu, James R. Foulds, Shimei Pan

User-generated data on social media contain rich information about who we are, what we like and how we make decisions.

Representation Learning

Fair Inference for Discrete Latent Variable Models

no code implementations • 15 Sep 2022 • Rashidul Islam, Shimei Pan, James R. Foulds

It is now well understood that machine learning models, trained on data without due care, often exhibit unfair and discriminatory behavior against certain populations.

Fairness • Representation Learning +1

Trapping LLM Hallucinations Using Tagged Context Prompts

no code implementations • 9 Jun 2023 • Philip Feldman, James R. Foulds, Shimei Pan

Recent advances in large language models (LLMs), such as ChatGPT, have led to highly sophisticated conversation agents.

Hallucination
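One way to read "tagged context prompts" is that each supplied context passage carries an identifying tag, so a response can be checked for grounding against the tags it cites. This is a hedged sketch of that general idea, not the authors' implementation; all names here are hypothetical:

```python
import re

def tag_context(passages):
    """Prefix each context passage with a tag like <c1>, <c2>, ..."""
    return "\n".join(f"<c{i}> {p}" for i, p in enumerate(passages, start=1))

def cited_tags(response):
    """Return the set of context tags the model cited in its response."""
    return set(re.findall(r"<c\d+>", response))

def looks_ungrounded(response, n_passages):
    """Flag a response that cites no tags, or cites tags that were
    never supplied, as potentially hallucinated."""
    valid = {f"<c{i}>" for i in range(1, n_passages + 1)}
    cited = cited_tags(response)
    return not cited or not cited <= valid
```

The tagged prompt would be assembled with `tag_context(...)` plus an instruction asking the model to cite the tags it relies on; responses failing the check can then be trapped for review.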

Killer Apps: Low-Speed, Large-Scale AI Weapons

no code implementations • 14 Jan 2024 • Philip Feldman, Aaron Dant, James R. Foulds

The accelerating advancements in Artificial Intelligence (AI) and Machine Learning (ML), highlighted by the development of cutting-edge Generative Pre-trained Transformer (GPT) models by organizations such as OpenAI, Meta, and Anthropic, present new challenges and opportunities in warfare and security.

Decision Making
