Search Results for author: Maria Glenski

Found 12 papers, 3 papers with code

Foundation Models of Scientific Knowledge for Chemistry: Opportunities, Challenges and Lessons Learned

1 code implementation • BigScience (ACL) 2022 • Sameera Horawalavithana, Ellyn Ayton, Shivam Sharma, Scott Howland, Megha Subramanian, Scott Vasquez, Robin Cosbey, Maria Glenski, Svitlana Volkova

Foundation models pre-trained on large corpora demonstrate significant gains across many natural language processing tasks and domains, e.g., law, healthcare, and education.

EXPERT: Public Benchmarks for Dynamic Heterogeneous Academic Graphs

1 code implementation • 14 Apr 2022 • Sameera Horawalavithana, Ellyn Ayton, Anastasiya Usenko, Shivam Sharma, Jasmine Eshun, Robin Cosbey, Maria Glenski, Svitlana Volkova

Machine learning models that learn from dynamic graphs face nontrivial challenges in learning and inference as both nodes and edges change over time.

Unsupervised Keyphrase Extraction via Interpretable Neural Networks

1 code implementation • 15 Mar 2022 • Rishabh Joshi, Vidhisha Balachandran, Emily Saldanha, Maria Glenski, Svitlana Volkova, Yulia Tsvetkov

Keyphrase extraction aims at automatically extracting a list of "important" phrases representing the key concepts in a document.

Keyphrase Extraction • Topic Classification

Towards Trustworthy Deception Detection: Benchmarking Model Robustness across Domains, Modalities, and Languages

no code implementations • RDSM (COLING) 2020 • Maria Glenski, Ellyn Ayton, Robin Cosbey, Dustin Arendt, Svitlana Volkova

Our analyses reveal a significant drop in performance when testing neural models on out-of-domain data and non-English languages that may be mitigated using diverse training data.

Benchmarking • Deception Detection • +2

Evaluating Deception Detection Model Robustness To Linguistic Variation

no code implementations • NAACL (SocialNLP) 2021 • Maria Glenski, Ellyn Ayton, Robin Cosbey, Dustin Arendt, Svitlana Volkova

With the increasing use of machine-learning-driven algorithmic judgements, it is critical to develop models that are robust to evolving or manipulated inputs.

Adversarial Defense • Deception Detection • +1

Measure Utility, Gain Trust: Practical Advice for XAI Researcher

no code implementations • 27 Sep 2020 • Brittany Davis, Maria Glenski, William Sealy, Dustin Arendt

However, the focus on trust is too narrow and has led the research community astray from tried-and-true empirical methods that produced more defensible scientific knowledge about people and explanations.

BIG-bench Machine Learning • Explainable Artificial Intelligence (XAI)

Adjusting for Confounders with Text: Challenges and an Empirical Evaluation Framework for Causal Inference

no code implementations • 21 Sep 2020 • Galen Weld, Peter West, Maria Glenski, David Arbour, Ryan Rossi, Tim Althoff

Across 648 experiments and two datasets, we evaluate every commonly used causal inference method, identifying each method's strengths and weaknesses to inform social media researchers seeking to use such methods and to guide future improvements.

Causal Inference

Improved Forecasting of Cryptocurrency Price using Social Signals

no code implementations • 1 Jul 2019 • Maria Glenski, Tim Weninger, Svitlana Volkova

Social media signals have been successfully used to develop large-scale predictive and anticipatory analytics.

Identifying and Understanding User Reactions to Deceptive and Trusted Social News Sources

no code implementations • ACL 2018 • Maria Glenski, Tim Weninger, Svitlana Volkova

In the age of social news, it is important to understand the types of reactions that are evoked from news sources with various levels of credibility.
