1 code implementation • BigScience (ACL) 2022 • Sameera Horawalavithana, Ellyn Ayton, Shivam Sharma, Scott Howland, Megha Subramanian, Scott Vasquez, Robin Cosbey, Maria Glenski, Svitlana Volkova
Foundation models pre-trained on large corpora demonstrate significant gains across many natural language processing tasks and domains, e.g., law, healthcare, and education.
no code implementations • 21 Nov 2023 • Sai Munikoti, Anurag Acharya, Sridevi Wagle, Sameera Horawalavithana
We train a graph neural network on the curated document graph to act as a structural encoder for the corresponding passages retrieved during the model pretraining.
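The core idea — encoding a document graph with message passing so each passage embedding also reflects its neighbors — can be sketched as a single GCN-style layer. This is a minimal NumPy illustration, not the paper's actual architecture; the graph, dimensions, and variable names are made up for the example.

```python
import numpy as np

def gcn_layer(adj, features, weight):
    """One GCN-style message-passing step: aggregate neighbor
    features with symmetric normalization, then project."""
    # Add self-loops so each node keeps its own features
    a_hat = adj + np.eye(adj.shape[0])
    deg = a_hat.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
    norm_adj = d_inv_sqrt @ a_hat @ d_inv_sqrt
    # Aggregate neighbor features, project, apply ReLU
    return np.maximum(norm_adj @ features @ weight, 0.0)

# Toy document graph: 4 passages; edges encode links between documents
adj = np.array([[0, 1, 1, 0],
                [1, 0, 0, 1],
                [1, 0, 0, 0],
                [0, 1, 0, 0]], dtype=float)
rng = np.random.default_rng(0)
passage_emb = rng.normal(size=(4, 8))   # stand-in for text-encoder outputs
weight = rng.normal(size=(8, 8))

structural_emb = gcn_layer(adj, passage_emb, weight)
print(structural_emb.shape)  # (4, 8)
```

Stacking such layers lets each passage embedding absorb information from increasingly distant neighbors in the document graph before it is consumed during pretraining.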
1 code implementation • 15 Nov 2023 • Sridevi Wagle, Sai Munikoti, Anurag Acharya, Sara Smith, Sameera Horawalavithana
This research investigates how uncertainty scores vary when scientific knowledge is incorporated as pretraining and retrieval data and explores the relationship between uncertainty scores and the accuracy of model-generated outputs.
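One common way to score the uncertainty of a generated output — not necessarily the exact scoring used in this work — is mean token-level predictive entropy over the model's per-step probability distributions. A small self-contained sketch:

```python
import math

def predictive_entropy(token_probs):
    """Mean per-token entropy of a generated sequence; higher values
    indicate the model is less certain about its own output.
    token_probs: list of per-step probability distributions."""
    entropies = [-sum(p * math.log(p) for p in dist if p > 0)
                 for dist in token_probs]
    return sum(entropies) / len(entropies)

# A confident step vs. a maximally uncertain (uniform) step
confident = [[0.97, 0.01, 0.01, 0.01]]
uncertain = [[0.25, 0.25, 0.25, 0.25]]
print(predictive_entropy(confident) < predictive_entropy(uncertain))  # True
```

Comparing such scores before and after incorporating scientific knowledge into pretraining or retrieval is one way to probe the uncertainty–accuracy relationship the abstract describes.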
no code implementations • 7 Nov 2023 • Sai Munikoti, Anurag Acharya, Sridevi Wagle, Sameera Horawalavithana
Despite dramatic progress in Large Language Model (LLM) development, LLMs often generate seemingly plausible but non-factual information, commonly referred to as hallucinations.
1 code implementation • 17 Oct 2023 • Anurag Acharya, Sai Munikoti, Aaron Hellinger, Sara Smith, Sridevi Wagle, Sameera Horawalavithana
As LLMs have become increasingly popular, they have been used in almost every field.
1 code implementation • 18 Jul 2023 • Sameera Horawalavithana, Ellyn Ayton, Anastasiya Usenko, Robin Cosbey, Svitlana Volkova
The ability to anticipate technical expertise and capability evolution trends globally is essential for national and global security, especially in safety-critical domains like nuclear nonproliferation (NN) and rapidly emerging fields like artificial intelligence (AI).
1 code implementation • 3 Jul 2023 • Sameera Horawalavithana, Sai Munikoti, Ian Stewart, Henry Kvinge
Instruction finetuning is a popular paradigm to align large language models (LLM) with human intent.
1 code implementation • 14 Apr 2022 • Sameera Horawalavithana, Ellyn Ayton, Anastasiya Usenko, Shivam Sharma, Jasmine Eshun, Robin Cosbey, Maria Glenski, Svitlana Volkova
Machine learning models that learn from dynamic graphs face nontrivial challenges in learning and inference as both nodes and edges change over time.
no code implementations • 22 Sep 2021 • Kin Wai Ng, Sameera Horawalavithana, Adriana Iamnitchi
Due to their widespread adoption, social media platforms present an ideal environment for studying and understanding social behavior, especially how information spreads.
no code implementations • 24 Feb 2021 • Sameera Horawalavithana, Ravindu De Silva, Mohamed Nabeel, Charitha Elvitigala, Primal Wijesekara, Adriana Iamnitchi
We investigate the link sharing behavior of Twitter users following the temporary halt of AstraZeneca COVID-19 vaccine development in September 2020.
no code implementations • 26 Apr 2020 • Sameera Horawalavithana, John Skvoretz, Adriana Iamnitchi
Predicting the flow of information in dynamic social environments is relevant to many areas of contemporary society, from disseminating health care messages to meme tracking.
no code implementations • 3 Jul 2019 • Sameera Horawalavithana, Adriana Iamnitchi
More precisely, we study the boundaries of anonymity based on the structural properties of real graph datasets, in terms of how well their dK-based anonymized versions resist (or fail to resist) various types of attacks.
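A dK-series summarizes graph structure at increasing levels of detail, and dK-based anonymization generates synthetic graphs that match these statistics. As a minimal sketch of the first two levels (the function names and the toy edge list here are illustrative, not from the paper):

```python
from collections import Counter

def node_degrees(edges):
    """Degree of each node in an undirected edge list."""
    deg = Counter()
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return deg

def dk1_series(edges):
    """dK-1 series: the graph's degree distribution."""
    return Counter(node_degrees(edges).values())

def dk2_series(edges):
    """dK-2 series: joint degree distribution, i.e. how many edges
    connect a degree-j node to a degree-k node."""
    deg = node_degrees(edges)
    return Counter(tuple(sorted((deg[u], deg[v]))) for u, v in edges)

# Toy graph: a triangle with one pendant node
edges = [(0, 1), (0, 2), (1, 2), (2, 3)]
print(dk1_series(edges))  # Counter({2: 2, 3: 1, 1: 1})
print(dk2_series(edges))  # Counter({(2, 3): 2, (2, 2): 1, (1, 3): 1})
```

An attacker who knows only these summary statistics sees many plausible graphs; the study above asks when real-world structure shrinks that set enough for de-anonymization attacks to succeed.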