no code implementations • EMNLP (WNUT) 2020 • Kellin Pelrine, Jacob Danovitch, Albert Orozco Camacho, Reihaneh Rabbany
Given the global scale of COVID-19 and the flood of social media content related to it, how can we find informative discussions?
no code implementations • Findings (ACL) 2022 • Yifei Li, Pratheeksha Nair, Kellin Pelrine, Reihaneh Rabbany
In this paper, we propose NEAT (Name Extraction Against Trafficking) for extracting person names.
no code implementations • 13 Jan 2024 • Mauricio Rivera, Jean-François Godbout, Reihaneh Rabbany, Kellin Pelrine
We propose an uncertainty quantification framework that leverages both direct confidence elicitation and sample-based consistency methods to provide better calibration for NLP misinformation mitigation solutions.
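The two signals mentioned above can be combined in a simple way: ask the model once for a verbalized confidence, then re-query it with sampling and measure how often the answer repeats. The sketch below assumes a hypothetical `ask_model(prompt, sample=...)` callable standing in for an LLM API; the blending weight and prompt wording are illustrative, not the paper's exact method.

```python
def calibrated_confidence(ask_model, claim, n_samples=5, weight=0.5):
    """Blend two uncertainty signals for a misinformation verdict.

    1) Direct elicitation: the model states its own confidence.
    2) Sample-based consistency: agreement rate across repeated sampled runs.

    `ask_model(prompt, sample)` is a hypothetical stand-in for an LLM call;
    it returns a (label, stated_confidence) pair.
    """
    # Direct elicitation: one deterministic query with a verbalized confidence.
    label, stated = ask_model(f"Is this claim true? {claim}", sample=False)

    # Sample-based consistency: repeat with sampling, count agreement with the label.
    votes = [ask_model(f"Is this claim true? {claim}", sample=True)[0]
             for _ in range(n_samples)]
    consistency = votes.count(label) / n_samples

    # Weighted average of the two signals gives the calibrated score.
    return label, weight * stated + (1 - weight) * consistency
```

In practice the weight would be tuned on a held-out calibration set rather than fixed at 0.5.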
no code implementations • 12 Jan 2024 • Tyler Vergho, Jean-François Godbout, Reihaneh Rabbany, Kellin Pelrine
Recent large language models (LLMs) have been shown to be effective for misinformation detection.
no code implementations • 2 Jan 2024 • Yury Orlovskiy, Camille Thibault, Anne Imouza, Jean-François Godbout, Reihaneh Rabbany, Kellin Pelrine
Misinformation poses a variety of risks, such as undermining public trust and distorting factual discourse.
1 code implementation • 21 Dec 2023 • Kellin Pelrine, Mohammad Taufeeque, Michał Zając, Euan McLean, Adam Gleave
Language model attacks typically assume one of two extreme threat models: full white-box access to model weights, or black-box access limited to a text generation API.
1 code implementation • 25 Aug 2023 • Kellin Pelrine, Anne Imouza, Zachary Yang, Jacob-Junqi Tian, Sacha Lévy, Gabrielle Desrosiers-Brisebois, Aarash Feizi, Cécile Amadoro, André Blais, Jean-François Godbout, Reihaneh Rabbany
A large number of studies on social media compare the behaviour of users from different political parties.
no code implementations • 19 Aug 2023 • Hao Yu, Zachary Yang, Kellin Pelrine, Jean-François Godbout, Reihaneh Rabbany
Recent advancements in large language models have demonstrated remarkable capabilities across various NLP tasks.
1 code implementation • 24 May 2023 • Kellin Pelrine, Anne Imouza, Camille Thibault, Meilina Reksoprodjo, Caleb Gupta, Joel Christoph, Jean-François Godbout, Reihaneh Rabbany
We propose focusing on generalization, uncertainty, and how to leverage recent large language models, in order to create more practical tools to evaluate information veracity in contexts where perfect classification is impossible.
2 code implementations • 1 Nov 2022 • Tony T. Wang, Adam Gleave, Tom Tseng, Kellin Pelrine, Nora Belrose, Joseph Miller, Michael D. Dennis, Yawen Duan, Viktor Pogrebniak, Sergey Levine, Stuart Russell
The core vulnerability uncovered by our attack persists even in KataGo agents adversarially trained to defend against our attack.
1 code implementation • 22 Sep 2022 • Sacha Lévy, Farimah Poursafaei, Kellin Pelrine, Reihaneh Rabbany
How can we study social interactions on evolving topics at a mass scale?
1 code implementation • 20 Jul 2022 • Farimah Poursafaei, Shenyang Huang, Kellin Pelrine, Reihaneh Rabbany
To evaluate against more difficult negative edges, we introduce two more challenging negative sampling strategies that improve robustness and better match real-world applications.
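One way to make negative edges harder than uniform random node pairs, in the spirit of the strategy described above, is to draw them from edges that existed in earlier snapshots but are absent at evaluation time. The sketch below is an assumption-laden illustration of that idea, not the paper's exact sampler; function and argument names are invented for the example.

```python
import random

def historical_negatives(past_edges, current_edges, k, rng=random):
    """Sample k negative edges from pairs that were linked in the past
    but are not linked now.

    Hedged sketch: such "historical" negatives look structurally plausible,
    so they are harder to reject than uniformly random non-edges and better
    match real-world link-prediction settings.
    """
    # Candidate negatives: previously observed edges that no longer hold.
    pool = list(set(past_edges) - set(current_edges))
    # Draw without replacement; cap k at the pool size.
    return rng.sample(pool, min(k, len(pool)))
```

A complementary "inductive" variant would instead sample pairs never observed during training; both stress different failure modes of a link predictor.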
2 code implementations • 14 Apr 2021 • Kellin Pelrine, Jacob Danovitch, Reihaneh Rabbany
As social media becomes increasingly prominent in our day-to-day lives, it is ever more important to detect informative content and prevent the spread of disinformation and unverified rumours.