no code implementations • NAACL (DADC) 2022 • Venelin Kovatchev, Trina Chatterjee, Venkata S Govindarajan, Jifan Chen, Eunsol Choi, Gabriella Chronis, Anubrata Das, Katrin Erk, Matthew Lease, Junyi Jessy Li, Yating Wu, Kyle Mahowald
Developing methods to adversarially challenge NLP systems is a promising avenue for improving both model performance and interpretability.
no code implementations • 14 Aug 2023 • Houjiang Liu, Anubrata Das, Alexander Boltz, Didi Zhou, Daisy Pinaroc, Matthew Lease, Min Kyung Lee
While many Natural Language Processing (NLP) techniques have been proposed for fact-checking, both academic research and fact-checking organizations report limited adoption of such NLP work due to poor alignment with fact-checker practices, values, and needs.
no code implementations • 8 Jan 2023 • Anubrata Das, Houjiang Liu, Venelin Kovatchev, Matthew Lease
We recommend that future research engage fact-checker stakeholders early in the NLP research process and incorporate human-centered design practices into model development, in order to guide technology toward human use and practical adoption.
no code implementations • 15 Apr 2022 • Venelin Kovatchev, Soumyajit Gupta, Anubrata Das, Matthew Lease
In this work, we first introduce a differentiable measure that enables direct optimization of group fairness (specifically, balancing accuracy across groups) in model training.
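To make the idea concrete, here is a minimal PyTorch sketch of a differentiable group-fairness penalty. It is illustrative only, not the measure introduced in the paper: hard per-group accuracy is non-differentiable, so the sketch substitutes each group's mean predicted probability of the true class as a soft surrogate for accuracy and penalizes the squared gap between two groups.

```python
# Illustrative sketch only, assuming a binary group attribute;
# not the differentiable measure proposed in the paper.
import torch
import torch.nn.functional as F

def soft_group_accuracy_gap(logits, labels, groups):
    """Squared gap between the soft (probability-based) accuracies of two groups."""
    probs = F.softmax(logits, dim=-1)
    # Probability assigned to the true class, per example.
    correct_prob = probs.gather(1, labels.unsqueeze(1)).squeeze(1)
    acc_g0 = correct_prob[groups == 0].mean()
    acc_g1 = correct_prob[groups == 1].mean()
    return (acc_g0 - acc_g1) ** 2

def fairness_aware_loss(logits, labels, groups, lam=1.0):
    # Standard task loss plus the differentiable fairness penalty,
    # so both terms can be optimized jointly by gradient descent.
    return F.cross_entropy(logits, labels) + lam * soft_group_accuracy_gap(logits, labels, groups)
```

Because the penalty is differentiable end to end, it can be added directly to the training objective rather than applied as a post-hoc correction, which is the point the abstract makes about direct optimization.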
1 code implementation • ACL 2022 • Anubrata Das, Chitrank Gupta, Venelin Kovatchev, Matthew Lease, Junyi Jessy Li
We present ProtoTEx, a novel white-box NLP classification architecture based on prototype networks.
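As a rough illustration of the prototype-network idea behind ProtoTEx, the sketch below shows a generic prototype-based classification head in PyTorch: learnable prototype vectors live in the encoder's embedding space, and class logits are computed from negative distances to those prototypes, so each prediction can be explained by the nearest prototypes. Dimensions and names are assumptions; the paper's actual encoder, architecture, and training objectives are not reproduced here.

```python
# Generic prototype-classification head; an illustrative sketch,
# not the ProtoTEx implementation.
import torch
import torch.nn as nn

class PrototypeHead(nn.Module):
    def __init__(self, hidden_dim, num_prototypes, num_classes):
        super().__init__()
        # Learnable prototype vectors in the encoder's embedding space.
        self.prototypes = nn.Parameter(torch.randn(num_prototypes, hidden_dim))
        # Linear layer maps prototype similarities to class logits.
        self.classifier = nn.Linear(num_prototypes, num_classes)

    def forward(self, encoded):  # encoded: (batch, hidden_dim)
        # Negative squared distance to each prototype acts as similarity,
        # so classification is "white-box": driven by nearness to prototypes.
        dists = torch.cdist(encoded, self.prototypes) ** 2
        return self.classifier(-dists)
```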
1 code implementation • 17 Feb 2022 • Li Shi, Nilavra Bhattacharya, Anubrata Das, Matthew Lease, Jacek Gwizdka
We conducted a lab-based eye-tracking study to investigate how the interactivity of an AI-powered fact-checking system affects user interaction, including dwell time, attention, and the mental resources involved in using the system.
no code implementations • 20 Sep 2021 • Prakhar Singh, Anubrata Das, Junyi Jessy Li, Matthew Lease
Fact-checking is the process of evaluating the veracity of claims (i.e., purported facts).
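For readers unfamiliar with the standard automated fact-checking pipeline (claim, evidence retrieval, veracity classification), here is a self-contained toy sketch. Both steps are deliberately simplistic stand-ins (word overlap for retrieval, a negation-keyword rule for the verdict) and are not the method studied in this paper.

```python
# Toy fact-checking pipeline: retrieve evidence, then classify the claim.
# Both components are hypothetical stand-ins for trained models.
def retrieve_evidence(claim, corpus, k=1):
    # Toy retriever: rank passages by word overlap with the claim.
    claim_words = set(claim.lower().split())
    return sorted(corpus, key=lambda p: -len(claim_words & set(p.lower().split())))[:k]

def classify_veracity(claim, evidence):
    # Toy stand-in: a real verifier would be a trained entailment/stance
    # model over (claim, evidence) pairs; here we just flag explicit negation.
    refuting = [p for p in evidence if " not " in f" {p.lower()} "]
    return "refuted" if refuting else "supported"

corpus = [
    "The Eiffel Tower is located in Paris.",
    "The Great Wall is not visible from space.",
]
claim = "The Eiffel Tower is in Paris"
print(classify_veracity(claim, retrieve_evidence(claim, corpus)))  # supported
```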
no code implementations • 12 May 2021 • Michael D. Ekstrand, Anubrata Das, Robin Burke, Fernando Diaz
Recommendation, information retrieval, and other information access systems pose unique challenges for investigating and applying the fairness and non-discrimination concepts that have been developed for studying other machine learning systems.
1 code implementation • 22 Jul 2019 • Anubrata Das, Matthew Lease
While search efficacy has traditionally been evaluated on the basis of result relevance, the fairness of search results has attracted recent attention.
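One common way the fair-ranking literature quantifies this, sketched below under the assumption of two groups, is to compare position-discounted exposure across groups: items higher in the ranking receive more exposure under a logarithmic discount, and a fair ranking keeps the groups' total exposure close. This is a generic illustration, not this paper's specific framework.

```python
# Generic exposure-based fairness check for a ranked list;
# the log discount is standard in fair ranking, not this paper's metric.
import math

def group_exposure(ranking, group_of):
    """ranking: list of item ids; group_of: item id -> 0 or 1."""
    exposure = {0: 0.0, 1: 0.0}
    for rank, item in enumerate(ranking, start=1):
        # Higher-ranked items get more exposure under a log discount.
        exposure[group_of(item)] += 1.0 / math.log2(rank + 1)
    return exposure

ranking = ["d1", "d2", "d3", "d4"]
groups = {"d1": 0, "d2": 0, "d3": 1, "d4": 1}
exp = group_exposure(ranking, groups.get)
print(exp, "disparity:", abs(exp[0] - exp[1]))
```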
1 code implementation • 8 Jul 2019 • Anubrata Das, Kunjan Mehta, Matthew Lease
The effect of user bias in fact-checking has not been explored extensively from a user-experience perspective.