no code implementations • 5 Apr 2023 • Gabriel Lima, Nina Grgić-Hlača, Meeyoung Cha
Building upon research suggesting that people blame AI systems, we investigated how several factors influence people's reactive attitudes towards machines, designers, and users.
no code implementations • 11 May 2022 • Gabriel Lima, Nina Grgić-Hlača, Jin Keun Jeong, Meeyoung Cha
Furthermore, we argue that XAI could result in incorrect attributions of responsibility to vulnerable stakeholders, such as those who are subjected to algorithmic decisions (i.e., patients), due to a misguided perception that they have control over explainable algorithms.
no code implementations • 1 Feb 2021 • Gabriel Lima, Nina Grgić-Hlača, Meeyoung Cha
How to attribute responsibility for autonomous artificial intelligence (AI) systems' actions has been widely debated across the humanities and social science disciplines.
no code implementations • 15 Jan 2021 • Gabriel Lima, Meeyoung Cha
There is a growing need for data-driven research efforts on how the public perceives the ethical, moral, and legal issues of autonomous AI systems.
1 code implementation • 4 Aug 2020 • Gabriel Lima, Changyeon Kim, Seungho Ryu, Chihyung Jeon, Meeyoung Cha
Whether to give rights to artificial intelligence (AI) and robots has been a sensitive topic since the European Parliament proposed advanced robots could be granted "electronic personalities."
no code implementations • 15 Jun 2020 • Gabriel Lima, Meeyoung Cha, Chiyoung Cha, Hyeyoung Hwang
This study presents survey results of the public's willingness to get vaccinated against COVID-19 during an early phase of the pandemic and examines factors that could influence vaccine acceptance based on a between-subjects design.
no code implementations • 2 May 2020 • Nina Grgić-Hlača, Gabriel Lima, Adrian Weller, Elissa M. Redmiles
A growing number of oversight boards and regulatory bodies seek to monitor and govern algorithms that make decisions about people's lives.
no code implementations • 23 Apr 2020 • Gabriel Lima, Meeyoung Cha
Responsible Artificial Intelligence (AI) proposes a framework that holds all stakeholders involved in the development of AI responsible for their systems.
no code implementations • 13 Mar 2020 • Gabriel Lima, Meeyoung Cha, Chihyung Jeon, Kyungsin Park
Regulating artificial intelligence (AI) has become necessary in light of its deployment in high-risk scenarios.
no code implementations • LREC 2012 • Carmen Dayrell, Arnaldo Candido Jr., Gabriel Lima, Danilo Machado Jr., Ann Copestake, Valéria Feltrim, Stella Tagnin, Sandra Aluisio
Here, we present MAZEA (Multi-label Argumentative Zoning for English Abstracts), a multi-label classifier which automatically identifies rhetorical moves in abstracts but allows for a given sentence to be assigned as many labels as appropriate.
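The defining property of such a classifier is that a single sentence can carry several rhetorical labels at once, rather than exactly one. A minimal sketch of that multi-label setting, using hypothetical cue words and label names (this is an illustration only, not the MAZEA system):

```python
# Toy multi-label rhetorical-move tagger: each sentence may receive any
# number of labels, mirroring the multi-label setting described for MAZEA.
# The label set and cue words below are assumptions chosen for this demo.
CUES = {
    "BACKGROUND": {"known", "widely", "research"},
    "GAP": {"however", "lack", "little"},
    "PURPOSE": {"present", "propose", "aim"},
    "METHOD": {"using", "trained", "corpus"},
    "RESULT": {"achieved", "results", "outperforms"},
}

def label_sentence(sentence: str) -> list[str]:
    """Return every rhetorical label whose cue words appear in the sentence."""
    tokens = {tok.strip(".,;:").lower() for tok in sentence.split()}
    return sorted(lbl for lbl, cues in CUES.items() if tokens & cues)

# One sentence, three labels at once -- the multi-label case.
labels = label_sentence("However, we present a classifier trained on a corpus.")
```

A single-label zoning scheme would be forced to pick one of these moves; the multi-label formulation keeps all of them.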