Search Results for author: Ilaria Liccardi

Found 3 papers, 3 papers with code

Debugging Tests for Model Explanations

1 code implementation • NeurIPS 2020 • Julius Adebayo, Michael Muelly, Ilaria Liccardi, Been Kim

For several explanation methods, we assess their ability to: detect spurious correlation artifacts (data contamination), diagnose mislabeled training examples (data contamination), differentiate between a (partially) re-initialized model and a trained one (model contamination), and detect out-of-distribution inputs (test-time contamination).
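The spurious-correlation check described above can be illustrated with a minimal sketch. This is not the paper's actual test suite, only an analogous idea assuming nothing beyond NumPy: a synthetic dataset (invented here) plants an artifact feature that is almost perfectly aligned with the label, and a linear model's input gradient reveals whether the model leans on it.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic binary task: feature 0 carries the genuine (noisy) signal,
# feature 1 is a spurious artifact almost perfectly aligned with the label
# -- a toy version of the "data contamination" setting.
n = 500
y = rng.integers(0, 2, n)
X = rng.normal(size=(n, 3))
X[:, 0] += 2.0 * y - 1.0                                # true signal, overlapping classes
X[:, 1] = (2.0 * y - 1.0) + 0.01 * rng.normal(size=n)   # near-noiseless artifact

# Plain logistic regression fit by gradient descent.
w = np.zeros(3)
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    w -= 0.1 * (X.T @ (p - y)) / n

# For a linear model, the input gradient ("saliency") of the logit is just w,
# so a dominant weight on feature 1 flags reliance on the artifact.
saliency = np.abs(w)
print(saliency.argmax())  # feature 1 (the artifact) should dominate
```

An explanation method that passes this kind of check should assign its largest attribution to the contaminated feature; one that fails would leave the artifact invisible to the user.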

Misplaced Trust: Measuring the Interference of Machine Learning in Human Decision-Making

1 code implementation • 22 May 2020 • Harini Suresh, Natalie Lao, Ilaria Liccardi

ML decision-aid systems are increasingly common on the web, but their successful integration relies on people trusting them appropriately: they should use the system to fill in gaps in their ability, but recognize signals that the system might be incorrect.

BIG-bench Machine Learning • Decision Making • +1

Dark Patterns after the GDPR: Scraping Consent Pop-ups and Demonstrating their Influence

1 code implementation • 8 Jan 2020 • Midas Nouwens, Ilaria Liccardi, Michael Veale, David Karger, Lalana Kagal

New consent management platforms (CMPs) have been introduced to the web to conform with the EU's General Data Protection Regulation, particularly its requirements for consent when companies collect and process users' personal data.

Human-Computer Interaction • Computers and Society
