We argue that one way to close the gap is to develop evaluation methods that account for different user requirements in these usage contexts.
When a model's performance differs across socially or culturally relevant groups, like race, gender, or the intersections of many such groups, it is often called "biased."
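Concretely, one simple way to surface such a gap is to compare a metric like accuracy across groups. A minimal sketch with toy data follows; the helper below is illustrative, not taken from any particular toolkit:

```python
import numpy as np

def accuracy_gap(y_true, y_pred, group):
    """Largest accuracy difference between any two groups.

    y_true, y_pred: 0/1 arrays; group: array of group labels.
    """
    accs = {str(g): float(np.mean(y_pred[group == g] == y_true[group == g]))
            for g in np.unique(group)}
    return max(accs.values()) - min(accs.values()), accs

# Toy data: the model is noticeably less accurate for group "b".
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 1, 0, 1, 0, 0, 1])
group  = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])
gap, per_group = accuracy_gap(y_true, y_pred, group)
print(per_group, gap)  # {'a': 0.75, 'b': 0.25} and gap 0.5
```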
Despite impressive performance on many benchmark datasets, AI models can still make mistakes, especially on out-of-distribution examples.
However, given the differences between these data types, how to design a fusion method suited to financial data, one that combines real market prices with tweets from social media so that the prediction model can fully integrate both sources, remains a challenging problem.
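As a minimal sketch of one common fusion pattern for this setup, assume a recurrent encoder for the price series and pre-computed tweet embeddings, concatenated before a prediction head; the architecture and dimensions below are illustrative, not the paper's method:

```python
import torch
import torch.nn as nn

class PriceTweetFusion(nn.Module):
    """Toy late-fusion model: encode each modality, concatenate, predict."""
    def __init__(self, price_dim=5, tweet_dim=768, hidden=64):
        super().__init__()
        # GRU summarizes the daily price sequence into one vector.
        self.price_enc = nn.GRU(price_dim, hidden, batch_first=True)
        # Tweets are assumed pre-embedded (e.g., by a sentence encoder).
        self.tweet_enc = nn.Linear(tweet_dim, hidden)
        self.head = nn.Linear(2 * hidden, 1)  # next-day movement score

    def forward(self, prices, tweet_emb):
        _, h = self.price_enc(prices)          # h: (1, batch, hidden)
        fused = torch.cat([h.squeeze(0), self.tweet_enc(tweet_emb)], dim=-1)
        return self.head(fused)

model = PriceTweetFusion()
out = model(torch.randn(8, 30, 5), torch.randn(8, 768))  # 8 stocks, 30 days
print(out.shape)  # torch.Size([8, 1])
```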
no code implementations • 24 Sep 2021 • Vijay Arya, Rachel K. E. Bellamy, Pin-Yu Chen, Amit Dhurandhar, Michael Hind, Samuel C. Hoffman, Stephanie Houde, Q. Vera Liao, Ronny Luss, Aleksandra Mojsilovic, Sami Mourad, Pablo Pedemonte, Ramya Raghavendra, John Richards, Prasanna Sattigeri, Karthikeyan Shanmugam, Moninder Singh, Kush R. Varshney, Dennis Wei, Yunfeng Zhang
As artificial intelligence and machine learning algorithms become increasingly prevalent in society, multiple stakeholders are calling for these algorithms to provide explanations.
In this paper, we describe an open source Python toolkit named Uncertainty Quantification 360 (UQ360) for the uncertainty quantification of AI models.
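For orientation, here is a minimal sketch of the kind of output, prediction intervals, that uncertainty quantification targets; it uses scikit-learn quantile regression rather than UQ360's own API, which the toolkit's documentation describes:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(500, 1))
y = np.sin(X[:, 0]) + rng.normal(0, 0.3, size=500)  # noisy targets

# Fit lower/upper quantile models to get a ~90% prediction interval.
lo = GradientBoostingRegressor(loss="quantile", alpha=0.05).fit(X, y)
hi = GradientBoostingRegressor(loss="quantile", alpha=0.95).fit(X, y)

X_test = np.linspace(-3, 3, 5).reshape(-1, 1)
for x, l, u in zip(X_test[:, 0], lo.predict(X_test), hi.predict(X_test)):
    print(f"x={x:+.1f}  interval=({l:+.2f}, {u:+.2f})")
```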
Presentations are critical for communication in all areas of our lives, yet the creation of slide decks is often tedious and time-consuming.
Automated Machine Learning (AutoML) is a rapidly growing set of technologies that automate the model development pipeline by searching model space and generating candidate models.
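A minimal sketch of the search-and-generate idea, using a plain scikit-learn grid search over two estimator families as a stand-in for a full AutoML system:

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

# Each (estimator, grid) pair defines a slice of the model space to search.
search_space = [
    (LogisticRegression(max_iter=1000), {"C": [0.1, 1.0, 10.0]}),
    (DecisionTreeClassifier(), {"max_depth": [2, 4, 8]}),
]

# Generate candidate models, then keep the best by cross-validated score.
candidates = []
for est, grid in search_space:
    gs = GridSearchCV(est, grid, cv=5).fit(X, y)
    candidates.append((gs.best_score_, gs.best_estimator_))

best_score, best_model = max(candidates, key=lambda c: c[0])
print(best_score, best_model)
```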
This paper studies how to improve the generalization performance and learning speed of the navigation agents trained with deep reinforcement learning (DRL).
There is an active research thread in AI, AutoAI, that aims to develop systems that automate the DS/ML lifecycle end-to-end.
no code implementations • 15 Nov 2020 • Umang Bhatt, Javier Antorán, Yunfeng Zhang, Q. Vera Liao, Prasanna Sattigeri, Riccardo Fogliato, Gabrielle Gauthier Melançon, Ranganath Krishnan, Jason Stanley, Omesh Tickoo, Lama Nachman, Rumi Chunara, Madhulika Srikumar, Adrian Weller, Alice Xiang
Explainability attempts to provide reasons for a machine learning model's behavior to stakeholders.
Deep reinforcement learning (DRL) demonstrates great potential in the mapless navigation domain.
We then conduct a second user experiment, which shows that our time allocation strategy with explanation can effectively de-anchor the human and improve collaborative performance when the AI model has low confidence and is incorrect.
The similarity score between the feature ranking provided by the annotator and that of the local model explanation is used to assign a weight to each corresponding committee model.
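A minimal sketch of that weighting step, assuming Kendall's tau as the similarity score and a simple rescaling into weights (both are illustrative choices, not necessarily the paper's):

```python
import numpy as np
from scipy.stats import kendalltau

def committee_weights(annotator_ranking, member_rankings):
    """Weight each committee model by how well its local explanation's
    feature ranking agrees with the annotator's ranking."""
    taus = []
    for r in member_rankings:
        tau, _ = kendalltau(annotator_ranking, r)
        taus.append(tau)
    scores = (np.array(taus) + 1) / 2  # map [-1, 1] agreement to [0, 1]
    return scores / scores.sum()       # normalize into committee weights

annotator = [1, 2, 3, 4]                             # annotator's feature ranks
members = [[1, 2, 4, 3], [4, 3, 2, 1], [1, 2, 3, 4]]  # per-model explanations
print(committee_weights(annotator, members))
```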
In this article, we establish scale-invariant Strichartz estimates for the Schrödinger equation on arbitrary compact globally symmetric spaces and some bilinear Strichartz estimates on products of rank-one spaces.
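For orientation, "scale-invariant" here means the Sobolev exponent matches Euclidean scaling; on a compact d-dimensional manifold M with a bounded time interval I, such an estimate has the general shape below, and the admissible range of p is what results of this kind pin down:

```latex
\| e^{it\Delta} f \|_{L^p(I \times M)} \;\lesssim\; \| f \|_{H^s(M)},
\qquad s = \frac{d}{2} - \frac{d+2}{p}.
```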
Analysis of PDEs • Representation Theory
The method can also leverage side information, where users can specify points for which they may want the explanations to be similar.
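One natural way to encode such side information, sketched here as an assumption rather than the paper's exact formulation, is a pairwise similarity penalty added to the explanation objective:

```latex
\min_{e} \; \mathcal{L}(e) \;+\; \lambda \sum_{(i,j) \in S} \| e(x_i) - e(x_j) \|_2^2,
```

where S is the user-specified set of point pairs whose explanations should stay close, and lambda controls how strongly that preference is enforced.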
Today, AI is increasingly being used in many high-stakes decision-making applications in which fairness is an important concern.
We conducted an empirical study comparing the model learning outcomes, feedback content, and user experience of XAL with those of traditional AL and coactive learning (providing the model's prediction without the explanation).
Many proposed methods for explaining machine learning predictions are, in fact, challenging for nontechnical consumers to understand.
In these scenarios, full automation is often undesirable, not only because of the significance of the outcome, but also because human experts can draw on domain knowledge that complements the model's to ensure task success.
2 code implementations • 6 Sep 2019 • Vijay Arya, Rachel K. E. Bellamy, Pin-Yu Chen, Amit Dhurandhar, Michael Hind, Samuel C. Hoffman, Stephanie Houde, Q. Vera Liao, Ronny Luss, Aleksandra Mojsilović, Sami Mourad, Pablo Pedemonte, Ramya Raghavendra, John Richards, Prasanna Sattigeri, Karthikeyan Shanmugam, Moninder Singh, Kush R. Varshney, Dennis Wei, Yunfeng Zhang
Equally important, we provide a taxonomy to help entities requiring explanations to navigate the space of explanation methods, not only those in the toolkit but also in the broader literature on explainability.
As the application of deep neural networks proliferates in numerous areas such as medical imaging, video surveillance, and self-driving cars, the need for explaining the decisions of these models has become a hot research topic, at both the global and the local level.
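A minimal sketch of the global/local distinction, pairing scikit-learn permutation importance (global) with a crude single-instance occlusion (local); both are illustrative stand-ins for the dedicated methods in this literature:

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = load_breast_cancer(return_X_y=True)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Global: which features matter on average across the dataset.
glob = permutation_importance(model, X, y, n_repeats=5, random_state=0)
print("top global feature:", int(np.argmax(glob.importances_mean)))

# Local (crude occlusion): how one instance's score shifts when a single
# feature is replaced by the dataset mean.
x = X[:1].copy()
base = model.predict_proba(x)[0, 1]
deltas = []
for j in range(X.shape[1]):
    x_occ = x.copy()
    x_occ[0, j] = X[:, j].mean()
    deltas.append(base - model.predict_proba(x_occ)[0, 1])
print("most influential feature for this instance:",
      int(np.argmax(np.abs(deltas))))
```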
no code implementations • 14 Dec 2018 • Neil Mallinar, Abhishek Shah, Rajendra Ugrani, Ayush Gupta, Manikandan Gurusankar, Tin Kam Ho, Q. Vera Liao, Yunfeng Zhang, Rachel K. E. Bellamy, Robert Yates, Chris Desmarais, Blake McGregor
We report on a user study that shows positive user feedback for this new approach to build conversational agents, and demonstrates the effectiveness of using data programming for auto-labeling.
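A minimal sketch of the data-programming idea behind that auto-labeling: several noisy labeling functions vote on each utterance, and the aggregate becomes a training label. The functions and majority vote below are illustrative; systems like Snorkel learn function accuracies instead of simple voting:

```python
ABSTAIN, NEG, POS = -1, 0, 1

# Noisy, heuristic labeling functions over raw user utterances.
def lf_greeting(text):
    return POS if any(w in text.lower() for w in ("hello", "hi ")) else ABSTAIN

def lf_question(text):
    return NEG if "?" in text else ABSTAIN

def lf_thanks(text):
    return POS if "thank" in text.lower() else ABSTAIN

def majority_vote(text, lfs=(lf_greeting, lf_question, lf_thanks)):
    votes = [v for v in (lf(text) for lf in lfs) if v != ABSTAIN]
    if not votes:
        return ABSTAIN  # no function fired; leave unlabeled
    return POS if sum(v == POS for v in votes) >= len(votes) / 2 else NEG

for utterance in ["hi there!", "where is my order?", "thanks, bye"]:
    print(utterance, "->", majority_vote(utterance))
```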
A distinct advantage of JACA is that it can be applied to multi-view data with a block-missing structure, that is, to cases where a subset of views or class labels is missing for some subjects.
12 code implementations • 3 Oct 2018 • Rachel K. E. Bellamy, Kuntal Dey, Michael Hind, Samuel C. Hoffman, Stephanie Houde, Kalapriya Kannan, Pranay Lohia, Jacquelyn Martino, Sameep Mehta, Aleksandra Mojsilovic, Seema Nagar, Karthikeyan Natesan Ramamurthy, John Richards, Diptikalyan Saha, Prasanna Sattigeri, Moninder Singh, Kush R. Varshney, Yunfeng Zhang
Such architectural design and abstractions enable researchers and developers to extend the toolkit with their new algorithms and improvements, and to use it for performance benchmarking.