no code implementations • 30 Apr 2024 • Dazhuo Qiu, Mengying Wang, Arijit Khan, Yinghui Wu
Given a graph neural network M, a robust counterfactual witness refers to a fraction of a graph G that is both a counterfactual and a factual explanation of the results of M over G, and that remains so for any "disturbed" version of G obtained by flipping up to k of its node pairs.
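The definition can be illustrated with a brute-force check: a witness is robust if its explanation status survives every perturbation that flips at most k node pairs (adding a missing edge or removing an existing one). The sketch below is a toy, assuming a hypothetical `explains` predicate that stands in for "the witness remains a factual and counterfactual explanation of M's output"; the exhaustive enumeration is exactly what the paper's verification algorithms are designed to avoid.

```python
from itertools import combinations

def is_robust_witness(edges, nodes, explains, k):
    """Brute-force check of the robustness condition: `explains` (a
    hypothetical stand-in for the GNN + explanation test) must hold on
    the graph and on every graph obtained by flipping up to k node
    pairs.  Edges are undirected, represented as frozensets of nodes.
    Only viable for toy graphs, since the number of perturbations grows
    combinatorially in k."""
    base = frozenset(frozenset(e) for e in edges)
    if not explains(base):
        return False
    all_pairs = [frozenset(p) for p in combinations(nodes, 2)]
    for r in range(1, k + 1):
        for pairs in combinations(all_pairs, r):
            g = base
            for p in pairs:
                g = g ^ {p}  # flip: remove edge if present, add if absent
            if not explains(g):
                return False
    return True

# Toy usage: the "explanation" is simply that edge {0, 1} is present.
edges = [(0, 1), (1, 2)]
nodes = [0, 1, 2, 3]
explains = lambda g: frozenset({0, 1}) in g
print(is_robust_witness(edges, nodes, explains, 0))  # True: holds on G itself
print(is_robust_witness(edges, nodes, explains, 1))  # False: flipping {0, 1} breaks it
```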
1 code implementation • 13 Feb 2024 • Yangxin Fan, Raymond Wieser, Laura Bruckman, Roger French, Yinghui Wu
We further verify the generality and effectiveness of ST-GTrend for trend analysis using financial and economic datasets.
1 code implementation • 4 Jan 2024 • Tingyang Chen, Dazhuo Qiu, Yinghui Wu, Arijit Khan, Xiangyu Ke, Yunjun Gao
Existing approaches aim to explain the overall results of GNNs rather than providing explanations for specific class labels of interest, and may return explanation structures that are hard to access and not directly queryable. We propose GVEX, a novel paradigm that generates Graph Views for EXplanation.
no code implementations • 21 Feb 2023 • Yangxin Fan, Xuanji Yu, Raymond Wieser, David Meakin, Avishai Shaton, Jean-Nicolas Jaubert, Robert Flottemesch, Michael Howell, Jennifer Braid, Laura S. Bruckman, Roger French, Yinghui Wu
The integration of the global Photovoltaic (PV) market with real-time data-loggers has enabled large-scale PV data analytical pipelines for power forecasting and long-term reliability assessment of PV fleets.
no code implementations • 16 Feb 2022 • Sai Pushpak Nandanoori, Sheng Guan, Soumya Kundu, Seemita Pal, Khushbu Agarwal, Yinghui Wu, Sutanay Choudhury
In particular, accurate and timely prediction of the (electro-mechanical) transient dynamic trajectories of the power grid is necessary for early detection of any instability and prevention of catastrophic failures.
no code implementations • 7 Jan 2020 • Mohammad Hossein Namaki, Avrilia Floratou, Fotis Psallidas, Subru Krishnan, Ashvin Agrawal, Yinghui Wu, Yiwen Zhu, Markus Weimer
There has recently been considerable research in the areas of fairness, bias, and explainability of machine learning (ML) models, driven by the self-evident or regulatory requirements of various ML applications.