no code implementations • 18 Jun 2021 • An-phi Nguyen, María Rodríguez Martínez
Interpretability has become a necessary feature for machine learning models deployed in critical scenarios, e.g. the legal system or healthcare.
no code implementations • 15 Jul 2020 • An-phi Nguyen, María Rodríguez Martínez
Despite the growing body of work in interpretable machine learning, it remains unclear how to evaluate different explainability methods without resorting to qualitative assessment and user-studies.
no code implementations • 15 Jul 2020 • An-phi Nguyen, María Rodríguez Martínez
If we understand a problem, we may introduce inductive biases in our model in the form of invariances.
no code implementations • 30 Sep 2019 • An-phi Nguyen, María Rodríguez Martínez
Being able to interpret, or explain, the predictions made by a machine learning model is of fundamental importance.
1 code implementation • 18 Apr 2019 • Guillaume Jaume, An-phi Nguyen, María Rodríguez Martínez, Jean-Philippe Thiran, Maria Gabrani
The ability of a graph neural network (GNN) to leverage both the graph topology and graph labels is fundamental to building discriminative node and graph embeddings.
Ranked #28 on Graph Classification on MUTAG
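The idea of leveraging both topology and node features can be illustrated with a minimal message-passing sketch (this is not the paper's method; the functions, weights, and toy graph below are hypothetical): each node aggregates its neighbors' features along the adjacency structure, and a readout pools node embeddings into a graph-level embedding usable for graph classification.

```python
import numpy as np

def message_pass(adj, feats, weight):
    """One generic GNN layer: sum neighbor features along edges (topology),
    keep the node's own features, then apply a linear transform + ReLU."""
    agg = adj @ feats + feats
    return np.maximum(agg @ weight, 0)

def graph_embedding(adj, feats, weight):
    """Graph-level embedding via a mean readout over node embeddings."""
    return message_pass(adj, feats, weight).mean(axis=0)

# Toy triangle graph with 2-dimensional node features (illustrative only).
adj = np.array([[0, 1, 1],
                [1, 0, 1],
                [1, 1, 0]], dtype=float)
feats = np.eye(3, 2)                      # hypothetical node features
rng = np.random.default_rng(0)
weight = rng.standard_normal((2, 4))      # random, untrained layer weights

emb = graph_embedding(adj, feats, weight)
print(emb.shape)  # (4,)
```

In practice the layer weights are trained end-to-end against graph labels, which is how label information shapes the learned embeddings.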
no code implementations • WS 2018 • Ivan Girardi, Pengfei Ji, An-phi Nguyen, Nora Hollenstein, Adam Ivankay, Lorenz Kuhn, Chiara Marchiori, Ce Zhang
In addition, a method to detect warning symptoms is implemented to render the classification task transparent from a medical perspective.