Search Results for author: Ioannis Mollas

Found 12 papers, 10 papers with code

ETHOS: an Online Hate Speech Detection Dataset

1 code implementation • 11 Jun 2020 • Ioannis Mollas, Zoe Chrysopoulou, Stamatis Karlos, Grigorios Tsoumakas

Online hate speech is a recent problem in our society that is rising at a steady pace by leveraging the vulnerabilities of the corresponding regimes that characterise most social media platforms.

Hate Speech Detection
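
To make the downstream task concrete, the sketch below trains a plain TF-IDF plus logistic regression baseline for binary hate speech detection on an ETHOS-style CSV. The file name ethos_binary.csv, the comment/isHate column names, and the 0.5 threshold on the annotator score are assumptions for illustration only; check the dataset repository for the actual file layout.

```python
# Minimal hate speech detection baseline on an ETHOS-style binary split.
# NOTE: "ethos_binary.csv" and the "comment" / "isHate" column names are
# assumptions for illustration; adapt them to the actual dataset files.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

df = pd.read_csv("ethos_binary.csv")              # hypothetical local path
X = df["comment"]
y = (df["isHate"] >= 0.5).astype(int)             # assumed: score -> binary label

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=2),  # word unigrams + bigrams
    LogisticRegression(max_iter=1000),
)
model.fit(X_train, y_train)
print("F1:", f1_score(y_test, model.predict(X_test)))
```

Any text classifier could stand in for the logistic regression here; the point is only to show the shape of the task the dataset defines.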

VisioRed: A Visualisation Tool for Interpretable Predictive Maintenance

1 code implementation • 31 Mar 2021 • Spyridon Paraschos, Ioannis Mollas, Nick Bassiliades, Grigorios Tsoumakas

The use of machine learning is increasing rapidly in high-risk scenarios where decisions are required, for example in healthcare or industrial monitoring equipment.

BIG-bench Machine Learning • Decision Making • +2

Conclusive Local Interpretation Rules for Random Forests

1 code implementation • 13 Apr 2021 • Ioannis Mollas, Nick Bassiliades, Grigorios Tsoumakas

LionForests is a random forest-specific interpretation technique, which provides rules as explanations.

Binary Classification • Decision Making • +1
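
To illustrate what a rule-as-explanation looks like, the sketch below prints the decision path of one tree in a scikit-learn random forest for a single instance. This is only a generic path-to-rule illustration, not the LionForests algorithm itself, which operates on the whole ensemble rather than a single tree.

```python
# Generic path-to-rule extraction from one tree of a random forest.
# NOT the LionForests algorithm, only an illustration of rule-style
# local explanations for tree ensembles.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

data = load_iris()
X, y, feature_names = data.data, data.target, data.feature_names
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

x = X[:1]                          # the instance to explain
tree = rf.estimators_[0]           # a single tree of the ensemble
path = tree.decision_path(x).indices
leaf_id = tree.apply(x)[0]

conditions = []
for node_id in path:
    if node_id == leaf_id:         # reached the leaf, the rule is complete
        break
    f = tree.tree_.feature[node_id]
    t = tree.tree_.threshold[node_id]
    op = "<=" if x[0, f] <= t else ">"
    conditions.append(f"{feature_names[f]} {op} {t:.2f}")

print(" AND ".join(conditions), "=> class", rf.predict(x)[0])
```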

Local Multi-Label Explanations for Random Forest

1 code implementation • 5 Jul 2022 • Nikolaos Mylonas, Ioannis Mollas, Nick Bassiliades, Grigorios Tsoumakas

Random Forest falls short on this property (interpretability), especially when a large number of tree predictors are used.

Classification • Decision Making • +2
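
The shortfall mentioned in the snippet can be seen directly in scikit-learn: a random forest fits multi-label indicator targets natively, but its built-in feature_importances_ is one global ranking shared by every label and every instance, which is what label-specific local explanation techniques aim to remedy. The synthetic data below is purely illustrative.

```python
# A random forest fits multi-label targets, but its built-in importances are
# global and label-agnostic, motivating per-label local explanations.
from sklearn.datasets import make_multilabel_classification
from sklearn.ensemble import RandomForestClassifier

X, Y = make_multilabel_classification(
    n_samples=500, n_features=10, n_classes=4, random_state=0
)
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, Y)

# Per-label probabilities for one instance: one array per label.
probas = rf.predict_proba(X[:1])
print([round(float(p[0, 1]), 3) for p in probas])   # P(label=1) for each label

# One global ranking for all labels, no per-label or per-instance attribution.
print(rf.feature_importances_.round(3))
```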

LioNets: Local Interpretation of Neural Networks through Penultimate Layer Decoding

1 code implementation • 15 Jun 2019 • Ioannis Mollas, Nikolaos Bassiliades, Grigorios Tsoumakas

Technological breakthroughs on smart homes, self-driving cars, health care and robotic assistants, in addition to reinforced law regulations, have critically influenced academic research on explainable machine learning.

General Classification • Self-Driving Cars

Truthful Meta-Explanations for Local Interpretability of Machine Learning Models

1 code implementation • 7 Dec 2022 • Ioannis Mollas, Nick Bassiliades, Grigorios Tsoumakas

As a result, the demand for a selection tool, a meta-explanation technique based on a high-quality evaluation metric, is apparent.
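
As a hedged sketch of the selection idea only (this is not the metric or the meta-explanation technique proposed in the paper, just a generic stand-in), the snippet below scores two candidate feature-importance explanations of the same prediction with a simple deletion-style faithfulness check, the drop in predicted probability when the top-ranked features are replaced by their training means, and keeps the better-scoring candidate.

```python
# Toy "meta-explanation" selection: score candidate explanations of one
# prediction with a deletion-style faithfulness metric and keep the best.
# Illustration of the selection idea only, not the paper's metric.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier

X, y = load_breast_cancer(return_X_y=True)
model = GradientBoostingClassifier(random_state=0).fit(X, y)
x = X[0].copy()
p_orig = model.predict_proba([x])[0, 1]

def faithfulness(importance, k=5):
    """Prediction drop when the k top-ranked features are replaced by their
    training means (a larger drop suggests a more faithful explanation)."""
    top = np.argsort(importance)[::-1][:k]
    x_masked = x.copy()
    x_masked[top] = X[:, top].mean(axis=0)
    return p_orig - model.predict_proba([x_masked])[0, 1]

candidates = {
    "global_importances": model.feature_importances_,      # candidate explanation 1
    "random_baseline": np.random.RandomState(0).rand(X.shape[1]),  # candidate 2
}
scores = {name: faithfulness(imp) for name, imp in candidates.items()}
best = max(scores, key=scores.get)
print(scores, "-> selected:", best)
```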

Local Interpretability of Random Forests for Multi-Target Regression

1 code implementation • 29 Mar 2023 • Avraam Bardos, Nikolaos Mylonas, Ioannis Mollas, Grigorios Tsoumakas

Although model-agnostic techniques exist for multi-target regression, specific techniques tailored to random forest models are not available.

Multi-Target Regression • Regression
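
For context on the setting, a scikit-learn RandomForestRegressor already handles multi-target regression natively when y has several columns, as sketched below; what the paper contributes is a local interpretation technique for such forests, which is not shown here.

```python
# Multi-target regression with a random forest: y has several target columns,
# and a single forest predicts all of them jointly.
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=400, n_features=8, n_targets=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)
print(rf.predict(X_test[:1]))     # one row with 3 target predictions
print(rf.score(X_test, y_test))   # multi-output R^2 (uniform average)
```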

LionForests: Local Interpretation of Random Forests

no code implementations • 20 Nov 2019 • Ioannis Mollas, Nick Bassiliades, Ioannis Vlahavas, Grigorios Tsoumakas

As machine learning systems become integrated into every aspect of people's lives, it is necessary to research methods for interpreting such systems, rather than focusing exclusively on enhancing their performance.

BIG-bench Machine Learning
