Search Results for author: Indre Zliobaite

Found 7 papers, 1 paper with code

Fairness-aware machine learning: a perspective

no code implementations • 2 Aug 2017 • Indre Zliobaite

We need to analyze the machine learning process as a whole to systematically explain the roots of discrimination, which would allow us to devise global machine learning optimization criteria for guaranteed prevention, as opposed to pushing empirical constraints into existing algorithms case by case.

BIG-bench Machine Learning • Decision Making • +1

A note on adjusting $R^2$ for using with cross-validation

no code implementations • 5 May 2016 • Indre Zliobaite, Nikolaj Tatti

We show how to adjust the coefficient of determination ($R^2$) when used for measuring predictive accuracy via leave-one-out cross-validation.
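For ordinary least squares, leave-one-out residuals have a closed form via the hat matrix, which makes a LOOCV-based (PRESS-style) $R^2$ cheap to evaluate. The sketch below is a generic predictive-$R^2$ computation under that assumption; it is illustrative and not necessarily the exact adjustment derived in the paper.

```python
import numpy as np

def loo_r2(X, y):
    """PRESS-based R^2: 1 - (LOOCV squared error) / (total sum of squares).

    Uses the closed-form leave-one-out residual e_i / (1 - h_i) for OLS,
    where h_i is the i-th leverage. Illustrative sketch only.
    """
    y = np.asarray(y, dtype=float)
    n = len(y)
    X1 = np.column_stack([np.ones(n), np.asarray(X, dtype=float)])
    H = X1 @ np.linalg.pinv(X1)                  # hat matrix: y_hat = H y
    loo_resid = (y - H @ y) / (1 - np.diag(H))   # leave-one-out residuals
    press = np.sum(loo_resid ** 2)               # LOO sum of squared errors
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - press / ss_tot
```

Because PRESS is never smaller than the in-sample residual sum of squares, this estimate is at most the ordinary in-sample $R^2$.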

A survey on measuring indirect discrimination in machine learning

2 code implementations • 31 Oct 2015 • Indre Zliobaite

In this survey we review and organize various discrimination measures that have been used for measuring discrimination in data, as well as in evaluating performance of discrimination-aware predictive models.

Computers and Society • Applications
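As one concrete example of the kind of measure such surveys cover, the mean difference compares positive-decision rates between the unprotected and protected groups. A minimal sketch (the function name and interface are illustrative, and this is one standard measure, not a summary of the paper's full catalog):

```python
import numpy as np

def mean_difference(y_pred, protected):
    """Mean difference: P(positive | unprotected) - P(positive | protected).

    y_pred: binary predictions (0/1); protected: group membership flags.
    Positive values indicate the unprotected group is favored.
    """
    y_pred = np.asarray(y_pred, dtype=float)
    protected = np.asarray(protected, dtype=bool)
    return y_pred[~protected].mean() - y_pred[protected].mean()
```

A value of 0 indicates equal positive-decision rates; the maximum of 1 means only the unprotected group ever receives positive decisions.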

Predicting respiratory motion for real-time tumour tracking in radiotherapy

no code implementations • 4 Aug 2015 • Tomas Krilavicius, Indre Zliobaite, Henrikas Simonavicius, Laimonas Jarusevicius

Accurate predictions of lung tumor motion are expected to improve the precision of radiation treatment by controlling the position of a couch or a beam in order to compensate for respiratory motion during radiation treatment.

motion prediction

Optimal estimates for short horizon travel time prediction in urban areas

no code implementations • 30 Jul 2015 • Indre Zliobaite, Mikhail Khokhlov

One approach is to predict travel times for individual route segments and sum those estimates to obtain a prediction for the whole route.
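The segment-sum approach reduces to adding per-segment estimates along a route. A minimal sketch, with hypothetical segment ids and predicted travel times in seconds (the dictionary and names are assumptions, not data from the paper):

```python
# Hypothetical per-segment travel-time predictions (seconds).
segment_predictions = {"s1": 42.0, "s2": 65.5, "s3": 30.2}

def predict_route_time(route, seg_pred):
    """Sum per-segment travel-time estimates to get a whole-route estimate."""
    return sum(seg_pred[s] for s in route)

print(predict_route_time(["s1", "s2", "s3"], segment_predictions))
```

In practice each per-segment estimate would come from a fitted model; errors on individual segments can partially cancel when summed over a route.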

On the relation between accuracy and fairness in binary classification

no code implementations • 21 May 2015 • Indre Zliobaite

We argue that comparisons of non-discriminatory classifiers need to account for different rates of positive predictions; otherwise conclusions about performance may be misleading, because the accuracy and discrimination of naive baselines on the same dataset vary with the rate of positive predictions.

Classification • Fairness • +1
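The point about naive baselines can be made concrete: a classifier that predicts positive at random with rate p has an expected accuracy that depends on both p and the dataset's base rate, so raw accuracy is not comparable across different positive-prediction rates. The function below is an illustrative sketch of that arithmetic, not the paper's analysis:

```python
def naive_accuracy(p_pos_pred, p_pos_true):
    """Expected accuracy of a random classifier that outputs a positive
    prediction with probability p_pos_pred, independently of the features,
    on data whose true positive rate is p_pos_true."""
    return p_pos_pred * p_pos_true + (1 - p_pos_pred) * (1 - p_pos_true)

# With a 30% base rate, the baseline accuracy shifts with the rate
# of positive predictions:
for p in (0.0, 0.5, 1.0):
    print(p, naive_accuracy(p, 0.3))
```

At a 30% base rate, always predicting negative scores 0.7 while always predicting positive scores 0.3, so two classifiers with different positive-prediction rates face different baselines.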

Predictive User Modeling with Actionable Attributes

no code implementations • 23 Dec 2013 • Indre Zliobaite, Mykola Pechenizkiy

We study how to learn to choose the value of an actionable attribute in order to maximize the probability of a desired outcome in predictive modeling.

