no code implementations • NeurIPS 2012 • Francisco Ruiz, Isabel Valera, Carlos Blanco, Fernando Pérez-Cruz
In the present paper, we are interested in uncovering the hidden causes behind suicide attempts, for which we propose to model the subjects using a nonparametric latent model based on the Indian Buffet Process (IBP).
no code implementations • 29 Jan 2014 • Francisco J. R. Ruiz, Isabel Valera, Carlos Blanco, Fernando Perez-Cruz
To this end, we use the large amount of information collected in the National Epidemiologic Survey on Alcohol and Related Conditions (NESARC) database and propose to model these data using a nonparametric latent model based on the Indian Buffet Process (IBP).
no code implementations • NeurIPS 2014 • Mehrdad Farajtabar, Nan Du, Manuel Gomez Rodriguez, Isabel Valera, Hongyuan Zha, Le Song
Events in an online social network can be categorized roughly into endogenous events, where users simply respond to the actions of their neighbors within the network, and exogenous events, where users take actions driven by factors external to the network.
no code implementations • NeurIPS 2014 • Isabel Valera, Zoubin Ghahramani
Even though heterogeneous databases arise in a broad variety of applications, there is a lack of tools for estimating missing data in such databases.
2 code implementations • 19 Jul 2015 • Muhammad Bilal Zafar, Isabel Valera, Manuel Gomez Rodriguez, Krishna P. Gummadi
Algorithmic decision making systems are ubiquitous across a wide variety of online as well as offline services.
1 code implementation • NeurIPS 2015 • Isabel Valera, Francisco Ruiz, Lennart Svensson, Fernando Perez-Cruz
We propose the infinite factorial dynamic model (iFDM), a general Bayesian nonparametric model for source separation.
1 code implementation • 18 Oct 2016 • Charalampos Mavroforakis, Isabel Valera, Manuel Gomez Rodriguez
People are increasingly relying on the Web and social media to find solutions to their problems in a wide range of domains.
no code implementations • 24 Oct 2016 • Behzad Tabibian, Isabel Valera, Mehrdad Farajtabar, Le Song, Bernhard Schölkopf, Manuel Gomez-Rodriguez
Then, we propose a temporal point process modeling framework that links these temporal traces to robust, unbiased and interpretable notions of information reliability and source trustworthiness.
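Temporal point processes model event streams through a time-varying intensity function. As a minimal, generic illustration (not the paper's specific reliability model), here is a sketch of a Hawkes-style self-exciting intensity with an exponential kernel; the parameter names `mu`, `alpha`, and `beta` are illustrative defaults:

```python
import numpy as np

def hawkes_intensity(t, events, mu=0.2, alpha=0.8, beta=1.0):
    """Intensity of a Hawkes temporal point process at time t:
    a baseline rate mu plus exponentially decaying excitation
    contributed by every past event."""
    events = np.asarray(events, dtype=float)
    past = events[events < t]  # only events strictly before t excite
    return float(mu + alpha * np.sum(np.exp(-beta * (t - past))))
```

A single event at time 0 raises the intensity at time 1 by `alpha * exp(-beta)`, after which the excitation continues to decay toward the baseline `mu`.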
3 code implementations • 26 Oct 2016 • Muhammad Bilal Zafar, Isabel Valera, Manuel Gomez Rodriguez, Krishna P. Gummadi
To account for and avoid such unfairness, in this paper, we introduce a new notion of unfairness, disparate mistreatment, which is defined in terms of misclassification rates.
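Since disparate mistreatment is defined in terms of misclassification rates, it can be illustrated by comparing false positive and false negative rates across groups. The sketch below is a simplified stand-in (assuming exactly two groups, binary labels, and both outcomes present in each group), not the paper's constrained-optimization formulation:

```python
import numpy as np

def misclassification_gaps(y_true, y_pred, group):
    """Absolute gaps in false positive rate and false negative rate
    between two groups (a toy disparate-mistreatment measure)."""
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    rates = {}
    for g in np.unique(group):  # assumes exactly two groups
        m = group == g
        fpr = np.mean(y_pred[m & (y_true == 0)])      # P(yhat=1 | y=0, g)
        fnr = np.mean(1 - y_pred[m & (y_true == 1)])  # P(yhat=0 | y=1, g)
        rates[g] = (fpr, fnr)
    (fpr_a, fnr_a), (fpr_b, fnr_b) = rates.values()
    return abs(fpr_a - fpr_b), abs(fnr_a - fnr_b)
```

A classifier free of disparate mistreatment would drive both gaps toward zero.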
no code implementations • 14 Dec 2016 • Utkarsh Upadhyay, Isabel Valera, Manuel Gomez-Rodriguez
In this paper, we present a probabilistic modeling framework of crowdlearning, which uncovers the evolution of a user's expertise over time by leveraging other users' assessments of her contributions.
1 code implementation • 12 Jun 2017 • Isabel Valera, Melanie F. Pradier, Maria Lomeli, Zoubin Ghahramani
Second, its Bayesian nonparametric nature allows us to automatically infer the model complexity from the data, i.e., the number of features necessary to capture the latent structure in the data.
1 code implementation • NeurIPS 2017 • Muhammad Bilal Zafar, Isabel Valera, Manuel Gomez Rodriguez, Krishna P. Gummadi, Adrian Weller
The adoption of automated, data-driven decision making in an ever-expanding range of applications has raised concerns about its potential unfairness towards certain social groups.
no code implementations • 26 Jul 2017 • Isabel Valera, Melanie F. Pradier, Zoubin Ghahramani
This paper introduces a general Bayesian nonparametric latent feature model suitable for performing automatic exploratory analysis of heterogeneous datasets, where the attributes describing each object can be discrete, continuous, or mixed variables.
1 code implementation • ICML 2017 • Isabel Valera, Zoubin Ghahramani
A common practice in statistics and machine learning is to assume that the statistical data types (e.g., ordinal, categorical, or real-valued) of variables, and usually also the likelihood model, are known.
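When the statistical data type of each variable is not given, it must be inferred from the data itself. The paper treats this as Bayesian inference over likelihood models; the sketch below is only a crude heuristic stand-in that illustrates the kind of distinction being made (the thresholds and category names are arbitrary assumptions, not the paper's method):

```python
import numpy as np

def guess_statistical_type(x):
    """Toy heuristic guess of a column's statistical data type.
    A crude stand-in for proper Bayesian inference over likelihoods."""
    x = np.asarray(x, dtype=float)
    vals = np.unique(x[~np.isnan(x)])  # distinct observed values
    if np.allclose(vals, np.round(vals)):      # integer-valued column
        if len(vals) <= 20:                    # arbitrary cutoff
            return "binary" if len(vals) <= 2 else "categorical"
        return "count"
    if np.all(vals > 0):
        return "positive real"
    return "real"
```

Once a type is guessed (or inferred), a matching likelihood can be attached to the column, e.g. a Bernoulli for binary data or a Gaussian for real-valued data.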
1 code implementation • NeurIPS 2018 • Isabel Valera, Adish Singla, Manuel Gomez Rodriguez
Societies often rely on human experts to make a wide variety of decisions affecting their members, from jail-or-release decisions made by judges and stop-and-frisk decisions made by police officers to accept-or-reject decisions made by academics.
1 code implementation • NeurIPS 2018 • Francesco Locatello, Gideon Dresdner, Rajiv Khanna, Isabel Valera, Gunnar Rätsch
Finally, we present a stopping criterion drawn from the duality gap in the classic FW analyses and exhaustive experiments to illustrate the usefulness of our theoretical and algorithmic contributions.
2 code implementations • 10 Jul 2018 • Alfredo Nazabal, Pablo M. Olmos, Zoubin Ghahramani, Isabel Valera
Variational autoencoders (VAEs), as well as other generative models, have been shown to be efficient and accurate for capturing the latent structure of vast amounts of complex high-dimensional data.
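A key ingredient for training generative models such as VAEs on incomplete data is evaluating the reconstruction term only on the observed entries, so missing values contribute nothing to the loss. The sketch below shows that masking trick in isolation, with a unit-variance Gaussian likelihood for simplicity; it is an illustration of the general idea, not the paper's full model:

```python
import numpy as np

def masked_gaussian_nll(x, x_hat, observed_mask):
    """Reconstruction loss on observed entries only: missing entries
    (where observed_mask is False, x may even be NaN) add zero."""
    x, x_hat = np.asarray(x, dtype=float), np.asarray(x_hat, dtype=float)
    sq = 0.5 * (x - x_hat) ** 2  # Gaussian NLL up to an additive constant
    return float(np.sum(np.where(observed_mask, sq, 0.0)))
```

In a full VAE, this masked term would replace the usual dense reconstruction loss, and the decoder's predictions at missing positions can then be read off as imputations.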
no code implementations • 24 Jul 2018 • Antonio Vergari, Alejandro Molina, Robert Peharz, Zoubin Ghahramani, Kristian Kersting, Isabel Valera
Classical approaches for exploratory data analysis are usually not flexible enough to deal with the uncertainty inherent to real-world data: they are often restricted to fixed latent interaction models and homogeneous likelihoods; they are sensitive to missing, corrupt and anomalous data; moreover, their expressiveness generally comes at the price of intractable inference.
no code implementations • 18 Oct 2018 • Francisco J. R. Ruiz, Isabel Valera, Lennart Svensson, Fernando Perez-Cruz
New communication standards need to deal with machine-to-machine communications, in which users may start or stop transmitting at any time in an asynchronous manner.
1 code implementation • 8 Feb 2019 • Niki Kilbertus, Manuel Gomez-Rodriguez, Bernhard Schölkopf, Krikamol Muandet, Isabel Valera
In this paper, we show that in this selective labels setting, learning a predictor directly only from available labeled data is suboptimal in terms of both fairness and utility.
1 code implementation • 27 May 2019 • Amir-Hossein Karimi, Gilles Barthe, Borja Balle, Isabel Valera
Predictive models are being increasingly used to support consequential decision making at the individual level in contexts such as pretrial bail and loan approval.
2 code implementations • 14 Feb 2020 • Amir-Hossein Karimi, Bernhard Schölkopf, Isabel Valera
As machine learning is increasingly used to inform consequential decision-making (e.g., pre-trial bail and loan approval), it becomes important to explain how the system arrived at its decision, and also to suggest actions to achieve a favorable decision.
2 code implementations • 26 Feb 2020 • Adrián Javaloy, Isabel Valera
While MTL solutions do not directly apply in the probabilistic setting (as they cannot handle the likelihood constraints), we show that similar ideas may be leveraged during data preprocessing.
1 code implementation • NeurIPS 2020 • Amir-Hossein Karimi, Julius von Kügelgen, Bernhard Schölkopf, Isabel Valera
Recent work has discussed the limitations of counterfactual explanations to recommend actions for algorithmic recourse, and argued for the need of taking causal relationships between features into consideration.
no code implementations • 8 Oct 2020 • Amir-Hossein Karimi, Gilles Barthe, Bernhard Schölkopf, Isabel Valera
Machine learning is increasingly used to inform decision-making in sensitive situations where decisions have consequential effects on individuals' lives.
no code implementations • 10 Oct 2020 • Kiarash Mohammadi, Amir-Hossein Karimi, Gilles Barthe, Isabel Valera
Counterfactual explanations (CFE) are being widely used to explain algorithmic decisions, especially in consequential decision-making contexts (e.g., loan approval or pretrial bail).
1 code implementation • 13 Oct 2020 • Julius von Kügelgen, Amir-Hossein Karimi, Umang Bhatt, Isabel Valera, Adrian Weller, Bernhard Schölkopf
Algorithmic fairness is typically studied from the perspective of predictions.
no code implementations • 1 Jan 2021 • Adrián Javaloy, Isabel Valera
GradNorm eases the fitting of all individual tasks by dynamically equalizing the contribution of each task to the overall gradient magnitude.
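GradNorm's core mechanism is to nudge each task's loss weight so that its gradient magnitude moves toward a common target. The sketch below is a heavily simplified single update step (the sign-based step and the renormalization are simplifying assumptions of this sketch, not the exact GradNorm update rule):

```python
import numpy as np

def gradnorm_weights(grad_norms, loss_ratios, weights, alpha=1.5, lr=0.025):
    """One simplified GradNorm-style weight update: push each task's
    gradient norm toward the mean norm, scaled by that task's relative
    inverse training rate (tasks learning slowly get larger targets)."""
    grad_norms = np.asarray(grad_norms, dtype=float)
    inv_rate = np.asarray(loss_ratios, dtype=float)
    inv_rate = inv_rate / inv_rate.mean()
    target = grad_norms.mean() * inv_rate ** alpha  # desired per-task norms
    # crude descent step on the L1 gap |G_i - target_i| w.r.t. w_i
    weights = np.asarray(weights, dtype=float) - lr * np.sign(grad_norms - target)
    return weights * len(weights) / weights.sum()   # renormalize: sum = #tasks
```

Intuitively, a task whose gradients dominate the shared backbone gets its weight reduced, equalizing each task's contribution to the overall gradient magnitude.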
no code implementations • 8 Feb 2021 • Jakob Schoeffer, Niklas Kuehl, Isabel Valera
In this paper, we focus on scenarios where only imperfect labels are available and propose a new fair ranking-based decision system based on monotonic relationships between legitimate features and the outcome.
2 code implementations • ICLR 2022 • Adrián Javaloy, Isabel Valera
Multitask learning is being increasingly adopted in application domains like computer vision and reinforcement learning.
no code implementations • NeurIPS 2021 • Pablo Sanchez Martin, Miriam Rateike, Isabel Valera
We propose the Variational Causal Autoencoder (VCAUSE), a novel class of variational graph autoencoders for causal inference in the absence of hidden confounders, when only observational data and the causal graph are available.
no code implementations • NeurIPS 2021 • Adrián Javaloy, Isabel Valera
Multi-task learning is being increasingly adopted in application domains like computer vision and reinforcement learning.
1 code implementation • 27 Oct 2021 • Pablo Sanchez-Martin, Miriam Rateike, Isabel Valera
In this paper, we introduce VACA, a novel class of variational graph autoencoders for causal inference in the absence of hidden confounders, when only observational data and the causal graph are available.
1 code implementation • 10 May 2022 • Miriam Rateike, Ayan Majumdar, Olga Mineeva, Krishna P. Gummadi, Isabel Valera
In addition, data is often selectively labeled, i.e., even the biased labels are only observed for a small fraction of the data that received a positive decision.
1 code implementation • 9 Jun 2022 • Adrián Javaloy, Maryam Meghdadi, Isabel Valera
We refer to this limitation as modality collapse.
1 code implementation • 21 Nov 2022 • Adrián Javaloy, Pablo Sanchez-Martin, Amit Levi, Isabel Valera
Existing Graph Neural Networks (GNNs) compute the message exchange between nodes by either aggregating uniformly (convolving) the features of all the neighboring nodes, or by applying a non-uniform score (attending) to the features.
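The two aggregation styles contrasted here can be illustrated side by side: uniform averaging of neighbor features (convolution-style) versus a softmax over per-edge scores (attention-style). The sketch below uses a dense adjacency matrix and assumes every node has at least one neighbor; it illustrates the distinction, not the paper's proposed architecture:

```python
import numpy as np

def aggregate(h, adj, scores=None):
    """Message passing over a dense adjacency matrix. With scores=None,
    neighbor features are averaged uniformly (convolving); otherwise
    each neighbor is weighted by a softmax over its score (attending)."""
    adj = np.asarray(adj, dtype=float)
    if scores is None:
        w = adj / adj.sum(axis=1, keepdims=True)       # uniform weights
    else:
        e = np.where(adj > 0, np.asarray(scores, dtype=float), -np.inf)
        e = np.exp(e - e.max(axis=1, keepdims=True))   # masked, stable softmax
        w = e / e.sum(axis=1, keepdims=True)           # attention weights
    return w @ np.asarray(h, dtype=float)              # weighted message sum
```

With uniform weights, a node receives the plain mean of its neighbors' features; with strongly peaked scores, it receives (almost exclusively) the features of the highest-scoring neighbor.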
1 code implementation • 13 Feb 2023 • Batuhan Koyuncu, Pablo Sanchez-Martin, Ignacio Peis, Pablo M. Olmos, Isabel Valera
Recent approaches build on implicit neural representations (INRs) to propose generative models over function spaces.
1 code implementation • NeurIPS 2023 • Adrián Javaloy, Pablo Sánchez-Martín, Isabel Valera
In this work, we further explore the use of normalizing flows for causal reasoning.
no code implementations • 11 Aug 2023 • Nan Wu, Isabel Valera, Fabian Sinz, Alexander Ecker, Thomas Euler, Yongrong Qiu
While deep neural network models have demonstrated excellent performance at neural prediction, they usually do not quantify the uncertainty of the resulting neural representations and derived statistics, such as the stimuli that optimally drive neurons, obtained from in silico experiments.
no code implementations • 21 Nov 2023 • Miriam Rateike, Isabel Valera, Patrick Forré
Neglecting the effect that decisions have on individuals (and thus, on the underlying data distribution) when designing algorithmic decision-making policies may increase inequalities and unfairness in the long term, even if fairness considerations were taken into account in the policy design process.
no code implementations • 18 Apr 2024 • Pablo Sanchez-Martin, Kinaan Aamir Khan, Isabel Valera
Graph Neural Networks (GNNs) have achieved state-of-the-art performance in solving graph classification tasks.