Search Results for author: Ş. İlker Birbil

Found 8 papers, 4 papers with code

Differentially Private Distributed Bayesian Linear Regression with MCMC

1 code implementation • 31 Jan 2023 • Barış Alparslan, Sinan Yildirim, Ş. İlker Birbil

We develop a novel generative statistical model for privately shared statistics, which exploits a useful distributional relation between the summary statistics of linear regression.

Bayesian Inference • Privacy Preserving • +1
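The abstract's central object, privately released summary statistics for linear regression, can be illustrated with a minimal Gaussian-mechanism-style sketch. The noise scale, toy data, and plain additive-noise release below are illustrative assumptions; the paper's generative model and MCMC inference scheme are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_summary_stats(X, y, noise_scale):
    """Release the linear-regression summary statistics X^T X and X^T y
    with additive Gaussian noise (a simplified, Gaussian-mechanism-style
    stand-in for the paper's privacy mechanism)."""
    d = X.shape[1]
    S = X.T @ X + rng.normal(0.0, noise_scale, size=(d, d))
    z = X.T @ y + rng.normal(0.0, noise_scale, size=d)
    return S, z

# Toy data: y = 2*x + small noise
X = rng.normal(size=(500, 1))
y = 2.0 * X[:, 0] + 0.1 * rng.normal(size=500)

S, z = noisy_summary_stats(X, y, noise_scale=1.0)
beta_hat = np.linalg.solve(S, z)  # estimate recovered from privatized statistics
```

With enough data, the noisy statistics still recover the regression coefficient closely, which is the distributional relation such methods exploit.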

Semantic match: Debugging feature attribution methods in XAI for healthcare

no code implementations • 5 Jan 2023 • Giovanni Cinà, Tabea E. Röber, Rob Goedhart, Ş. İlker Birbil

Despite valid concerns, we argue that existing criticism of the viability of post-hoc local explainability methods throws the baby out with the bathwater by generalizing a problem that is specific to image data.

Explainable Artificial Intelligence (XAI) • Feature Importance • +1

Rule Generation for Classification: Scalability, Interpretability, and Fairness

1 code implementation • 21 Apr 2021 • Adia C. Lumadjeng, Tabea Röber, M. Hakan Akyüz, Ş. İlker Birbil

The method returns a set of rules along with their optimal weights indicating the importance of each rule for learning.

Classification • Fairness • +1
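The output format described above, a set of rules with optimal weights, can be sketched as a weighted voting classifier. The rules, class assignments, and weights below are hypothetical placeholders; the optimization that learns them in the paper is not shown.

```python
# Hypothetical rules over a feature dict; each votes for a class with a
# learned weight. These rules and weights are invented for illustration.

def rule_1(x):
    return x["age"] > 40          # e.g. "age > 40"

def rule_2(x):
    return x["income"] <= 30000   # e.g. "income <= 30000"

# (rule, voted class, weight) -- weights indicate each rule's importance
weighted_rules = [(rule_1, 1, 0.8), (rule_2, 0, 0.5)]

def predict(x, rules, n_classes=2):
    scores = [0.0] * n_classes
    for rule, cls, w in rules:
        if rule(x):               # rule fires on this sample
            scores[cls] += w      # add its weight to the voted class
    return max(range(n_classes), key=lambda c: scores[c])

print(predict({"age": 55, "income": 20000}, weighted_rules))  # → 1
```

Here both rules fire, but the heavier-weighted rule wins the vote, which is how per-rule weights make the ensemble interpretable.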

Differentially Private Accelerated Optimization Algorithms

1 code implementation • 5 Aug 2020 • Nurdan Kuru, Ş. İlker Birbil, Mert Gurbuzbalaban, Sinan Yildirim

The first algorithm is inspired by Polyak's heavy ball method and employs a smoothing approach to decrease the accumulated noise on the gradient steps required for differential privacy.
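The heavy-ball-with-noise structure can be sketched as follows. The Gaussian gradient perturbation stands in for the privacy mechanism, and the step size, momentum, and noise scale are illustrative assumptions, not the paper's calibrated algorithm; note how the momentum term averages past noisy gradients, which is the smoothing effect the abstract refers to.

```python
import numpy as np

rng = np.random.default_rng(1)

def noisy_heavy_ball(grad, x0, lr=0.1, momentum=0.9, sigma=0.02, iters=200):
    """Polyak heavy-ball iteration with Gaussian noise added to each
    gradient, mimicking a differentially private first-order method.
    The momentum buffer averages past noisy gradients, damping the
    injected noise over iterations."""
    x = np.asarray(x0, dtype=float)
    v = np.zeros_like(x)
    for _ in range(iters):
        g = grad(x) + rng.normal(0.0, sigma, size=x.shape)  # perturbed gradient
        v = momentum * v - lr * g
        x = x + v
    return x

# Minimize f(x) = ||x - 3||^2, whose gradient is 2*(x - 3)
x_star = noisy_heavy_ball(lambda x: 2.0 * (x - 3.0), x0=[0.0])
```

Despite per-step noise, the iterate settles near the minimizer at 3.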

HAMSI: A Parallel Incremental Optimization Algorithm Using Quadratic Approximations for Solving Partially Separable Problems

no code implementations • 5 Sep 2015 • Kamer Kaya, Figen Öztoprak, Ş. İlker Birbil, A. Taylan Cemgil, Umut Şimşekli, Nurdan Kuru, Hazal Koptagel, M. Kaan Öztürk

We propose HAMSI (Hessian Approximated Multiple Subsets Iteration), which is a provably convergent, second order incremental algorithm for solving large-scale partially separable optimization problems.
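The "quadratic approximations over subsets" idea can be loosely illustrated with an incremental Newton-style sweep over the pieces of a partially separable objective. This toy uses disjoint one-dimensional quadratic pieces for clarity; it is a caricature of the structure, not HAMSI's parallel second-order scheme.

```python
import numpy as np

# Partially separable toy objective: f(x) = sum_i f_i(x_{S_i}), where each
# piece f_i(x) = 0.5 * a * (x - b)^2 touches a single coordinate. The
# pieces, coefficients, and the plain Newton step are assumptions made
# for illustration only.

def piece_grad_hess(x, a, b):
    """Gradient and (scalar) Hessian of one quadratic piece."""
    return a * (x - b), a

# (coordinate index, curvature a, minimizer b)
pieces = [(0, 3.0, 2.0), (1, 1.0, -1.0)]

x = np.zeros(2)
for _ in range(20):
    for idx, a, b in pieces:          # visit subsets incrementally
        g, h = piece_grad_hess(x[idx], a, b)
        x[idx] -= g / h               # minimize the local quadratic model
```

Because each local model here is exact, one sweep already lands on the minimizer (2, -1); HAMSI's interest lies in doing such subset updates in parallel with approximate Hessians.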

Parallel Stochastic Gradient Markov Chain Monte Carlo for Matrix Factorisation Models

no code implementations • 3 Jun 2015 • Umut Şimşekli, Hazal Koptagel, Hakan Güldaş, A. Taylan Cemgil, Figen Öztoprak, Ş. İlker Birbil

For large matrix factorisation problems, we develop a distributed Markov Chain Monte Carlo (MCMC) method based on stochastic gradient Langevin dynamics (SGLD) that we call Parallel SGLD (PSGLD).
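The SGLD building block behind PSGLD can be sketched on a toy one-dimensional posterior; the step size and target are illustrative assumptions, and the parallel partitioning over matrix-factorisation blocks, which is the paper's contribution, is omitted.

```python
import numpy as np

rng = np.random.default_rng(2)

# SGLD sampling from a toy 1-D Gaussian posterior N(mu=1, var=1):
# each step follows the (stochastic) gradient of the log-posterior
# plus injected Gaussian noise scaled to the step size.

def grad_log_post(theta):
    return -(theta - 1.0)  # gradient of log N(theta | 1, 1)

eps = 0.05                 # step size (illustrative choice)
theta = 0.0
samples = []
for _ in range(5000):
    theta += 0.5 * eps * grad_log_post(theta) + np.sqrt(eps) * rng.normal()
    samples.append(theta)

post_mean = np.mean(samples[1000:])  # discard burn-in
```

The chain's empirical mean approaches the posterior mean of 1; PSGLD runs many such chains over blocks of the factor matrices in parallel.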
