Search Results for author: Hakan Ferhatosmanoglu

Found 8 papers, 3 papers with code

FSD-Inference: Fully Serverless Distributed Inference with Scalable Cloud Communication

no code implementations • 22 Mar 2024 • Joe Oakley, Hakan Ferhatosmanoglu

In the absence of such solutions in the serverless domain, parallel computation with significant IPC requirements is challenging.

Low-bit Quantization for Deep Graph Neural Networks with Smoothness-aware Message Propagation

1 code implementation • 29 Aug 2023 • Shuang Wang, Bahaeddin Eravci, Rustam Guliyev, Hakan Ferhatosmanoglu

Graph Neural Network (GNN) training and inference face significant scalability challenges with respect to both model size and number of layers, degrading efficiency and accuracy for large and deep GNNs.

Node Classification, Quantization
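The low-bit quantization the paper studies can be illustrated with a minimal sketch of symmetric uniform quantization of a weight tensor (the function names and the 4-bit setting here are illustrative assumptions, not the paper's actual scheme, which is smoothness-aware and tailored to message propagation in GNNs):

```python
import numpy as np

def quantize_uniform(x, bits=4):
    """Symmetric uniform quantization: map floats to `bits`-bit signed ints."""
    qmax = 2 ** (bits - 1) - 1                      # e.g. 7 for 4-bit
    max_abs = np.abs(x).max()
    scale = max_abs / qmax if max_abs > 0 else 1.0  # one scale per tensor
    q = np.clip(np.round(x / scale), -qmax, qmax).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float values from the quantized integers."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal((4, 4)).astype(np.float32)
q, s = quantize_uniform(w, bits=4)
w_hat = dequantize(q, s)       # low-precision reconstruction of w
```

Storing 4-bit integers plus a single scale in place of 32-bit floats is what yields the memory and bandwidth savings the abstract targets, at the cost of the reconstruction error between `w` and `w_hat`.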

Scalable Graph Convolutional Network Training on Distributed-Memory Systems

no code implementations • 9 Dec 2022 • Gunduz Vehbi Demirci, Aparajita Haldar, Hakan Ferhatosmanoglu

The large data sizes of graphs and their vertex features make scalable training algorithms and distributed memory systems necessary.

Blocking, graph partitioning +1

RAGUEL: Recourse-Aware Group Unfairness Elimination

no code implementations • 30 Aug 2022 • Aparajita Haldar, Teddy Cunningham, Hakan Ferhatosmanoglu

While machine learning and ranking-based systems are in widespread use for sensitive decision-making processes (e.g., determining job candidates, assigning credit scores), they are rife with concerns over unintended biases in their outcomes, making algorithmic fairness (e.g., demographic parity, equal opportunity) an objective of interest.

Attribute, counterfactual +3
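Demographic parity, one of the fairness notions the abstract names, asks that the positive-outcome rate be equal across protected groups. A minimal sketch of measuring the gap (the function name and two-group encoding are illustrative assumptions, not RAGUEL's API):

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Absolute difference in positive-prediction rates between two groups.

    y_pred: binary predictions (0/1); group: protected-group labels (0/1).
    A gap of 0 means the classifier satisfies demographic parity exactly.
    """
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return abs(rate_a - rate_b)

preds  = [1, 0, 1, 1, 0, 0, 1, 0]
groups = [0, 0, 0, 0, 1, 1, 1, 1]
gap = demographic_parity_gap(preds, groups)  # |0.75 - 0.25| = 0.5
```

Group-unfairness elimination methods aim to drive such a gap toward zero while disturbing individual outcomes (and recourse) as little as possible.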

GeoPointGAN: Synthetic Spatial Data with Local Label Differential Privacy

1 code implementation • 18 May 2022 • Teddy Cunningham, Konstantin Klemmer, Hongkai Wen, Hakan Ferhatosmanoglu

We introduce GeoPointGAN, a novel GAN-based solution for generating synthetic spatial point datasets with high utility and strong individual-level privacy guarantees.

Management, Privacy Preserving +1

Partitioning sparse deep neural networks for scalable training and inference

no code implementations • 23 Apr 2021 • Gunduz Vehbi Demirci, Hakan Ferhatosmanoglu

Both the feedforward (inference) and backpropagation steps of the stochastic gradient descent (SGD) algorithm for training sparse DNNs involve consecutive sparse matrix-vector multiplications (SpMVs).

Computational Efficiency, Management
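The SpMV kernel at the heart of the abstract above can be sketched for a matrix stored in the standard CSR (compressed sparse row) format; this is a generic illustration of the operation, not the paper's partitioned distributed implementation:

```python
import numpy as np

def spmv_csr(indptr, indices, data, x):
    """Compute y = A @ x where A is stored in CSR form.

    indptr[row] .. indptr[row+1] delimits row's nonzeros in `data`,
    and `indices` gives each nonzero's column.
    """
    y = np.zeros(len(indptr) - 1)
    for row in range(len(indptr) - 1):
        start, end = indptr[row], indptr[row + 1]
        # Dot product of the row's nonzeros with the matching entries of x.
        y[row] = np.dot(data[start:end], x[indices[start:end]])
    return y

# A = [[1, 0, 2],
#      [0, 3, 0]]
indptr  = np.array([0, 2, 3])
indices = np.array([0, 2, 1])
data    = np.array([1.0, 2.0, 3.0])
x = np.array([1.0, 1.0, 1.0])
y = spmv_csr(indptr, indices, data, x)   # [3.0, 3.0]
```

Because only the nonzeros are touched, partitioning the rows (and the corresponding entries of `x`) across processors is what makes distributed sparse training a load-balancing and communication-minimization problem.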
