Search Results for author: Dragana Bajovic

Found 8 papers, 0 papers with code

Large deviations rates for stochastic gradient descent with strongly convex functions

no code implementations • 2 Nov 2022 • Dragana Bajovic, Dusan Jakovetic, Soummya Kar

In this work we provide a formal framework for the study of general high probability bounds with SGD, based on the theory of large deviations.

Informativeness

A One-shot Framework for Distributed Clustered Learning in Heterogeneous Environments

no code implementations • 22 Sep 2022 • Aleksandar Armacki, Dragana Bajovic, Dusan Jakovetic, Soummya Kar

In the proposed setup, the grouping of users (based on the data distributions they sample), as well as the underlying statistical properties of the distributions, are a priori unknown.

Clustering • Federated Learning

Gradient Based Clustering

no code implementations • 1 Feb 2022 • Aleksandar Armacki, Dragana Bajovic, Dusan Jakovetic, Soummya Kar

We propose a general approach for distance based clustering, using the gradient of the cost function that measures clustering quality with respect to cluster assignments and cluster center positions.

Clustering
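The idea of clustering via gradients on the cost function can be sketched as follows. This is an illustration under our own simplifying assumptions — squared-Euclidean cost with hard nearest-center assignments and a size-normalized step — not the paper's algorithm:

```python
import numpy as np

def gradient_clustering(X, k, lr=0.1, iters=100):
    """Sketch: minimize the squared-distance clustering cost by
    gradient steps on the cluster centers, with assignments taken
    as nearest-center (hypothetical illustration)."""
    X = np.asarray(X, float)
    # simple deterministic initialization: k points spread across the data
    idx = np.linspace(0, len(X) - 1, k).astype(int)
    centers = X[idx].astype(float)
    for _ in range(iters):
        # hard assignment: each point goes to its nearest center
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        assign = d.argmin(axis=1)
        for j in range(k):
            pts = X[assign == j]
            if len(pts):
                # gradient of 0.5 * sum ||x - c_j||^2 wrt c_j is
                # n_j * (c_j - mean_j); we take a size-normalized step
                centers[j] -= lr * (centers[j] - pts.mean(axis=0))
    return centers, assign
```

With a decaying step size and soft assignments this family of updates interpolates between Lloyd-style k-means and pure gradient descent on the clustering cost.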

Personalized Federated Learning via Convex Clustering

no code implementations • 1 Feb 2022 • Aleksandar Armacki, Dragana Bajovic, Dusan Jakovetic, Soummya Kar

The proposed framework is based on a generalization of convex clustering in which the differences between different users' models are penalized via a sum-of-norms penalty, weighted by a penalty parameter $\lambda$.

Clustering • Personalized Federated Learning

Deep Learning Anomaly Detection for Cellular IoT with Applications in Smart Logistics

no code implementations • 17 Feb 2021 • Milos Savic, Milan Lukic, Dragan Danilovic, Zarko Bodroski, Dragana Bajovic, Ivan Mezei, Dejan Vukobratovic, Srdjan Skrbic, Dusan Jakovetic

The number of connected Internet of Things (IoT) devices within cyber-physical infrastructure systems grows at an increasing rate.

Anomaly Detection • Networking and Internet Architecture

Primal-dual methods for large-scale and distributed convex optimization and data analytics

no code implementations • 18 Dec 2019 • Dusan Jakovetic, Dragana Bajovic, Joao Xavier, Jose M. F. Moura

The augmented Lagrangian method (ALM) is a classical optimization tool that solves a given "difficult" (constrained) problem by finding solutions of a sequence of "easier" (often unconstrained) sub-problems with respect to the original (primal) variable, wherein constraint satisfaction is controlled via the so-called dual variables.

Optimization and Control • Information Theory
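The ALM iteration described above can be made concrete on a toy equality-constrained quadratic program (the problem instance and parameters are ours, for illustration): each "easier" sub-problem in the primal variable x is an unconstrained quadratic solved exactly, and the dual update steers constraint satisfaction.

```python
import numpy as np

def alm_quadratic(c, A, b, rho=1.0, iters=50):
    """Augmented Lagrangian method for
        min 0.5 * ||x - c||^2   s.t.   A x = b
    (a minimal illustrative instance, not the paper's general setting)."""
    n = len(c)
    x = np.zeros(n)
    y = np.zeros(len(b))          # dual variables for the constraint
    H = np.eye(n) + rho * A.T @ A  # Hessian of the augmented Lagrangian in x
    for _ in range(iters):
        # primal step: exactly minimize L_rho(x, y) over x
        x = np.linalg.solve(H, c - A.T @ y + rho * A.T @ b)
        # dual step: gradient ascent on the constraint residual
        y = y + rho * (A @ x - b)
    return x, y
```

The dual update drives the residual A x - b to zero, so the limit point satisfies the constraint without ever solving a constrained sub-problem.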
