Search Results for author: Aritra Dutta

Found 16 papers, 4 papers with code

Multiview Aerial Visual Recognition (MAVREC): Can Multi-view Improve Aerial Visual Perception?

no code implementations • 7 Dec 2023 • Aritra Dutta, Srijan Das, Jacob Nielsen, Rajatsubhra Chakraborty, Mubarak Shah

Despite the commercial abundance of UAVs, aerial data acquisition remains challenging, and the existing Asia- and North America-centric open-source UAV datasets are small-scale or low-resolution and lack diversity in scene context.

Benchmarking • Object Detection +2

Demystifying the Myths and Legends of Nonconvex Convergence of SGD

no code implementations • 19 Oct 2023 • Aritra Dutta, El Houcine Bergou, Soumia Boucherouite, Nicklas Werge, Melih Kandemir, Xin Li

Additionally, our analyses allow us to measure the density of the $\epsilon$-stationary points in the final iterates of SGD, and we recover the classical $O(\frac{1}{\sqrt{T}})$ asymptotic rate under various existing assumptions on the objective function and the bounds on the stochastic gradient.
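A toy illustration (not the paper's analysis, and with a hypothetical objective and noise model) of the quantity being measured: run SGD on a simple nonconvex function and count what fraction of the final iterates are $\epsilon$-stationary, i.e. have gradient norm at most $\epsilon$.

```python
import numpy as np

# Toy sketch: SGD on the nonconvex f(x) = x^4 - x^2, then measure the
# density of epsilon-stationary points (|f'(x)| <= eps) in the tail iterates.
def grad(x):
    return 4 * x**3 - 2 * x  # exact gradient of f(x) = x^4 - x^2

rng = np.random.default_rng(0)
x, lr, T, eps = 2.0, 0.01, 5000, 0.5
iterates = []
for t in range(T):
    g = grad(x) + rng.normal(scale=0.1)  # stochastic gradient: exact + noise
    x -= lr * g
    iterates.append(x)

# density of epsilon-stationary points among the last half of the iterates
tail = iterates[T // 2 :]
density = np.mean([abs(grad(v)) <= eps for v in tail])
print(f"epsilon-stationary density in final iterates: {density:.2f}")
```

With a small noise scale the iterates settle near a local minimum, so most tail iterates are $\epsilon$-stationary and the density is close to 1.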

A Note on Randomized Kaczmarz Algorithm for Solving Doubly-Noisy Linear Systems

no code implementations • 31 Aug 2023 • El Houcine Bergou, Soumia Boucherouite, Aritra Dutta, Xin Li, Anna Ma

In this paper, we analyze the convergence of RK for noisy linear systems in which the coefficient matrix $A$ is corrupted with both additive and multiplicative noise and the right-hand-side vector $b$ is also noisy.
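For reference, a minimal sketch of the classical randomized Kaczmarz iteration on a noise-free system $Ax = b$ (the paper's setting adds noise to both $A$ and $b$); the problem sizes here are illustrative. Each step samples a row with probability proportional to its squared norm and projects the iterate onto that row's hyperplane.

```python
import numpy as np

# Classical randomized Kaczmarz (RK) on a consistent, noiseless system.
rng = np.random.default_rng(0)
A = rng.normal(size=(50, 5))
x_true = rng.normal(size=5)
b = A @ x_true  # noise-free right-hand side

row_norms = np.sum(A**2, axis=1)
probs = row_norms / row_norms.sum()  # sample rows ∝ ||a_i||^2

x = np.zeros(5)
for _ in range(2000):
    i = rng.choice(len(b), p=probs)
    # project x onto the hyperplane {z : a_i^T z = b_i}
    x += (b[i] - A[i] @ x) / row_norms[i] * A[i]

print("error:", np.linalg.norm(x - x_true))
```

On a consistent system RK converges linearly to the solution; the noisy variants analyzed in the paper instead converge to a neighborhood whose radius depends on the noise.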

Personalized Federated Learning with Communication Compression

no code implementations • 12 Sep 2022 • El Houcine Bergou, Konstantin Burlachenko, Aritra Dutta, Peter Richtárik

Recently, Hanzely and Richtárik (2020) proposed a new formulation for training personalized FL models, aimed at balancing the trade-off between the traditional global model and the local models that could be trained by individual devices using their private data only.

Personalized Federated Learning

Rethinking gradient sparsification as total error minimization

no code implementations • NeurIPS 2021 • Atal Narayan Sahu, Aritra Dutta, Ahmed M. Abdelmoniem, Trambak Banerjee, Marco Canini, Panos Kalnis

We show that, unlike the Top-$k$ sparsifier, hard-threshold has the same asymptotic convergence and linear speedup property as SGD in the convex case, and is unaffected by data heterogeneity in the non-convex case.
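A quick sketch of the two sparsifiers being compared (illustrative implementations, not the paper's code): Top-$k$ keeps a fixed number of the largest-magnitude gradient entries, while hard-threshold keeps every entry whose magnitude exceeds a fixed $\lambda$, so the number of transmitted entries adapts to the gradient.

```python
import numpy as np

def topk(g, k):
    """Keep the k largest-magnitude entries of g; zero out the rest."""
    out = np.zeros_like(g)
    idx = np.argpartition(np.abs(g), -k)[-k:]  # indices of k largest |g_i|
    out[idx] = g[idx]
    return out

def hard_threshold(g, lam):
    """Keep every entry with |g_i| >= lam; zero out the rest."""
    return np.where(np.abs(g) >= lam, g, 0.0)

g = np.array([0.9, -0.05, 0.4, -0.7, 0.01])
print(topk(g, 2))              # keeps 0.9 and -0.7
print(hard_threshold(g, 0.3))  # keeps 0.9, 0.4, and -0.7
```

The design difference is exactly the trade-off in the title: Top-$k$ fixes the communication budget per step, while hard-threshold fixes the per-entry error and lets the budget vary.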

DeepReduce: A Sparse-tensor Communication Framework for Federated Deep Learning

1 code implementation • NeurIPS 2021 • Hang Xu, Kelly Kostopoulou, Aritra Dutta, Xin Li, Alexandros Ntoulas, Panos Kalnis

DeepReduce is orthogonal to existing gradient sparsifiers and can be applied in conjunction with them, transparently to the end-user, to significantly lower the communication overhead.

DeepReduce: A Sparse-tensor Communication Framework for Distributed Deep Learning

1 code implementation • NeurIPS 2021 • Kelly Kostopoulou, Hang Xu, Aritra Dutta, Xin Li, Alexandros Ntoulas, Panos Kalnis

This paper introduces DeepReduce, a versatile framework for the compressed communication of sparse tensors, tailored for distributed deep learning.
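The core idea of communicating a sparse tensor as separate index and value streams can be sketched as follows (an illustrative encoding, not DeepReduce's actual wire format): each stream can then be compressed independently with a method suited to it.

```python
import numpy as np

# Illustrative sparse-tensor transport: decouple nonzero indices from values.
def encode(t):
    idx = np.flatnonzero(t)                      # positions of nonzeros
    return idx.astype(np.int32), t.flat[idx].astype(np.float32), t.shape

def decode(idx, vals, shape):
    out = np.zeros(int(np.prod(shape)), dtype=np.float32)
    out[idx] = vals
    return out.reshape(shape)

t = np.zeros((4, 4), dtype=np.float32)
t[0, 1], t[2, 3] = 0.5, -1.25
idx, vals, shape = encode(t)
assert np.array_equal(decode(idx, vals, shape), t)  # lossless round trip
```

Shipping two dense arrays (int32 indices, float32 values) instead of the full tensor already cuts traffic when the tensor is sparse; a framework like DeepReduce layers further compressors on top of each stream.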

On the Discrepancy between the Theoretical Analysis and Practical Implementations of Compressed Communication for Distributed Deep Learning

1 code implementation • 19 Nov 2019 • Aritra Dutta, El Houcine Bergou, Ahmed M. Abdelmoniem, Chen-Yu Ho, Atal Narayan Sahu, Marco Canini, Panos Kalnis

Compressed communication, in the form of sparsification or quantization of stochastic gradients, is employed to reduce communication costs in distributed data-parallel training of deep neural networks.

Model Compression • Quantization

Direct Nonlinear Acceleration

1 code implementation • 28 May 2019 • Aritra Dutta, El Houcine Bergou, Yunming Xiao, Marco Canini, Peter Richtárik

In contrast to RNA which computes extrapolation coefficients by (approximately) setting the gradient of the objective function to zero at the extrapolated point, we propose a more direct approach, which we call direct nonlinear acceleration (DNA).

Best Pair Formulation & Accelerated Scheme for Non-convex Principal Component Pursuit

no code implementations • 25 May 2019 • Aritra Dutta, Filip Hanzely, Jingwei Liang, Peter Richtárik

The best pair problem aims to find a pair of points that minimize the distance between two disjoint sets.
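For intuition, a convex toy version of the best pair problem (the paper treats nonconvex sets): alternating nearest-point projections between two disjoint unit balls converge to a pair of closest points, one in each set.

```python
import numpy as np

def project_ball(p, center, r=1.0):
    """Nearest point to p in the ball of radius r around center."""
    d = p - center
    n = np.linalg.norm(d)
    return p if n <= r else center + r * d / n

# Two disjoint unit balls centered at (0,0) and (4,0); closest pair
# is (1,0) and (3,0), at distance 2.
c1, c2 = np.array([0.0, 0.0]), np.array([4.0, 0.0])
x = np.array([0.0, 1.0])  # start somewhere in the first ball
for _ in range(50):
    y = project_ball(x, c2)  # nearest point to x in the second set
    x = project_ball(y, c1)  # nearest point to y in the first set

print(x, y, np.linalg.norm(x - y))
```

For convex sets this alternating scheme is classical; the paper's contribution is a formulation and accelerated scheme that handle the nonconvex case.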

A Nonconvex Projection Method for Robust PCA

no code implementations • 21 May 2018 • Aritra Dutta, Filip Hanzely, Peter Richtárik

Robust principal component analysis (RPCA) is a well-studied problem with the goal of decomposing a matrix into the sum of low-rank and sparse components.
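A naive alternating sketch of the decomposition $M \approx L + S$ (for illustration only; this is not the paper's projection method): alternate a rank-$r$ truncated SVD for the low-rank part with a hard-threshold for the sparse part. The toy data, rank, and threshold below are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
u, v = rng.normal(size=(20, 1)), rng.normal(size=(1, 20))
L_true = u @ v                                    # rank-1 component
S_true = np.zeros((20, 20))
S_true[rng.integers(0, 20, 10), rng.integers(0, 20, 10)] = 5.0  # sparse spikes
M = L_true + S_true

L, S = np.zeros_like(M), np.zeros_like(M)
for _ in range(20):
    # low-rank update: best rank-1 approximation of M - S
    U, sig, Vt = np.linalg.svd(M - S, full_matrices=False)
    L = sig[0] * np.outer(U[:, 0], Vt[0])
    # sparse update: keep only large residual entries
    S = np.where(np.abs(M - L) > 2.0, M - L, 0.0)

print("residual:", np.linalg.norm(M - L - S))
```

Each step enforces one of the two structural constraints exactly (rank for $L$, sparsity for $S$); the paper develops a principled nonconvex projection method with guarantees in place of this heuristic alternation.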

Face Detection • Shadow Removal

Weighted Low-Rank Approximation of Matrices and Background Modeling

no code implementations • 15 Apr 2018 • Aritra Dutta, Xin Li, Peter Richtárik

We primarily study a special weighted low-rank approximation of matrices and then apply it to the background modeling problem.

Online and Batch Supervised Background Estimation via L1 Regression

no code implementations • 23 Nov 2017 • Aritra Dutta, Peter Richtárik

We propose a surprisingly simple model for supervised video background estimation.

Regression

Weighted Low Rank Approximation for Background Estimation Problems

no code implementations • 4 Jul 2017 • Aritra Dutta, Xin Li

Classical principal component analysis (PCA) is not robust to the presence of sparse outliers in the data.

A Batch-Incremental Video Background Estimation Model using Weighted Low-Rank Approximation of Matrices

no code implementations • 2 Jul 2017 • Aritra Dutta, Xin Li, Peter Richtárik

Principal component pursuit (PCP) is a state-of-the-art approach for background estimation problems.
