Search Results for author: Aditya Devarakonda

Found 6 papers, 2 with code

Sequential and Shared-Memory Parallel Algorithms for Partitioned Local Depths

no code implementations · 31 Jul 2023 · Aditya Devarakonda, Grey Ballard

In this work, we design, analyze, and optimize sequential and shared-memory parallel algorithms for partitioned local depths (PaLD).

Avoiding Communication in Logistic Regression

no code implementations · 16 Nov 2020 · Aditya Devarakonda, James Demmel

Stochastic gradient descent (SGD) is one of the most widely used optimization methods for solving various machine learning problems.

Task: regression
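The abstract above refers to SGD applied to logistic regression. As a minimal illustrative sketch only (the synthetic data, hyperparameters, and function name below are my own assumptions, not taken from the paper), plain mini-batch SGD for logistic regression looks like this:

```python
import numpy as np

def sgd_logistic(X, y, lr=0.1, epochs=50, batch=8, seed=0):
    """Mini-batch SGD for unregularized logistic regression (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        idx = rng.permutation(n)          # reshuffle each epoch
        for start in range(0, n, batch):
            b = idx[start:start + batch]
            p = 1.0 / (1.0 + np.exp(-X[b] @ w))       # predicted probabilities
            w -= lr * X[b].T @ (p - y[b]) / len(b)    # averaged gradient step
    return w

# Tiny synthetic problem: labels determined by the sign of the first feature.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
y = (X[:, 0] > 0).astype(float)
w = sgd_logistic(X, y)
acc = np.mean((1 / (1 + np.exp(-X @ w)) > 0.5) == y)
```

Note this is the classical per-iteration formulation; the paper's contribution is restructuring such iterations to reduce communication, which this sketch does not show.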

Avoiding Synchronization in First-Order Methods for Sparse Convex Optimization

no code implementations · 17 Dec 2017 · Aditya Devarakonda, Kimon Fountoulakis, James Demmel, Michael W. Mahoney

Parallel computing has played an important role in speeding up convex optimization methods for big data analytics and large-scale machine learning (ML).

AdaBatch: Adaptive Batch Sizes for Training Deep Neural Networks

1 code implementation · 6 Dec 2017 · Aditya Devarakonda, Maxim Naumov, Michael Garland

Training deep neural networks with Stochastic Gradient Descent, or its variants, requires careful choice of both learning rate and batch size.

Task: Computational Efficiency
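The core idea in the paper above is growing the batch size adaptively during training rather than only decaying the learning rate. A minimal sketch of one common variant of that idea (doubling the batch size at fixed intervals; the specific schedule, defaults, and function name here are assumptions for illustration, not the paper's exact method):

```python
def adaptive_batch_schedule(base_batch=32, epochs=12, double_every=4, max_batch=256):
    """Return (epoch, batch_size) pairs, doubling the batch size
    every `double_every` epochs, capped at `max_batch`."""
    schedule = []
    batch = base_batch
    for epoch in range(epochs):
        if epoch > 0 and epoch % double_every == 0:
            batch = min(batch * 2, max_batch)   # grow batch instead of shrinking lr
        schedule.append((epoch, batch))
    return schedule

sched = adaptive_batch_schedule()
```

Larger late-training batches keep the effective gradient noise shrinking, playing a role analogous to learning-rate decay while improving hardware utilization.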

Avoiding Communication in Proximal Methods for Convex Optimization Problems

no code implementations · 24 Oct 2017 · Saeed Soori, Aditya Devarakonda, James Demmel, Mert Gurbuzbalaban, Maryam Mehri Dehnavi

We formulate the algorithm for two different optimization methods on the Lasso problem and show that the latency cost is reduced by a factor of k while bandwidth and floating-point operation costs remain the same.
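The factor-of-k latency saving comes from unrolling k iterations so that the inner products they need are computed in one batched reduction instead of k separate ones. Below is a rough serial sketch of that pattern for k least-squares coordinate updates (NumPy matrix products standing in for the distributed reductions; the function name, damping step, and omission of the Lasso soft-thresholding prox are my simplifications, not the paper's algorithm):

```python
import numpy as np

def k_step_updates(A, r, coords, w, step=0.5):
    """Apply k coordinate gradient updates whose inner products come from
    one batched computation (one 'communication') instead of k separate ones."""
    S = A[:, coords]               # columns touched by the next k updates
    G = S.T @ S                    # k x k Gram block: one batched reduction
    g = S.T @ r                    # k residual inner products, same reduction round
    delta = np.zeros(len(coords))
    for j in range(len(coords)):
        # each update uses only precomputed quantities: no new pass over A
        grad = g[j] - G[j] @ delta
        delta[j] += step * grad / G[j, j]
    w = w.copy()
    w[coords] += delta
    return w, r - S @ delta        # updated iterate and residual

# Demo: three coordinate updates on a random least-squares problem
rng = np.random.default_rng(0)
A = rng.normal(size=(50, 6))
b = A @ rng.normal(size=6)
w, r = k_step_updates(A, b.copy(), [0, 1, 2], np.zeros(6))
```

The k inner updates are algebraically identical to k sequential coordinate steps, which is why flop and bandwidth costs are unchanged while latency drops by a factor of k.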

Matrix Factorization at Scale: a Comparison of Scientific Data Analytics in Spark and C+MPI Using Three Case Studies

1 code implementation · 5 Jul 2016 · Alex Gittens, Aditya Devarakonda, Evan Racah, Michael Ringenburg, Lisa Gerhardt, Jey Kottalam, Jialin Liu, Kristyn Maschhoff, Shane Canon, Jatin Chhugani, Pramod Sharma, Jiyan Yang, James Demmel, Jim Harrell, Venkat Krishnamurthy, Michael W. Mahoney, Prabhat

We explore the trade-offs of performing linear algebra using Apache Spark, compared to traditional C and MPI implementations on HPC platforms.

Subjects: Distributed, Parallel, and Cluster Computing (ACM classes G.1.3; C.2.4)
