Search Results for author: Can Karakus

Found 9 papers, 1 paper with code

MADA: Meta-Adaptive Optimizers through hyper-gradient Descent

no code implementations17 Jan 2024 Kaan Ozkara, Can Karakus, Parameswaran Raman, Mingyi Hong, Shoham Sabach, Branislav Kveton, Volkan Cevher

Since Adam was introduced, several novel adaptive optimizers for deep learning have been proposed.
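The title refers to hyper-gradient descent, i.e. adapting an optimizer's own hyperparameters with gradient information. Below is a minimal NumPy sketch of that general idea (not the MADA meta-optimizer itself; the objective and step sizes are illustrative assumptions):

```python
import numpy as np

# Illustrative sketch of hyper-gradient descent on the learning rate
# (not the MADA meta-optimizer): the step size alpha is itself updated
# using the gradient of the loss with respect to alpha.

def loss_grad(w):
    # toy quadratic objective: f(w) = 0.5 * ||w||^2
    return w

w = np.ones(10)
alpha = 0.01            # inner learning rate (adapted online)
beta = 1e-4             # hyper-gradient step size (assumed value)
prev_grad = np.zeros_like(w)

for step in range(100):
    g = loss_grad(w)
    # d(loss)/d(alpha) = -g_t . g_{t-1}, so gradient descent on alpha
    # becomes alpha += beta * g_t . g_{t-1}
    alpha += beta * np.dot(g, prev_grad)
    w -= alpha * g
    prev_grad = g

print(f"final loss {0.5 * np.dot(w, w):.6f}, learned alpha {alpha:.4f}")
```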

Amazon SageMaker Model Parallelism: A General and Flexible Framework for Large Model Training

no code implementations10 Nov 2021 Can Karakus, Rahul Huilgol, Fei Wu, Anirudh Subramanian, Cade Daniel, Derya Cavdar, Teng Xu, Haohan Chen, Arash Rahnama, Luis Quintela

In contrast to existing solutions, the implementation of the SageMaker library is much more generic and flexible, in that it can automatically partition and run pipeline parallelism over arbitrary model architectures with minimal code change, and also offers a general and extensible framework for tensor parallelism, which supports a wider range of use cases, and is modular enough to be easily applied to new training scripts.

Collaborative Filtering
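The SageMaker library's own API is not reproduced here; the following is a minimal, framework-agnostic sketch of the pipeline-parallel pattern the abstract describes, in which a model is partitioned into sequential stages and a mini-batch is split into micro-batches that flow through them. All names and dimensions are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch (not the SageMaker model-parallel API): the model is
# split into stages, each of which would live on a different device, and a
# mini-batch is divided into micro-batches that are pushed through the
# stages. Here the stages run sequentially for clarity.

def make_stage(in_dim, out_dim, rng):
    W = rng.standard_normal((in_dim, out_dim)) * 0.1
    return lambda x: np.maximum(x @ W, 0.0)   # linear layer + ReLU

rng = np.random.default_rng(0)
stages = [make_stage(32, 64, rng), make_stage(64, 64, rng), make_stage(64, 8, rng)]

batch = rng.standard_normal((16, 32))
micro_batches = np.array_split(batch, 4)       # 4 micro-batches

outputs = []
for mb in micro_batches:
    act = mb
    for stage in stages:                        # forward through the pipeline
        act = stage(act)
    outputs.append(act)

result = np.concatenate(outputs)
print(result.shape)   # (16, 8)
```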

Qsparse-local-SGD: Distributed SGD with Quantization, Sparsification, and Local Computations

no code implementations6 Jun 2019 Debraj Basu, Deepesh Data, Can Karakus, Suhas Diggavi

Communication bottleneck has been identified as a significant issue in distributed optimization of large-scale learning models.

Distributed Optimization, Quantization
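A minimal sketch of the scheme named in the title, assuming simple top-k sparsification, uniform quantization, and a few local SGD steps per communication round; this is illustrative only, not the authors' reference implementation.

```python
import numpy as np

# Illustrative Qsparse-local-SGD-style loop: workers take H local SGD steps,
# then send a sparsified and quantized model update to be averaged.

def top_k(v, k):
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

def quantize(v, levels=16):
    # simple uniform quantizer on the nonzero entries
    scale = np.max(np.abs(v)) or 1.0
    return np.round(v / scale * levels) / levels * scale

def local_grad(w, data):
    X, y = data
    return X.T @ (X @ w - y) / len(y)          # least-squares gradient

rng = np.random.default_rng(0)
d, n_workers, H, lr, k = 50, 4, 5, 0.05, 5
datasets = [(rng.standard_normal((100, d)), rng.standard_normal(100))
            for _ in range(n_workers)]
w_global = np.zeros(d)

for round_ in range(20):
    updates = []
    for data in datasets:
        w = w_global.copy()
        for _ in range(H):                      # local computations
            w -= lr * local_grad(w, data)
        delta = w - w_global
        updates.append(quantize(top_k(delta, k)))   # sparsify + quantize
    w_global += np.mean(updates, axis=0)        # average compressed updates

print(np.linalg.norm(local_grad(w_global, datasets[0])))
```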

Differentially Private Consensus-Based Distributed Optimization

no code implementations19 Mar 2019 Mehrdad Showkatbakhsh, Can Karakus, Suhas Diggavi

Consensus-based optimization consists of a set of computational nodes arranged in a graph, each having a local objective that depends on their local data, where in every step nodes take a linear combination of their neighbors' messages, as well as taking a new gradient step.

Distributed Optimization
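A minimal sketch of the consensus step described in the abstract, with Gaussian noise added to the exchanged messages as a stand-in for the privacy mechanism; the graph, local objectives, and noise level are all assumed for illustration.

```python
import numpy as np

# Illustrative differentially private consensus gradient descent: each node
# mixes noisy neighbor iterates with a doubly stochastic weight matrix and
# then takes a local gradient step.

rng = np.random.default_rng(0)
n, d, lr, sigma = 4, 10, 0.05, 0.01   # sigma controls the privacy noise (assumed)

# ring-graph mixing matrix (doubly stochastic)
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = 0.25
    W[i, (i + 1) % n] = 0.25

# local quadratic objectives f_i(x) = 0.5 * ||x - b_i||^2
b = rng.standard_normal((n, d))
x = np.zeros((n, d))

for t in range(200):
    noisy = x + rng.normal(0.0, sigma, size=x.shape)   # privatized messages
    mixed = W @ noisy                                   # consensus step
    grads = mixed - b                                   # local gradients
    x = mixed - lr * grads                              # local gradient step

print("consensus error:", np.linalg.norm(x.mean(axis=0) - b.mean(axis=0)))
```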

Privacy-Utility Trade-off of Linear Regression under Random Projections and Additive Noise

no code implementations13 Feb 2019 Mehrdad Showkatbakhsh, Can Karakus, Suhas Diggavi

Data privacy is an important concern in machine learning, and is fundamentally at odds with the task of training useful learning models, which typically require the acquisition of large amounts of private user data.

BIG-bench Machine Learning, regression
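A minimal sketch of the setting the abstract describes, assuming Gaussian random projections and additive Gaussian noise applied to the data before fitting ordinary least squares; the dimensions and noise level are illustrative, and the paper's privacy analysis is not reproduced.

```python
import numpy as np

# Illustrative privacy mechanism for linear regression: release the data
# through a random projection plus additive noise, then fit on the
# perturbed data. Larger sigma means more privacy, less utility.

rng = np.random.default_rng(0)
n, d, m, sigma = 500, 20, 100, 0.5        # m < n: projection dimension

X = rng.standard_normal((n, d))
w_true = rng.standard_normal(d)
y = X @ w_true + 0.1 * rng.standard_normal(n)

P = rng.standard_normal((m, n)) / np.sqrt(m)            # random projection
X_priv = P @ X + sigma * rng.standard_normal((m, d))    # projected + noisy features
y_priv = P @ y + sigma * rng.standard_normal(m)

w_hat, *_ = np.linalg.lstsq(X_priv, y_priv, rcond=None)
print("estimation error:", np.linalg.norm(w_hat - w_true))
```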

Redundancy Techniques for Straggler Mitigation in Distributed Optimization and Learning

no code implementations14 Mar 2018 Can Karakus, Yifan Sun, Suhas Diggavi, Wotao Yin

Performance of distributed optimization and learning systems is bottlenecked by "straggler" nodes and slow communication links, which significantly delay computation.

Distributed Optimization, regression
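A minimal sketch of redundancy-based straggler mitigation, assuming plain replication of data partitions across workers (the paper's coding schemes are more refined): the full gradient can still be assembled when a straggling worker's result is dropped.

```python
import numpy as np

# Illustrative replication scheme: each data partition is held by two
# workers, so every partition's partial gradient is available even if one
# worker never responds.

rng = np.random.default_rng(0)
n, d, n_workers = 400, 10, 4
X = rng.standard_normal((n, d))
y = rng.standard_normal(n)
w = np.zeros(d)

partitions = np.array_split(np.arange(n), n_workers)
# worker i holds partitions i and (i + 1) mod n_workers
assignment = [(i, (i + 1) % n_workers) for i in range(n_workers)]

def partial_grad(part):
    Xp, yp = X[part], y[part]
    return Xp.T @ (Xp @ w - yp)

stragglers = {3}                       # pretend worker 3 never responds
collected = {}
for worker, parts in enumerate(assignment):
    if worker in stragglers:
        continue
    for p in parts:
        collected.setdefault(p, partial_grad(partitions[p]))

# every partition is still covered by at least one fast worker
full_grad = sum(collected[p] for p in range(n_workers)) / n
print("gradient norm:", np.linalg.norm(full_grad))
```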
