Distributed Computing

43 papers with code • 0 benchmarks • 1 dataset

Greatest papers with code

Optuna: A Next-generation Hyperparameter Optimization Framework

optuna/optuna 25 Jul 2019

We present the design techniques that became necessary in developing software that meets the above criteria, and demonstrate the power of our new design through experimental results and real-world applications.

Distributed Computing Hyperparameter Optimization
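
As a minimal sketch of the define-by-run interface this paper introduces (the quadratic objective is a toy stand-in for a real training loop):

```python
import optuna

# Define-by-run objective: the search space is declared inline via the trial.
def objective(trial):
    x = trial.suggest_float("x", -10.0, 10.0)
    return (x - 2.0) ** 2

study = optuna.create_study(direction="minimize")
study.optimize(objective, n_trials=100)
print(study.best_params)
```

Pointing several such processes at a shared storage backend (e.g. `optuna.create_study(storage="sqlite:///study.db", study_name="shared", load_if_exists=True)`) is how a single study is distributed across workers.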

MMLSpark: Unifying Machine Learning Ecosystems at Massive Scales

Azure/mmlspark 20 Oct 2018

We introduce Microsoft Machine Learning for Apache Spark (MMLSpark), an ecosystem of enhancements that expand the Apache Spark distributed computing library to tackle problems in Deep Learning, Micro-Service Orchestration, Gradient Boosting, Model Interpretability, and other areas of modern computation.

Distributed Computing Object Detection
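
A hedged sketch of what using the library looks like from PySpark; the import path has shifted across releases (the project later became SynapseML), so treat it as an assumption:

```python
from pyspark.sql import SparkSession
from mmlspark.lightgbm import LightGBMClassifier  # path varies by release

spark = SparkSession.builder.getOrCreate()
train = spark.read.parquet("train.parquet")  # hypothetical DataFrame with
                                             # "features" vector and "label" columns

# LightGBM exposed as a standard SparkML estimator, so gradient boosting
# trains in a distributed fashion across the cluster.
model = LightGBMClassifier(featuresCol="features", labelCol="label").fit(train)
scored = model.transform(train)
```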

Flexible and Scalable Deep Learning with MMLSpark

Azure/mmlspark 11 Apr 2018

In this work we detail a novel open-source library, MMLSpark, that combines the flexible deep learning library Cognitive Toolkit (CNTK) with the distributed computing framework Apache Spark.

Distributed Computing
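
The core idea, wrapping a CNTK network as a SparkML Transformer so inference parallelizes across executors, can be sketched as follows; the exact setter signatures are assumptions, as they changed between releases:

```python
from pyspark.sql import SparkSession
from mmlspark import CNTKModel  # import path is an assumption

spark = SparkSession.builder.getOrCreate()
images = spark.read.parquet("images.parquet")  # hypothetical featurized images

# A pre-trained CNTK network wrapped as a SparkML Transformer: each executor
# loads the model once and scores its partition of the DataFrame.
model = (CNTKModel()
         .setModelLocation("ResNet50_ImageNet.model")  # hypothetical model file
         .setInputCol("features")
         .setOutputCol("scores"))
scored = model.transform(images)
```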

Billion-scale Network Embedding with Iterative Random Projection

benedekrozemberczki/karateclub 7 May 2018

Network embedding, which learns low-dimensional vector representation for nodes in the network, has attracted considerable research attention recently.

Distributed Computing Link Prediction +2
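
The linked repository implements the paper's method (RandNE) behind karateclub's standard fit/embedding interface; a small example on a synthetic graph:

```python
import networkx as nx
from karateclub import RandNE

# Toy graph with integer node labels 0..n-1, as karateclub expects.
graph = nx.newman_watts_strogatz_graph(100, 10, 0.05)

model = RandNE()                    # iterative random projection embedder
model.fit(graph)
embedding = model.get_embedding()   # numpy array, one row per node
print(embedding.shape)
```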

Fiber: A Platform for Efficient Development and Distributed Training for Reinforcement Learning and Population-Based Methods

uber/fiber 25 Mar 2020

Recent advances in machine learning are consistently enabled by increasing amounts of computation.

Distributed Computing
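
Fiber's headline design choice is reusing Python's multiprocessing API, so pool-based code scales from a laptop to a cluster; a minimal sketch (the workload is a placeholder):

```python
import fiber

def evaluate(seed):
    # Placeholder for an expensive rollout or training job.
    return seed * seed

if __name__ == "__main__":
    # Mirrors multiprocessing.Pool; on a cluster backend each worker
    # runs as its own scheduled job rather than a local process.
    pool = fiber.Pool(processes=4)
    print(pool.map(evaluate, range(16)))
```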

MALib: A Parallel Framework for Population-based Multi-agent Reinforcement Learning

sjtu-marl/malib 5 Jun 2021

Our framework comprises three key components: (1) a centralized task-dispatching model that supports self-generated tasks and scalable training with heterogeneous policy combinations; (2) a programming architecture named Actor-Evaluator-Learner, which achieves high parallelism for both training and sampling and meets the evaluation requirements of auto-curriculum learning; and (3) a higher-level abstraction of MARL training paradigms that enables efficient code reuse and flexible deployment on different distributed computing paradigms.

Atari Games Curriculum Learning +2
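
A hypothetical, framework-agnostic sketch of the Actor-Evaluator-Learner split described above, written with plain multiprocessing; none of these names are MALib's actual API:

```python
from multiprocessing import Process, Queue

def actor(task_q, sample_q):
    # Actors roll out the policy combination named in each dispatched task.
    for task in iter(task_q.get, None):
        sample_q.put((task, f"trajectory-for-{task}"))
    sample_q.put(None)

def learner(sample_q, result_q):
    # Learners consume samples and update parameters, decoupled from sampling.
    for task, _trajectory in iter(sample_q.get, None):
        result_q.put(f"updated-policy:{task}")

if __name__ == "__main__":
    task_q, sample_q, result_q = Queue(), Queue(), Queue()
    Process(target=actor, args=(task_q, sample_q)).start()
    Process(target=learner, args=(sample_q, result_q)).start()
    # The centralized dispatcher generates self-play tasks; an evaluator
    # would turn results into new tasks, closing the auto-curriculum loop.
    for task in ["A-vs-B", "A-vs-C"]:
        task_q.put(task)
    task_q.put(None)
    for _ in range(2):
        print(result_q.get())
```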

A System for Massively Parallel Hyperparameter Tuning

c-bata/goptuna ICLR 2018

Modern learning models are characterized by large hyperparameter spaces and long training times.

Distributed Computing Hyperparameter Optimization
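
The asynchronous successive halving rule at the heart of this paper fits in a few lines; the rung bookkeeping below is an illustrative assumption, not the linked implementation:

```python
# rungs maps rung index -> list of (loss, config) results observed so far.
def promotable(rungs, eta=3):
    """Return (rung, config) to promote, or None to start a fresh config."""
    for k in sorted(rungs, reverse=True):              # prefer higher rungs
        results = sorted(rungs[k])                     # ascending loss
        top = results[: len(results) // eta]           # top 1/eta at this rung
        promoted = {cfg for _, cfg in rungs.get(k + 1, [])}
        for loss, cfg in top:
            if cfg not in promoted:
                return k, cfg                          # promote immediately
    return None                                        # else sample a new config
```

The asynchrony is the point: a configuration is promoted as soon as it is in the top 1/eta of its rung, instead of waiting for the rung to fill up.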

MANGO: A Python Library for Parallel Hyperparameter Tuning

ARM-software/mango 22 May 2020

Tuning hyperparameters for machine learning algorithms is a tedious task, one that is typically done manually.

Distributed Computing Distributed Optimization +1
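
A minimal example in the library's Tuner/scheduler style, assuming the README-era API (the quadratic objective is a toy):

```python
from mango import Tuner, scheduler

param_space = dict(x=range(-10, 10))     # mango's dictionary search space

@scheduler.serial                        # scheduler.parallel fans evaluations out
def objective(x):
    return -((x - 2) ** 2)               # maximize the negated quadratic

tuner = Tuner(param_space, objective)
results = tuner.maximize()
print(results["best_params"], results["best_objective"])
```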

Distributed Deep Neural Networks over the Cloud, the Edge and End Devices

kunglab/ddnn 6 Sep 2017

In our experiments, compared with the traditional approach of offloading raw sensor data to the cloud for processing, DDNN processes most sensor data locally on end devices while maintaining high accuracy, reducing communication cost by a factor of more than 20.

Distributed Computing Object Recognition +1
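
A hypothetical sketch of the early-exit rule that enables this saving: the end device answers locally when its prediction entropy is low and offloads only the hard cases. All names here are illustrative, not the kunglab/ddnn code:

```python
import numpy as np

def entropy(p):
    p = np.clip(p, 1e-12, 1.0)
    return float(-np.sum(p * np.log(p)))

def classify(sample, local_model, cloud_model, threshold=0.8):
    probs = local_model(sample)           # cheap on-device early exit
    if entropy(probs) < threshold:
        return int(np.argmax(probs))      # confident: answer locally
    # Uncertain: escalate to the cloud (DDNN would send intermediate
    # features rather than raw sensor data, hence the communication saving).
    return int(np.argmax(cloud_model(sample)))

if __name__ == "__main__":
    local = lambda s: np.array([0.96, 0.02, 0.02])   # stand-in models
    cloud = lambda s: np.array([0.10, 0.80, 0.10])
    print(classify(None, local, cloud))              # exits locally -> 0
```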