Distributed Computing
88 papers with code • 0 benchmarks • 1 dataset
Benchmarks
These leaderboards are used to track progress in Distributed Computing
Libraries
Use these libraries to find Distributed Computing models and implementations
Most implemented papers
Optuna: A Next-generation Hyperparameter Optimization Framework
We present the design techniques that became necessary in the development of software that meets the above criteria, and demonstrate the power of our new design through experimental results and real-world applications.
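For a sense of the define-by-run interface the framework is known for, here is a minimal usage sketch; the quadratic objective and search range are placeholders:

```python
import optuna

def objective(trial):
    # The search space is declared imperatively ("define-by-run") inside the objective.
    x = trial.suggest_float("x", -10.0, 10.0)
    return (x - 2.0) ** 2  # toy objective; replace with a real validation metric

# Minimize the objective over 50 trials and inspect the best hyperparameters.
study = optuna.create_study(direction="minimize")
study.optimize(objective, n_trials=50)
print(study.best_params)
```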
A System for Massively Parallel Hyperparameter Tuning
Modern learning models are characterized by large hyperparameter spaces and long training times.
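The system builds on asynchronous successive halving (ASHA). The sketch below illustrates the promotion rule under assumed data structures (per-rung loss dictionaries and a reduction factor eta); it is not the authors' implementation:

```python
# Illustrative ASHA-style promotion check: a configuration is promoted to the next
# rung when it ranks in the top 1/eta of results recorded at its current rung.
def get_job(rungs, eta=3):
    """rungs: list of dicts {config_id: loss}, rung 0 = least resources, lower loss is better.
    Returns (config_id, next_rung) for a promotable config, or None to sample a new one."""
    for k in reversed(range(len(rungs) - 1)):        # prefer promotions at higher rungs
        finished = rungs[k]
        already_promoted = set(rungs[k + 1])
        n_promote = len(finished) // eta
        top = sorted(finished, key=finished.get)[:n_promote]
        for cfg in top:
            if cfg not in already_promoted:
                return cfg, k + 1
    return None  # no promotion possible: caller samples a fresh config at rung 0
```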
FedML: A Research Library and Benchmark for Federated Machine Learning
Federated learning (FL) is a rapidly growing research field in machine learning.
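As context for what such a library coordinates, below is a generic FedAvg aggregation step; this is not FedML's API, just the weighted average of client model weights that federated training repeats each round:

```python
import numpy as np

# Generic FedAvg aggregation (illustrative, not FedML's API). Each client returns
# its locally trained weights (a list of arrays) and its number of local samples.
def fedavg(client_weights, client_sizes):
    total = sum(client_sizes)
    avg = [np.zeros_like(w) for w in client_weights[0]]
    for weights, n in zip(client_weights, client_sizes):
        for layer, w in enumerate(weights):
            avg[layer] += (n / total) * w     # size-weighted average per layer
    return avg
```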
Residual-INR: Communication Efficient On-Device Learning Using Implicit Neural Representation
However, as the scale of the edge computing system grows, communication among devices becomes the bottleneck, because the limited bandwidth of wireless communication leads to large data-transfer latency.
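For readers unfamiliar with implicit neural representations, the sketch below is a generic coordinate-MLP INR that maps pixel coordinates to intensities, so an image can be shipped as network weights; it is not the Residual-INR architecture itself:

```python
import torch
import torch.nn as nn

# Generic implicit neural representation: a small MLP from (x, y) coordinates in
# [0, 1]^2 to a pixel intensity. Transmitting the weights stands in for the image.
class TinyINR(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, coords):  # coords: (N, 2) tensor of pixel coordinates
        return self.net(coords)
```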
CoCoA: A General Framework for Communication-Efficient Distributed Optimization
The scale of modern datasets necessitates the development of efficient distributed optimization methods for machine learning.
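The sketch below shows the local-solve-then-aggregate pattern that communication-efficient frameworks such as CoCoA generalize, using a toy least-squares objective; it is not the CoCoA dual subproblem formulation:

```python
import numpy as np

# Each worker does many cheap local updates on its own data partition; only the
# resulting parameter update crosses the network, once per round (illustrative).
def round_of_training(w, partitions, local_steps=10, lr=0.1):
    deltas = []
    for X, y in partitions:                      # one (X, y) block per worker
        w_local = w.copy()
        for _ in range(local_steps):             # local work, no communication
            grad = X.T @ (X @ w_local - y) / len(y)
            w_local -= lr * grad
        deltas.append(w_local - w)               # only the update is communicated
    return w + np.mean(deltas, axis=0)           # single aggregation per round
```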
Billion-scale Network Embedding with Iterative Random Projection
Network embedding, which learns low-dimensional vector representation for nodes in the network, has attracted considerable research attention recently.
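A rough sketch of embedding by iterative random projection follows; the combination weights and normalization are illustrative assumptions, not the paper's exact scheme:

```python
import numpy as np

# Project a Gaussian random matrix through successive powers of the adjacency
# matrix and combine the results. `adj` can be a dense array or a scipy.sparse
# matrix; only sparse matrix-matrix products are needed, so this scales well.
def random_projection_embedding(adj, dim=128, order=3,
                                weights=(1.0, 1.0, 0.1, 0.01), seed=0):
    rng = np.random.default_rng(seed)
    n = adj.shape[0]
    U = rng.standard_normal((n, dim)) / np.sqrt(dim)  # U_0: random projection
    emb = weights[0] * U
    for i in range(1, order + 1):
        U = adj @ U                                   # U_i = A @ U_{i-1}
        emb += weights[i] * U
    return emb
```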
Orchestral: a lightweight framework for parallel simulations of cell-cell communication
Using operator splitting, we decouple the simulation of reaction-diffusion kinetics inside the cells from the simulation of molecular cell-cell interactions occurring on the boundaries between cells.
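The decoupling can be pictured as a simple Lie-splitting loop; the two sub-steps below are toy placeholders (intracellular decay and nearest-neighbour exchange along a 1-D chain of cells), not Orchestral's solvers or API:

```python
# Toy operator-splitting loop: alternate an independent per-cell step with a
# coupling step on the boundaries. The per-cell steps are trivially parallel.
def step_intracellular(state, dt, k=0.5):
    return state - k * state * dt                    # placeholder reaction step

def step_membrane_exchange(states, dt, d=0.1):
    new = states[:]
    for i in range(len(states) - 1):                 # placeholder boundary exchange
        flux = d * (states[i] - states[i + 1]) * dt
        new[i] -= flux
        new[i + 1] += flux
    return new

def simulate(states, t_end, dt):
    t = 0.0
    while t < t_end:
        states = [step_intracellular(s, dt) for s in states]   # decoupled, parallelizable
        states = step_membrane_exchange(states, dt)            # cell-cell coupling
        t += dt
    return states
```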
Distributed Bayesian Matrix Decomposition for Big Data Mining and Clustering
Such a method should scale up well, model the heterogeneous noise, and address the communication issue in a distributed system.
Data-Juicer: A One-Stop Data Processing System for Large Language Models
A data recipe is a mixture of data from different sources for training LLMs, which plays a vital role in LLMs' performance.
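A data recipe can be pictured as a weighted mixture over sources; the dictionary format and sampler below are illustrative assumptions, not Data-Juicer's configuration schema:

```python
import random

# A toy "recipe": mixture weights over data sources, and a sampler that decides
# which source the next training document is drawn from.
recipe = {"web_crawl": 0.6, "code": 0.25, "books": 0.15}

def sample_source(recipe, rng=random.Random(0)):
    sources, weights = zip(*recipe.items())
    return rng.choices(sources, weights=weights, k=1)[0]

next_source = sample_source(recipe)   # e.g. "web_crawl" with probability 0.6
```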
Scalable Agent-Based Modeling for Complex Financial Market Simulations
To the best of our knowledge, this study is the first to implement multiple assets, parallel agent decision-making, a continuous double auction mechanism, and intelligent agent types in a scalable real-time environment.
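The continuous double auction component can be pictured with a minimal price-priority order book; this single-asset sketch is illustrative, not the paper's implementation:

```python
import heapq

# Minimal continuous double auction: orders rest in bid/ask heaps and trades
# execute whenever the best bid crosses the best ask (price priority only).
class OrderBook:
    def __init__(self):
        self.bids = []   # max-heap via negated prices: (-price, qty)
        self.asks = []   # min-heap: (price, qty)

    def submit(self, side, price, qty):
        if side == "buy":
            heapq.heappush(self.bids, (-price, qty))
        else:
            heapq.heappush(self.asks, (price, qty))
        return self._match()

    def _match(self):
        trades = []
        while self.bids and self.asks and -self.bids[0][0] >= self.asks[0][0]:
            bid_price, bid_qty = heapq.heappop(self.bids)
            ask_price, ask_qty = heapq.heappop(self.asks)
            traded = min(bid_qty, ask_qty)
            trades.append((ask_price, traded))        # trade at the ask (simplification)
            if bid_qty > traded:
                heapq.heappush(self.bids, (bid_price, bid_qty - traded))
            if ask_qty > traded:
                heapq.heappush(self.asks, (ask_price, ask_qty - traded))
        return trades
```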