Many powerful machine learning models are based on the composition of
multiple processing layers, such as deep nets, which gives rise to nonconvex
objective functions. A general, recent approach to optimise such "nested"
functions is the method of auxiliary coordinates (MAC).
MAC introduces an
auxiliary coordinate for each data point in order to decouple the nested model
into independent submodels. This decomposes the optimisation into steps that
alternate between training single layers and updating the coordinates. It has
the advantage that it reuses existing single-layer algorithms, introduces
parallelism, and does not need to use chain-rule gradients, so it works with
nondifferentiable layers. In large-scale problems, or when distributing the
computation is necessary for faster training, the dataset may not fit on a
single machine. It is then essential to limit the amount of communication
between machines so it does not obliterate the benefit of parallelism. We
describe a general way to achieve this, ParMAC. ParMAC works on a cluster of
processing machines with a circular topology and alternates two steps until
convergence: one step trains the submodels in parallel using stochastic
updates, and the other trains the coordinates in parallel. Only submodel
parameters, no data or coordinates, are ever communicated between machines.
ParMAC has high parallelism and low communication overhead, and it facilitates
data shuffling, load balancing, fault tolerance and streaming data processing.
We study the convergence of ParMAC and propose a theoretical model of its
runtime and parallel speedup. We develop ParMAC to learn binary autoencoders
for fast, approximate image retrieval. We implement it in MPI on a distributed
system and demonstrate nearly perfect speedups on a 128-processor cluster with
a training set of 100 million high-dimensional points.
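
As a concrete illustration of the alternation described above, the following is
a minimal sketch in Python, not the paper's MPI implementation: it uses a toy
linear autoencoder with one auxiliary coordinate per point and simulates the
ring of machines with a list of data shards, so all dimensions, the penalty
weight mu and the learning rate are illustrative assumptions. In each epoch the
two submodels make a stochastic pass over every shard (the W-step hopping
around the ring), and then each shard's coordinates are re-solved locally (the
Z-step); only the submodel parameters ever move between shards.

    import numpy as np

    # Illustrative ParMAC-style alternation on a simulated ring of machines.
    # Toy nested model: linear autoencoder x ~= W2 (W1 x), decoupled by one
    # auxiliary coordinate z_n per point:
    #   min_{W1,W2,Z}  sum_n ||x_n - W2 z_n||^2 + mu ||z_n - W1 x_n||^2.
    rng = np.random.default_rng(0)
    D, H, N, P, mu = 8, 3, 4000, 4, 1.0   # dims, #points, #machines, penalty weight

    X = rng.standard_normal((N, D))
    shards = np.array_split(X, P)                  # each "machine" keeps only its shard...
    Z = [0.1 * s @ rng.standard_normal((D, H)) for s in shards]   # ...and its coordinates
    W1 = 0.1 * rng.standard_normal((H, D))         # encoder submodel (circulates on the ring)
    W2 = 0.1 * rng.standard_normal((D, H))         # decoder submodel (circulates on the ring)
    lr = 1e-3

    for epoch in range(20):
        # W-step: the submodels visit every machine in turn; only W1, W2 are communicated.
        for p in range(P):
            for x, z in zip(shards[p], Z[p]):      # stochastic updates on the local shard
                W1 -= lr * mu * np.outer(W1 @ x - z, x)
                W2 -= lr * np.outer(W2 @ z - x, z)
        # Z-step: each machine solves for its own coordinates locally, in parallel.
        A = W2.T @ W2 + mu * np.eye(H)             # same small HxH system for every point
        for p in range(P):
            rhs = shards[p] @ W2 + mu * shards[p] @ W1.T
            Z[p] = np.linalg.solve(A, rhs.T).T

    mse = np.mean([np.mean((s - Z[p] @ W2.T) ** 2) for p, s in enumerate(shards)])
    print(f"mean squared reconstruction error: {mse:.4f}")

In an actual ParMAC run the per-machine passes of the W-step execute
concurrently, with each submodel copy residing on a different machine at any
given time and moving to its ring neighbour after finishing a pass; the
sequential loop over p above collapses that concurrency into one process.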
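
A rough, back-of-envelope version of such a runtime and speedup model (an
illustrative sketch only, with hypothetical quantities $t_{\mathrm{comp}}$ and
$t_{\mathrm{comm}}$, not the paper's actual model): if one epoch costs
$t_{\mathrm{comp}}$ of computation in total, split evenly over the $P$
machines, and each of the roughly $P$ ring hops of the submodel parameters
costs $t_{\mathrm{comm}}$, then

\[
T(P) \;\approx\; \frac{t_{\mathrm{comp}}}{P} + P\,t_{\mathrm{comm}},
\qquad
S(P) \;=\; \frac{T(1)}{T(P)} \;\approx\; \frac{P}{1 + P^2\,t_{\mathrm{comm}}/t_{\mathrm{comp}}},
\]

so the speedup stays nearly linear in $P$ as long as communicating the small
submodel parameters remains cheap relative to the per-machine computation,
which is consistent with the near-perfect speedups reported above.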