Search Results for author: Celestine Dünner

Found 8 papers, 2 papers with code

Parallel training of linear models without compromising convergence

no code implementations • 5 Nov 2018 • Nikolas Ioannou, Celestine Dünner, Kornilios Kourtis, Thomas Parnell

The combined set of optimizations results in a consistent bottom-line convergence speedup of up to 12x over the initial asynchronous parallel training algorithm, and of up to 42x over state-of-the-art implementations (scikit-learn and H2O), across a range of multi-core CPU architectures.

A Distributed Second-Order Algorithm You Can Trust

no code implementations • ICML 2018 • Celestine Dünner, Aurelien Lucchi, Matilde Gargiani, An Bian, Thomas Hofmann, Martin Jaggi

Due to the rapid growth of data and computational resources, distributed optimization has become an active research area in recent years.

Distributed Optimization • Second-order methods

Efficient Use of Limited-Memory Accelerators for Linear Learning on Heterogeneous Systems

1 code implementation • NeurIPS 2017 • Celestine Dünner, Thomas Parnell, Martin Jaggi

We propose a generic algorithmic building block to accelerate training of machine learning models on heterogeneous compute systems.

BIG-bench Machine Learning

Large-Scale Stochastic Learning using GPUs

no code implementations • 22 Feb 2017 • Thomas Parnell, Celestine Dünner, Kubilay Atasu, Manolis Sifalakis, Haris Pozidis

In this work we propose an accelerated stochastic learning system for very large-scale applications.

Understanding and Optimizing the Performance of Distributed Machine Learning Applications on Apache Spark

no code implementations • 5 Dec 2016 • Celestine Dünner, Thomas Parnell, Kubilay Atasu, Manolis Sifalakis, Haralampos Pozidis

We begin by analyzing the characteristics of a state-of-the-art distributed machine learning algorithm implemented in Spark, and compare it to an equivalent reference implementation built on the high-performance computing framework MPI.

BIG-bench Machine Learning • Computational Efficiency

Scalable and interpretable product recommendations via overlapping co-clustering

1 code implementation • 7 Apr 2016 • Reinhard Heckel, Michail Vlachos, Thomas Parnell, Celestine Dünner

We consider the problem of generating interpretable recommendations by identifying overlapping co-clusters of clients and products, based only on positive or implicit feedback.

Clustering

Primal-Dual Rates and Certificates

no code implementations • 16 Feb 2016 • Celestine Dünner, Simone Forte, Martin Takáč, Martin Jaggi

We propose an algorithm-independent framework to equip existing optimization methods with primal-dual certificates.
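A primal-dual certificate bounds how far the current iterate is from optimal: the gap between the primal objective and a feasible dual objective upper-bounds the suboptimality, so it can be computed and checked during training. As a minimal sketch of this idea (not the paper's framework), the following assumes a toy ridge-regression problem and maps the primal iterate to a dual candidate via the residual `alpha = b - A @ w`; all names here are illustrative assumptions.

```python
import numpy as np

# Toy model (an assumption, not the authors' code):
#   primal:  P(w) = 0.5*||Aw - b||^2 + 0.5*lam*||w||^2
#   dual:    D(a) = a@b - 0.5*||a||^2 - (1/(2*lam))*||A^T a||^2
# For any w, gap(w) = P(w) - D(b - Aw) >= P(w) - P(w*) >= 0,
# so the gap certifies the suboptimality of the iterate w.

def primal(A, b, w, lam):
    r = A @ w - b
    return 0.5 * r @ r + 0.5 * lam * w @ w

def dual(A, b, alpha, lam):
    v = A.T @ alpha
    return alpha @ b - 0.5 * alpha @ alpha - (0.5 / lam) * (v @ v)

def duality_gap(A, b, w, lam):
    # Map the primal iterate to a feasible dual candidate via the residual.
    alpha = b - A @ w
    return primal(A, b, w, lam) - dual(A, b, alpha, lam)

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 10))
b = rng.standard_normal(50)
lam = 0.1

# Exact minimizer of the ridge problem: (A^T A + lam*I) w = A^T b.
w_star = np.linalg.solve(A.T @ A + lam * np.eye(10), A.T @ b)

print(duality_gap(A, b, np.zeros(10), lam))  # large: w=0 is far from optimal
print(duality_gap(A, b, w_star, lam))        # near zero: certified optimal
```

The appeal of such a certificate is that it is computable from the current iterate alone, independently of which optimization algorithm produced it.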

BIG-bench Machine Learning
