Search Results for author: Michal Derezinski

Found 4 papers, 1 paper with code

LocalNewton: Reducing Communication Bottleneck for Distributed Learning

no code implementations 16 May 2021 Vipul Gupta, Avishek Ghosh, Michal Derezinski, Rajiv Khanna, Kannan Ramchandran, Michael Mahoney

To enhance practicality, we devise an adaptive scheme for choosing L and show that it reduces the number of local iterations performed on worker machines between two model synchronizations as training proceeds, successively refining the model quality at the master (a minimal sketch of this local-update pattern follows the entry).

Distributed Optimization
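Not from the paper itself — just a minimal numpy sketch, with illustrative names, of the local-update pattern the abstract describes: each worker takes L local steps from the current master model, then the master averages the results. Plain gradient steps on a least-squares objective stand in for the paper's Newton-type local updates, and the decreasing L schedule is only a stand-in for the adaptive scheme.

```python
import numpy as np

def local_round(workers, w, L, lr=0.5):
    """One synchronization round: each worker starts from the master
    model w, takes L local gradient steps on its own data shard, and
    the master averages the resulting local models."""
    local_models = []
    for X, y in workers:                             # (X, y) = one worker's shard
        w_local = w.copy()
        for _ in range(L):                           # L local iterations between syncs
            grad = X.T @ (X @ w_local - y) / len(y)  # least-squares gradient
            w_local -= lr * grad
        local_models.append(w_local)
    return np.mean(local_models, axis=0)             # model averaging at the master

rng = np.random.default_rng(0)
w_true = rng.normal(size=5)
workers = []
for _ in range(4):                                   # 4 workers, each with its own shard
    X = rng.normal(size=(100, 5))
    workers.append((X, X @ w_true + 0.01 * rng.normal(size=100)))

w = np.zeros(5)
for L in (8, 4, 2, 1):  # stand-in schedule; the paper chooses L adaptively
    w = local_round(workers, w, L)
print(np.linalg.norm(w - w_true))                    # distance to the true model
```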

Improved guarantees and a multiple-descent curve for Column Subset Selection and the Nyström method

no code implementations NeurIPS 2020 Michal Derezinski, Rajiv Khanna, Michael W. Mahoney

The Column Subset Selection Problem (CSSP) and the Nyström method are among the leading tools for constructing small low-rank approximations of large datasets in machine learning and scientific computing.
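As background for the abstract above (not the paper's improved guarantees), here is a minimal numpy sketch of the basic Nyström construction: sample a column subset of a PSD kernel matrix K and approximate K ≈ C W⁺ Cᵀ. The uniform column sampling below is the naive choice; the paper's subject is precisely how much better carefully selected column subsets can do.

```python
import numpy as np

def nystrom(K, idx):
    """Rank-|idx| Nystrom approximation of a PSD matrix K from the
    sampled columns C and their intersection block W: K ~ C W^+ C^T."""
    C = K[:, idx]            # n x k block of sampled columns
    W = K[np.ix_(idx, idx)]  # k x k intersection block
    return C @ np.linalg.pinv(W) @ C.T

# toy check on a Gaussian (RBF) kernel matrix
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 3))
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise squared distances
K = np.exp(-sq)                                      # PSD kernel

idx = rng.choice(300, size=30, replace=False)        # uniform sampling; CSSP picks better columns
err = np.linalg.norm(K - nystrom(K, idx)) / np.linalg.norm(K)
print(f"relative Frobenius error: {err:.3f}")
```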

Sampling from a k-DPP without looking at all items

1 code implementation NeurIPS 2020 Daniele Calandriello, Michal Derezinski, Michal Valko

Determinantal point processes (DPPs) are a useful probabilistic model for selecting a small, diverse subset out of a large collection of items, with applications in summarization, recommendation, stochastic optimization, experimental design, and more (a sampling sketch follows this entry).

Experimental Design, Point Processes
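The paper's contribution is a sampler that avoids touching all items; by contrast, the minimal sketch below is the classic exact k-DPP sampler (in the style of Kulesza and Taskar), which does eigendecompose the full kernel — included only to make the k-DPP sampling task concrete. Function and variable names are illustrative.

```python
import numpy as np

def sample_k_dpp(L, k, rng):
    """Exact k-DPP sample from an n x n PSD kernel L (assumes 1 <= k <= n).
    Note this eigendecomposes all of L up front -- the paper above is
    about avoiding exactly this kind of full pass over the items."""
    lam, V = np.linalg.eigh(L)  # eigendecomposition of the kernel
    n = len(lam)

    # elementary symmetric polynomials: E[j, m] = e_j(lam[:m])
    E = np.zeros((k + 1, n + 1))
    E[0, :] = 1.0
    for j in range(1, k + 1):
        for m in range(1, n + 1):
            E[j, m] = E[j, m - 1] + lam[m - 1] * E[j - 1, m - 1]

    # phase 1: choose k eigenvectors; index m-1 kept w.p. lam*E[j-1,m-1]/E[j,m]
    keep, j = [], k
    for m in range(n, 0, -1):
        if j == 0:
            break
        if rng.random() < lam[m - 1] * E[j - 1, m - 1] / E[j, m]:
            keep.append(m - 1)
            j -= 1
    Vk = V[:, keep]

    # phase 2: sample one item per step from the projection DPP on Vk
    items = []
    while Vk.shape[1] > 0:
        p = (Vk ** 2).sum(axis=1)      # P(pick item i) ~ squared row norm
        i = rng.choice(n, p=p / p.sum())
        items.append(i)
        j0 = np.argmax(np.abs(Vk[i]))  # a column with nonzero entry at i
        Vk = Vk - np.outer(Vk[:, j0], Vk[i] / Vk[i, j0])
        Vk = np.delete(Vk, j0, axis=1)  # column j0 is now all zeros
        if Vk.shape[1]:
            Vk, _ = np.linalg.qr(Vk)    # re-orthonormalize the remaining basis
    return sorted(items)

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 4))
L = X @ X.T + 1e-6 * np.eye(50)  # PSD similarity kernel over 50 items
print(sample_k_dpp(L, k=5, rng=rng))
```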

The limits of squared Euclidean distance regularization

no code implementations NeurIPS 2014 Michal Derezinski, Manfred K. Warmuth

We conjecture that our hardness results hold for any training algorithm that is based on squared Euclidean distance regularization (i.e., Back-propagation with the Weight Decay heuristic).
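For concreteness (this example is not from the paper): the Weight Decay heuristic referred to here adds a squared Euclidean penalty (lam/2)·||w||² to the training loss, which enters each gradient step as a shrinkage term lam·w. A minimal sketch with illustrative names:

```python
import numpy as np

def weight_decay_step(w, grad_loss, lr=0.1, lam=0.01):
    """One gradient step on loss(w) + (lam/2) * ||w||^2: the squared
    Euclidean penalty contributes lam * w to the gradient, shrinking
    the weights toward zero (the 'weight decay' heuristic)."""
    return w - lr * (grad_loss + lam * w)

# toy quadratic loss 0.5 * ||w - t||^2, so grad_loss = w - t
t = np.array([1.0, -2.0, 3.0])
w = np.zeros(3)
for _ in range(200):
    w = weight_decay_step(w, w - t)
print(w)  # converges to t / (1 + lam): slightly shrunk toward zero
```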
