Search Results for author: Barzan Mozafari

Found 9 papers, 2 papers with code

Communication-efficient Distributed Learning for Large Batch Optimization

1 code implementation • Proceedings of the 39th International Conference on Machine Learning 2022 • Rui Liu, Barzan Mozafari

In this paper, we propose new gradient compression methods for large batch optimization, JointSpar and its variant JointSpar-LARS with layerwise adaptive learning rates, that jointly reduce both the computation and the communication cost.
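The core idea behind this style of gradient compression can be sketched as layerwise sparsification with an unbiasedness correction: each layer's gradient is dropped with some probability, and the survivors are rescaled so the compressed gradient is still an unbiased estimate of the full one. This is a minimal illustrative sketch only; JointSpar additionally learns the per-layer keep probabilities online, and JointSpar-LARS combines this with layerwise adaptive learning rates, both of which are omitted here. The function name and the uniform `keep_prob` are assumptions for illustration.

```python
import numpy as np

def sparsify_layerwise(grads, keep_prob, rng):
    """Keep each layer's gradient with probability keep_prob; rescale
    survivors by 1/keep_prob so the result is an unbiased estimate of
    the full gradient. Dropped layers are zeroed, so their gradients
    need neither be computed nor communicated."""
    out = []
    for g in grads:
        if rng.random() < keep_prob:
            out.append(g / keep_prob)      # rescale for unbiasedness
        else:
            out.append(np.zeros_like(g))   # dropped: nothing to send
    return out
```

Averaging the sparsified gradients over many draws recovers the original gradient, which is what allows convergence guarantees to carry over despite the compression.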

Transformer with Memory Replay

no code implementations • 19 May 2022 • Rui Liu, Barzan Mozafari

Transformers achieve state-of-the-art performance for natural language processing tasks by pre-training on large-scale text corpora.

Adam with Bandit Sampling for Deep Learning

no code implementations • NeurIPS 2020 • Rui Liu, Tianyi Wu, Barzan Mozafari

In this paper, we propose a generalization of Adam, called Adambs, that allows us to also adapt to different training examples based on their importance in the model's convergence.
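Adapting to training examples "based on their importance" suggests a bandit-style sampler over the dataset: examples whose gradients are large get sampled more often. The sketch below shows an EXP3-style multiplicative-weights sampler of that flavor; it is an illustrative assumption, not the paper's exact algorithm, and the class name and `eta` parameter are invented for the example.

```python
import numpy as np

class BanditExampleSampler:
    """EXP3-style sampler over training examples: examples whose recent
    gradient norms are large receive larger sampling weights.
    (Illustrative sketch of the idea behind Adambs, not its exact update.)"""

    def __init__(self, n, eta=0.1, rng=None):
        self.w = np.ones(n)            # one weight per training example
        self.eta = eta
        self.rng = rng or np.random.default_rng()

    def probs(self):
        return self.w / self.w.sum()

    def sample(self, batch_size):
        p = self.probs()
        idx = self.rng.choice(len(self.w), size=batch_size, p=p)
        return idx, p

    def update(self, idx, grad_norms, p):
        # Importance-weighted multiplicative update: dividing by the
        # sampling probability corrects for the fact that we only
        # observe gradients of the examples we happened to draw.
        self.w[idx] *= np.exp(self.eta * grad_norms / (p[idx] * len(self.w)))
```

In a training loop, the sampled minibatch indices feed an ordinary Adam step, and the resulting per-example gradient norms feed `update`, closing the bandit feedback loop.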

QuickSel: Quick Selectivity Learning with Mixture Models

1 code implementation • 26 Dec 2018 • Yongjoo Park, Shucheng Zhong, Barzan Mozafari

Estimating the selectivity of a query is a key step in almost any cost-based query optimizer.

Databases
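Query-driven selectivity learning of this kind fits a density model so that its predictions agree with the selectivities observed for past queries. The sketch below uses a fixed bucket grid and least squares purely for illustration; QuickSel's distinguishing choice is a mixture model rather than fixed buckets, which this sketch does not reproduce. All function names and the bucket layout are assumptions.

```python
import numpy as np

def fit_bucket_densities(queries, selectivities, edges):
    """Fit per-bucket probability masses so that predicted selectivities
    of past range queries match their observed selectivities, via least
    squares. (Sketch of query-driven selectivity learning; QuickSel
    itself fits a mixture model instead of a fixed bucket grid.)"""
    # A[i, j] = fraction of bucket j covered by query i's range
    A = np.zeros((len(queries), len(edges) - 1))
    for i, (lo, hi) in enumerate(queries):
        for j in range(len(edges) - 1):
            bl, bh = edges[j], edges[j + 1]
            overlap = max(0.0, min(hi, bh) - max(lo, bl))
            A[i, j] = overlap / (bh - bl)
    masses, *_ = np.linalg.lstsq(A, np.array(selectivities), rcond=None)
    return masses

def estimate_selectivity(masses, edges, lo, hi):
    """Predict the selectivity of a new range predicate [lo, hi)."""
    sel = 0.0
    for j in range(len(edges) - 1):
        bl, bh = edges[j], edges[j + 1]
        overlap = max(0.0, min(hi, bh) - max(lo, bl))
        sel += masses[j] * overlap / (bh - bl)
    return sel
```

The appeal of the query-driven approach is that it needs no scan of the underlying table: every observed query result refines the model for free.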

BlinkML: Efficient Maximum Likelihood Estimation with Probabilistic Guarantees

no code implementations • 26 Dec 2018 • Yongjoo Park, Jingyi Qing, Xiaoyang Shen, Barzan Mozafari

Most practitioners cannot precisely capture the effect of sampling on the quality of their model, and eventually on their decision-making process during the tuning phase.

Decision Making • regression

A Bandit Approach to Maximum Inner Product Search

no code implementations • 15 Dec 2018 • Rui Liu, Tianyi Wu, Barzan Mozafari

There has been substantial research on sub-linear time approximate algorithms for Maximum Inner Product Search (MIPS).
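Sampling-based approaches to MIPS rest on a simple unbiased estimator: an inner product over a random subset of coordinates, rescaled by `d / n_samples`, estimates the full inner product. The sketch below uses this estimator with a uniform sampling budget; a bandit method would instead allocate samples adaptively per candidate, spending more on close contenders. The function name and uniform budget are illustrative assumptions.

```python
import numpy as np

def approx_mips(query, atoms, n_samples, rng):
    """Estimate each inner product <query, atom> from a random subset of
    coordinates, rescaled by d / n_samples for unbiasedness, and return
    the index of the best-scoring atom. (Uniform-budget sketch of the
    estimator that adaptive bandit-style MIPS methods build on.)"""
    d = query.shape[0]
    idx = rng.choice(d, size=n_samples, replace=False)
    # Partial inner products over the sampled coordinates only
    scores = atoms[:, idx] @ query[idx] * (d / n_samples)
    return int(np.argmax(scores))
```

With `n_samples` well below `d`, each candidate costs only a fraction of a full inner product, which is where the sub-linear time savings come from.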

Revisiting Projection-Free Optimization for Strongly Convex Constraint Sets

no code implementations • 14 Nov 2018 • Jarrid Rector-Brooks, Jun-Kun Wang, Barzan Mozafari

We also show that, for the general case of (smooth) non-convex functions, FW with line search converges with high probability to a stationary point at a rate of $O\left(\frac{1}{t}\right)$, as long as the constraint set is strongly convex -- one of the fastest convergence rates in non-convex optimization.
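Frank-Wolfe (FW) is projection-free because each step only needs a linear minimization oracle (LMO) over the constraint set rather than a projection. For an L2 ball, a canonical strongly convex constraint set, the LMO has a closed form: `s = -radius * g / ||g||`. The sketch below uses the standard `2/(t+2)` step size for simplicity, whereas the paper analyzes the line-search variant; function names are illustrative.

```python
import numpy as np

def frank_wolfe_ball(grad_f, x0, radius=1.0, steps=50):
    """Projection-free Frank-Wolfe over an L2 ball of given radius.
    Each iterate moves toward the LMO solution s = argmin_{||s||<=r} <g, s>,
    which for the ball is -radius * g / ||g||. Uses the classic 2/(t+2)
    step size; the paper's results concern line search. (Sketch only.)"""
    x = x0.copy()
    for t in range(steps):
        g = grad_f(x)
        s = -radius * g / np.linalg.norm(g)   # closed-form LMO for the ball
        gamma = 2.0 / (t + 2.0)
        x = (1 - gamma) * x + gamma * s       # stay inside the ball by convexity
    return x
```

Because every iterate is a convex combination of points in the ball, feasibility is maintained for free, with no projection step anywhere.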

Database Learning: Toward a Database that Becomes Smarter Every Time

no code implementations • 16 Mar 2017 • Yongjoo Park, Ahmad Shahab Tajik, Michael Cafarella, Barzan Mozafari

Also, processing more queries should continuously enhance our knowledge of the underlying distribution, and hence lead to increasingly faster response times for future queries.

Active Learning for Crowd-Sourced Databases

no code implementations • 17 Sep 2012 • Barzan Mozafari, Purnamrita Sarkar, Michael J. Franklin, Michael I. Jordan, Samuel Madden

Based on this observation, we present two new active learning algorithms to combine humans and algorithms together in a crowd-sourced database.

Active Learning • BIG-bench Machine Learning
