Search Results for author: Walid Krichene

Found 25 papers, 5 papers with code

Training Differentially Private Ad Prediction Models with Semi-Sensitive Features

no code implementations · 26 Jan 2024 · Lynn Chua, Qiliang Cui, Badih Ghazi, Charlie Harrison, Pritish Kamath, Walid Krichene, Ravi Kumar, Pasin Manurangsi, Krishna Giri Narra, Amer Sinha, Avinash Varadarajan, Chiyuan Zhang

Motivated by problems arising in digital advertising, we introduce the task of training differentially private (DP) machine learning models with semi-sensitive features.

Private Learning with Public Features

no code implementations · 24 Oct 2023 · Walid Krichene, Nicolas Mayoraz, Steffen Rendle, Shuang Song, Abhradeep Thakurta, Li Zhang

We study a class of private learning problems in which the data is a join of private and public features.

Private Matrix Factorization with Public Item Features

no code implementations · 17 Sep 2023 · Mihaela Curmei, Walid Krichene, Li Zhang, Mukund Sundararajan

It can be applied to different types of public item data, including: (1) categorical item features; (2) item-item similarities learned from public sources; and (3) publicly available user feedback.

Collaborative Filtering

Multi-Task Differential Privacy Under Distribution Skew

no code implementations · 15 Feb 2023 · Walid Krichene, Prateek Jain, Shuang Song, Mukund Sundararajan, Abhradeep Thakurta, Li Zhang

We study the problem of multi-task learning under user-level differential privacy, in which $n$ users contribute data to $m$ tasks, each involving a subset of users.

Multi-Task Learning

Differentially Private Image Classification from Features

1 code implementation · 24 Nov 2022 · Harsh Mehta, Walid Krichene, Abhradeep Thakurta, Alexey Kurakin, Ashok Cutkosky

We find that linear regression is much more effective than logistic regression from both privacy and computational aspects, especially at stricter epsilon values ($\epsilon < 1$).

Classification · Image Classification · +3
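
The paper trains simple models on features extracted by a pretrained backbone. As a concrete illustration of why linear regression composes well with differential privacy, here is a minimal sketch of DP linear regression via noisy sufficient statistics, a standard construction that is not necessarily the paper's exact mechanism; the function name and the simplified privacy accounting are assumptions.

```python
# Hedged sketch: differentially private linear regression on
# pre-extracted features via noisy sufficient statistics. Assumes each
# feature row has L2 norm <= 1 and labels lie in [-1, 1], so each
# example's contribution to X^T X and X^T y is bounded. The privacy
# accounting below is simplified: a real implementation would split
# the (epsilon, delta) budget across the two released statistics.
import jax.numpy as jnp
from jax import random

def dp_linear_regression(X, y, epsilon, delta, reg=0.1, key=random.PRNGKey(0)):
    n, d = X.shape
    sigma = jnp.sqrt(2.0 * jnp.log(1.25 / delta)) / epsilon
    k1, k2 = random.split(key)
    # Noisy second-moment matrix; symmetrize the Gaussian perturbation.
    E = random.normal(k1, (d, d)) * sigma
    A = X.T @ X + (E + E.T) / jnp.sqrt(2.0)
    # Noisy cross-correlation vector.
    b = X.T @ y + random.normal(k2, (d,)) * sigma
    # A ridge term keeps the noisy system well conditioned.
    return jnp.linalg.solve(A + reg * n * jnp.eye(d), b)
```

Unlike logistic regression, which needs iterative noisy gradient steps, the statistics $X^\top X$ and $X^\top y$ are computed once and noised once, consistent with the abstract's observation about privacy and computational cost.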

Reciprocity in Machine Learning

no code implementations · 19 Feb 2022 · Mukund Sundararajan, Walid Krichene

Are these contributions (outflows of influence) and benefits (inflows of influence) reciprocal?

BIG-bench Machine Learning · Recommendation Systems

ALX: Large Scale Matrix Factorization on TPUs

no code implementations · 3 Dec 2021 · Harsh Mehta, Steffen Rendle, Walid Krichene, Li Zhang

We present ALX, an open-source library for distributed matrix factorization using Alternating Least Squares, written in JAX.

Link Prediction
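
For readers unfamiliar with ALS, here is a minimal single-device sketch of one alternating step; all names are illustrative and this is not the ALX API. It covers the explicit-feedback case, whereas ALX targets implicit feedback at scale and shards the per-row solves across TPU cores.

```python
# One Alternating Least Squares step (explicit feedback, single device).
# Fixing the item matrix V, each user's embedding is the solution of an
# independent ridge regression over the items that user rated.
import jax.numpy as jnp
from jax import vmap

def solve_row(ratings_row, mask_row, V, reg):
    # mask_row is 1.0 where a rating is observed, else 0.0.
    Vw = V * mask_row[:, None]
    A = Vw.T @ Vw + reg * jnp.eye(V.shape[1])
    b = Vw.T @ ratings_row
    return jnp.linalg.solve(A, b)

def als_step(R, mask, U, V, reg=0.1):
    # Update all users in parallel, then all items on the transposed problem.
    U = vmap(solve_row, in_axes=(0, 0, None, None))(R, mask, V, reg)
    V = vmap(solve_row, in_axes=(0, 0, None, None))(R.T, mask.T, U, reg)
    return U, V
```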

Revisiting the Performance of iALS on Item Recommendation Benchmarks

1 code implementation · 26 Oct 2021 · Steffen Rendle, Walid Krichene, Li Zhang, Yehuda Koren

Matrix factorization learned by implicit alternating least squares (iALS) is a popular baseline in recommender system research publications.

Collaborative Filtering · Recommendation Systems

iALS++: Speeding up Matrix Factorization with Subspace Optimization

1 code implementation · 26 Oct 2021 · Steffen Rendle, Walid Krichene, Li Zhang, Yehuda Koren

However, iALS does not scale well to large embedding dimensions $d$, since its runtime is cubic in $d$. Coordinate descent variants (iCD) have been proposed to lower the complexity to quadratic in $d$. In this work, we show that iCD approaches are not well suited to modern processors: they can be an order of magnitude slower than a careful iALS implementation at small to mid-scale embedding sizes ($d \approx 100$), and only outperform iALS at large embeddings ($d \approx 1000$).
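
To make the scaling claim concrete, a back-of-the-envelope per-row operation count, inferred from the abstract rather than taken from the paper's exact accounting:

$$
\underbrace{O(d^3)}_{\text{iALS: one } d \times d \text{ solve}}
\qquad
\underbrace{O(d^2)}_{\text{iCD: } d \text{ scalar updates}}
\qquad
\underbrace{O\!\left(\tfrac{d}{k}\left(k^3 + kd\right)\right) = O(dk^2 + d^2)}_{\text{subspace blocks of size } k}
$$

Block updates keep iCD's quadratic dependence on $d$, but replace its scalar updates with small dense $k \times k$ solves that vectorize well on modern hardware, which is the advantage the abstract alludes to.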

Global Convergence of Second-order Dynamics in Two-layer Neural Networks

no code implementations · 14 Jul 2020 · Walid Krichene, Kenneth F. Caluya, Abhishek Halder

Recent results have shown that for two-layer fully connected neural networks, gradient flow converges to a global optimum in the infinite width limit, by making a connection between the mean field dynamics and the Wasserstein gradient flow.

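For context, the first-order picture the abstract refers to is standard in this literature (the notation below is a generic reconstruction, not taken from the paper): in the infinite-width limit, the empirical distribution $\mu_t$ of the hidden-unit parameters evolves as the Wasserstein gradient flow of the risk functional $F$,

$$
\partial_t \mu_t = \nabla \cdot \left( \mu_t \, \nabla_\theta \frac{\delta F}{\delta \mu}(\mu_t) \right),
$$

along which $F(\mu_t)$ decreases; the paper studies the analogous question for second-order dynamics.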

Superbloom: Bloom filter meets Transformer

no code implementations · 11 Feb 2020 · John Anderson, Qingqing Huang, Walid Krichene, Steffen Rendle, Li Zhang

We extend the idea of word pieces in natural language models to machine learning tasks on opaque ids.
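
The core idea is to hash each opaque id into several buckets of a small shared vocabulary, as in a Bloom filter, so a collision on any single bucket is disambiguated by the others. A minimal sketch of the tokenization and embedding step (illustrative constants and names; the paper feeds the hash tokens to a Transformer rather than summing their embeddings):

```python
# Bloom-filter tokenization sketch: represent an opaque id by k hash
# buckets in a vocabulary much smaller than the id space.
import jax.numpy as jnp
from jax import random

VOCAB = 4096  # number of hash buckets, far fewer than distinct ids
K = 3         # number of hash functions
DIM = 64      # embedding width

def bloom_tokens(item_id: int) -> jnp.ndarray:
    # Cheap stand-in for K independent hash functions.
    return jnp.array([hash((item_id, j)) % VOCAB for j in range(K)])

table = random.normal(random.PRNGKey(0), (VOCAB, DIM)) * 0.02

def embed(item_id: int) -> jnp.ndarray:
    # Combine the K bucket embeddings; a collision in one bucket is
    # resolved by the other K - 1 buckets.
    return table[bloom_tokens(item_id)].sum(axis=0)
```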

Acceleration and Averaging in Stochastic Descent Dynamics

no code implementations · NeurIPS 2017 · Walid Krichene, Peter L. Bartlett

We formulate and study a general family of (continuous-time) stochastic dynamics for accelerated first-order minimization of smooth convex functions.

Acceleration and Averaging in Stochastic Mirror Descent Dynamics

no code implementations · 19 Jul 2017 · Walid Krichene, Peter L. Bartlett

We discuss the interaction between the parameters of the dynamics (learning rate and averaging weights) and the covariation of the noise process, and show, in particular, how the asymptotic rate of covariation affects the choice of parameters and, ultimately, the convergence rate.
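
A generic form of the dynamics studied in this pair of papers, reconstructed from the abstracts rather than quoted from them: a dual variable accumulates noisy gradients while the primal iterate averages the mirrored dual trajectory,

$$
dZ_t = -\eta(t)\, \nabla f(X_t)\, dt + \sigma(t)\, dB_t,
\qquad
X_t = \frac{\int_0^t w(s)\, \nabla \psi^*(Z_s)\, ds}{\int_0^t w(s)\, ds},
$$

where $\eta$ is the learning rate, $w$ the averaging weights, $\psi^*$ the conjugate of the mirror map, and $B_t$ the noise process whose covariation governs the admissible parameter choices and the resulting rate.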

Minimizing Regret on Reflexive Banach Spaces and Nash Equilibria in Continuous Zero-Sum Games

no code implementations · NeurIPS 2016 · Maximilian Balandat, Walid Krichene, Claire Tomlin, Alexandre Bayen

We study a general adversarial online learning problem, in which we are given a decision set $X'$ in a reflexive Banach space $X$ and a sequence of reward vectors in the dual space $X^*$ of $X$.

Adaptive Averaging in Accelerated Descent Dynamics

no code implementations · NeurIPS 2016 · Walid Krichene, Alexandre Bayen, Peter L. Bartlett

This dynamics can be described naturally as a coupling of a dual variable accumulating gradients at a given rate $\eta(t)$, and a primal variable obtained as the weighted average of the mirrored dual trajectory, with weights $w(t)$.
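
In symbols, the noiseless coupling described above (notation reconstructed to match the abstract):

$$
\dot{Z}(t) = -\eta(t)\, \nabla f(X(t)),
\qquad
X(t) = \frac{\int_0^t w(s)\, \nabla \psi^*(Z(s))\, ds}{\int_0^t w(s)\, ds},
$$

and, roughly, adaptive averaging chooses the weights $w(t)$ on the fly so that a Lyapunov function certifying the convergence rate continues to decrease.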

Minimizing Regret on Reflexive Banach Spaces and Learning Nash Equilibria in Continuous Zero-Sum Games

no code implementations · 3 Jun 2016 · Maximilian Balandat, Walid Krichene, Claire Tomlin, Alexandre Bayen

Under the assumption of uniformly continuous rewards, we obtain explicit anytime regret bounds in a setting where the decision set is the set of probability distributions on a compact metric space $S$ whose Radon-Nikodym derivatives are elements of $L^p(S)$ for some $p > 1$.
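
For concreteness, the regret functional in this setting takes the usual form, written here with the duality pairing (a reconstruction consistent with the abstract, not a quotation from the paper):

$$
R_n = \sup_{x \in X'} \sum_{t=1}^{n} \langle r_t, x \rangle - \sum_{t=1}^{n} \langle r_t, x_t \rangle,
$$

where $r_t \in X^*$ are the reward vectors, $x_t \in X'$ the chosen decisions, and $\langle \cdot, \cdot \rangle$ pairs $X^*$ with $X$.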

Learning Nash Equilibria in Congestion Games

no code implementations · 31 Jul 2014 · Walid Krichene, Benjamin Drighès, Alexandre M. Bayen

We show that strong convergence is guaranteed for a class of algorithms whose discounted regret admits a vanishing upper bound and which satisfy one additional condition.
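
One common normalization of discounted regret consistent with the abstract (a sketch, not the paper's exact definition): with discount factors $\gamma_\tau$ and losses $\ell_\tau$ over actions $\mathcal{A}$,

$$
\bar{R}(T) = \frac{1}{\sum_{\tau=1}^{T} \gamma_\tau} \, \max_{a \in \mathcal{A}} \sum_{\tau=1}^{T} \gamma_\tau \left( \ell_\tau(a_\tau) - \ell_\tau(a) \right),
$$

and a "vanishing upper bound on discounted regret" asks that $\bar{R}(T) \to 0$.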
