We introduce the notion of performative power, which measures the ability of a firm operating an algorithmic system, such as a digital content recommendation platform, to steer a population.
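One way to formalize this measure (a paraphrase of the definition used in the performative power line of work; the notation $z_i$ for participant $i$'s data and $\mathcal{F}$ for the firm's action set is ours) is

\[
\mathrm{P} \;=\; \sup_{f \in \mathcal{F}} \; \frac{1}{n} \sum_{i=1}^{n} \mathbb{E}\big[\, \| z_i - z_i(f) \| \,\big],
\]

where $z_i(f)$ denotes participant $i$'s data after the firm deploys action $f$: a firm has high performative power if it can unilaterally induce large average changes in the population's data.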
Our main contribution is an algorithm that achieves regret bounds scaling only with the complexity of the induced distribution shifts, not with that of the reward function.
When reasoning about strategic behavior in a machine learning context, it is tempting to combine standard microfoundations of rational agents with the statistical decision theory underlying classification.
An increasingly common setting in machine learning involves multiple parties, each with their own data, who want to jointly make predictions on future test points.
We explain why standard design choices are problematic in these cases, and show that alternative choices of surrogate objectives and policy parameterizations can prevent these failure modes.
In this paper we tackle the challenge of making the stochastic coordinate descent algorithm differentially private.
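A minimal sketch of the general recipe for privatizing coordinate descent, under our own assumptions rather than the paper's specific mechanism: the function name `dp_scd`, the ridge objective, and the noise calibration below are all illustrative, and the noise scale would need to be calibrated to a target $(\epsilon, \delta)$ guarantee.

```python
import numpy as np

def dp_scd(X, y, lam=0.1, steps=1000, clip=1.0, sigma=4.0, seed=0):
    """Differentially private stochastic coordinate descent for ridge
    regression (illustrative sketch): each coordinate-wise gradient is
    clipped to bound its sensitivity, then perturbed with Gaussian noise
    before the update."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(steps):
        j = rng.integers(d)                      # pick a random coordinate
        residual = X @ w - y
        g = X[:, j] @ residual / n + lam * w[j]  # gradient w.r.t. coordinate j
        g = np.clip(g, -clip, clip)              # bound sensitivity
        g += rng.normal(0.0, sigma * clip)       # Gaussian perturbation
        L_j = (X[:, j] @ X[:, j]) / n + lam      # coordinate-wise curvature
        w[j] -= g / L_j                          # coordinate update
    return w

# Usage on synthetic data:
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 10))
y = X @ rng.normal(size=10) + 0.1 * rng.normal(size=200)
w_priv = dp_scd(X, y)
```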
In performative prediction, the choice of a model influences the distribution of future data, typically through actions taken based on the model's predictions.
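Concretely, in the standard formulation of performative prediction, writing $\mathcal{D}(\theta)$ for the data distribution induced by deploying model $\theta$ and $\ell$ for a loss function, the performative risk is

\[
\mathrm{PR}(\theta) \;=\; \mathbb{E}_{Z \sim \mathcal{D}(\theta)}\big[\ell(Z;\theta)\big],
\]

and a performatively stable model is a fixed point $\theta_{\mathrm{PS}} \in \arg\min_{\theta}\, \mathbb{E}_{Z \sim \mathcal{D}(\theta_{\mathrm{PS}})}\big[\ell(Z;\theta)\big]$, i.e., a model that is optimal on the distribution it itself induces.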
In this paper we propose a novel parallel stochastic coordinate descent (SCD) algorithm that exhibits strong scalability while preserving convergence guarantees.
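As a hedged sketch of the general idea behind parallel SCD, and not the paper's specific algorithm: workers update disjoint blocks of coordinates concurrently against a shared parameter vector, so writes never conflict and stale reads of the other blocks are tolerated. The function `parallel_scd` and the ridge objective are illustrative assumptions.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def parallel_scd(X, y, lam=0.1, epochs=20, n_workers=4, seed=0):
    """Block-parallel stochastic coordinate descent for ridge regression
    (illustrative sketch). Each worker sweeps its own disjoint block of
    coordinates; the shared vector w is updated in place, so writes touch
    disjoint entries while reads of the full w may be slightly stale."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    col_norms = (X * X).sum(axis=0) / n + lam    # per-coordinate curvature

    def sweep(block):
        for j in block:
            residual = X @ w - y
            g = X[:, j] @ residual / n + lam * w[j]
            w[j] -= g / col_norms[j]             # only this block writes w[j]

    blocks = np.array_split(rng.permutation(d), n_workers)
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        for _ in range(epochs):
            list(pool.map(sweep, blocks))        # one parallel epoch
    return w
```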
In this paper we analyze, evaluate, and improve the performance of training Random Forest (RF) models on modern CPU architectures.
To address this, we propose Chicle, a new elastic distributed training framework that exploits the nature of machine learning algorithms to implement elasticity and load balancing without micro-tasks.
A new generation of manycore processors is on the rise, offering dozens of cores or more on a single chip and, in a sense, fusing host processor and accelerator.
We present Acquisition Thompson Sampling (ATS), a novel technique for batch Bayesian Optimization (BO) based on the idea of sampling multiple acquisition functions from a stochastic process.
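A minimal sketch of the batch-construction idea, as our paraphrase rather than the paper's exact procedure: draw a different randomized acquisition function for each batch point and maximize each over a candidate set. Here the randomization is an illustrative stand-in, a GP-UCB acquisition with an exploration weight kappa sampled per batch point, and the function name `ats_batch` is hypothetical.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def ats_batch(X_obs, y_obs, candidates, batch_size=4, seed=0):
    """Pick a batch by sampling one acquisition function per batch point:
    UCB with a randomly drawn exploration weight kappa (illustrative
    stand-in for sampling acquisitions from a stochastic process)."""
    rng = np.random.default_rng(seed)
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2),
                                  normalize_y=True).fit(X_obs, y_obs)
    mu, std = gp.predict(candidates, return_std=True)
    batch, chosen = [], set()
    for _ in range(batch_size):
        kappa = rng.lognormal(mean=0.0, sigma=1.0)   # sampled acquisition
        acq = mu + kappa * std                       # UCB with random kappa
        for i in np.argsort(-acq):                   # best unchosen candidate
            if i not in chosen:
                chosen.add(i)
                batch.append(candidates[i])
                break
    return np.array(batch)

# Usage: propose a batch for a 1-D toy maximization over a candidate grid.
f = lambda x: -np.sin(3 * x) - x**2 + 0.7 * x
X_obs = np.array([[-0.9], [0.1], [1.1]])
y_obs = f(X_obs).ravel()
candidates = np.linspace(-2, 2, 200).reshape(-1, 1)
print(ats_batch(X_obs, y_obs, candidates))
```

Sampling a fresh kappa per batch point diversifies the batch without an explicit diversity penalty: some points exploit the posterior mean while others explore high-variance regions.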