no code implementations • 24 Mar 2023 • Rafael Frongillo, Manuel Lladser, Anish Thilagar, Bo Waggoner
We initiate the study of forecasting competitions for correlated events.
no code implementations • 7 Nov 2022 • Rafael Frongillo, Dhamma Kimpara, Bo Waggoner
The characterization rules out a loss whose expectation is the cross-entropy between the target distribution and the model.
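As a point of reference for the loss ruled out here, a minimal sketch of the cross-entropy between a discrete target distribution p and a model q (all names below are illustrative, not from the paper):

```python
import math

def cross_entropy(p, q):
    """Cross-entropy H(p, q) = -sum_y p(y) * log q(y) between two
    discrete distributions given as lists of probabilities."""
    return -sum(pi * math.log(qi) for pi, qi in zip(p, q) if pi > 0)

p = [0.5, 0.5]                       # target distribution
print(cross_entropy(p, p))           # equals the entropy of p: log 2 ≈ 0.693
print(cross_entropy(p, [0.9, 0.1]))  # strictly larger, since q != p
```

Cross-entropy is minimized (in q) exactly when q = p, which is what makes a loss with this expectation attractive in the first place.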
no code implementations • 18 Jul 2022 • Jessie Finocchiaro, Rafael Frongillo, Emma Goodwill, Anish Thilagar
For the proposed hinge-like surrogates that are convex (i.e., polyhedral), we apply the recent embedding framework of Finocchiaro et al. (2019; 2022) to determine the prediction problem for which the surrogate is consistent.
no code implementations • 16 Mar 2022 • Jessie Finocchiaro, Rafael Frongillo, Enrique Nueve
The Lovász hinge is a convex surrogate recently proposed for structured binary classification, in which $k$ binary predictions are made simultaneously and the error is judged by a submodular set function.
no code implementations • 24 Feb 2022 • Gabriel P. Andrade, Rafael Frongillo, Georgios Piliouras
Games are natural models for multi-agent machine learning settings, such as generative adversarial networks (GANs).
no code implementations • NeurIPS 2021 • Rafael Frongillo, Bo Waggoner
Surrogate risk minimization is a ubiquitous paradigm in supervised machine learning, wherein a target problem is solved by minimizing a surrogate loss on a dataset.
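A minimal sketch of the paradigm, assuming a toy 1-D dataset and a threshold classifier (none of which come from the paper): the 0-1 target loss is not convex, so we instead minimize the hinge loss, a standard convex surrogate, by subgradient descent.

```python
# Toy 1-D example: learn a linear score w*x by minimizing the hinge
# loss (convex surrogate) rather than the 0-1 target loss directly.
xs = [-2.0, -1.0, -0.5, 0.5, 1.0, 2.0]
ys = [-1, -1, -1, 1, 1, 1]

def hinge(w, x, y):
    return max(0.0, 1.0 - y * w * x)

def zero_one(w, x, y):
    return 0 if y * w * x > 0 else 1

w = 0.0
for _ in range(200):  # subgradient descent on the surrogate risk
    g = sum(-y * x for x, y in zip(xs, ys) if y * w * x < 1)
    w -= 0.05 * g

target_risk = sum(zero_one(w, x, y) for x, y in zip(xs, ys)) / len(xs)
print(target_risk)  # 0.0: here the surrogate minimizer also solves the target
```

Consistency, the subject of this line of work, asks when minimizing the surrogate is guaranteed to minimize the target loss in this way.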
no code implementations • 28 Jun 2021 • Gabriel P. Andrade, Rafael Frongillo, Elliot Gorokhovsky, Sharadha Srinivasan
Kakade, Kearns, and Ortiz (KKO) introduce a graph-theoretic generalization of the classic Arrow–Debreu (AD) exchange economy.
no code implementations • NeurIPS 2021 • Jessica Finocchiaro, Rafael Frongillo, Bo Waggoner
The convex consistency dimension of a supervised learning task is the lowest prediction dimension $d$ such that there exists a convex surrogate $L : \mathbb{R}^d \times \mathcal Y \to \mathbb R$ that is consistent for the given task.
no code implementations • 5 Mar 2021 • Gabriel P. Andrade, Rafael Frongillo, Georgios Piliouras
In this paper we show that, in a strong sense, this dynamic complexity is inherent to games.
no code implementations • 16 Feb 2021 • Rafael Frongillo, Robert Gomez, Anish Thilagar, Bo Waggoner
Winner-take-all competitions in forecasting and machine learning suffer from distorted incentives.
no code implementations • NeurIPS 2021 • Jessie Finocchiaro, Rafael Frongillo, Bo Waggoner
Given a prediction task, understanding when one can and cannot design a consistent convex surrogate loss, particularly a low-dimensional one, is an important and active area of machine learning research.
no code implementations • NeurIPS 2019 • Jessie Finocchiaro, Rafael Frongillo, Bo Waggoner
Conversely, we show how to construct a consistent polyhedral surrogate for any given discrete loss.
no code implementations • NeurIPS 2018 • Jessica Finocchiaro, Rafael Frongillo
A property or statistic of a distribution is said to be elicitable if it can be expressed as the minimizer of some loss function in expectation.
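A standard concrete instance of this definition, checked numerically under an illustrative finite distribution (the numbers below are my own example): the mean is elicitable because it minimizes expected squared loss.

```python
# The mean is an elicitable property: it is the minimizer of expected
# squared loss. A small numerical check over a finite distribution.
outcomes = [0.0, 1.0, 4.0]
probs = [0.5, 0.25, 0.25]

def expected_loss(report):
    """E[(report - Y)^2] under the distribution above."""
    return sum(p * (report - y) ** 2 for p, y in zip(probs, outcomes))

mean = sum(p * y for p, y in zip(probs, outcomes))   # 1.25
candidates = [i / 100 for i in range(0, 401)]
best = min(candidates, key=expected_loss)
print(best)  # 1.25, matching the mean
```

By contrast, properties like the variance are not elicitable by any single loss, which motivates the study of which properties are.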
no code implementations • 27 Feb 2018 • Rafael Frongillo, Nishant A. Mehta, Tom Morgan, Bo Waggoner
Recent work introduced loss functions which measure the error of a prediction based on multiple simultaneous observations or outcomes.
no code implementations • 5 Jun 2017 • Sebastian Casalaina-Martin, Rafael Frongillo, Tom Morgan, Bo Waggoner
We study loss functions that measure the accuracy of a prediction based on multiple data points simultaneously.
no code implementations • NeurIPS 2016 • Chien-Ju Ho, Rafael Frongillo, Yi-Ling Chen
Our model generalizes both categories and enables the joint exploration of optimal elicitation and aggregation.
no code implementations • NeurIPS 2015 • Rafael Frongillo, Mark D. Reid
However, little is known about rates and guarantees for the convergence of these sequential mechanisms, and two recent papers cite this as an important open question. In this paper we show how some previously studied prediction market trading models can be understood as a natural generalization of randomized coordinate descent which we call randomized subspace descent (RSD).
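For orientation, a minimal sketch of the randomized coordinate descent that RSD generalizes, on a separable quadratic objective (the objective and step size are illustrative assumptions, not the paper's trading model):

```python
import random

# Randomized coordinate descent on f(x) = sum_i a_i * (x_i - c_i)^2:
# each step picks one coordinate uniformly at random and takes an
# exact minimizing step along it.
random.seed(0)
a = [1.0, 2.0, 0.5]
c = [3.0, -1.0, 2.0]

def grad_i(x, i):
    return 2 * a[i] * (x[i] - c[i])

x = [0.0, 0.0, 0.0]
for _ in range(2000):
    i = random.randrange(len(x))
    x[i] -= (1 / (2 * a[i])) * grad_i(x, i)  # exact step on coordinate i

print(x)  # converges to the minimizer c = [3.0, -1.0, 2.0]
```

In RSD, the random coordinate is replaced by a random subspace, which is what lets prediction-market trade dynamics fit the same template.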
no code implementations • NeurIPS 2015 • Rafael Frongillo, Ian Kash
Elicitation is the study of statistics or properties which are computable via empirical risk minimization.
no code implementations • NeurIPS 2015 • Bo Waggoner, Rafael Frongillo, Jacob D. Abernethy
We propose a mechanism for purchasing information from a sequence of participants. The participants may simply hold data points they wish to sell, or may have more sophisticated information; either way, they are incentivized to participate as long as they believe their data points are representative or their information will improve the mechanism's future prediction on a test set. The mechanism, which draws on the principles of prediction markets, has a bounded budget and minimizes generalization error for Bregman divergence loss functions. We then show how to modify this mechanism to preserve the privacy of participants' information: At any given time, the current prices and predictions of the mechanism reveal almost no information about any one participant, yet in total over all participants, information is accurately aggregated.
no code implementations • 23 Jun 2015 • Rafael Frongillo, Ian A. Kash
We lay the foundation for a general theory of elicitation complexity, including several basic results about how elicitation complexity behaves, and the complexity of standard properties of interest.
no code implementations • 30 Jul 2014 • Miroslav Dudík, Rafael Frongillo, Jennifer Wortman Vaughan
We study information elicitation in cost-function-based combinatorial prediction markets when the market maker's utility for information decreases over time.
no code implementations • NeurIPS 2013 • Jacob Abernethy, Peter L. Bartlett, Rafael Frongillo, Andre Wibisono
We consider a popular problem in finance, option pricing, through the lens of an online learning game between Nature and an Investor.