Search Results for author: Giulia Desalvo

Found 14 papers, 2 papers with code

Online Learning with Dependent Stochastic Feedback Graphs

no code implementations ICML 2020 Corinna Cortes, Giulia Desalvo, Claudio Gentile, Mehryar Mohri, Ningshan Zhang

A general framework for online learning with partial information is one where feedback graphs specify which losses can be observed by the learner.
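
A minimal sketch of the feedback-graph observation model this framework builds on (the graph, arm count, and loss draws below are illustrative, not taken from the paper):

```python
import random

# Feedback graph: playing arm i reveals the losses of all arms in G[i].
# A self-loop (i in G[i]) means the learner always observes its own loss.
G = {0: {0, 1}, 1: {1, 2}, 2: {2, 0}}

n_arms = 3
loss_sum = [0.0] * n_arms   # cumulative observed loss per arm
obs_count = [0] * n_arms    # number of observations per arm

for t in range(1000):
    # Greedy choice on estimated mean loss, for brevity; algorithms in
    # this literature add an exploration term on top.
    est = [loss_sum[i] / obs_count[i] if obs_count[i] else 0.0
           for i in range(n_arms)]
    arm = min(range(n_arms), key=lambda i: est[i])

    losses = [random.random() for _ in range(n_arms)]  # environment's losses

    # Key point: the learner observes the loss of every out-neighbor of
    # the played arm, not only the played arm itself.
    for j in G[arm]:
        loss_sum[j] += losses[j]
        obs_count[j] += 1
```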

SpacTor-T5: Pre-training T5 Models with Span Corruption and Replaced Token Detection

no code implementations24 Jan 2024 Ke Ye, Heinrich Jiang, Afshin Rostamizadeh, Ayan Chakrabarti, Giulia Desalvo, Jean-François Kagy, Lazaros Karydas, Gui Citovsky, Sanjiv Kumar

In this paper, we present SpacTor, a new training procedure consisting of (1) a hybrid objective combining span corruption (SC) and replaced token detection (RTD), and (2) a two-stage curriculum that optimizes the hybrid objective over the initial $\tau$ iterations, then transitions to the standard SC loss.
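
A minimal sketch of the two-stage curriculum described above (TAU, the mixing weight, and the function name are illustrative placeholders, not the paper's hyperparameters):

```python
TAU = 10_000       # curriculum switch point (illustrative)
RTD_WEIGHT = 1.0   # mixing weight for the RTD term (assumed)

def curriculum_loss(sc_loss: float, rtd_loss: float, step: int) -> float:
    """Hybrid SC + RTD objective for the first TAU steps, then the
    standard span-corruption loss alone."""
    if step < TAU:
        return sc_loss + RTD_WEIGHT * rtd_loss
    return sc_loss

# Early steps use the hybrid objective; later steps fall back to SC.
print(curriculum_loss(2.3, 0.7, step=100))      # 3.0
print(curriculum_loss(2.3, 0.7, step=50_000))   # 2.3
```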

Two-Step Active Learning for Instance Segmentation with Uncertainty and Diversity Sampling

no code implementations28 Sep 2023 Ke Yu, Stephen Albro, Giulia Desalvo, Suraj Kothawade, Abdullah Rashwan, Sasan Tavakkol, Kayhan Batmanghelich, Xiaoqi Yin

Training high-quality instance segmentation models requires an abundance of labeled images with instance masks and classifications, which is often expensive to procure.

Active Learning, Image Classification +3
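
The abstract snippet stops at the motivation, but the title points to a two-step pipeline; a generic sketch of uncertainty filtering followed by diversity sampling (k-means as the diversity step is a common stand-in, not necessarily the paper's choice):

```python
import numpy as np
from sklearn.cluster import KMeans

def two_step_select(uncertainty, embeddings, n_candidates, n_select, seed=0):
    """Step 1: keep the n_candidates most uncertain examples.
    Step 2: cluster their embeddings and take the most uncertain
    example from each cluster, so the batch stays diverse."""
    cand = np.argsort(-uncertainty)[:n_candidates]
    labels = KMeans(n_clusters=n_select, n_init=10,
                    random_state=seed).fit_predict(embeddings[cand])
    picked = []
    for c in range(n_select):
        members = cand[labels == c]
        if len(members):
            picked.append(int(members[np.argmax(uncertainty[members])]))
    return picked

# Toy usage with random scores and embeddings.
rng = np.random.default_rng(0)
u = rng.random(200)
emb = rng.normal(size=(200, 16))
print(two_step_select(u, emb, n_candidates=50, n_select=5))
```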

Leveraging Importance Weights in Subset Selection

no code implementations28 Jan 2023 Gui Citovsky, Giulia Desalvo, Sanjiv Kumar, Srikumar Ramalingam, Afshin Rostamizadeh, Yunjuan Wang

In such a setting, an algorithm can sample examples one at a time but, in order to limit overhead costs, is only able to update its state (i.e., further train model weights) once a large enough batch of examples has been selected.

Active Learning
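
A generic sketch of the setting described above: examples are sampled one at a time, each accepted example carries an importance weight, and the model state is only updated once a full batch has accumulated (the acceptance rule below is illustrative):

```python
import random

def stream_select(stream, accept_prob, batch_size):
    """Yield batches of (example, importance_weight) pairs; the weight
    1/p corrects for the non-uniform selection probabilities."""
    batch = []
    for x in stream:
        p = accept_prob(x)
        if p > 0 and random.random() < p:
            batch.append((x, 1.0 / p))
        if len(batch) == batch_size:
            yield batch          # the (expensive) state update happens here
            batch = []

# Toy usage: favor later examples, one weighted training step per batch of 4.
for batch in stream_select(range(100), lambda x: min(1.0, (x + 1) / 50), 4):
    pass                         # e.g., one weighted gradient step on `batch`
print("last full batch:", batch)
```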

Learning with Labeling Induced Abstentions

no code implementations NeurIPS 2021 Kareem Amin, Giulia Desalvo, Afshin Rostamizadeh

Consider a setting where we wish to automate an expensive task with a machine learning algorithm using a limited labeling resource.

Active Learning, BIG-bench Machine Learning +1
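
One way to picture this setting in code (the routing rule, threshold, and budget below are illustrative assumptions, not the paper's algorithm): the model handles what it is confident about, the limited labeling resource absorbs part of the rest, and everything beyond the budget is abstained on.

```python
def route(examples, confidence, predict, label_budget, threshold=0.8):
    """Automate confident predictions; send uncertain examples to the
    expensive labeler while the budget lasts, abstain afterwards."""
    outputs, remaining = [], label_budget
    for x in examples:
        if confidence(x) >= threshold:
            outputs.append(("model", predict(x)))
        elif remaining > 0:
            remaining -= 1
            outputs.append(("human", None))    # consumes the labeling resource
        else:
            outputs.append(("abstain", None))  # budget exhausted
    return outputs

# Toy usage with a budget of one human label.
print(route(range(5), confidence=lambda x: x / 4,
            predict=lambda x: x % 2, label_budget=1))
```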

Batch Active Learning at Scale

1 code implementation NeurIPS 2021 Gui Citovsky, Giulia Desalvo, Claudio Gentile, Lazaros Karydas, Anand Rajagopalan, Afshin Rostamizadeh, Sanjiv Kumar

The ability to train complex and highly effective models often requires an abundance of training data, which can easily become a bottleneck in cost, time, and computational resources.

Active Learning
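
A rough sketch of large-batch selection in this spirit, pairing margin-based uncertainty with clustering for diversity (the clustering method, pool factor, and round-robin rule are illustrative, not necessarily the paper's algorithm):

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

def cluster_margin_select(probs, embeddings, batch_size, pool_factor=10):
    """Take the lowest-margin candidates, cluster their embeddings, and
    round-robin over clusters so one mode cannot dominate the batch."""
    part = np.sort(probs, axis=1)
    margins = part[:, -1] - part[:, -2]            # small margin = uncertain
    cand = np.argsort(margins)[: batch_size * pool_factor]
    labels = fcluster(linkage(embeddings[cand], method="average"),
                      t=batch_size, criterion="maxclust")
    clusters = sorted(([int(i) for i in cand[labels == c]]
                       for c in np.unique(labels)), key=len)
    picked, i = [], 0
    while len(picked) < batch_size:
        cl = clusters[i % len(clusters)]
        if cl:
            picked.append(cl.pop(0))
        i += 1
    return picked

# Toy usage.
rng = np.random.default_rng(0)
probs = rng.dirichlet(np.ones(3), size=300)
emb = rng.normal(size=(300, 8))
print(cluster_margin_select(probs, emb, batch_size=5))
```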

Adaptive Region-Based Active Learning

no code implementations ICML 2020 Corinna Cortes, Giulia Desalvo, Claudio Gentile, Mehryar Mohri, Ningshan Zhang

We present a new active learning algorithm that adaptively partitions the input space into a finite number of regions and subsequently seeks a distinct predictor for each region, with both phases actively requesting labels.

Active Learning
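
A toy 1-D illustration of the partition-and-predict idea (the splitting rule, thresholds, and constant per-region predictors are drastic simplifications of the paper's algorithm, and labels here come from an array rather than being actively requested):

```python
import numpy as np

def fit_regions(x, y, impurity_thresh=0.2, min_size=10):
    """Recursively split intervals whose labels disagree too much, then
    attach a (here constant) predictor to each final region."""
    pending, out = [(x.min(), x.max())], []
    while pending:
        lo, hi = pending.pop()
        mask = (x >= lo) & (x <= hi)
        if mask.sum() == 0:
            continue
        p = y[mask].mean()                        # fraction of positives
        if min(p, 1 - p) > impurity_thresh and mask.sum() >= 2 * min_size:
            mid = (lo + hi) / 2                   # adaptive split (simplified)
            pending += [(lo, mid), (mid, hi)]
        else:
            out.append(((lo, hi), int(p >= 0.5)))  # region + its predictor
    return sorted(out)

rng = np.random.default_rng(0)
x = rng.random(500)
y = (x > 0.6).astype(int)   # in the actual setting these labels are requested
print(fit_regions(x, y))
```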

Discrepancy-Based Algorithms for Non-Stationary Rested Bandits

no code implementations29 Oct 2017 Corinna Cortes, Giulia Desalvo, Vitaly Kuznetsov, Mehryar Mohri, Scott Yang

We show that the notion of discrepancy can be used to design very general algorithms and a unified framework for the analysis of multi-armed rested bandit problems with non-stationary rewards.
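
A crude sketch of how a discrepancy-style quantity can gate how much history an arm's estimate uses (the window-mean gap below is an illustrative proxy, much simpler than the paper's discrepancy measure):

```python
import numpy as np

def estimate_mean(rewards, window=20, thresh=0.1):
    """Use the full history while old and new samples agree; fall back
    to the recent window once their discrepancy grows too large."""
    rewards = np.asarray(rewards, dtype=float)
    if len(rewards) <= window:
        return rewards.mean()
    recent, past = rewards[-window:], rewards[:-window]
    if abs(recent.mean() - past.mean()) > thresh:   # discrepancy proxy
        return recent.mean()      # non-stationary: discard stale samples
    return rewards.mean()         # stationary enough: keep everything

# The arm's mean shifts from 0.2 to 0.9; the estimate tracks the shift.
print(estimate_mean([0.2] * 100 + [0.9] * 30))   # 0.9
```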

Online Learning with Abstention

no code implementations ICML 2018 Corinna Cortes, Giulia Desalvo, Claudio Gentile, Mehryar Mohri, Scott Yang

In the stochastic setting, we first point out a bias problem that limits the straightforward extension of algorithms such as UCB-N to time-varying feedback graphs, as needed in this context.
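
For context, a minimal UCB-N-style sketch on a fixed feedback graph, where side observations from out-neighbors feed every arm's estimate (arms, means, and the graph are illustrative); the bias problem above arises when such side observations are reused naively as the graph changes over time:

```python
import math, random

G = {0: {0, 1}, 1: {1, 2}, 2: {2}}   # playing i reveals rewards of G[i]
means = [0.3, 0.5, 0.7]              # true Bernoulli means (illustrative)
n, s = [0] * 3, [0.0] * 3            # observation counts and reward sums

for t in range(1, 2001):
    ucb = [s[i] / n[i] + math.sqrt(2 * math.log(t) / n[i]) if n[i]
           else float("inf") for i in range(3)]
    arm = max(range(3), key=lambda i: ucb[i])
    for j in G[arm]:                 # update every observed neighbor
        s[j] += float(random.random() < means[j])
        n[j] += 1

print([round(s[i] / n[i], 2) for i in range(3)])   # empirical means
```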

Boosting with Abstention

no code implementations NeurIPS 2016 Corinna Cortes, Giulia Desalvo, Mehryar Mohri

We present a new boosting algorithm for the key scenario of binary classification with abstention, where the algorithm can abstain from predicting the label of a point at a fixed cost.

Binary Classification
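
The fixed-cost abstention setting is easy to state in code; a sketch of the confidence-threshold special case (the paper's boosting algorithm learns the predictor and the abstention rule jointly, which this does not show):

```python
def predict_with_abstention(score, threshold):
    """Abstain when the score is too close to the decision boundary."""
    if abs(score) < threshold:
        return None                       # abstain
    return 1 if score > 0 else -1

def abstention_loss(pred, label, cost):
    """Zero-one loss, with a fixed cost charged for abstaining."""
    return cost if pred is None else float(pred != label)

# Toy usage: abstaining costs 0.3, abstention band is |score| < 0.25.
for score, label in [(0.8, 1), (-0.1, 1), (-0.9, -1)]:
    pred = predict_with_abstention(score, threshold=0.25)
    print(pred, abstention_loss(pred, label, cost=0.3))
```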
