Search Results for author: Koby Crammer

Found 28 papers, 3 papers with code

Weighted Training for Cross-Task Learning

1 code implementation ICLR 2022 Shuxiao Chen, Koby Crammer, Hangfeng He, Dan Roth, Weijie J. Su

In this paper, we introduce Target-Aware Weighted Training (TAWT), a weighted training algorithm for cross-task learning based on minimizing a representation-based task distance between the source and target tasks.

Chunking Named Entity Recognition +6
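As a rough illustration of the idea only (not the paper's algorithm: the softmax form, the `temperature` knob, and the function names are assumptions), source tasks that are closer to the target under a representation-based task distance receive larger training weight:

```python
import numpy as np

def task_weights(distances, temperature=1.0):
    """Turn representation-based task distances into training weights:
    closer source tasks get larger weight (softmax over negative distance).
    `temperature` is a hypothetical knob, not from the paper."""
    logits = -np.asarray(distances, dtype=float) / temperature
    logits -= logits.max()                      # numerical stability
    w = np.exp(logits)
    return w / w.sum()

def weighted_loss(task_losses, weights):
    """Cross-task objective: one weight per source task times its loss."""
    return float(np.dot(weights, task_losses))

w = task_weights([0.1, 0.5, 2.0])   # task 0 is closest to the target
assert w[0] > w[1] > w[2]
```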

Finite Sample Analysis Of Dynamic Regression Parameter Learning

no code implementations13 Jun 2019 Mark Kozdoba, Edward Moroshko, Shie Mannor, Koby Crammer

The proposed bounds depend on the shape of a certain spectrum related to the system operator, and thus provide the first known explicit geometric parameter of the data that can be used to bound estimation errors.

regression

Multi Instance Learning For Unbalanced Data

no code implementations17 Dec 2018 Mark Kozdoba, Edward Moroshko, Lior Shani, Takuya Takagi, Takashi Katoh, Shie Mannor, Koby Crammer

In the context of Multi Instance Learning, we analyze the Single Instance (SI) learning objective.
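The SI objective can be sketched as the familiar reduction in which every instance simply inherits its bag's label, turning the multi-instance problem into a standard supervised one (function and variable names here are illustrative):

```python
def si_expand(bags, bag_labels):
    """Single Instance (SI) reduction: each instance inherits the label
    of the bag it belongs to."""
    X, y = [], []
    for bag, label in zip(bags, bag_labels):
        for instance in bag:
            X.append(instance)
            y.append(label)
    return X, y

# Two bags: a positive bag with two instances, a negative bag with one.
X, y = si_expand([[(1.0,), (2.0,)], [(5.0,)]], [+1, -1])
assert y == [+1, +1, -1]
```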

A Better Resource Allocation Algorithm with Semi-Bandit Feedback

no code implementations28 Mar 2018 Yuval Dagan, Koby Crammer

We study a sequential resource allocation problem between a fixed number of arms.

Efficient Loss-Based Decoding on Graphs For Extreme Classification

1 code implementation NeurIPS 2018 Itay Evron, Edward Moroshko, Koby Crammer

We build on a recent extreme classification framework with logarithmic time and space, and on a general approach to error-correcting output coding (ECOC) with loss-based decoding, and we introduce a flexible and efficient approach accompanied by theoretical bounds.

Classification General Classification

Rotting Bandits

no code implementations NeurIPS 2017 Nir Levine, Koby Crammer, Shie Mannor

In the classical MAB problem, a decision maker must choose an arm at each time step, upon which she receives a reward.

Multi-Armed Bandits
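The paper's departure from this classical setting is that an arm's expected reward decays ("rots") as the arm is pulled. A toy sketch of such an arm (the linear decay rate and noise scale are invented purely for illustration):

```python
import random

def rotting_pull(arm, pulls, decay=0.1):
    """Hypothetical rotting arm: the expected reward shrinks with the
    number of times the arm has been pulled so far."""
    mean = max(0.0, 1.0 - decay * pulls[arm])
    pulls[arm] += 1
    return mean + random.gauss(0.0, 0.01)

random.seed(0)
pulls = {0: 0, 1: 0}
rewards = [rotting_pull(0, pulls) for _ in range(5)]
assert rewards[0] > rewards[-1]   # the arm "rots" as it is pulled
```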

Bandits meet Computer Architecture: Designing a Smartly-allocated Cache

no code implementations31 Jan 2016 Yonatan Glassner, Koby Crammer

In many embedded systems, such as imaging systems, the system has a single designated purpose, and the same threads are executed repeatedly.

Multi-Armed Bandits

Linear Multi-Resource Allocation with Semi-Bandit Feedback

no code implementations NeurIPS 2015 Tor Lattimore, Koby Crammer, Csaba Szepesvari

In each time step the learner chooses an allocation of several resource types between a number of tasks.

Learn on Source, Refine on Target: A Model Transfer Learning Framework with Random Forests

2 code implementations4 Nov 2015 Noam Segev, Maayan Harel, Shie Mannor, Koby Crammer, Ran El-Yaniv

We propose novel model transfer-learning methods that refine a decision forest model M learned within a "source" domain using a training set sampled from a "target" domain, assumed to be a variation of the source.

Transfer Learning
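One simple way to refine a source model on the target, sketched here for a single decision stump rather than the paper's forest-level methods, is to keep the learned structure but re-estimate each leaf's label from the target examples that reach it (the dict-based stump representation is an assumption for illustration):

```python
def refine_stump(stump, target_X, target_y):
    """Keep the source stump's split; relabel each leaf by majority vote
    over target-domain examples routed to it. A simplified sketch of
    model transfer, not the paper's full method."""
    left = [y for x, y in zip(target_X, target_y) if x <= stump["threshold"]]
    right = [y for x, y in zip(target_X, target_y) if x > stump["threshold"]]
    refined = dict(stump)
    if left:
        refined["left"] = max(set(left), key=left.count)
    if right:
        refined["right"] = max(set(right), key=right.count)
    return refined

source = {"threshold": 0.5, "left": 0, "right": 1}   # learned on source
target = refine_stump(source, [0.2, 0.3, 0.9], [1, 1, 0])
assert target["left"] == 1 and target["right"] == 0  # leaves relabeled
```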

CONQUER: Confusion Queried Online Bandit Learning

no code implementations30 Oct 2015 Daniel Barsky, Koby Crammer

We present a new recommendation setting for picking out two items from a given set to be highlighted to a user, based on contextual input.

Belief Flows of Robust Online Learning

no code implementations26 May 2015 Pedro A. Ortega, Koby Crammer, Daniel D. Lee

This paper introduces a new probabilistic model for online learning which dynamically incorporates information from stochastic gradients of an arbitrary loss function.

General Classification regression +1

Learning Multiple Tasks in Parallel with a Shared Annotator

no code implementations NeurIPS 2014 Haim Cohen, Koby Crammer

We introduce a new multi-task framework, in which $K$ online learners are sharing a single annotator with limited bandwidth.

Binary Classification Document Classification +3

Outlier-Robust Convex Segmentation

no code implementations17 Nov 2014 Itamar Katz, Koby Crammer

We derive a convex optimization problem for the task of segmenting sequential data, which explicitly treats presence of outliers.

Segmentation

Optimal Resource Allocation with Semi-Bandit Feedback

no code implementations15 Jun 2014 Tor Lattimore, Koby Crammer, Csaba Szepesvári

We study a sequential resource allocation problem involving a fixed number of recurring jobs.

Selective Sampling with Drift

no code implementations17 Feb 2014 Edward Moroshko, Koby Crammer

Simulations on synthetic and real-world datasets demonstrate the superiority of our algorithms for selective sampling in the drifting setting.

Active Learning

Advice-Efficient Prediction with Expert Advice

no code implementations12 Apr 2013 Yevgeny Seldin, Peter Bartlett, Koby Crammer

Advice-efficient prediction with expert advice (in analogy to label-efficient prediction) is a variant of the prediction with expert advice game in which, on each round, we are allowed to ask for the advice of only a limited number $M$ of the $N$ experts.
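A minimal sketch of one round in this setting (querying the $M$ currently heaviest experts and using a fixed exponential-weights penalty are simplifications for illustration; the paper's querying scheme and guarantees differ):

```python
import math

def advice_efficient_round(weights, advice, M, true_label, eta=0.5):
    """Ask only M of the N experts for advice, predict by weighted vote
    over their answers, then exponentially downweight queried experts
    that were wrong."""
    queried = sorted(range(len(weights)), key=lambda i: -weights[i])[:M]
    vote = sum(weights[i] * advice[i] for i in queried)
    prediction = 1 if vote >= 0 else -1
    for i in queried:
        if advice[i] != true_label:
            weights[i] *= math.exp(-eta)
    return prediction, weights

w = [1.0, 1.0, 1.0]
pred, w = advice_efficient_round(w, [+1, +1, -1], M=2, true_label=+1)
assert pred == +1 and w == [1.0, 1.0, 1.0]   # queried experts were right
```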

A Generalized Online Mirror Descent with Applications to Classification and Regression

no code implementations10 Apr 2013 Francesco Orabona, Koby Crammer, Nicolò Cesa-Bianchi

A unifying perspective on the design and the analysis of online algorithms is provided by online mirror descent, a general prediction strategy from which most first-order algorithms can be obtained as special cases.

General Classification regression
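For intuition, an OMD step with the squared Euclidean mirror map reduces to plain online gradient descent; other mirror maps recover other first-order methods as special cases (this toy step is a sketch, not code from the paper):

```python
def omd_step(w, grad, eta=0.1):
    """One online mirror descent step under the squared Euclidean mirror
    map, for which the update is exactly online gradient descent."""
    return [wi - eta * gi for wi, gi in zip(w, grad)]

w = omd_step([0.0, 0.0], [1.0, -2.0])
assert w == [-0.1, 0.2]
```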

Volume Regularization for Binary Classification

no code implementations NeurIPS 2012 Koby Crammer, Tal Wagner

We introduce a large-volume box classification for binary prediction, which maintains a subset of weight vectors, specifically axis-aligned boxes.

Binary Classification Classification +3

Learning Multiple Tasks using Shared Hypotheses

no code implementations NeurIPS 2012 Koby Crammer, Yishay Mansour

In this work we consider a setting where we have a very large number of related tasks with few examples from each individual task.

Generalization Bounds

Learning via Gaussian Herding

no code implementations NeurIPS 2010 Koby Crammer, Daniel D. Lee

We introduce a new family of online learning algorithms based upon constraining the velocity flow over a distribution of weight vectors.

New Adaptive Algorithms for Online Classification

no code implementations NeurIPS 2010 Francesco Orabona, Koby Crammer

We propose a general framework for online learning in classification problems with time-varying potential functions in the adversarial setting.

Classification General Classification

Adaptive Regularization of Weight Vectors

no code implementations NeurIPS 2009 Koby Crammer, Alex Kulesza, Mark Dredze

We present AROW, a new online learning algorithm that combines several properties of successful online learning algorithms: large margin training, confidence weighting, and the capacity to handle non-separable data.
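A sketch of AROW's commonly stated update, written from its standard form (here `r` is the regularization parameter): a margin violation moves the mean toward the example while the covariance, representing confidence, shrinks along the example's direction.

```python
import numpy as np

def arow_update(mu, Sigma, x, y, r=1.0):
    """One AROW step for label y in {-1, +1}: hinge-type update of the
    mean mu, confidence update of the covariance Sigma."""
    m = float(mu @ x)            # current margin
    v = float(x @ Sigma @ x)     # variance (uncertainty) along x
    if y * m < 1.0:              # margin violated: update
        beta = 1.0 / (v + r)
        alpha = max(0.0, 1.0 - y * m) * beta
        Sx = Sigma @ x
        mu = mu + alpha * y * Sx
        Sigma = Sigma - beta * np.outer(Sx, Sx)
    return mu, Sigma

mu, Sigma = np.zeros(2), np.eye(2)
mu, Sigma = arow_update(mu, Sigma, np.array([1.0, 0.0]), +1)
assert mu[0] > 0 and Sigma[0, 0] < 1.0   # moved toward x, more confident
```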

Exact Convex Confidence-Weighted Learning

no code implementations NeurIPS 2008 Koby Crammer, Mark Dredze, Fernando Pereira

Confidence-weighted (CW) learning [6], an online learning method for linear classifiers, maintains a Gaussian distribution over weight vectors, with a covariance matrix that represents uncertainty about weights and correlations.

Learning Bounds for Domain Adaptation

no code implementations NeurIPS 2007 John Blitzer, Koby Crammer, Alex Kulesza, Fernando Pereira, Jennifer Wortman

Empirical risk minimization offers well-known learning guarantees when training and test data come from the same domain.

Domain Adaptation
