1 code implementation • ICLR 2022 • Shuxiao Chen, Koby Crammer, Hangfeng He, Dan Roth, Weijie J. Su
In this paper, we introduce Target-Aware Weighted Training (TAWT), a weighted training algorithm for cross-task learning based on minimizing a representation-based task distance between the source and target tasks.
no code implementations • 13 Jun 2019 • Mark Kozdoba, Edward Moroshko, Shie Mannor, Koby Crammer
The proposed bounds depend on the shape of a certain spectrum related to the system operator, and thus provide the first known explicit geometric parameter of the data that can be used to bound estimation errors.
no code implementations • 17 Dec 2018 • Mark Kozdoba, Edward Moroshko, Lior Shani, Takuya Takagi, Takashi Katoh, Shie Mannor, Koby Crammer
In the context of Multi Instance Learning, we analyze the Single Instance (SI) learning objective.
no code implementations • 27 Sep 2018 • Eliyahu Sason, Koby Crammer
The second, the zero-shot learning problem, deals with making reasonable inferences on novel classes.
no code implementations • 28 Mar 2018 • Yuval Dagan, Koby Crammer
We study a sequential resource allocation problem between a fixed number of arms.
1 code implementation • NeurIPS 2018 • Itay Evron, Edward Moroshko, Koby Crammer
We build on a recent extreme classification framework with logarithmic time and space, and on a general approach for error correcting output coding (ECOC) with loss-based decoding, and introduce a flexible and efficient approach accompanied by theoretical bounds.
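To make the decoding step concrete, here is a minimal sketch of generic ECOC with loss-based decoding: each class is assigned a ±1 code word, one binary scorer handles each code bit, and prediction picks the class whose code word incurs the smallest accumulated margin loss. The random code, the logistic loss, and the randomly initialized linear scorers (standing in for trained ones) are illustrative assumptions, not the paper's logarithmic-time construction.

```python
import numpy as np

# Hedged sketch of generic error-correcting output coding (ECOC) with
# loss-based decoding; not the paper's specific extreme-classification method.
rng = np.random.default_rng(0)
n_classes, code_len, n_features = 8, 6, 16
codes = rng.choice([-1.0, 1.0], size=(n_classes, code_len))  # class code words
W = rng.normal(size=(code_len, n_features))                  # per-bit linear scorers (assumed trained)

def decode(x, loss=lambda z: np.log1p(np.exp(-z))):
    """Loss-based decoding: choose the class whose code word incurs the
    smallest total margin loss against the bit scorers' real-valued outputs."""
    margins = W @ x                              # one real-valued score per code bit
    losses = loss(codes * margins)               # loss of each bit under each class hypothesis
    return int(np.argmin(losses.sum(axis=1)))    # class with minimal accumulated loss

print(decode(rng.normal(size=n_features)))
```

Loss-based decoding uses the scorers' confidences rather than hard bit decisions, which is what makes the approach robust when individual bit predictors are noisy.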
no code implementations • NeurIPS 2017 • Nir Levine, Koby Crammer, Shie Mannor
In the classical MAB problem, a decision maker must choose an arm at each time step, upon which she receives a reward.
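For readers unfamiliar with the setting, the sketch below runs a standard UCB1 loop for the classical stochastic bandit protocol (choose an arm, observe its reward, update estimates). The Bernoulli arm means are made up, and this textbook baseline is only an illustration of the classical problem, not the variant or algorithm studied in the paper.

```python
import numpy as np

# Illustrative UCB1 loop for the classical stochastic multi-armed bandit.
rng = np.random.default_rng(1)
true_means = np.array([0.2, 0.5, 0.7])           # hypothetical Bernoulli arm means
counts = np.zeros(3)
sums = np.zeros(3)

for t in range(1, 1001):
    if t <= 3:
        arm = t - 1                               # pull each arm once to initialize
    else:
        ucb = sums / counts + np.sqrt(2 * np.log(t) / counts)
        arm = int(np.argmax(ucb))                 # optimism in the face of uncertainty
    reward = rng.binomial(1, true_means[arm])     # observe the chosen arm's reward
    counts[arm] += 1
    sums[arm] += reward

print(counts)  # pulls should concentrate on the best arm over time
```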
no code implementations • 31 Jan 2016 • Yonatan Glassner, Koby Crammer
In many embedded systems, such as imaging systems, the system has a single designated purpose, and the same threads are executed repeatedly.
no code implementations • NeurIPS 2015 • Tor Lattimore, Koby Crammer, Csaba Szepesvari
In each time step the learner chooses an allocation of several resource types between a number of tasks.
2 code implementations • 4 Nov 2015 • Noam Segev, Maayan Harel, Shie Mannor, Koby Crammer, Ran El-Yaniv
We propose novel model transfer-learning methods that refine a decision forest model M learned within a "source" domain using a training set sampled from a "target" domain, assumed to be a variation of the source.
no code implementations • 30 Oct 2015 • Daniel Barsky, Koby Crammer
We present a new recommendation setting for picking out two items from a given set to be highlighted to a user, based on contextual input.
no code implementations • 26 May 2015 • Pedro A. Ortega, Koby Crammer, Daniel D. Lee
This paper introduces a new probabilistic model for online learning which dynamically incorporates information from stochastic gradients of an arbitrary loss function.
no code implementations • NeurIPS 2014 • Haim Cohen, Koby Crammer
We introduce a new multi-task framework, in which $K$ online learners are sharing a single annotator with limited bandwidth.
no code implementations • 17 Nov 2014 • Itamar Katz, Koby Crammer
We derive a convex optimization problem for the task of segmenting sequential data, which explicitly treats the presence of outliers.
no code implementations • 15 Jun 2014 • Tor Lattimore, Koby Crammer, Csaba Szepesvári
We study a sequential resource allocation problem involving a fixed number of recurring jobs.
no code implementations • 17 Feb 2014 • Edward Moroshko, Koby Crammer
Simulations on synthetic and real-world datasets demonstrate the superiority of our algorithms for selective sampling in the drifting setting.
no code implementations • 12 Apr 2013 • Yevgeny Seldin, Peter Bartlett, Koby Crammer
Advice-efficient prediction with expert advice (in analogy to label-efficient prediction) is a variant of the prediction with expert advice game, where on each round we are allowed to ask for the advice of only a limited number $M$ out of $N$ experts.
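A natural baseline for this protocol, sketched below under stated assumptions, is to query a uniformly random subset of $M$ experts each round and run Hedge on importance-weighted loss estimates. This is only an illustration of the game; it is not the algorithm or the bounds developed in the paper, and the loss rates and parameters are invented.

```python
import numpy as np

# Hedged illustration of the advice-efficient experts protocol: query only M of
# N experts per round, then apply Hedge to unbiased importance-weighted losses.
rng = np.random.default_rng(3)
N, M, T, eta = 10, 3, 500, 0.1
true_loss = rng.uniform(0.2, 0.8, size=N)          # hypothetical per-expert loss rates
w = np.ones(N)

for t in range(T):
    queried = rng.choice(N, size=M, replace=False)  # ask M experts for advice
    losses = rng.binomial(1, true_loss[queried])    # observe losses of queried experts only
    est = np.zeros(N)
    est[queried] = losses * (N / M)                 # unbiased importance-weighted estimates
    w *= np.exp(-eta * est)                         # Hedge update on the estimates
    w /= w.sum()

print(int(np.argmax(w)), int(true_loss.argmin()))   # the weights should favor the best expert
```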
no code implementations • 10 Apr 2013 • Francesco Orabona, Koby Crammer, Nicolò Cesa-Bianchi
A unifying perspective on the design and the analysis of online algorithms is provided by online mirror descent, a general prediction strategy from which most first-order algorithms can be obtained as special cases.
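As a concrete special case, the sketch below instantiates online mirror descent with the negative-entropy mirror map on the probability simplex, which recovers the exponentiated-gradient (multiplicative-weights) update. The step size and the linear losses are illustrative assumptions; the paper's unified analysis covers many more instances.

```python
import numpy as np

# Minimal online mirror descent with the negative-entropy mirror map on the
# simplex, i.e. the exponentiated-gradient special case. Illustration only.
def omd_entropy(loss_grads, eta=0.1):
    d = loss_grads[0].shape[0]
    w = np.ones(d) / d                  # start at the uniform distribution
    for g in loss_grads:                # one linear loss gradient per round
        w = w * np.exp(-eta * g)        # mirror step: gradient update in the dual space
        w /= w.sum()                    # Bregman projection back onto the simplex
    return w

grads = [np.array([0.3, 0.1, 0.5]) for _ in range(50)]
print(omd_entropy(grads))               # weight shifts toward the low-loss coordinate
```

Swapping the mirror map for the squared Euclidean norm turns the same loop into online gradient descent, which is the sense in which most first-order methods arise as special cases.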
no code implementations • NeurIPS 2012 • Koby Crammer, Tal Wagner
We introduce a large-volume box classifier for binary prediction, which maintains a subset of weight vectors, specifically axis-aligned boxes.
no code implementations • NeurIPS 2012 • Koby Crammer, Yishay Mansour
In this work we consider a setting where we have a very large number of related tasks with few examples from each individual task.
no code implementations • NeurIPS 2010 • Koby Crammer, Daniel D. Lee
We introduce a new family of online learning algorithms based upon constraining the velocity flow over a distribution of weight vectors.
no code implementations • NeurIPS 2010 • Francesco Orabona, Koby Crammer
We propose a general framework for online learning for classification problems with time-varying potential functions in the adversarial setting.
no code implementations • NeurIPS 2009 • Koby Crammer, Alex Kulesza, Mark Dredze
We present AROW, a new online learning algorithm that combines several properties of successful online learning algorithms: large margin training, confidence weighting, and the capacity to handle non-separable data.
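For reference, the sketch below follows the standard published AROW update rules (full-covariance variant): a margin-based update condition, a confidence-scaled step on the mean, and covariance shrinkage along the example direction. The regularization parameter r and the synthetic data are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of the standard AROW (Adaptive Regularization of Weight
# Vectors) update; hyperparameters and data below are illustrative only.
def arow_fit(X, y, r=1.0):
    d = X.shape[1]
    mu = np.zeros(d)                   # mean weight vector
    Sigma = np.eye(d)                  # covariance encoding per-weight confidence
    for x, yt in zip(X, y):            # yt in {-1, +1}
        margin = yt * (mu @ x)
        v = x @ Sigma @ x              # prediction variance (confidence)
        if margin < 1.0:               # update only on margin errors
            beta = 1.0 / (v + r)
            alpha = (1.0 - margin) * beta
            mu = mu + alpha * yt * (Sigma @ x)
            Sigma = Sigma - beta * np.outer(Sigma @ x, Sigma @ x)
    return mu, Sigma

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 5))
y = np.sign(X @ np.array([1.0, -1.0, 0.5, 0.3, 2.0]))
mu, _ = arow_fit(X, y)
print(np.mean(np.sign(X @ mu) == y))   # training accuracy of the learned mean vector
```

Shrinking the covariance only along observed directions is what lets AROW keep large, adaptive steps for rarely seen features while remaining robust to label noise on non-separable data.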
no code implementations • NeurIPS 2008 • Koby Crammer, Mark Dredze, Fernando Pereira
Confidence-weighted (CW) learning [6], an online learning method for linear classifiers, maintains a Gaussian distribution over weight vectors, with a covariance matrix that represents uncertainty about weights and correlations.
no code implementations • NeurIPS 2007 • John Blitzer, Koby Crammer, Alex Kulesza, Fernando Pereira, Jennifer Wortman
Empirical risk minimization offers well-known learning guarantees when training and test data come from the same domain.