no code implementations • 9 May 2022 • Claudia Roberts, Maria Dimakopoulou, Qifeng Qiao, Ashok Chandrashekhar, Tony Jebara
These online learning frameworks learn a treatment assignment policy in the presence of treatment effects that vary with the observed contextual features of the users.
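A minimal sketch of one such online policy: an epsilon-greedy contextual bandit with a per-treatment linear reward model. The feature dimension, epsilon, and the least-squares update are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
n_treatments, dim, epsilon = 3, 5, 0.1
# One ridge-regression reward model per treatment (illustrative choice).
A = [np.eye(dim) for _ in range(n_treatments)]    # accumulates X^T X + I
b = [np.zeros(dim) for _ in range(n_treatments)]  # accumulates X^T y

def choose(context):
    """Epsilon-greedy over per-treatment predicted rewards."""
    if rng.random() < epsilon:
        return int(rng.integers(n_treatments))
    scores = [context @ np.linalg.solve(A[t], b[t]) for t in range(n_treatments)]
    return int(np.argmax(scores))

def update(treatment, context, reward):
    """Online least-squares update for the chosen treatment's model."""
    A[treatment] += np.outer(context, context)
    b[treatment] += reward * context

for _ in range(1000):
    x = rng.normal(size=dim)
    t = choose(x)
    r = float(x[t] > 0) + 0.1 * rng.normal()      # synthetic heterogeneous reward
    update(t, x, r)
```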
no code implementations • 24 Mar 2021 • Jingxi Xu, Da Tang, Tony Jebara
The cost of annotating training data has traditionally been a bottleneck for supervised learning approaches.
no code implementations • 14 Jun 2019 • Da Tang, Dawen Liang, Nicholas Ruozzi, Tony Jebara
Variational Auto-Encoders (VAEs) have been widely applied for learning compact, low-dimensional latent representations of high-dimensional data.
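For context, the core VAE computation is the reparameterized ELBO with a diagonal-Gaussian encoder. The sketch below (plain NumPy, toy linear decoder) is a generic illustration of that computation, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)

def elbo_estimate(x, mu, log_var, decode):
    """One-sample ELBO estimate for a diagonal-Gaussian VAE posterior."""
    eps = rng.normal(size=mu.shape)
    z = mu + np.exp(0.5 * log_var) * eps          # reparameterization trick
    x_hat = decode(z)
    recon = -np.sum((x - x_hat) ** 2)             # Gaussian log-likelihood up to a constant
    kl = 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)
    return recon - kl

# Toy linear "decoder" just to exercise the computation end to end.
W = rng.normal(size=(4, 2))
print(elbo_estimate(x=rng.normal(size=4), mu=np.zeros(2),
                    log_var=np.zeros(2), decode=lambda z: W @ z))
```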
1 code implementation • NeurIPS 2019 • Andrew Stirn, Tony Jebara, David A. Knowles
We construct a new distribution for the simplex using the Kumaraswamy distribution and an ordered stick-breaking process.
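A rough sketch of the construction: draw stick fractions from a Kumaraswamy distribution via its closed-form inverse CDF and fold them into a point on the simplex. The parameter values are arbitrary, and the ordering logic of the paper's ordered stick-breaking process is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def kumaraswamy(a, b, size):
    """Inverse-CDF sampling: F(x) = 1 - (1 - x^a)^b."""
    u = rng.uniform(size=size)
    return (1.0 - (1.0 - u) ** (1.0 / b)) ** (1.0 / a)

def stick_breaking(a, b, k):
    """Map k-1 stick fractions to a point on the (k-1)-simplex."""
    v = kumaraswamy(a, b, k - 1)
    remaining = np.concatenate([[1.0], np.cumprod(1.0 - v)])
    return np.append(v, 1.0) * remaining

w = stick_breaking(a=2.0, b=3.0, k=5)
print(w, w.sum())  # the weights are nonnegative and sum to 1
```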
2 code implementations • ICLR Workshop DeepGenStruct 2019 • Da Tang, Dawen Liang, Tony Jebara, Nicholas Ruozzi
Variational Auto-Encoders (VAEs) are capable of learning latent representations for high-dimensional data.
no code implementations • 9 May 2019 • David Hubbard, Benoit Rostykus, Yves Raimond, Tony Jebara
This article analyzes the problem of estimating the time until an event occurs, also known as survival modeling.
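As a toy illustration of survival estimation (not the article's models): the maximum-likelihood hazard rate for an exponential lifetime model with right censoring, and the implied survival curve.

```python
import numpy as np

# Observed durations and event flags (1 = event occurred, 0 = right-censored).
times = np.array([2.0, 5.0, 3.5, 8.0, 1.2, 6.3])
events = np.array([1, 0, 1, 1, 0, 1])

# Exponential-model MLE: hazard = number of events / total time at risk.
hazard = events.sum() / times.sum()

def survival(t):
    """P(T > t) under the fitted exponential model."""
    return np.exp(-hazard * t)

print(f"hazard={hazard:.3f}, S(4)={survival(4.0):.3f}")
```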
no code implementations • 3 Dec 2018 • Andrew Stirn, Tony Jebara
Thompson sampling, a Bayesian method for balancing exploration and exploitation in bandit problems, has theoretical guarantees and exhibits strong empirical performance in many domains.
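The standard Beta-Bernoulli instance of Thompson sampling, for reference (the paper's setting and priors may differ):

```python
import numpy as np

rng = np.random.default_rng(0)
true_rates = [0.3, 0.55, 0.6]          # hidden Bernoulli arm means
alpha = np.ones(3)                     # Beta(1, 1) priors over each arm's mean
beta = np.ones(3)

for _ in range(5000):
    theta = rng.beta(alpha, beta)      # one posterior sample per arm
    arm = int(np.argmax(theta))        # play the arm with the best sampled mean
    reward = rng.random() < true_rates[arm]
    alpha[arm] += reward               # conjugate Beta posterior update
    beta[arm] += 1 - reward

print("posterior means:", alpha / (alpha + beta))
```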
no code implementations • 17 Jul 2018 • Giannis Karamanolakis, Kevin Raji Cherian, Ananth Ravi Narayan, Jie Yuan, Da Tang, Tony Jebara
In recent years, Variational Autoencoders (VAEs) have been shown to be highly effective in both standard collaborative filtering applications and extensions such as the incorporation of implicit feedback.
no code implementations • EMNLP 2018 • Da Tang, Xiujun Li, Jianfeng Gao, Chong Wang, Lihong Li, Tony Jebara
Experiments with simulated and real users show that our approach performs competitively against a state-of-the-art method that requires human-defined subgoals.
no code implementations • 16 Apr 2018 • Tony Jebara
The bound is strictly sharper in the homogeneous setting and very often significantly sharper in the heterogeneous setting.
16 code implementations • 16 Feb 2018 • Dawen Liang, Rahul G. Krishnan, Matthew D. Hoffman, Tony Jebara
We introduce a generative model with a multinomial likelihood and use Bayesian inference for parameter estimation. This non-linear probabilistic model lets us go beyond the limited modeling capacity of the linear factor models that still largely dominate collaborative filtering research; a sketch of the multinomial likelihood term follows below.
Ranked #4 on Recommendation Systems on Million Song Dataset
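The multinomial likelihood term amounts to a softmax cross-entropy against a user's item click counts. A minimal NumPy sketch with a toy decoder output (the paper's deep encoder/decoder architecture is not shown here):

```python
import numpy as np

def multinomial_log_likelihood(x, logits):
    """log p(x | z) up to the multinomial coefficient:
    sum_i x_i * log softmax(logits)_i, with a stable log-sum-exp."""
    m = logits.max()
    log_probs = logits - (m + np.log(np.exp(logits - m).sum()))
    return float(x @ log_probs)

x = np.array([0., 2., 1., 0., 3.])             # a user's item click counts
logits = np.array([0.1, 1.5, 0.3, -0.2, 2.0])  # toy decoder output f(z)
print(multinomial_log_likelihood(x, logits))
```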
no code implementations • 2 Nov 2016 • Da Tang, Tony Jebara
We consider the problem of consistently matching multiple sets of elements to each other, which is a common task in fields such as computer vision.
1 code implementation • 25 Oct 2016 • Gauthier Gidel, Tony Jebara, Simon Lacoste-Julien
We extend the Frank-Wolfe (FW) optimization algorithm to solve constrained smooth convex-concave saddle point (SP) problems.
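For background, here is the basic projection-free FW step for smooth convex minimization over the simplex, the building block the paper extends to saddle points. The quadratic objective and step-size schedule are illustrative choices.

```python
import numpy as np

def frank_wolfe_simplex(grad, x0, steps=200):
    """Minimize a smooth convex f over the probability simplex, projection-free."""
    x = x0.copy()
    for k in range(steps):
        g = grad(x)
        s = np.zeros_like(x)
        s[np.argmin(g)] = 1.0             # linear minimization oracle on the simplex
        gamma = 2.0 / (k + 2.0)           # standard FW step-size schedule
        x = (1 - gamma) * x + gamma * s   # convex combination stays feasible
    return x

# Example: minimize ||x - c||^2 over the simplex.
c = np.array([0.1, 0.7, 0.2, -0.3])
print(frank_wolfe_simplex(lambda x: 2 * (x - c), np.full(4, 0.25)))
```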
no code implementations • 16 Nov 2015 • Anna Choromanska, Krzysztof Choromanski, Mariusz Bojarski, Tony Jebara, Sanjiv Kumar, Yann LeCun
We prove several theoretical results showing that projections via various structured matrices followed by nonlinear mappings accurately preserve the angular distance between input high-dimensional vectors.
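The underlying angle-preservation fact is easy to check empirically with an unstructured Gaussian projection followed by a sign nonlinearity; the paper's contribution concerns replacing the Gaussian matrix with structured ones, which this sketch does not do.

```python
import numpy as np

rng = np.random.default_rng(0)
d, m = 256, 4096
x, y = rng.normal(size=d), rng.normal(size=d)

P = rng.normal(size=(m, d))                  # dense Gaussian projection (unstructured)
hx, hy = np.sign(P @ x), np.sign(P @ y)

# For sign projections, P(signs differ) = angle(x, y) / pi.
est_angle = np.pi * np.mean(hx != hy)
true_angle = np.arccos(x @ y / (np.linalg.norm(x) * np.linalg.norm(y)))
print(f"estimated {est_angle:.3f} vs true {true_angle:.3f}")
```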
no code implementations • 4 Mar 2015 • Kui Tang, Nicholas Ruozzi, David Belanger, Tony Jebara
Many machine learning tasks can be formulated in terms of predicting structured outputs.
no code implementations • NeurIPS 2014 • Nicholas Ruozzi, Tony Jebara
The latter has better convergence properties but typically provides poorer estimates.
no code implementations • NeurIPS 2014 • Adrian Weller, Tony Jebara
It was recently proved using graph covers (Ruozzi, 2012) that the Bethe partition function is upper bounded by the true partition function for a binary pairwise model that is attractive.
1 code implementation • 24 Feb 2014 • Felix X. Yu, Krzysztof Choromanski, Sanjiv Kumar, Tony Jebara, Shih-Fu Chang
Learning from Label Proportions (LLP) is a learning setting where the training data is provided in groups, or "bags," and only the proportion of each class in each bag is known.
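A simple way to see the LLP objective: compare the average of a model's predicted class probabilities over each bag to that bag's known proportion. The squared-error form below is a generic illustration, not the paper's ∝SVM objective.

```python
import numpy as np

def bag_proportion_loss(probs, bag_ids, bag_props):
    """Squared error between the mean predicted positive-class probability
    in each bag and that bag's known positive proportion."""
    loss = 0.0
    for b, target in bag_props.items():
        mask = bag_ids == b
        loss += (probs[mask].mean() - target) ** 2
    return loss

probs = np.array([0.9, 0.2, 0.7, 0.4, 0.1, 0.8])  # per-instance P(y=1) from some model
bag_ids = np.array([0, 0, 0, 1, 1, 1])
bag_props = {0: 2 / 3, 1: 1 / 3}                  # known class proportions per bag
print(bag_proportion_loss(probs, bag_ids, bag_props))
```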
no code implementations • 30 Dec 2013 • Adrian Weller, Tony Jebara
When belief propagation (BP) converges, it does so to a stationary point of the Bethe free energy $F$, and is often strikingly accurate.
no code implementations • NeurIPS 2013 • Josh S. Merel, Roy Fox, Tony Jebara, Liam Paninski
In a closed-loop brain-computer interface (BCI), adaptive decoders are used to learn parameters suited to decoding the user's neural response.
no code implementations • NeurIPS 2013 • Krzysztof M. Choromanski, Tony Jebara, Kui Tang
The adaptive anonymity problem is formalized: each individual shares their data along with an integer value indicating their personal level of desired privacy.
no code implementations • 22 Sep 2013 • Anna Choromanska, Tony Jebara
Recently, a majorization method for optimizing partition functions of log-linear models was proposed alongside a novel quadratic variational upper bound.
no code implementations • 5 Sep 2013 • Aleksandr Y. Aravkin, Anna Choromanska, Tony Jebara, Dimitri Kanevsky
Batch methods based on the quadratic bound were recently proposed for this class of problems, and performed favorably in comparison to state-of-the-art techniques.
no code implementations • 4 Jun 2013 • Felix X. Yu, Dong Liu, Sanjiv Kumar, Tony Jebara, Shih-Fu Chang
We study the problem of learning with label proportions in which the training data is provided in groups and only the proportion of each class in each group is known.
no code implementations • NeurIPS 2012 • Tony Jebara, Anna Choromanska
The partition function plays a key role in probabilistic modeling including conditional random fields, graphical models, and maximum likelihood estimation.
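For a small discrete log-linear model, the partition function can be computed exactly by enumeration with a log-sum-exp for numerical stability. This brute-force baseline (not the paper's bound) shows the quantity being approximated.

```python
import itertools
import numpy as np

theta = np.array([0.5, -1.0, 2.0])   # log-linear parameters

def features(state):
    """Toy binary features: the state vector itself."""
    return np.array(state, dtype=float)

# log Z = log sum over all binary states of exp(theta . phi(x))
scores = np.array([theta @ features(s)
                   for s in itertools.product([0, 1], repeat=3)])
log_z = scores.max() + np.log(np.exp(scores - scores.max()).sum())
print(f"log partition function: {log_z:.4f}")
```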
no code implementations • NeurIPS 2011 • Blake Shaw, Bert Huang, Tony Jebara
To better model and understand these networks, we present structure preserving metric learning (SPML), an algorithm for learning a Mahalanobis distance metric from a network such that the learned distances are tied to the inherent connectivity structure of the network.
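The object being learned is a Mahalanobis metric: given a positive semidefinite matrix M, distances are computed as below. The SPML optimization itself (structure-preserving constraints over the graph) is not shown.

```python
import numpy as np

rng = np.random.default_rng(0)
L = rng.normal(size=(3, 3))
M = L.T @ L                          # any L^T L is positive semidefinite

def mahalanobis(x, y, M):
    """d_M(x, y) = sqrt((x - y)^T M (x - y))."""
    diff = x - y
    return float(np.sqrt(diff @ M @ diff))

x, y = rng.normal(size=3), rng.normal(size=3)
print(mahalanobis(x, y, M))
```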
no code implementations • NeurIPS 2011 • Pannagadatta K. Shivaswamy, Tony Jebara
Thus, the proposed algorithm solves a key limitation of previous empirical Bernstein boosting methods which required brute force enumeration of all possible weak learners.
no code implementations • NeurIPS 2008 • Tony Jebara, Pannagadatta K. Shivaswamy
In classification problems, Support Vector Machines maximize the margin of separation between two classes.
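For a linear SVM the margin width is 2/||w||; a quick check with scikit-learn on toy separable data (the library choice and data are mine, not the paper's):

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Two roughly linearly separable Gaussian blobs.
X = np.vstack([rng.normal(loc=-2, size=(50, 2)),
               rng.normal(loc=2, size=(50, 2))])
y = np.array([0] * 50 + [1] * 50)

clf = SVC(kernel="linear", C=1e3).fit(X, y)   # large C approximates a hard margin
w = clf.coef_[0]
print(f"margin width: {2 / np.linalg.norm(w):.4f}")
```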