no code implementations • NeurIPS 2009 • Yoshinobu Kawahara, Kiyohito Nagano, Koji Tsuda, Jeff A. Bilmes
Several key problems in machine learning, such as feature selection and active learning, can be formulated as submodular set function maximization.
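As a toy illustration of this kind of formulation (not taken from the paper), the sketch below applies the classic greedy algorithm for monotone submodular maximization under a cardinality constraint to a hypothetical coverage objective; the item-to-element map and budget are made up for the example.

```python
# Minimal sketch: greedy maximization of a monotone submodular coverage
# function under a cardinality constraint (the classic greedy algorithm,
# which achieves a (1 - 1/e) approximation guarantee).

def coverage(selected, item_to_elements):
    """Number of distinct elements covered by the selected items (submodular)."""
    covered = set()
    for item in selected:
        covered |= item_to_elements[item]
    return len(covered)

def greedy_max(item_to_elements, budget):
    selected = []
    for _ in range(budget):
        base = coverage(selected, item_to_elements)
        best_item, best_gain = None, 0
        for item in item_to_elements:
            if item in selected:
                continue
            gain = coverage(selected + [item], item_to_elements) - base
            if gain > best_gain:
                best_item, best_gain = item, gain
        if best_item is None:   # no remaining item improves coverage
            break
        selected.append(best_item)
    return selected

# Hypothetical example: pick 2 "features", each covering a set of "labels".
items = {"a": {1, 2, 3}, "b": {3, 4}, "c": {4, 5, 6}}
print(greedy_max(items, budget=2))   # -> ['a', 'c']
```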
no code implementations • NeurIPS 2009 • Amarnag Subramanya, Jeff A. Bilmes
We prove certain theoretical properties of a graph-regularized transductive learning objective that is based on minimizing a Kullback-Leibler divergence-based loss.
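To make the kind of objective being referred to concrete, here is a hedged sketch (a generic KL-divergence-based graph-regularized loss, not necessarily the paper's exact formulation); the trade-off parameter `mu` and the toy graph are assumptions for the example.

```python
import numpy as np

def kl(p, q, eps=1e-12):
    """KL divergence between two discrete distributions."""
    return float(np.sum(p * (np.log(p + eps) - np.log(q + eps))))

def objective(P, labels, edges, weights, mu=1.0):
    """P: (n, k) row-stochastic label distributions over the vertices;
    labels: dict mapping labeled vertices to target distributions;
    edges/weights: weighted graph; mu: smoothness trade-off (assumed)."""
    fit = sum(kl(labels[i], P[i]) for i in labels)                   # labeled-vertex fit
    smooth = sum(w * kl(P[i], P[j]) for (i, j), w in zip(edges, weights))
    return fit + mu * smooth

# Toy usage: 3 vertices, 2 classes, one labeled vertex, two edges.
P = np.array([[0.9, 0.1], [0.6, 0.4], [0.5, 0.5]])
print(objective(P, {0: np.array([1.0, 0.0])}, [(0, 1), (1, 2)], [1.0, 0.5]))
```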
no code implementations • NeurIPS 2009 • Andrew Guillory, Jeff A. Bilmes
We investigate methods for selecting sets of labeled vertices for use in predicting the labels of vertices on a graph.
no code implementations • NeurIPS 2011 • Andrew Guillory, Jeff A. Bilmes
In each round, the learning algorithm chooses a sequence of items.
no code implementations • NeurIPS 2011 • Stefanie Jegelka, Hui Lin, Jeff A. Bilmes
We are motivated by an application to extract a representative subset of machine learning training data, and by the poor empirical performance we observe for the popular minimum-norm algorithm.
no code implementations • NeurIPS 2013 • Rishabh K. Iyer, Jeff A. Bilmes
We are motivated by a number of real-world applications in machine learning including sensor placement and data subset selection, which require maximizing a certain submodular function (like coverage or diversity) while simultaneously minimizing another (like cooperative cost).
no code implementations • 9 Aug 2014 • Rishabh Iyer, Jeff A. Bilmes
We show how a number of recently used web ranking models are forms of Lovász-Bregman rank aggregation, and we also observe that a natural form of the Mallows model using the LB divergence has been used as a conditional ranking model for the "Learning to Rank" problem.
no code implementations • 9 Aug 2014 • Rishabh Iyer, Jeff A. Bilmes
We extend the work of Narasimhan and Bilmes [30] on minimizing set functions representable as a difference between submodular functions.
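A hedged sketch (not the authors' implementation) of one simple strategy in this spirit: repeatedly replace f by a tight modular upper bound and g by a tight modular lower bound at the current set, then minimize the resulting modular difference exactly by keeping every element with negative weight. The toy functions f and g below (concave over modular, hence submodular) are hypothetical.

```python
import math

V = list(range(6))                      # toy ground set
w_f = {v: 1.0 + v for v in V}
w_g = {v: 0.4 + 0.5 * v for v in V}

def f(X): return math.sqrt(sum(w_f[v] for v in X))     # submodular (toy)
def g(X): return math.log1p(sum(w_g[v] for v in X))    # submodular (toy)

def modular_upper_bound(h, X):
    """Per-element weights of a modular function that upper-bounds h, tight at X."""
    return {j: (h(X) - h(X - {j})) if j in X else h({j}) for j in V}

def modular_lower_bound(h, X):
    """Edmonds-greedy subgradient of h: a modular lower bound, tight at X."""
    order = sorted(V, key=lambda j: j not in X)   # list the elements of X first
    b, S, prev = {}, set(), 0.0
    for j in order:
        S.add(j)
        b[j], prev = h(S) - prev, h(S)
    return b

X = set()
for _ in range(20):                                # a few local-search rounds
    a, b = modular_upper_bound(f, X), modular_lower_bound(g, X)
    new_X = {j for j in V if a[j] - b[j] < 0}      # exact modular minimization
    if new_X == X:
        break
    X = new_X

print("locally optimal set:", sorted(X), " f - g =", f(X) - g(X))
```

Such alternating bound-based procedures only guarantee convergence to a local optimum, consistent with the hardness of general difference-of-submodular minimization.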
no code implementations • 29 Oct 2014 • Shengjie Wang, John T. Halloran, Jeff A. Bilmes, William S. Noble
Liquid chromatography coupled with tandem mass spectrometry, also known as shotgun proteomics, is a widely-used high-throughput technology for identifying proteins in complex biological samples.
no code implementations • NeurIPS 2014 • Sebastian Tschiatschek, Rishabh K. Iyer, Haochen Wei, Jeff A. Bilmes
This paper provides, to our knowledge, the first systematic approach for quantifying the problem of image collection summarization, along with a new dataset of image collections and human summaries.
no code implementations • NeurIPS 2015 • Kai Wei, Rishabh K. Iyer, Shengjie Wang, Wenruo Bai, Jeff A. Bilmes
In the present paper, we bridge this gap by proposing several new algorithms (including greedy, majorization-minimization, minorization-maximization, and relaxation algorithms) that not only scale to large datasets but also achieve theoretical approximation guarantees comparable to the state of the art.
no code implementations • NeurIPS 2016 • Brian W. Dolhansky, Jeff A. Bilmes
We propose and study a new class of submodular functions called deep submodular functions (DSFs).
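As a rough, hypothetical illustration of the structure (not code from the paper): a DSF can be evaluated by summing nonnegative per-item feature vectors over the chosen subset and passing the result through layers of nonnegative weights and monotone concave activations, which preserves submodularity. The feature matrix, the weights, and the choice of square root as the concave activation below are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
n_items, n_feat = 8, 5
features = rng.random((n_items, n_feat))    # nonnegative per-item features (modular)
W1 = rng.random((n_feat, 4))                # nonnegative layer weights
W2 = rng.random((4, 1))

def dsf_value(subset):
    """Toy two-layer deep-submodular-style evaluation of a subset."""
    m = features[list(subset)].sum(axis=0)   # modular functions of the subset
    h1 = np.sqrt(m @ W1)                     # monotone concave activation
    h2 = np.sqrt(h1 @ W2)
    return float(h2.squeeze())

print(dsf_value({0, 2, 5}))
```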
no code implementations • NeurIPS 2018 • Wenruo Bai, William Stafford Noble, Jeff A. Bilmes
We study the problem of maximizing deep submodular functions (DSFs) subject to a matroid constraint.
no code implementations • NeurIPS 2018 • Tianyi Zhou, Shengjie Wang, Jeff A. Bilmes
We study a new method, "Diverse Ensemble Evolution" (DivE^2), for training an ensemble of machine learning models; it assigns data to models at each training epoch based on each model's current expertise and an intra- and inter-model diversity reward.
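A hedged toy sketch of the intuition only (not the DivE^2 algorithm itself): assign each example to the ensemble member that combines low loss on it (expertise) with a penalty for overloading any single model, a crude stand-in for a diversity reward. The trade-off parameter `lam` is an assumption.

```python
import numpy as np

def assign(losses, lam=0.1):
    """losses: (n_examples, n_models) per-model loss on each example."""
    n, m = losses.shape
    load = np.zeros(m)                        # fraction of data given to each model
    assignment = np.empty(n, dtype=int)
    for i in np.argsort(losses.min(axis=1)):  # easiest examples first
        score = losses[i] + lam * load        # expertise + diversity proxy
        k = int(np.argmin(score))
        assignment[i] = k
        load[k] += 1.0 / n
    return assignment

rng = np.random.default_rng(0)
print(assign(rng.random((10, 3))))            # toy per-example model assignments
```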
no code implementations • 25 Sep 2019 • Tianyi Zhou, Shengjie Wang, Jeff A. Bilmes
The advantages of DIHCL over other curriculum learning approaches are: (1) DIHCL does not require additional inference steps over the data it does not select in each epoch, and (2) dynamic instance hardness is more stable than static instance hardness (e.g., instantaneous loss) because it integrates information over the entire training history up to the present time.
no code implementations • NeurIPS 2020 • Tianyi Zhou, Shengjie Wang, Jeff A. Bilmes
Compared to existing CL methods: (1) DIH is more stable over time than using only instantaneous hardness, which is noisy due to stochastic training and the non-smoothness of DNNs; (2) DIHCL is computationally inexpensive since it uses only a byproduct of back-propagation and thus does not require extra inference.
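A hedged sketch of the core idea (hypothetical code, not the authors'): maintain an exponential moving average of each example's instantaneous hardness, here its loss, which is already available as a byproduct of back-propagation, and prefer examples with large running hardness when building the next epoch's subset. The discount factor and subset size are assumed values.

```python
import numpy as np

n_examples = 1000
dih = np.zeros(n_examples)          # running hardness per example
gamma = 0.9                         # EMA discount factor (assumed)

def update_dih(indices, losses):
    """Update the running hardness of the examples seen in this step."""
    dih[indices] = gamma * dih[indices] + (1.0 - gamma) * losses

def select_subset(k):
    """Pick the k currently hardest examples for the next epoch."""
    return np.argsort(-dih)[:k]

# Toy usage with random numbers standing in for the model's per-example loss.
rng = np.random.default_rng(0)
for _ in range(5):                  # a few simulated epochs
    idx = select_subset(256) if dih.any() else np.arange(n_examples)
    update_dih(idx, rng.random(idx.shape[0]))
print("hardest examples:", select_subset(10))
```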
no code implementations • NeurIPS 2021 • Shengjie Wang, Tianyi Zhou, Chandrashekhar Lavania, Jeff A. Bilmes
Robust submodular partitioning promotes the diversity of every block in the partition.