Search Results for author: Jeff A. Bilmes

Found 17 papers, 0 papers with code

Submodularity Cuts and Applications

no code implementations NeurIPS 2009 Yoshinobu Kawahara, Kiyohito Nagano, Koji Tsuda, Jeff A. Bilmes

Several key problems in machine learning, such as feature selection and active learning, can be formulated as submodular set function maximization.

Active Learning feature selection
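
Since several entries on this page cast feature selection and active learning as submodular maximization, a generic sketch may help orient readers. This is the classic cardinality-constrained greedy heuristic for monotone submodular functions, not the submodularity-cuts method of the paper above; the coverage objective is a made-up example.

```python
# Generic greedy heuristic for monotone submodular maximization under a
# cardinality constraint. The coverage objective below is a toy example;
# this is not the cut-based method of the paper above.

def greedy_max(f, ground_set, k):
    """Repeatedly add the element with the largest marginal gain."""
    selected = set()
    for _ in range(k):
        best, best_gain = None, float("-inf")
        for e in ground_set - selected:
            gain = f(selected | {e}) - f(selected)
            if gain > best_gain:
                best, best_gain = e, gain
        if best is None:
            break
        selected.add(best)
    return selected

# Toy coverage objective (monotone and submodular): size of the covered union.
items = {"a": {1, 2, 3}, "b": {3, 4}, "c": {4, 5, 6}, "d": {1, 6}}
coverage = lambda S: len(set().union(*(items[e] for e in S))) if S else 0
print(greedy_max(coverage, set(items), k=2))
```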

Entropic Graph Regularization in Non-Parametric Semi-Supervised Classification

no code implementations NeurIPS 2009 Amarnag Subramanya, Jeff A. Bilmes

We prove certain theoretical properties of a graph-regularized transductive learning objective that is based on minimizing a Kullback-Leibler divergence based loss.

Classification General Classification +1
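
For orientation, the graph-regularized KL objective studied here has, up to notation and constants, roughly the following shape; the symbols are my own labelling ($\mathcal{L}$ the labeled nodes, $r_i$ the empirical label distribution at labeled node $i$, $p_i$ the estimated distribution at node $i$, $w_{ij}$ edge weights, $H$ the Shannon entropy, $\mu, \nu$ trade-off weights), so treat it as a sketch rather than the paper's exact formulation.

```latex
\min_{\{p_i\}} \;
\sum_{i \in \mathcal{L}} D_{\mathrm{KL}}\!\left(r_i \,\|\, p_i\right)
\;+\; \mu \sum_{i} \sum_{j \in \mathcal{N}(i)} w_{ij}\, D_{\mathrm{KL}}\!\left(p_i \,\|\, p_j\right)
\;-\; \nu \sum_{i} H(p_i)
```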

Label Selection on Graphs

no code implementations NeurIPS 2009 Andrew Guillory, Jeff A. Bilmes

We investigate methods for selecting sets of labeled vertices for use in predicting the labels of vertices on a graph.

On fast approximate submodular minimization

no code implementations NeurIPS 2011 Stefanie Jegelka, Hui Lin, Jeff A. Bilmes

We are motivated by an application that extracts a representative subset of machine learning training data, and by the poor empirical performance we observe for the popular minimum-norm algorithm.

BIG-bench Machine Learning

Submodular Optimization with Submodular Cover and Submodular Knapsack Constraints

no code implementations NeurIPS 2013 Rishabh K. Iyer, Jeff A. Bilmes

We are motivated by a number of real-world applications in machine learning including sensor placement and data subset selection, which require maximizing a certain submodular function (like coverage or diversity) while simultaneously minimizing another (like cooperative cost).
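
As a rough statement of the two constrained problems in the title (notation mine: $f$ the submodular function to keep small, e.g., cooperative cost; $g$ the submodular function to make large, e.g., coverage or diversity; $c$ a cover requirement; $b$ a budget):

```latex
\text{Submodular Cover (SCSC):}\quad \min_{X \subseteq V} f(X) \;\; \text{s.t.} \;\; g(X) \ge c,
\qquad
\text{Submodular Knapsack (SCSK):}\quad \max_{X \subseteq V} g(X) \;\; \text{s.t.} \;\; f(X) \le b.
```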

The Lovasz-Bregman Divergence and connections to rank aggregation, clustering, and web ranking

no code implementations9 Aug 2014 Rishabh Iyer, Jeff A. Bilmes

We show that a number of recently used web ranking models are forms of Lovasz-Bregman rank aggregation, and we also observe that a natural form of the Mallows model using the LB divergence has been used as a conditional ranking model for the "Learning to Rank" problem.

Clustering Information Retrieval +2
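
For background, the Lovasz extension $\hat f$ of a submodular $f$ and the induced Lovasz-Bregman (LB) divergence are, as I recall them (treat the exact form as an assumption to check against the paper):

```latex
\hat f(w) = \sum_{i=1}^{n} w_{\sigma(i)} \bigl( f(S^{\sigma}_{i}) - f(S^{\sigma}_{i-1}) \bigr),
\qquad S^{\sigma}_{i} = \{\sigma(1), \dots, \sigma(i)\}, \;\; w_{\sigma(1)} \ge \dots \ge w_{\sigma(n)},
\qquad
d_{\hat f}(x \,\|\, y) = \hat f(x) - \langle x,\, h_{\sigma_y} \rangle
```

where $h_{\sigma_y}$ is the subgradient of $\hat f$ determined by the sorting permutation $\sigma_y$ of $y$. Since $d_{\hat f}$ depends on $y$ only through $\sigma_y$, it effectively compares a score vector against a permutation, which is what makes it natural for rank aggregation and web ranking.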

Algorithms for Approximate Minimization of the Difference Between Submodular Functions, with Applications

no code implementations9 Aug 2014 Rishabh Iyer, Jeff A. Bilmes

We extend the work of Narasimhan and Bilmes [30] for minimizing set functions representable as a difference between submodular functions.

feature selection
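
Concretely, the problem is minimizing a difference of submodular functions, $\min_{X \subseteq V} f(X) - g(X)$. One heuristic in this line of work, as I understand it (the paper's exact variants may differ), alternates exact submodular minimization against a modular lower bound of $g$:

```latex
X_{t+1} \in \operatorname*{argmin}_{X \subseteq V} \; f(X) - m^{X_t}_{g}(X),
\qquad m^{X_t}_{g} \ \text{a modular lower bound of } g \ \text{that is tight at } X_t,
```

so each iteration is a polynomial-time submodular minimization and the objective value is non-increasing across iterations.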

Faster graphical model identification of tandem mass spectra using peptide word lattices

no code implementations29 Oct 2014 Shengjie Wang, John T. Halloran, Jeff A. Bilmes, William S. Noble

Liquid chromatography coupled with tandem mass spectrometry, also known as shotgun proteomics, is a widely-used high-throughput technology for identifying proteins in complex biological samples.

Learning Mixtures of Submodular Functions for Image Collection Summarization

no code implementations NeurIPS 2014 Sebastian Tschiatschek, Rishabh K. Iyer, Haochen Wei, Jeff A. Bilmes

This paper provides, to our knowledge, the first systematic approach for quantifying the problem of image collection summarization, along with a new dataset of image collections and human summaries.

Document Summarization Structured Prediction

Mixed Robust/Average Submodular Partitioning: Fast Algorithms, Guarantees, and Applications

no code implementations NeurIPS 2015 Kai Wei, Rishabh K. Iyer, Shengjie Wang, Wenruo Bai, Jeff A. Bilmes

In the present paper, we bridge this gap by proposing several new algorithms (including greedy, majorization-minimization, minorization-maximization, and relaxation algorithms) that not only scale to large datasets but also achieve theoretical approximation guarantees comparable to the state of the art.

Clustering Distributed Optimization +3
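
For context, the two objective families being mixed are roughly the following (notation mine, with $\pi = (A^{\pi}_1, \dots, A^{\pi}_m)$ a partition of the ground set and $f_i$ monotone submodular); the $\lambda$-interpolation is my reading of "mixed robust/average" and may not match the paper's exact parameterization.

```latex
\text{robust:}\;\; \max_{\pi} \min_{i} f_i(A^{\pi}_i),
\qquad
\text{average:}\;\; \max_{\pi} \frac{1}{m} \sum_{i=1}^{m} f_i(A^{\pi}_i),
\qquad
\text{mixed:}\;\; \max_{\pi} \; \lambda \min_{i} f_i(A^{\pi}_i) + \frac{1 - \lambda}{m} \sum_{i=1}^{m} f_i(A^{\pi}_i)
```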

Deep Submodular Functions: Definitions and Learning

no code implementations NeurIPS 2016 Brian W. Dolhansky, Jeff A. Bilmes

We propose and study a new class of submodular functions called deep submodular functions (DSFs).
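
As a toy illustration of the function class (my own instantiation, not code from the paper): a shallow DSF composes nonnegative modular "feature" functions with concave functions and mixes them with nonnegative weights, which preserves monotone submodularity; deeper DSFs nest further concave compositions on top.

```python
import math

# Toy two-layer deep submodular function:
#   f(A) = sum_k w_k * sqrt( sum_{a in A} m_k[a] )
# Nonnegative modular "features" m_k, a concave outer function (sqrt), and
# nonnegative mixture weights w_k keep f monotone submodular.

features = {                       # m_k[a] >= 0 for each element a, feature k
    "k1": {"a": 1.0, "b": 0.5, "c": 0.0},
    "k2": {"a": 0.2, "b": 0.2, "c": 2.0},
}
weights = {"k1": 1.0, "k2": 0.7}   # w_k >= 0

def dsf(A):
    """Evaluate the toy DSF on a set A of element names."""
    return sum(
        w * math.sqrt(sum(features[k][a] for a in A))
        for k, w in weights.items()
    )

print(dsf({"a", "b"}), dsf({"a", "b", "c"}))
```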

Diverse Ensemble Evolution: Curriculum Data-Model Marriage

no code implementations NeurIPS 2018 Tianyi Zhou, Shengjie Wang, Jeff A. Bilmes

We study a new method, "Diverse Ensemble Evolution" (DivE$^2$), to train an ensemble of machine learning models that assigns data to models at each training epoch based on each model's current expertise and an intra- and inter-model diversity reward.
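
A loose sketch of the assignment idea described in this abstract (the scoring rule, the diversity penalty, and the greedy solver are my assumptions; the paper's actual objective and solver differ in detail): each model greedily claims examples it is good at, with a penalty on examples already claimed by other models.

```python
import numpy as np

# Loose illustration of expertise-plus-diversity data-to-model assignment.
# The scoring rule and penalty are assumptions, not the DivE^2 objective.

rng = np.random.default_rng(0)
n_models, n_examples, capacity = 3, 12, 4
expertise = rng.random((n_models, n_examples))   # e.g., negative per-example loss

assignment = {m: [] for m in range(n_models)}
times_assigned = np.zeros(n_examples)            # diversity bookkeeping

for _ in range(capacity):
    for m in range(n_models):
        scores = expertise[m] - 0.5 * times_assigned   # expertise minus overlap penalty
        scores[assignment[m]] = -np.inf                # never re-pick the same example
        pick = int(np.argmax(scores))
        assignment[m].append(pick)
        times_assigned[pick] += 1

print(assignment)
```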

Dynamic Instance Hardness

no code implementations25 Sep 2019 Tianyi Zhou, Shengjie Wang, Jeff A. Bilmes

The advantages of DIHCL, compared to other curriculum learning approaches, are: (1) DIHCL does not require additional inference steps over the data not selected by DIHCL in each epoch, and (2) the dynamic instance hardness, compared to static instance hardness (e.g., instantaneous loss), is more stable as it integrates information over the entire training history up to the present time.

Curriculum Learning by Dynamic Instance Hardness

no code implementations NeurIPS 2020 Tianyi Zhou, Shengjie Wang, Jeff A. Bilmes

Compared to existing CL methods: (1) DIH is more stable over time than using only instantaneous hardness, which is noisy due to stochastic training and DNN's non-smoothness; (2) DIHCL is computationally inexpensive since it uses only a byproduct of back-propagation and thus does not require extra inference.
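
A hedged sketch of the bookkeeping suggested by these two abstracts (the exponential-moving-average form, the hardness signal, and the sampling rule are my assumptions, not necessarily the papers' exact estimator): keep a running hardness score per example from losses that back-propagation already produces, then bias next-epoch selection toward high-DIH examples.

```python
import numpy as np

# Sketch of dynamic-instance-hardness-style selection. The running average
# below is an assumed form; the papers may use a different hardness signal
# (e.g., loss change or prediction flips) and a different selection rule.

rng = np.random.default_rng(0)
n = 1000                       # number of training examples
dih = np.zeros(n)              # running hardness per example
gamma = 0.9                    # smoothing factor (assumed hyperparameter)

def update_dih(indices, losses):
    """Fold this step's per-example losses into the running DIH."""
    dih[indices] = gamma * dih[indices] + (1 - gamma) * losses

def select_subset(k):
    """Sample k examples with probability increasing in their DIH."""
    probs = (dih + 1e-8) / (dih + 1e-8).sum()
    return rng.choice(n, size=k, replace=False, p=probs)

# Inside a training loop (losses are a byproduct of back-propagation):
#   idx = select_subset(k=256)
#   losses = train_step(model, data[idx])   # hypothetical helper
#   update_dih(idx, losses)
```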
