no code implementations • 13 Jul 2022 • Sree Hari Krishnan Parthasarathi, Lu Zeng, Christin Jose, Joseph Wang
To train effectively with a mix of human- and teacher-labeled data, we develop a teacher labeling strategy based on confidence heuristics that reduces the entropy of the teacher model's label distribution; the data is then sampled to match the marginal distribution over the labels.
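A minimal sketch of one way such a selection step could look, assuming soft teacher posteriors are available as a NumPy array; the `select_teacher_labels` helper, the 0.9 confidence threshold, and the target marginal are illustrative choices, not the paper's exact recipe:

```python
import numpy as np

def select_teacher_labels(teacher_probs, target_marginal,
                          confidence_threshold=0.9, rng=None):
    """Keep teacher-labeled examples whose top posterior clears a confidence
    threshold (a simple heuristic that lowers the entropy of the retained
    labels), then subsample per class so the kept set roughly matches a
    target marginal label distribution (e.g. the human-labeled one)."""
    rng = rng or np.random.default_rng(0)
    hard_labels = teacher_probs.argmax(axis=1)
    confident = teacher_probs.max(axis=1) >= confidence_threshold

    # Size the total from the tightest class so every per-class quota can be
    # filled without replacement.
    counts = np.bincount(hard_labels[confident], minlength=len(target_marginal))
    total = int(np.min(counts / np.maximum(target_marginal, 1e-12)))

    keep = []
    for c, p in enumerate(target_marginal):
        idx = np.flatnonzero(confident & (hard_labels == c))
        keep.extend(rng.choice(idx, size=int(p * total), replace=False))
    return np.sort(np.array(keep)), hard_labels

# Example: soft teacher posteriors for 1000 utterances over 3 classes.
probs = np.random.default_rng(1).dirichlet([0.5, 0.5, 0.5], size=1000)
kept_idx, labels = select_teacher_labels(probs, target_marginal=np.array([0.5, 0.3, 0.2]))
```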
no code implementations • 15 Jun 2022 • Christin Jose, Joseph Wang, Grant P. Strimel, Mohammad Omar Khursheed, Yuriy Mishchenko, Brian Kulis
We also show that when our approach is used in conjunction with a max-pooling loss, we reduce false accepts by 25% relative at a fixed latency compared to a cross-entropy loss.
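For reference, a simplified PyTorch sketch of a max-pooling style loss (not the authors' exact objective); the class indices and the per-frame treatment of background utterances are assumptions:

```python
import torch
import torch.nn.functional as F

def max_pooling_loss(frame_logits, is_keyword, keyword_class=1):
    """Simplified max-pooling loss for keyword spotting.

    frame_logits: (batch, frames, classes) per-frame scores before softmax.
    is_keyword:   (batch,) bool, True if the utterance contains the keyword.
    Keyword utterances are penalized only at the frame with the highest
    keyword posterior; background utterances (class 0, an assumption here)
    are penalized at every frame.
    """
    log_probs = F.log_softmax(frame_logits, dim=-1)
    losses = []
    for b in range(frame_logits.shape[0]):
        if is_keyword[b]:
            best_frame = log_probs[b, :, keyword_class].argmax()
            losses.append(-log_probs[b, best_frame, keyword_class])
        else:
            losses.append(-log_probs[b, :, 0].mean())
    return torch.stack(losses).mean()

# Example: 4 utterances, 50 frames, 2 classes (0 = background, 1 = keyword).
logits = torch.randn(4, 50, 2, requires_grad=True)
loss = max_pooling_loss(logits, torch.tensor([True, False, True, False]))
loss.backward()
```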
no code implementations • 7 Dec 2020 • Emil Karshalev, Cristian Silva-Lopez, Kyle Chan, Jieming Yan, Elodie Sandraz, Mathieu Gallot, Amir Nourhani, Javier Garay, Joseph Wang
Herein, self-healing small-scale swimmers capable of autonomous propulsion and on-the-fly structural recovery are described.
no code implementations • 17 Dec 2017 • Yu-Ting Chen, Joseph Wang, Yannan Bai, Gregory Castañón, Venkatesh Saligrama
We present a novel framework for finding complex activities matching user-described queries in cluttered surveillance videos.
no code implementations • 10 Apr 2017 • Zafar Takhirov, Joseph Wang, Marcia S. Louis, Venkatesh Saligrama, Ajay Joshi
In this work, we present a field of groves (FoG) implementation of random forests (RF) that achieves an accuracy comparable to CNNs and SVMs under tight energy budgets.
2 code implementations • ICML 2017 • Tolga Bolukbasi, Joseph Wang, Ofer Dekel, Venkatesh Saligrama
We first pose an adaptive network evaluation scheme, in which we learn a system that chooses which components of a deep network to evaluate for each example.
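As a rough illustration of per-example adaptive evaluation, a sketch with off-the-shelf scikit-learn models and a fixed confidence threshold (the paper learns the selection policy rather than thresholding):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier

# Cheap model answers when it is confident; only uncertain examples reach the
# expensive model. The 0.9 threshold is fixed here for illustration.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = X[:1500], X[1500:], y[:1500], y[1500:]

cheap = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
expensive = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

conf = cheap.predict_proba(X_te).max(axis=1)
forward = conf < 0.9                      # examples the cheap model is unsure about
preds = cheap.predict(X_te)
if forward.any():
    preds[forward] = expensive.predict(X_te[forward])

print(f"accuracy {np.mean(preds == y_te):.3f}; "
      f"expensive model used on {forward.mean():.0%} of examples")
```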
no code implementations • NeurIPS 2016 • Feng Nan, Joseph Wang, Venkatesh Saligrama
We propose to prune a random forest (RF) for resource-constrained prediction.
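A toy sketch of the idea under strong simplifications: greedy tree-level backward elimination against a budget on the number of trees, standing in for the paper's node-level pruning formulation:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Greedily remove the tree whose removal hurts validation accuracy least,
# until a budget on the number of trees is met.
X, y = make_classification(n_samples=3000, n_features=30, random_state=0)
X_tr, X_val, y_tr, y_val = X[:2000], X[2000:], y[:2000], y[2000:]

rf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_tr, y_tr)
votes = np.stack([t.predict(X_val) for t in rf.estimators_])   # (trees, examples)

def ensemble_acc(active):
    majority = (votes[active].mean(axis=0) >= 0.5).astype(int)
    return np.mean(majority == y_val)

active = list(range(len(rf.estimators_)))
budget = 10                                    # resource budget: keep at most 10 trees
while len(active) > budget:
    drop = max(active, key=lambda t: ensemble_acc([a for a in active if a != t]))
    active.remove(drop)

print(f"pruned to {len(active)} trees, validation accuracy {ensemble_acc(active):.3f}")
```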
no code implementations • 28 Feb 2016 • Tolga Bolukbasi, Kai-Wei Chang, Joseph Wang, Venkatesh Saligrama
We study the problem of structured prediction under test-time budget constraints.
no code implementations • 5 Jan 2016 • Feng Nan, Joseph Wang, Venkatesh Saligrama
We propose a novel 0-1 integer program formulation for ensemble pruning.
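A brute-force stand-in for the flavor of the formulation: one binary variable per tree and an objective that trades validation error against the number of trees kept; the paper solves a proper integer program rather than enumerating assignments, and this objective is illustrative:

```python
import itertools
import numpy as np

# All 0-1 assignments over a small simulated ensemble; each tree's predictions
# are synthetic and correct ~75% of the time.
rng = np.random.default_rng(0)
n_trees, n_val = 8, 200
y_val = rng.integers(0, 2, size=n_val)
votes = np.where(rng.random((n_trees, n_val)) < 0.75, y_val, 1 - y_val)

cost_per_tree = 0.01
best = None
for z in itertools.product([0, 1], repeat=n_trees):      # every 0-1 assignment
    if sum(z) == 0:
        continue
    sel = np.flatnonzero(z)
    majority = (votes[sel].mean(axis=0) >= 0.5).astype(int)
    objective = np.mean(majority != y_val) + cost_per_tree * sum(z)
    if best is None or objective < best[0]:
        best = (objective, sel)

print(f"selected trees {best[1].tolist()} with objective {best[0]:.3f}")
```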
no code implementations • NeurIPS 2015 • Joseph Wang, Kirill Trapeznikov, Venkatesh Saligrama
We learn node policies in the DAG by reducing the global objective to a series of cost-sensitive learning problems.
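A minimal sketch of such a cost-sensitive reduction for a single node, with synthetic inputs and action costs; training an ordinary classifier on the cheapest action, importance-weighted by the regret, stands in for the paper's exact reduction:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Example-dependent costs for the actions available at one node
# (e.g. "acquire sensor A", "acquire sensor B", "stop and classify");
# both the features and the costs below are synthetic placeholders.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))                      # node input features
action_costs = rng.random((1000, 3))                # cost of each of 3 actions

best_action = action_costs.argmin(axis=1)           # cost-sensitive target
regret = action_costs.max(axis=1) - action_costs.min(axis=1)   # importance weight

policy = LogisticRegression(max_iter=1000)
policy.fit(X, best_action, sample_weight=regret)

chosen = policy.predict(X)
avg_cost = action_costs[np.arange(len(X)), chosen].mean()
print(f"policy cost {avg_cost:.3f} vs. oracle {action_costs.min(axis=1).mean():.3f}")
```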
no code implementations • 9 Sep 2015 • Joseph Wang, Kirill Trapeznikov, Venkatesh Saligrama
We decompose the problem, which is known to be intractable, into a combinatorial part (tree structure) and a continuous part (node decision rules), and propose to solve them separately.
no code implementations • 20 Feb 2015 • Feng Nan, Joseph Wang, Venkatesh Saligrama
We seek decision rules for prediction-time cost reduction, where complete data is available for training, but during prediction-time, each feature can only be acquired for an additional cost.
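A simplified sketch of a prediction-time cost-reducing rule: features are acquired one nested group at a time, and acquisition stops as soon as a stage classifier is confident enough; the fixed acquisition order, the group sizes, and the 0.9 threshold are illustrative choices rather than the paper's learned rules:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Complete data is available for training; at prediction time each stage sees
# only the features acquired so far.
X, y = make_classification(n_samples=3000, n_features=12, random_state=0)
X_tr, X_te, y_tr, y_te = X[:2000], X[2000:], y[:2000], y[2000:]

groups = [list(range(4)), list(range(8)), list(range(12))]   # nested feature sets
stages = [LogisticRegression(max_iter=1000).fit(X_tr[:, g], y_tr) for g in groups]

n_used, preds = [], []
for x in X_te:
    for g, model in zip(groups, stages):
        proba = model.predict_proba(x[g].reshape(1, -1))[0]
        if proba.max() >= 0.9 or g is groups[-1]:
            preds.append(int(proba.argmax()))
            n_used.append(len(g))
            break

print(f"accuracy {np.mean(np.array(preds) == y_te):.3f}, "
      f"average features acquired {np.mean(n_used):.1f} of 12")
```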
no code implementations • 12 Jan 2015 • Feng Nan, Joseph Wang, Venkatesh Saligrama
We develop a broad class of admissible impurity functions, including monomials, classes of polynomials, and hinge-loss functions, that allows for flexible impurity design with provably optimal approximation bounds.
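To make the idea concrete, a toy split search with two pluggable impurity functions; the monomial impurity p0·p1 and the hinge-style impurity min(p0, p1) are simple stand-ins (for two classes the former is half the Gini index and the latter the misclassification rate), not the paper's constructions:

```python
import numpy as np

def best_split(x, y, impurity):
    """Exhaustive threshold search on one feature under a given impurity."""
    n = len(y)
    best = (np.inf, None)
    for t in np.unique(x)[:-1]:
        left, right = y[x <= t], y[x > t]
        score = (len(left) * impurity(left) + len(right) * impurity(right)) / n
        best = min(best, (score, t))
    return best

def monomial_impurity(y):
    p = np.mean(y)               # proportion of class 1
    return p * (1 - p)

def hinge_impurity(y):
    p = np.mean(y)
    return min(p, 1 - p)

rng = np.random.default_rng(0)
x = rng.normal(size=500)
y = (x + 0.3 * rng.normal(size=500) > 0).astype(int)
for name, fn in [("monomial", monomial_impurity), ("hinge", hinge_impurity)]:
    score, thr = best_split(x, y, fn)
    print(f"{name}: threshold {thr:.3f}, weighted impurity {score:.3f}")
```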
no code implementations • NeurIPS 2012 • Joseph Wang, Venkatesh Saligrama
We show that space partitioning can be equivalently reformulated as a supervised learning problem and consequently any discriminative learning method can be utilized in conjunction with our approach.
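A simplified two-region sketch of that reformulation, alternating between fitting local classifiers and fitting a "partitioner" whose supervised targets are the regions with lower loss; the two-region setup, the degeneracy guard, the iteration count, and the choice of logistic regression are illustrative:

```python
import numpy as np
from sklearn.datasets import make_moons
from sklearn.linear_model import LogisticRegression

X, y = make_moons(n_samples=1000, noise=0.2, random_state=0)
rng = np.random.default_rng(0)
region = rng.integers(0, 2, size=len(y))              # random initial partition

for _ in range(3):
    # Stop if a region has become empty or single-class (degenerate case).
    if any(len(np.unique(y[region == r])) < 2 for r in (0, 1)):
        break
    # Step 1: fit one local classifier per region.
    locals_ = [LogisticRegression(max_iter=1000).fit(X[region == r], y[region == r])
               for r in (0, 1)]
    # Step 2: supervised target for the partitioner = region with lower loss.
    losses = np.stack([-m.predict_log_proba(X)[np.arange(len(y)), y] for m in locals_])
    target = losses.argmin(axis=0)
    if len(np.unique(target)) < 2:
        break
    partitioner = LogisticRegression(max_iter=1000).fit(X, target)
    region = partitioner.predict(X)

preds = np.where(region == 0, locals_[0].predict(X), locals_[1].predict(X))
print(f"training accuracy with two linear regions: {np.mean(preds == y):.3f}")
```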