no code implementations • 15 Oct 2024 • Jiayu Chen, Wentse Chen, Jeff Schneider
In this paper, we propose modeling offline MBRL as a Bayes Adaptive Markov Decision Process (BAMDP), which is a principled framework for addressing model uncertainty.
no code implementations • 2 Sep 2024 • Youngseog Chung, Dhruv Malik, Jeff Schneider, Yuanzhi Li, Aarti Singh
The traditional viewpoint on Sparse Mixture of Experts (MoE) models is that instead of training a single large expert, which is computationally expensive, we can train many small experts.
1 code implementation • 8 Aug 2024 • Aditya Kapoor, Benjamin Freed, Howie Choset, Jeff Schneider
We empirically demonstrate that our approach, PRD-MAPPO, decouples agents from teammates that do not influence their expected future reward, thereby streamlining credit assignment.
1 code implementation • 20 Jun 2024 • Wentse Chen, Shiyu Huang, Jeff Schneider
In this paper, we propose an enhancement to QMIX by incorporating an additional local Q-value learning method within the maximum entropy RL framework.
no code implementations • 15 Jun 2024 • Arun Balajee Vasudevan, Neehar Peri, Jeff Schneider, Deva Ramanan
Motion planning is crucial for safe navigation in complex urban environments.
1 code implementation • 22 May 2024 • Sang Keun Choe, Hwijeen Ahn, Juhan Bae, Kewen Zhao, Minsoo Kang, Youngseog Chung, Adithya Pratapa, Willie Neiswanger, Emma Strubell, Teruko Mitamura, Jeff Schneider, Eduard Hovy, Roger Grosse, Eric Xing
Large language models (LLMs) are trained on a vast amount of human-written data, but data providers often remain uncredited.
1 code implementation • 22 Apr 2024 • Fahim Tajwar, Anikait Singh, Archit Sharma, Rafael Rafailov, Jeff Schneider, Tengyang Xie, Stefano Ermon, Chelsea Finn, Aviral Kumar
Our main finding is that, in general, approaches that use on-policy sampling or attempt to push down the likelihood of certain responses (i.e., employ a "negative gradient") outperform offline and maximum likelihood objectives.
no code implementations • 18 Apr 2024 • Ian Char, Youngseog Chung, Joseph Abbate, Egemen Kolemen, Jeff Schneider
Although tokamaks are one of the most promising devices for realizing nuclear fusion as an energy source, there are still key obstacles when it comes to understanding the dynamics of the plasma and controlling it.
no code implementations • 12 Mar 2024 • Adam Villaflor, Brian Yang, Huangyuan Su, Katerina Fragkiadaki, John Dolan, Jeff Schneider
Although these models have conventionally been evaluated for open-loop prediction, we show that they can be used to parameterize autoregressive closed-loop models without retraining.
1 code implementation • 9 Feb 2024 • Brian Yang, Huangyuan Su, Nikolaos Gkanatsios, Tsung-Wei Ke, Ayush Jain, Jeff Schneider, Katerina Fragkiadaki
Diffusion-ES samples trajectories during evolutionary search from a diffusion model and scores them using a black-box reward function.
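The sample-and-score loop this entry describes can be sketched generically. The following is a minimal illustration, not the authors' implementation: a simple Gaussian (CEM-style) sampler stands in for the diffusion model, and `reward_fn` is a hypothetical black-box scorer.

```python
import numpy as np

def evolutionary_search(reward_fn, dim, n_iters=20, pop=64, elite_frac=0.25, seed=0):
    """Sample candidate trajectories, score them with a black-box reward,
    then refit the sampler around the highest-scoring elites."""
    rng = np.random.default_rng(seed)
    mean, std = np.zeros(dim), np.ones(dim)    # Gaussian stand-in for the diffusion sampler
    best_traj, best_r = None, -np.inf
    for _ in range(n_iters):
        cands = mean + std * rng.standard_normal((pop, dim))   # sample trajectories
        rewards = np.array([reward_fn(t) for t in cands])      # black-box scoring
        elite = cands[np.argsort(rewards)[-int(pop * elite_frac):]]
        mean, std = elite.mean(axis=0), elite.std(axis=0) + 1e-3  # refit around elites
        if rewards.max() > best_r:
            best_r, best_traj = rewards.max(), cands[np.argmax(rewards)]
    return best_traj, best_r
```

In Diffusion-ES itself, the Gaussian refit is replaced by partially noising and denoising the elite trajectories with the diffusion model, which keeps candidates on the manifold of plausible trajectories.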
no code implementations • 6 Jan 2024 • Arundhati Banerjee, Jeff Schneider
Multi-agent multi-target tracking has a wide range of applications, including wildlife patrolling, security surveillance or environment monitoring.
no code implementations • CVPR 2024 • Brian Yang, Huangyuan Su, Nikolaos Gkanatsios, Tsung-Wei Ke, Ayush Jain, Jeff Schneider, Katerina Fragkiadaki
Diffusion-ES samples trajectories during evolutionary search from a diffusion model and scores them using a black-box reward function.
no code implementations • 1 Dec 2023 • Viraj Mehta, Vikramjeet Das, Ojash Neopane, Yijia Dai, Ilija Bogunovic, Jeff Schneider, Willie Neiswanger
Preference-based feedback is important for many applications in reinforcement learning where direct evaluation of a reward function is not feasible.
1 code implementation • 12 Sep 2023 • Siddarth Venkatraman, Shivesh Khaitan, Ravi Tej Akella, John Dolan, Jeff Schneider, Glen Berseth
However, a key challenge in offline RL lies in effectively stitching portions of suboptimal trajectories from the static dataset while avoiding extrapolation errors arising due to a lack of support in the dataset.
no code implementations • 21 Jul 2023 • Viraj Mehta, Ojash Neopane, Vikramjeet Das, Sen Lin, Jeff Schneider, Willie Neiswanger
Preference-based feedback is important for many applications where direct evaluation of a reward function is not feasible.
no code implementations • 18 Jul 2023 • Vikram Duvvur, Aashay Mehta, Edward Sun, Bo Wu, Ken Yew Chan, Jeff Schneider
In a typical set-up, supervised learning is used to predict the future prices of assets, and those predictions drive a simple trading and execution strategy.
1 code implementation • NeurIPS 2023 • Ian Char, Jeff Schneider
When this is the case, the policy needs to leverage the history of observations to infer the current state.
no code implementations • 16 Jun 2023 • Anirudha Ramesh, Anurag Ghosh, Christoph Mertz, Jeff Schneider
Our Almost Unsupervised Domain Adaptation (AUDA) framework, a label-efficient semi-supervised approach for robotic scenarios, employs Source Preparation (SP), Unsupervised Domain Adaptation (UDA), and Supervised Alignment (SA) from limited labeled data.
no code implementations • 4 Apr 2023 • Nikhil Angad Bakshi, Tejus Gupta, Ramina Ghods, Jeff Schneider
We conduct field tests using our multi-robot system in an unstructured environment with a search area of approximately 75,000 sq.
no code implementations • 19 Dec 2022 • Xiang Li, Viraj Mehta, Johannes Kirschner, Ian Char, Willie Neiswanger, Jeff Schneider, Andreas Krause, Ilija Bogunovic
Many real-world reinforcement learning tasks require control of complex dynamical systems that involve both costly data acquisition processes and large state spaces.
1 code implementation • 6 Oct 2022 • Viraj Mehta, Ian Char, Joseph Abbate, Rory Conlin, Mark D. Boyer, Stefano Ermon, Jeff Schneider, Willie Neiswanger
In this work, we develop a method that allows us to plan for exploration while taking both the task and the current knowledge about the dynamics into account.
no code implementations • 5 Oct 2022 • Arundhati Banerjee, Ramina Ghods, Jeff Schneider
Multi-agent active search requires autonomous agents to choose sensing actions that efficiently locate targets.
no code implementations • 21 Jul 2022 • Adam Villaflor, Zhe Huang, Swapnil Pande, John Dolan, Jeff Schneider
Impressive results in natural language processing (NLP) based on the Transformer neural network architecture have inspired researchers to explore viewing offline reinforcement learning (RL) as a generic sequence modeling problem.
no code implementations • 20 May 2022 • Conor Igoe, Youngseog Chung, Ian Char, Jeff Schneider
One critical challenge in deploying highly performant machine learning models in real-life applications is out-of-distribution (OOD) detection.
no code implementations • 26 Apr 2022 • Ian Char, Viraj Mehta, Adam Villaflor, John M. Dolan, Jeff Schneider
Past efforts for developing algorithms in this area have revolved around introducing constraints to online reinforcement learning algorithms to ensure the actions of the learned policy are constrained to the logged data.
no code implementations • 9 Mar 2022 • Arundhati Banerjee, Ramina Ghods, Jeff Schneider
We then build a decision making algorithm on this inference method that uses Thompson sampling to enable decentralized multi-agent active search.
no code implementations • 17 Feb 2022 • Yeeho Song, Jeff Schneider
Some state-of-the-art approaches try to address the problem with adversarial agents, but these agents often require expert supervision to fine-tune and to prevent the adversary from becoming too challenging for the trainee agent.
no code implementations • 23 Dec 2021 • Benjamin Freed, Aditya Kapoor, Ian Abraham, Jeff Schneider, Howie Choset
One of the preeminent obstacles to scaling multi-agent reinforcement learning to large numbers of agents is assigning credit to individual agents' actions.
1 code implementation • 9 Dec 2021 • Viraj Mehta, Biswajit Paria, Jeff Schneider, Stefano Ermon, Willie Neiswanger
In particular, we leverage ideas from Bayesian optimal experimental design to guide the selection of state-action queries for efficient learning.
no code implementations • ICLR 2022 • Viraj Mehta, Biswajit Paria, Jeff Schneider, Willie Neiswanger, Stefano Ermon
In particular, we leverage ideas from Bayesian optimal experimental design to guide the selection of state-action queries for efficient learning.
1 code implementation • 21 Sep 2021 • Youngseog Chung, Ian Char, Han Guo, Jeff Schneider, Willie Neiswanger
With increasing deployment of machine learning systems in various real-world tasks, there is a greater need for accurate quantification of predictive uncertainty.
1 code implementation • 25 May 2021 • Divam Gupta, Wei Pu, Trenton Tabor, Jeff Schneider
Instead, the learning of a good internal bird's eye view feature representation is effective for layout estimation.
no code implementations • 2 Feb 2021 • Kevin Tran, Willie Neiswanger, Kirby Broderick, Eric Xing, Jeff Schneider, Zachary W. Ulissi
We address this issue by relaxing the catalyst discovery goal into a classification problem: "What is the set of catalysts that is worth testing experimentally?"
Chemical Physics
no code implementations • 15 Jan 2021 • Tanmay Agarwal, Hitesh Arora, Jeff Schneider
Traditional autonomous vehicle pipelines that follow a modular approach have been very successful in the past, both in academia and industry, and have led to autonomous systems being deployed on public roads.
no code implementations • 1 Jan 2021 • Adam Villaflor, John Dolan, Jeff Schneider
Then, we can optionally enter a second stage where we fine-tune the policy using our novel Model-Based Behavior-Regularized Policy Optimization (MB2PO) algorithm.
2 code implementations • NeurIPS 2021 • Youngseog Chung, Willie Neiswanger, Ian Char, Jeff Schneider
However, this loss restricts the scope of applicable regression models, limits the ability to target many desirable properties (e.g., calibration, sharpness, centered intervals), and may produce poor conditional quantiles.
1 code implementation • 9 Nov 2020 • Ramina Ghods, William J. Durkin, Jeff Schneider
The active search for objects of interest in an unknown environment has many robotics applications including search and rescue, detecting gas leaks or locating animal poachers.
no code implementations • 9 Nov 2020 • Zhiqian Qiao, Jeff Schneider, John M. Dolan
In this work, we propose a behavior planning structure based on reinforcement learning (RL) which is capable of performing autonomous vehicle behavior planning with a hierarchical structure in simulated urban environments.
no code implementations • 14 Aug 2020 • Shuby Deshpande, Benjamin Eysenbach, Jeff Schneider
Visualization tools for supervised learning allow users to interpret, introspect, and gain an intuition for the successes and failures of their models.
no code implementations • 10 Jul 2020 • Shuby Deshpande, Jeff Schneider
Visualization tools for supervised learning have allowed users to interpret, introspect, and gain intuition for the successes and failures of their models.
no code implementations • 25 Jun 2020 • Ramina Ghods, Arundhati Banerjee, Jeff Schneider
Active search refers to the problem of efficiently locating targets in an unknown environment by actively making data-collection decisions, and has many applications including detecting gas leaks, radiation sources or human survivors of disasters using aerial and/or ground robots (agents).
no code implementations • 23 Jun 2020 • Viraj Mehta, Ian Char, Willie Neiswanger, Youngseog Chung, Andrew Oakleigh Nelson, Mark D Boyer, Egemen Kolemen, Jeff Schneider
We introduce Neural Dynamical Systems (NDS), a method of learning dynamical models in various gray-box settings which incorporates prior knowledge in the form of systems of ordinary differential equations.
no code implementations • ICLR Workshop DeepDiffEq 2019 • Viraj Mehta, Ian Char, Willie Neiswanger, Youngseog Chung, Andrew Oakleigh Nelson, Mark D Boyer, Egemen Kolemen, Jeff Schneider
We introduce Neural Dynamical Systems (NDS), a method of learning dynamical models which incorporates prior knowledge in the form of systems of ordinary differential equations.
no code implementations • 6 Jan 2020 • Youngseog Chung, Ian Char, Willie Neiswanger, Kirthevasan Kandasamy, Andrew Oakleigh Nelson, Mark D Boyer, Egemen Kolemen, Jeff Schneider
One obstacle in utilizing fusion as a feasible energy source is the stability of the reaction.
no code implementations • 9 Nov 2019 • Zhiqian Qiao, Zachariah Tyree, Priyantha Mudalige, Jeff Schneider, John M. Dolan
In this work, we propose a hierarchical reinforcement learning (HRL) structure which is capable of performing autonomous vehicle planning tasks in simulated environments with multiple sub-goals.
no code implementations • 9 Nov 2019 • Zhiqian Qiao, Jing Zhao, Zachariah Tyree, Priyantha Mudalige, Jeff Schneider, John M. Dolan
How autonomous vehicles and human drivers share public transportation systems is an important problem, as fully automatic transportation environments are still a long way off.
1 code implementation • 5 Aug 2019 • Ksenia Korovina, Sailun Xu, Kirthevasan Kandasamy, Willie Neiswanger, Barnabas Poczos, Jeff Schneider, Eric P. Xing
In applications such as molecule design or drug discovery, it is desirable to have an algorithm which recommends new candidate molecules based on the results of past tests.
no code implementations • 1 Aug 2019 • Henggang Cui, Thi Nguyen, Fang-Chieh Chou, Tsung-Han Lin, Jeff Schneider, David Bradley, Nemanja Djuric
Self-driving vehicles (SDVs) hold great potential for improving traffic safety and are poised to positively affect the quality of life of millions of people.
1 code implementation • 20 Jun 2019 • Fang-Chieh Chou, Tsung-Han Lin, Henggang Cui, Vladan Radosavljevic, Thi Nguyen, Tzu-Kuo Huang, Matthew Niedoba, Jeff Schneider, Nemanja Djuric
Following detection and tracking of traffic actors, prediction of their future motion is the next critical component of a self-driving vehicle (SDV) technology, allowing the SDV to operate safely and efficiently in its environment.
no code implementations • 4 May 2019 • Liang Xiong, Xi Chen, Tzu-Kuo Huang, Jeff Schneider, Jaime G. Carbonell
Motivated by our sales prediction problem, we propose a factor-based algorithm that is able to take time into account.
1 code implementation • 15 Mar 2019 • Kirthevasan Kandasamy, Karun Raju Vysyaraju, Willie Neiswanger, Biswajit Paria, Christopher R. Collins, Jeff Schneider, Barnabas Poczos, Eric P. Xing
We compare Dragonfly to a suite of other packages and algorithms for global optimisation and demonstrate that when the above methods are integrated, they enable significant improvements in the performance of BO.
1 code implementation • 31 Jan 2019 • Willie Neiswanger, Kirthevasan Kandasamy, Barnabas Poczos, Jeff Schneider, Eric Xing
Optimizing an expensive-to-query function is a common task in science and engineering, where it is beneficial to keep the number of queries to a minimum.
4 code implementations • 18 Sep 2018 • Henggang Cui, Vladan Radosavljevic, Fang-Chieh Chou, Tsung-Han Lin, Thi Nguyen, Tzu-Kuo Huang, Jeff Schneider, Nemanja Djuric
Autonomous driving presents one of the largest problems that the robotics and artificial intelligence communities are facing at the moment, both in terms of difficulty and potential societal impact.
no code implementations • 17 Aug 2018 • Nemanja Djuric, Vladan Radosavljevic, Henggang Cui, Thi Nguyen, Fang-Chieh Chou, Tsung-Han Lin, Nitin Singh, Jeff Schneider
We address one of the crucial aspects necessary for safe and efficient operations of autonomous vehicles, namely predicting future state of traffic actors in the autonomous vehicle's surroundings.
1 code implementation • 25 May 2018 • Kirthevasan Kandasamy, Willie Neiswanger, Reed Zhang, Akshay Krishnamurthy, Jeff Schneider, Barnabas Poczos
We design a new myopic strategy for a wide class of sequential design of experiment (DOE) problems, where the goal is to collect data in order to fulfil a certain problem-specific goal.
1 code implementation • NeurIPS 2018 • Kirthevasan Kandasamy, Willie Neiswanger, Jeff Schneider, Barnabas Poczos, Eric Xing
A common use case for BO in machine learning is model selection, where it is not possible to analytically model the generalisation performance of a statistical model, and we resort to noisy and expensive training and validation procedures to choose the best model.
no code implementations • ICML 2018 • Junier B. Oliva, Avinava Dubey, Manzil Zaheer, Barnabás Póczos, Ruslan Salakhutdinov, Eric P. Xing, Jeff Schneider
Further, through a comprehensive study over both real-world and synthetic data, we show that jointly leveraging transformations of variables and autoregressive conditional models results in a considerable improvement in performance.
no code implementations • 6 Nov 2017 • Siamak Ravanbakhsh, Junier Oliva, Sebastien Fromenteau, Layne C. Price, Shirley Ho, Jeff Schneider, Barnabas Poczos
A major approach to estimating the cosmological parameters is to use the large-scale matter distribution of the Universe.
no code implementations • 30 May 2017 • Junier B. Oliva, Kumar Avinava Dubey, Barnabas Poczos, Eric Xing, Jeff Schneider
Afterward, an RNN is used to compute the conditional distributions of the latent covariates.
1 code implementation • 25 May 2017 • Kirthevasan Kandasamy, Akshay Krishnamurthy, Jeff Schneider, Barnabas Poczos
We design and analyse variations of the classical Thompson sampling (TS) procedure for Bayesian optimisation (BO) in settings where function evaluations are expensive, but can be performed in parallel.
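For intuition, sequential Thompson sampling for BO can be sketched with an exact GP posterior on a discrete grid. This is a minimal single-worker illustration rather than the parallel variants the paper analyses; the RBF kernel, length-scale, and query budget below are arbitrary choices.

```python
import numpy as np

def rbf(A, B, ls=0.2):
    """Squared-exponential kernel between two 1-D point sets."""
    return np.exp(-0.5 * (A[:, None] - B[None, :]) ** 2 / ls**2)

def thompson_bo(f, grid, n_iters=15, noise=1e-4, seed=0):
    """Each round: draw one function from the GP posterior, evaluate f at its argmax."""
    rng = np.random.default_rng(seed)
    X, y = [], []
    for _ in range(n_iters):
        if X:
            Xa = np.array(X)
            Kxx = rbf(Xa, Xa) + noise * np.eye(len(X))
            Ksx = rbf(grid, Xa)
            mu = Ksx @ np.linalg.solve(Kxx, np.array(y))
            cov = rbf(grid, grid) - Ksx @ np.linalg.solve(Kxx, Ksx.T)
        else:
            mu, cov = np.zeros(len(grid)), rbf(grid, grid)  # GP prior before any data
        draw = rng.multivariate_normal(mu, cov + 1e-6 * np.eye(len(grid)))
        x_next = grid[np.argmax(draw)]   # maximise the sampled function
        X.append(x_next)
        y.append(f(x_next))
    return max(y)
```

The parallel versions in the paper let multiple workers each draw and maximise their own posterior sample, either synchronously or asynchronously, rather than querying one point per round.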
1 code implementation • 30 Apr 2017 • Sibi Venkatesan, James K. Miller, Jeff Schneider, Artur Dubrawski
In this paper, we consider the problem of Active Search where we are given a similarity function between data points.
no code implementations • ICML 2017 • Kirthevasan Kandasamy, Gautam Dasarathy, Jeff Schneider, Barnabas Poczos
Bandit methods for black-box optimisation, such as Bayesian optimisation, are used in a variety of applications including hyper-parameter tuning and experiment design.
2 code implementations • ICML 2017 • Junier B. Oliva, Barnabas Poczos, Jeff Schneider
Sophisticated gated recurrent neural network architectures like LSTMs and GRUs have been shown to be highly effective in a myriad of applications.
1 code implementation • ICML 2017 • Siamak Ravanbakhsh, Jeff Schneider, Barnabas Poczos
We propose to study equivariance in deep neural networks through parameter symmetries.
no code implementations • 3 Feb 2017 • Kirthevasan Kandasamy, Jeff Schneider, Barnabás Póczos
In this paper, we study active posterior estimation in a Bayesian setting when the likelihood is expensive to evaluate.
no code implementations • 2 Dec 2016 • Yifei Ma, Roman Garnett, Jeff Schneider
Autonomous systems can be used to search for sparse signals in a large space; e.g., aerial robots can be deployed to localize threats, detect gas leaks, or respond to distress calls.
1 code implementation • NeurIPS 2016 • Kirthevasan Kandasamy, Gautam Dasarathy, Junier B. Oliva, Jeff Schneider, Barnabas Poczos
However, in many cases, cheap approximations to $f$ may be obtainable.
no code implementations • 14 Nov 2016 • Siamak Ravanbakhsh, Jeff Schneider, Barnabas Poczos
We introduce a simple permutation equivariant layer for deep learning with set structure. This type of layer, obtained by parameter-sharing, has a simple implementation and linear-time complexity in the size of each set.
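The parameter-sharing construction can be written in a few lines. This is a minimal NumPy sketch of the mean-pooling variant (the paper also considers max-pooling); the weight shapes are chosen purely for illustration.

```python
import numpy as np

def equivariant_layer(X, Lam, Gam):
    """Permutation-equivariant layer: each row combines its own features
    with a pooled, permutation-invariant summary of the whole set."""
    pooled = X.mean(axis=0, keepdims=True)      # (1, d_in), invariant to row order
    return np.tanh(X @ Lam + pooled @ Gam)      # (n, d_out), linear time in set size n

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))                     # a set of 5 elements, 3 features each
Lam, Gam = rng.normal(size=(3, 4)), rng.normal(size=(3, 4))
perm = rng.permutation(5)
# permuting the input rows permutes the output rows the same way
assert np.allclose(equivariant_layer(X, Lam, Gam)[perm],
                   equivariant_layer(X[perm], Lam, Gam))
```

Because the two weight matrices are shared across all set elements, the parameter count is independent of the set size, and the pooled summary is the only channel through which elements interact.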
no code implementations • NeurIPS 2016 • Kirthevasan Kandasamy, Gautam Dasarathy, Jeff Schneider, Barnabás Póczos
We study a variant of the classical stochastic $K$-armed bandit where observing the outcome of each arm is expensive, but cheap approximations to this outcome are available.
no code implementations • 19 Sep 2016 • Siamak Ravanbakhsh, Francois Lanusse, Rachel Mandelbaum, Jeff Schneider, Barnabas Poczos
To this end, we study the application of deep conditional generative models in generating realistic galaxy images.
4 code implementations • 14 May 2016 • Roman Garnett, Shirley Ho, Simeon Bird, Jeff Schneider
We develop an automated technique for detecting damped Lyman-$\alpha$ absorbers (DLAs) along spectroscopic lines of sight to quasi-stellar objects (QSOs or quasars).
Cosmology and Nongalactic Astrophysics Data Analysis, Statistics and Probability
1 code implementation • 20 Mar 2016 • Kirthevasan Kandasamy, Gautam Dasarathy, Junier B. Oliva, Jeff Schneider, Barnabas Poczos
However, in many cases, cheap approximations to $f$ may be obtainable.
no code implementations • 1 Jan 2016 • Siamak Ravanbakhsh, Barnabas Poczos, Jeff Schneider, Dale Schuurmans, Russell Greiner
We propose a Laplace approximation that creates a stochastic unit from any smooth monotonic activation function, using only Gaussian noise.
no code implementations • 13 Nov 2015 • Junier B. Oliva, Danica J. Sutherland, Barnabás Póczos, Jeff Schneider
The use of distributions and high-level features from deep architectures has become commonplace in modern computer vision.
no code implementations • 24 Sep 2015 • Danica J. Sutherland, Junier B. Oliva, Barnabás Póczos, Jeff Schneider
This work develops the first random features for pdfs whose dot product approximates kernels using these non-Euclidean metrics, allowing estimators using such kernels to scale to large datasets by working in a primal space, without computing large Gram matrices.
no code implementations • 29 Jun 2015 • Junier Oliva, Avinava Dubey, Andrew G. Wilson, Barnabas Poczos, Jeff Schneider, Eric P. Xing
In this paper we introduce Bayesian nonparametric kernel-learning (BaNK), a generic, data-driven framework for scalable learning of kernels.
no code implementations • 9 Jun 2015 • Dougal J. Sutherland, Jeff Schneider
Kernel methods give powerful, flexible, and theoretically grounded approaches to solving many problems in machine learning.
no code implementations • 5 Mar 2015 • Kirthevasan Kandasamy, Jeff Schneider, Barnabas Poczos
We prove that, for additive functions, the regret has only linear dependence on $D$, even though the function depends on all $D$ dimensions.
no code implementations • NeurIPS 2014 • Xuezhi Wang, Jeff Schneider
Similarly, work on target/conditional shift focuses on matching marginal distributions on labels $Y$ and adjusting conditional distributions $P(X|Y)$, such that $P(X)$ can be matched across domains.
no code implementations • 27 Oct 2014 • Junier Oliva, Willie Neiswanger, Barnabas Poczos, Eric Xing, Jeff Schneider
Function-to-function regression (FFR) covers a wide range of interesting applications, including time-series prediction problems and more general tasks like studying a mapping between two separate types of distributions.
no code implementations • 2 Oct 2014 • Michelle Ntampaka, Hy Trac, Dougal J. Sutherland, Nicholas Battaglia, Barnabas Poczos, Jeff Schneider
In the conventional method, we use a standard $M(\sigma_v)$ power-law scaling relation to infer the cluster mass $M$ from the line-of-sight (LOS) galaxy velocity dispersion $\sigma_v$.
Cosmology and Nongalactic Astrophysics
no code implementations • NeurIPS 2013 • Yifei Ma, Roman Garnett, Jeff Schneider
For active learning on GRFs, the commonly used V-optimality criterion queries nodes that reduce the L2 (regression) loss.
no code implementations • NeurIPS 2013 • Tzu-Kuo Huang, Jeff Schneider
Under that framework, we identify reasonable assumptions on the generative process of non-sequence data, and propose learning algorithms based on the tensor decomposition method (Anandkumar et al., 2012) to provably recover first-order Markov models and hidden Markov models.
no code implementations • 10 Nov 2013 • Junier B. Oliva, Barnabas Poczos, Timothy Verstynen, Aarti Singh, Jeff Schneider, Fang-Cheng Yeh, Wen-Yih Tseng
We present the FuSSO, a functional analogue to the LASSO, that efficiently finds a sparse set of functional input covariates to regress a real-valued response against.
no code implementations • 10 Nov 2013 • Junier B. Oliva, Willie Neiswanger, Barnabas Poczos, Jeff Schneider, Eric Xing
We study the problem of distribution to real-value regression, where one aims to regress a mapping $f$ that takes in a distribution input covariate $P\in \mathcal{I}$ (for a non-parametric family of distributions $\mathcal{I}$) and outputs a real-valued response $Y=f(P) + \epsilon$.
no code implementations • 5 Mar 2013 • Xiaoying Xu, Shirley Ho, Hy Trac, Jeff Schneider, Barnabas Poczos, Michelle Ntampaka
We investigate machine learning (ML) techniques for predicting the number of galaxies (N_gal) that occupy a halo, given the halo's properties.
Cosmology and Nongalactic Astrophysics
no code implementations • 27 Jun 2012 • Roman Garnett, Yamuna Krishnamurthy, Xuehan Xiong, Jeff Schneider, Richard Mann
In the second, active surveying, our goal is to actively query points to ultimately predict the proportion of a given class.
no code implementations • 1 Feb 2012 • Danica J. Sutherland, Liang Xiong, Barnabás Póczos, Jeff Schneider
Most machine learning algorithms, such as classification or regression, treat the individual data point as the object of interest.