no code implementations • 22 Mar 2024 • Argaman Mordoch, Enrico Scala, Roni Stern, Brendan Juba
We prove that learning non-trivial safe action models with conditional effects may require an exponential number of samples.
no code implementations • 27 Jan 2024 • Daniel Hsu, Jizhou Huang, Brendan Juba
In this work, we give positive and negative results on auditing for Gaussian distributions. On the positive side, we present an alternative approach that leverages these advances in agnostic learning to obtain the first polynomial-time approximation scheme (PTAS) for auditing nontrivial combinatorial subgroup fairness: we show how to audit statistical notions of fairness over homogeneous halfspace subgroups when the features are Gaussian.
no code implementations • 8 Jun 2023 • Ionela G. Mocanu, Vaishak Belle, Brendan Juba
To circumvent the negative results in the literature on the difficulty of robust learning with the PAC semantics, we consider so-called implicit learning, where we are able to incorporate observations into the background theory in service of deciding the entailment of an epistemic query.
no code implementations • 24 May 2022 • Elena Grigorescu, Brendan Juba, Karl Wimmer, Ning Xie
In seminal work on DPPs in Machine Learning, Kulesza conjectured in his PhD Thesis (2011) that the problem of finding a maximum likelihood DPP model for a given data set is NP-complete.
no code implementations • 23 Mar 2022 • Brendan Juba, Roni Stern
In this technical report, we provide a complete example of running the SAM+ algorithm, an algorithm for learning stochastic planning action models, on a simplified PPDDL version of the Coffee problem.
no code implementations • 15 Nov 2021 • Brendan Juba, Leda Liang
Machine learning and statistical models often aim to describe the majority of the data.
no code implementations • 29 Sep 2021 • Rina Panigrahy, Brendan Juba, Zihao Deng, Xin Wang, Zee Fryer
We propose a modular architecture for lifelong learning of hierarchically structured tasks.
no code implementations • 12 Jul 2021 • Zihao Deng, Siddartha Devic, Brendan Juba
Many reinforcement learning (RL) environments in practice feature enormous state spaces that can be described compactly by a "factored" structure and modeled as Factored Markov Decision Processes (FMDPs).
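As a hedged illustration of the factored idea (our toy example, not taken from the paper — the variable names and probabilities are invented): each state variable's next value depends only on a small "parent" set of variables, so the joint transition factorizes into per-variable tables instead of one table over the full state space.

```python
import random

# Toy factored MDP: state = (light_on, door_open), two boolean variables.
# Each variable transitions based only on its parents, so the joint
# transition is a product of small factors rather than a 2^n x 2^n table.

def light_transition(light_on, action):
    """P(light_on' = True | light_on, action); parent set: {light_on}."""
    if action == "toggle_light":
        return 0.1 if light_on else 0.9   # toggle succeeds w.p. 0.9
    return 0.95 if light_on else 0.0      # light stays on w.p. 0.95

def door_transition(door_open, light_on, action):
    """P(door_open' = True | door_open, light_on, action); parents: {door_open, light_on}."""
    if action == "open_door":
        return 0.8 if light_on else 0.3   # easier to open with the light on
    return 1.0 if door_open else 0.0      # otherwise the door keeps its state

def step(state, action, rng):
    """Sample the next state by sampling each factored variable independently."""
    light_on, door_open = state
    new_light = rng.random() < light_transition(light_on, action)
    new_door = rng.random() < door_transition(door_open, light_on, action)
    return (new_light, new_door)

rng = random.Random(0)
state = (False, False)
state = step(state, "toggle_light", rng)
state = step(state, "open_door", rng)
```

The point of the factorization is that each conditional table is exponentially smaller than the joint transition matrix, which is what FMDP algorithms exploit.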
1 code implementation • 9 Jul 2021 • Brendan Juba, Hai S. Le, Roni Stern
However, model learning approaches frequently do not provide safety guarantees: the learned model may assume actions are applicable when they are not, and may incorrectly capture actions' effects.
no code implementations • ICLR 2021 • Atish Agarwala, Abhimanyu Das, Brendan Juba, Rina Panigrahy, Vatsal Sharan, Xin Wang, Qiuyi Zhang
Can deep learning solve multiple tasks simultaneously, even when they are unrelated and very different?
1 code implementation • 19 Feb 2021 • Honghua Zhang, Brendan Juba, Guy Van Den Broeck
Generating functions, which are widely used in combinatorics and probability theory, encode function values into the coefficients of a polynomial.
1 code implementation • 23 Oct 2020 • Alexander P. Rader, Ionela G. Mocanu, Vaishak Belle, Brendan Juba
In this work, we extend implicit learning in PAC-Semantics to handle noisy data in the form of intervals and threshold uncertainty in the language of linear arithmetic.
no code implementations • 11 Jun 2020 • Mahdi Cheraghchi, Elena Grigorescu, Brendan Juba, Karl Wimmer, Ning Xie
We introduce and study the model of list learning with attribute noise.
no code implementations • 24 Jun 2019 • Brendan Juba
We consider the problem of learning rules from a data set that support a proof of a given query, under Valiant's PAC-Semantics.
no code implementations • NeurIPS 2019 • Vaishak Belle, Brendan Juba
We consider the problem of answering queries about formulas of first-order logic based on background knowledge partially represented explicitly as other formulas, and partially represented as examples independently drawn from a fixed probability distribution.
no code implementations • 28 Jun 2018 • Brendan Juba
In this work we consider the use of bounded-degree fragments of the "sum-of-squares" logic as a probability logic.
no code implementations • 26 Jun 2018 • John Hainline, Brendan Juba, Hai S. Le, David Woodruff
We consider the following conditional linear regression problem: the task is to identify both (i) a $k$-DNF condition $c$ and (ii) a linear rule $f$ such that the probability of $c$ is (approximately) at least some given bound $\mu$, and $f$ minimizes the $\ell_p$ loss of predicting the target $z$ in the distribution of examples conditioned on $c$.
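A minimal sketch of evaluating one candidate pair (c, f) against this objective — checking the condition's empirical probability against the bound mu and computing the conditional squared loss. All names here are our own illustration; plain predicate and predictor functions stand in for the k-DNF condition and the linear rule.

```python
def evaluate_candidate(examples, condition, predictor, mu):
    """Score a candidate (condition c, linear rule f) for conditional regression.

    examples: list of (attributes, x, z) tuples, where `attributes` are the
    booleans the condition reads, x is the regression input, z is the target.
    condition: attributes -> bool (stands in for a k-DNF formula c)
    predictor: x -> prediction (stands in for a linear rule f)
    mu: required lower bound on the empirical probability of c
    Returns (meets_mu, conditional_l2_loss); loss is None if c never fires.
    """
    selected = [(x, z) for attrs, x, z in examples if condition(attrs)]
    prob_c = len(selected) / len(examples)
    if not selected:
        return (False, None)
    loss = sum((predictor(x) - z) ** 2 for x, z in selected) / len(selected)
    return (prob_c >= mu, loss)

# Toy data: z = 2*x exactly on the examples where the first attribute holds.
data = [((True,), x, 2 * x) for x in range(5)] + \
       [((False,), x, 0.0) for x in range(5)]
ok, loss = evaluate_candidate(data, lambda a: a[0], lambda x: 2 * x, mu=0.4)
assert ok and loss == 0.0
```

The hard part the paper addresses is searching over exponentially many k-DNF conditions, not scoring a fixed one; this sketch only shows what a candidate is scored on.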
1 code implementation • 6 Jun 2018 • Diego Calderon, Brendan Juba, Sirui Li, Zongyi Li, Lisa Ruan
Work in machine learning and statistics commonly focuses on building models that capture the vast majority of data, possibly ignoring a segment of the population as outliers.
no code implementations • 13 Nov 2017 • Brendan Juba, Zongyi Li, Evan Miller
The main shortcoming of this formulation of the task is that it assumes access to full-information (i.e., fully specified) examples; relatedly, it offers no role for declarative background knowledge, as such knowledge is rendered redundant in the abduction task by complete information.
no code implementations • 24 May 2017 • Roni Stern, Brendan Juba
In this paper we explore the theoretical boundaries of planning in a setting where no model of the agent's actions is given.
no code implementations • 18 Aug 2016 • Brendan Juba
Machine learning and statistics typically focus on building models that capture the vast majority of the data, possibly ignoring a small subset of data as "noise" or "outliers."
no code implementations • 16 Apr 2013 • Brendan Juba
The learning setting we consider is a partial-information, restricted-distribution setting that generalizes learning parities over the uniform distribution from partial information, another task that is known not to be achievable directly in various models (cf.