no code implementations • 25 Jun 2024 • Depen Morwani, Itai Shapira, Nikhil Vyas, Eran Malach, Sham Kakade, Lucas Janson

Shampoo, a second-order optimization algorithm that uses a Kronecker-product preconditioner, has recently garnered increasing attention from the machine learning community.
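The core of Shampoo's Kronecker-product preconditioning can be sketched in a few lines for a single matrix-shaped parameter. This is a minimal illustration, not the paper's implementation: the learning rate, the damping term `eps`, and the eigendecomposition-based inverse fourth roots are simplifying assumptions.

```python
import numpy as np

def shampoo_step(W, G, L, R, lr=0.1, eps=1e-6):
    """One Shampoo update for a matrix parameter W with gradient G.

    L and R accumulate second-moment statistics over the rows and
    columns of the gradients; their inverse fourth roots act as the
    two Kronecker factors of the preconditioner.
    """
    L += G @ G.T
    R += G.T @ G

    def inv_fourth_root(M):
        # M is symmetric PSD, so use an eigendecomposition.
        vals, vecs = np.linalg.eigh(M)
        return vecs @ np.diag((vals + eps) ** -0.25) @ vecs.T

    W -= lr * inv_fourth_root(L) @ G @ inv_fourth_root(R)
    return W, L, R
```

On a simple quadratic such as minimizing the squared Frobenius distance to a target matrix, repeatedly calling `shampoo_step` with the gradient drives the loss down while the accumulating statistics gradually shrink the effective step size.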

no code implementations • 19 Feb 2024 • Niclas Boehmer, Yash Nair, Sanket Shah, Lucas Janson, Aparna Taneja, Milind Tambe

When resources are scarce, an allocation policy is needed to decide who receives a resource.

1 code implementation • 7 Feb 2024 • Biyonka Liang, Lily Xu, Aparna Taneja, Milind Tambe, Lucas Janson

Public health programs often provide interventions to encourage beneficiary adherence, and effectively allocating interventions is vital for producing the greatest overall health outcomes.

no code implementations • 14 Feb 2022 • Kelly W. Zhang, Lucas Janson, Susan A. Murphy

In this work, we focus on longitudinal user data collected by a large class of adaptive sampling algorithms that are designed to optimize treatment decisions online using accruing data from multiple users.

no code implementations • 11 Feb 2022 • Feicheng Wang, Lucas Janson

The linear quadratic regulator with unknown dynamics is a fundamental reinforcement learning setting with significant structure in its dynamics and cost function, yet even in this setting there is a gap between the best known regret lower-bound of $\Omega_p(\sqrt{T})$ and the best known upper-bound of $O_p(\sqrt{T}\,\text{polylog}(T))$.

1 code implementation • 20 Jan 2022 • Dae Woong Ham, Kosuke Imai, Lucas Janson

We propose a new hypothesis testing approach based on the conditional randomization test to answer the most fundamental question of conjoint analysis: Does a factor of interest matter in any way given the other factors?

2 code implementations • 10 Dec 2021 • Thomas Lew, Lucas Janson, Riccardo Bonalli, Marco Pavone

In this work, we analyze an efficient sampling-based algorithm for general-purpose reachability analysis, which remains a notoriously challenging problem with applications ranging from neural network verification to safety analysis of dynamical systems.

2 code implementations • 23 Sep 2021 • Alexander Koenig, Zixi Liu, Lucas Janson, Robert Howe

Our first experiment investigates the need for rich tactile sensing in the rewards of RL-based grasp refinement algorithms for multi-fingered robotic hands.

no code implementations • NeurIPS 2021 • Kelly W. Zhang, Lucas Janson, Susan A. Murphy

Yet there is a lack of general methods for conducting statistical inference using more complex models on data collected with (contextual) bandit algorithms; for example, current methods cannot be used for valid inference on parameters in a logistic regression model for a binary reward.

2 code implementations • 2 Nov 2020 • Feicheng Wang, Lucas Janson

Recent progress in reinforcement learning has led to remarkable performance in a range of applications, but its deployment in high-stakes settings remains quite rare.

1 code implementation • NeurIPS 2020 • Pierre Bayle, Alexandre Bayle, Lucas Janson, Lester Mackey

This work develops central limit theorems for cross-validation and consistent estimators of its asymptotic variance under weak stability conditions on the learning algorithm.
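Under such stability conditions, the resulting confidence interval is the simple normal-approximation interval built from the sample variance of the individual held-out losses. A minimal sketch (the function name and hard-coded z-quantile are ours, not the paper's):

```python
import numpy as np

def cv_error_and_ci(losses, z=1.959963984540054):
    """Point estimate and ~95% CI for the cross-validated error.

    `losses` holds the n held-out losses from k-fold CV, one per
    observation. The CLT for CV justifies the plain normal interval
    using the sample variance of these losses.
    """
    n = len(losses)
    mean = losses.mean()
    half = z * losses.std(ddof=1) / np.sqrt(n)
    return mean, (mean - half, mean + half)
```

The point is that nothing fold-specific is needed: once the held-out losses are in hand, the estimator is an ordinary sample mean and sample variance.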

1 code implementation • 2 Jul 2020 • Lu Zhang, Lucas Janson

Many modern applications seek to understand the relationship between an outcome variable $Y$ and a covariate $X$ in the presence of a (possibly high-dimensional) confounding variable $Z$.

Methodology

1 code implementation • 6 Jun 2020 • Molei Liu, Eugene Katsevich, Lucas Janson, Aaditya Ramdas

We propose the distilled CRT, a novel approach to using state-of-the-art machine learning algorithms in the CRT while drastically reducing the number of times those algorithms need to be run, thereby taking advantage of their power and the CRT's statistical guarantees without suffering the usual computational expense.
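A stripped-down sketch of the idea, for a single covariate and the simplest possible choices: the expensive model of Y is replaced here by one OLS fit on Z (the "distillation", run once rather than once per resample), and the model-X distribution of X given Z is assumed Gaussian with known mean and variance. Function and variable names are ours.

```python
import numpy as np

def d0crt_pvalue(X, Y, Z, mu_x, sigma_x, n_resample=500, rng=None):
    """Distilled CRT sketch: is X conditionally associated with Y given Z?

    Assumes the model-X conditional law X | Z ~ N(mu_x, sigma_x^2) is
    known. The Y-model is fit to Z exactly once; each resample only
    recomputes a cheap inner-product statistic.
    """
    rng = np.random.default_rng(rng)
    # Distillation: residualize Y against Z with a single OLS fit.
    beta, *_ = np.linalg.lstsq(Z, Y, rcond=None)
    eps_y = Y - Z @ beta

    def stat(x):
        return abs(np.dot(eps_y, x - mu_x))

    t_obs = stat(X)
    t_null = np.array([stat(mu_x + sigma_x * rng.standard_normal(len(Y)))
                       for _ in range(n_resample)])
    # Finite-sample-valid CRT p-value.
    return (1 + np.sum(t_null >= t_obs)) / (1 + n_resample)
```

In the actual method the distillation step can use any state-of-the-art learner; the computational saving comes from not refitting it for every resampled copy of X.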

Methodology

no code implementations • NeurIPS 2020 • Kelly W. Zhang, Lucas Janson, Susan A. Murphy

As bandit algorithms are increasingly utilized in scientific studies and industrial applications, there is an associated increasing need for reliable inference methods based on the resulting adaptively-collected data.

1 code implementation • 7 Mar 2019 • Dongming Huang, Lucas Janson

The recent paper Candès et al. (2018) introduced model-X knockoffs, a method for variable selection that provably and non-asymptotically controls the false discovery rate with no restrictions or assumptions on the dimensionality of the data or the conditional distribution of the response given the covariates.

Methodology

1 code implementation • 1 Mar 2019 • Stephen Bates, Emmanuel Candès, Lucas Janson, Wenshuo Wang

Model-X knockoffs is a wrapper that transforms essentially any feature importance measure into a variable selection algorithm, which discovers true effects while rigorously controlling the expected fraction of false positives.
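The selection step that the wrapper bolts onto any importance measure is short. Below is a sketch of the "knockoffs+" threshold rule, assuming the feature statistics `W` (one per feature, sign-symmetric under the null, large positive values indicating a real effect) have already been computed from the data and its knockoff copy:

```python
import numpy as np

def knockoff_select(W, q=0.1):
    """Knockoff filter ('knockoffs+' variant): pick the smallest
    threshold t whose estimated false discovery proportion is <= q,
    then select the features with W_j >= t.
    """
    for t in np.sort(np.abs(W[W != 0])):
        fdp_hat = (1 + np.sum(W <= -t)) / max(1, np.sum(W >= t))
        if fdp_hat <= q:
            return np.flatnonzero(W >= t)
    return np.array([], dtype=int)  # no threshold achieves FDP <= q
```

Because negative statistics stand in for false positives, the procedure needs no model of the response at all — only the sign-symmetry that the knockoff construction guarantees.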

Methodology

no code implementations • 16 Apr 2018 • Lucas Janson, Tommy Hu, Marco Pavone

This paper addresses the problem of planning a safe (i.e., collision-free) trajectory from an initial state to a goal region when the obstacle space is a priori unknown and is incrementally revealed online, e.g., through line-of-sight perception.

3 code implementations • 7 Oct 2016 • Emmanuel Candès, Yingying Fan, Lucas Janson, Jinchi Lv

Whereas the knockoffs procedure is constrained to homoscedastic linear models with $n\ge p$, the key innovation here is that model-X knockoffs provide valid inference from finite samples in settings in which the conditional distribution of the response is arbitrary and completely unknown.

Methodology Statistics Theory Applications

no code implementations • 5 Dec 2015 • Yin-Lam Chow, Mohammad Ghavamzadeh, Lucas Janson, Marco Pavone

In many sequential decision-making problems one is interested in minimizing an expected cumulative cost while taking into account risk, i.e., increased awareness of events of small probability and high consequences.

1 code implementation • 30 Apr 2015 • Lucas Janson, Edward Schmerling, Marco Pavone

MCMP (Monte Carlo Motion Planning) applies this collision probability (CP) estimation procedure to motion planning by iteratively (i) computing an (approximately) optimal path for the deterministic version of the problem (here, using the FMT* algorithm), (ii) computing the CP of this path, and (iii) inflating or deflating the obstacles by a common factor depending on whether the CP is higher or lower than a target value.
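The outer loop (i)–(iii) can be sketched abstractly. Here `plan` and `estimate_cp` are placeholders for the FMT*-based deterministic planner and the Monte Carlo CP estimator, and driving the inflation factor by bisection is a simplification of the paper's inflate/deflate rule:

```python
def mcmp(plan, estimate_cp, cp_target=0.01, lo=0.5, hi=2.0, iters=20):
    """MCMP outer-loop sketch: search over a common obstacle inflation
    factor until the planned path's collision probability matches the
    target. `plan(factor)` returns the deterministic-optimal path with
    obstacles inflated by `factor`; `estimate_cp(path)` estimates that
    path's collision probability. Assumes CP decreases as the factor
    grows and that `hi` is initially safe.
    """
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if estimate_cp(plan(mid)) > cp_target:
            lo = mid   # too risky: inflate obstacles more
        else:
            hi = mid   # under the target: try deflating
    return plan(hi)    # hi always satisfies the CP target
```

The invariant is that the returned path meets the CP target while the factor is pushed as low as the target allows, recovering a near-optimal safe trajectory.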

Robotics
