no code implementations • 16 Oct 2023 • Andrey Zhitnikov, Ori Sztyglic, Vadim Indelman
Using the general theoretical results, we present three algorithms to accelerate continuous POMDP online planning with belief-dependent rewards.
no code implementations • 13 Feb 2023 • Andrey Zhitnikov, Vadim Indelman
Moreover, using our proposed framework, we contribute an adaptive method to find a maximal feasible return (e.g., information gain) in terms of Value at Risk for the candidate action sequence, with substantial acceleration.
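To make the Value-at-Risk notion above concrete, here is a minimal sketch of selecting, among candidate action sequences, the one whose empirical VaR of return is maximal. The candidate names, return samples, and confidence level are invented for illustration; this is not the paper's adaptive algorithm.

```python
import numpy as np

def value_at_risk(returns, alpha=0.05):
    """Empirical Value at Risk: the alpha-quantile of the return samples,
    i.e., the level that returns fall below with probability alpha."""
    return np.quantile(returns, alpha)

# Hypothetical candidate action sequences, each with sampled returns
rng = np.random.default_rng(0)
candidates = {f"a{i}": rng.normal(loc=i, scale=1.0, size=1000)
              for i in range(3)}

# Choose the candidate whose worst-case (VaR) return is maximal
best = max(candidates, key=lambda a: value_at_risk(candidates[a]))
```

Maximizing VaR rather than the mean return favors candidates with good worst-case behavior at the chosen confidence level.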
no code implementations • 6 Sep 2022 • Andrey Zhitnikov, Vadim Indelman
In addition, we are not aware of any analogous approach that admits an arbitrary confidence parameter.
no code implementations • 29 May 2021 • Ori Sztyglic, Andrey Zhitnikov, Vadim Indelman
In particular, we present Simplified Information-Theoretic Particle Filter Tree (SITH-PFT), a novel variant of the MCTS algorithm that accounts for information-theoretic rewards while avoiding the need to compute them in full.
no code implementations • 12 May 2021 • Andrey Zhitnikov, Vadim Indelman
On top of this extension, our key contribution is a novel framework that simplifies decision making while assessing and controlling the simplification's impact online.
no code implementations • 13 Apr 2020 • Gregory Naitzat, Andrey Zhitnikov, Lek-Heng Lim
We study how the topology of a data set $M = M_a \cup M_b \subseteq \mathbb{R}^d$, representing two classes $a$ and $b$ in a binary classification problem, changes as it passes through the layers of a well-trained neural network, i.e., one with perfect accuracy on the training set and near-zero generalization error ($\approx 0.01\%$).
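As a toy illustration of the kind of topological summary such a study tracks, the sketch below computes Betti-0 (the number of connected components) of a two-class point cloud via an $\epsilon$-neighborhood graph. The dataset (two concentric rings) and the choice of $\epsilon$ are invented for illustration and are not from the paper.

```python
import numpy as np
from scipy.sparse.csgraph import connected_components
from scipy.spatial.distance import cdist

def betti0(points, eps=0.5):
    """Betti-0: number of connected components of the eps-neighborhood
    graph on a point cloud -- a crude proxy for its topology."""
    adj = cdist(points, points) < eps  # connect points closer than eps
    n_components, _ = connected_components(adj, directed=False)
    return n_components

# Two concentric rings in R^2: class a (inner) and class b (outer)
theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
ring = np.stack([np.cos(theta), np.sin(theta)], axis=1)
M_a, M_b = 0.5 * ring, 1.5 * ring

# Each ring is one component; their union has two
M = np.concatenate([M_a, M_b])
```

Tracking such summaries of $M_a$ and $M_b$ after each layer is one way to observe a network progressively simplifying the data's topology.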
no code implementations • ICML 2018 • Andrey Zhitnikov, Rotem Mulayoff, Tomer Michaeli
In many areas of neuroscience and biological data analysis, it is desired to reveal common patterns among a group of subjects.