no code implementations • ICML 2020 • Michal Moshkovitz, Sanjoy Dasgupta, Cyrus Rashtchian, Nave Frost
In terms of negative results, we show that popular top-down decision tree algorithms may lead to clusterings with arbitrarily large cost, and we prove that any explainable clustering must incur an $\Omega(\log k)$ approximation compared to the optimal clustering.
no code implementations • 24 Nov 2024 • Liran Nochumsohn, Michal Moshkovitz, Orly Avner, Dotan Di Castro, Omri Azencot
Time series forecasting is critical in numerous real-world applications, requiring accurate predictions of future values based on observed patterns.
no code implementations • 13 Oct 2024 • Dotan Di Castro, Omkar Joglekar, Shir Kozlovsky, Vladimir Tchuiev, Michal Moshkovitz
Training neural networks is computationally heavy and energy-intensive.
no code implementations • 12 Jan 2024 • Zhili Feng, Michal Moshkovitz, Dotan Di Castro, J. Zico Kolter
Concept explanation is a popular approach for examining how human-interpretable concepts impact the predictions of a model.
1 code implementation • 30 Dec 2023 • Omer Ben-Porat, Yishay Mansour, Michal Moshkovitz, Boaz Taitler
Principal-agent problems arise when one party acts on behalf of another, leading to conflicts of interest.
no code implementations • 9 Jun 2022 • Yishay Mansour, Michal Moshkovitz, Cynthia Rudin
Interpretability is an essential building block for trustworthiness in reinforcement learning systems.
no code implementations • 9 Jun 2022 • Chhavi Yadav, Michal Moshkovitz, Kamalika Chaudhuri
This work formalizes the role of explanations in auditing and investigates if and how model explanations can help audits.
no code implementations • 23 Feb 2022 • Lee Cohen, Yishay Mansour, Michal Moshkovitz
Given a policy of a Markov Decision Process, we define a SafeZone as a subset of states, such that most of the policy's trajectories are confined to this subset.
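As a rough illustration of the definition only (not the paper's algorithm), the sketch below samples trajectories from a fixed policy in a small synthetic MDP and estimates the fraction that stay inside a candidate SafeZone; the MDP, policy, and subset are all made-up placeholders.

```python
import numpy as np

# Hypothetical toy setup: a random MDP with n_states states, a fixed policy,
# and a candidate SafeZone given as a set of state indices. None of these
# objects come from the paper; they only illustrate the definition.
rng = np.random.default_rng(0)
n_states, horizon, n_trajectories = 10, 20, 5000

# Row-stochastic transition matrix induced by the (fixed) policy.
P = rng.random((n_states, n_states))
P /= P.sum(axis=1, keepdims=True)

start_state = 0
safe_zone = {0, 1, 2, 3}  # candidate subset of states

def trajectory_stays_inside(safe_zone):
    """Sample one trajectory of length `horizon` and report whether
    every visited state lies in `safe_zone`."""
    state = start_state
    for _ in range(horizon):
        if state not in safe_zone:
            return False
        state = rng.choice(n_states, p=P[state])
    return state in safe_zone

inside = sum(trajectory_stays_inside(safe_zone) for _ in range(n_trajectories))
escape_prob = 1.0 - inside / n_trajectories
print(f"Estimated escape probability: {escape_prob:.3f}")
# A good SafeZone keeps this estimate small while using few states.
```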
no code implementations • 1 Feb 2022 • Sanjoy Dasgupta, Nave Frost, Michal Moshkovitz
We study the faithfulness of an explanation system to the underlying prediction model.
no code implementations • 18 Feb 2021 • Robi Bhattacharjee, Jacob Imola, Michal Moshkovitz, Sanjoy Dasgupta
We propose a data parameter, $\Lambda(X)$, such that for any algorithm maintaining $O(k\text{poly}(\log n))$ centers at time $n$, there exists a data stream $X$ for which a loss of $\Omega(\Lambda(X))$ is inevitable.
1 code implementation • 14 Feb 2021 • Michal Moshkovitz, Yao-Yuan Yang, Kamalika Chaudhuri
We then show that a tighter bound on the size is possible when the data is linearly separated.
no code implementations • 9 Feb 2021 • Max Hopkins, Daniel Kane, Shachar Lovett, Michal Moshkovitz
The explosive growth of easily accessible unlabeled data has led to growing interest in active learning, a paradigm in which data-hungry learning algorithms adaptively select informative examples in order to lower prohibitively expensive labeling costs.
no code implementations • NeurIPS 2021 • Tom Hess, Michal Moshkovitz, Sivan Sabato
We give the first algorithm for this setting that obtains a constant approximation factor on the optimal risk under a random arrival order, an exponential improvement over previous work.
no code implementations • 28 Dec 2020 • Robi Bhattacharjee, Michal Moshkovitz
We also prove that if the data is sampled from a "natural" distribution, such as a mixture of $k$ Gaussians, then the new complexity measure is equal to $O(k^2\log(n))$.
no code implementations • NeurIPS 2020 • Alon Gonen, Shachar Lovett, Michal Moshkovitz
We propose a candidate solution for the case of realizable strong learning under a known distribution, based on the SQ dimension of neighboring distributions.
2 code implementations • 3 Jun 2020 • Nave Frost, Michal Moshkovitz, Cyrus Rashtchian
To allow flexibility, we develop a new explainable $k$-means clustering algorithm, ExKMC, that takes an additional parameter $k' \geq k$ and outputs a decision tree with $k'$ leaves.
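The linked repository contains the authors' implementation of ExKMC; as a loose stand-in for illustration only, the sketch below mimics the idea of a $k'$-leaf tree by running standard $k$-means and then fitting an axis-aligned decision tree with at most $k'$ leaves to the resulting cluster labels (this surrogate construction is an assumption, not the ExKMC algorithm itself).

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.tree import DecisionTreeClassifier, export_text

# Toy data; k is the number of reference clusters, k_prime >= k bounds the
# number of leaves in the explaining tree (both values chosen arbitrarily here).
X, _ = make_blobs(n_samples=500, centers=4, random_state=0)
k, k_prime = 4, 8

# Reference (unrestricted) k-means clustering.
kmeans = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)

# Surrogate explanation: an axis-aligned tree with at most k_prime leaves
# trained to reproduce the k-means labels. Each leaf is a cluster cell
# described by a few threshold cuts, so the partition is human-readable.
tree = DecisionTreeClassifier(max_leaf_nodes=k_prime, random_state=0)
tree.fit(X, kmeans.labels_)

print(export_text(tree, feature_names=["x0", "x1"]))
agreement = (tree.predict(X) == kmeans.labels_).mean()
print(f"Agreement with k-means labels: {agreement:.2%}")
```

Increasing k_prime trades off the simplicity of the tree against how closely it tracks the reference clustering, which is the flexibility the $k' \geq k$ parameter is meant to expose.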
3 code implementations • 28 Feb 2020 • Sanjoy Dasgupta, Nave Frost, Michal Moshkovitz, Cyrus Rashtchian
In terms of negative results, we show, first, that popular top-down decision tree algorithms may lead to clusterings with arbitrarily large cost, and second, that any tree-induced clustering must in general incur an $\Omega(\log k)$ approximation factor compared to the optimal clustering.
no code implementations • 8 Feb 2020 • Alon Gonen, Shachar Lovett, Michal Moshkovitz
In this paper we aim to develop combinatorial dimensions that characterize bounded memory learning.
no code implementations • 9 Aug 2019 • Michal Moshkovitz
For example, for the $k$-means cost with constant $k>1$ and random arrival order, $\Theta(\log n)$ centers suffice to achieve a constant approximation, while merely knowing $n$ in advance reduces the number of centers to a constant.
no code implementations • 9 Apr 2019 • Tal Kachman, Michal Moshkovitz, Michal Rosen-Zvi
Deep neural networks have become the default choice for many machine learning tasks, such as classification and regression.
no code implementations • 10 Dec 2017 • Michal Moshkovitz, Naftali Tishby
Designing bounded-memory algorithms is becoming increasingly important.
no code implementations • 2 Mar 2017 • Michal Moshkovitz, Naftali Tishby
We suggest analyzing neural networks through the prism of space constraints.
no code implementations • 18 Sep 2016 • Roy Fox, Michal Moshkovitz, Naftali Tishby
Among their many benefits, options are well known to make planning more efficient.