Search Results for author: Michal Moshkovitz

Found 21 papers, 4 papers with code

Explainable k-Means and k-Medians Clustering

no code implementations · ICML 2020 · Michal Moshkovitz, Sanjoy Dasgupta, Cyrus Rashtchian, Nave Frost

In terms of negative results, we show that popular top-down decision tree algorithms may lead to clusterings with arbitrarily large cost, and we prove that any explainable clustering must incur an $\Omega(\log k)$ approximation compared to the optimal clustering.

Clustering
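
To make the objects in this result concrete, here is a minimal Python sketch, assuming scikit-learn and NumPy are available: it fits standard k-means, then fits a small axis-aligned decision tree as a stand-in for an explainable clustering (this surrogate is not the paper's algorithm), and compares the k-means cost of the two partitions.

```python
# Illustrative sketch (not the paper's algorithm): approximate a k-means
# partition with a small axis-aligned decision tree and compare costs.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.tree import DecisionTreeClassifier

def kmeans_cost(X, labels):
    # Sum of squared distances of each point to the mean of its cluster.
    return sum(((X[labels == c] - X[labels == c].mean(axis=0)) ** 2).sum()
               for c in np.unique(labels))

k = 5
X, _ = make_blobs(n_samples=2000, centers=k, random_state=0)

km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)

# Surrogate "explainable" clustering: a decision tree with exactly k leaves,
# trained to imitate the k-means labels; each leaf is one cluster.
tree = DecisionTreeClassifier(max_leaf_nodes=k, random_state=0).fit(X, km.labels_)
tree_labels = tree.apply(X)  # leaf index serves as the cluster id

print("k-means cost:      ", kmeans_cost(X, km.labels_))
print("tree-induced cost: ", kmeans_cost(X, tree_labels))
```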

An Axiomatic Approach to Model-Agnostic Concept Explanations

no code implementations · 12 Jan 2024 · Zhili Feng, Michal Moshkovitz, Dotan Di Castro, J. Zico Kolter

Concept explanation is a popular approach for examining how human-interpretable concepts impact the predictions of a model.

Model Selection

Principal-Agent Reward Shaping in MDPs

1 code implementation · 30 Dec 2023 · Omer Ben-Porat, Yishay Mansour, Michal Moshkovitz, Boaz Taitler

Principal-agent problems arise when one party acts on behalf of another, leading to conflicts of interest.

XAudit : A Theoretical Look at Auditing with Explanations

no code implementations · 9 Jun 2022 · Chhavi Yadav, Michal Moshkovitz, Kamalika Chaudhuri

This work formalizes the role of explanations in auditing and investigates if and how model explanations can help audits.

BIG-bench Machine Learning · counterfactual

Finding Safe Zones of Markov Decision Processes Policies

no code implementations · 23 Feb 2022 · Lee Cohen, Yishay Mansour, Michal Moshkovitz

Given a policy of a Markov Decision Process, we define a SafeZone as a subset of states such that most of the policy's trajectories are confined to this subset.
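
As a toy illustration of this definition (not the paper's method), the sketch below fixes the Markov chain induced by a policy and uses Monte Carlo rollouts to estimate the probability that a trajectory escapes a candidate SafeZone; the transition matrix, horizon, and candidate set are arbitrary made-up values.

```python
# Toy illustration of the SafeZone notion: for a fixed policy the MDP induces a
# Markov chain; we estimate the fraction of finite trajectories that ever leave
# a candidate subset of states. All numbers below are made up for illustration.
import numpy as np

rng = np.random.default_rng(0)

# Transition matrix of the induced chain over 4 states (rows sum to 1).
P = np.array([[0.8, 0.2, 0.0, 0.0],
              [0.1, 0.7, 0.2, 0.0],
              [0.0, 0.1, 0.6, 0.3],
              [0.0, 0.0, 0.2, 0.8]])
start, horizon, n_traj = 0, 20, 10_000
safe_zone = {0, 1, 2}          # candidate SafeZone

escapes = 0
for _ in range(n_traj):
    s = start
    for _ in range(horizon):
        s = rng.choice(4, p=P[s])
        if s not in safe_zone:
            escapes += 1
            break

print(f"estimated escape probability: {escapes / n_traj:.3f}")
```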

Framework for Evaluating Faithfulness of Local Explanations

no code implementations · 1 Feb 2022 · Sanjoy Dasgupta, Nave Frost, Michal Moshkovitz

We study the faithfulness of an explanation system to the underlying prediction model.

Online $k$-means Clustering on Arbitrary Data Streams

no code implementations · 18 Feb 2021 · Robi Bhattacharjee, Jacob Imola, Michal Moshkovitz, Sanjoy Dasgupta

We propose a data parameter, $\Lambda(X)$, such that for any algorithm maintaining $O(k\text{poly}(\log n))$ centers at time $n$, there exists a data stream $X$ for which a loss of $\Omega(\Lambda(X))$ is inevitable.

Clustering
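
For readers unfamiliar with this setting, here is a minimal sketch, purely illustrative and not an algorithm from the paper: points arrive in a stream, centers may only be added (never revised) as points arrive, and the loss is each point's squared distance to the centers that are eventually kept. The threshold rule below is a naive placeholder.

```python
# Toy online k-means-style setting (illustration only, not the paper's method):
# centers are added irrevocably as points arrive, and the loss is each point's
# squared distance to the final set of centers.
import numpy as np

rng = np.random.default_rng(0)
stream = rng.normal(size=(5000, 2)) + rng.integers(0, 5, size=(5000, 1)) * 4.0

centers = []
threshold = 2.0                     # naive placeholder rule for taking a center
for x in stream:
    if not centers or min(np.sum((x - c) ** 2) for c in centers) > threshold ** 2:
        centers.append(x)           # irrevocable decision

C = np.array(centers)
loss = sum(np.min(np.sum((x - C) ** 2, axis=1)) for x in stream)
print(f"kept {len(centers)} centers, k-means loss {loss:.1f}")
```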

Connecting Interpretability and Robustness in Decision Trees through Separation

1 code implementation · 14 Feb 2021 · Michal Moshkovitz, Yao-Yuan Yang, Kamalika Chaudhuri

We then show that a tighter bound on the size is possible when the data is linearly separated.

Bounded Memory Active Learning through Enriched Queries

no code implementations · 9 Feb 2021 · Max Hopkins, Daniel Kane, Shachar Lovett, Michal Moshkovitz

The explosive growth of easily accessible unlabeled data has led to growing interest in active learning, a paradigm in which data-hungry learning algorithms adaptively select informative examples in order to lower prohibitively expensive labeling costs.

Active Learning

A Constant Approximation Algorithm for Sequential Random-Order No-Substitution k-Median Clustering

no code implementations · NeurIPS 2021 · Tom Hess, Michal Moshkovitz, Sivan Sabato

We give the first algorithm for this setting that obtains a constant approximation factor on the optimal risk under a random arrival order, an exponential improvement over previous work.

Clustering

No-substitution k-means Clustering with Adversarial Order

no code implementations · 28 Dec 2020 · Robi Bhattacharjee, Michal Moshkovitz

We also prove that if the data is sampled from a "natural" distribution, such as a mixture of $k$ Gaussians, then the new complexity measure is equal to $O(k^2\log(n))$.

Clustering

Towards a Combinatorial Characterization of Bounded-Memory Learning

no code implementations · NeurIPS 2020 · Alon Gonen, Shachar Lovett, Michal Moshkovitz

We propose a candidate solution for the case of realizable strong learning under a known distribution, based on the SQ dimension of neighboring distributions.

PAC learning

ExKMC: Expanding Explainable $k$-Means Clustering

2 code implementations · 3 Jun 2020 · Nave Frost, Michal Moshkovitz, Cyrus Rashtchian

To allow flexibility, we develop a new explainable $k$-means clustering algorithm, ExKMC, that takes an additional parameter $k' \geq k$ and outputs a decision tree with $k'$ leaves.

Clustering
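
The authors released ExKMC as a Python package (pip install ExKMC); a minimal usage sketch based on the repository's documented interface is below. The class name Tree, the max_leaves parameter (the $k'$ above), and fit_predict are taken from that documentation and should be checked against the release you install.

```python
# Sketch of using the ExKMC package (pip install ExKMC); names below follow the
# project's documented interface and may differ slightly between releases.
from sklearn.datasets import make_blobs
from ExKMC.Tree import Tree

k = 5
X, _ = make_blobs(n_samples=2000, centers=k, random_state=0)

# max_leaves plays the role of k' >= k: more leaves, lower cost, larger tree.
tree = Tree(k=k, max_leaves=2 * k)
labels = tree.fit_predict(X)   # builds the threshold tree and assigns clusters
```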

Explainable $k$-Means and $k$-Medians Clustering

3 code implementations · 28 Feb 2020 · Sanjoy Dasgupta, Nave Frost, Michal Moshkovitz, Cyrus Rashtchian

In terms of negative results, we show, first, that popular top-down decision tree algorithms may lead to clusterings with arbitrarily large cost, and second, that any tree-induced clustering must in general incur an $\Omega(\log k)$ approximation factor compared to the optimal clustering.

Clustering

Towards a combinatorial characterization of bounded memory learning

no code implementations · 8 Feb 2020 · Alon Gonen, Shachar Lovett, Michal Moshkovitz

In this paper we aim to develop combinatorial dimensions that characterize bounded memory learning.

PAC learning

Unexpected Effects of Online no-Substitution k-means Clustering

no code implementations · 9 Aug 2019 · Michal Moshkovitz

For example, for the k-means cost with constant $k>1$ and random order, $\Theta(\log n)$ centers are enough to achieve a constant approximation, while mere a priori knowledge of $n$ reduces the number of centers to a constant.

Clustering · Online Clustering

Novel Uncertainty Framework for Deep Learning Ensembles

no code implementations · 9 Apr 2019 · Tal Kachman, Michal Moshkovitz, Michal Rosen-Zvi

Deep neural networks have become the default choice for many machine learning tasks, such as classification and regression.

BIG-bench Machine Learning · Gaussian Processes · +2

A General Memory-Bounded Learning Algorithm

no code implementations · 10 Dec 2017 · Michal Moshkovitz, Naftali Tishby

Designing bounded-memory algorithms is becoming increasingly important.

Mixing Complexity and its Applications to Neural Networks

no code implementations · 2 Mar 2017 · Michal Moshkovitz, Naftali Tishby

We suggest analyzing neural networks through the prism of space constraints.

Principled Option Learning in Markov Decision Processes

no code implementations · 18 Sep 2016 · Roy Fox, Michal Moshkovitz, Naftali Tishby

Among their many benefits, options are well known to make planning more efficient.
