Search Results for author: Frederik Mallmann-Trenn

Found 9 papers, 0 papers with code

Hierarchical Clustering: Objective Functions and Algorithms

no code implementations • 7 Apr 2017 • Vincent Cohen-Addad, Varun Kanade, Frederik Mallmann-Trenn, Claire Mathieu

For similarity-based hierarchical clustering, Dasgupta showed that the divisive sparsest-cut approach achieves an $O(\log^{3/2} n)$-approximation.

Clustering • Combinatorial Optimization • +1
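
As a rough illustration of the divisive approach mentioned in the snippet (not the paper's algorithm): exact sparsest cut is NP-hard, so the sketch below substitutes NetworkX's Kernighan-Lin bisection as a stand-in cut routine; the graph, clique sizes, and function names are invented for the example.

```python
# A minimal sketch of divisive hierarchical clustering in the spirit of the
# recursive "cut, then recurse on both sides" approach. Kernighan-Lin
# bisection is used here only as a cheap surrogate for the sparsest cut.
import networkx as nx
from networkx.algorithms.community import kernighan_lin_bisection

def divisive_tree(G):
    """Recursively split G into a binary tree of vertex sets."""
    nodes = list(G.nodes)
    if len(nodes) <= 1:
        return nodes  # leaf: a single data point
    left, right = kernighan_lin_bisection(G)  # surrogate for a sparsest cut
    return [divisive_tree(G.subgraph(left).copy()),
            divisive_tree(G.subgraph(right).copy())]

if __name__ == "__main__":
    # Two dense cliques joined by a single edge: the first cut should
    # separate the cliques, giving the top split of the hierarchy.
    G = nx.disjoint_union(nx.complete_graph(5), nx.complete_graph(5))
    G.add_edge(0, 5)
    print(divisive_tree(G))
```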

Instance-Optimality in the Noisy Value-and Comparison-Model --- Accept, Accept, Strong Accept: Which Papers get in?

no code implementations • 21 Jun 2018 • Vincent Cohen-Addad, Frederik Mallmann-Trenn, Claire Mathieu

In this paper, we show optimal worst-case query complexity for the max, threshold-$v$ and top-$k$ problems.

Recommendation Systems
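
A toy illustration of the noisy query model behind these problems, assuming each comparison is flipped independently with probability p < 1/2: repeating a query and taking a majority vote is the standard way to boost reliability. This is not the paper's instance-optimal strategy; all names and constants below are invented for the example.

```python
# Toy max-finding with noisy comparisons boosted by majority voting.
import random

P_NOISE = 0.2  # assumed noise rate (illustrative)

def noisy_less(values, i, j):
    """Single noisy comparison: is values[i] < values[j]? Flipped w.p. P_NOISE."""
    truth = values[i] < values[j]
    return truth if random.random() > P_NOISE else not truth

def robust_less(values, i, j, repeats=15):
    """Majority vote over repeated noisy comparisons."""
    votes = sum(noisy_less(values, i, j) for _ in range(repeats))
    return votes > repeats // 2

def noisy_max(values):
    """Sequential tournament for the maximum using boosted comparisons."""
    best = 0
    for i in range(1, len(values)):
        if robust_less(values, best, i):
            best = i
    return best

if __name__ == "__main__":
    vals = [3, 17, 8, 42, 5, 29]
    print("argmax estimate:", noisy_max(vals), "true argmax:", vals.index(max(vals)))
```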

Clustering Redemption–Beyond the Impossibility of Kleinberg’s Axioms

no code implementations • NeurIPS 2018 • Vincent Cohen-Addad, Varun Kanade, Frederik Mallmann-Trenn

In this work, we take a different approach, based on the observation that the consistency axiom fails to be satisfied when the “correct” number of clusters changes.

Clustering

Hierarchical Clustering Beyond the Worst-Case

no code implementations • NeurIPS 2017 • Vincent Cohen-Addad, Varun Kanade, Frederik Mallmann-Trenn

Hierarchical clustering, that is, computing a recursive partitioning of a dataset to obtain clusters at increasingly finer granularity, is a fundamental problem in data analysis.

Clustering • General Classification • +1
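
For readers unfamiliar with the problem statement, the following generic sketch shows what a hierarchy buys you: a single structure that can be cut at any level to obtain coarser or finer clusterings. It uses SciPy's standard agglomerative linkage purely to illustrate the object being computed, not the beyond-worst-case algorithm studied in the paper; the toy data are made up.

```python
# Build one hierarchy, then read off clusterings at several granularities.
import numpy as np
from scipy.cluster.hierarchy import linkage, cut_tree

rng = np.random.default_rng(0)
# Three well-separated 2-D blobs (toy data, assumed for the example).
X = np.vstack([rng.normal(loc=c, scale=0.3, size=(10, 2))
               for c in ((0, 0), (5, 0), (0, 5))])

Z = linkage(X, method="average")               # the full hierarchy
for k in (2, 3, 6):                            # coarser or finer cuts
    labels = cut_tree(Z, n_clusters=[k]).ravel()
    print(f"{k} clusters:", labels)
```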

Learning Hierarchically Structured Concepts

no code implementations • 10 Sep 2019 • Nancy Lynch, Frederik Mallmann-Trenn

Our main goal is to introduce a general framework for these tasks and prove formally how both (recognition and learning) can be achieved.

Online Page Migration with ML Advice

no code implementations • 9 Jun 2020 • Piotr Indyk, Frederik Mallmann-Trenn, Slobodan Mitrović, Ronitt Rubinfeld

In contrast, we show that if the algorithm is given a prediction of the input sequence, then it can achieve a competitive ratio that tends to $1$ as the prediction error rate tends to $0$.
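
The following toy model only makes the setting concrete: serving a request at node r while the page sits at node p costs d(p, r), and migrating the page costs D·d(p, q). The "trust the prediction" rule, the line metric, and the window size are assumptions for the example and not the algorithm analysed in the paper.

```python
# Toy online page migration with a predicted request sequence.
from collections import Counter

D = 4        # page size / migration cost factor (assumed)
WINDOW = 8   # look-ahead window into the prediction (assumed)

def dist(a, b):
    return abs(a - b)  # nodes live on a line metric for simplicity

def run(requests, predicted):
    page, cost = 0, 0
    for t, r in enumerate(requests):
        cost += dist(page, r)                         # serve the request
        window = predicted[t + 1 : t + 1 + WINDOW]
        if window:
            target = Counter(window).most_common(1)[0][0]
            if D * dist(page, target) < sum(dist(page, x) for x in window):
                cost += D * dist(page, target)        # migrate if it looks worthwhile
                page = target
    return cost

if __name__ == "__main__":
    reqs = [0] * 10 + [7] * 10                        # demand shifts from node 0 to node 7
    print("cost with a perfect prediction:", run(reqs, reqs))
    print("cost with a useless prediction:", run(reqs, [0] * 20))
```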

On the Power of Louvain in the Stochastic Block Model

no code implementations • NeurIPS 2020 • Vincent Cohen-Addad, Adrian Kosowski, Frederik Mallmann-Trenn, David Saulpic

A classic problem in machine learning and data analysis is to partition the vertices of a network in such a way that vertices in the same set are densely connected and vertices in different sets are loosely connected.

BIG-bench Machine Learning • Stochastic Block Model
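
A minimal sketch of the setting in the title, assuming NetworkX is available: sample a two-community stochastic block model and run the off-the-shelf Louvain heuristic on it. Block sizes and edge probabilities are arbitrary illustrative choices, not parameters from the paper.

```python
# Sample an SBM graph and recover communities with Louvain.
import networkx as nx
from networkx.algorithms.community import louvain_communities

sizes = [50, 50]                 # two planted communities
probs = [[0.30, 0.02],           # dense within, sparse across
         [0.02, 0.30]]
G = nx.stochastic_block_model(sizes, probs, seed=0)

communities = louvain_communities(G, seed=0)
print("number of communities found:", len(communities))
print("community sizes:", sorted(len(c) for c in communities))
```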

Beyond Impossibility: Balancing Sufficiency, Separation and Accuracy

no code implementations • 24 May 2022 • Limor Gultchin, Vincent Cohen-Addad, Sophie Giffard-Roisin, Varun Kanade, Frederik Mallmann-Trenn

Among the various aspects of algorithmic fairness studied in recent years, the tension between satisfying both sufficiency and separation -- e.g., the ratios of positive or negative predictive values, and false positive or false negative rates across groups -- has received much attention.

Fairness
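
To make the two criteria concrete, the sketch below computes the group-wise quantities named in the abstract (predictive values for sufficiency, error rates for separation) on made-up labels and predictions; it is not the balancing method proposed in the paper.

```python
# Group-wise sufficiency- and separation-style metrics on toy data.
import numpy as np

def rates(y_true, y_pred):
    tp = np.sum((y_pred == 1) & (y_true == 1))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    tn = np.sum((y_pred == 0) & (y_true == 0))
    return {
        "PPV": tp / (tp + fp),   # sufficiency-style quantities
        "NPV": tn / (tn + fn),
        "FPR": fp / (fp + tn),   # separation-style quantities
        "FNR": fn / (fn + tp),
    }

y_true = np.array([1, 1, 0, 0, 1, 0, 1, 0, 1, 0, 0, 1])
y_pred = np.array([1, 0, 0, 1, 1, 0, 1, 1, 0, 0, 0, 1])
group  = np.array([0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1])

for g in (0, 1):
    metrics = rates(y_true[group == g], y_pred[group == g])
    print(f"group {g}:", {k: round(float(v), 2) for k, v in metrics.items()})
```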

Learning Hierarchically-Structured Concepts II: Overlapping Concepts, and Networks With Feedback

no code implementations • 19 Apr 2023 • Nancy Lynch, Frederik Mallmann-Trenn

We continue our study from Lynch and Mallmann-Trenn (Neural Networks, 2021), of how concepts that have hierarchical structure might be represented in brain-like neural networks, how these representations might be used to recognize the concepts, and how these representations might be learned.
