no code implementations • 22 Dec 2016 • Johannes Blömer, Sascha Brauer, Kathrin Bujna
The fuzzy $K$-means problem is a popular generalization of the well-known $K$-means problem to soft clusterings.
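Fuzzy $K$-means replaces the hard point-to-cluster assignments of $K$-means with soft memberships. A minimal NumPy sketch of the objective follows; the fuzzifier exponent `m=2.0` is a common default assumed here, not a value taken from the paper:

```python
import numpy as np

def fuzzy_kmeans_cost(X, centers, m=2.0):
    """Fuzzy K-means objective: every point contributes to every center,
    weighted by a soft membership raised to the fuzzifier exponent m.
    Illustrative sketch only; m=2 is an assumed common choice."""
    # squared distances from every point to every center: shape (n, k)
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    d2 = np.maximum(d2, 1e-12)          # guard against division by zero
    # closed-form optimal memberships for fixed centers
    u = 1.0 / (d2 ** (1.0 / (m - 1)))
    u /= u.sum(axis=1, keepdims=True)   # each row sums to 1 (soft assignment)
    return (u ** m * d2).sum()
```

With centers placed on the data points the cost is (numerically) zero, and it grows as the centers move away, mirroring the hard $K$-means cost.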
no code implementations • 21 Mar 2016 • Johannes Blömer, Sascha Brauer, Kathrin Bujna
Training the parameters of statistical models to describe a given data set is a central task in the field of data mining and machine learning.
no code implementations • 26 Feb 2016 • Johannes Blömer, Christiane Lammersen, Melanie Schmidt, Christian Sohler
The $k$-means algorithm is one of the most widely used clustering heuristics.
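The $k$-means algorithm referred to here is the classic Lloyd-style local search: alternate between assigning each point to its nearest center and recomputing centers as cluster means. A minimal sketch (real implementations add empty-cluster handling, multiple restarts, and better seeding):

```python
import numpy as np

def lloyds_kmeans(X, k, iters=100, seed=0):
    """Lloyd's heuristic for k-means: alternate nearest-center assignment
    and mean recomputation until the centers stop moving."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # squared distances, shape (n, k); assign each point to nearest center
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)
        # recompute each center as the mean of its cluster (keep old if empty)
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centers[j] for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return centers, labels
```

Each iteration can only decrease the $k$-means cost, which is why the loop terminates, but the result is a local optimum that depends on the initial centers.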
no code implementations • 18 Dec 2015 • Johannes Blömer, Sascha Brauer, Kathrin Bujna
We complement these results with a randomized algorithm that imposes some natural restrictions on the input set and whose runtime is comparable to that of some of the most efficient approximation algorithms for $K$-means, i.e., linear in the number of points and the dimension but exponential in the number of clusters.
no code implementations • 20 Dec 2013 • Johannes Blömer, Kathrin Bujna
Our methods are adaptations of the well-known $K$-means++ initialization and the Gonzalez algorithm.
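The two seeding schemes mentioned above can be sketched as follows. K-means++ picks each next center randomly with probability proportional to the squared distance to the nearest already-chosen center; Gonzalez's farthest-first traversal deterministically picks the point farthest from the current centers. This is an illustrative sketch of the standard methods, not of the paper's adapted variants:

```python
import numpy as np

def kmeanspp_seeding(X, k, rng):
    """K-means++ seeding: sample each next center with probability
    proportional to the squared distance to the nearest chosen center."""
    centers = [X[rng.integers(len(X))]]
    for _ in range(k - 1):
        d2 = np.min(((X[:, None, :] - np.array(centers)[None, :, :]) ** 2)
                    .sum(axis=2), axis=1)
        centers.append(X[rng.choice(len(X), p=d2 / d2.sum())])
    return np.array(centers)

def gonzalez_seeding(X, k, rng):
    """Gonzalez's farthest-first traversal: always take the point
    farthest from the current centers (only the first pick is random)."""
    centers = [X[rng.integers(len(X))]]
    for _ in range(k - 1):
        d2 = np.min(((X[:, None, :] - np.array(centers)[None, :, :]) ** 2)
                    .sum(axis=2), axis=1)
        centers.append(X[d2.argmax()])
    return np.array(centers)
```

On well-separated clusters both schemes tend to place one seed per cluster, which is what makes them effective initializers.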
no code implementations • 18 Oct 2013 • Johannes Blömer, Kathrin Bujna, Daniel Kuntze
In this paper, we provide a new analysis of the SEM algorithm.
no code implementations • 16 Dec 2010 • Marcel R. Ackermann, Johannes Blömer, Daniel Kuntze, Christian Sohler
Assuming that the dimension $d$ is a constant, we show that for any $k$ the solution computed by this algorithm is an $O(\log k)$-approximation to the diameter $k$-clustering problem.
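In the diameter $k$-clustering problem, the cost of a partition is the largest cluster diameter, i.e., the maximum pairwise distance within any cluster. A minimal sketch of this standard cost function (the paper's algorithm and its $O(\log k)$ analysis are not reproduced here):

```python
import numpy as np

def diameter_cost(X, labels):
    """Diameter k-clustering cost: the largest diameter (maximum pairwise
    Euclidean distance) over all clusters of the given partition."""
    cost = 0.0
    for j in np.unique(labels):
        P = X[labels == j]
        if len(P) > 1:
            # all pairwise distances within cluster j
            d = np.sqrt(((P[:, None, :] - P[None, :, :]) ** 2).sum(axis=2))
            cost = max(cost, d.max())
    return cost
```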