no code implementations • 9 Jan 2025 • Timo Saala, Lucie Flek, Alexander Jung, Akbar Karimi, Alexander Schmidt, Matthias Schott, Philipp Soldin, Christopher Wiebusch
Correlations between input parameters play a crucial role in many scientific classification tasks, since these are often related to fundamental laws of nature.
no code implementations • 25 Oct 2024 • Diana Pfau, Alexander Jung
AI systems increasingly shape critical decisions across personal and societal domains.
no code implementations • 3 Sep 2024 • Alexander Jung, Yasmin SarcheshmehPour, Amirhossein Mohammadi
The available local datasets might fail to provide sufficient statistical power to train high-dimensional models (such as deep neural networks) effectively.
no code implementations • 9 Apr 2024 • Christophe Nicolet, Antoine Béguin, Matthieu Dreyer, Sébastien Alligné, Alexander Jung, Diogo Cordeiro, Carlos Moreira
This paper addresses the quantification and comparison of the contribution of pumped storage power plants (PSPP) to synchronous inertia and synthetic inertia when fixed-speed and variable-speed motor-generator technologies are considered, respectively.
no code implementations • 31 Aug 2023 • Reza Mirzaeifard, Naveen K. D. Venkategowda, Alexander Jung, Stefan Werner
This paper proposes a proximal variant of the alternating direction method of multipliers (ADMM) for distributed optimization.
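To make the flavor of such a scheme concrete, here is a minimal sketch of consensus ADMM with an extra quadratic proximal term in the local update; the toy least-squares objective, the penalty values rho and tau, and all variable names are illustrative assumptions, not the algorithm from the paper.

    import numpy as np

    rng = np.random.default_rng(0)
    n, d, agents = 30, 5, 4
    A = [rng.standard_normal((n, d)) for _ in range(agents)]
    x_true = rng.standard_normal(d)
    b = [Ai @ x_true + 0.1 * rng.standard_normal(n) for Ai in A]

    rho, tau = 1.0, 0.5                   # ADMM penalty and proximal weight (illustrative)
    x = [np.zeros(d) for _ in range(agents)]
    u = [np.zeros(d) for _ in range(agents)]
    z = np.zeros(d)

    for _ in range(100):
        for i in range(agents):
            # local proximal update: least squares + consensus and proximal penalties
            H = A[i].T @ A[i] + (rho + tau) * np.eye(d)
            g = A[i].T @ b[i] + rho * (z - u[i]) + tau * x[i]
            x[i] = np.linalg.solve(H, g)
        z = np.mean([x[i] + u[i] for i in range(agents)], axis=0)  # consensus (averaging) step
        for i in range(agents):
            u[i] = u[i] + x[i] - z                                 # dual ascent step

    print(np.linalg.norm(z - x_true))     # should be small after convergence

The proximal term tau * ||x - x_i||^2 keeps each local update close to the previous iterate, which is what distinguishes proximal ADMM variants from the plain method.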
1 code implementation • 17 Nov 2021 • Pasi Pyrrö, Hassan Naseri, Alexander Jung
This final processing stage used in the AIR detector significantly improves its performance and usability in the face of real-world aerial SAR missions.
1 code implementation • 26 May 2021 • Yasmin SarcheshmehPour, Yu Tian, Linli Zhang, Alexander Jung
Our main analytic contribution is an upper bound on the deviation between the local model parameters learnt by our algorithm and an oracle-based clustered federated learning method.
no code implementations • 26 Oct 2020 • Dick Carrillo, Lam Duc Nguyen, Pedro H. J. Nardelli, Evangelos Pournaras, Plinio Morita, Demóstenes Z. Rodríguez, Merim Dzaferagic, Harun Siljak, Alexander Jung, Laurent Hébert-Dufresne, Irene Macaluso, Mehar Ullah, Gustavo Fraidenraich, Petar Popovski
In this sense, we expect active participation of empowered citizens to supplement the more usual top-down management of epidemics.
no code implementations • 25 Apr 2020 • Alexander Jung, Yasmin SarcheshmehPour
We study the statistical and computational properties of a network Lasso method for local graph clustering.
no code implementations • 1 Mar 2020 • Alexander Jung, Pedro H. J. Nardelli
Recommender systems decide which jobs, movies, or other user profiles might be interesting to us.
no code implementations • 18 Nov 2019 • Alexander Jung, Ivan Baranov
Clustering methods group a set of data points into a few coherent groups or clusters of similar data points.
1 code implementation • 3 Nov 2019 • Alexander Jung
In such a partially labeled stochastic block model, clustering amounts to estimating the cluster assignments of the remaining data points.
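A simple baseline in that setting, sketched below under illustrative assumptions (a two-block model, 10 percent of labels revealed, plain majority voting over labeled neighbors), already conveys the idea:

    import numpy as np
    import networkx as nx

    # two-block stochastic block model with a few revealed labels
    G = nx.stochastic_block_model([50, 50], [[0.2, 0.02], [0.02, 0.2]], seed=0)
    truth = np.array([0] * 50 + [1] * 50)
    labels = {i: truth[i] for i in range(0, 100, 10)}   # 10% of nodes labeled

    # iteratively assign each unlabeled node the majority label of its labeled neighbors
    for _ in range(5):
        new = dict(labels)
        for v in G.nodes:
            if v in labels:
                continue
            votes = [labels[w] for w in G.neighbors(v) if w in labels]
            if votes:
                new[v] = int(round(np.mean(votes)))
        labels = new

    acc = np.mean([labels.get(v, 0) == truth[v] for v in G.nodes])
    print(f"accuracy: {acc:.2f}")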
no code implementations • 25 Oct 2019 • Alexander Jung
Many machine learning problems and methods are combinations of three components: data, hypothesis space and loss function.
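These three components can be written down in a few lines. The sketch below fixes a tiny dataset, a linear hypothesis space, and the squared-error loss (all illustrative choices) and combines them via empirical risk minimization:

    import numpy as np

    # component 1: data -- (feature, label) pairs
    X = np.array([1.0, 2.0, 3.0])
    y = np.array([2.1, 3.9, 6.2])

    # component 2: hypothesis space -- linear maps h(x) = w * x
    def h(w, x):
        return w * x

    # component 3: loss -- average squared error on the data
    def risk(w):
        return np.mean((h(w, X) - y) ** 2)

    # combining the three: empirical risk minimization over a grid of w
    grid = np.linspace(0.0, 4.0, 401)
    w_hat = grid[np.argmin([risk(w) for w in grid])]
    print(w_hat)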
no code implementations • 4 Oct 2019 • Alexander Jung
Many applications generate data with an intrinsic network structure such as time series data, image data or social network data.
no code implementations • 1 Jul 2019 • Roope Tervo, Joonas Karjalainen, Alexander Jung
We propose a new machine learning approach to predict the damage caused by storms.
1 code implementation • 22 May 2019 • Alexander Jung
We propose networked exponential families to jointly leverage the information in the topology as well as the attributes (features) of networked data points.
no code implementations • 26 Mar 2019 • Nguyen Tran, Henrik Ambos, Alexander Jung
We apply the network Lasso to classify partially labeled data points which are characterized by high-dimensional feature vectors.
1 code implementation • 26 Mar 2019 • Alexander Jung, Nguyen Tran
The network Lasso (nLasso) has been proposed recently as an efficient learning algorithm for massive networked data sets (big data over networks).
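In generic notation (not copied from the paper), the nLasso couples node-wise loss terms with a total-variation penalty over the edges of the data graph:

    \min_{\{w_i\}} \; \sum_{i \in \mathcal{V}} L_i(w_i) \;+\; \lambda \sum_{\{i,j\} \in \mathcal{E}} A_{ij} \, \lVert w_i - w_j \rVert_2

Here L_i is the local loss at node i, A_{ij} is the edge weight, and lambda trades data fidelity against the clustering of the node-wise parameters w_i; the penalty forces well-connected nodes to share parameters, producing clustered solutions.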
no code implementations • 28 Jan 2019 • Alexander Jung, Alfred O. Hero III, Alexandru Mara, Saeed Jahromi, Ayelet Heimowitz, Yonina C. Eldar
This lends itself naturally to learning the labels by total variation (TV) minimization, which we solve by applying a recently proposed primal-dual method for non-smooth convex optimization.
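The following is a generic primal-dual (Chambolle-Pock style) iteration for graph TV minimization, not necessarily the specific method referenced in the paper; the barbell graph, the labeled nodes, and the parameters sigma, tau, gamma are illustrative assumptions.

    import numpy as np
    import networkx as nx

    # two dense clusters joined by a single edge; a few labeled nodes per cluster
    G = nx.barbell_graph(10, 0)
    D = nx.incidence_matrix(G, oriented=True).toarray().T   # edge-node incidence matrix
    n = G.number_of_nodes()
    labeled = {0: -1.0, 5: -1.0, 15: 1.0, 19: 1.0}

    x = np.zeros(n); x_bar = x.copy()
    p = np.zeros(D.shape[0])
    L = np.linalg.norm(D, 2)
    sigma = tau = 0.9 / L                  # step sizes satisfying sigma*tau*L^2 < 1
    gamma = 0.1                            # weight of the label-fitting term

    for _ in range(500):
        p = np.clip(p + sigma * D @ x_bar, -1.0, 1.0)    # dual step: project onto unit ball
        x_new = x - tau * D.T @ p                        # primal gradient step
        for i, yi in labeled.items():                    # prox of the label-fitting term
            x_new[i] = (x_new[i] + (tau / gamma) * yi) / (1 + tau / gamma)
        x_bar = 2 * x_new - x                            # extrapolation
        x = x_new

    print(np.sign(x))   # recovered cluster labels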
no code implementations • 16 Sep 2018 • Markku Hinkka, Teemu Lehto, Keijo Heljanko, Alexander Jung
Recurrent neural networks and their variants, such as the Gated Recurrent Unit (GRU) and Long Short-Term Memory (LSTM), have been shown to learn relevant temporal features for subsequent classification tasks.
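A minimal PyTorch sketch of a GRU-based sequence classifier in that spirit; the architecture and all dimensions are illustrative, not the model from the paper.

    import torch
    import torch.nn as nn

    class GRUClassifier(nn.Module):
        def __init__(self, n_features=8, hidden=32, n_classes=2):
            super().__init__()
            self.gru = nn.GRU(n_features, hidden, batch_first=True)
            self.head = nn.Linear(hidden, n_classes)

        def forward(self, x):              # x: (batch, time, features)
            _, h = self.gru(x)             # h: (1, batch, hidden) -- final hidden state
            return self.head(h.squeeze(0)) # class logits

    model = GRUClassifier()
    x = torch.randn(4, 20, 8)              # 4 sequences of 20 time steps
    logits = model(x)
    loss = nn.CrossEntropyLoss()(logits, torch.tensor([0, 1, 1, 0]))
    loss.backward()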
no code implementations • 21 May 2018 • Roope Tervo, Joonas Karjalainen, Alexander Jung
We consider the problem of predicting power outages in an electrical power grid due to hazards produced by convective storms.
no code implementations • 15 May 2018 • Oleksii Abramenko, Alexander Jung
We formulate the problem of sampling and recovering a clustered graph signal as a multi-armed bandit (MAB) problem.
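As a toy stand-in for that formulation, the epsilon-greedy sketch below treats each cluster as an arm and a Bernoulli draw as the reward for sampling it; the reward model and all constants are illustrative assumptions (the paper's reward would be the gain in signal recovery).

    import numpy as np

    rng = np.random.default_rng(1)
    p_true = [0.2, 0.5, 0.8]             # hidden usefulness of sampling each cluster
    counts = np.zeros(3); values = np.zeros(3)
    eps = 0.1

    for t in range(1000):
        arm = rng.integers(3) if rng.random() < eps else int(np.argmax(values))
        reward = float(rng.random() < p_true[arm])            # stand-in reward
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]   # running-mean update

    print(values)   # estimated usefulness of sampling each cluster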
no code implementations • 14 May 2018 • Alexander Jung
This tutorial is based on the lecture notes for, and the plentiful student feedback received from, the courses "Machine Learning: Basic Principles" and "Artificial Intelligence", which I have co-taught since 2015 at Aalto University.
no code implementations • 7 May 2018 • Henrik Ambos, Nguyen Tran, Alexander Jung
We apply the network Lasso to solve binary classification and clustering problems for network-structured data.
no code implementations • 25 Apr 2018 • Alexander Jung
This paper investigates the computational complexity of sparse label propagation which has been proposed recently for processing network structured data.
no code implementations • 1 Mar 2018 • Buse Gul Atli, Alexander Jung
Many current approaches to the design of intrusion detection systems apply feature selection in a static, non-adaptive fashion.
no code implementations • 11 Oct 2017 • Nguyen Tran, Saeed Basirian, Alexander Jung
A recently proposed learning algorithm for massive network-structured data sets (big data over networks) is the network Lasso (nLasso), which extends the well-known Lasso estimator from sparse models to network-structured datasets.
no code implementations • 8 Oct 2017 • Markku Hinkka, Teemu Lehto, Keijo Heljanko, Alexander Jung
The main motivation is to provide machine learning based techniques with quick response times for interactive computer assisted root cause analysis.
no code implementations • 3 Sep 2017 • Alexandru Mara, Alexander Jung
By generalizing the compatibility condition put forward by van de Geer and Buehlmann as a powerful tool for the analysis of the plain Lasso, we derive a sufficient condition on the underlying network topology, the network compatibility condition, under which the network Lasso accurately learns a clustered underlying graph signal.
no code implementations • 29 Jun 2017 • Alexander Jung
In particular, we will show how gradient descent can be accelerated by a fixed-point preserving transformation of an operator associated with the objective function.
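The idea can be illustrated on a quadratic: gradient descent is the fixed-point iteration of T(x) = x - alpha * grad f(x), and a preconditioner transforms this operator without moving its fixed points. The Newton-like preconditioner below is an illustrative choice in that spirit, not the transformation constructed in the paper.

    import numpy as np

    # ill-conditioned quadratic f(x) = 0.5 * x^T Q x
    Q = np.diag([1.0, 100.0])
    grad = lambda x: Q @ x

    def iterate(op, x0, k=100):
        x = x0.copy()
        for _ in range(k):
            x = op(x)
        return x

    x0 = np.array([1.0, 1.0])
    alpha = 1.0 / 100.0                        # plain GD: step limited by the largest eigenvalue
    plain = lambda x: x - alpha * grad(x)      # fixed points = stationary points of f

    P = np.linalg.inv(Q)                       # fixed-point preserving transformation:
    precond = lambda x: x - P @ grad(x)        # same fixed points, much faster contraction

    print(np.linalg.norm(iterate(plain, x0)))    # slow: roughly 0.99^100 in the bad direction
    print(np.linalg.norm(iterate(precond, x0)))  # exact after one step on this quadratic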
no code implementations • 11 May 2017 • Alexander Jung, Madelon Hulsebos
The network nullspace property couples the cluster structure of the underlying network with the geometry of the sampling set.
no code implementations • 16 Apr 2017 • Saeed Basirian, Alexander Jung
Numerical experiments demonstrate the effectiveness of this approach for graph signals obtained from a synthetic random graph model as well as a real-world dataset.
no code implementations • 7 Apr 2017 • Alexander Jung, Nguyen Tran Quang, Alexandru Mara
By leveraging concepts of compressed sensing, we address this gap and derive precise conditions on the underlying network topology and sampling set which guarantee the network Lasso for a particular loss function to deliver an accurate estimate of the entire underlying graph signal.
1 code implementation • 17 Jan 2017 • Nguyen Q. Tran, Oleksii Abramenko, Alexander Jung
We characterize the sample size required for accurate graphical model selection from non-stationary samples.
1 code implementation • 5 Dec 2016 • Alexander Jung, Alfred O. Hero III, Alexandru Mara, Saeed Jahromi
This learning algorithm allows for a highly scalable implementation as message passing over the underlying data graph.
no code implementations • 2 Nov 2016 • Alexander Jung, Alfred O. Hero III, Alexandru Mara, Sabeur Aridhi
We propose a scalable method for semi-supervised (transductive) learning from massive network-structured datasets.
no code implementations • 13 Sep 2016 • Nguyen Tran Quang, Alexander Jung
We formulate and analyze a graphical model selection method for inferring the conditional independence graph of a high-dimensional nonstationary Gaussian random process (time series) from a finite-length observation.
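For the stationary special case, the conditional independence graph can be estimated with off-the-shelf tools; the sketch below uses sklearn's GraphicalLasso as a simplified stand-in for the paper's nonstationary method, with an illustrative ground-truth precision matrix.

    import numpy as np
    from sklearn.covariance import GraphicalLasso

    rng = np.random.default_rng(0)
    # sparse ground-truth precision matrix -> CIG edges at its nonzero entries
    prec = np.array([[2.0, 0.6, 0.0],
                     [0.6, 2.0, 0.6],
                     [0.0, 0.6, 2.0]])
    X = rng.multivariate_normal(np.zeros(3), np.linalg.inv(prec), size=2000)

    model = GraphicalLasso(alpha=0.05).fit(X)
    edges = np.abs(model.precision_) > 1e-3   # recovered conditional independence graph
    print(edges)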
no code implementations • 20 Jul 2015 • Alexander Jung, Yonina C. Eldar, Norbert Görtz
The main conceptual contribution of this paper is the adaptation of the information-theoretic approach to minimax estimation to the DL problem, in order to derive lower bounds on the worst-case MSE of any DL scheme.
no code implementations • 5 Oct 2014 • Alexander Jung, Gabor Hannak, Norbert Görtz
We propose a novel graphical model selection (GMS) scheme for high-dimensional stationary time series or discrete-time processes.
no code implementations • 4 Apr 2014 • Alexander Jung
A theoretical performance analysis provides conditions which guarantee that the probability of the proposed inference method to deliver a wrong CIG is below a prescribed value.
no code implementations • 17 Feb 2014 • Alexander Jung, Yonina C. Eldar, Norbert Görtz
We consider the problem of dictionary learning under the assumption that the observed signals can be represented as sparse linear combinations of the columns of a single large dictionary matrix.
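In code, this model says the data matrix is (approximately) a sparse code matrix times a dictionary; the sketch below sets up exactly that and fits it with sklearn's DictionaryLearning. Since the paper derives performance bounds rather than an algorithm, this is only the problem setup, with illustrative sizes and sparsity.

    import numpy as np
    from sklearn.decomposition import DictionaryLearning

    rng = np.random.default_rng(0)
    # signals = sparse linear combinations of the columns of a hidden dictionary
    D_true = rng.standard_normal((20, 30))                        # 30 atoms in 20 dimensions
    codes = rng.standard_normal((200, 30)) * (rng.random((200, 30)) < 0.1)
    Y = codes @ D_true.T + 0.01 * rng.standard_normal((200, 20))

    dl = DictionaryLearning(n_components=30, transform_algorithm="lasso_lars", alpha=0.1)
    sparse_codes = dl.fit_transform(Y)     # learned sparse representations
    print(dl.components_.shape)            # (30, 20): learned dictionary atoms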
no code implementations • 13 Nov 2013 • Alexander Jung, Reinhard Heckel, Helmut Bölcskei, Franz Hlawatsch
We propose a method for inferring the conditional independence graph (CIG) of a high-dimensional discrete-time Gaussian vector random process from finite-length observations.