no code implementations • 21 Jun 2023 • Sajad Daei, Saeed Razavikia, Marios Kountouris, Mikael Skoglund, Gabor Fodor, Carlo Fischione
Resource allocation and multiple access schemes are instrumental to the success of communication networks, facilitating seamless wireless connectivity among a growing population of uncoordinated and non-synchronized users.
no code implementations • 21 Jun 2023 • Borja Rodríguez-Gálvez, Ragnar Thobaben, Mikael Skoglund
Firstly, for losses with a bounded range, we recover a strengthened version of Catoni's bound that holds uniformly for all parameter values.
no code implementations • 26 Apr 2023 • Amaury Gouverneur, Borja Rodríguez-Gálvez, Tobias J. Oechtering, Mikael Skoglund
In this work, we study the performance of the Thompson Sampling algorithm for Contextual Bandit problems based on the framework introduced by Neu et al. and their concept of lifted information ratio.
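Thompson Sampling for contextual bandits, as studied above, samples from a posterior over reward parameters and acts greedily on the sample. The following is a generic linear Thompson Sampling sketch, not the paper's analysis; the Gaussian posterior form and the function names are illustrative assumptions.

```python
import numpy as np

def thompson_step(contexts, A, b, rng, sigma2=1.0):
    """One round of linear Thompson Sampling.

    contexts: (n_arms, d) feature matrix, one row per arm.
    A, b: per-arm posterior statistics (d x d matrices and d vectors).
    Returns the index of the arm to pull this round.
    """
    scores = []
    for x, A_a, b_a in zip(contexts, A, b):
        A_inv = np.linalg.inv(A_a)
        mean = A_inv @ b_a                 # posterior mean of the arm's parameter
        cov = sigma2 * A_inv               # posterior covariance
        theta = rng.multivariate_normal(mean, cov)  # sample from the posterior
        scores.append(x @ theta)           # predicted reward under the sample
    return int(np.argmax(scores))

def update(A, b, arm, x, reward):
    """Bayesian linear-regression posterior update for the pulled arm."""
    A[arm] += np.outer(x, x)
    b[arm] += reward * x
```

The regret analysis via the lifted information ratio concerns exactly how much such posterior sampling can lose relative to the best context-dependent action.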
no code implementations • 27 Dec 2022 • Mahdi Haghifam, Borja Rodríguez-Gálvez, Ragnar Thobaben, Mikael Skoglund, Daniel M. Roy, Gintare Karolina Dziugaite
To date, no "information-theoretic" frameworks for reasoning about generalization error have been shown to establish minimax rates for gradient descent in the setting of stochastic convex optimization.
no code implementations • 18 Jul 2022 • Amaury Gouverneur, Borja Rodríguez-Gálvez, Tobias J. Oechtering, Mikael Skoglund
Building on the framework introduced by Xu and Raginsky [1] for supervised learning problems, we study the best achievable performance for model-based Bayesian reinforcement learning problems.
no code implementations • 7 Feb 2022 • Hao Chen, Yu Ye, Ming Xiao, Mikael Skoglund
This paper studies the problem of training an ML model over decentralized systems, where data are distributed over many user devices and the learning algorithm runs on-device, with the aim of reducing the burden on a central entity/server.
no code implementations • 22 Oct 2021 • Hao Chen, Shaocheng Huang, Deyou Zhang, Ming Xiao, Mikael Skoglund, H. Vincent Poor
Hence, we investigate the problem of jointly optimizing communication efficiency and resource allocation for FL over wireless Internet of Things (IoT) networks.
no code implementations • 17 Sep 2021 • Shuchan Wang, Photios A. Stavrou, Mikael Skoglund
We evaluate this term for a wide variety of distributions, whereas for Gaussian and i.i.d.
no code implementations • 30 Jun 2021 • Wanlu Lei, Yu Ye, Ming Xiao, Mikael Skoglund, Zhu Han
Alternating direction method of multipliers (ADMM) has a structure that allows for decentralized implementation, and has shown faster convergence than gradient descent based methods.
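The decentralized structure of ADMM mentioned above can be seen in the simplest consensus problem: each node holds a private value, local updates have closed form, and only the consensus and dual steps couple the nodes. This is a minimal textbook sketch, not the algorithm proposed in the paper; the quadratic objective and parameter choices are assumptions for illustration.

```python
import numpy as np

def consensus_admm(a, rho=1.0, iters=100):
    """Consensus ADMM for min sum_i (x_i - a_i)^2 / 2  s.t.  x_i = z.

    Each node i holds a private value a_i; the optimum z* is the mean.
    Returns the consensus variable z after `iters` rounds.
    """
    n = len(a)
    x = np.zeros(n)   # local primal variables, one per node
    z = 0.0           # shared consensus variable
    u = np.zeros(n)   # scaled dual variables
    for _ in range(iters):
        # local x-update (closed form for the quadratic objective)
        x = (a + rho * (z - u)) / (1.0 + rho)
        # z-update: an averaging step across nodes
        z = np.mean(x + u)
        # dual update enforces x_i = z at convergence
        u = u + x - z
    return z
```

The averaging z-update is the only global operation, which is what makes gossip-style or in-network implementations possible.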
no code implementations • 22 May 2021 • Ehsan Nekouei, Henrik Sandberg, Mikael Skoglund, Karl H. Johansson
To ensure parameter privacy, we propose a filter design framework which consists of two components: a randomizer and a nonlinear transformation.
no code implementations • 5 Mar 2021 • Baptiste Cavarec, Hasan Basri Celebi, Mats Bengtsson, Mikael Skoglund
We show that using artificial neural networks to predict the required order of an ordered statistics based decoder helps in reducing the average complexity and hence the latency of the decoder.
no code implementations • 24 Feb 2021 • Hasan Basri Celebi, Antonios Pitarokoilis, Mikael Skoglund
In this paper, we introduce a multi-objective optimization framework for the optimal design of URLLC in the presence of decoding complexity constraints.
Information Theory
no code implementations • 3 Feb 2021 • Serkan Sarıtaş, Photios A. Stavrou, Ragnar Thobaben, Mikael Skoglund
Regarding the Nash equilibrium, we explicitly characterize affine equilibria for the single-stage setup and show that the optimal encoder (resp.
Optimization and Control • Information Theory
no code implementations • NeurIPS 2021 • Borja Rodríguez-Gálvez, Germán Bassi, Ragnar Thobaben, Mikael Skoglund
This work presents several expected generalization error bounds based on the Wasserstein distance.
no code implementations • 23 Nov 2020 • Sina Molavipour, Germán Bassi, Mladen Čičić, Mikael Skoglund, Karl Henrik Johansson
In an intelligent transportation system, the effects and relations of traffic flow at different points in a network are valuable features which can be exploited for control system design and traffic forecasting.
1 code implementation • 22 Oct 2020 • Alireza M. Javid, Sandipan Das, Mikael Skoglund, Saikat Chatterjee
We use a combination of random weights and rectified linear unit (ReLU) activation function to add a ReLU dense (ReDense) layer to the trained neural network such that it can achieve a lower training loss.
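The ReDense idea described above, a random-weight ReLU layer whose output layer is then fitted, can be sketched in a few lines. This is a simplified sketch of that style of construction, not the paper's exact procedure; the least-squares output fit and all names here are illustrative assumptions.

```python
import numpy as np

def redense_fit(feats, targets, width=200, seed=0):
    """Stack a random-weight ReLU dense layer on top of existing
    features and fit only the new output layer by least squares
    (a ReDense-style sketch; the paper's training may differ).
    """
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((feats.shape[1], width))  # fixed random weights
    H = np.maximum(feats @ W, 0.0)                    # ReLU expansion
    # closed-form least-squares fit of the output layer only
    O, *_ = np.linalg.lstsq(H, targets, rcond=None)
    return W, O

def redense_predict(feats, W, O):
    """Apply the random ReLU expansion followed by the fitted output layer."""
    return np.maximum(feats @ W, 0.0) @ O
```

Because only the final linear layer is trained, the added layer can only lower (never raise) the achievable training loss relative to a plain linear readout of the same features.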
no code implementations • 21 Oct 2020 • Borja Rodríguez-Gálvez, Germán Bassi, Ragnar Thobaben, Mikael Skoglund
In this work, we unify several expected generalization error bounds based on random subsets using the framework developed by Hellström and Durisi [1].
no code implementations • 2 Oct 2020 • Hao Chen, Yu Ye, Ming Xiao, Mikael Skoglund, H. Vincent Poor
A class of mini-batch stochastic alternating direction method of multipliers (ADMM) algorithms is explored to develop the distributed learning model.
no code implementations • 29 Sep 2020 • Xinyue Liang, Alireza M. Javid, Mikael Skoglund, Saikat Chatterjee
We design a low complexity decentralized learning algorithm to train a recently proposed large neural network in distributed processing nodes (workers).
no code implementations • 22 Jun 2020 • Shaocheng Huang, Yu Ye, Ming Xiao, H. Vincent Poor, Mikael Skoglund
Cell-free networks are considered a promising distributed network architecture to satisfy the increasing number of users and high rate expectations in beyond-5G systems.
1 code implementation • 12 Jun 2020 • Sina Molavipour, Germán Bassi, Mikael Skoglund
The estimation of mutual information (MI) or conditional mutual information (CMI) from a set of samples is a long-standing problem.
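As a baseline for the estimation problem above, the classical histogram plug-in estimator applies the definition of MI to binned empirical frequencies. This is a simple baseline for context, not the neural estimator studied in the paper; the bin count is an illustrative assumption.

```python
import numpy as np

def mi_plugin(x, y, bins=8):
    """Histogram plug-in estimator of mutual information I(X;Y) in nats.

    Bin the samples, estimate the joint pmf, and apply
    I(X;Y) = sum_{x,y} p(x,y) * log( p(x,y) / (p(x) p(y)) ).
    """
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()                 # empirical joint pmf
    px = pxy.sum(axis=1, keepdims=True)       # marginal of X
    py = pxy.sum(axis=0, keepdims=True)       # marginal of Y
    mask = pxy > 0                            # avoid log(0)
    return float(np.sum(pxy[mask] * np.log(pxy[mask] / (px @ py)[mask])))
```

Plug-in estimators of this kind are positively biased and scale poorly with dimension, which is precisely what motivates the sample-based neural approaches the paper analyzes.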
2 code implementations • 11 Jun 2020 • Borja Rodríguez-Gálvez, Ragnar Thobaben, Mikael Skoglund
In this article, we propose a new variational approach to learn private and/or fair representations.
no code implementations • 12 May 2020 • Borja Rodríguez-Gálvez, Germán Bassi, Mikael Skoglund
In this work, we study the generalization capability of algorithms from an information-theoretic perspective.
no code implementations • 10 Apr 2020 • Xinyue Liang, Alireza M. Javid, Mikael Skoglund, Saikat Chatterjee
In this work, we exploit an asynchronous computing framework, namely ARock, to learn a deep neural network called the self-size estimating feedforward neural network (SSFN) in a decentralized scenario.
no code implementations • 29 Mar 2020 • Alireza M. Javid, Arun Venkitaraman, Mikael Skoglund, Saikat Chatterjee
We show that the proposed architecture is norm-preserving and provides an invertible feature vector, and therefore, can be used to reduce the training cost of any other learning method which employs linear projection to estimate the target.
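One standard way to obtain a norm-preserving, invertible ReLU feature map is to stack a matrix with its negation, exploiting the identity relu(z) - relu(-z) = z. The sketch below shows that construction type under the assumption of an orthogonal weight matrix; it illustrates the property claimed above rather than reproducing the paper's architecture.

```python
import numpy as np

def lossless_relu_layer(x, W):
    """Norm-preserving, invertible ReLU feature map.

    W is assumed orthogonal, so ||Wx|| = ||x||; and coordinate-wise
    relu(z)^2 + relu(-z)^2 = z^2, so the stacked output keeps the norm.
    """
    z = W @ x
    return np.concatenate([np.maximum(z, 0.0), np.maximum(-z, 0.0)])

def invert_layer(h, W):
    """Recover x: the top and bottom halves of h differ by exactly Wx."""
    d = h.shape[0] // 2
    return W.T @ (h[:d] - h[d:])
```

Because the map is invertible, no information about the input is destroyed, which is why such features can feed a cheap linear (least-squares) readout for the target.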
no code implementations • 19 Mar 2020 • Hossein S. Ghadikolaei, Hadi Ghauch, Gabor Fodor, Mikael Skoglund, Carlo Fischione
Inter-operator spectrum sharing in millimeter-wave bands has the potential of substantially increasing the spectrum utilization and providing a larger bandwidth to individual user equipment at the expense of increasing inter-operator interference.
2 code implementations • 25 Nov 2019 • Borja Rodríguez Gálvez, Ragnar Thobaben, Mikael Skoglund
In this paper, we (i) present a general family of Lagrangians which allow for the exploration of the IB curve in all scenarios; (ii) provide the exact one-to-one mapping between the Lagrange multiplier and the desired compression rate $r$ for known IB curve shapes; and (iii) show we can approximately obtain a specific compression level with the convex IB Lagrangian for both known and unknown IB curve shapes.
no code implementations • 6 Nov 2019 • Sina Molavipour, Germán Bassi, Mikael Skoglund
Several recent works in communication systems have proposed to leverage the power of neural networks in the design of encoders and decoders.
no code implementations • 30 Oct 2019 • Antoine Honore, Dong Liu, David Forsberg, Karen Coste, Eric Herlenius, Saikat Chatterjee, Mikael Skoglund
We explore the use of traditional and contemporary hidden Markov models (HMMs) for sequential physiological data analysis and sepsis prediction in preterm infants.
no code implementations • 22 Aug 2019 • Yu Ye, Ming Xiao, Mikael Skoglund
To determine caching schemes for decentralized caching networks, we study the content preference learning problem based on mobility prediction.
no code implementations • 17 May 2019 • Saikat Chatterjee, Alireza M. Javid, Mostafa Sadeghi, Shumpei Kikuta, Dong Liu, Partha P. Mitra, Mikael Skoglund
We design a self size-estimating feed-forward network (SSFN) using a joint optimization approach that estimates the number of layers and the number of nodes, and learns the weight matrices.
no code implementations • 25 Apr 2019 • Yu Ye, Ming Xiao, Mikael Skoglund
We first present the ELM-based MTL problem in the centralized setting, which is solved by the proposed MTL-ELM algorithm.
no code implementations • 9 Apr 2019 • Song Fang, Mikael Skoglund, Karl Henrik Johansson, Hideaki Ishii, Quanyan Zhu
In this paper, we obtain generic bounds on the variances of estimation and prediction errors in time series analysis via an information-theoretic approach.
no code implementations • 6 Jun 2018 • Hadi Ghauch, Mikael Skoglund, Hossein Shokri-Ghadikolaei, Carlo Fischione, Ali H. Sayed
We summarize our recent findings, where we proposed a framework for learning a Kolmogorov model, for a collection of binary random variables.
BIG-bench Machine Learning
Interpretable Machine Learning
no code implementations • 23 May 2018 • Hadi Ghauch, Hossein Shokri-Ghadikolaei, Carlo Fischione, Mikael Skoglund
The lack of mathematical tractability of Deep Neural Networks (DNNs) has hindered progress towards having a unified convergence analysis of training algorithms, in the general setting.
1 code implementation • 23 Oct 2017 • Saikat Chatterjee, Alireza M. Javid, Mostafa Sadeghi, Partha P. Mitra, Mikael Skoglund
The developed network is expected to show good generalization power due to appropriate regularization and use of random weights in the layers.
no code implementations • 14 Jul 2014 • Mohammadreza Malek-Mohammadi, Massoud Babaie-Zadeh, Mikael Skoglund
We address some theoretical guarantees for Schatten-$p$ quasi-norm minimization ($p \in (0, 1]$) in recovering low-rank matrices from compressed linear measurements.
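The Schatten-$p$ quasi-norm in question applies the $\ell_p$ quasi-norm to the singular values of a matrix. A minimal sketch of evaluating it (not of the recovery guarantees themselves):

```python
import numpy as np

def schatten_p(M, p):
    """Schatten-p quasi-norm ||M||_p = (sum_i sigma_i^p)^(1/p), p in (0, 1].

    For p = 1 this is the nuclear norm; as p -> 0, sum_i sigma_i^p
    approaches the rank, which is why minimizing it promotes low rank.
    """
    s = np.linalg.svd(M, compute_uv=False)  # singular values of M
    return float(np.sum(s ** p) ** (1.0 / p))
```

For $p < 1$ the function is non-convex, so minimizing it over the measurement-consistent set can recover low-rank matrices under weaker conditions than nuclear-norm minimization, which is the regime the guarantees address.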