no code implementations • 17 Feb 2025 • Amaury Gouverneur, Tobias J. Oechtering, Mikael Skoglund
In this paper, we present refined probabilistic bounds on empirical reward estimates for off-policy learning in bandit problems.
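For context, a standard empirical reward estimate in off-policy bandit learning is the importance-weighted (inverse propensity scoring) estimator; the notation below is illustrative and not taken from the paper: $$\hat{V}(\pi) \;=\; \frac{1}{n}\sum_{i=1}^{n} \frac{\pi(a_i \mid x_i)}{\pi_0(a_i \mid x_i)}\, r_i,$$ where $\pi_0$ is the logging policy that generated the data $\{(x_i, a_i, r_i)\}_{i=1}^{n}$ and $\pi$ is the target policy being evaluated; the bounds in question control how such estimates concentrate around the true expected reward.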
no code implementations • 4 Feb 2025 • Amaury Gouverneur, Borja Rodriguez Gálvez, Tobias Oechtering, Mikael Skoglund
Additionally, we specialize our results to bandit problems with expected rewards that are Lipschitz continuous with respect to the action space, deriving a regret bound that explicitly accounts for the complexity of the action space.
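For concreteness, the Lipschitz condition referred to here is of the standard form (notation assumed): $$|\mu(a) - \mu(a')| \;\le\; L\, d(a, a') \quad \text{for all } a, a' \in \mathcal{A},$$ where $\mu(a)$ is the expected reward of action $a$, $d$ is a metric on the action space $\mathcal{A}$, and $L$ is the Lipschitz constant; regret bounds under such an assumption typically depend on covering properties of $(\mathcal{A}, d)$.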
no code implementations • 19 Jan 2025 • Hasan Basri Celebi, Mikael Skoglund
This paper presents a comprehensive system model for goodput maximization with quantized feedback in Ultra-Reliable Low-Latency Communication (URLLC), focusing on dynamic channel conditions and feedback schemes.
no code implementations • 3 Dec 2024 • Amaury Gouverneur, Borja Rodríguez-Gálvez, Tobias J. Oechtering, Mikael Skoglund
Adopting the information-theoretic framework introduced by Russo and Van Roy (2015), we analyze the information ratio, which is defined as the ratio of the expected squared difference between the optimal and actual rewards to the mutual information between the optimal action and the reward.
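For reference, in Russo and Van Roy's original formulation the information ratio at round $t$ can be written as $$\Gamma_t \;=\; \frac{\big(\mathbb{E}_t\!\left[R(A^\star) - R(A_t)\right]\big)^2}{I_t\!\big(A^\star;\, (A_t, R(A_t))\big)},$$ where $A^\star$ is the optimal action, $A_t$ the action played, and $\mathbb{E}_t$, $I_t$ are taken conditionally on the history; the exact variant analyzed in this paper may differ in how the regret and information terms enter.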
no code implementations • 13 Nov 2024 • Steven Rivetti, Ozlem Tugfe Demir, Emil Bjornson, Mikael Skoglund
In this paper, we investigate the performance of an integrated sensing and communication (ISAC) system within a cell-free massive multiple-input multiple-output (MIMO) system.
no code implementations • 21 Oct 2024 • Raghav Bongole, Amaury Gouverneur, Borja Rodríguez-Gálvez, Tobias J. Oechtering, Mikael Skoglund
We study agents acting in an unknown environment where the agent's goal is to find a robust policy.
no code implementations • 7 Oct 2024 • Sajad Daei, Amirreza Zamani, Saikat Chatterjee, Mikael Skoglund, Gabor Fodor
In contrast, in the near-field spherical wave model, this phase relationship becomes nonlinear.
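To illustrate the contrast for a uniform linear array with element spacing $d$ (standard geometry, notation assumed): a far-field source at angle $\theta$ induces a phase across element $n$ that is linear in $n$, namely $\frac{2\pi}{\lambda}\, n d \sin\theta$, whereas a near-field source at range $r$ induces $$\phi_n \;=\; \frac{2\pi}{\lambda}\left(\sqrt{r^2 + (nd)^2 - 2\, r\, n d \sin\theta}\; -\; r\right),$$ which is nonlinear in the element index $n$ and depends on both range and angle.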
no code implementations • 20 Aug 2024 • Borja Rodríguez-Gálvez, Ragnar Thobaben, Mikael Skoglund
For discrete data, we derive new bounds for differentially private algorithms that guarantee generalization even with a constant privacy parameter, which is in contrast to previous bounds in the literature.
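For reference, the privacy notion involved is standard $(\epsilon, \delta)$-differential privacy: a randomized algorithm $A$ satisfies it if, for all neighbouring datasets $S, S'$ and all measurable events $E$, $$\Pr[A(S) \in E] \;\le\; e^{\epsilon}\, \Pr[A(S') \in E] + \delta.$$ The new bounds keep the generalization guarantee non-vacuous even when $\epsilon$ is a constant.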
1 code implementation • 10 Jul 2024 • Martin Lindström, Borja Rodríguez-Gálvez, Ragnar Thobaben, Mikael Skoglund
Hyperspherical Prototypical Learning (HPL) is a supervised approach to representation learning that designs class prototypes on the unit hypersphere.
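A minimal sketch of the general idea (an illustrative reconstruction, not the released implementation; the fixed prototype matrix, cosine-similarity logits, and temperature are assumptions):

    import torch
    import torch.nn.functional as F

    def hpl_loss(embeddings, labels, prototypes, temperature=0.1):
        # embeddings: (batch, dim) network outputs
        # labels:     (batch,) integer class labels
        # prototypes: (num_classes, dim) fixed points on the unit hypersphere
        z = F.normalize(embeddings, dim=-1)    # project embeddings onto the unit sphere
        p = F.normalize(prototypes, dim=-1)    # keep prototypes at unit norm
        logits = z @ p.t() / temperature       # cosine similarities, sharpened by temperature
        return F.cross_entropy(logits, labels)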
no code implementations • 13 May 2024 • Sajad Daei, Mikael Skoglund, Gabor Fodor
This establishes an analytical closed-form relationship between the optimal weights and the angular domain characteristics.
no code implementations • 13 May 2024 • Hamideh Sadat Fazael Ardakani, Sajad Daei, Arash Amini, Mikael Skoglund, Gabor Fodor
To this end, we introduce a multi-weight nuclear norm optimization problem that concurrently promotes the low-rank property and incorporates the information about the available subspaces.
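One common way to encode prior subspace information in low-rank recovery, which the multi-weight formulation generalizes (the exact program in the paper may differ), is a weighted nuclear norm of the form $$\min_{X}\ \big\| Q_{U,w_1}\, X\, Q_{V,w_2} \big\|_* \quad \text{subject to} \quad \| \mathcal{A}(X) - y \|_2 \le \epsilon,$$ where $\mathcal{A}$ is the linear measurement operator and $Q_{U,w_1}$, $Q_{V,w_2}$ reweight directions inside and outside the prior column and row subspaces according to the chosen weights.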
no code implementations • 13 May 2024 • Sajad Daei, Gabor Fodor, Mikael Skoglund
Moreover, optimal pilot spacing remains unaffected by interference components such as path loss and the velocity of interference users.
no code implementations • 13 May 2024 • Sajad Daei, Gabor Fodor, Mikael Skoglund
Previous studies have shown that multiple transmit and receive antennas can substantially enhance the sum-capacity of all users when the channel is known at the transmitter and the transmit and receive antennas are uncorrelated.
no code implementations • 31 Mar 2024 • Shudi Weng, Chengxi Li, Ming Xiao, Mikael Skoglund
Stragglers' effects are known to degrade FL performance.
no code implementations • 25 Mar 2024 • Borja Rodríguez-Gálvez, Omar Rivasplata, Ragnar Thobaben, Mikael Skoglund
Moreover, the paper derives a high-probability PAC-Bayes bound for losses with a bounded variance.
no code implementations • 22 Mar 2024 • Chengxi Li, Ming Xiao, Mikael Skoglund
In ACFL, before the training, each device uploads a coded local dataset with additive noise to the central server to generate a global coded dataset under privacy preservation requirements.
no code implementations • 19 Mar 2024 • Chengxi Li, Mikael Skoglund
For this problem, DL methods based on gradient coding, which redundantly distribute the training data to the workers to guarantee convergence when some workers are stragglers, have been widely investigated.
no code implementations • 5 Mar 2024 • Amaury Gouverneur, Borja Rodríguez-Gálvez, Tobias J. Oechtering, Mikael Skoglund
This paper studies the Bayesian regret of a variant of the Thompson-Sampling algorithm for bandit problems.
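The performance measure in question is the Bayesian regret over a horizon $T$ (notation assumed): $$\mathrm{BR}(T) \;=\; \mathbb{E}\!\left[\sum_{t=1}^{T}\big(R(A^\star) - R(A_t)\big)\right],$$ where $A^\star$ is the optimal action for the realized environment, $A_t$ the action chosen at round $t$, and the expectation runs over the prior on the environment, the rewards, and the algorithm's randomization.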
no code implementations • 20 Feb 2024 • Steven Rivetti, Ozlem Tugfe Demir, Emil Bjornson, Mikael Skoglund
Reconfigurable intelligent surfaces (RISs) have demonstrated significant potential for enhancing communication system performance if properly configured.
no code implementations • 6 Feb 2024 • Chengxi Li, Mikael Skoglund
In this paper, we consider a decentralized learning problem in the presence of stragglers.
no code implementations • 24 Jan 2024 • Sajad Daei, Gabor Fodor, Mikael Skoglund, Miklos Telek
In particular, when the channel changes rapidly in time, channel aging degrades the spectral efficiency unless pilot spacing and power control are chosen properly.
no code implementations • 21 Jun 2023 • Sajad Daei, Saeed Razavikia, Marios Kountouris, Mikael Skoglund, Gabor Fodor, Carlo Fischione
Resource allocation and multiple access schemes are instrumental for the success of communication networks, which facilitate seamless wireless connectivity among a growing population of uncoordinated and non-synchronized users.
no code implementations • 21 Jun 2023 • Borja Rodríguez-Gálvez, Ragnar Thobaben, Mikael Skoglund
Firstly, for losses with a bounded range, we recover a strengthened version of Catoni's bound that holds uniformly for all parameter values.
no code implementations • 16 Jun 2023 • Chao Ren, Rudai Yan, Huihui Zhu, Han Yu, Minrui Xu, Yuan Shen, Yan Xu, Ming Xiao, Zhao Yang Dong, Mikael Skoglund, Dusit Niyato, Leong Chuan Kwek
This review serves as a first-of-its-kind comprehensive guide for researchers and practitioners interested in understanding and advancing the field of QFL.
no code implementations • 26 Apr 2023 • Amaury Gouverneur, Borja Rodríguez-Gálvez, Tobias J. Oechtering, Mikael Skoglund
In this work, we study the performance of the Thompson Sampling algorithm for Contextual Bandit problems based on the framework introduced by Neu et al. and their concept of lifted information ratio.
no code implementations • 27 Dec 2022 • Mahdi Haghifam, Borja Rodríguez-Gálvez, Ragnar Thobaben, Mikael Skoglund, Daniel M. Roy, Gintare Karolina Dziugaite
To date, no "information-theoretic" frameworks for reasoning about generalization error have been shown to establish minimax rates for gradient descent in the setting of stochastic convex optimization.
no code implementations • 18 Jul 2022 • Amaury Gouverneur, Borja Rodríguez-Gálvez, Tobias J. Oechtering, Mikael Skoglund
Building on the framework introduced by Xu and Raginsky [1] for supervised learning problems, we study the best achievable performance for model-based Bayesian reinforcement learning problems.
no code implementations • 7 Feb 2022 • Hao Chen, Yu Ye, Ming Xiao, Mikael Skoglund
This paper studies the problem of training an ML model over decentralized systems, where data are distributed over many user devices and the learning algorithm runs on-device, with the aim of relaxing the burden on a central entity/server.
no code implementations • 22 Oct 2021 • Hao Chen, Shaocheng Huang, Deyou Zhang, Ming Xiao, Mikael Skoglund, H. Vincent Poor
Hence, we investigate the problem of jointly optimizing communication efficiency and resource allocation for FL over wireless Internet of Things (IoT) networks.
no code implementations • 17 Sep 2021 • Shuchan Wang, Photios A. Stavrou, Mikael Skoglund
We evaluate this term for a wide variety of distributions, whereas for Gaussian and i.i.d.
no code implementations • 30 Jun 2021 • Wanlu Lei, Yu Ye, Ming Xiao, Mikael Skoglund, Zhu Han
The alternating direction method of multipliers (ADMM) has a structure that allows for decentralized implementation and has shown faster convergence than gradient-descent-based methods.
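For reference, for a problem $\min_{x,z} f(x) + g(z)$ subject to $Ax + Bz = c$, the (scaled-dual) ADMM iterations are $$x^{k+1} = \operatorname*{arg\,min}_x\ f(x) + \tfrac{\rho}{2}\|Ax + Bz^k - c + u^k\|_2^2, \quad z^{k+1} = \operatorname*{arg\,min}_z\ g(z) + \tfrac{\rho}{2}\|Ax^{k+1} + Bz - c + u^k\|_2^2, \quad u^{k+1} = u^k + Ax^{k+1} + Bz^{k+1} - c;$$ the fact that the $x$- and $z$-updates decouple is what enables the decentralized implementations mentioned above.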
no code implementations • 22 May 2021 • Ehsan Nekouei, Henrik Sandberg, Mikael Skoglund, Karl H. Johansson
To ensure parameter privacy, we propose a filter design framework which consists of two components: a randomizer and a nonlinear transformation.
no code implementations • 5 Mar 2021 • Baptiste Cavarec, Hasan Basri Celebi, Mats Bengtsson, Mikael Skoglund
We show that using artificial neural networks to predict the required order of an ordered statistics based decoder helps in reducing the average complexity and hence the latency of the decoder.
no code implementations • 24 Feb 2021 • Hasan Basri Celebi, Antonios Pitarokoilis, Mikael Skoglund
In this paper, we introduce a multi-objective optimization framework for the optimal design of URLLC in the presence of decoding complexity constraints.
Information Theory
no code implementations • 3 Feb 2021 • Serkan Sarıtaş, Photios A. Stavrou, Ragnar Thobaben, Mikael Skoglund
Regarding the Nash equilibrium, we explicitly characterize affine equilibria for the single-stage setup and show that the optimal encoder (resp.
Optimization and Control • Information Theory
no code implementations • NeurIPS 2021 • Borja Rodríguez-Gálvez, Germán Bassi, Ragnar Thobaben, Mikael Skoglund
This work presents several expected generalization error bounds based on the Wasserstein distance.
no code implementations • 23 Nov 2020 • Sina Molavipour, Germán Bassi, Mladen Čičić, Mikael Skoglund, Karl Henrik Johansson
In an intelligent transportation system, the effects and relations of traffic flow at different points in a network are valuable features which can be exploited for control system design and traffic forecasting.
1 code implementation • 22 Oct 2020 • Alireza M. Javid, Sandipan Das, Mikael Skoglund, Saikat Chatterjee
We use a combination of random weights and rectified linear unit (ReLU) activation function to add a ReLU dense (ReDense) layer to the trained neural network such that it can achieve a lower training loss.
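A minimal sketch of the idea as described (illustrative only; the layer sizes, the frozen random projection, and training only the final output map are assumptions):

    import torch
    import torch.nn as nn

    class ReDense(nn.Module):
        # ReLU dense layer with fixed random weights, appended to an already trained network.
        def __init__(self, trained_net, net_out_dim, hidden_dim, num_classes):
            super().__init__()
            self.trained_net = trained_net
            self.random_layer = nn.Linear(net_out_dim, hidden_dim)
            for p in self.random_layer.parameters():
                p.requires_grad = False                  # random weights stay fixed
            self.head = nn.Linear(hidden_dim, num_classes)  # only this output map is learned

        def forward(self, x):
            h = self.trained_net(x)                      # output of the pre-trained network
            return self.head(torch.relu(self.random_layer(h)))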
no code implementations • 21 Oct 2020 • Borja Rodríguez-Gálvez, Germán Bassi, Ragnar Thobaben, Mikael Skoglund
In this work, we unify several expected generalization error bounds based on random subsets using the framework developed by Hellström and Durisi [1].
no code implementations • 2 Oct 2020 • Hao Chen, Yu Ye, Ming Xiao, Mikael Skoglund, H. Vincent Poor
A class of mini-batch stochastic alternating direction method of multipliers (ADMM) algorithms is explored to develop the distributed learning model.
no code implementations • 29 Sep 2020 • Xinyue Liang, Alireza M. Javid, Mikael Skoglund, Saikat Chatterjee
We design a low complexity decentralized learning algorithm to train a recently proposed large neural network in distributed processing nodes (workers).
no code implementations • 22 Jun 2020 • Shaocheng Huang, Yu Ye, Ming Xiao, H. Vincent Poor, Mikael Skoglund
Cell-free networks are considered a promising distributed network architecture to serve the increasing number of users and meet the high rate expectations of beyond-5G systems.
1 code implementation • 12 Jun 2020 • Sina Molavipour, Germán Bassi, Mikael Skoglund
The estimation of mutual information (MI) or conditional mutual information (CMI) from a set of samples is a long-standing problem.
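For reference, the quantities being estimated are, for random variables $X$, $Y$, $Z$ with joint density $p$, $$I(X;Y) = \mathbb{E}\!\left[\log\frac{p(X,Y)}{p(X)\,p(Y)}\right], \qquad I(X;Y \mid Z) = \mathbb{E}\!\left[\log\frac{p(X,Y \mid Z)}{p(X \mid Z)\,p(Y \mid Z)}\right],$$ and the difficulty is that these densities are unknown and must be handled implicitly from the available samples.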
2 code implementations • 11 Jun 2020 • Borja Rodríguez-Gálvez, Ragnar Thobaben, Mikael Skoglund
In this article, we propose a new variational approach to learn private and/or fair representations.
no code implementations • 12 May 2020 • Borja Rodríguez-Gálvez, Germán Bassi, Mikael Skoglund
In this work, we study the generalization capability of algorithms from an information-theoretic perspective.
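A representative result from this line of work, quoted here for context rather than as the paper's own contribution, is the Xu–Raginsky bound: for a loss that is $\sigma$-sub-Gaussian under the data distribution, $$\big|\,\mathbb{E}\big[L_\mu(W) - L_S(W)\big]\,\big| \;\le\; \sqrt{\frac{2\sigma^2}{n}\, I(W;S)},$$ where $W$ is the hypothesis returned by the algorithm, $S$ the training set of $n$ samples, $L_\mu$ and $L_S$ the population and empirical risks, and $I(W;S)$ their mutual information.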
no code implementations • 10 Apr 2020 • Xinyue Liang, Alireza M. Javid, Mikael Skoglund, Saikat Chatterjee
In this work, we exploit an asynchronous computing framework namely ARock to learn a deep neural network called self-size estimating feedforward neural network (SSFN) in a decentralized scenario.
no code implementations • 29 Mar 2020 • Alireza M. Javid, Arun Venkitaraman, Mikael Skoglund, Saikat Chatterjee
We show that the proposed architecture is norm-preserving and provides an invertible feature vector, and therefore, can be used to reduce the training cost of any other learning method which employs linear projection to estimate the target.
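One standard construction that yields both properties (whether it is exactly the one used here is an assumption): for a matrix $\mathbf{V}$ with orthonormal columns ($\mathbf{V}^{\mathsf T}\mathbf{V} = \mathbf{I}$), the feature map $$\mathbf{y} = \mathrm{ReLU}\!\left(\begin{bmatrix} \mathbf{V} \\ -\mathbf{V} \end{bmatrix}\mathbf{x}\right)$$ satisfies $\|\mathbf{y}\|_2 = \|\mathbf{x}\|_2$ and is invertible via $\mathbf{x} = \mathbf{V}^{\mathsf T}\big(\mathrm{ReLU}(\mathbf{V}\mathbf{x}) - \mathrm{ReLU}(-\mathbf{V}\mathbf{x})\big)$, since $\mathrm{ReLU}(t) - \mathrm{ReLU}(-t) = t$ elementwise.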
no code implementations • 19 Mar 2020 • Hossein S. Ghadikolaei, Hadi Ghauch, Gabor Fodor, Mikael Skoglund, Carlo Fischione
Inter-operator spectrum sharing in millimeter-wave bands has the potential of substantially increasing the spectrum utilization and providing a larger bandwidth to individual user equipment at the expense of increasing inter-operator interference.
2 code implementations • 25 Nov 2019 • Borja Rodríguez Gálvez, Ragnar Thobaben, Mikael Skoglund
In this paper, we (i) present a general family of Lagrangians which allow for the exploration of the IB curve in all scenarios; (ii) provide the exact one-to-one mapping between the Lagrange multiplier and the desired compression rate $r$ for known IB curve shapes; and (iii) show that we can approximately obtain a specific compression level with the convex IB Lagrangian for both known and unknown IB curve shapes.
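For context, the classical IB Lagrangian is $\mathcal{L}_{\mathrm{IB}}(T;\beta) = I(T;Y) - \beta\, I(X;T)$; the convex IB Lagrangian referred to here is, roughly, of the form $$\mathcal{L}_{\mathrm{IB},h}(T;\beta) = I(T;Y) - \beta\, h\big(I(X;T)\big),$$ with $h$ a strictly convex and increasing function, whose curvature is what allows the IB curve to be traced out in the scenarios described above.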
no code implementations • 6 Nov 2019 • Sina Molavipour, Germán Bassi, Mikael Skoglund
Several recent works in communication systems have proposed to leverage the power of neural networks in the design of encoders and decoders.
no code implementations • 30 Oct 2019 • Antoine Honore, Dong Liu, David Forsberg, Karen Coste, Eric Herlenius, Saikat Chatterjee, Mikael Skoglund
We explore the use of traditional and contemporary hidden Markov models (HMMs) for sequential physiological data analysis and sepsis prediction in preterm infants.
no code implementations • 22 Aug 2019 • Yu Ye, Ming Xiao, Mikael Skoglund
To determine the caching scheme for decentralized caching networks, the content preference learning problem based on mobility prediction is studied.
no code implementations • 17 May 2019 • Saikat Chatterjee, Alireza M. Javid, Mostafa Sadeghi, Shumpei Kikuta, Dong Liu, Partha P. Mitra, Mikael Skoglund
We design a self size-estimating feed-forward network (SSFN) using a joint optimization approach for estimating the number of layers, the number of nodes, and the weight matrices.
no code implementations • 25 Apr 2019 • Yu Ye, Ming Xiao, Mikael Skoglund
We first present the ELM-based MTL problem in the centralized setting, which is solved by the proposed MTL-ELM algorithm.
no code implementations • 9 Apr 2019 • Song Fang, Mikael Skoglund, Karl Henrik Johansson, Hideaki Ishii, Quanyan Zhu
In this paper, we obtain generic bounds on the variances of estimation and prediction errors in time series analysis via an information-theoretic approach.
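A typical building block in such information-theoretic arguments, stated here for context (whether the paper relies on exactly this relation is an assumption), is the maximum-entropy bound: any estimation error $\varepsilon$ with differential entropy $h(\varepsilon)$ satisfies $$\mathrm{Var}(\varepsilon) \;\ge\; \frac{1}{2\pi \mathrm{e}}\, \mathrm{e}^{2 h(\varepsilon)},$$ with equality if and only if $\varepsilon$ is Gaussian, so lower bounds on the entropy of the error translate directly into lower bounds on its variance.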
no code implementations • 6 Jun 2018 • Hadi Ghauch, Mikael Skoglund, Hossein Shokri-Ghadikolaei, Carlo Fischione, Ali H. Sayed
We summarize our recent findings, where we proposed a framework for learning a Kolmogorov model for a collection of binary random variables.
BIG-bench Machine Learning
Interpretable Machine Learning
no code implementations • 23 May 2018 • Hadi Ghauch, Hossein Shokri-Ghadikolaei, Carlo Fischione, Mikael Skoglund
The lack of mathematical tractability of Deep Neural Networks (DNNs) has hindered progress towards having a unified convergence analysis of training algorithms, in the general setting.
1 code implementation • 23 Oct 2017 • Saikat Chatterjee, Alireza M. Javid, Mostafa Sadeghi, Partha P. Mitra, Mikael Skoglund
The developed network is expected to show good generalization power due to appropriate regularization and use of random weights in the layers.
no code implementations • 14 Jul 2014 • Mohammadreza Malek-Mohammadi, Massoud Babaie-Zadeh, Mikael Skoglund
We address some theoretical guarantees for Schatten-$p$ quasi-norm minimization ($p \in (0, 1]$) in recovering low-rank matrices from compressed linear measurements.
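Concretely, the recovery program considered is of the form (notation assumed, shown here with a noisy measurement constraint): $$\min_{X}\ \|X\|_{S_p}^{p} = \sum_{i}\sigma_i(X)^{p} \quad \text{subject to} \quad \|\mathcal{A}(X) - y\|_2 \le \epsilon,$$ where $\sigma_i(X)$ are the singular values of $X$, $\mathcal{A}$ is the linear measurement operator, and $p \in (0,1]$; the case $p = 1$ reduces to standard nuclear-norm minimization.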