Search Results for author: Jingge Zhu

Found 18 papers, 1 paper with code

Accelerating Graph Neural Networks via Edge Pruning for Power Allocation in Wireless Networks

no code implementations22 May 2023 Lili Chen, Jingge Zhu, Jamie Evans

Since unpaired transmitters and receivers are often spatially distant, a distance-based threshold is proposed to reduce computation time by excluding or including channel state information in the GNN.
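The pruning mechanism described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the node positions, the `prune_edges` helper, and the threshold value are all assumptions made for the example.

```python
import math

# Illustrative sketch of distance-based edge pruning for a wireless
# interference graph: edges between transceiver pairs farther apart
# than `threshold` are dropped before the graph is fed to a GNN,
# so their channel state information is excluded from message passing.

def prune_edges(positions, threshold):
    """Return directed edges (i, j) whose endpoints lie within `threshold`."""
    edges = []
    for i, (xi, yi) in enumerate(positions):
        for j, (xj, yj) in enumerate(positions):
            if i != j and math.hypot(xi - xj, yi - yj) <= threshold:
                edges.append((i, j))
    return edges

positions = [(0.0, 0.0), (1.0, 0.0), (10.0, 10.0)]
print(prune_edges(positions, threshold=2.0))  # distant node 2 is isolated
```

With a small threshold, only nearby transceiver pairs keep edges, which is what reduces the GNN's computation.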

Stability Bounds for Learning-Based Adaptive Control of Discrete-Time Multi-Dimensional Stochastic Linear Systems with Input Constraints

no code implementations2 Apr 2023 Seth Siriya, Jingge Zhu, Dragan Nešić, Ye Pu

We consider the problem of adaptive stabilization for discrete-time, multi-dimensional linear systems with bounded control input constraints and unbounded stochastic disturbances, where the parameters of the true system are unknown.

Graph Neural Networks for Power Allocation in Wireless Networks with Full Duplex Nodes

no code implementations27 Mar 2023 Lili Chen, Jingge Zhu, Jamie Evans

We further refine this trade-off by introducing a distance-based threshold for inclusion or exclusion of edges in the network.

On the tightness of information-theoretic bounds on generalization error of learning algorithms

no code implementations26 Mar 2023 Xuetong Wu, Jonathan H. Manton, Uwe Aickelin, Jingge Zhu

However, such a learning rate is typically considered to be "slow", compared to a "fast rate" of $O(\lambda/n)$ in many learning scenarios.
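Written out schematically (the generic notation $\mathrm{gen}(n)$ for the generalization error is illustrative, with $\lambda$ a problem-dependent constant and $n$ the sample size), the two rates being contrasted are:

```latex
% Slow rate: square-root decay in the sample size n
\mathrm{gen}(n) = O\!\left(\sqrt{\lambda / n}\right),
% Fast rate: linear decay in n, attainable in favourable scenarios
\mathrm{gen}(n) = O\!\left(\lambda / n\right).
```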

On the Value of Stochastic Side Information in Online Learning

no code implementations9 Mar 2023 Junzhang Jia, Xuetong Wu, Jingge Zhu, Jamie Evans

We study the effectiveness of stochastic side information in deterministic online learning scenarios.

Committed Private Information Retrieval

1 code implementation3 Feb 2023 Quang Cao, Hong Yen Tran, Son Hoang Dau, Xun Yi, Emanuele Viterbo, Chen Feng, Yu-Chih Huang, Jingge Zhu, Stanislav Kruglik, Han Mao Kiah

A PIR scheme is $v$-verifiable if the client can verify the correctness of the retrieved $x_i$ even when $v \leq k$ servers collude and try to fool the client by sending manipulated data.
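As background for the retrieval step, the classic two-server XOR-based PIR scheme (Chor et al.) can be sketched in a few lines. This is not the committed/verifiable scheme of the paper, which layers commitments on top of such protocols; the database and index below are made up for illustration.

```python
import random

# Classic 2-server XOR PIR sketch: the client hides index i by sending
# a random subset S to server 1 and S xor {i} to server 2; XOR-ing the
# two parity answers recovers bit x_i without revealing i to either server.

random.seed(2)
db = [1, 0, 1, 1, 0, 0, 1, 0]   # database bits, replicated on both servers
i = 5                           # index the client wants, hidden from servers

subset = {j for j in range(len(db)) if random.random() < 0.5}
query1 = subset
query2 = subset ^ {i}           # differs from query1 only at position i

answer = lambda q: sum(db[j] for j in q) % 2  # each server XORs requested bits
x_i = (answer(query1) + answer(query2)) % 2
print(x_i == db[i])  # True
```

The XOR of the two answers equals the parity over the symmetric difference of the queries, which is exactly `{i}`, so the client always recovers the correct bit.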

Information Retrieval, Retrieval

Learning-Based Adaptive Control for Stochastic Linear Systems with Input Constraints

no code implementations15 Sep 2022 Seth Siriya, Jingge Zhu, Dragan Nešić, Ye Pu

We propose a certainty-equivalence scheme for adaptive control of scalar linear systems subject to additive, i.i.d.

Design and Analysis of Hardware-limited Non-uniform Task-based Quantizers

no code implementations16 Aug 2022 Neil Irwin Bernardo, Jingge Zhu, Yonina C. Eldar, Jamie Evans

Here, we propose a new framework based on the generalized Bussgang decomposition that enables the design and analysis of hardware-limited task-based quantizers that are equipped with non-uniform scalar quantizers or that have inputs with unbounded support.
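The core idea of a Bussgang-type decomposition can be checked numerically: a quantizer output is split as $Q(x) = Ax + d$, with the gain $A = \mathbb{E}[Q(x)x]/\mathbb{E}[x^2]$ chosen so the distortion $d$ is uncorrelated with the input. The toy 3-level quantizer and sample size below are assumptions for illustration, not the paper's design.

```python
import random

# Monte Carlo sketch of a Bussgang decomposition for a scalar quantizer
# applied to a Gaussian input: Q(x) = A*x + d, where the gain
# A = E[Q(x) x] / E[x^2] makes the distortion d uncorrelated with x.

def quantize(x):
    """A toy non-uniform 3-level scalar quantizer."""
    if x < -0.8:
        return -1.0
    if x > 0.8:
        return 1.0
    return 0.0

random.seed(0)
n = 200_000
xs = [random.gauss(0.0, 1.0) for _ in range(n)]
a = sum(quantize(x) * x for x in xs) / sum(x * x for x in xs)  # Bussgang gain
corr = sum((quantize(x) - a * x) * x for x in xs) / n  # E[d*x], ~0 by design
print(round(a, 3), abs(corr) < 1e-2)
```

The empirical correlation between the distortion and the input comes out near zero, confirming the decorrelation property the framework builds on.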

Quantization

An Information-Theoretic Analysis for Transfer Learning: Error Bounds and Applications

no code implementations12 Jul 2022 Xuetong Wu, Jonathan H. Manton, Uwe Aickelin, Jingge Zhu

Specifically, we provide generalization error upper bounds for the empirical risk minimization (ERM) algorithm where data from both distributions are available in the training phase.

Domain Adaptation, Transfer Learning

A Learning-Based Approach to Approximate Coded Computation

no code implementations19 May 2022 Navneet Agrawal, Yuqin Qiu, Matthias Frey, Igor Bjelakovic, Setareh Maghsudi, Slawomir Stanczak, Jingge Zhu

Lagrange coded computation (LCC) is essential to solving problems about matrix polynomials in a coded distributed fashion; nevertheless, it can only solve problems that are representable as matrix polynomials.
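The exact LCC mechanism that the snippet refers to can be sketched in the scalar case: encode the inputs with a Lagrange interpolation polynomial, let workers evaluate the target polynomial at coded points, and interpolate the results back. The function $f(x) = x^2$, the evaluation points, and the helper names are illustrative assumptions.

```python
from fractions import Fraction

# Minimal sketch of Lagrange coded computation (LCC) over the rationals
# for f(x) = x**2 (degree-2 polynomial, scalar stand-in for a matrix).

def lagrange_eval(points, z):
    """Evaluate the interpolating polynomial through `points` at z."""
    total = Fraction(0)
    for i, (xi, yi) in enumerate(points):
        term = Fraction(yi)
        for j, (xj, _) in enumerate(points):
            if i != j:
                term *= (z - xj) / (xi - xj)
        total += term
    return total

f = lambda x: x * x
data = [Fraction(3), Fraction(5)]                # K = 2 inputs
alphas = [Fraction(0), Fraction(1)]              # interpolation points for data
betas = [Fraction(2), Fraction(3), Fraction(4)]  # worker evaluation points

# Encoder: u(alpha_k) = data_k; worker i computes f(u(beta_i)).
u = lambda z: lagrange_eval(list(zip(alphas, data)), z)
worker_results = [(b, f(u(b))) for b in betas]   # deg f(u(z)) = 2 -> 3 workers

# Decoder: interpolate f(u(z)) from the workers, read off f(data_k).
recovered = [int(lagrange_eval(worker_results, a)) for a in alphas]
print(recovered)  # [9, 25] == [f(3), f(5)]
```

Because $f(u(z))$ has degree $(K-1)\cdot\deg f = 2$, any three worker results determine it, which is where LCC gets its straggler tolerance.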

On Causality in Domain Adaptation and Semi-Supervised Learning: an Information-Theoretic Analysis

no code implementations10 May 2022 Xuetong Wu, Mingming Gong, Jonathan H. Manton, Uwe Aickelin, Jingge Zhu

We show that in causal learning, the excess risk depends on the size of the source sample at a rate of O(1/m) only if the labelling distribution between the source and target domains remains unchanged.

Unsupervised Domain Adaptation

Fast Rate Generalization Error Bounds: Variations on a Theme

no code implementations6 May 2022 Xuetong Wu, Jonathan H. Manton, Uwe Aickelin, Jingge Zhu

However, such a learning rate is typically considered to be "slow", compared to a "fast rate" of O(1/n) in many learning scenarios.

A Bayesian Approach to (Online) Transfer Learning: Theory and Algorithms

no code implementations3 Sep 2021 Xuetong Wu, Jonathan H. Manton, Uwe Aickelin, Jingge Zhu

Transfer learning is a machine learning paradigm where knowledge from one problem is utilized to solve a new but related problem.

Learning Theory, Transfer Learning

On Minimizing Symbol Error Rate Over Fading Channels with Low-Resolution Quantization

no code implementations22 Jun 2021 Neil Irwin Bernardo, Jingge Zhu, Jamie Evans

We analyze the symbol error probability (SEP) of $M$-ary pulse amplitude modulation ($M$-PAM) receivers equipped with optimal low-resolution quantizers.
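The simplest instance of this setup, binary PAM with a one-bit (sign) quantizer, is easy to check by simulation. This sketch uses a plain AWGN channel rather than the fading channels analyzed in the paper; the noise level and trial count are assumptions for illustration.

```python
import math, random

# Monte Carlo sketch of the symbol error probability (SEP) of 2-PAM
# with a one-bit sign quantizer over AWGN, compared to Q(A/sigma).

def q_func(x):
    """Gaussian tail probability Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2))

random.seed(1)
sigma = 0.7
trials = 200_000
errors = 0
for _ in range(trials):
    symbol = random.choice([-1.0, 1.0])
    y = symbol + random.gauss(0.0, sigma)
    if (y >= 0) != (symbol > 0):  # sign detector = the 1-bit quantizer here
        errors += 1

sep_mc = errors / trials
sep_theory = q_func(1.0 / sigma)  # Q(A/sigma) for antipodal signalling
print(round(sep_mc, 3), round(sep_theory, 3))
```

The Monte Carlo estimate matches the closed-form $Q(A/\sigma)$; for fading channels and general $M$-PAM, the quantizer thresholds themselves become design variables, which is the regime the paper optimizes.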

Quantization

Online Transfer Learning: Negative Transfer and Effect of Prior Knowledge

no code implementations4 May 2021 Xuetong Wu, Jonathan H. Manton, Uwe Aickelin, Jingge Zhu

On the one hand, it is conceivable that knowledge from one task could be useful for solving a related problem.

Transfer Learning

Semi-Supervised Learning: the Case When Unlabeled Data is Equally Useful

no code implementations22 May 2020 Jingge Zhu

Semi-supervised learning algorithms attempt to take advantage of relatively inexpensive unlabeled data to improve learning performance.

Information-theoretic analysis for transfer learning

no code implementations18 May 2020 Xuetong Wu, Jonathan H. Manton, Uwe Aickelin, Jingge Zhu

Specifically, we provide generalization error upper bounds for general transfer learning algorithms and extend the results to a specific empirical risk minimization (ERM) algorithm where data from both distributions are available in the training phase.

Domain Adaptation, Transfer Learning

A Sequential Approximation Framework for Coded Distributed Optimization

no code implementations24 Oct 2017 Jingge Zhu, Ye Pu, Vipul Gupta, Claire Tomlin, Kannan Ramchandran

As an application of the results, we demonstrate solving optimization problems using a sequential approximation approach, which accelerates the algorithm in a distributed system with stragglers.

Distributed Optimization
