no code implementations • 22 May 2023 • Lili Chen, Jingge Zhu, Jamie Evans
Since unpaired transmitters and receivers are often spatially distant, a distance-based threshold is proposed to reduce computation time by including or excluding channel state information in GNNs.
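A minimal sketch of the edge-pruning idea (not the paper's implementation; the names positions, d_threshold, and the toy geometry are illustrative assumptions):

```python
# Build a GNN interference graph where the edge between transmitter i
# and receiver j is kept only if their distance is below a threshold;
# CSI of distant unpaired links is dropped to save computation.
import numpy as np

rng = np.random.default_rng(0)
n_pairs = 6
tx = rng.uniform(0, 100, size=(n_pairs, 2))  # transmitter coordinates
rx = rng.uniform(0, 100, size=(n_pairs, 2))  # receiver coordinates
d_threshold = 50.0                           # illustrative threshold

adjacency = np.zeros((n_pairs, n_pairs), dtype=int)
for i in range(n_pairs):
    for j in range(n_pairs):
        dist = np.linalg.norm(tx[i] - rx[j])
        if i == j or dist < d_threshold:     # always keep the paired link
            adjacency[i, j] = 1
print(adjacency)
```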
no code implementations • 2 Apr 2023 • Seth Siriya, Jingge Zhu, Dragan Nešić, Ye Pu
We consider the problem of adaptive stabilization for discrete-time, multi-dimensional linear systems with bounded control input constraints and unbounded stochastic disturbances, where the parameters of the true system are unknown.
no code implementations • 27 Mar 2023 • Lili Chen, Jingge Zhu, Jamie Evans
We further refine this trade-off by introducing a distance-based threshold for inclusion or exclusion of edges in the network.
no code implementations • 26 Mar 2023 • Xuetong Wu, Jonathan H. Manton, Uwe Aickelin, Jingge Zhu
However, such a learning rate is typically considered to be "slow", compared to a "fast rate" of $O(\lambda/n)$ in many learning scenarios.
no code implementations • 9 Mar 2023 • Junzhang Jia, Xuetong Wu, Jingge Zhu, Jamie Evans
We study the effectiveness of stochastic side information in deterministic online learning scenarios.
1 code implementation • 3 Feb 2023 • Quang Cao, Hong Yen Tran, Son Hoang Dau, Xun Yi, Emanuele Viterbo, Chen Feng, Yu-Chih Huang, Jingge Zhu, Stanislav Kruglik, Han Mao Kiah
A PIR scheme is $v$-verifiable if the client can verify the correctness of the retrieved $x_i$ even when $v \leq k$ servers collude and try to fool the client by sending manipulated data.
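A toy replication-based illustration of the verifiability idea (this is emphatically not the paper's scheme, which hides the queried index and handles up to $v \leq k$ colluders via coded responses; the toy below assumes plain replication and fewer than $k$ colluders):

```python
# With at most v < k colluding servers and plain replication, unanimous
# agreement among all k responses certifies the value: at least one
# response comes from an honest server, so a unanimous value is correct.
def retrieve_and_verify(responses):
    """responses: list of k values the servers returned for entry x_i."""
    if all(r == responses[0] for r in responses):
        return responses[0]        # verified: matches an honest reply
    raise ValueError("manipulation detected: servers disagree")

print(retrieve_and_verify([42, 42, 42]))   # all honest -> 42
try:
    retrieve_and_verify([42, 42, 7])       # one colluder lies
except ValueError as e:
    print(e)
```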
no code implementations • 15 Sep 2022 • Seth Siriya, Jingge Zhu, Dragan Nešić, Ye Pu
We propose a certainty-equivalence scheme for adaptive control of scalar linear systems subject to additive, i.i.d.
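A minimal certainty-equivalence sketch for a scalar system $x_{t+1} = a x_t + b u_t + w_t$ with unknown $(a, b)$; the estimator, probing noise, and safeguards below are illustrative assumptions, not the paper's exact scheme:

```python
import numpy as np

rng = np.random.default_rng(1)
a_true, b_true = 1.5, 1.0          # unknown to the controller
x, T = 1.0, 50
X, U, Xn = [], [], []              # regression data
a_hat, b_hat = 0.0, 1.0            # initial guesses (b_hat != 0)

for t in range(T):
    # certainty-equivalence deadbeat input, plus small probing noise
    # so the regression data is persistently exciting
    u = -(a_hat / b_hat) * x + 0.05 * rng.standard_normal()
    x_next = a_true * x + b_true * u + 0.1 * rng.standard_normal()
    X.append(x); U.append(u); Xn.append(x_next)
    # least-squares estimate of (a, b) from the trajectory so far
    Phi = np.column_stack([X, U])
    if np.linalg.matrix_rank(Phi) == 2:
        a_hat, b_hat = np.linalg.lstsq(Phi, np.array(Xn), rcond=None)[0]
        b_hat = b_hat if abs(b_hat) > 1e-3 else 1.0  # crude safeguard
    x = x_next

print(f"estimates: a={a_hat:.2f}, b={b_hat:.2f}; final state x={x:.3f}")
```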
no code implementations • 16 Aug 2022 • Neil Irwin Bernardo, Jingge Zhu, Yonina C. Eldar, Jamie Evans
Here, we propose a new framework based on generalized Bussgang decomposition that enables the design and analysis of hardware-limited task-based quantizers that are equipped with non-uniform scalar quantizers or that have inputs with unbounded support.
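The core of the Bussgang decomposition can be sketched in a few lines: the quantizer output is written as $Q(x) = \alpha x + d$ with the distortion $d$ uncorrelated with $x$, where $\alpha = \mathbb{E}[x\,Q(x)]/\mathbb{E}[x^2]$. The mu-law-style non-uniform quantizer below is purely an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.standard_normal(200_000)               # Gaussian input

def quantizer(x, levels=4, mu=4.0):
    """Compand, uniformly quantize on [-1, 1], then expand."""
    c = np.sign(x) * np.log1p(mu * np.abs(np.clip(x / 3, -1, 1))) / np.log1p(mu)
    q = np.round((c + 1) / 2 * (levels - 1)) / (levels - 1) * 2 - 1
    return np.sign(q) * np.expm1(np.abs(q) * np.log1p(mu)) / mu * 3

y = quantizer(x)
alpha = np.mean(x * y) / np.mean(x**2)         # Bussgang gain
d = y - alpha * x                              # distortion term
print(f"alpha = {alpha:.3f}, corr(x, d) = {np.corrcoef(x, d)[0, 1]:.4f}")
```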
no code implementations • 12 Jul 2022 • Xuetong Wu, Jonathan H. Manton, Uwe Aickelin, Jingge Zhu
Specifically, we provide generalization error upper bounds for the empirical risk minimization (ERM) algorithm where data from both distributions are available in the training phase.
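A sketch of ERM over pooled source and target samples (illustrative only; the paper analyzes this setting information-theoretically rather than prescribing an estimator). Here the hypothesis is a scalar mean and the loss is squared error, so the pooled ERM solution is the pooled sample mean:

```python
import numpy as np

rng = np.random.default_rng(3)
source = rng.normal(loc=1.0, scale=1.0, size=500)   # many source samples
target = rng.normal(loc=1.2, scale=1.0, size=20)    # few target samples

# ERM on the pooled data minimizes the combined empirical risk
# (1/(n+m)) * sum (h - z)^2, whose minimizer is the pooled mean.
h_pooled = np.concatenate([source, target]).mean()
h_target_only = target.mean()
print(f"pooled ERM: {h_pooled:.3f}, target-only ERM: {h_target_only:.3f}")
```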
no code implementations • 19 May 2022 • Navneet Agrawal, Yuqin Qiu, Matthias Frey, Igor Bjelakovic, Setareh Maghsudi, Slawomir Stanczak, Jingge Zhu
Lagrange coded computation (LCC) is essential for evaluating matrix polynomials in a coded distributed fashion; however, it can only handle functions that are representable as matrix polynomials.
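A minimal LCC sketch over the reals with scalar data (real LCC operates over a finite field with matrix data; the data points, evaluation points, and straggler pattern below are illustrative assumptions). The master encodes the data as evaluations of a Lagrange interpolating polynomial, workers apply $f$ locally, and the master interpolates the surviving results:

```python
import numpy as np

f = lambda x: x ** 2                 # degree-2 polynomial to compute
X = np.array([3.0, 5.0, 8.0])        # K = 3 data points
K = len(X)
beta = np.arange(1.0, K + 1)         # interpolation points for the data
N = 6                                # workers; deg(f)*(K-1)+1 = 5 suffice
alpha = np.arange(4.0, 4.0 + N)      # evaluation points, disjoint from beta

def lagrange_eval(xs, ys, z):
    """Evaluate the interpolating polynomial through (xs, ys) at z."""
    total = 0.0
    for j, yj in enumerate(ys):
        term = yj
        for m, xm in enumerate(xs):
            if m != j:
                term *= (z - xm) / (xs[j] - xm)
        total += term
    return total

# Encoding: worker i receives u(alpha_i), where u interpolates (beta, X).
shares = [lagrange_eval(beta, X, a) for a in alpha]
# Workers apply f locally; suppose worker 0 straggles and never replies.
results = [(alpha[i], f(shares[i])) for i in range(1, N)]
# Decoding: f(u(z)) has degree 4, so any 5 results determine it;
# evaluating the interpolant at beta_k recovers f(X_k).
xs = np.array([r[0] for r in results]); ys = np.array([r[1] for r in results])
recovered = [lagrange_eval(xs, ys, b) for b in beta]
print(np.round(recovered, 4))        # ~ [9, 25, 64] = f(X)
```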
no code implementations • 10 May 2022 • Xuetong Wu, Mingming Gong, Jonathan H. Manton, Uwe Aickelin, Jingge Zhu
We show that in causal learning, the excess risk depends on the size of the source sample at a rate of O(1/m) only if the labelling distribution between the source and target domains remains unchanged.
no code implementations • 6 May 2022 • Xuetong Wu, Jonathan H. Manton, Uwe Aickelin, Jingge Zhu
However, such a learning rate is typically considered to be "slow", compared to a "fast rate" of O(1/n) in many learning scenarios.
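To make the gap between the two rates concrete, a quick numeric comparison (constants suppressed; purely illustrative):

```python
# The "slow" O(1/sqrt(n)) and "fast" O(1/n) rates differ sharply in the
# sample size needed for a given excess risk.
import numpy as np

for n in [100, 10_000, 1_000_000]:
    print(f"n={n:>9}: slow 1/sqrt(n) = {1/np.sqrt(n):.4f}, fast 1/n = {1/n:.6f}")
```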
no code implementations • 3 Sep 2021 • Xuetong Wu, Jonathan H. Manton, Uwe Aickelin, Jingge Zhu
Transfer learning is a machine learning paradigm where knowledge from one problem is utilized to solve a new but related problem.
no code implementations • 22 Jun 2021 • Neil Irwin Bernardo, Jingge Zhu, Jamie Evans
We analyze the symbol error probability (SEP) of $M$-ary pulse amplitude modulation ($M$-PAM) receivers equipped with optimal low-resolution quantizers.
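A Monte Carlo sketch of the SEP of 4-PAM with a b-bit quantizer at the receiver (illustrative only: the paper analyzes *optimal* low-resolution quantizers, whereas the uniform quantizer, SNR, and quantizer range below are assumptions):

```python
import numpy as np

rng = np.random.default_rng(4)
M, bits, snr_db, n_sym = 4, 3, 15, 200_000
levels = 2 ** bits
constellation = 2 * np.arange(M) - (M - 1)              # {-3,-1,1,3}
constellation = constellation / np.sqrt(np.mean(constellation**2.0))
sigma = np.sqrt(10 ** (-snr_db / 10))                   # unit signal power

sym = rng.integers(0, M, n_sym)
y = constellation[sym] + sigma * rng.standard_normal(n_sym)

# b-bit uniform quantizer covering [-2, 2]
edges = np.linspace(-2, 2, levels + 1)[1:-1]
centers = np.linspace(-2, 2, levels + 1)
centers = (centers[:-1] + centers[1:]) / 2
q = centers[np.digitize(y, edges)]

# minimum-distance detection on the quantized output
detected = np.argmin(np.abs(q[:, None] - constellation[None, :]), axis=1)
print(f"empirical SEP at {snr_db} dB with {bits}-bit ADC: "
      f"{np.mean(detected != sym):.4f}")
```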
no code implementations • 4 May 2021 • Xuetong Wu, Jonathan H. Manton, Uwe Aickelin, Jingge Zhu
On the one hand, it is conceivable that knowledge from one task could be useful for solving a related problem.
no code implementations • 22 May 2020 • Jingge Zhu
Semi-supervised learning algorithms attempt to take advantage of relatively inexpensive unlabeled data to improve learning performance.
no code implementations • 18 May 2020 • Xuetong Wu, Jonathan H. Manton, Uwe Aickelin, Jingge Zhu
Specifically, we provide generalization error upper bounds for general transfer learning algorithms and extend the results to a specific empirical risk minimization (ERM) algorithm where data from both distributions are available in the training phase.
no code implementations • 24 Oct 2017 • Jingge Zhu, Ye Pu, Vipul Gupta, Claire Tomlin, Kannan Ramchandran
As an application of the results, we demonstrate how optimization problems can be solved using a sequential approximation approach, which accelerates computation in a distributed system with stragglers.