Strictly Decentralized Adaptive Estimation of External Fields using Reproducing Kernels

This paper describes an adaptive method in continuous time for the estimation of external fields by a team of $N$ agents. Each agent $i$ explores a subdomain $\Omega^i$ of a bounded subset of interest $\Omega\subset X := \mathbb{R}^d$. Ideal adaptive estimates $\hat{g}^i_t$ are derived for each agent from a distributed parameter system (DPS) that takes values in the scalar-valued reproducing kernel Hilbert space $H_X$ of functions over $X$. Approximations of the evolution of the ideal local estimate $\hat{g}^i_t$ of agent $i$ are constructed solely from observations made by agent $i$ on a fine time scale. Since the local estimates on the fine time scale are constructed independently for each agent, we say that the method is strictly decentralized. On a coarse time scale, the individual local estimates $\hat{g}^i_t$ are fused via the expression $\hat{g}_t:=\sum_{i=1}^N\Psi^i \hat{g}^i_t$, which uses a partition of unity $\{\Psi^i\}_{1\leq i\leq N}$ subordinate to the cover $\{\Omega^i\}_{i=1,\ldots,N}$ of $\Omega$. Realizable algorithms are obtained by constructing finite dimensional approximations of the DPS in terms of scattered bases defined by each agent from samples along its trajectory. Rates of convergence of the error in the finite dimensional approximations are derived in terms of the fill distance of the samples that define the scattered centers in each subdomain. The qualitative performance of the convergence rates for the decentralized estimation method is illustrated via numerical simulations.
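The coarse-time-scale fusion step $\hat{g}_t=\sum_{i=1}^N\Psi^i\hat{g}^i_t$ can be sketched in one spatial dimension. In this minimal illustration, each local estimate is a kernel interpolant on the agent's scattered centers; the Gaussian kernel, its shape parameter, the logistic blending weights standing in for the partition of unity, and the sine "field" are all illustrative assumptions, not the paper's construction.

```python
import numpy as np

def rbf(x, c, eps=25.0):
    # Gaussian kernel k(x, c) = exp(-eps * |x - c|^2); eps is an assumed shape parameter
    return np.exp(-eps * (x - c) ** 2)

def local_estimate(centers, values, x, eps=25.0):
    # Kernel interpolant on an agent's scattered centers: solve
    # (K + reg*I) a = y, then evaluate g_hat(x) = sum_j a_j k(x, c_j).
    K = rbf(centers[:, None], centers[None, :], eps)
    a = np.linalg.solve(K + 1e-8 * np.eye(len(centers)), values)
    return rbf(x[:, None], centers[None, :], eps) @ a

# Hypothetical setup: Omega = [0, 1], two agents with overlapping subdomains.
g = np.sin                          # stand-in for the unknown external field
x = np.linspace(0.0, 1.0, 200)
c1 = np.linspace(0.0, 0.6, 8)       # agent 1's samples along its trajectory in Omega^1
c2 = np.linspace(0.4, 1.0, 8)       # agent 2's samples in Omega^2
g1 = local_estimate(c1, g(c1), x)   # strictly local estimate g_hat^1
g2 = local_estimate(c2, g(c2), x)   # strictly local estimate g_hat^2

# Smooth weights Psi^1, Psi^2 summing to one (a simple logistic blend across
# the overlap, standing in for a partition of unity subordinate to the cover).
w1 = 1.0 / (1.0 + np.exp(40.0 * (x - 0.5)))
w2 = 1.0 - w1

g_fused = w1 * g1 + w2 * g2         # fused estimate g_hat = sum_i Psi^i g_hat^i
err = np.max(np.abs(g_fused - g(x)))
```

Because each weight decays inside the other agent's subdomain, each agent's extrapolation error outside its own centers is suppressed in the fused estimate, which is the role the partition of unity plays in the paper's fusion rule.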
