Search Results for author: Ingvar Ziemann

Found 16 papers, 1 paper with code

State space models, emergence, and ergodicity: How many parameters are needed for stable predictions?

no code implementations20 Sep 2024 Ingvar Ziemann, Nikolai Matni, George J. Pappas

For this situation, we show that no learner using a linear filter can successfully learn the random walk unless the filter length exceeds a certain threshold depending on the effective memory length and the horizon of the problem.
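
As a quick illustration of the setting only (the threshold phenomenon itself is what the paper analyzes), here is a sketch that fits a length-L linear filter for one-step-ahead prediction of a simulated Gaussian random walk by least squares; the filter length and trajectory length below are arbitrary choices for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a Gaussian random walk x_t = x_{t-1} + w_t.
T = 5000
x = np.cumsum(rng.normal(size=T))

# One-step-ahead prediction with a length-L linear filter:
# x_hat_t = sum_{k=1}^{L} c_k x_{t-k}, with c fit by least squares.
# For a random walk the ideal one-step predictor is just x_{t-1},
# i.e. c approximately (1, 0, ..., 0).
L = 10
X = np.column_stack([x[L - k:T - k] for k in range(1, L + 1)])  # lagged regressors
y = x[L:T]                                                       # targets
c, *_ = np.linalg.lstsq(X, y, rcond=None)

resid = y - X @ c
print("filter coefficients:", np.round(c, 3))
print("mean squared one-step prediction error:", resid.var())
```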

A Short Information-Theoretic Analysis of Linear Auto-Regressive Learning

no code implementations10 Sep 2024 Ingvar Ziemann

In this note, we give a short information-theoretic proof of the consistency of the Gaussian maximum likelihood estimator in linear auto-regressive models.
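
As a minimal sketch of the object being analyzed (scalar AR(1) case, with illustrative parameters): under Gaussian noise, the maximum likelihood estimator of the autoregressive coefficient coincides with least squares, and a short simulation suggests the consistency the note proves.

```python
import numpy as np

rng = np.random.default_rng(1)

# Scalar AR(1): x_{t+1} = a* x_t + w_t, w_t ~ N(0, 1).
a_star = 0.8

def mle_ar1(T):
    x = np.zeros(T)
    for t in range(T - 1):
        x[t + 1] = a_star * x[t] + rng.normal()
    # Gaussian MLE of a* = least squares: sum_t x_{t+1} x_t / sum_t x_t^2.
    return np.dot(x[1:], x[:-1]) / np.dot(x[:-1], x[:-1])

for T in (100, 1000, 10000):
    print(T, mle_ar1(T))
```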

Finite Sample Analysis for a Class of Subspace Identification Methods

no code implementations26 Apr 2024 Jiabao He, Ingvar Ziemann, Cristian R. Rojas, Håkan Hjalmarsson

While subspace identification methods (SIMs) are appealing due to their simple parameterization for MIMO systems and robust numerical realizations, a comprehensive statistical analysis of SIMs remains an open problem, especially in the non-asymptotic regime.
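
For orientation only, a rough Ho-Kalman-style sketch of the subspace identification idea (the particular SIMs analyzed in the paper may differ in important ways; the system, lag count, and noise level are invented for the example): estimate Markov parameters by regressing outputs on past inputs, then read the state dimension off a Hankel matrix of those estimates.

```python
import numpy as np

rng = np.random.default_rng(2)

# A small SISO system: x_{t+1} = A x_t + B u_t,  y_t = C x_t + v_t.
A = np.array([[0.7, 0.2], [0.0, 0.5]])
B = np.array([1.0, 0.5])
C = np.array([1.0, -1.0])

T, L = 20000, 10                 # trajectory length, number of past input lags
u = rng.normal(size=T)
v = 0.1 * rng.normal(size=T)
y = np.zeros(T)
x = np.zeros(2)
for t in range(T):
    y[t] = C @ x + v[t]
    x = A @ x + B * u[t]

# Step 1: estimate the Markov parameters h_k = C A^{k-1} B by regressing
# y_t on the L most recent inputs (no direct feedthrough term here).
U = np.column_stack([u[L - k:T - k] for k in range(1, L + 1)])
h, *_ = np.linalg.lstsq(U, y[L:T], rcond=None)

# Step 2: Hankel matrix of the estimated Markov parameters; its numerical
# rank (two dominant singular values) recovers the state dimension.
m = L // 2
H = np.array([[h[i + j] for j in range(m)] for i in range(m)])
print("Hankel singular values:", np.round(np.linalg.svd(H, compute_uv=False), 3))
```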

Rate-Optimal Non-Asymptotics for the Quadratic Prediction Error Method

no code implementations11 Apr 2024 Charis Stamouli, Ingvar Ziemann, George J. Pappas

We study the quadratic prediction error method -- i.e., nonlinear least squares -- for a class of time-varying parametric predictor models satisfying a certain identifiability condition.
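
A toy sketch of such a fit (the predictor family, data-generating model, and constants are invented for illustration): choose the parameter minimizing the summed squared one-step prediction errors, i.e. solve a nonlinear least-squares problem.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(3)

# Data from a simple parametric predictor model: x_{t+1} = tanh(theta* x_t) + w_t.
theta_star = 0.9
T = 2000
x = np.zeros(T)
for t in range(T - 1):
    x[t + 1] = np.tanh(theta_star * x[t]) + 0.1 * rng.normal()

# Quadratic prediction error method: minimize the summed squared one-step
# prediction errors over theta (nonlinear in the parameter).
def residuals(theta):
    return x[1:] - np.tanh(theta[0] * x[:-1])

fit = least_squares(residuals, x0=np.array([0.0]))
print("estimated theta:", fit.x[0], " true theta:", theta_star)
```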

Sharp Rates in Dependent Learning Theory: Avoiding Sample Size Deflation for the Square Loss

no code implementations8 Feb 2024 Ingvar Ziemann, Stephen Tu, George J. Pappas, Nikolai Matni

In this work, we study statistical learning with dependent ($\beta$-mixing) data and square loss in a hypothesis class $\mathscr{F}\subset L_{\Psi_p}$ where $\Psi_p$ is the norm $\|f\|_{\Psi_p} \triangleq \sup_{m\geq 1} m^{-1/p} \|f\|_{L^m} $ for some $p\in [2,\infty]$.
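
For orientation (a standard fact, not a result of the paper): the case $p=2$ of this norm captures sub-Gaussian-style moment growth, while $p=\infty$ recovers the sup norm. For instance, for a Gaussian element one can check, using the usual moment bounds for a standard normal $Z$,

```latex
\|f\|_{L^m} = \sigma\,\big(\mathbb{E}|Z|^m\big)^{1/m} \le C\sigma\sqrt{m}
\quad \big(f(X)\sim N(0,\sigma^2),\ m\ge 1\big)
\;\Longrightarrow\;
\|f\|_{\Psi_2} = \sup_{m\ge 1} m^{-1/2}\,\|f\|_{L^m} \le C\sigma < \infty ,
```

whereas $\|f\|_{\Psi_\infty} = \sup_{m\ge 1}\|f\|_{L^m} = \|f\|_{L^\infty}$.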

Learning Theory

A Tutorial on the Non-Asymptotic Theory of System Identification

no code implementations7 Sep 2023 Ingvar Ziemann, Anastasios Tsiamis, Bruce Lee, Yassir Jedra, Nikolai Matni, George J. Pappas

This tutorial serves as an introduction to recently developed non-asymptotic methods in the theory of -- mainly linear -- system identification.
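
A minimal sketch of the prototypical problem such non-asymptotic analyses study (dimensions and parameters chosen arbitrarily here): estimate the state matrix of x_{t+1} = A x_t + w_t from a single trajectory by ordinary least squares.

```python
import numpy as np

rng = np.random.default_rng(4)

# Identify A in x_{t+1} = A x_t + w_t from one trajectory, by least squares.
A = np.array([[0.9, 0.1], [0.0, 0.8]])
T = 5000
X = np.zeros((T, 2))
for t in range(T - 1):
    X[t + 1] = A @ X[t] + rng.normal(size=2)

# Least-squares estimate: A_hat = argmin_M sum_t ||x_{t+1} - M x_t||^2.
X_past, X_next = X[:-1], X[1:]
A_hat = np.linalg.lstsq(X_past, X_next, rcond=None)[0].T
print("spectral-norm estimation error:", np.linalg.norm(A_hat - A, 2))
```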

The Fundamental Limitations of Learning Linear-Quadratic Regulators

no code implementations27 Mar 2023 Bruce D. Lee, Ingvar Ziemann, Anastasios Tsiamis, Henrik Sandberg, Nikolai Matni

We present a local minimax lower bound on the excess cost of designing a linear-quadratic controller from offline data.

A note on the smallest eigenvalue of the empirical covariance of causal Gaussian processes

no code implementations19 Dec 2022 Ingvar Ziemann

We present a simple proof for bounding the smallest eigenvalue of the empirical covariance in a causal Gaussian process.
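
A quick numerical sketch of the quantity in question (a stable vector AR(1) stands in for the causal Gaussian process; all constants are illustrative): the smallest eigenvalue of the empirical covariance along one trajectory, compared against its stationary counterpart.

```python
import numpy as np

rng = np.random.default_rng(5)

# Causal Gaussian process: a stable vector AR(1), x_{t+1} = A x_t + w_t.
A = np.array([[0.6, 0.3], [0.0, 0.7]])
T = 10000
x = np.zeros(2)
S = np.zeros((2, 2))
for t in range(T):
    S += np.outer(x, x)
    x = A @ x + rng.normal(size=2)

emp_cov = S / T
print("smallest eigenvalue of empirical covariance:",
      np.linalg.eigvalsh(emp_cov).min())

# For comparison: the stationary covariance solves P = A P A^T + I
# (fixed-point iteration converges since A is stable).
P = np.eye(2)
for _ in range(200):
    P = A @ P @ A.T + np.eye(2)
print("smallest eigenvalue of stationary covariance:",
      np.linalg.eigvalsh(P).min())
```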

Gaussian Processes

Statistical Learning Theory for Control: A Finite Sample Perspective

no code implementations12 Sep 2022 Anastasios Tsiamis, Ingvar Ziemann, Nikolai Matni, George J. Pappas

This tutorial survey provides an overview of recent non-asymptotic advances in statistical learning theory as relevant to control and system identification.

Learning Theory

Learning with little mixing

1 code implementation16 Jun 2022 Ingvar Ziemann, Stephen Tu

We study square loss in a realizable time-series framework with martingale difference noise.

Time Series, Time Series Analysis

How are policy gradient methods affected by the limits of control?

no code implementations14 Jun 2022 Ingvar Ziemann, Anastasios Tsiamis, Henrik Sandberg, Nikolai Matni

We study stochastic policy gradient methods from the perspective of control-theoretic limitations.
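
For context only, a toy zeroth-order policy gradient loop on a scalar LQR instance; this is a generic random-search gradient estimator, not the specific algorithms or limits the paper analyzes, and every constant below is made up.

```python
import numpy as np

rng = np.random.default_rng(6)

# Scalar LQR: x_{t+1} = a x_t + b u_t + w_t, cost sum_t (x_t^2 + u_t^2),
# static feedback u_t = -k x_t. Tune k with a zeroth-order gradient estimate.
a, b, H = 0.9, 1.0, 50

def cost(k, rollouts=20):
    total = 0.0
    for _ in range(rollouts):
        x = 1.0
        for _ in range(H):
            u = -k * x
            total += x * x + u * u
            x = a * x + b * u + 0.1 * rng.normal()
    return total / rollouts

k, delta, lr = 0.0, 0.05, 2e-3
for _ in range(300):
    s = rng.choice([-1.0, 1.0])                        # random perturbation direction
    grad = (cost(k + delta * s) - cost(k - delta * s)) / (2 * delta) * s
    k -= lr * grad
print("learned gain k:", k, "(optimal stationary gain here is roughly 0.54)")
```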

Policy Gradient Methods

Learning to Control Linear Systems can be Hard

no code implementations27 May 2022 Anastasios Tsiamis, Ingvar Ziemann, Manfred Morari, Nikolai Matni, George J. Pappas

In this paper, we study the statistical difficulty of learning to control linear systems.

Single Trajectory Nonparametric Learning of Nonlinear Dynamics

no code implementations16 Feb 2022 Ingvar Ziemann, Henrik Sandberg, Nikolai Matni

Given a single trajectory of a dynamical system, we analyze the performance of the nonparametric least squares estimator (LSE).
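
A small sketch of this estimation problem (the dynamics and the finite-dimensional feature class standing in for a genuinely nonparametric one are invented for the example): regress next states on features of current states over a single trajectory.

```python
import numpy as np

rng = np.random.default_rng(7)

# One trajectory of a nonlinear system x_{t+1} = f*(x_t) + w_t.
f_star = lambda z: 0.8 * np.sin(z)
T = 5000
x = np.zeros(T)
for t in range(T - 1):
    x[t + 1] = f_star(x[t]) + 0.3 * rng.normal()

# Least squares over a simple feature class (polynomials up to degree 5,
# standing in for a generic nonparametric class).
def features(z, deg=5):
    return np.vander(z, deg + 1, increasing=True)

theta, *_ = np.linalg.lstsq(features(x[:-1]), x[1:], rcond=None)

grid = np.linspace(-1, 1, 5)
print("f_hat:", np.round(features(grid) @ theta, 3))
print("f*  :", np.round(f_star(grid), 3))
```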

Regret Lower Bounds for Learning Linear Quadratic Gaussian Systems

no code implementations5 Jan 2022 Ingvar Ziemann, Henrik Sandberg

We establish regret lower bounds for adaptively controlling an unknown linear Gaussian system with quadratic costs.

On Uninformative Optimal Policies in Adaptive LQR with Unknown B-Matrix

no code implementations18 Nov 2020 Ingvar Ziemann, Henrik Sandberg

After defining the intrinsic notion of an uninformative optimal policy in terms of a singularity condition for Fisher information, we obtain local minimax regret lower bounds for such uninformative instances of LQR by appealing to van Trees' inequality (Bayesian Cramér-Rao) and a representation of regret in terms of a quadratic form (Bellman error).
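
For reference, a standard scalar form of the van Trees inequality invoked above (stated only for orientation; the paper's application involves the full LQR regret representation): for a sufficiently regular prior $\pi$ and any estimator $\hat{\theta}$,

```latex
\mathbb{E}_{\theta \sim \pi}\,\mathbb{E}_{\theta}\big[(\hat{\theta} - \theta)^2\big]
\;\ge\;
\frac{1}{\mathbb{E}_{\theta \sim \pi}\big[I(\theta)\big] + I(\pi)},
\qquad
I(\pi) = \int \frac{\pi'(\theta)^2}{\pi(\theta)}\, d\theta ,
```

where $I(\theta)$ is the Fisher information of the observation model.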
