Gaussian Filtering and Smoothing

Predict

ProbNumDiffEq.predict - Function
predict(x::Gaussian, A::AbstractMatrix, Q::AbstractMatrix)

Prediction step in Kalman filtering for linear dynamics models.

Given a Gaussian $x = \mathcal{N}(μ, Σ)$, compute and return $\mathcal{N}(A μ, A Σ A^T + Q)$.
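In plain matrices, the prediction step amounts to the following (a minimal sketch with toy values, not the package API):

using LinearAlgebra

μ, Σ = [1.0, 0.0], Matrix(1.0I, 2, 2)   # x = N(μ, Σ)
A = [1.0 0.1; 0.0 1.0]
Q = 0.01 * Matrix(1.0I, 2, 2)

μ_pred = A * μ              # A μ
Σ_pred = A * Σ * A' + Q     # A Σ Aᵀ + Q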

See also the non-allocating square-root version predict!.

ProbNumDiffEq.predict! - Function
predict!(x_out, x_curr, Ah, Qh, cachemat)

In-place and square-root implementation of predict which saves the result into x_out.

Only works with PSDMatrices.PSDMatrix types as Ah, Qh, and in the covariances of x_curr and x_out (both of type Gaussian). To prevent allocations, a cache matrix cachemat of size $D \times 2D$ (where $D \times D$ is the size of Ah and Qh) needs to be passed.
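A sketch of how the arguments and the cache could be set up (hypothetical values; assumes the PSDMatrix type from PSDMatrices.jl and the Gaussian type from GaussianDistributions.jl, and that predict! is called with its internal, qualified name):

using LinearAlgebra, PSDMatrices, GaussianDistributions, ProbNumDiffEq

D = 2
Ah = [1.0 0.1; 0.0 1.0]
Qh = PSDMatrix(sqrt(0.1) * Matrix(1.0I, D, D))               # Q = RᵀR in square-root form
x_curr = Gaussian(zeros(D), PSDMatrix(Matrix(1.0I, D, D)))
x_out = Gaussian(zeros(D), PSDMatrix(zeros(D, D)))           # overwritten in-place
cachemat = zeros(D, 2D)                                      # the required D × 2D cache

ProbNumDiffEq.predict!(x_out, x_curr, Ah, Qh, cachemat)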

See also: predict.


Update

ProbNumDiffEq.update - Function
update(x, measurement, H)

Update step in Kalman filtering for linear dynamics models.

Given a Gaussian $x = \mathcal{N}(μ, Σ)$ and a measurement $z = \mathcal{N}(\hat{z}, S)$, with $S = H Σ H^T$, compute

\[\begin{aligned} K &= Σ H^T S^{-1}, \\ μ^F &= μ + K (0 - \hat{z}), \\ Σ^F &= Σ - K S K^T, \end{aligned}\]

and return an updated state $\mathcal{N}(μ^F, Σ^F)$. Note that this assumes zero measurements, i.e. the data to condition on is zero. When called with ProbNumDiffEq.SquarerootMatrix type arguments it performs the update in Joseph / square-root form.
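In plain matrices, the zero-measurement update reads as follows (a toy sketch, not the package API):

using LinearAlgebra

μ, Σ = [1.0, 0.5], Matrix(1.0I, 2, 2)   # predicted state
H = [1.0 0.0]                           # measurement matrix

ẑ = H * μ               # predicted measurement mean
S = H * Σ * H'          # innovation covariance
K = Σ * H' / S          # Kalman gain
μ_F = μ + K * (0 .- ẑ)  # zero measurement
Σ_F = Σ - K * S * K'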

For better performance, we recommend using the non-allocating update!.

ProbNumDiffEq.update! - Function
update!(x_out, x_pred, measurement, H, K_cache, M_cache, S_cache)

In-place and square-root implementation of update which saves the result into x_out.

Implemented in Joseph Form to retain the PSDMatrix covariances:

\[\begin{aligned} K &= Σ H^T S^{-1}, \\ μ^F &= μ + K (0 - \hat{z}), \\ \sqrt{Σ^F} &= (I - K H) \sqrt{Σ}, \end{aligned}\]

where $\sqrt{M}$ denotes the left square-root of a matrix M, i.e. $M = \sqrt{M} \sqrt{M}^T$.

To prevent allocations, preallocated caches need to be passed: K_cache and M_cache, both of size D × D, and S_cache of the same type as measurement.Σ.
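For illustration, the caches could be preallocated along these lines (dimensions hypothetical; assumes measurement.Σ is a PSDMatrix from PSDMatrices.jl):

using PSDMatrices

D, d = 6, 1                        # state and measurement dimensions
K_cache = zeros(D, D)
M_cache = zeros(D, D)
S_cache = PSDMatrix(zeros(d, d))   # must match the type of measurement.Σ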

See also: update.


Smooth

ProbNumDiffEq.smooth - Function
smooth(x_curr, x_next_smoothed, A, Q)

Update step of the Kalman smoother, also known as the Rauch-Tung-Striebel smoother, for linear dynamics models.

Given Gaussians $x_n^F = \mathcal{N}(μ_n^F, Σ_n^F)$ and $x_{n+1}^S = \mathcal{N}(μ_{n+1}^S, Σ_{n+1}^S)$, compute

\[\begin{aligned} μ_{n+1}^P &= A μ_n^F, \\ Σ_{n+1}^P &= A Σ_n^F A^T + Q, \\ G &= Σ_n^F A^T (Σ_{n+1}^P)^{-1}, \\ μ_n^S &= μ_n^F + G (μ_{n+1}^S - μ_{n+1}^P), \\ Σ_n^S &= (I - G A) Σ_n^F (I - G A)^T + G Q G^T + G Σ_{n+1}^S G^T, \end{aligned}\]

and return a smoothed state $\mathcal{N}(μ_n^S, Σ_n^S)$. When called with ProbNumDiffEq.SquarerootMatrix type arguments it performs the update in Joseph / square-root form.
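The same computation in plain matrices, with toy values (illustrative only):

using LinearAlgebra

μ_F, Σ_F = [1.0, 0.0], Matrix(1.0I, 2, 2)                   # filtered x_n
μ_next_S, Σ_next_S = [1.2, 0.1], 0.5 * Matrix(1.0I, 2, 2)   # smoothed x_{n+1}
A = [1.0 0.1; 0.0 1.0]
Q = 0.01 * Matrix(1.0I, 2, 2)

μ_pred = A * μ_F
Σ_pred = A * Σ_F * A' + Q
G = Σ_F * A' / Σ_pred                                       # smoothing gain
μ_S = μ_F + G * (μ_next_S - μ_pred)
Σ_S = (I - G*A) * Σ_F * (I - G*A)' + G*Q*G' + G*Σ_next_S*G'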


Markov Kernels

ProbNumDiffEq.AffineNormalKernel - Type
AffineNormalKernel(A[, b], C)

Structure to represent affine Normal Markov kernels, i.e. conditional distributions of the form

\[\begin{aligned} y \mid x \sim \mathcal{N} \left( y; A x + b, C \right). \end{aligned}\]

At the time of writing, AffineNormalKernels are only used to precompute and store the backward representation of the posterior (via compute_backward_kernel!) and for smoothing (via marginalize!).
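A sketch of constructing such a kernel by hand (illustrative values; assumes the PSDMatrix type from PSDMatrices.jl):

using LinearAlgebra, PSDMatrices, ProbNumDiffEq

A = [1.0 0.1; 0.0 1.0]
b = zeros(2)
C = PSDMatrix(0.1 * Matrix(1.0I, 2, 2))   # C = RᵀR
K = ProbNumDiffEq.AffineNormalKernel(A, b, C)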

ProbNumDiffEq.marginalize! - Function
marginalize!(
    xout::Gaussian{Vector{T},PSDMatrix{T,S}},
    x::Gaussian{Vector{T},PSDMatrix{T,S}},
    K::AffineNormalKernel{<:AbstractMatrix,Union{<:Number,<:AbstractVector,Missing},<:PSDMatrix};
    C_DxD, C_3DxD
)

Basically the same as predict!, but in kernel language and with support for affine transitions. At the time of writing, this is only used to smooth the posterior using its backward representation, where the kernels are precomputed with compute_backward_kernel!.

Note that this function assumes certain shapes:

  • size(x.μ) == (D,)
  • size(x.Σ) == (D, D)
  • size(K.A) == (D, D)
  • size(K.b) == (D,), or missing
  • size(K.C) == (D, D), but with a tall square-root size(K.C.R) == (3D, D)

xout is assumed to have the same shapes as x.
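Ignoring the square-root representation and the caches, marginalize! computes the standard affine marginalization (toy sketch, not the package API):

using LinearAlgebra

μ, Σ = [1.0, 0.0], Matrix(1.0I, 2, 2)   # x
A, b = [1.0 0.1; 0.0 1.0], zeros(2)     # kernel parameters
C = 0.01 * Matrix(1.0I, 2, 2)

μ_out = A * μ + b           # the b term is skipped when b is missing
Σ_out = A * Σ * A' + C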

ProbNumDiffEq.compute_backward_kernel! - Function
compute_backward_kernel!(Kout, xpred, x, K; C_DxD[, diffusion=1])

Compute the backward representation of the posterior, i.e. the conditional distribution of the current state given the next state and the transition kernel.

More precisely, given a distribution (x)

\[\begin{aligned} x \sim \mathcal{N} \left( x; μ, Σ \right), \end{aligned}\]

a kernel (K)

\[\begin{aligned} y \mid x \sim \mathcal{N} \left( y; A x + b, C \right), \end{aligned}\]

and a distribution (xpred) obtained via marginalization

\[\begin{aligned} y &\sim \mathcal{N} \left( y; μ^P, Σ^P \right), \\ μ^P &= A μ + b, \\ Σ^P &= A Σ A^\top + C, \end{aligned}\]

this function computes the conditional distribution

\[\begin{aligned} x \mid y \sim \mathcal{N} \left( x; G y + d, Λ \right), \end{aligned}\]

where

\[\begin{aligned} G &= Σ A^\top (Σ^P)^{-1}, \\ d &= μ - G μ^P, \\ Λ &= Σ - G Σ^P G^\top. \end{aligned}\]

Everything is computed in square-root form and with minimal allocations (thus the cache C_DxD), so the actual formulas implemented here differ a bit.
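In plain (non-square-root) matrices, and with toy values, the kernel parameters are computed as follows (illustrative sketch):

using LinearAlgebra

μ, Σ = [1.0, 0.0], Matrix(1.0I, 2, 2)   # x
A, b = [1.0 0.1; 0.0 1.0], zeros(2)     # forward kernel
C = 0.01 * Matrix(1.0I, 2, 2)

μ_pred = A * μ + b          # xpred = N(μ_pred, Σ_pred)
Σ_pred = A * Σ * A' + C

G = Σ * A' / Σ_pred         # backward gain
d = μ - G * μ_pred
Λ = Σ - G * Σ_pred * G'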

The resulting backward kernels are used to smooth the posterior, via marginalize!.
