diff --git a/previews/PR94/.documenter-siteinfo.json b/previews/PR94/.documenter-siteinfo.json
index 2343cb80c..fbe476451 100644
--- a/previews/PR94/.documenter-siteinfo.json
+++ b/previews/PR94/.documenter-siteinfo.json
@@ -1 +1 @@
-{"documenter":{"julia_version":"1.9.4","generation_timestamp":"2024-08-18T01:44:21","documenter_version":"1.5.0"}}
\ No newline at end of file
+{"documenter":{"julia_version":"1.9.4","generation_timestamp":"2024-08-18T14:03:56","documenter_version":"1.5.0"}}
\ No newline at end of file
diff --git a/previews/PR94/func_index/index.html b/previews/PR94/func_index/index.html
index 21e9deefd..0671d2fb8 100644
--- a/previews/PR94/func_index/index.html
+++ b/previews/PR94/func_index/index.html
@@ -1,2 +1,2 @@
The objective is to provide a simple, clear and modular framework to quickly design model predictive controllers (MPCs) in Julia, while preserving the flexibility for advanced real-time optimization. Modern MPCs based on closed-loop state estimators are the main focus of the package, but classical approaches that rely on internal models are also possible. The JuMP.jl interface allows the user to test different solvers easily if the performance of the default settings is not satisfactory.
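For instance, a minimal sketch of swapping the default solver through JuMP (the LinModel and LinMPC constructors and the optim keyword are assumed from the exported API; check the manual for exact signatures):

```julia
using ModelPredictiveControl, ControlSystemsBase
using JuMP, DAQP

# First-order SISO plant y(s) = 2/(10s + 1) u(s), sampled at Ts = 2 s.
model = LinModel(tf(2, [10, 1]), 2.0)

# Pass another JuMP optimizer if the default QP solver underperforms
# (the `optim` keyword is an assumption to verify in the documentation).
mpc = LinMPC(model, Hp=10, Hc=2, optim=JuMP.Model(DAQP.Optimizer))
```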
The documentation is divided into two parts:
Manual — This section includes step-by-step guides to design predictive controllers on multiple case studies.
Functions — Documentation of methods and types exported by the package. The "Internals" section provides implementation details of functions that are not exported.
Augment manipulated inputs constraints with slack variable ϵ for softening.
Denoting the input increments augmented with the slack variable $\mathbf{ΔŨ} = [\begin{smallmatrix} \mathbf{ΔU} \\ ϵ \end{smallmatrix}]$, it returns the augmented conversion matrix $\mathbf{S̃}$, similar to the one described at init_ΔUtoU. It also returns the $\mathbf{A}$ matrices for the inequality constraints:
\[\begin{bmatrix} \mathbf{A_{U_{min}}} \\ \mathbf{A_{U_{max}}} \end{bmatrix} \mathbf{ΔŨ} ≤ \begin{bmatrix} - \mathbf{(U_{min} - U_{op}) + T u}(k-1) \\ + \mathbf{(U_{max} - U_{op}) - T u}(k-1) \end{bmatrix}\]
in which $\mathbf{U_{min}, U_{max}}$ and $\mathbf{U_{op}}$ vectors respectively contain $\mathbf{u_{min}, u_{max}}$ and $\mathbf{u_{op}}$ repeated $H_p$ times.
Augment input increments constraints with slack variable ϵ for softening.
Denoting the input increments augmented with the slack variable $\mathbf{ΔŨ} = [\begin{smallmatrix} \mathbf{ΔU} \\ ϵ \end{smallmatrix}]$, it returns the augmented input increment weights $\mathbf{Ñ}_{H_c}$ (that incorporate $C$). It also returns the augmented constraints $\mathbf{ΔŨ_{min}}$ and $\mathbf{ΔŨ_{max}}$ and the $\mathbf{A}$ matrices for the inequality constraints:
\[\begin{bmatrix} \mathbf{A_{ΔŨ_{min}}} \\ \mathbf{A_{ΔŨ_{max}}} \end{bmatrix} \mathbf{ΔŨ} ≤ \begin{bmatrix} - \mathbf{ΔŨ_{min}} \\ + \mathbf{ΔŨ_{max}} \end{bmatrix}\]
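A plausible block structure for these augmented weights, assuming the slack weight $C$ is simply appended to $\mathbf{N}_{H_c}$ (an assumption to verify against init_quadprog):

\[\mathbf{Ñ}_{H_c} = \begin{bmatrix} \mathbf{N}_{H_c} & \mathbf{0} \\ \mathbf{0}' & C \end{bmatrix}\]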
Augment linear output prediction constraints with slack variable ϵ for softening.
Denoting the input increments augmented with the slack variable $\mathbf{ΔŨ} = [\begin{smallmatrix} \mathbf{ΔU} \\ ϵ \end{smallmatrix}]$, it returns the $\mathbf{Ẽ}$ matrix that appears in the linear model prediction equation $\mathbf{Ŷ_0 = Ẽ ΔŨ + F}$, and the $\mathbf{A}$ matrices for the inequality constraints:
\[\begin{bmatrix} \mathbf{A_{Y_{min}}} \\ \mathbf{A_{Y_{max}}} \end{bmatrix} \mathbf{ΔŨ} ≤ \begin{bmatrix} - \mathbf{(Y_{min} - Y_{op}) + F} \\ + \mathbf{(Y_{max} - Y_{op}) - F} \end{bmatrix}\]
in which $\mathbf{Y_{min}, Y_{max}}$ and $\mathbf{Y_{op}}$ vectors respectively contain $\mathbf{y_{min}, y_{max}}$ and $\mathbf{y_{op}}$ repeated $H_p$ times.
Augment terminal state constraints with slack variable ϵ for softening.
Denoting the input increments augmented with the slack variable $\mathbf{ΔŨ} = [\begin{smallmatrix} \mathbf{ΔU} \\ ϵ \end{smallmatrix}]$, it returns the $\mathbf{ẽ_{x̂}}$ matrix that appears in the terminal state equation $\mathbf{x̂_0}(k + H_p) = \mathbf{ẽ_x̂ ΔŨ + f_x̂}$, and the $\mathbf{A}$ matrices for the inequality constraints:
\[\begin{bmatrix} \mathbf{A_{x̂_{min}}} \\ \mathbf{A_{x̂_{max}}} \end{bmatrix} \mathbf{ΔŨ} ≤ \begin{bmatrix} - \mathbf{(x̂_{min} - x̂_{op}) + f_x̂} \\ + \mathbf{(x̂_{max} - x̂_{op}) - f_x̂} \end{bmatrix}\]
The vector $\mathbf{q̃}$ and scalar $p$ need to be recomputed at each control period $k$, see initpred!. $p$ does not impact the position of the minimum; it is thus not needed for the optimization itself, but it is required to evaluate the minimal $J$ value.
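To illustrate why $p$ matters only when reporting $J$, here is a small sketch assuming the usual quadratic general form $J(\mathbf{ΔŨ}) = \frac{1}{2} \mathbf{ΔŨ' H̃ ΔŨ + q̃' ΔŨ} + p$ (the $\frac{1}{2}$ convention is an assumption):

```julia
using LinearAlgebra

# J(z) = ½z'H̃z + q̃'z + p: the constant p shifts the value of J but not
# the minimizer z* = -H̃⁻¹q̃.
J(z, H̃, q̃, p) = 0.5*dot(z, H̃, z) + dot(q̃, z) + p

H̃, q̃, p = [2.0 0.0; 0.0 4.0], [-2.0, -4.0], 3.0
zopt = -(H̃\q̃)            # position of the minimum: independent of p
Jmin = J(zopt, H̃, q̃, p)  # p is only needed to report this minimal value
```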
The current stochastic outputs $\mathbf{ŷ_s}(k)$ comprise the measured outputs $\mathbf{ŷ_s^m}(k) = \mathbf{y^m}(k) - \mathbf{ŷ_d^m}(k)$ and the unmeasured ones $\mathbf{ŷ_s^u}(k) = \mathbf{0}$. See [2].
i_b is a BitVector containing the indices of $\mathbf{b}$ that are finite numbers. i_g is a similar vector, but for the indices of $\mathbf{g}$ (empty if model is a LinModel). The method also returns the $\mathbf{A}$ matrix if args is provided. In such a case, args needs to contain all the inequality constraint matrices: A_Umin, A_Umax, A_ΔŨmin, A_ΔŨmax, A_Ymin, A_Ymax, A_x̂min, A_x̂max.
Init linear model prediction matrices F, q̃, p and current estimated output ŷ.
See init_predmat and init_quadprog for the definition of the matrices. They are recomputed at each control period with in-place operations.
where $\mathbf{Δu}_{k-1}(k+j)$ is the input increment for time $k+j$ computed at the last control period $k-1$. It then calls JuMP.optimize!(mpc.optim) and extracts the solution. A failed optimization prints an @error log in the REPL and returns the warm-start value.
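A hedged sketch of this solve-and-fallback pattern with the JuMP API (the registered variable name ΔŨvar and the function itself are hypothetical, not the package's internals):

```julia
using JuMP

function solve_with_fallback(optim::JuMP.Model, ΔŨwarm::Vector{Float64})
    ΔŨvar = optim[:ΔŨvar]              # hypothetical registered variable
    set_start_value.(ΔŨvar, ΔŨwarm)    # warm start: shifted last solution
    optimize!(optim)
    if termination_status(optim) in (MOI.OPTIMAL, MOI.LOCALLY_SOLVED)
        return value.(ΔŨvar)
    end
    @error "MPC optimization failed, returning the warm-start value"
    return ΔŨwarm
end
```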
[1] Maciejowski, J. 2000, "Predictive Control: With Constraints", 1st ed., Prentice Hall, ISBN 978-0201398236.
[2] Desbiens, A., D. Hodouin & É. Plamondon. 2000, "Global predictive control: a unified control structure for decoupling setpoint tracking, feedforward compensation and disturbance rejection dynamics", IEE Proceedings - Control Theory and Applications, vol. 147, no. 4, p. 465–475, https://doi.org/10.1049/ip-cta:20000443, ISSN 1350-2379.
with constant manipulated inputs $\mathbf{u_0 = u - u_{op}}$ and measured disturbances $\mathbf{d_0 = d - d_{op}}$. The Moore-Penrose pseudo-inverse computes $\mathbf{(I - A)^{-1}}$ to support integrating models (their integrator states will be 0).
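A minimal sketch of this computation, with a pure integrator as the second state (illustrative, not the package's internals):

```julia
using LinearAlgebra

# Steady state of x₀(k+1) = A x₀ + Bᵤ u₀ + B_d d₀: pinv keeps the result
# finite when I - A is singular (integrating models), zeroing the
# integrator states.
steady_x0(A, Bu, Bd, u0, d0) = pinv(I - A)*(Bu*u0 + Bd*d0)

A  = [0.5 0.0; 0.0 1.0]    # second state is a pure integrator
Bu = [1.0; 1.0;;]          # 2×1 input matrix
Bd = zeros(2, 0)           # no measured disturbances
x0 = steady_x0(A, Bu, Bd, [0.3], Float64[])   # == [0.6, 0.0]
```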
Mutating state function $\mathbf{f̂}$ of the augmented model.
By introducing an augmented state vector $\mathbf{x̂_0}$ like in augment_model, the function returns the next state of the augmented model, defined as:
where $\mathbf{x̂_0}(k+1)$ is stored in the x̂next0 argument. The method mutates x̂next0 and û0 in place; the latter stores the input vector of the augmented model, $\mathbf{u_0 + ŷ_{s_u}}$.
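A minimal sketch of this in-place pattern, with illustrative names and the block structure of augment_model assumed:

```julia
# x̂next0 and û0 are preallocated buffers, mutated to avoid allocations
# in the estimation hot loop (signature is illustrative only).
function f̂!(x̂next0, û0, A, Bu, Bd, As, Cs_u, x̂0, u0, d0)
    nx = size(A, 1)
    xd, xs = @views x̂0[1:nx], x̂0[nx+1:end]
    û0 .= u0 .+ Cs_u*xs                          # u₀ + ŷ_{s_u}: augmented input
    x̂next0[1:nx]     .= A*xd .+ Bu*û0 .+ Bd*d0   # deterministic states
    x̂next0[nx+1:end] .= As*xs                    # stochastic states
    return x̂next0
end
```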
Init stochastic model matrices from integrator specifications for state estimation.
The arguments nint_u and nint_ym specify how many integrators are added to each manipulated input and each measured output. The function returns the state-space matrices As, Cs_u and Cs_y of the stochastic model:
\[\begin{aligned}
\mathbf{x_s}(k+1) &= \mathbf{A_s x_s}(k) + \mathbf{B_s e}(k) \\
\mathbf{y_{s_{u}}}(k) &= \mathbf{C_{s_{u}} x_s}(k) \\
\mathbf{y_{s_{y}}}(k) &= \mathbf{C_{s_{y}} x_s}(k)
\end{aligned}\]
where $\mathbf{e}(k)$ is an unknown zero mean white noise and $\mathbf{A_s} = \mathrm{diag}(\mathbf{A_{s_{u}}, A_{s_{ym}}})$. The estimation does not use $\mathbf{B_s}$; it is thus ignored. The function init_integrators builds the state-space matrices.
init_integrators(nint, ny, varname::String) -> A, C, nint
Calc A, C state-space matrices from integrator specifications nint.
This function is used to initialize the stochastic part of the augmented model for the design of state estimators. The vector nint provides how many integrators (in series) should be incorporated for each output. The argument should have ny elements, except nint=0, which is an alias for no integrator at all. The specific case of one integrator per output results in A = I and C = I. The estimation does not use the B matrix, so it is ignored. This function is called twice (see the sketch after this list):
for the unmeasured disturbances at manipulated inputs $\mathbf{u}$
for the unmeasured disturbances at measured outputs $\mathbf{y^m}$
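An illustrative construction that satisfies these specifications, including the A = I, C = I special case (not necessarily the package's exact state ordering):

```julia
# Chain nint[i] discrete integrators in series for each output; A is
# block-diagonal and C picks the last integrator of each chain.
function integrator_chains(nint::Vector{Int}, ny::Int)
    nx = sum(nint)
    A, C = zeros(nx, nx), zeros(ny, nx)
    i = 0
    for (out, n) in enumerate(nint)
        for j in 1:n
            A[i+j, i+j] = 1.0                  # integrator: x⁺ = x + input
            j > 1 && (A[i+j, i+j-1] = 1.0)     # series coupling
        end
        n > 0 && (C[out, i+n] = 1.0)           # output reads the chain's end
        i += n
    end
    return A, C
end

A, C = integrator_chains([1, 1], 2)   # one integrator per output: A = I, C = I
```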
Augment LinModel state-space matrices with stochastic ones As, Cs_u, Cs_y.
If $\mathbf{x_0}$ are model.x0 states, and $\mathbf{x_s}$, the states defined at init_estimstoch, we define an augmented state vector $\mathbf{x̂} = [ \begin{smallmatrix} \mathbf{x_0} \\ \mathbf{x_s} \end{smallmatrix} ]$. The method returns the augmented matrices Â, B̂u, Ĉ, B̂d and D̂d of the augmented model:
\[\begin{aligned}
\mathbf{x̂}(k+1) &= \mathbf{Â x̂}(k) + \mathbf{B̂_u u_0}(k) + \mathbf{B̂_d d_0}(k) \\
\mathbf{y_0}(k) &= \mathbf{Ĉ x̂}(k) + \mathbf{D̂_d d_0}(k)
\end{aligned}\]
An error is thrown if the augmented model is not observable and verify_obsv == true. The augmented operating points x̂op and f̂op are simply $\mathbf{x_{op}}$ and $\mathbf{f_{op}}$ vectors appended with zeros (see setop!).
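A hedged sketch of such an observability check, using a plain rank test on the observability matrix (the package may rely on a different routine):

```julia
using LinearAlgebra

function assert_obsv(Â, Ĉ)
    nx̂ = size(Â, 1)
    O  = reduce(vcat, (Ĉ*Â^i for i in 0:nx̂-1))   # observability matrix
    rank(O) == nx̂ || error("augmented model is unobservable")
    return nothing
end
```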
With $n_\mathbf{x̂}$ elements in the state vector $\mathbf{x̂}$ and $n_σ = 2 n_\mathbf{x̂} + 1$ sigma points, the scaling factor applied on standard deviation matrices $\sqrt{\mathbf{P̂}}$ is:
\[ γ = α \sqrt{ n_\mathbf{x̂} + κ }\]
The weight vector $(n_σ × 1)$ for the mean and the weight matrix $(n_σ × n_σ)$ for the covariance are respectively:
\[\begin{aligned}
\mathbf{m̂} &= \begin{bmatrix} 1 - \frac{n_\mathbf{x̂}}{γ^2} & \frac{1}{2γ^2} & \cdots & \frac{1}{2γ^2} \end{bmatrix}' \\
\mathbf{Ŝ} &= \mathrm{diag}\left( 1 - \frac{n_\mathbf{x̂}}{γ^2} + 1 - α^2 + β, \; \frac{1}{2γ^2}, \; \cdots, \; \frac{1}{2γ^2} \right)
\end{aligned}\]
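A small Julia sketch of these standard generalized-unscented-transform weights (storing $\mathbf{Ŝ}$ as a Diagonal is an assumption):

```julia
using LinearAlgebra

function ukf_weights(nx̂; α=1e-3, β=2.0, κ=0.0)
    nσ = 2nx̂ + 1
    γ  = α*sqrt(nx̂ + κ)
    m̂  = fill(1/(2γ^2), nσ)
    m̂[1] = 1 - nx̂/γ^2                             # central sigma point weight
    Ŝ  = Diagonal([m̂[1] + 1 - α^2 + β; m̂[2:end]])  # covariance weights
    return γ, m̂, Ŝ
end
```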
where $\mathbf{e}(k)$ is conceptual and unknown zero mean white noise. Its optimal estimation is $\mathbf{ê=0}$, the expected value; the Âs and B̂s matrices update the stochastic estimate $\mathbf{x̂_s}$ accordingly, from the current stochastic output estimate $\mathbf{ŷ_s}(k)$, composed of the measured $\mathbf{ŷ_s^m}(k) = \mathbf{y^m}(k) - \mathbf{ŷ_d^m}(k)$ and unmeasured $\mathbf{ŷ_s^u = 0}$ outputs. See [1].
Construct the MHE prediction matrices for LinModel model.
We first introduce the deviation vector of the estimated state at arrival $\mathbf{x̂_0}(k-N_k+1) = \mathbf{x̂}_k(k-N_k+1) - \mathbf{x̂_{op}}$ (see setop!), and the vector $\mathbf{Z} = [\begin{smallmatrix} \mathbf{x̂_0}(k-N_k+1) \\ \mathbf{Ŵ} \end{smallmatrix}]$ with the decision variables. The estimated sensor noises from time $k-N_k+1$ to $k$ are then computed from $\mathbf{Z}$ and these prediction matrices.
Augment arrival state constraints with slack variable ϵ for softening the MHE.
Denoting the MHE decision variable augmented with the slack variable $\mathbf{Z̃} = [\begin{smallmatrix} ϵ \\ \mathbf{Z} \end{smallmatrix}]$, it returns the $\mathbf{ẽ_x̄}$ matrix that appears in the estimation error at arrival equation $\mathbf{x̄} = \mathbf{ẽ_x̄ Z̃ + f_x̄}$. It also returns the augmented constraints $\mathbf{x̃_{min}}$ and $\mathbf{x̃_{max}}$, and the $\mathbf{A}$ matrices for the inequality constraints:
\[\begin{bmatrix} \mathbf{A_{x̃_{min}}} \\ \mathbf{A_{x̃_{max}}} \end{bmatrix} \mathbf{Z̃} ≤ \begin{bmatrix} - \mathbf{(x̃_{min} - x̃_{op}) + f_x̄} \\ + \mathbf{(x̃_{max} - x̃_{op}) - f_x̄} \end{bmatrix}\]
in which $\mathbf{x̃_{min}} = [\begin{smallmatrix} 0 \\ \mathbf{x̂_{min}} \end{smallmatrix}]$, $\mathbf{x̃_{max}} = [\begin{smallmatrix} ∞ \\ \mathbf{x̂_{max}} \end{smallmatrix}]$ and $\mathbf{x̃_{op}} = [\begin{smallmatrix} 0 \\ \mathbf{x̂_{op}} \end{smallmatrix}]$.
Augment estimated state constraints with slack variable ϵ for softening the MHE.
Denoting the MHE decision variable augmented with the slack variable $\mathbf{Z̃} = [\begin{smallmatrix} ϵ \\ \mathbf{Z} \end{smallmatrix}]$, it returns the $\mathbf{Ẽ_x̂}$ matrix that appears in estimated states equation $\mathbf{X̂} = \mathbf{Ẽ_x̂ Z̃ + F_x̂}$. It also returns the $\mathbf{A}$ matrices for the inequality constraints:
\[\begin{bmatrix} \mathbf{A_{X̂_{min}}} \\ \mathbf{A_{X̂_{max}}} \end{bmatrix} \mathbf{Z̃} ≤ \begin{bmatrix} - \mathbf{(X̂_{min} - X̂_{op}) + F_x̂} \\ + \mathbf{(X̂_{max} - X̂_{op}) - F_x̂} \end{bmatrix}\]
in which $\mathbf{X̂_{min}, X̂_{max}}$ and $\mathbf{X̂_{op}}$ vectors respectively contain $\mathbf{x̂_{min}, x̂_{max}}$ and $\mathbf{x̂_{op}}$ repeated $H_e$ times.
Augment estimated process noise constraints with slack variable ϵ for softening the MHE.
Denoting the MHE decision variable augmented with the slack variable $\mathbf{Z̃} = [\begin{smallmatrix} ϵ \\ \mathbf{Z} \end{smallmatrix}]$, it returns the $\mathbf{A}$ matrices for the inequality constraints:
\[\begin{bmatrix} \mathbf{A_{Ŵ_{min}}} \\ \mathbf{A_{Ŵ_{max}}} \end{bmatrix} \mathbf{Z̃} ≤ \begin{bmatrix} - \mathbf{Ŵ_{min}} \\ + \mathbf{Ŵ_{max}} \end{bmatrix}\]
Augment estimated sensor noise constraints with slack variable ϵ for softening the MHE.
Denoting the MHE decision variable augmented with the slack variable $\mathbf{Z̃} = [\begin{smallmatrix} ϵ \\ \mathbf{Z} \end{smallmatrix}]$, it returns the $\mathbf{Ẽ}$ matrix that appears in estimated sensor noise equation $\mathbf{V̂} = \mathbf{Ẽ Z̃ + F}$. It also returns the $\mathbf{A}$ matrices for the inequality constraints:
\[\begin{bmatrix} \mathbf{A_{V̂_{min}}} \\ \mathbf{A_{V̂_{max}}} \end{bmatrix} \mathbf{Z̃} ≤ \begin{bmatrix} - \mathbf{V̂_{min} + F} \\ + \mathbf{V̂_{max} - F} \end{bmatrix}\]
i_b is a BitVector containing the indices of $\mathbf{b}$ that are finite numbers. i_g is a similar vector, but for the indices of $\mathbf{g}$ (empty if model is a LinModel). The method also returns the $\mathbf{A}$ matrix if args is provided. In such a case, args needs to contain all the inequality constraint matrices: A_x̃min, A_x̃max, A_X̂min, A_X̂max, A_Ŵmin, A_Ŵmax, A_V̂min, A_V̂max.
See init_predmat_mhe for the definition of the vectors $\mathbf{F, f_x̄}$. It also initializes the estim.optim objective function, expressed as a quadratic general form in the decision variable $\mathbf{Z̃} = [\begin{smallmatrix} ϵ \\ \mathbf{Z} \end{smallmatrix}]$, with Hessian $\mathbf{H̃}$, gradient vector $\mathbf{q̃}$ and constant $p$. Note that $p$ is not needed for the optimization itself but is required to evaluate the objective minimum $J$. The Hessian $\mathbf{H̃}$ is not constant here because of the time-varying $\mathbf{P̄}$ covariance; $\mathbf{H̃}$, $\mathbf{q̃}$ and $p$ are therefore recomputed at each discrete time $k$.
The InternalModel estimator needs the current stochastic output $\mathbf{ŷ_s}(k)$ to estimate its outputs $\mathbf{ŷ}(k)$. The method preparestate! stores this value inside the estim object; it should thus be called before evalŷ.
Init estim.x̂0 estimate with the steady-state solution if model is a LinModel.
Using u0, y0m and d0 arguments (deviation values, see setop!), the steady-state problem combined with the equality constraint $\mathbf{ŷ_0^m} = \mathbf{y_0^m}$ engenders the following system to solve:
\[\begin{bmatrix} \mathbf{I} - \mathbf{Â} \\ \mathbf{Ĉ^m} \end{bmatrix} \mathbf{x̂_0} = \begin{bmatrix} \mathbf{B̂_u u_0 + B̂_d d_0 + f̂_{op} - x̂_{op}} \\ \mathbf{y_0^m - D̂_d^m d_0} \end{bmatrix}\]
in which $\mathbf{Ĉ^m, D̂_d^m}$ are the rows of estim.Ĉ, estim.D̂d that correspond to measured outputs $\mathbf{y^m}$.
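A minimal sketch of solving this stacked system in the least-squares sense (illustrative names, not the package's internals):

```julia
using LinearAlgebra

function init_x̂0(Â, B̂u, B̂d, Ĉm, D̂dm, u0, d0, y0m, f̂op, x̂op)
    M = [I - Â; Ĉm]                                # stacked, generally non-square
    b = [B̂u*u0 + B̂d*d0 + f̂op - x̂op; y0m - D̂dm*d0]
    return M\b                                     # least-squares solution
end
```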
Based on the y0m argument and the current stochastic output estimate $\mathbf{ŷ_s}$, composed of the measured $\mathbf{ŷ_s^m} = \mathbf{y_0^m} - \mathbf{ŷ_{d0}^m}$ and unmeasured $\mathbf{ŷ_s^u = 0}$ outputs, the stochastic estimates also start at steady-state.
All these methods assume that the u0, y0m and d0 arguments are deviation vectors from their respective operating points (see setop!). The associated equations in the documentation drop the $\mathbf{0}$ in subscript to simplify the notation. Strictly speaking, the manipulated inputs, measured outputs, measured disturbances and estimated states should be denoted with $\mathbf{u_0, y_0^m, d_0}$ and $\mathbf{x̂_0}$, respectively.
Update estim.x̂0 estimate with current inputs u0, measured outputs y0m and dist. d0.
If estim.direct == false, the SteadyKalmanFilter first corrects the state estimate with the precomputed Kalman gain $\mathbf{K̂}$. Afterward, it predicts the next state with the augmented process model. The correction step is skipped if direct == true since it is already done by the user through the preparestate! function (that calls correct_estimate!). The correction and prediction step equations are provided below.
Update KalmanFilter state estim.x̂0 and estimation error covariance estim.P̂.
It implements the classical time-varying Kalman Filter based on the process model described in SteadyKalmanFilter. If estim.direct == false, it first corrects the estimate before predicting the next state. The correction step is skipped if estim.direct == true since it's already done by the user. The correction and prediction step equations are provided below, see [2] for details.
Correction Step
\[\begin{aligned}
\mathbf{K̂}(k) &= \mathbf{P̂}_{k-1}(k) \mathbf{Ĉ^m}' \big[\mathbf{Ĉ^m} \mathbf{P̂}_{k-1}(k) \mathbf{Ĉ^m}' + \mathbf{R̂}\big]^{-1} \\
\mathbf{x̂}_{k}(k) &= \mathbf{x̂}_{k-1}(k) + \mathbf{K̂}(k) \big[\mathbf{y^m}(k) - \mathbf{Ĉ^m x̂}_{k-1}(k) - \mathbf{D̂_d^m d}(k)\big] \\
\mathbf{P̂}_{k}(k) &= \big[\mathbf{I} - \mathbf{K̂}(k) \mathbf{Ĉ^m}\big] \mathbf{P̂}_{k-1}(k)
\end{aligned}\]
Prediction Step
\[\begin{aligned}
\mathbf{x̂}_{k}(k+1) &= \mathbf{Â x̂}_{k}(k) + \mathbf{B̂_u u}(k) + \mathbf{B̂_d d}(k) \\
\mathbf{P̂}_{k}(k+1) &= \mathbf{Â P̂}_{k}(k) \mathbf{Â}' + \mathbf{Q̂}
\end{aligned}\]
It implements the unscented Kalman Filter based on the generalized unscented transform[3]. See init_ukf for the definition of the constants $\mathbf{m̂, Ŝ}$ and $γ$. The superscript in e.g. $\mathbf{X̂}_{k-1}^j(k)$ refers to the vector at the $j$th column of $\mathbf{X̂}_{k-1}(k)$. The symbol $\mathbf{0}$ is a vector with zeros. The number of sigma points is $n_σ = 2 n_\mathbf{x̂} + 1$. The matrices $\sqrt{\mathbf{P̂}_{k-1}(k)}$ and $\sqrt{\mathbf{P̂}_{k}(k)}$ are the lower triangular factors of the cholesky results. The correction step is skipped if estim.direct == true since it's already done by the user.
The equations are similar to update_estimate!(::KalmanFilter) but with the substitutions $\mathbf{Ĉ^m = Ĥ^m}(k)$ and $\mathbf{Â = F̂}(k)$, the Jacobians of the augmented process model:
\[\begin{aligned}
\mathbf{F̂}(k) &= \left. \frac{∂ \mathbf{f̂}(\mathbf{x̂}, \mathbf{u}, \mathbf{d})}{∂ \mathbf{x̂}} \right|_{\mathbf{x̂} = \mathbf{x̂}_{k-1}(k)} \\
\mathbf{Ĥ}(k) &= \left. \frac{∂ \mathbf{ĥ}(\mathbf{x̂}, \mathbf{d})}{∂ \mathbf{x̂}} \right|_{\mathbf{x̂} = \mathbf{x̂}_{k-1}(k)}
\end{aligned}\]
The matrix $\mathbf{Ĥ^m}$ contains the rows of $\mathbf{Ĥ}$ that correspond to measured outputs. The function ForwardDiff.jacobian automatically computes them. The correction and prediction steps then follow update_estimate!(::KalmanFilter) with these substitutions. The correction step is skipped if estim.direct == true since it's already done by the user.
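A small example of obtaining such Jacobians with ForwardDiff.jacobian (toy functions, not the package's internals):

```julia
using ForwardDiff

f̂(x̂) = [x̂[1] + 0.1*x̂[2], x̂[2] - 0.1*sin(x̂[1])]   # toy augmented state function
ĥ(x̂) = [x̂[1]]                                     # toy output function

x̂ = [0.5, -0.2]
F̂ = ForwardDiff.jacobian(f̂, x̂)   # ∂f̂/∂x̂ at the current estimate
Ĥ = ForwardDiff.jacobian(ĥ, x̂)   # ∂ĥ/∂x̂; keep the measured rows for Ĥᵐ
```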
The optimization problem of MovingHorizonEstimator documentation is solved at each discrete time $k$. Once solved, the next estimate $\mathbf{x̂}_k(k+1)$ is computed by inserting the optimal values of $\mathbf{x̂}_k(k-N_k+1)$ and $\mathbf{Ŵ}$ in the augmented model from $j = N_k-1$ to $0$ inclusively. Afterward, if $k ≥ H_e$, the arrival covariance for the next time step $\mathbf{P̂}_{k-N_k+1}(k-N_k+2)$ is estimated using estim.covestim object.
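A minimal sketch of that re-simulation step, with hypothetical helper names:

```julia
# Rebuild x̂ₖ(k+1): start at the optimal arrival state and re-simulate the
# augmented model over the window, injecting the optimal process noises Ŵ.
function propagate_mhe(f̂, x̂arrival, Ŵ, U, D, Nk)
    x̂ = x̂arrival
    for j in 1:Nk                    # j = Nk-1 down to 0 in the text's indexing
        x̂ = f̂(x̂, U[:, j], D[:, j]) + Ŵ[:, j]
    end
    return x̂                         # = x̂ₖ(k+1)
end
```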
This estimator does not augment the state vector, thus $\mathbf{x̂ = x̂_d}$. See init_internalmodel for details.
[1] Desbiens, A., D. Hodouin & É. Plamondon. 2000, "Global predictive control: a unified control structure for decoupling setpoint tracking, feedforward compensation and disturbance rejection dynamics", IEE Proceedings - Control Theory and Applications, vol. 147, no. 4, p. 465–475, https://doi.org/10.1049/ip-cta:20000443, ISSN 1350-2379.
[3] Simon, D. 2006, "Chapter 14: The unscented Kalman filter", in "Optimal State Estimation: Kalman, H∞, and Nonlinear Approaches", John Wiley & Sons, p. 433–459, https://doi.org/10.1002/0470045345.ch14, ISBN 9780470045343.