diff --git a/previews/PR94/.documenter-siteinfo.json b/previews/PR94/.documenter-siteinfo.json index 2343cb80c..fbe476451 100644 --- a/previews/PR94/.documenter-siteinfo.json +++ b/previews/PR94/.documenter-siteinfo.json @@ -1 +1 @@ -{"documenter":{"julia_version":"1.9.4","generation_timestamp":"2024-08-18T01:44:21","documenter_version":"1.5.0"}} \ No newline at end of file +{"documenter":{"julia_version":"1.9.4","generation_timestamp":"2024-08-18T14:03:56","documenter_version":"1.5.0"}} \ No newline at end of file diff --git a/previews/PR94/func_index/index.html b/previews/PR94/func_index/index.html index 21e9deefd..0671d2fb8 100644 --- a/previews/PR94/func_index/index.html +++ b/previews/PR94/func_index/index.html @@ -1,2 +1,2 @@ -Index · ModelPredictiveControl.jl

Index

+Index · ModelPredictiveControl.jl

Index

diff --git a/previews/PR94/index.html b/previews/PR94/index.html index 3486bed34..81289d3b6 100644 --- a/previews/PR94/index.html +++ b/previews/PR94/index.html @@ -1,2 +1,2 @@ -Home · ModelPredictiveControl.jl

ModelPredictiveControl.jl

An open source model predictive control package for Julia.

The package depends on ControlSystemsBase.jl for the linear systems and JuMP.jl for solving the optimization problems.

The objective is to provide a simple, clear and modular framework to quickly design model predictive controllers (MPCs) in Julia, while preserving the flexibility for advanced real-time optimization. Modern MPCs based on closed-loop state estimators are the main focus of the package, but classical approaches that rely on internal models are also possible. The JuMP.jl interface allows the user to test different solvers easily if the performance of the default settings is not satisfactory.

The documentation is divided into two parts:

  • Manual — This section includes step-by-step guides to design predictive controllers on multiple case studies.
  • Functions — Documentation of methods and types exported by the package. The "Internals" section provides implementation details of functions that are not exported.

Manual

Functions: Public

Functions: Internals

+Home · ModelPredictiveControl.jl

ModelPredictiveControl.jl

An open source model predictive control package for Julia.

The package depends on ControlSystemsBase.jl for the linear systems and JuMP.jl for solving the optimization problems.

The objective is to provide a simple, clear and modular framework to quickly design model predictive controllers (MPCs) in Julia, while preserving the flexibility for advanced real-time optimization. Modern MPCs based on closed-loop state estimators are the main focus of the package, but classical approaches that rely on internal models are also possible. The JuMP.jl interface allows the user to test different solvers easily if the performance of the default settings is not satisfactory.

The documentation is divided into two parts:

  • Manual — This section includes step-by-step guides to design predictive controllers on multiple case studies.
  • Functions — Documentation of methods and types exported by the package. The "Internals" section provides implementation details of functions that are not exported.

Manual

Functions: Public

Functions: Internals

diff --git a/previews/PR94/internals/predictive_control/index.html b/previews/PR94/internals/predictive_control/index.html index 5115d9b56..8812d7a29 100644 --- a/previews/PR94/internals/predictive_control/index.html +++ b/previews/PR94/internals/predictive_control/index.html @@ -1,5 +1,5 @@ -Predictive Controllers · ModelPredictiveControl.jl

Functions: PredictiveController Internals

The prediction methodology of this module is mainly based on the Maciejowski textbook [1].

Controller Construction

ModelPredictiveControl.init_ΔUtoUFunction
init_ΔUtoU(model, Hp, Hc) -> S, T

Init manipulated input increments to inputs conversion matrices.

The conversion from the input increments $\mathbf{ΔU}$ to the manipulated inputs over $H_p$ is calculated by:

\[\mathbf{U} = \mathbf{S} \mathbf{ΔU} + \mathbf{T} \mathbf{u}(k-1)\]

source
ModelPredictiveControl.init_predmatFunction
init_predmat(estim, model::LinModel, Hp, Hc) -> E, G, J, K, V, ex̂, gx̂, jx̂, kx̂, vx̂

Construct the prediction matrices for LinModel model.

The model predictions are evaluated from the deviation vectors (see setop!) and:

\[\begin{aligned} +Predictive Controllers · ModelPredictiveControl.jl

Functions: PredictiveController Internals

The prediction methodology of this module is mainly based on the Maciejowski textbook [1].

Controller Construction

ModelPredictiveControl.init_ΔUtoUFunction
init_ΔUtoU(model, Hp, Hc) -> S, T

Init manipulated input increments to inputs conversion matrices.

The conversion from the input increments $\mathbf{ΔU}$ to the manipulated inputs over $H_p$ is calculated by:

\[\mathbf{U} = \mathbf{S} \mathbf{ΔU} + \mathbf{T} \mathbf{u}(k-1)\]

source
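The structure of $\mathbf{S}$ and $\mathbf{T}$ follows directly from summing increments. Below is an illustrative sketch, not the package's internal code: the function name and the convention that $\mathbf{Δu}$ is held at zero after $H_c$ are assumptions for the demo.

```julia
using LinearAlgebra

# Illustrative sketch (not the package's internals): build S and T for nu
# inputs, prediction horizon Hp and control horizon Hc, assuming Δu = 0 after Hc.
function sketch_ΔUtoU(nu, Hp, Hc)
    S = zeros(nu*Hp, nu*Hc)
    for i in 1:Hp, j in 1:min(i, Hc)
        # u(k+i-1) = u(k-1) + sum of the first min(i, Hc) increments
        S[(i-1)*nu+1:i*nu, (j-1)*nu+1:j*nu] = I(nu)
    end
    T = repeat(Matrix{Float64}(I, nu, nu), Hp, 1)  # every block row carries u(k-1)
    return S, T
end

S, T = sketch_ΔUtoU(1, 4, 2)
# Scalar case: S == [1 0; 1 1; 1 1; 1 1] and T == ones(4, 1)
```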
ModelPredictiveControl.init_predmatFunction
init_predmat(estim, model::LinModel, Hp, Hc) -> E, G, J, K, V, ex̂, gx̂, jx̂, kx̂, vx̂

Construct the prediction matrices for LinModel model.

The model predictions are evaluated from the deviation vectors (see setop!) and:

\[\begin{aligned} \mathbf{Ŷ_0} &= \mathbf{E ΔU} + \mathbf{G d_0}(k) + \mathbf{J D̂_0} + \mathbf{K x̂_0}(k) + \mathbf{V u_0}(k-1) + \mathbf{B} + \mathbf{Ŷ_s} \\ @@ -56,14 +56,14 @@ \mathbf{k_x̂} &= \mathbf{Â}^{H_p} \\ \mathbf{v_x̂} &= \mathbf{W}(H_p-1)\mathbf{B̂_u} \\ \mathbf{b_x̂} &= \mathbf{W}(H_p-1) \mathbf{\big(f̂_{op} - x̂_{op}\big)} -\end{aligned}\]

source

Return empty matrices if model is not a LinModel

source
ModelPredictiveControl.relaxUFunction
relaxU(model, nϵ, C_umin, C_umax, S) -> A_Umin, A_Umax, S̃

Augment manipulated inputs constraints with slack variable ϵ for softening.

Denoting the input increments augmented with the slack variable $\mathbf{ΔŨ} = [\begin{smallmatrix} \mathbf{ΔU} \\ ϵ \end{smallmatrix}]$, it returns the augmented conversion matrix $\mathbf{S̃}$, similar to the one described at init_ΔUtoU. It also returns the $\mathbf{A}$ matrices for the inequality constraints:

\[\begin{bmatrix} +\end{aligned}\]

source

Return empty matrices if model is not a LinModel

source
ModelPredictiveControl.relaxUFunction
relaxU(model, nϵ, C_umin, C_umax, S) -> A_Umin, A_Umax, S̃

Augment manipulated inputs constraints with slack variable ϵ for softening.

Denoting the input increments augmented with the slack variable $\mathbf{ΔŨ} = [\begin{smallmatrix} \mathbf{ΔU} \\ ϵ \end{smallmatrix}]$, it returns the augmented conversion matrix $\mathbf{S̃}$, similar to the one described at init_ΔUtoU. It also returns the $\mathbf{A}$ matrices for the inequality constraints:

\[\begin{bmatrix} \mathbf{A_{U_{min}}} \\ \mathbf{A_{U_{max}}} \end{bmatrix} \mathbf{ΔŨ} ≤ \begin{bmatrix} - \mathbf{(U_{min} - U_{op}) + T} \mathbf{u_0}(k-1) \\ + \mathbf{(U_{max} - U_{op}) - T} \mathbf{u_0}(k-1) -\end{bmatrix}\]

in which the $\mathbf{U_{min}, U_{max}}$ and $\mathbf{U_{op}}$ vectors respectively contain $\mathbf{u_{min}, u_{max}}$ and $\mathbf{u_{op}}$ repeated $H_p$ times.

source
ModelPredictiveControl.relaxΔUFunction
relaxΔU(
+\end{bmatrix}\]

in which the $\mathbf{U_{min}, U_{max}}$ and $\mathbf{U_{op}}$ vectors respectively contain $\mathbf{u_{min}, u_{max}}$ and $\mathbf{u_{op}}$ repeated $H_p$ times.

source
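The slack augmentation above can be sketched numerically. This is an illustrative toy only (the matrix names and shapes are assumptions, not the package's internals): softening $\mathbf{U ≥ U_{min}}$, i.e. $-\mathbf{S ΔU} ≤ \dots$, appends a column $-\mathbf{c_{u_{min}}}$ so that $ϵ$ relaxes the bound row by row.

```julia
# Toy sketch of constraint softening (names/shapes are assumptions, not the
# package's internals): the hard constraint -S ΔU ≤ … gains a slack column so
# that the augmented matrix acts on ΔŨ = [ΔU; ϵ].
S = [1.0 0.0; 1.0 1.0]        # 2×2 conversion matrix (scalar input, Hp = Hc = 2)
C_umin = [1.0, 1.0]           # per-row relaxation weights for the lower bound
A_Umin = [-S -C_umin]         # 2×3: last column multiplies the slack ϵ
```

Setting a weight to 0 in `C_umin` keeps the corresponding row as a hard constraint.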
ModelPredictiveControl.relaxΔUFunction
relaxΔU(
     model, nϵ, C, C_Δumin, C_Δumax, ΔUmin, ΔUmax, N_Hc
 ) -> A_ΔŨmin, A_ΔŨmax, ΔŨmin, ΔŨmax, Ñ_Hc

Augment input increments constraints with slack variable ϵ for softening.

Denoting the input increments augmented with the slack variable $\mathbf{ΔŨ} = [\begin{smallmatrix} \mathbf{ΔU} \\ ϵ \end{smallmatrix}]$, it returns the augmented input increment weights $\mathbf{Ñ}_{H_c}$ (that incorporate $C$). It also returns the augmented constraints $\mathbf{ΔŨ_{min}}$ and $\mathbf{ΔŨ_{max}}$ and the $\mathbf{A}$ matrices for the inequality constraints:

\[\begin{bmatrix} \mathbf{A_{ΔŨ_{min}}} \\ @@ -72,27 +72,27 @@ \begin{bmatrix} - \mathbf{ΔŨ_{min}} \\ + \mathbf{ΔŨ_{max}} -\end{bmatrix}\]

source
ModelPredictiveControl.relaxŶFunction
relaxŶ(::LinModel, nϵ, C_ymin, C_ymax, E) -> A_Ymin, A_Ymax, Ẽ

Augment linear output prediction constraints with slack variable ϵ for softening.

Denoting the input increments augmented with the slack variable $\mathbf{ΔŨ} = [\begin{smallmatrix} \mathbf{ΔU} \\ ϵ \end{smallmatrix}]$, it returns the $\mathbf{Ẽ}$ matrix that appears in the linear model prediction equation $\mathbf{Ŷ_0 = Ẽ ΔŨ + F}$, and the $\mathbf{A}$ matrices for the inequality constraints:

\[\begin{bmatrix} +\end{bmatrix}\]

source
ModelPredictiveControl.relaxŶFunction
relaxŶ(::LinModel, nϵ, C_ymin, C_ymax, E) -> A_Ymin, A_Ymax, Ẽ

Augment linear output prediction constraints with slack variable ϵ for softening.

Denoting the input increments augmented with the slack variable $\mathbf{ΔŨ} = [\begin{smallmatrix} \mathbf{ΔU} \\ ϵ \end{smallmatrix}]$, it returns the $\mathbf{Ẽ}$ matrix that appears in the linear model prediction equation $\mathbf{Ŷ_0 = Ẽ ΔŨ + F}$, and the $\mathbf{A}$ matrices for the inequality constraints:

\[\begin{bmatrix} \mathbf{A_{Y_{min}}} \\ \mathbf{A_{Y_{max}}} \end{bmatrix} \mathbf{ΔŨ} ≤ \begin{bmatrix} - \mathbf{(Y_{min} - Y_{op}) + F} \\ + \mathbf{(Y_{max} - Y_{op}) - F} -\end{bmatrix}\]

in which the $\mathbf{Y_{min}, Y_{max}}$ and $\mathbf{Y_{op}}$ vectors respectively contain $\mathbf{y_{min}, y_{max}}$ and $\mathbf{y_{op}}$ repeated $H_p$ times.

source

Return empty matrices if model is not a LinModel

source
ModelPredictiveControl.relaxterminalFunction
relaxterminal(::LinModel, nϵ, c_x̂min, c_x̂max, ex̂) -> A_x̂min, A_x̂max, ẽx̂

Augment terminal state constraints with slack variable ϵ for softening.

Denoting the input increments augmented with the slack variable $\mathbf{ΔŨ} = [\begin{smallmatrix} \mathbf{ΔU} \\ ϵ \end{smallmatrix}]$, it returns the $\mathbf{ẽ_{x̂}}$ matrix that appears in the terminal state equation $\mathbf{x̂_0}(k + H_p) = \mathbf{ẽ_x̂ ΔŨ + f_x̂}$, and the $\mathbf{A}$ matrices for the inequality constraints:

\[\begin{bmatrix} +\end{bmatrix}\]

in which the $\mathbf{Y_{min}, Y_{max}}$ and $\mathbf{Y_{op}}$ vectors respectively contain $\mathbf{y_{min}, y_{max}}$ and $\mathbf{y_{op}}$ repeated $H_p$ times.

source

Return empty matrices if model is not a LinModel

source
ModelPredictiveControl.relaxterminalFunction
relaxterminal(::LinModel, nϵ, c_x̂min, c_x̂max, ex̂) -> A_x̂min, A_x̂max, ẽx̂

Augment terminal state constraints with slack variable ϵ for softening.

Denoting the input increments augmented with the slack variable $\mathbf{ΔŨ} = [\begin{smallmatrix} \mathbf{ΔU} \\ ϵ \end{smallmatrix}]$, it returns the $\mathbf{ẽ_{x̂}}$ matrix that appears in the terminal state equation $\mathbf{x̂_0}(k + H_p) = \mathbf{ẽ_x̂ ΔŨ + f_x̂}$, and the $\mathbf{A}$ matrices for the inequality constraints:

\[\begin{bmatrix} \mathbf{A_{x̂_{min}}} \\ \mathbf{A_{x̂_{max}}} \end{bmatrix} \mathbf{ΔŨ} ≤ \begin{bmatrix} - \mathbf{(x̂_{min} - x̂_{op}) + f_x̂} \\ + \mathbf{(x̂_{max} - x̂_{op}) - f_x̂} -\end{bmatrix}\]

source

Return empty matrices if model is not a LinModel

source
ModelPredictiveControl.init_quadprogFunction
init_quadprog(model::LinModel, Ẽ, S, M_Hp, N_Hc, L_Hp) -> H̃

Init the quadratic programming Hessian for MPC.

The matrix appears in the general quadratic form:

\[ J = \min_{\mathbf{ΔŨ}} \frac{1}{2}\mathbf{(ΔŨ)'H̃(ΔŨ)} + \mathbf{q̃'(ΔŨ)} + p \]

The Hessian matrix is constant if the model and weights are linear and time invariant (LTI):

\[ \mathbf{H̃} = 2 ( \mathbf{Ẽ}'\mathbf{M}_{H_p}\mathbf{Ẽ} + \mathbf{Ñ}_{H_c} - + \mathbf{S̃}'\mathbf{L}_{H_p}\mathbf{S̃} )\]

The vector $\mathbf{q̃}$ and scalar $p$ need recalculation at each control period $k$, see initpred!. $p$ does not affect the position of the minimum, so it is not needed for the optimization itself, but it is required to evaluate the minimal $J$ value.

source

Return empty matrices if model is not a LinModel.

source
ModelPredictiveControl.init_stochpredFunction
init_stochpred(estim::InternalModel, Hp) -> Ks, Ps

Init the stochastic prediction matrices for InternalModel.

Ks and Ps matrices are defined as:

\[ \mathbf{Ŷ_s} = \mathbf{K_s x̂_s}(k) + \mathbf{P_s ŷ_s}(k)\]

The current stochastic outputs $\mathbf{ŷ_s}(k)$ comprise the measured outputs $\mathbf{ŷ_s^m}(k) = \mathbf{y^m}(k) - \mathbf{ŷ_d^m}(k)$ and the unmeasured ones $\mathbf{ŷ_s^u}(k) = \mathbf{0}$. See [2].

source

Return empty matrices if estim is not an InternalModel.

source
ModelPredictiveControl.init_matconstraint_mpcFunction
init_matconstraint_mpc(model::LinModel,
+\end{bmatrix}\]

source

Return empty matrices if model is not a LinModel

source
ModelPredictiveControl.init_quadprogFunction
init_quadprog(model::LinModel, Ẽ, S, M_Hp, N_Hc, L_Hp) -> H̃

Init the quadratic programming Hessian for MPC.

The matrix appears in the general quadratic form:

\[ J = \min_{\mathbf{ΔŨ}} \frac{1}{2}\mathbf{(ΔŨ)'H̃(ΔŨ)} + \mathbf{q̃'(ΔŨ)} + p \]

The Hessian matrix is constant if the model and weights are linear and time invariant (LTI):

\[ \mathbf{H̃} = 2 ( \mathbf{Ẽ}'\mathbf{M}_{H_p}\mathbf{Ẽ} + \mathbf{Ñ}_{H_c} + + \mathbf{S̃}'\mathbf{L}_{H_p}\mathbf{S̃} )\]

The vector $\mathbf{q̃}$ and scalar $p$ need recalculation at each control period $k$, see initpred!. $p$ does not affect the position of the minimum, so it is not needed for the optimization itself, but it is required to evaluate the minimal $J$ value.

source

Return empty matrices if model is not a LinModel.

source
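The two facts above can be checked with toy numbers. The matrices below are arbitrary stand-ins, not values produced by init_quadprog: they only verify that the Hessian formula yields a symmetric matrix and that the constant $p$ plays no role in locating the minimizer of $J(\mathbf{ΔŨ}) = \frac{1}{2}\mathbf{ΔŨ}'\mathbf{H̃}\mathbf{ΔŨ} + \mathbf{q̃}'\mathbf{ΔŨ} + p$.

```julia
using LinearAlgebra

# Toy stand-ins (not init_quadprog outputs): small 2×2 weights and matrices.
Ẽ = [1.0 0.0; 0.5 1.0]
S̃ = [1.0 0.0; 1.0 1.0]
M_Hp = Diagonal([2.0, 2.0]); Ñ_Hc = Diagonal([0.1, 0.1]); L_Hp = Diagonal([0.5, 0.5])
H̃ = 2*(Ẽ'*M_Hp*Ẽ + Ñ_Hc + S̃'*L_Hp*S̃)   # Hessian formula from the docstring
q̃ = [1.0, -2.0]
ΔŨ = -(H̃ \ q̃)   # unconstrained minimizer: ∇J = H̃ ΔŨ + q̃ = 0, whatever p is
```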
ModelPredictiveControl.init_stochpredFunction
init_stochpred(estim::InternalModel, Hp) -> Ks, Ps

Init the stochastic prediction matrices for InternalModel.

Ks and Ps matrices are defined as:

\[ \mathbf{Ŷ_s} = \mathbf{K_s x̂_s}(k) + \mathbf{P_s ŷ_s}(k)\]

The current stochastic outputs $\mathbf{ŷ_s}(k)$ comprise the measured outputs $\mathbf{ŷ_s^m}(k) = \mathbf{y^m}(k) - \mathbf{ŷ_d^m}(k)$ and the unmeasured ones $\mathbf{ŷ_s^u}(k) = \mathbf{0}$. See [2].

source

Return empty matrices if estim is not an InternalModel.

source
ModelPredictiveControl.init_matconstraint_mpcFunction
init_matconstraint_mpc(model::LinModel,
     i_Umin, i_Umax, i_ΔŨmin, i_ΔŨmax, i_Ymin, i_Ymax, i_x̂min, i_x̂max, args...
 ) -> i_b, i_g, A

Init i_b, i_g and A matrices for the linear and nonlinear inequality constraints.

The linear and nonlinear inequality constraints are respectively defined as:

\[\begin{aligned} \mathbf{A ΔŨ } &≤ \mathbf{b} \\ \mathbf{g(ΔŨ)} &≤ \mathbf{0} -\end{aligned}\]

i_b is a BitVector including the indices of $\mathbf{b}$ that are finite numbers. i_g is a similar vector but for the indices of $\mathbf{g}$ (empty if model is a LinModel). The method also returns the $\mathbf{A}$ matrix if args is provided. In such a case, args needs to contain all the inequality constraint matrices: A_Umin, A_Umax, A_ΔŨmin, A_ΔŨmax, A_Ymin, A_Ymax, A_x̂min, A_x̂max.

source

Init i_b, A without outputs and terminal constraints if model is not a LinModel.

source

Update Quadratic Optimization

ModelPredictiveControl.initpred!Method
initpred!(mpc::PredictiveController, model::LinModel, d, D̂, R̂y, R̂u) -> nothing

Init the linear model prediction matrices F, q̃ and p, and the current estimated output.

See init_predmat and init_quadprog for the definition of the matrices. They are computed with these equations using in-place operations:

\[\begin{aligned} +\end{aligned}\]

i_b is a BitVector including the indices of $\mathbf{b}$ that are finite numbers. i_g is a similar vector but for the indices of $\mathbf{g}$ (empty if model is a LinModel). The method also returns the $\mathbf{A}$ matrix if args is provided. In such a case, args needs to contain all the inequality constraint matrices: A_Umin, A_Umax, A_ΔŨmin, A_ΔŨmax, A_Ymin, A_Ymax, A_x̂min, A_x̂max.

source

Init i_b, A without outputs and terminal constraints if model is not a LinModel.

source
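The i_b filtering idea is simple to demonstrate: only rows of $\mathbf{b}$ that are finite encode actual linear constraints, so $\mathbf{±∞}$ rows can be dropped from the problem passed to the solver.

```julia
# Sketch of the i_b idea: keep only the rows of b with finite bounds.
b = [1.0, Inf, -3.0, Inf]
i_b = .!isinf.(b)        # BitVector: true where the bound is finite
b_finite = b[i_b]        # rows actually sent to the solver
```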

Update Quadratic Optimization

ModelPredictiveControl.initpred!Method
initpred!(mpc::PredictiveController, model::LinModel, d, D̂, R̂y, R̂u) -> nothing

Init the linear model prediction matrices F, q̃ and p, and the current estimated output.

See init_predmat and init_quadprog for the definition of the matrices. They are computed with these equations using in-place operations:

\[\begin{aligned} \mathbf{F} &= \mathbf{G d_0}(k) + \mathbf{J D̂_0} + \mathbf{K x̂_0}(k) + \mathbf{V u_0}(k-1) + \mathbf{B} + \mathbf{Ŷ_s} \\ \mathbf{C_y} &= \mathbf{F} - (\mathbf{R̂_y - Y_{op}}) \\ @@ -101,7 +101,7 @@ + (\mathbf{L}_{H_p} \mathbf{S̃})' \mathbf{C_u}] \\ p &= \mathbf{C_y}' \mathbf{M}_{H_p} \mathbf{C_y} + \mathbf{C_u}' \mathbf{L}_{H_p} \mathbf{C_u} -\end{aligned}\]

source
ModelPredictiveControl.linconstraint!Method
linconstraint!(mpc::PredictiveController, model::LinModel)

Set b vector for the linear model inequality constraints ($\mathbf{A ΔŨ ≤ b}$).

Also init $\mathbf{f_x̂} = \mathbf{g_x̂ d}(k) + \mathbf{j_x̂ D̂} + \mathbf{k_x̂ x̂_0}(k) + \mathbf{v_x̂ u}(k-1) + \mathbf{b_x̂}$ vector for the terminal constraints, see init_predmat.

source

Solve Optimization Problem

ModelPredictiveControl.optim_objective!Function
optim_objective!(mpc::PredictiveController) -> ΔŨ

Optimize the objective function of mpc PredictiveController and return the solution ΔŨ.

If supported by mpc.optim, it warm-starts the solver at:

\[\mathbf{ΔŨ} = +\end{aligned}\]

source
ModelPredictiveControl.linconstraint!Method
linconstraint!(mpc::PredictiveController, model::LinModel)

Set b vector for the linear model inequality constraints ($\mathbf{A ΔŨ ≤ b}$).

Also init $\mathbf{f_x̂} = \mathbf{g_x̂ d}(k) + \mathbf{j_x̂ D̂} + \mathbf{k_x̂ x̂_0}(k) + \mathbf{v_x̂ u}(k-1) + \mathbf{b_x̂}$ vector for the terminal constraints, see init_predmat.

source

Solve Optimization Problem

ModelPredictiveControl.optim_objective!Function
optim_objective!(mpc::PredictiveController) -> ΔŨ

Optimize the objective function of mpc PredictiveController and return the solution ΔŨ.

If supported by mpc.optim, it warm-starts the solver at:

\[\mathbf{ΔŨ} = \begin{bmatrix} \mathbf{Δu}_{k-1}(k+0) \\ \mathbf{Δu}_{k-1}(k+1) \\ @@ -109,4 +109,4 @@ \mathbf{Δu}_{k-1}(k+H_c-2) \\ \mathbf{0} \\ ϵ_{k-1} -\end{bmatrix}\]

where $\mathbf{Δu}_{k-1}(k+j)$ is the input increment for time $k+j$ computed at the last control period $k-1$. It then calls JuMP.optimize!(mpc.optim) and extracts the solution. A failed optimization prints an @error log in the REPL and returns the warm-start value.

source
optim_objective!(mpc::ExplicitMPC) -> ΔŨ

Analytically solve the optimization problem for ExplicitMPC.

The solution is $\mathbf{ΔŨ = - H̃^{-1} q̃}$, see init_quadprog.

source
+\end{bmatrix}\]

where $\mathbf{Δu}_{k-1}(k+j)$ is the input increment for time $k+j$ computed at the last control period $k-1$. It then calls JuMP.optimize!(mpc.optim) and extracts the solution. A failed optimization prints an @error log in the REPL and returns the warm-start value.

source
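The warm-start vector above amounts to shifting last period's solution one step forward and padding with zeros. A minimal sketch (scalar input, ϵ handling omitted; the variable names are illustrative):

```julia
# Shift last period's increments by one step and pad with zeros:
# the block for Δu_{k-1}(k-1) is dropped, a zero block is appended at the end.
nu, Hc = 1, 3
ΔU_prev = [0.4, 0.2, 0.1]                  # Δu_{k-1}(k-1+j), j = 0, …, Hc-1
ΔU_warm = [ΔU_prev[nu+1:end]; zeros(nu)]   # warm start for period k
```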
optim_objective!(mpc::ExplicitMPC) -> ΔŨ

Analytically solve the optimization problem for ExplicitMPC.

The solution is $\mathbf{ΔŨ = - H̃^{-1} q̃}$, see init_quadprog.

source
  • 1Maciejowski, J. 2000, "Predictive control : with constraints", 1st ed., Prentice Hall, ISBN 978-0201398236.
  • 2Desbiens, A., D. Hodouin & É. Plamondon. 2000, "Global predictive control: a unified control structure for decoupling setpoint tracking, feedforward compensation and disturbance rejection dynamics", IEE Proceedings - Control Theory and Applications, vol. 147, no 4, https://doi.org/10.1049/ip-cta:20000443, p. 465–475, ISSN 1350-2379.
diff --git a/previews/PR94/internals/sim_model/index.html b/previews/PR94/internals/sim_model/index.html index ebcb13ef3..a285a8980 100644 --- a/previews/PR94/internals/sim_model/index.html +++ b/previews/PR94/internals/sim_model/index.html @@ -1,2 +1,2 @@ -Plant Models · ModelPredictiveControl.jl

Functions: SimModel Internals

State-Space Functions

Steady-State Calculation

ModelPredictiveControl.steadystate!Function
steadystate!(model::LinModel, u0, d0)

Set model.x0 to u0 and d0 steady-state if model is a LinModel.

Following setop! notation, the method evaluates the equilibrium from:

\[ \mathbf{x_0} = \mathbf{(I - A)^{-1}(B_u u_0 + B_d d_0 + f_{op} - x_{op})}\]

with constant manipulated inputs $\mathbf{u_0 = u - u_{op}}$ and measured disturbances $\mathbf{d_0 = d - d_{op}}$. The Moore-Penrose pseudo-inverse computes $\mathbf{(I - A)^{-1}}$ to support integrating models (their integrator states will be 0).

source

Do nothing if model is a NonLinModel.

source
+Plant Models · ModelPredictiveControl.jl

Functions: SimModel Internals

State-Space Functions

Steady-State Calculation

ModelPredictiveControl.steadystate!Function
steadystate!(model::LinModel, u0, d0)

Set model.x0 to u0 and d0 steady-state if model is a LinModel.

Following setop! notation, the method evaluates the equilibrium from:

\[ \mathbf{x_0} = \mathbf{(I - A)^{-1}(B_u u_0 + B_d d_0 + f_{op} - x_{op})}\]

with constant manipulated inputs $\mathbf{u_0 = u - u_{op}}$ and measured disturbances $\mathbf{d_0 = d - d_{op}}$. The Moore-Penrose pseudo-inverse computes $\mathbf{(I - A)^{-1}}$ to support integrating models (their integrator states will be 0).

source

Do nothing if model is a NonLinModel.

source
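The equilibrium formula above can be evaluated with toy matrices (not LinModel fields; operating-point terms are omitted for brevity). The second state below is a pure integrator, so $\mathbf{I - A}$ is singular and the pseudo-inverse leaves that state at 0, as noted in the docstring.

```julia
using LinearAlgebra

# Toy steady-state computation: state 2 is a pure integrator.
A  = [0.5 0.0; 0.0 1.0]
Bu = reshape([1.0, 0.0], 2, 1)
u0 = [2.0]
x0 = pinv(I - A) * (Bu*u0)   # pseudo-inverse handles the singular I - A
# x0 ≈ [4.0, 0.0]: the integrator state stays at 0
```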
diff --git a/previews/PR94/internals/state_estim/index.html b/previews/PR94/internals/state_estim/index.html index bd2b80dff..73124a2f3 100644 --- a/previews/PR94/internals/state_estim/index.html +++ b/previews/PR94/internals/state_estim/index.html @@ -2,28 +2,28 @@ State Estimators · ModelPredictiveControl.jl

Functions: StateEstimator Internals

Augmented Model

ModelPredictiveControl.f̂!Function
f̂!(x̂next0, û0, estim::StateEstimator, model::SimModel, x̂0, u0, d0) -> nothing

Mutating state function $\mathbf{f̂}$ of the augmented model.

By introducing an augmented state vector $\mathbf{x̂_0}$ like in augment_model, the function returns the next state of the augmented model, defined as:

\[\begin{aligned} \mathbf{x̂_0}(k+1) &= \mathbf{f̂}\Big(\mathbf{x̂_0}(k), \mathbf{u_0}(k), \mathbf{d_0}(k)\Big) \\ \mathbf{ŷ_0}(k) &= \mathbf{ĥ}\Big(\mathbf{x̂_0}(k), \mathbf{d_0}(k)\Big) -\end{aligned}\]

where $\mathbf{x̂_0}(k+1)$ is stored in the x̂next0 argument. The method mutates x̂next0 and û0 in place; the latter stores the input vector of the augmented model $\mathbf{u_0 + ŷ_{s_u}}$.

source
f̂!(x̂next0, _ , estim::StateEstimator, model::LinModel, x̂0, u0, d0) -> nothing

Use the augmented model matrices if model is a LinModel.

source
f̂!(x̂next0, _ , estim::InternalModel, model::NonLinModel, x̂0, u0, d0)

State function $\mathbf{f̂}$ of InternalModel for NonLinModel.

It calls model.f!(x̂next0, x̂0, u0, d0) since this estimator does not augment the states.

source
ModelPredictiveControl.ĥ!Function
ĥ!(ŷ0, estim::StateEstimator, model::SimModel, x̂0, d0) -> nothing

Mutating output function $\mathbf{ĥ}$ of the augmented model, see f̂!.

source
ĥ!(ŷ0, estim::StateEstimator, model::LinModel, x̂0, d0) -> nothing

Use the augmented model matrices if model is a LinModel.

source
ĥ!(ŷ0, estim::InternalModel, model::NonLinModel, x̂0, d0)

Output function $\mathbf{ĥ}$ of InternalModel, it calls model.h!.

source

Estimator Construction

ModelPredictiveControl.init_estimstochFunction
init_estimstoch(model, i_ym, nint_u, nint_ym) -> As, Cs_u, Cs_y, nxs, nint_u, nint_ym

Init stochastic model matrices from integrator specifications for state estimation.

The arguments nint_u and nint_ym specify how many integrators are added to each manipulated input and measured output. The function returns the state-space matrices As, Cs_u and Cs_y of the stochastic model:

\[\begin{aligned} +\end{aligned}\]

where $\mathbf{x̂_0}(k+1)$ is stored in the x̂next0 argument. The method mutates x̂next0 and û0 in place; the latter stores the input vector of the augmented model $\mathbf{u_0 + ŷ_{s_u}}$.

source
f̂!(x̂next0, _ , estim::StateEstimator, model::LinModel, x̂0, u0, d0) -> nothing

Use the augmented model matrices if model is a LinModel.

source
f̂!(x̂next0, _ , estim::InternalModel, model::NonLinModel, x̂0, u0, d0)

State function $\mathbf{f̂}$ of InternalModel for NonLinModel.

It calls model.f!(x̂next0, x̂0, u0, d0) since this estimator does not augment the states.

source
ModelPredictiveControl.ĥ!Function
ĥ!(ŷ0, estim::StateEstimator, model::SimModel, x̂0, d0) -> nothing

Mutating output function $\mathbf{ĥ}$ of the augmented model, see f̂!.

source
ĥ!(ŷ0, estim::StateEstimator, model::LinModel, x̂0, d0) -> nothing

Use the augmented model matrices if model is a LinModel.

source
ĥ!(ŷ0, estim::InternalModel, model::NonLinModel, x̂0, d0)

Output function $\mathbf{ĥ}$ of InternalModel, it calls model.h!.

source

Estimator Construction

ModelPredictiveControl.init_estimstochFunction
init_estimstoch(model, i_ym, nint_u, nint_ym) -> As, Cs_u, Cs_y, nxs, nint_u, nint_ym

Init stochastic model matrices from integrator specifications for state estimation.

The arguments nint_u and nint_ym specify how many integrators are added to each manipulated input and measured output. The function returns the state-space matrices As, Cs_u and Cs_y of the stochastic model:

\[\begin{aligned} \mathbf{x_{s}}(k+1) &= \mathbf{A_s x_s}(k) + \mathbf{B_s e}(k) \\ \mathbf{y_{s_{u}}}(k) &= \mathbf{C_{s_{u}} x_s}(k) \\ \mathbf{y_{s_{ym}}}(k) &= \mathbf{C_{s_{ym}} x_s}(k) -\end{aligned}\]

where $\mathbf{e}(k)$ is an unknown zero-mean white noise and $\mathbf{A_s} = \mathrm{diag}(\mathbf{A_{s_{u}}, A_{s_{ym}}})$. The estimation does not use $\mathbf{B_s}$; it is thus ignored. The function init_integrators builds the state-space matrices.

source
ModelPredictiveControl.init_integratorsFunction
init_integrators(nint, ny, varname::String) -> A, C, nint

Calc A, C state-space matrices from integrator specifications nint.

This function is used to initialize the stochastic part of the augmented model for the design of state estimators. The vector nint provides how many integrators (in series) should be incorporated for each output. The argument should have ny elements, except for nint=0, which is an alias for no integrator at all. The specific case of one integrator per output results in A = I and C = I. The estimation does not use the B matrix; it is thus ignored. This function is called twice:

  1. for the unmeasured disturbances at manipulated inputs $\mathbf{u}$
  2. for the unmeasured disturbances at measured outputs $\mathbf{y^m}$
source
ModelPredictiveControl.augment_modelFunction
augment_model(
+\end{aligned}\]

where $\mathbf{e}(k)$ is an unknown zero-mean white noise and $\mathbf{A_s} = \mathrm{diag}(\mathbf{A_{s_{u}}, A_{s_{ym}}})$. The estimation does not use $\mathbf{B_s}$; it is thus ignored. The function init_integrators builds the state-space matrices.

source
ModelPredictiveControl.init_integratorsFunction
init_integrators(nint, ny, varname::String) -> A, C, nint

Calc A, C state-space matrices from integrator specifications nint.

This function is used to initialize the stochastic part of the augmented model for the design of state estimators. The vector nint provides how many integrators (in series) should be incorporated for each output. The argument should have ny elements, except for nint=0, which is an alias for no integrator at all. The specific case of one integrator per output results in A = I and C = I. The estimation does not use the B matrix; it is thus ignored. This function is called twice:

  1. for the unmeasured disturbances at manipulated inputs $\mathbf{u}$
  2. for the unmeasured disturbances at measured outputs $\mathbf{y^m}$
source
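The special case stated in the docstring, one integrator per output, is easy to write down explicitly (a toy sketch, not the function's actual return values), and it shows why an output integrator models a step disturbance: with $\mathbf{A = I}$ the stochastic state holds any value indefinitely.

```julia
using LinearAlgebra

# One integrator per output (the A = I, C = I case from the docstring).
ny = 2
nint = fill(1, ny)
nx = sum(nint)
A = Matrix(1.0I, nx, nx)    # each integrator state holds its value: step model
C = Matrix(1.0I, ny, nx)    # each output reads its own integrator
```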
ModelPredictiveControl.augment_modelFunction
augment_model(
     model::LinModel, As, Cs_u, Cs_y; verify_obsv=true
 ) -> Â, B̂u, Ĉ, B̂d, D̂d, x̂op, f̂op

Augment LinModel state-space matrices with stochastic ones As, Cs_u, Cs_y.

If $\mathbf{x_0}$ are the model.x0 states and $\mathbf{x_s}$ the states defined at init_estimstoch, we define an augmented state vector $\mathbf{x̂} = [ \begin{smallmatrix} \mathbf{x_0} \\ \mathbf{x_s} \end{smallmatrix} ]$. The method returns the augmented matrices Â, B̂u, Ĉ, B̂d and D̂d:

\[\begin{aligned} \mathbf{x̂_0}(k+1) &= \mathbf{Â x̂_0}(k) + \mathbf{B̂_u u_0}(k) + \mathbf{B̂_d d_0}(k) \\ \mathbf{ŷ_0}(k) &= \mathbf{Ĉ x̂_0}(k) + \mathbf{D̂_d d_0}(k) -\end{aligned}\]

An error is thrown if the augmented model is not observable and verify_obsv == true. The augmented operating points x̂op and f̂op are simply $\mathbf{x_{op}}$ and $\mathbf{f_{op}}$ vectors appended with zeros (see setop!).

source
augment_model(
+\end{aligned}\]

An error is thrown if the augmented model is not observable and verify_obsv == true. The augmented operating points x̂op and f̂op are simply $\mathbf{x_{op}}$ and $\mathbf{f_{op}}$ vectors appended with zeros (see setop!).

source
augment_model(
     model::SimModel, As, Cs_u, Cs_y; verify_obsv=false
-) -> Â, B̂u, Ĉ, B̂d, D̂d, x̂op, f̂op

Return empty matrices, and x̂op & f̂op vectors, if model is not a LinModel.

source
ModelPredictiveControl.init_ukfFunction
init_ukf(model, nx̂, α, β, κ) -> nσ, γ, m̂, Ŝ

Compute the UnscentedKalmanFilter constants from $α, β$ and $κ$.

With $n_\mathbf{x̂}$ elements in the state vector $\mathbf{x̂}$ and $n_σ = 2 n_\mathbf{x̂} + 1$ sigma points, the scaling factor applied to the standard deviation matrices $\sqrt{\mathbf{P̂}}$ is:

\[ γ = α \sqrt{ n_\mathbf{x̂} + κ }\]

The weight vector $(n_σ × 1)$ for the mean and the weight matrix $(n_σ × n_σ)$ for the covariance are respectively:

\[\begin{aligned} +) -> Â, B̂u, Ĉ, B̂d, D̂d, x̂op, f̂op

Return empty matrices, and x̂op & f̂op vectors, if model is not a LinModel.

source
ModelPredictiveControl.init_ukfFunction
init_ukf(model, nx̂, α, β, κ) -> nσ, γ, m̂, Ŝ

Compute the UnscentedKalmanFilter constants from $α, β$ and $κ$.

With $n_\mathbf{x̂}$ elements in the state vector $\mathbf{x̂}$ and $n_σ = 2 n_\mathbf{x̂} + 1$ sigma points, the scaling factor applied to the standard deviation matrices $\sqrt{\mathbf{P̂}}$ is:

\[ γ = α \sqrt{ n_\mathbf{x̂} + κ }\]

The weight vector $(n_σ × 1)$ for the mean and the weight matrix $(n_σ × n_σ)$ for the covariance are respectively:

\[\begin{aligned} \mathbf{m̂} &= \begin{bmatrix} 1 - \tfrac{n_\mathbf{x̂}}{γ^2} & \tfrac{1}{2γ^2} & \tfrac{1}{2γ^2} & \cdots & \tfrac{1}{2γ^2} \end{bmatrix}' \\ \mathbf{Ŝ} &= \mathrm{diag}\big( 2 - α^2 + β - \tfrac{n_\mathbf{x̂}}{γ^2} \:,\; \tfrac{1}{2γ^2} \:,\; \tfrac{1}{2γ^2} \:,\; \cdots \:,\; \tfrac{1}{2γ^2} \big) -\end{aligned}\]

See update_estimate!(::UnscentedKalmanFilter) for other details.

source
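The formulas above can be evaluated directly. The parameter values below are common UKF defaults chosen for illustration, not values mandated by the package; note that the mean weights sum to 1 (up to round-off).

```julia
using LinearAlgebra

# UKF constants from the docstring formulas, with illustrative α, β, κ.
nx̂ = 3
α, β, κ = 1e-3, 2.0, 0.0
nσ = 2nx̂ + 1
γ  = α*sqrt(nx̂ + κ)
m̂  = [1 - nx̂/γ^2; fill(1/(2γ^2), 2nx̂)]                       # mean weights
Ŝ  = Diagonal([2 - α^2 + β - nx̂/γ^2; fill(1/(2γ^2), 2nx̂)])   # covariance weights
```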
ModelPredictiveControl.init_internalmodelFunction
init_internalmodel(As, Bs, Cs, Ds) -> Âs, B̂s

Calc stochastic model update matrices Âs and B̂s for InternalModel estimator.

As, Bs, Cs and Ds are the stochastic model matrices :

\[\begin{aligned} \mathbf{x_s}(k+1) &= \mathbf{A_s x_s}(k) + \mathbf{B_s e}(k) \\ \mathbf{y_s}(k) &= \mathbf{C_s x_s}(k) + \mathbf{D_s e}(k) \end{aligned}\]

where $\mathbf{e}(k)$ is a conceptual and unknown zero-mean white noise. Its optimal estimate is $\mathbf{ê=0}$, the expected value. Thus, the Âs and B̂s matrices that optimally update the stochastic estimate $\mathbf{x̂_s}$ are:

\[\begin{aligned} \mathbf{x̂_s}(k+1) &= \mathbf{(A_s - B_s D_s^{-1} C_s) x̂_s}(k) + \mathbf{(B_s D_s^{-1}) ŷ_s}(k) \\ &= \mathbf{Â_s x̂_s}(k) + \mathbf{B̂_s ŷ_s}(k) -\end{aligned}\]

with the current stochastic output estimate $\mathbf{ŷ_s}(k)$, composed of the measured $\mathbf{ŷ_s^m}(k) = \mathbf{y^m}(k) - \mathbf{ŷ_d^m}(k)$ and unmeasured $\mathbf{ŷ_s^u = 0}$ outputs. See [1].

source
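The update formulas above can be checked with a toy stochastic model. Taking $\mathbf{A_s = B_s = C_s = D_s = I}$ (an arbitrary invertible choice for illustration, not a value the package prescribes), the update becomes deadbeat: $\mathbf{x̂_s}(k+1) = \mathbf{ŷ_s}(k)$.

```julia
using LinearAlgebra

# Toy stochastic model with all matrices equal to I (Ds must be invertible).
As = Bs = Cs = Ds = Matrix(1.0I, 2, 2)
Âs = As - Bs*(Ds\Cs)   # = zeros: the past stochastic state is discarded
B̂s = Bs/Ds             # = I: the estimate tracks ŷs directly (deadbeat)
```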
ModelPredictiveControl.init_predmat_mheFunction
init_predmat_mhe(
+\end{aligned}\]

with the current stochastic output estimate $\mathbf{ŷ_s}(k)$, composed of the measured $\mathbf{ŷ_s^m}(k) = \mathbf{y^m}(k) - \mathbf{ŷ_d^m}(k)$ and unmeasured $\mathbf{ŷ_s^u = 0}$ outputs. See [1].

source
ModelPredictiveControl.init_predmat_mheFunction
init_predmat_mhe(
     model::LinModel, He, i_ym, Â, B̂u, Ĉm, B̂d, D̂dm, x̂op, f̂op
 ) -> E, G, J, B, ex̄, Ex̂, Gx̂, Jx̂, Bx̂

Construct the MHE prediction matrices for LinModel model.

We first introduce the deviation vector of the estimated state at arrival $\mathbf{x̂_0}(k-N_k+1) = \mathbf{x̂}_k(k-N_k+1) - \mathbf{x̂_{op}}$ (see setop!), and the vector $\mathbf{Z} = [\begin{smallmatrix} \mathbf{x̂_0}(k-N_k+1) \\ \mathbf{Ŵ} \end{smallmatrix}]$ with the decision variables. The estimated sensor noises from time $k-N_k+1$ to $k$ are computed by:

\[\begin{aligned} \mathbf{V̂} = \mathbf{Y_0^m - Ŷ_0^m} &= \mathbf{E Z + G U_0 + J D_0 + Y_0^m + B} \end{aligned}\]

All these matrices are truncated when $N_k < H_e$ (at the beginning).

source

Return empty matrices if model is not a LinModel, except for ex̄.

source
ModelPredictiveControl.relaxarrivalFunction
relaxarrival(
     model::SimModel, nϵ, c_x̂min, c_x̂max, x̂min, x̂max, ex̄
 ) -> A_x̃min, A_x̃max, x̃min, x̃max, ẽx̄

Augment arrival state constraints with slack variable ϵ for softening the MHE.

Denoting the MHE decision variable augmented with the slack variable $\mathbf{Z̃} = [\begin{smallmatrix} ϵ \\ \mathbf{Z} \end{smallmatrix}]$, it returns the $\mathbf{ẽ_x̄}$ matrix that appears in the estimation error at arrival equation $\mathbf{x̄} = \mathbf{ẽ_x̄ Z̃ + f_x̄}$. It also returns the augmented constraints $\mathbf{x̃_{min}}$ and $\mathbf{x̃_{max}}$, and the $\mathbf{A}$ matrices for the inequality constraints:

\[\begin{bmatrix} \mathbf{A_{x̃_{min}}} \\ \mathbf{A_{x̃_{max}}} \end{bmatrix} \mathbf{Z̃} ≤ \begin{bmatrix} - \mathbf{(x̃_{min} - x̃_{op})} \\ + \mathbf{(x̃_{max} - x̃_{op})} \end{bmatrix}\]

in which $\mathbf{x̃_{min}} = [\begin{smallmatrix} 0 \\ \mathbf{x̂_{min}} \end{smallmatrix}]$, $\mathbf{x̃_{max}} = [\begin{smallmatrix} ∞ \\ \mathbf{x̂_{max}} \end{smallmatrix}]$ and $\mathbf{x̃_{op}} = [\begin{smallmatrix} 0 \\ \mathbf{x̂_{op}} \end{smallmatrix}]$

source
ModelPredictiveControl.relaxX̂Function
relaxX̂(model::SimModel, nϵ, C_x̂min, C_x̂max, Ex̂) -> A_X̂min, A_X̂max, Ẽx̂

Augment estimated state constraints with slack variable ϵ for softening the MHE.

Denoting the MHE decision variable augmented with the slack variable $\mathbf{Z̃} = [\begin{smallmatrix} ϵ \\ \mathbf{Z} \end{smallmatrix}]$, it returns the $\mathbf{Ẽ_x̂}$ matrix that appears in estimated states equation $\mathbf{X̂} = \mathbf{Ẽ_x̂ Z̃ + F_x̂}$. It also returns the $\mathbf{A}$ matrices for the inequality constraints:

\[\begin{bmatrix} \mathbf{A_{X̂_{min}}} \\ \mathbf{A_{X̂_{max}}} \end{bmatrix} \mathbf{Z̃} ≤ \begin{bmatrix} - \mathbf{(X̂_{min} - X̂_{op}) + F_x̂} \\ + \mathbf{(X̂_{max} - X̂_{op}) - F_x̂} \end{bmatrix}\]

in which the $\mathbf{X̂_{min}, X̂_{max}}$ and $\mathbf{X̂_{op}}$ vectors respectively contain $\mathbf{x̂_{min}, x̂_{max}}$ and $\mathbf{x̂_{op}}$ repeated $H_e$ times.

source

Return empty matrices if model is not a LinModel

source
ModelPredictiveControl.relaxŴFunction
relaxŴ(model::SimModel, nϵ, C_ŵmin, C_ŵmax, nx̂) -> A_Ŵmin, A_Ŵmax

Augment estimated process noise constraints with slack variable ϵ for softening the MHE.

Denoting the MHE decision variable augmented with the slack variable $\mathbf{Z̃} = [\begin{smallmatrix} ϵ \\ \mathbf{Z} \end{smallmatrix}]$, it returns the $\mathbf{A}$ matrices for the inequality constraints:

\[\begin{bmatrix} \mathbf{A_{Ŵ_{min}}} \\ \mathbf{A_{Ŵ_{max}}} \end{bmatrix} \mathbf{Z̃} ≤ \begin{bmatrix} - \mathbf{Ŵ_{min}} \\ + \mathbf{Ŵ_{max}} \end{bmatrix}\]

source
ModelPredictiveControl.relaxV̂Function
relaxV̂(model::SimModel, nϵ, C_v̂min, C_v̂max, E) -> A_V̂min, A_V̂max, Ẽ

Augment estimated sensor noise constraints with slack variable ϵ for softening the MHE.

Denoting the MHE decision variable augmented with the slack variable $\mathbf{Z̃} = [\begin{smallmatrix} ϵ \\ \mathbf{Z} \end{smallmatrix}]$, it returns the $\mathbf{Ẽ}$ matrix that appears in estimated sensor noise equation $\mathbf{V̂} = \mathbf{Ẽ Z̃ + F}$. It also returns the $\mathbf{A}$ matrices for the inequality constraints:

\[\begin{bmatrix} \mathbf{A_{V̂_{min}}} \\ \mathbf{A_{V̂_{max}}} \end{bmatrix} \mathbf{Z̃} ≤ \begin{bmatrix} - \mathbf{V̂_{min} + F} \\ + \mathbf{V̂_{max} - F} \end{bmatrix}\]

source

Return empty matrices if model is not a LinModel

source
ModelPredictiveControl.init_matconstraint_mheFunction
init_matconstraint_mhe(model::LinModel, 
     i_x̃min, i_x̃max, i_X̂min, i_X̂max, i_Ŵmin, i_Ŵmax, i_V̂min, i_V̂max, args...
 ) -> i_b, i_g, A

Init i_b, i_g and A matrices for the MHE linear inequality constraints.

The linear and nonlinear inequality constraints are respectively defined as:

\[\begin{aligned} \mathbf{A Z̃ } &≤ \mathbf{b} \\ \mathbf{g(Z̃)} &≤ \mathbf{0} \end{aligned}\]

i_b is a BitVector including the indices of $\mathbf{b}$ that are finite numbers. i_g is a similar vector but for the indices of $\mathbf{g}$ (empty if model is a LinModel). The method also returns the $\mathbf{A}$ matrix if args is provided. In such a case, args needs to contain all the inequality constraint matrices: A_x̃min, A_x̃max, A_X̂min, A_X̂max, A_Ŵmin, A_Ŵmax, A_V̂min, A_V̂max.

source

Init i_b, A without state and sensor noise constraints if model is not a LinModel.

source
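The i_b bookkeeping (keeping only the inequality rows whose bound is a finite number) can be sketched in plain Python. This is an illustrative sketch of the idea only, not the package's internal representation:

```python
import math

def finite_bound_mask(b):
    """Return the i_b-style mask: True where the bound b[i] is finite."""
    return [math.isfinite(v) for v in b]

def keep_finite_rows(A, b):
    """Keep only the inequality rows whose bound is finite (±Inf rows are inactive)."""
    mask = finite_bound_mask(b)
    A_kept = [row for row, keep in zip(A, mask) if keep]
    b_kept = [v for v, keep in zip(b, mask) if keep]
    return A_kept, b_kept

# Hypothetical 3-row constraint set: the +Inf row is dropped.
A = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
b = [2.0, math.inf, -1.0]
A_kept, b_kept = keep_finite_rows(A, b)
print(b_kept)  # → [2.0, -1.0]
```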

Update Quadratic Optimization

ModelPredictiveControl.initpred!Method
initpred!(estim::MovingHorizonEstimator, model::LinModel) -> nothing

Init quadratic optimization matrices F, fx̄, H̃, q̃, p for MovingHorizonEstimator.

See init_predmat_mhe for the definition of the vectors $\mathbf{F, f_x̄}$. It also inits estim.optim objective function, expressed as the quadratic general form:

\[ J = \min_{\mathbf{Z̃}} \frac{1}{2}\mathbf{Z̃' H̃ Z̃} + \mathbf{q̃' Z̃} + p \]

in which $\mathbf{Z̃} = [\begin{smallmatrix} ϵ \\ \mathbf{Z} \end{smallmatrix}]$. Note that $p$ is useless at optimization but required to evaluate the objective minimum $J$. The Hessian $\mathbf{H̃}$ matrix of the quadratic general form is not constant here because of the time-varying $\mathbf{P̄}$ covariance. The computed variables are:

\[\begin{aligned} \mathbf{F} &= \mathbf{G U_0} + \mathbf{J D_0} + \mathbf{Y_0^m} + \mathbf{B} \\ \mathbf{f_x̄} &= \mathbf{x̂_0^†}(k-N_k+1) \\ \mathbf{F_Z̃} &= [\begin{smallmatrix}\mathbf{f_x̄} \\ \mathbf{F} \end{smallmatrix}] \\ \mathbf{H̃} &= 2(\mathbf{Ẽ_Z̃}' \mathbf{M}_{N_k} \mathbf{Ẽ_Z̃} + \mathbf{Ñ}_{N_k}) \\ \mathbf{q̃} &= 2(\mathbf{M}_{N_k} \mathbf{Ẽ_Z̃})' \mathbf{F_Z̃} \\ p &= \mathbf{F_Z̃}' \mathbf{M}_{N_k} \mathbf{F_Z̃} \end{aligned}\]

source
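A scalar sketch makes the role of $p$ concrete: it shifts the objective value but never moves the minimizer. The values below are illustrative, not taken from the package:

```python
def quad_obj(H, q, p, z):
    """Scalar quadratic general form: J(z) = 0.5*H*z² + q*z + p."""
    return 0.5 * H * z * z + q * z + p

H, q, p = 2.0, -4.0, 3.0
z_star = -q / H              # unconstrained minimizer, independent of p
J_min = quad_obj(H, q, p, z_star)
print(z_star, J_min)  # → 2.0 -1.0
```

Dropping $p$ would leave `z_star` unchanged but report `J_min = -4.0` instead of the true objective minimum.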
ModelPredictiveControl.linconstraint!Method
linconstraint!(estim::MovingHorizonEstimator, model::LinModel)

Set b vector for the linear model inequality constraints ($\mathbf{A Z̃ ≤ b}$) of MHE.

Also init $\mathbf{F_x̂ = G_x̂ U_0 + J_x̂ D_0 + B_x̂}$ vector for the state constraints, see init_predmat_mhe.

source

Evaluate Estimated Output

ModelPredictiveControl.evalŷFunction
evalŷ(estim::InternalModel, d) -> ŷ

Get InternalModel estimated output ŷ.

InternalModel estimator needs the current stochastic output $\mathbf{ŷ_s}(k)$ to estimate its outputs $\mathbf{ŷ}(k)$. The method preparestate! stores this value inside the estim object; it should thus be called before evalŷ.

source
evalŷ(estim::StateEstimator, d) -> ŷ

Evaluate StateEstimator output from measured disturbance d and estim.x̂0.

source

Remove Operating Points

ModelPredictiveControl.remove_op!Function
remove_op!(estim::StateEstimator, ym, d, u=nothing) -> y0m, d0, u0

Remove operating pts on measured outputs ym, disturbances d and inputs u (if provided).

If u is provided, also store current inputs without operating points u0 in estim.lastu0. This field is used for PredictiveController computations.

source

Init Estimate

ModelPredictiveControl.init_estimate!Function
init_estimate!(estim::StateEstimator, model::LinModel, y0m, d0, u0)

Init estim.x̂0 estimate with the steady-state solution if model is a LinModel.

Using u0, y0m and d0 arguments (deviation values, see setop!), the steady-state problem combined with the equality constraint $\mathbf{ŷ_0^m} = \mathbf{y_0^m}$ yields the following system to solve:

\[\begin{bmatrix} \mathbf{I} - \mathbf{Â} \\ \mathbf{Ĉ^m} \end{bmatrix} \mathbf{x̂_0} = \begin{bmatrix} \mathbf{B̂_u u_0 + B̂_d d_0 + f̂_{op} - x̂_{op}} \\ \mathbf{y_0^m - D̂_d^m d_0} \end{bmatrix}\]

in which $\mathbf{Ĉ^m, D̂_d^m}$ are the rows of estim.Ĉ, estim.D̂d that correspond to measured outputs $\mathbf{y^m}$.

source
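The stacked system above is generally overdetermined, so a least-squares solution is natural. Here is a hedged one-state, one-measured-output sketch of that solve (scalar arithmetic standing in for the package's linear algebra; the values are illustrative):

```python
def steady_state_estimate(A, Cm, r_state, r_output):
    """Least-squares solution of the stacked system
         [1 - A ] x̂ = [r_state ]
         [  Cm  ]     [r_output]
    for a one-state, one-measured-output model (illustrative only)."""
    a1, a2 = 1.0 - A, Cm
    # Normal equation of a 2x1 least-squares problem: x̂ = (aᵀr)/(aᵀa)
    return (a1 * r_state + a2 * r_output) / (a1 * a1 + a2 * a2)

# Consistent case: with A=0.5 and Cm=1, the state x̂=2 satisfies both
# rows exactly when r_state=1.0 and r_output=2.0.
x_hat = steady_state_estimate(0.5, 1.0, 1.0, 2.0)
print(x_hat)  # → 2.0
```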
init_estimate!(estim::StateEstimator, model::SimModel, _ , _ , _ )

Leave estim.x̂0 estimate unchanged if model is not a LinModel.

source
init_estimate!(estim::InternalModel, model::LinModel, y0m, d0, u0)

Init estim.x̂0/x̂d/x̂s estimate at steady-state for InternalModel.

The deterministic estimates estim.x̂d start at steady-state using u0 and d0 arguments:

\[ \mathbf{x̂_d} = \mathbf{(I - A)^{-1} (B_u u_0 + B_d d_0 + f_{op} - x_{op})}\]

Based on y0m argument and current stochastic outputs estimation $\mathbf{ŷ_s}$, composed of the measured $\mathbf{ŷ_s^m} = \mathbf{y_0^m} - \mathbf{ŷ_{d0}^m}$ and unmeasured $\mathbf{ŷ_s^u = 0}$ outputs, the stochastic estimates also start at steady-state:

\[ \mathbf{x̂_s} = \mathbf{(I - Â_s)^{-1} B̂_s ŷ_s}\]

This estimator does not augment the state vector, thus $\mathbf{x̂ = x̂_d}$. See init_internalmodel for details.

source

Correct Estimate

ModelPredictiveControl.correct_estimate!Function
correct_estimate!(estim::SteadyKalmanFilter, y0m, d0)

Correct estim.x̂0 with measured outputs y0m and disturbances d0 for current time step.

It computes the corrected state estimate $\mathbf{x̂}_{k}(k)$. See the docstring of update_estimate!(::SteadyKalmanFilter, ::Any, ::Any) for the equations.

source
correct_estimate!(estim::KalmanFilter, y0m, d0)

Correct estim.x̂0 and estim.P̂ using the time-varying KalmanFilter.

It computes the corrected state estimate $\mathbf{x̂}_{k}(k)$ and estimation error covariance $\mathbf{P̂}_{k}(k)$.

source
correct_estimate!(estim::UnscentedKalmanFilter, y0m, d0)

Do the same but for the UnscentedKalmanFilter.

source
correct_estimate!(estim::ExtendedKalmanFilter, y0m, d0)

Do the same but for the ExtendedKalmanFilter.

source
correct_estimate!(estim::Luenberger, y0m, d0, _ )

Identical to correct_estimate!(::SteadyKalmanFilter) but using Luenberger.

source
correct_estimate!(estim::MovingHorizonEstimator, y0m, d0)

Do the same but for MovingHorizonEstimator objects.

source
correct_estimate!(estim::InternalModel, y0m, d0)

Compute the current stochastic output estimation ŷs for InternalModel.

source

Update Estimate

Info

All these methods assume that the u0, y0m and d0 arguments are deviation vectors from their respective operating points (see setop!). The associated equations in the documentation drop the $\mathbf{0}$ in subscript to simplify the notation. Strictly speaking, the manipulated inputs, measured outputs, measured disturbances and estimated states should be denoted with $\mathbf{u_0, y_0^m, d_0}$ and $\mathbf{x̂_0}$, respectively.

ModelPredictiveControl.update_estimate!Function
update_estimate!(estim::SteadyKalmanFilter, y0m, d0, u0)

Update estim.x̂0 estimate with current inputs u0, measured outputs y0m and dist. d0.

If estim.direct == false, the SteadyKalmanFilter first corrects the state estimate with the precomputed Kalman gain $\mathbf{K̂}$. Afterward, it predicts the next state with the augmented process model. The correction step is skipped if direct == true since it is already done by the user through the preparestate! function (that calls correct_estimate!). The correction and prediction step equations are provided below.

Correction Step

\[\mathbf{x̂}_k(k) = \mathbf{x̂}_{k-1}(k) + \mathbf{K̂}[\mathbf{y^m}(k) - \mathbf{Ĉ^m x̂}_{k-1}(k) - \mathbf{D̂_d^m d}(k) ]\]

Prediction Step

\[\mathbf{x̂}_{k}(k+1) = \mathbf{Â x̂}_{k}(k) + \mathbf{B̂_u u}(k) + \mathbf{B̂_d d}(k) \]

source
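For a scalar model, the correction and prediction steps above reduce to two one-line updates. This sketch is a language-agnostic illustration of the equations, not the package's implementation, and all numeric values are made up:

```python
def skf_correct(x_hat, K, ym, Cm, Dm, d):
    """Correction: x̂_k(k) = x̂_{k-1}(k) + K̂[y^m - Ĉ^m x̂ - D̂_d^m d] (scalar)."""
    return x_hat + K * (ym - Cm * x_hat - Dm * d)

def skf_predict(x_hat, A, Bu, Bd, u, d):
    """Prediction: x̂_k(k+1) = Â x̂ + B̂_u u + B̂_d d (scalar)."""
    return A * x_hat + Bu * u + Bd * d

x = skf_correct(1.0, 0.5, 2.0, 1.0, 0.0, 0.0)  # innovation = 1.0, so x = 1.5
x = skf_predict(x, 0.9, 1.0, 0.0, 0.2, 0.0)    # 0.9*1.5 + 0.2 ≈ 1.55
print(x)
```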
update_estimate!(estim::KalmanFilter, y0m, d0, u0)

Update KalmanFilter state estim.x̂0 and estimation error covariance estim.P̂.

It implements the classical time-varying Kalman Filter based on the process model described in SteadyKalmanFilter. If estim.direct == false, it first corrects the estimate before predicting the next state. The correction step is skipped if estim.direct == true since it's already done by the user. The correction and prediction step equations are provided below, see [2] for details.

Correction Step

\[\begin{aligned} \mathbf{M̂}(k) &= \mathbf{Ĉ^m P̂}_{k-1}(k)\mathbf{Ĉ^m}' + \mathbf{R̂} \\ \mathbf{K̂}(k) &= \mathbf{P̂}_{k-1}(k)\mathbf{Ĉ^m}'\mathbf{M̂^{-1}}(k) \\ \mathbf{ŷ^m}(k) &= \mathbf{Ĉ^m x̂}_{k-1}(k) + \mathbf{D̂_d^m d}(k) \\ \mathbf{x̂}_{k}(k) &= \mathbf{x̂}_{k-1}(k) + \mathbf{K̂}(k)[\mathbf{y^m}(k) - \mathbf{ŷ^m}(k)] \\ \mathbf{P̂}_{k}(k) &= [\mathbf{I} - \mathbf{K̂}(k)\mathbf{Ĉ^m}]\mathbf{P̂}_{k-1}(k) \end{aligned}\]

Prediction Step

\[\begin{aligned} \mathbf{x̂}_{k}(k+1) &= \mathbf{Â x̂}_{k}(k) + \mathbf{B̂_u u}(k) + \mathbf{B̂_d d}(k) \\ \mathbf{P̂}_{k}(k+1) &= \mathbf{Â P̂}_{k}(k)\mathbf{Â}' + \mathbf{Q̂} \end{aligned}\]

source
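The covariance recursion is easiest to see in the scalar case. The following sketch runs three correct-then-predict steps with $\mathbf{d} = \mathbf{u} = 0$; it is a numeric illustration of the equations only (values are arbitrary), not the package's code:

```python
def kf_step(x, P, A, Cm, Q, R, ym):
    """One scalar time-varying Kalman filter step (d = u = 0): correct,
    then predict, mirroring the correction/prediction equations above."""
    # Correction
    M = Cm * P * Cm + R          # innovation covariance
    K = P * Cm / M               # Kalman gain
    x = x + K * (ym - Cm * x)    # corrected state
    P = (1.0 - K * Cm) * P       # corrected covariance
    # Prediction
    x = A * x
    P = A * P * A + Q
    return x, P

x, P = 0.0, 1.0
for ym in (1.0, 1.0, 1.0):
    x, P = kf_step(x, P, A=1.0, Cm=1.0, Q=0.01, R=1.0, ym=ym)
print(x, P)  # x approaches the measured 1.0 while P shrinks from 1.0
```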
update_estimate!(estim::UnscentedKalmanFilter, y0m, d0, u0)

Update UnscentedKalmanFilter state estim.x̂0 and covariance estimate estim.P̂.

It implements the unscented Kalman Filter based on the generalized unscented transform[3]. See init_ukf for the definition of the constants $\mathbf{m̂, Ŝ}$ and $γ$. The superscript in e.g. $\mathbf{X̂}_{k-1}^j(k)$ refers to the vector at the $j$th column of $\mathbf{X̂}_{k-1}(k)$. The symbol $\mathbf{0}$ is a vector of zeros. The number of sigma points is $n_σ = 2 n_\mathbf{x̂} + 1$. The matrices $\sqrt{\mathbf{P̂}_{k-1}(k)}$ and $\sqrt{\mathbf{P̂}_{k}(k)}$ are the lower-triangular factors of the Cholesky decompositions. The correction and prediction step equations are provided below. The correction step is skipped if estim.direct == true since it's already done by the user.

Correction Step

\[\begin{aligned} \mathbf{X̂}_{k-1}(k) &= \bigg[\begin{matrix} \mathbf{x̂}_{k-1}(k) & \mathbf{x̂}_{k-1}(k) & \cdots & \mathbf{x̂}_{k-1}(k) \end{matrix}\bigg] + \bigg[\begin{matrix} \mathbf{0} & γ \sqrt{\mathbf{P̂}_{k-1}(k)} & -γ \sqrt{\mathbf{P̂}_{k-1}(k)} \end{matrix}\bigg] \\ \mathbf{Ŷ^m}(k) &= \bigg[\begin{matrix} \mathbf{ĥ^m}\Big( \mathbf{X̂}_{k-1}^{1}(k) \Big) & \mathbf{ĥ^m}\Big( \mathbf{X̂}_{k-1}^{2}(k) \Big) & \cdots & \mathbf{ĥ^m}\Big( \mathbf{X̂}_{k-1}^{n_σ}(k) \Big) \end{matrix}\bigg] \\ \mathbf{ŷ^m}(k) &= \mathbf{Ŷ^m}(k) \mathbf{m̂} \end{aligned}\]

Prediction Step

\[\begin{aligned} \mathbf{x̂}_{k}(k+1) &= \mathbf{X̂}_{k}(k+1)\mathbf{m̂} \\ \mathbf{X̄}_k(k+1) &= \begin{bmatrix} \mathbf{X̂}_{k}^{1}(k+1) - \mathbf{x̂}_{k}(k+1) & \mathbf{X̂}_{k}^{2}(k+1) - \mathbf{x̂}_{k}(k+1) & \cdots &\, \mathbf{X̂}_{k}^{n_σ}(k+1) - \mathbf{x̂}_{k}(k+1) \end{bmatrix} \\ \mathbf{P̂}_k(k+1) &= \mathbf{X̄}_k(k+1) \mathbf{Ŝ} \mathbf{X̄}_k'(k+1) + \mathbf{Q̂} \end{aligned}\]

source
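The mean weights $\mathbf{m̂}$ from init_ukf always sum to one, which is easy to check numerically. A hedged sketch follows; the relation $γ = α\sqrt{n_\mathbf{x̂} + κ}$ is the common unscented-transform convention and is assumed here, not quoted from the package:

```python
import math

def ukf_mean_weights(nx, alpha=1e-3, beta=2.0, kappa=0.0):
    """Mean weights m̂ per the init_ukf formula:
    m̂ = [1 - nx/γ², 1/(2γ²), ..., 1/(2γ²)] with n_sigma = 2*nx + 1.
    γ = α*sqrt(nx + κ) is an assumed convention."""
    gamma = alpha * math.sqrt(nx + kappa)
    n_sigma = 2 * nx + 1
    w0 = 1.0 - nx / gamma**2
    wi = 1.0 / (2.0 * gamma**2)
    return [w0] + [wi] * (n_sigma - 1)

w = ukf_mean_weights(3, alpha=0.5, kappa=1.0)
print(len(w), sum(w))  # 7 sigma points; the weights sum to 1
```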
update_estimate!(estim::ExtendedKalmanFilter, y0m, d0, u0)

Update ExtendedKalmanFilter state estim.x̂0 and covariance estim.P̂.

The equations are similar to update_estimate!(::KalmanFilter) but with the substitutions $\mathbf{Ĉ^m = Ĥ^m}(k)$ and $\mathbf{Â = F̂}(k)$, the Jacobians of the augmented process model:

\[\begin{aligned} \mathbf{Ĥ}(k) &= \left. \frac{∂\mathbf{ĥ}(\mathbf{x̂}, \mathbf{d})}{∂\mathbf{x̂}} \right|_{\mathbf{x̂ = x̂}_{k-1}(k),\, \mathbf{d = d}(k)} \\ \mathbf{F̂}(k) &= \left. \frac{∂\mathbf{f̂}(\mathbf{x̂}, \mathbf{u}, \mathbf{d})}{∂\mathbf{x̂}} \right|_{\mathbf{x̂ = x̂}_{k}(k), \, \mathbf{u = u}(k),\, \mathbf{d = d}(k)} \end{aligned}\]

The matrix $\mathbf{Ĥ^m}$ is the rows of $\mathbf{Ĥ}$ that are measured outputs. The function ForwardDiff.jacobian automatically computes them. The correction and prediction step equations are provided below. The correction step is skipped if estim.direct == true since it's already done by the user.

Correction Step

\[\begin{aligned} \mathbf{M̂}(k) &= \mathbf{Ĥ^m}(k)\mathbf{P̂}_{k-1}(k)\mathbf{Ĥ^m}'(k) + \mathbf{R̂} \\ \mathbf{K̂}(k) &= \mathbf{P̂}_{k-1}(k)\mathbf{Ĥ^m}'(k)\mathbf{M̂^{-1}}(k) \\ \mathbf{ŷ^m}(k) &= \mathbf{ĥ^m}\Big( \mathbf{x̂}_{k-1}(k), \mathbf{d}(k) \Big) \\ \mathbf{x̂}_{k}(k) &= \mathbf{x̂}_{k-1}(k) + \mathbf{K̂}(k)[\mathbf{y^m}(k) - \mathbf{ŷ^m}(k)] \\ \mathbf{P̂}_{k}(k) &= [\mathbf{I} - \mathbf{K̂}(k)\mathbf{Ĥ^m}(k)]\mathbf{P̂}_{k-1}(k) \end{aligned}\]

Prediction Step

\[\begin{aligned} \mathbf{x̂}_{k}(k+1) &= \mathbf{f̂}\Big( \mathbf{x̂}_{k}(k), \mathbf{u}(k), \mathbf{d}(k) \Big) \\ \mathbf{P̂}_{k}(k+1) &= \mathbf{F̂}(k)\mathbf{P̂}_{k}(k)\mathbf{F̂}'(k) + \mathbf{Q̂} \end{aligned}\]

source
update_estimate!(estim::Luenberger, y0m, d0, u0)

Same as update_estimate!(::SteadyKalmanFilter) but using Luenberger.

source
update_estimate!(estim::MovingHorizonEstimator, y0m, d0, u0)

Update MovingHorizonEstimator state estim.x̂0.

The optimization problem of MovingHorizonEstimator documentation is solved at each discrete time $k$. Once solved, the next estimate $\mathbf{x̂}_k(k+1)$ is computed by inserting the optimal values of $\mathbf{x̂}_k(k-N_k+1)$ and $\mathbf{Ŵ}$ in the augmented model from $j = N_k-1$ to $0$ inclusively. Afterward, if $k ≥ H_e$, the arrival covariance for the next time step $\mathbf{P̂}_{k-N_k+1}(k-N_k+2)$ is estimated using estim.covestim object.

source
update_estimate!(estim::InternalModel, _ , d0, u0)

Update estim.x̂0/x̂d/x̂s with current inputs u0, measured outputs y0m and dist. d0.

The InternalModel updates the deterministic x̂d and stochastic x̂s estimates with:

\[\begin{aligned} \mathbf{x̂_d}(k+1) &= \mathbf{f}\Big( \mathbf{x̂_d}(k), \mathbf{u}(k), \mathbf{d}(k) \Big) \\ \mathbf{x̂_s}(k+1) &= \mathbf{Â_s x̂_s}(k) + \mathbf{B̂_s ŷ_s}(k) \end{aligned}\]

This estimator does not augment the state vector, thus $\mathbf{x̂ = x̂_d}$. See init_internalmodel for details.

source
  • 1Desbiens, A., D. Hodouin & É. Plamondon. 2000, "Global predictive control : a unified control structure for decoupling setpoint tracking, feedforward compensation and disturbance rejection dynamics", IEE Proceedings - Control Theory and Applications, vol. 147, no 4, https://doi.org/10.1049/ip-cta:20000443, p. 465–475, ISSN 1350-2379.
  • 2"Kalman Filter", Wikipedia: The Free Encyclopedia, https://en.wikipedia.org/wiki/Kalman_filter, Accessed 2024-08-08.
  • 3Simon, D. 2006, "Chapter 14: The unscented Kalman filter" in "Optimal State Estimation: Kalman, H∞, and Nonlinear Approaches", John Wiley & Sons, p. 433–459, https://doi.org/10.1002/0470045345.ch14, ISBN9780470045343.

Manual: Installation

To install the ModelPredictiveControl package, run this command in the Julia REPL:

using Pkg; Pkg.activate(); Pkg.add("ModelPredictiveControl")

Doing so will install the package in the default Julia environment, that is, accessible anywhere. To facilitate sharing of code and reproducibility of results, it is recommended to install packages in a project environment. To generate a new project named MPCproject with this package in the current working directory, write this in the REPL:

using Pkg; Pkg.generate("MPCproject"); Pkg.activate("."); Pkg.add("ModelPredictiveControl")

Note that the construction of linear models typically requires the ss or tf functions; it is thus advised to load the package with:

using ModelPredictiveControl, ControlSystemsBase
u, y, d = model.uop, model(), mpc_d.estim.model.dop
initstate!(mpc_d, u, y, d)
u_data, y_data, ry_data = test_mpc_d(mpc_d, model)
plot_data(t_data, u_data, y_data, ry_data)

plot4_LinMPC

Note that measured disturbances are assumed constant in the future by default, but custom $\mathbf{D̂}$ predictions are possible. The same applies to the setpoint predictions $\mathbf{R̂_y}$.

  • 1As an alternative to a state observer, we could have used an InternalModel structure with mpc = LinMPC(InternalModel(model), Hp=15, Hc=2, Mwt=[1, 1], Nwt=[0.1, 0.1]). It was tested on the example of this page and gave similar results.
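The InternalModel alternative mentioned in the footnote can be sketched as follows. Note that the 2×2 transfer-function plant below is a hypothetical stand-in for the model of this page, with made-up gains and time constants:

```julia
using ModelPredictiveControl, ControlSystemsBase

# hypothetical 2×2 plant standing in for the model used on this page
G = [tf(1.9, [18, 1])  tf(1.9, [18, 1]);
     tf(-0.74, [8, 1]) tf(0.74, [8, 1])]
model = LinModel(G, 2.0)  # sampled with Ts = 2.0 s
# classical internal-model structure instead of a closed-loop state estimator
mpc = LinMPC(InternalModel(model), Hp=15, Hc=2, Mwt=[1, 1], Nwt=[0.1, 0.1])
```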
diff --git a/previews/PR94/manual/nonlinmpc/index.html b/previews/PR94/manual/nonlinmpc/index.html index 44b272485..e0ada7d69 100644 --- a/previews/PR94/manual/nonlinmpc/index.html +++ b/previews/PR94/manual/nonlinmpc/index.html @@ -147,4 +147,4 @@ res_slin = sim_adapt!(mpc3, model, N, ry, plant, x_0, x̂_0) plot(res_slin)

plot10_NonLinMPC

and the 10° step disturbance:

x_0 = [π, 0]; x̂_0 = [π, 0, 0]; y_step = [10]
 res_slin = sim_adapt!(mpc3, model, N, ry, plant, x_0, x̂_0, y_step)
plot(res_slin)

plot11_NonLinMPC

The computations of the successive linearization MPC are about 125 times faster than the nonlinear MPC (0.00012 s versus 0.015 s per time step, on average), an impressive gain for similar closed-loop performance!

  • 1Arnström, D., Bemporad, A., and Axehill, D. (2022). A dual active-set solver for embedded quadratic programming using recursive LDLᵀ updates. IEEE Trans. Autom. Contr., 67(8). https://doi.org/10.1109/TAC.2022.3176430.
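The successive-linearization strategy timed above boils down to re-linearizing the nonlinear model at every time step and feeding the result to setmodel!. One loop iteration can be sketched as follows; this is an outline, not the page's exact sim_adapt! implementation, and the names nonlinmodel, mpc3, and ry are assumed from this page, while the linearize keyword names should be checked against the package documentation:

```julia
using ModelPredictiveControl

# One step of a successive-linearization MPC loop (sketch):
# correct the estimate, re-linearize at it, adapt the controller, then move.
function sl_step!(mpc3, nonlinmodel, u, y, ry)
    x̂ = preparestate!(mpc3, y)                        # corrected state estimate
    linmodel = linearize(nonlinmodel; x=x̂[1:2], u=u)  # linearize at the estimate
    setmodel!(mpc3, linmodel)                          # adapt the controller model
    u = moveinput!(mpc3, ry)                           # solve the QP for the new input
    updatestate!(mpc3, u, y)                           # propagate the estimator
    return u
end
```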
[SVG diffs omitted: the manual plot files previews/PR94/manual/plot1–plot11 for the LinMPC and NonLinMPC pages were regenerated; only path coordinates and element ids changed.]

diff --git a/previews/PR94/public/generic_func/index.html b/previews/PR94/public/generic_func/index.html

\[\begin{alignat*}{3}
\mathbf{X̂_{min} - C_{x̂_{min}}} ϵ ≤&&\ \mathbf{X̂} &≤ \mathbf{X̂_{max} + C_{x̂_{max}}} ϵ \\
\mathbf{Ŵ_{min} - C_{ŵ_{min}}} ϵ ≤&&\ \mathbf{Ŵ} &≤ \mathbf{Ŵ_{max} + C_{ŵ_{max}}} ϵ \\
\mathbf{V̂_{min} - C_{v̂_{min}}} ϵ ≤&&\ \mathbf{V̂} &≤ \mathbf{V̂_{max} + C_{v̂_{max}}} ϵ
\end{alignat*}\]

For this, use the same keyword arguments as above but with a first capital letter:

  • X̂min / X̂max / C_x̂min / C_x̂max : $\mathbf{X̂}$ constraints (nx̂*(He+1),).
  • Ŵmin / Ŵmax / C_ŵmin / C_ŵmax : $\mathbf{Ŵ}$ constraints (nx̂*He,).
  • V̂min / V̂max / C_v̂min / C_v̂max : $\mathbf{V̂}$ constraints (nym*He,).
source
setconstraint!(mpc::PredictiveController; <keyword arguments>) -> mpc

Set the constraint parameters of the PredictiveController mpc.

The predictive controllers support both soft and hard constraints, defined by:

\[\begin{alignat*}{3}
\mathbf{u_{min} - c_{u_{min}}} ϵ ≤&&\ \mathbf{u}(k+j) &≤ \mathbf{u_{max} + c_{u_{max}}} ϵ &&\qquad j = 0, 1 ,..., H_p - 1 \\
\mathbf{Δu_{min} - c_{Δu_{min}}} ϵ ≤&&\ \mathbf{Δu}(k+j) &≤ \mathbf{Δu_{max} + c_{Δu_{max}}} ϵ &&\qquad j = 0, 1 ,..., H_c - 1 \\
\mathbf{y_{min} - c_{y_{min}}} ϵ ≤&&\ \mathbf{ŷ}(k+j) &≤ \mathbf{y_{max} + c_{y_{max}}} ϵ &&\qquad j = 1, 2 ,..., H_p
\end{alignat*}\]

\[\begin{alignat*}{3}
\mathbf{U_{min} - C_{u_{min}}} ϵ ≤&&\ \mathbf{U} &≤ \mathbf{U_{max} + C_{u_{max}}} ϵ \\
\mathbf{ΔU_{min} - C_{Δu_{min}}} ϵ ≤&&\ \mathbf{ΔU} &≤ \mathbf{ΔU_{max} + C_{Δu_{max}}} ϵ \\
\mathbf{Y_{min} - C_{y_{min}}} ϵ ≤&&\ \mathbf{Ŷ} &≤ \mathbf{Y_{max} + C_{y_{max}}} ϵ
\end{alignat*}\]

For this, use the same keyword arguments as above but with a first capital letter:

  • Umin / Umax / C_umin / C_umax : $\mathbf{U}$ constraints (nu*Hp,).
  • ΔUmin / ΔUmax / C_Δumin / C_Δumax : $\mathbf{ΔU}$ constraints (nu*Hc,).
  • Ymin / Ymax / C_ymin / C_ymax : $\mathbf{Ŷ}$ constraints (ny*Hp,).
source
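For instance, on a SISO controller, hard input and move bounds plus an output upper bound can be set with the lowercase keywords described above (a sketch; the specific numbers are illustrative):

```julia
using ModelPredictiveControl, ControlSystemsBase

mpc = LinMPC(LinModel(tf(2, [10, 1]), 5.0), Hp=15, Hc=2)
# hard bounds on u and Δu, plus an output upper bound
mpc = setconstraint!(mpc, umin=[-10.0], umax=[10.0], Δumax=[2.0], ymax=[50.0])
```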

Evaluate Output y

ModelPredictiveControl.evaloutputFunction
evaloutput(model::SimModel, d=[]) -> y

Evaluate SimModel outputs y from model.x0 states and measured disturbances d.

It returns model output at the current time step $\mathbf{y}(k)$. Calling a SimModel object calls this evaloutput method.

Examples

julia> model = setop!(LinModel(tf(2, [10, 1]), 5.0), yop=[20]);
 
 julia> y = evaloutput(model)
 1-element Vector{Float64}:
 20.0
source
evaloutput(estim::StateEstimator, d=[]) -> ŷ

Evaluate StateEstimator outputs from estim.x̂0 states and disturbances d.

It returns estim output at the current time step $\mathbf{ŷ}(k)$. Calling a StateEstimator object calls this evaloutput method.

Examples

julia> kf = SteadyKalmanFilter(setop!(LinModel(tf(2, [10, 1]), 5), yop=[20]));
 
 julia> ŷ = evaloutput(kf)
 1-element Vector{Float64}:
 20.0
source

Prepare State x

ModelPredictiveControl.preparestate!Function
preparestate!(model::SimModel, _ , _ ) -> x

Do nothing for SimModel and return the current model state $\mathbf{x}(k)$.

source
preparestate!(estim::StateEstimator, ym, d=estim.model.dop) -> x̂

Prepare estim.x̂0 estimate with meas. outputs ym and dist. d for the current time step.

This function should be called at the beginning of each discrete time step. Its behavior depends on whether estim is a StateEstimator in the current/filter (1.) or delayed/predictor (2.) form:

  1. If estim.direct is true, it removes the operating points with remove_op!, calls correct_estimate!, and returns the corrected state estimate $\mathbf{x̂}_k(k)$.
  2. Else, it does nothing and returns the current best estimate $\mathbf{x̂}_{k-1}(k)$.

Examples

julia> estim2 = SteadyKalmanFilter(LinModel(ss(0.1, 0.5, 1, 0, 4)), nint_ym=0, direct=true);
 
 julia> x̂ = round.(preparestate!(estim2, [1]), digits=3)
 1-element Vector{Float64}:
@@ -55,25 +55,25 @@
 
 julia> x̂ = preparestate!(estim1, [1])
 1-element Vector{Float64}:
 0.0
source
preparestate!(mpc::PredictiveController, ym, d=[]) -> x̂

Call preparestate! on mpc.estim StateEstimator.

source

Update State x

ModelPredictiveControl.updatestate!Function
updatestate!(model::SimModel, u, d=[]) -> xnext

Update model.x0 states with current inputs u and measured disturbances d.

The method computes and returns the model state for the next time step $\mathbf{x}(k+1)$.

Examples

julia> model = LinModel(ss(1.0, 1.0, 1.0, 0, 1.0));
 
 julia> x = updatestate!(model, [1])
 1-element Vector{Float64}:
 1.0
source
updatestate!(estim::StateEstimator, u, ym, d=[]) -> x̂next

Update estim.x̂0 estimate with current inputs u, measured outputs ym and dist. d.

This function should be called at the end of each discrete time step. It removes the operating points with remove_op!, calls update_estimate! and returns the state estimate for the next time step $\mathbf{x̂}_k(k+1)$. The method preparestate! should be called prior to this one to correct the estimate when applicable (if estim.direct == true).

Examples

julia> kf = SteadyKalmanFilter(LinModel(ss(0.1, 0.5, 1, 0, 4.0))); u = [1]; ym = [0];
 
 julia> preparestate!(kf, ym);
 
 julia> x̂ = updatestate!(kf, u, ym) # x̂[2] is the integrator state (nint_ym argument)
 2-element Vector{Float64}:
  0.5
 0.0
source
updatestate!(mpc::PredictiveController, u, ym, d=[]) -> x̂next

Call updatestate! on mpc.estim StateEstimator.

source
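Putting preparestate! and updatestate! together, a typical estimation loop corrects the estimate at the beginning of the step and propagates it at the end, as the two docstrings above prescribe (a sketch; the constant ym stands in for a real plant measurement):

```julia
using ModelPredictiveControl, ControlSystemsBase

kf = SteadyKalmanFilter(LinModel(ss(0.1, 0.5, 1, 0, 4.0)))
u, ym = [1.0], [0.0]
for k in 1:5
    x̂ = preparestate!(kf, ym)   # correct: x̂_k(k), ready to be used
    # ... use x̂ here (monitoring, control computations, etc.) ...
    updatestate!(kf, u, ym)     # predict: x̂_k(k+1) for the next time step
end
```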

Init State x

ModelPredictiveControl.initstate!Function
initstate!(model::SimModel, u, d=[]) -> x

Init model.x0 with manipulated inputs u and measured disturbances d steady-state.

It removes the operating points on u and d and calls steadystate!:

  • If model is a LinModel, the method computes the steady-state of current inputs u and measured disturbances d.
  • Else, model.x0 is left unchanged. Use setstate! to manually modify it.

Examples

julia> model = LinModel(tf(6, [10, 1]), 2.0);
 
 julia> u = [1]; x = initstate!(model, u); y = round.(evaloutput(model), digits=3)
 1-element Vector{Float64}:
  6.0
  
 julia> x ≈ updatestate!(model, u)
true
source
initstate!(estim::StateEstimator, u, ym, d=[]) -> x̂

Init estim.x̂0 states from current inputs u, measured outputs ym and disturbances d.

The method tries to find a good steady-state for the initial estimate $\mathbf{x̂}(0)$. It removes the operating points with remove_op! and calls init_estimate!:

  • If estim.model is a LinModel, it finds the steady-state of the augmented model using u and d arguments, and uses the ym argument to enforce that $\mathbf{ŷ^m}(0) = \mathbf{y^m}(0)$. For control applications, this solution produces a bumpless manual to automatic transfer. See init_estimate! for details.
  • Else, estim.x̂0 is left unchanged. Use setstate! to manually modify it.

If applicable, it also sets the error covariance estim.P̂ to estim.P̂_0.

Examples

julia> estim = SteadyKalmanFilter(LinModel(tf(3, [10, 1]), 0.5), nint_ym=[2]);
 
 julia> u = [1]; y = [3 - 0.1]; x̂ = round.(initstate!(estim, u, y), digits=3)
 3-element Vector{Float64}:
@@ -87,7 +87,7 @@
 true
 
 julia> evaloutput(estim) ≈ y
true
source
initstate!(mpc::PredictiveController, u, ym, d=[]) -> x̂

Init the states of mpc.estim StateEstimator and warm start mpc.ΔŨ at zero.

source

Set State x

ModelPredictiveControl.setstate!Function
setstate!(model::SimModel, x) -> model

Set model.x0 to x - model.xop from the argument x.

source
setstate!(estim::StateEstimator, x̂) -> estim

Set estim.x̂0 to x̂ - estim.x̂op from the argument x̂.

source
setstate!(mpc::PredictiveController, x̂) -> mpc

Set mpc.estim.x̂0 to x̂ - estim.x̂op from the argument x̂.

source

Set Model and Weights

ModelPredictiveControl.setmodel!Function
setmodel!(estim::StateEstimator, model=estim.model; <keyword arguments>) -> estim

Set model and covariance matrices of estim StateEstimator.

Allows model adaptation of estimators based on LinModel at runtime. Modification of NonLinModel state-space functions is not supported. New covariance matrices can be specified with the keyword arguments (see SteadyKalmanFilter documentation for the nomenclature). Not supported by Luenberger and SteadyKalmanFilter, use the time-varying KalmanFilter instead. The MovingHorizonEstimator model is kept constant over the estimation horizon $H_e$. The matrix dimensions and sample time must stay the same. Note that the observability and controllability of the new augmented model is not verified (see Extended Help for more info).

Arguments

Info

Keyword arguments with emphasis are non-Unicode alternatives.

  • estim::StateEstimator : estimator to set model and covariances.
  • model=estim.model : new plant model (not supported by NonLinModel).
  • Q̂=nothing or Qhat : new augmented model $\mathbf{Q̂}$ covariance matrix.
  • R̂=nothing or Rhat : new augmented model $\mathbf{R̂}$ covariance matrix.

Examples

julia> kf = KalmanFilter(LinModel(ss(0.1, 0.5, 1, 0, 4.0)), σQ=[√4.0], σQint_ym=[√0.25]);
 
 julia> kf.model.A[], kf.Q̂[1, 1], kf.Q̂[2, 2] 
 (0.1, 4.0, 0.25)
@@ -95,7 +95,7 @@
 julia> setmodel!(kf, LinModel(ss(0.42, 0.5, 1, 0, 4.0)), Q̂=[1 0;0 0.5]);
 
 julia> kf.model.A[], kf.Q̂[1, 1], kf.Q̂[2, 2] 
(0.42, 1.0, 0.5)

Extended Help

Using the default model augmentation computed by the default_nint method, switching from a non-integrating plant model to an integrating one will produce an augmented model that is not observable. Moving the unmeasured disturbances at the model input (nint_u parameter) can fix this issue.

source
setmodel!(mpc::PredictiveController, model=mpc.estim.model; <keyword arguments>) -> mpc

Set model and objective function weights of mpc PredictiveController.

Allows model adaptation of controllers based on LinModel at runtime. Modification of NonLinModel state-space functions is not supported. New weight matrices in the objective function can be specified with the keyword arguments (see LinMPC for the nomenclature). If Cwt ≠ Inf, the augmented move suppression weight is $\mathbf{Ñ}_{H_c} = \mathrm{diag}(\mathbf{N}_{H_c}, C)$, else $\mathbf{Ñ}_{H_c} = \mathbf{N}_{H_c}$. The StateEstimator mpc.estim cannot be a Luenberger observer or a SteadyKalmanFilter (the default estimator). Construct the mpc object with a time-varying KalmanFilter instead. Note that the model is constant over the prediction horizon $H_p$.

Arguments

Info

Keyword arguments with emphasis are non-Unicode alternatives.

  • mpc::PredictiveController : controller to set model and weights.
  • model=mpc.estim.model : new plant model (not supported by NonLinModel).
  • Mwt=nothing : new main diagonal in $\mathbf{M}$ weight matrix (vector).
  • Nwt=nothing : new main diagonal in $\mathbf{N}$ weight matrix (vector).
  • Lwt=nothing : new main diagonal in $\mathbf{L}$ weight matrix (vector).
  • M_Hp=nothing : new $\mathbf{M}_{H_p}$ weight matrix.
  • Ñ_Hc=nothing or Ntilde_Hc : new $\mathbf{Ñ}_{H_c}$ weight matrix (see def. above).
  • L_Hp=nothing : new $\mathbf{L}_{H_p}$ weight matrix.
  • additional keyword arguments are passed to setmodel!(mpc.estim).

Examples

julia> mpc = LinMPC(KalmanFilter(LinModel(ss(0.1, 0.5, 1, 0, 4.0)), σR=[√25]), Hp=1, Hc=1);
 
 julia> mpc.estim.model.A[1], mpc.estim.R̂[1], mpc.M_Hp[1], mpc.Ñ_Hc[1]
 (0.1, 25.0, 1.0, 0.1)
@@ -103,7 +103,7 @@
 julia> setmodel!(mpc, LinModel(ss(0.42, 0.5, 1, 0, 4.0)); R̂=[9], M_Hp=[10], Nwt=[0.666]);
 
 julia> mpc.estim.model.A[1], mpc.estim.R̂[1], mpc.M_Hp[1], mpc.Ñ_Hc[1]
(0.42, 9.0, 10.0, 0.666)
source

Get Additional Information

ModelPredictiveControl.getinfoFunction
getinfo(estim::MovingHorizonEstimator) -> info

Get additional info on estim MovingHorizonEstimator optimum for troubleshooting.

The function should be called after calling updatestate!. It returns the dictionary info with the following fields:

Info

Fields with emphasis are non-Unicode alternatives.

  • :Ŵ or :What : optimal estimated process noise over $N_k$, $\mathbf{Ŵ}$
  • :ϵ or :epsilon : optimal slack variable, $ϵ$
  • :X̂ or :Xhat : optimal estimated states over $N_k+1$, $\mathbf{X̂}$
  • :x̂ or :xhat : optimal estimated state for the next time step, $\mathbf{x̂}_k(k+1)$
  • :V̂ or :Vhat : optimal estimated sensor noise over $N_k$, $\mathbf{V̂}$
  • :P̄ or :Pbar : estimation error covariance at arrival, $\mathbf{P̄}$
  • :x̄ or :xbar : optimal estimation error at arrival, $\mathbf{x̄}$
  • :Ŷ or :Yhat : optimal estimated outputs over $N_k$, $\mathbf{Ŷ}$
  • :Ŷm or :Yhatm : optimal estimated measured outputs over $N_k$, $\mathbf{Ŷ^m}$
  • :x̂arr or :xhatarr : optimal estimated state at arrival, $\mathbf{x̂}_k(k-N_k+1)$
  • :J : objective value optimum, $J$
  • :Ym : measured outputs over $N_k$, $\mathbf{Y^m}$
  • :U : manipulated inputs over $N_k$, $\mathbf{U}$
  • :D : measured disturbances over $N_k$, $\mathbf{D}$
  • :sol : solution summary of the optimizer for printing

Examples

julia> model = LinModel(ss(1.0, 1.0, 1.0, 0, 5.0));
 
 julia> estim = MovingHorizonEstimator(model, He=1, nint_ym=0, direct=false);
 
 
 julia> round.(getinfo(estim)[:Ŷ], digits=3)
 1-element Vector{Float64}:
 0.5
source
getinfo(mpc::PredictiveController) -> info

Get additional info about mpc PredictiveController optimum for troubleshooting.

The function should be called after calling moveinput!. It returns the dictionary info with the following fields:

Info

Fields with emphasis are non-Unicode alternatives.

  • :ΔU or :DeltaU : optimal manipulated input increments over $H_c$, $\mathbf{ΔU}$
  • :ϵ or :epsilon : optimal slack variable, $ϵ$
  • :D̂ or :Dhat : predicted measured disturbances over $H_p$, $\mathbf{D̂}$
  • :ŷ or :yhat : current estimated output, $\mathbf{ŷ}(k)$
  • :Ŷ or :Yhat : optimal predicted outputs over $H_p$, $\mathbf{Ŷ}$
  • :Ŷs or :Yhats : predicted stochastic output over $H_p$ of InternalModel, $\mathbf{Ŷ_s}$
  • :R̂y or :Rhaty : predicted output setpoint over $H_p$, $\mathbf{R̂_y}$
  • :R̂u or :Rhatu : predicted manipulated input setpoint over $H_p$, $\mathbf{R̂_u}$
  • :x̂end or :xhatend : optimal terminal states, $\mathbf{x̂}_{k-1}(k+H_p)$
  • :J : objective value optimum, $J$
  • :U : optimal manipulated inputs over $H_p$, $\mathbf{U}$
  • :u : current optimal manipulated input, $\mathbf{u}(k)$
  • :d : current measured disturbance, $\mathbf{d}(k)$

For LinMPC and NonLinMPC, the field :sol also contains the optimizer solution summary that can be printed. Lastly, the optimal economic cost :JE is also available for NonLinMPC.

Examples

julia> mpc = LinMPC(LinModel(tf(5, [2, 1]), 3), Nwt=[0], Hp=1, Hc=1);
 
 julia> u = moveinput!(mpc, [10]);
 
 julia> round.(getinfo(mpc)[:Ŷ], digits=3)
 1-element Vector{Float64}:
 10.0
source

Soft Real-Time Utilities

Save current time t

ModelPredictiveControl.savetime!Function
savetime!(model::SimModel) -> t

Set model.t to time() and return the value.

Used in conjunction with periodsleep for simple soft real-time simulations. Call this function before any other in the simulation loop.

source
savetime!(estim::StateEstimator) -> t

Call savetime!(estim.model) and return the time t.

source
savetime!(mpc::PredictiveController) -> t

Call savetime!(mpc.estim.model) and return the time t.

source

Period Sleep

ModelPredictiveControl.periodsleepFunction
periodsleep(model::SimModel, busywait=false) -> nothing

Sleep for model.Ts s minus the time elapsed since the last call to savetime!.

It calls sleep if busywait is false. Else, a simple while loop implements busy-waiting. As a rule-of-thumb, busy-waiting should be used if model.Ts < 0.1 s, since the accuracy of sleep is around 1 ms. Can be used to implement simple soft real-time simulations, see the example below.

Examples

julia> model = LinModel(tf(2, [0.3, 1]), 0.1);
 
 julia> function sim_realtime!(model)
            t_0 = time()
 julia> sim_realtime!(model)
 0.0
 0.1
0.2
source
periodsleep(estim::StateEstimator) -> nothing

Call periodsleep(estim.model).

source
periodsleep(mpc::PredictiveController) -> nothing

Call periodsleep(mpc.estim.model).

source

Functions: Simulations and Plots

This page documents the functions for quick plotting of open- and closed-loop simulations. They are generic to the SimModel, StateEstimator and PredictiveController types. A SimResult instance must be created first with its constructor or by calling sim!. The results are then visualized with the plot function from Plots.jl.
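As a minimal sketch of that workflow (assuming ModelPredictiveControl, ControlSystemsBase and Plots are installed; the model is the first-order example used later on this page):

```julia
using ModelPredictiveControl, ControlSystemsBase, Plots

model = LinModel(tf(2, [10, 1]), 2.0)  # first-order plant, sample time Ts = 2 s
res   = sim!(model, 25)                # open-loop unit bump test, returns a SimResult
plot(res)                              # visualize the simulated inputs and outputs
```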

Quick Simulations

ModelPredictiveControl.sim!Function
sim!(plant::SimModel, N::Int, u=plant.uop.+1, d=plant.dop; x_0=plant.xop) -> res

Open-loop simulation of plant for N time steps, defaulting to a unit bump test on all inputs.

The manipulated inputs $\mathbf{u}$ and measured disturbances $\mathbf{d}$ are held constant at the u and d values, respectively. The plant initial state $\mathbf{x}(0)$ is specified by the x_0 keyword argument. The function returns a SimResult instance that can be visualized by calling plot on it. Note that the method mutates plant internal states.

Examples

julia> plant = NonLinModel((x,u,d)->0.1x+u+d, (x,_)->2x, 10.0, 1, 1, 1, 1, solver=nothing);
 
 julia> res = sim!(plant, 15, [0], [0], x_0=[1])
Simulation results of NonLinModel with 15 time steps.
source
sim!(
     estim::StateEstimator,
     N::Int,
     u = estim.model.uop .+ 1,
 julia> estim = KalmanFilter(model, σR=[0.5], σQ=[0.25], σQint_ym=[0.01], σPint_ym_0=[0.1]);
 
 julia> res = sim!(estim, 50, [0], y_noise=[0.5], x_noise=[0.25], x_0=[-10], x̂_0=[0, 0])
Simulation results of KalmanFilter with 50 time steps.
source
sim!(
     mpc::PredictiveController, 
     N::Int,
     ry = mpc.estim.model.yop .+ 1, 
 julia> mpc = setconstraint!(LinMPC(model, Mwt=[0, 1], Nwt=[0.01], Hp=30), ymin=[0, -Inf]);
 
 julia> res = sim!(mpc, 25, [0, 0], y_noise=[0.1], y_step=[-10, 0])
Simulation results of LinMPC with 25 time steps.
source

Simulation Results

Simulation Results

ModelPredictiveControl.SimResultType
SimResult(obj::SimModel,             U_data, Y_data, D_data=[]; <keyword arguments>)
 SimResult(obj::StateEstimator,       U_data, Y_data, D_data=[]; <keyword arguments>)
 SimResult(obj::PredictiveController, U_data, Y_data, D_data=[]; <keyword arguments>)

Manually construct a SimResult to quickly plot obj simulations.

Except for obj, all the arguments should be matrices of N columns, where N is the number of time steps. SimResult objects allow quick plotting of simulation results: simply call plot on them.

Arguments

Info

Keyword arguments with emphasis are non-Unicode alternatives.

  • obj : simulated SimModel/StateEstimator/PredictiveController
  • U_data : manipulated inputs
  • Y_data : plant outputs
  • D_data=[] : measured disturbances
  • X_data=nothing : plant states
  • X̂_data=nothing or Xhat_data : estimated states
  • Ŷ_data=nothing or Yhat_data : estimated outputs
  • Ry_data=nothing : plant output setpoints
  • Ru_data=nothing : manipulated input setpoints
  • plant=get_model(obj) : simulated plant model, default to obj internal plant model

Examples

julia> model = LinModel(tf(1, [1, 1]), 1.0);
 
  0.632121  0.864665  0.950213  0.981684  0.993262
 
 julia> res = SimResult(model, U_data, Y_data)
Simulation results of LinModel with 5 time steps.
source

Plotting Results

The plot methods are based on the Plots.jl package. To install it, run using Pkg; Pkg.add("Plots") in the Julia REPL.

ModelPredictiveControl.plotFunction
plot(res::SimResult{<:Real, <:SimModel}; <keyword arguments>)

Plot the simulation results of a SimModel.

Arguments

Info

The keyword arguments can be Bools, index ranges (2:4) or vectors ([1, 3]), to select the variables to plot.

  • res::SimResult{<:Real, <:SimModel} : simulation results to plot
  • ploty=true : plot plant outputs $\mathbf{y}$
  • plotu=true : plot manipulated inputs $\mathbf{u}$
  • plotd=true : plot measured disturbances $\mathbf{d}$ if applicable
  • plotx=false : plot plant states $\mathbf{x}$

Examples

julia> res = sim!(LinModel(tf(2, [10, 1]), 2.0), 25);
 
julia> using Plots; plot(res, plotu=false)

plot_model

source
plot(res::SimResult{<:Real, <:StateEstimator}; <keyword arguments>)

Plot the simulation results of a StateEstimator.

Arguments

Info

The keyword arguments can be Bools, index ranges (2:4) or vectors ([1, 3]), to select the variables to plot. Keywords in emphasis are non-Unicode alternatives.

  • res::SimResult{<:Real, <:StateEstimator} : simulation results to plot
  • plotŷ=true or plotyhat : plot estimated outputs $\mathbf{ŷ}$
  • plotx̂=false or plotxhat : plot estimated states $\mathbf{x̂}$
  • plotxwithx̂=false or plotxwithxhat : plot plant states $\mathbf{x}$ and estimated states $\mathbf{x̂}$ together
  • plotx̂min=true or plotxhatmin : plot estimated state lower bounds $\mathbf{x̂_{min}}$ if applicable
  • plotx̂max=true or plotxhatmax : plot estimated state upper bounds $\mathbf{x̂_{max}}$ if applicable
  • <keyword arguments> of plot(::SimResult{<:Real, <:SimModel})

Examples

julia> res = sim!(KalmanFilter(LinModel(tf(3, [2.0, 1]), 1.0)), 25, [0], y_step=[1]);
 
julia> using Plots; plot(res, plotu=false, plotŷ=true, plotxwithx̂=true)

plot_estimator

source
plot(res::SimResult{<:Real, <:PredictiveController}; <keyword arguments>)

Plot the simulation results of a PredictiveController.

Arguments

Info

The keyword arguments can be Bools, index ranges (2:4) or vectors ([1, 3]), to select the variables to plot.

  • res::SimResult{<:Real, <:PredictiveController} : simulation results to plot
  • plotry=true : plot plant output setpoints $\mathbf{r_y}$ if applicable
  • plotymin=true : plot predicted output lower bounds $\mathbf{y_{min}}$ if applicable
  • plotymax=true : plot predicted output upper bounds $\mathbf{y_{max}}$ if applicable
  • plotru=true : plot manipulated input setpoints $\mathbf{r_u}$ if applicable
  • plotumin=true : plot manipulated input lower bounds $\mathbf{u_{min}}$ if applicable
  • plotumax=true : plot manipulated input upper bounds $\mathbf{u_{max}}$ if applicable
  • <keyword arguments> of plot(::SimResult{<:Real, <:SimModel})
  • <keyword arguments> of plot(::SimResult{<:Real, <:StateEstimator})

Examples

julia> model = LinModel(tf(2, [5.0, 1]), 1.0);
 
 julia> res = sim!(setconstraint!(LinMPC(model), umax=[1.0]), 25, [0], u_step=[-1]);
 
julia> using Plots; plot(res, plotŷ=true, plotry=true, plotumax=true, plotx̂=[2])

plot_controller

source
julia> u = mpc([5]); round.(u, digits=3)
1-element Vector{Float64}:
 1.0
source

LinMPC

ModelPredictiveControl.LinMPCType
LinMPC(model::LinModel; <keyword arguments>)

Construct a linear predictive controller based on LinModel model.

The controller minimizes the following objective function at each discrete time $k$:

\[\begin{aligned}
\min_{\mathbf{ΔU}, ϵ}\ & \mathbf{(R̂_y - Ŷ)}' \mathbf{M}_{H_p} \mathbf{(R̂_y - Ŷ)} + \mathbf{(ΔU)}' \mathbf{N}_{H_c} \mathbf{(ΔU)} \\
                       & + \mathbf{(R̂_u - U)}' \mathbf{L}_{H_p} \mathbf{(R̂_u - U)} + C ϵ^2
\end{aligned}\]

  4 estimated states x̂
  2 measured outputs ym (2 integrating states)
  0 unmeasured outputs yu
  0 measured disturbances d

Extended Help

Manipulated input setpoints $\mathbf{r_u}$ are not common, but they can be interesting for over-actuated systems, when nu > ny (e.g. to prioritize solutions with lower economic costs). The default Lwt value implies that this feature is disabled by default.

The objective function follows this nomenclature:

| VARIABLE | DESCRIPTION | SIZE |
| --- | --- | --- |
| $H_p$ | prediction horizon (integer) | () |
| $H_c$ | control horizon (integer) | () |
| $\mathbf{ΔU}$ | manipulated input increments over $H_c$ | (nu*Hc,) |
| $\mathbf{Ŷ}$ | predicted outputs over $H_p$ | (ny*Hp,) |
| $\mathbf{U}$ | manipulated inputs over $H_p$ | (nu*Hp,) |
| $\mathbf{R̂_y}$ | predicted output setpoints over $H_p$ | (ny*Hp,) |
| $\mathbf{R̂_u}$ | predicted manipulated input setpoints over $H_p$ | (nu*Hp,) |
| $\mathbf{M}_{H_p}$ | output setpoint tracking weights over $H_p$ | (ny*Hp, ny*Hp) |
| $\mathbf{N}_{H_c}$ | manipulated input increment weights over $H_c$ | (nu*Hc, nu*Hc) |
| $\mathbf{L}_{H_p}$ | manipulated input setpoint tracking weights over $H_p$ | (nu*Hp, nu*Hp) |
| $C$ | slack variable weight | () |
| $ϵ$ | slack variable for constraint softening | () |
source
LinMPC(estim::StateEstimator; <keyword arguments>)

Use custom state estimator estim to construct LinMPC.

estim.model must be a LinModel. Else, a NonLinMPC is required.

Examples

julia> estim = KalmanFilter(LinModel([tf(3, [30, 1]); tf(-2, [5, 1])], 4), i_ym=[2]);
 
 julia> mpc = LinMPC(estim, Mwt=[0, 1], Nwt=[0.5], Hp=30, Hc=1)
 LinMPC controller with a sample time Ts = 4.0 s, OSQP optimizer, KalmanFilter estimator and:
   3 estimated states x̂
   1 measured outputs ym (1 integrating states)
   1 unmeasured outputs yu
  0 measured disturbances d
source

ExplicitMPC

ModelPredictiveControl.ExplicitMPCType
ExplicitMPC(model::LinModel; <keyword arguments>)

Construct an explicit linear predictive controller based on LinModel model.

The controller minimizes the following objective function at each discrete time $k$:

\[\begin{aligned}
\min_{\mathbf{ΔU}}\ & \mathbf{(R̂_y - Ŷ)}' \mathbf{M}_{H_p} \mathbf{(R̂_y - Ŷ)} + \mathbf{(ΔU)}' \mathbf{N}_{H_c} \mathbf{(ΔU)} \\
                    & + \mathbf{(R̂_u - U)}' \mathbf{L}_{H_p} \mathbf{(R̂_u - U)}
\end{aligned}\]

  4 estimated states x̂
  2 measured outputs ym (2 integrating states)
  0 unmeasured outputs yu
  0 measured disturbances d

source
ExplicitMPC(estim::StateEstimator; <keyword arguments>)

Use custom state estimator estim to construct ExplicitMPC.

estim.model must be a LinModel. Else, a NonLinMPC is required.

source

NonLinMPC

ModelPredictiveControl.NonLinMPCType
NonLinMPC(model::SimModel; <keyword arguments>)

Construct a nonlinear predictive controller based on SimModel model.

Both NonLinModel and LinModel are supported (see Extended Help). The controller minimizes the following objective function at each discrete time $k$:

\[\begin{aligned}
\min_{\mathbf{ΔU}, ϵ}\ & \mathbf{(R̂_y - Ŷ)}' \mathbf{M}_{H_p} \mathbf{(R̂_y - Ŷ)} + \mathbf{(ΔU)}' \mathbf{N}_{H_c} \mathbf{(ΔU)} \\
                       & + \mathbf{(R̂_u - U)}' \mathbf{L}_{H_p} \mathbf{(R̂_u - U)} + C ϵ^2 + E J_E(\mathbf{U}_E, \mathbf{Ŷ}_E, \mathbf{D̂}_E)
\end{aligned}\]

  2 estimated states x̂
  1 measured outputs ym (1 integrating states)
  0 unmeasured outputs yu
  0 measured disturbances d

Extended Help

NonLinMPC controllers based on LinModel compute the predictions with matrix algebra instead of a for loop. This feature can accelerate the optimization, especially for the constraint handling, and is not available in any other package, to my knowledge.

The optimization relies on JuMP automatic differentiation (AD) to compute the objective and constraint derivatives. Optimizers generally benefit from exact derivatives like AD. However, the NonLinModel state-space functions must be compatible with this feature. See Automatic differentiation for common mistakes when writing these functions.

Note that if Cwt≠Inf, the attribute nlp_scaling_max_gradient of Ipopt is set to 10/Cwt (if not already set), to scale the small values of $ϵ$.
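For instance, an economic cost can be supplied at construction. A sketch, assuming the JE and Ewt keyword arguments of the NonLinMPC constructor (not shown in this excerpt; the model is the example above):

```julia
using ModelPredictiveControl

model = NonLinModel((x,u,_)->0.5x+u, (x,_)->2x, 10.0, 1, 1, 1, solver=nothing)

# Hypothetical economic cost: penalize actuation energy over the horizon.
JE(UE, ŶE, D̂E) = sum(UE .^ 2)

mpc = NonLinMPC(model, Hp=10, Hc=2, Ewt=0.001, JE=JE)
```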

source
NonLinMPC(estim::StateEstimator; <keyword arguments>)

Use custom state estimator estim to construct NonLinMPC.

Examples

julia> model = NonLinModel((x,u,_)->0.5x+u, (x,_)->2x, 10.0, 1, 1, 1, solver=nothing);
 
 julia> estim = UnscentedKalmanFilter(model, σQint_ym=[0.05]);
 
   2 estimated states x̂
   1 measured outputs ym (1 integrating states)
   0 unmeasured outputs yu
  0 measured disturbances d
source

Move Manipulated Input u

ModelPredictiveControl.moveinput!Function
moveinput!(mpc::PredictiveController, ry=mpc.estim.model.yop, d=[]; <keyword args>) -> u

Compute the optimal manipulated input value u for the current control period.

Solve the optimization problem of mpc PredictiveController and return the result $\mathbf{u}(k)$. Following the receding horizon principle, the algorithm discards the optimal future manipulated inputs $\mathbf{u}(k+1), \mathbf{u}(k+2), ...$ Note that the method mutates mpc internal data but it does not modify mpc.estim states. Call updatestate!(mpc, u, ym, d) to update mpc state estimates.

Calling a PredictiveController object calls this method.

See also LinMPC, ExplicitMPC, NonLinMPC.
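A sketch of the receding horizon loop described above, combining moveinput!, updatestate! and the soft real-time utilities documented earlier (read_outputs and apply_input! are hypothetical plant-interface functions that the user must supply):

```julia
using ModelPredictiveControl

# Hypothetical closed-loop skeleton; `plant`, `ry`, `N`, `read_outputs` and
# `apply_input!` are placeholders, not part of the package API.
function control_loop!(mpc, plant, ry, N)
    for k in 0:N-1
        savetime!(mpc)             # mark the start of the control period
        ym = read_outputs(plant)   # measure the plant outputs
        u  = moveinput!(mpc, ry)   # solve the MPC optimization problem
        apply_input!(plant, u)     # send the optimal input to the plant
        updatestate!(mpc, u, ym)   # update the state estimate
        periodsleep(mpc)           # sleep for the rest of the sample period
    end
end
```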

Arguments

Info

Keyword arguments with emphasis are non-Unicode alternatives.

  • mpc::PredictiveController : solve optimization problem of mpc.
  • ry=mpc.estim.model.yop : current output setpoints $\mathbf{r_y}(k)$.
  • d=[] : current measured disturbances $\mathbf{d}(k)$.
  • D̂=repeat(d, mpc.Hp) or Dhat : predicted measured disturbances $\mathbf{D̂}$, constant in the future by default or $\mathbf{d̂}(k+j)=\mathbf{d}(k)$ for $j=1$ to $H_p$.
  • R̂y=repeat(ry, mpc.Hp) or Rhaty : predicted output setpoints $\mathbf{R̂_y}$, constant in the future by default or $\mathbf{r̂_y}(k+j)=\mathbf{r_y}(k)$ for $j=1$ to $H_p$.
  • R̂u=mpc.Uop or Rhatu : predicted manipulated input setpoints, constant in the future by default or $\mathbf{r̂_u}(k+j)=\mathbf{u_{op}}$ for $j=0$ to $H_p-1$.

Examples

julia> mpc = LinMPC(LinModel(tf(5, [2, 1]), 3), Nwt=[0], Hp=1000, Hc=1);
 
 julia> ry = [5]; u = moveinput!(mpc, ry); round.(u, digits=3)
 1-element Vector{Float64}:
 1.0
source
julia> y = model()
1-element Vector{Float64}:
 20.0
source

LinModel

ModelPredictiveControl.LinModelType
LinModel(sys::StateSpace[, Ts]; i_u=1:size(sys,2), i_d=Int[])

Construct a linear model from the state-space model sys with sampling time Ts in seconds.

The system sys can be continuous or discrete-time (Ts can be omitted for the latter). For continuous dynamics, its state-space equations are (discrete case in Extended Help):

\[\begin{aligned}
\mathbf{ẋ}(t) &= \mathbf{A x}(t) + \mathbf{B z}(t) \\
\mathbf{y}(t) &= \mathbf{C x}(t) + \mathbf{D z}(t)
\end{aligned}\]

with the state $\mathbf{x}$ and output $\mathbf{y}$ vectors. The $\mathbf{z}$ vector comprises the manipulated inputs $\mathbf{u}$ and measured disturbances $\mathbf{d}$, in any order. i_u provides the indices of $\mathbf{z}$ that are manipulated, and i_d, the measured disturbances. The constructor automatically discretizes continuous systems, resamples discrete ones if Ts ≠ sys.Ts, computes a new realization if not minimal, and separates the $\mathbf{z}$ terms in two parts (details in Extended Help). The rest of the documentation assumes discrete dynamics since all systems end up in this form.

See also ss

Examples

julia> model1 = LinModel(ss(-0.1, 1.0, 1.0, 0), 2.0) # continuous-time StateSpace
 \end{aligned}\]

Continuous dynamics are internally discretized using c2d and :zoh for manipulated inputs, and :tustin, for measured disturbances. Lastly, if sys is discrete and the provided argument Ts ≠ sys.Ts, the system is resampled by using the aforementioned discretization methods.

Note that the constructor transforms the system to its minimal realization using minreal for controllability/observability. As a consequence, the final state-space representation may be different from the one provided in sys. It is also converted into a more practical form ($\mathbf{D_u=0}$ because of the zero-order hold):

\[\begin{aligned} \mathbf{x}(k+1) &= \mathbf{A x}(k) + \mathbf{B_u u}(k) + \mathbf{B_d d}(k) \\ \mathbf{y}(k) &= \mathbf{C x}(k) + \mathbf{D_d d}(k) \end{aligned}\]

Use the syntax LinModel{NT}(A, Bu, C, Bd, Dd, Ts) to force a specific state-space representation.

source
LinModel(sys::TransferFunction[, Ts]; i_u=1:size(sys,2), i_d=Int[])

Convert sys to its minimal state-space realization when it is a transfer function.

sys is equal to $\frac{\mathbf{y}(s)}{\mathbf{z}(s)}$ for continuous-time, and $\frac{\mathbf{y}(z)}{\mathbf{z}(z)}$, for discrete-time.

See also tf

Examples

julia> model = LinModel([tf(3, [30, 1]) tf(-2, [5, 1])], 0.5, i_d=[2])
 LinModel with a sample time Ts = 0.5 s and:
  1 manipulated inputs u
  2 states x
  1 outputs y
 1 measured disturbances d
source
LinModel(sys::DelayLtiSystem, Ts; i_u=1:size(sys,2), i_d=Int[])

Discretize with zero-order hold when sys is a continuous system with delays.

The delays must be multiples of the sample time Ts.
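For example, assuming the delay function of ControlSystemsBase.jl, a first-order system with a 1.0 s input delay (a multiple of Ts = 0.5 s) could be modeled as follows (a sketch, not a verified doctest):

```julia
julia> using ControlSystemsBase

julia> sys = delay(1.0)*tf(3, [30, 1]);  # DelayLtiSystem with a 1.0 s delay

julia> model = LinModel(sys, 0.5);       # the delay is a multiple of Ts = 0.5 s
```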

source
LinModel{NT}(A, Bu, C, Bd, Dd, Ts)

Construct the model from the discrete state-space matrices A, Bu, C, Bd, Dd directly.

See LinModel(::StateSpace) Extended Help for the meaning of the matrices. The arguments Bd and Dd can be the scalar 0 if there is no measured disturbance. This syntax does not modify the state-space representation provided in argument (minreal is not called). Care must be taken to ensure that the model is controllable and observable. The optional parameter NT explicitly sets the number type of the vectors (defaults to Float64).
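As a sketch, a scalar discrete model without measured disturbance can be built from its matrices directly (the numerical values here are illustrative assumptions):

```julia
julia> A = [0.5;;]; Bu = [1.0;;]; C = [1.0;;];  # 1×1 discrete state-space matrices

julia> model = LinModel{Float64}(A, Bu, C, 0, 0, 2.0);  # Bd = Dd = 0: no measured disturbance
```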

source
LinModel(model::NonLinModel; x=model.x0+model.xop, u=model.uop, d=model.dop)

Call linearize(model; x, u, d) and return the resulting linear model.

source

NonLinModel

ModelPredictiveControl.NonLinModelType
NonLinModel{NT}(f::Function,  h::Function,  Ts, nu, nx, ny, nd=0; solver=RungeKutta(4))
 NonLinModel{NT}(f!::Function, h!::Function, Ts, nu, nx, ny, nd=0; solver=RungeKutta(4))

Construct a nonlinear model from state-space functions f/f! and h/h!.

Both continuous and discrete-time models are supported. The default arguments assume continuous dynamics. Use solver=nothing for the discrete case (see Extended Help). The functions are defined as:

\[\begin{aligned} \mathbf{ẋ}(t) &= \mathbf{f}\Big( \mathbf{x}(t), \mathbf{u}(t), \mathbf{d}(t) \Big) \\ \mathbf{y}(t) &= \mathbf{h}\Big( \mathbf{x}(t), \mathbf{d}(t) \Big) \end{aligned}\]

Extended Help

State-space functions are similar for discrete dynamics:

\[\begin{aligned} \mathbf{x}(k+1) &= \mathbf{f}\Big( \mathbf{x}(k), \mathbf{u}(k), \mathbf{d}(k) \Big) \\ \mathbf{y}(k) &= \mathbf{h}\Big( \mathbf{x}(k), \mathbf{d}(k) \Big) \end{aligned}\]

with two possible implementations as well:

  1. Non-mutating functions: define them as f(x, u, d) -> xnext and h(x, d) -> y.
  2. Mutating functions: define them as f!(xnext, x, u, d) -> nothing and h!(y, x, d) -> nothing.
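As an illustrative sketch (values assumed), a scalar discrete-time model written in both styles, with solver=nothing since the dynamics are already discrete:

```julia
julia> f(x, u, _) = 0.5*x .+ u; h(x, _) = x;  # non-mutating form

julia> model1 = NonLinModel(f, h, 2.0, 1, 1, 1, solver=nothing);

julia> f!(xnext, x, u, _) = (xnext .= 0.5*x .+ u; nothing);

julia> h!(y, x, _) = (y .= x; nothing);       # mutating form

julia> model2 = NonLinModel(f!, h!, 2.0, 1, 1, 1, solver=nothing);
```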
source

Set Variable Names

ModelPredictiveControl.setname!Function
setname!(model::SimModel; u=nothing, y=nothing, d=nothing, x=nothing) -> model

Set the names of model inputs u, outputs y, disturbances d, and states x.

The keyword arguments u, y, d, and x must be vectors of strings. The strings are used in the plotting functions.

Examples

julia> model = setname!(LinModel(tf(3, [10, 1]), 2.0), u=["\$A\$ (%)"], y=["\$T\$ (∘C)"])
 LinModel with a sample time Ts = 2.0 s and:
  1 manipulated inputs u
  1 states x
  1 outputs y
 0 measured disturbances d

source

Set Operating Points

ModelPredictiveControl.setop!Function
setop!(model; uop=nothing, yop=nothing, dop=nothing, xop=nothing, fop=nothing) -> model

Set the operating points of model (both LinModel and NonLinModel).

This method introduces deviation vectors around the manipulated input uop, model output yop, measured disturbance dop, and model state xop operating points (a.k.a. nominal values):

\[\begin{align*} \mathbf{u_0}(k) &= \mathbf{u}(k) - \mathbf{u_{op}} \\ \mathbf{d_0}(k) &= \mathbf{d}(k) - \mathbf{d_{op}} \\ \mathbf{y_0}(k) &= \mathbf{y}(k) - \mathbf{y_{op}} \\ @@ -83,7 +83,7 @@ julia> y = model() 1-element Vector{Float64}: - 20.0

source

Linearize

ModelPredictiveControl.linearizeFunction
linearize(model::SimModel; x=model.x0+model.xop, u=model.uop, d=model.dop) -> linmodel

Linearize model at the operating points x, u, d and return the LinModel.

The arguments x, u and d are the linearization points for the state $\mathbf{x}$, manipulated input $\mathbf{u}$ and measured disturbance $\mathbf{d}$, respectively (not necessarily an equilibrium, details in Extended Help). The Jacobians of $\mathbf{f}$ and $\mathbf{h}$ functions are automatically computed with ForwardDiff.jl.

Warning

See Extended Help if you get an error like: MethodError: no method matching (::var"##")(::Vector{ForwardDiff.Dual}).

Examples

julia> model = NonLinModel((x,u,_)->x.^3 + u, (x,_)->x, 0.1, 1, 1, 1, solver=nothing);
 
 julia> linmodel = linearize(model, x=[10.0], u=[0.0]); 
 

Following setop! notation, we find:

\[\begin{aligned} \mathbf{f_{op}} &= \mathbf{f(x_{op}, u_{op}, d_{op})} \\ \mathbf{y_{op}} &= \mathbf{h(x_{op}, d_{op})} \end{aligned}\]

Notice that $\mathbf{f_{op} - x_{op} = 0}$ if the point is an equilibrium. The equations are similar if the nonlinear model has nonzero operating points.

Automatic differentiation (AD) allows exact Jacobians. The NonLinModel f and h functions must be compatible with this feature though. See Automatic differentiation for common mistakes when writing these functions.

source
ModelPredictiveControl.linearize!Function
linearize!(linmodel::LinModel, model::SimModel; <keyword arguments>) -> linmodel

Linearize model and store the result in linmodel (in-place).

The keyword arguments are identical to linearize.

Examples

julia> model = NonLinModel((x,u,_)->x.^3 + u, (x,_)->x, 0.1, 1, 1, 1, solver=nothing);
 
 julia> linmodel = linearize(model, x=[10.0], u=[0.0]); linmodel.A
 1×1 Matrix{Float64}:
 
 julia> linearize!(linmodel, model, x=[20.0], u=[0.0]); linmodel.A
 1×1 Matrix{Float64}:
 1200.0
source

Differential Equation Solvers

DiffSolver

RungeKutta

ModelPredictiveControl.RungeKuttaType
RungeKutta(order=4; supersample=1)

Create a Runge-Kutta solver with optional super-sampling.

Only the 4th order Runge-Kutta is supported for now. The keyword argument supersample provides the number of internal steps (defaults to 1 step).
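As a sketch (model values assumed), a continuous model can be discretized with 2 internal Runge-Kutta steps per sample period:

```julia
julia> f(x, u, _) = -0.1*x .+ u; h(x, _) = x;

julia> model = NonLinModel(f, h, 2.0, 1, 1, 1, solver=RungeKutta(4, supersample=2));
```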

source
diff --git a/previews/PR94/public/state_estim/index.html b/previews/PR94/public/state_estim/index.html

julia> ŷ = kf()
1-element Vector{Float64}:
 20.0
source

SteadyKalmanFilter

SteadyKalmanFilter

ModelPredictiveControl.SteadyKalmanFilterType
SteadyKalmanFilter(model::LinModel; <keyword arguments>)

Construct a steady-state Kalman Filter with the LinModel model.

The steady-state (or asymptotic) Kalman filter is based on the process model:

\[\begin{aligned} \mathbf{x}(k+1) &= \mathbf{Â x}(k) + \mathbf{B̂_u u}(k) + \mathbf{B̂_d d}(k) + \mathbf{w}(k) \\ \mathbf{y^m}(k) &= \mathbf{Ĉ^m x}(k) + \mathbf{D̂_d^m d}(k) + \mathbf{v}(k) \\ \mathbf{y^u}(k) &= \mathbf{Ĉ^u x}(k) + \mathbf{D̂_d^u d}(k) \end{aligned}\]

 3 estimated states x̂
 1 measured outputs ym (1 integrating states)
 1 unmeasured outputs yu
 0 measured disturbances d

Extended Help

The model augmentation with nint_u vector adds integrators at model manipulated inputs, and nint_ym, at measured outputs. They create the integral action when the estimator is used in a controller as state feedback. By default, the method default_nint adds one integrator per measured output if feasible. The argument nint_ym can also be tweaked by following these rules on each measured output:

  • Use 0 integrator if the model output is already integrating (else it will be unobservable)
  • Use 1 integrator if the disturbances on the output are typically "step-like"
  • Use 2 integrators if the disturbances on the output are typically "ramp-like"

The function init_estimstoch builds the stochastic model for estimation.

Tip

Increasing σQint_u and σQint_ym values increases the integral action "gain".

The constructor pre-computes the steady-state Kalman gain with the kalman function. It can sometimes fail, for example when model matrices are ill-conditioned. In such a case, you can try the alternative time-varying KalmanFilter.
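As a minimal construction sketch, with one integrator on the measured output for step-like disturbances (the covariance value is an illustrative assumption):

```julia
julia> model = LinModel(tf(3, [30, 1]), 0.5);

julia> skf = SteadyKalmanFilter(model, nint_ym=[1], σQint_ym=[0.1]);
```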

source
SteadyKalmanFilter(model, i_ym, nint_u, nint_ym, Q̂, R̂; direct=true)

Construct the estimator from the augmented covariance matrices and .

This syntax allows nonzero off-diagonal elements in $\mathbf{Q̂, R̂}$.

source

KalmanFilter

ModelPredictiveControl.KalmanFilterType
KalmanFilter(model::LinModel; <keyword arguments>)

Construct a time-varying Kalman Filter with the LinModel model.

The process model is identical to SteadyKalmanFilter. The matrix $\mathbf{P̂}_k(k+1)$ is the estimation error covariance of model states augmented with the stochastic ones (specified by nint_u and nint_ym). Three keyword arguments modify its initial value with $\mathbf{P̂}_{-1}(0) = \mathrm{diag}\{ \mathbf{P}(0), \mathbf{P_{int_{u}}}(0), \mathbf{P_{int_{ym}}}(0) \}$.

Arguments

Info

Keyword arguments with emphasis are non-Unicode alternatives.

  • model::LinModel : (deterministic) model for the estimations.
  • σP_0=fill(1/model.nx,model.nx) or sigmaP_0 : main diagonal of the initial estimate covariance $\mathbf{P}(0)$, specified as a standard deviation vector.
  • σPint_u_0=fill(1,sum(nint_u)) or sigmaPint_u_0 : same as σP_0 but for the unmeasured disturbances at manipulated inputs $\mathbf{P_{int_u}}(0)$ (composed of integrators).
  • σPint_ym_0=fill(1,sum(nint_ym)) or sigmaPint_ym_0 : same as σP_0 but for the unmeasured disturbances at measured outputs $\mathbf{P_{int_{ym}}}(0)$ (composed of integrators).
  • <keyword arguments> of SteadyKalmanFilter constructor.

Examples

julia> model = LinModel([tf(3, [30, 1]); tf(-2, [5, 1])], 0.5);
 
 julia> estim = KalmanFilter(model, i_ym=[2], σR=[1], σP_0=[100, 100], σQint_ym=[0.01])
 KalmanFilter estimator with a sample time Ts = 0.5 s, LinModel and:
  3 estimated states x̂
  1 measured outputs ym (1 integrating states)
  1 unmeasured outputs yu
 0 measured disturbances d
source
KalmanFilter(model, i_ym, nint_u, nint_ym, P̂_0, Q̂, R̂; direct=true)

Construct the estimator from the augmented covariance matrices P̂_0, and .

This syntax allows nonzero off-diagonal elements in $\mathbf{P̂}_{-1}(0), \mathbf{Q̂, R̂}$.

source

Luenberger

ModelPredictiveControl.LuenbergerType
Luenberger(
     model::LinModel; 
     i_ym = 1:model.ny, 
     nint_u  = 0,
  4 estimated states x̂
  2 measured outputs ym (2 integrating states)
  0 unmeasured outputs yu
 0 measured disturbances d
source

UnscentedKalmanFilter

UnscentedKalmanFilter

ModelPredictiveControl.UnscentedKalmanFilterType
UnscentedKalmanFilter(model::SimModel; <keyword arguments>)

Construct an unscented Kalman Filter with the SimModel model.

Both LinModel and NonLinModel are supported. The unscented Kalman filter is based on the process model:

\[\begin{aligned} \mathbf{x}(k+1) &= \mathbf{f̂}\Big(\mathbf{x}(k), \mathbf{u}(k), \mathbf{d}(k)\Big) + \mathbf{w}(k) \\ \mathbf{y^m}(k) &= \mathbf{ĥ^m}\Big(\mathbf{x}(k), \mathbf{d}(k)\Big) + \mathbf{v}(k) \\ \mathbf{y^u}(k) &= \mathbf{ĥ^u}\Big(\mathbf{x}(k), \mathbf{d}(k)\Big) \end{aligned}\]

 3 estimated states x̂
 1 measured outputs ym (2 integrating states)
 0 unmeasured outputs yu
 0 measured disturbances d

Extended Help

The Extended Help of SteadyKalmanFilter details the augmentation with nint_ym and nint_u arguments. The default augmentation scheme is identical, that is nint_u=0 and nint_ym computed by default_nint. Note that the constructor does not validate the observability of the resulting augmented NonLinModel; it is the user's responsibility to ensure that it is still observable.
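As a minimal sketch with a scalar discrete NonLinModel (the keyword values are illustrative assumptions):

```julia
julia> model = NonLinModel((x,u,_)->0.5*x .+ u, (x,_)->x, 1.0, 1, 1, 1, solver=nothing);

julia> ukf = UnscentedKalmanFilter(model, σQ=[0.1], σR=[0.5]);
```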

source
UnscentedKalmanFilter(
     model, i_ym, nint_u, nint_ym, P̂_0, Q̂, R̂, α=1e-3, β=2, κ=0; direct=true
)

Construct the estimator from the augmented covariance matrices P̂_0, and .

This syntax allows nonzero off-diagonal elements in $\mathbf{P̂}_{-1}(0), \mathbf{Q̂, R̂}$.

source

ExtendedKalmanFilter

ModelPredictiveControl.ExtendedKalmanFilterType
ExtendedKalmanFilter(model::SimModel; <keyword arguments>)

Construct an extended Kalman Filter with the SimModel model.

Both LinModel and NonLinModel are supported. The process model is identical to UnscentedKalmanFilter. The Jacobians of the augmented model $\mathbf{f̂, ĥ}$ are computed with ForwardDiff.jl automatic differentiation.

Warning

See the Extended Help of linearize function if you get an error like: MethodError: no method matching (::var"##")(::Vector{ForwardDiff.Dual}).

Arguments

  • model::SimModel : (deterministic) model for the estimations.
  • <keyword arguments> of SteadyKalmanFilter constructor.
  • <keyword arguments> of KalmanFilter constructor.

Examples

julia> model = NonLinModel((x,u,_)->0.2x+u, (x,_)->-3x, 5.0, 1, 1, 1, solver=nothing);
 
 julia> estim = ExtendedKalmanFilter(model, σQ=[2], σQint_ym=[2], σP_0=[0.1], σPint_ym_0=[0.1])
 ExtendedKalmanFilter estimator with a sample time Ts = 5.0 s, NonLinModel and:
  2 estimated states x̂
  1 measured outputs ym (1 integrating states)
  0 unmeasured outputs yu
 0 measured disturbances d

source
ExtendedKalmanFilter(model, i_ym, nint_u, nint_ym, P̂_0, Q̂, R̂; direct=true)

Construct the estimator from the augmented covariance matrices P̂_0, and .

This syntax allows nonzero off-diagonal elements in $\mathbf{P̂}_{-1}(0), \mathbf{Q̂, R̂}$.

source

MovingHorizonEstimator

ModelPredictiveControl.MovingHorizonEstimatorType
MovingHorizonEstimator(model::SimModel; <keyword arguments>)

Construct a moving horizon estimator (MHE) based on model (LinModel or NonLinModel).

It can handle constraints on the estimates, see setconstraint!. Additionally, model is not linearized like the ExtendedKalmanFilter, and the probability distribution is not approximated like the UnscentedKalmanFilter. The computational costs are drastically higher, however, since it minimizes the following objective function at each discrete time $k$:

\[\min_{\mathbf{x̂}_k(k-N_k+1), \mathbf{Ŵ}, ϵ} \mathbf{x̄}' \mathbf{P̄}^{-1} \mathbf{x̄} + \mathbf{Ŵ}' \mathbf{Q̂}_{N_k}^{-1} \mathbf{Ŵ} + \mathbf{V̂}' \mathbf{R̂}_{N_k}^{-1} \mathbf{V̂} + C ϵ^2\]

in which the arrival costs are evaluated from the states estimated at time $k-N_k$. The estimated sensor noises $\mathbf{V̂}$ and process noises $\mathbf{Ŵ}$ over the horizon are computed from the augmented model functions $\mathbf{f̂, ĥ^m}$:

\[\begin{aligned} \mathbf{v̂}(k-j) &= \mathbf{y^m}(k-j) - \mathbf{ĥ^m}\Big(\mathbf{x̂}_k(k-j), \mathbf{d}(k-j)\Big) \\ \mathbf{x̂}_k(k-j+1) &= \mathbf{f̂}\Big(\mathbf{x̂}_k(k-j), \mathbf{u}(k-j), \mathbf{d}(k-j)\Big) + \mathbf{ŵ}(k-j) \end{aligned}\]

The slack variable $ϵ$ relaxes the constraints if enabled, see setconstraint!. It is disabled by default for the MHE (from Cwt=Inf) but it should be activated for problems with two or more types of bounds, to ensure feasibility (e.g. on the estimated state and sensor noise).

The optimization and the estimation of the covariance at arrival $\mathbf{P̂}_{k-N_k}(k-N_k+1)$ depend on model:

  • If model is a LinModel, the optimization is treated as a quadratic program with a time-varying Hessian, which is generally cheaper than nonlinear programming. By default, a KalmanFilter estimates the arrival covariance (customizable).
  • Else, a nonlinear program with automatic differentiation (AD) solves the optimization. Optimizers generally benefit from exact derivatives like AD. However, the f and h functions must be compatible with this feature. See Automatic differentiation for common mistakes when writing these functions. An UnscentedKalmanFilter estimates the arrival covariance by default.

Note that if Cwt≠Inf, the attribute nlp_scaling_max_gradient of Ipopt is set to 10/Cwt (if not already set), to scale the small values of $ϵ$. Use the second constructor to specify the covariance estimation method.
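As a construction sketch on a linear model (the He horizon keyword follows the second constructor signature; the covariance value is an illustrative assumption):

```julia
julia> model = LinModel(tf(3, [30, 1]), 0.5);

julia> estim = MovingHorizonEstimator(model, He=10, σQint_ym=[0.05]);
```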

source
MovingHorizonEstimator(
     model, He, i_ym, nint_u, nint_ym, P̂_0, Q̂, R̂, Cwt=Inf;
     optim=default_optim_mhe(model), 
     direct=true,
     covestim=default_covestim_mhe(model, i_ym, nint_u, nint_ym, P̂_0, Q̂, R̂; direct)
)

Construct the estimator from the augmented covariance matrices P̂_0, and .

This syntax allows nonzero off-diagonal elements in $\mathbf{P̂}_{-1}(0), \mathbf{Q̂, R̂}$. The keyword argument covestim also allows specifying a custom StateEstimator object for the estimation of covariance at the arrival $\mathbf{P̂}_{k-N_k}(k-N_k+1)$. The supported types are KalmanFilter, UnscentedKalmanFilter and ExtendedKalmanFilter.

source

InternalModel

ModelPredictiveControl.InternalModelType
InternalModel(model::SimModel; i_ym=1:model.ny, stoch_ym=ss(I,I,I,I,model.Ts))

Construct an internal model estimator based on model (LinModel or NonLinModel).

i_ym provides the model output indices that are measured $\mathbf{y^m}$, the rest are unmeasured $\mathbf{y^u}$. model evaluates the deterministic predictions $\mathbf{ŷ_d}$, and stoch_ym, the stochastic predictions of the measured outputs $\mathbf{ŷ_s^m}$ (the unmeasured ones being $\mathbf{ŷ_s^u=0}$). The predicted outputs sum both values: $\mathbf{ŷ = ŷ_d + ŷ_s}$. See the Extended Help for more details.

Warning

InternalModel estimator does not work if model is integrating or unstable. The constructor verifies these aspects for LinModel but not for NonLinModel. Use any other state estimator in such cases.

Examples

julia> estim = InternalModel(LinModel([tf(3, [30, 1]); tf(-2, [5, 1])], 0.5), i_ym=[2])
 InternalModel estimator with a sample time Ts = 0.5 s, LinModel and:
  1 manipulated inputs u
  2 estimated states x̂
  1 measured outputs ym
  1 unmeasured outputs yu
 0 measured disturbances d

Extended Help

stoch_ym is a TransferFunction or StateSpace object that models disturbances on $\mathbf{y^m}$. Its input is a hypothetical zero mean white noise vector. stoch_ym assumes 1 integrator per measured output by default, meaning that the current stochastic estimate $\mathbf{ŷ_s^m}(k) = \mathbf{y^m}(k) - \mathbf{ŷ_d^m}(k)$ is assumed constant in the future. This is the dynamic matrix control (DMC) strategy, which is simple but sometimes too aggressive. Additional poles and zeros in stoch_ym can mitigate this. The following block diagram summarizes the internal model structure.

block diagram of the internal model structure

source

Default Model Augmentation

ModelPredictiveControl.default_nintFunction
default_nint(model::LinModel, i_ym=1:model.ny, nint_u=0) -> nint_ym

Get default integrator quantity per measured outputs nint_ym for LinModel.

The arguments i_ym and nint_u are the measured output indices and the integrator quantity on each manipulated input, respectively. By default, one integrator is added on each measured output. If the $\mathbf{Â, Ĉ}$ matrices of the augmented model become unobservable, the integrator is removed. This approach works well for stable, integrating and unstable models (see Examples).

Examples

julia> model = LinModel(append(tf(3, [10, 1]), tf(2, [1, 0]), tf(4,[-5, 1])), 1.0);
 
 julia> nint_ym = default_nint(model)
 3-element Vector{Int64}:
  1
  0
 1
source
default_nint(model::SimModel, i_ym=1:model.ny, nint_u=0)

One integrator on each measured output by default if model is not a LinModel.

If the integrator quantity per manipulated input nint_u ≠ 0, the method returns zero integrators on each measured output.

source