diff --git a/src/controller/nonlinmpc.jl b/src/controller/nonlinmpc.jl
index a8af925d..a990c442 100644
--- a/src/controller/nonlinmpc.jl
+++ b/src/controller/nonlinmpc.jl
@@ -162,11 +162,14 @@ NonLinMPC controller with a sample time Ts = 10.0 s, Ipopt optimizer, UnscentedK
     algebra instead of a `for` loop. This feature can accelerate the optimization,
     especially for the constraint handling, and is not available in any other package, to
     my knowledge.
-    The optimization relies on [`JuMP.jl`](https://github.com/jump-dev/JuMP.jl) automatic
+    The optimization relies on [`JuMP`](https://github.com/jump-dev/JuMP.jl) automatic
     differentiation (AD) to compute the objective and constraint derivatives. Optimizers
     generally benefit from exact derivatives like AD. However, the [`NonLinModel`](@ref)
     `f` and `h` functions must be compatible with this feature. See [Automatic
     differentiation](https://jump.dev/JuMP.jl/stable/manual/nlp/#Automatic-differentiation) for common mistakes when writing these functions.
+
+    Note that if `Cwt≠Inf`, the attribute `nlp_scaling_max_gradient` of `Ipopt` is set to
+    `10/Cwt` (if not already set), to scale the small values of ``ϵ``.
 """
 function NonLinMPC(
     model::SimModel;
diff --git a/src/estimator/mhe/construct.jl b/src/estimator/mhe/construct.jl
index a931e483..5d0f0b14 100644
--- a/src/estimator/mhe/construct.jl
+++ b/src/estimator/mhe/construct.jl
@@ -250,11 +250,15 @@ MovingHorizonEstimator estimator with a sample time Ts = 10.0 s, Ipopt optimizer
     state and sensor noise).
     For [`LinModel`](@ref), the optimization is treated as a quadratic program with a
-    time-varying Hessian, which is generally cheaper than nonlinear programming. For
-    [`NonLinModel`](@ref), the optimization relies on automatic differentiation (AD).
+    time-varying Hessian, which is generally cheaper than nonlinear programming.
+
+    For [`NonLinModel`](@ref), the optimization relies on automatic differentiation (AD).
     Optimizers generally benefit from exact derivatives like AD. However, the `f` and `h`
     functions must be compatible with this feature. See [Automatic
     differentiation](https://jump.dev/JuMP.jl/stable/manual/nlp/#Automatic-differentiation)
-    for common mistakes when writing these functions.
+    for common mistakes when writing these functions.
+
+    Note that if `Cwt≠Inf`, the attribute `nlp_scaling_max_gradient` of `Ipopt` is set to
+    `10/Cwt` (if not already set), to scale the small values of ``ϵ``.
 """
 function MovingHorizonEstimator(
     model::SM;
diff --git a/src/estimator/mhe/execute.jl b/src/estimator/mhe/execute.jl
index 87c59294..0a6b4277 100644
--- a/src/estimator/mhe/execute.jl
+++ b/src/estimator/mhe/execute.jl
@@ -192,9 +192,10 @@ also inits `estim.optim` objective function, expressed as the quadratic general
 ```math
     J = \min_{\mathbf{Z̃}} \frac{1}{2}\mathbf{Z̃' H̃ Z̃} + \mathbf{q̃' Z̃} + p
 ```
-in which ``\mathbf{Z̃} = [\begin{smallmatrix} ϵ \\ \mathbf{Z} \end{smallmatrix}]``. The
+in which ``\mathbf{Z̃} = [\begin{smallmatrix} ϵ \\ \mathbf{Z} \end{smallmatrix}]``. Note that
+``p`` does not affect the minimizer but is required to evaluate the objective minimum ``J``. The
 Hessian ``\mathbf{H̃}`` matrix of the quadratic general form is not constant here because
-of the time-varying ``\mathbf{P̄}`` covariance . The computations are:
+of the time-varying ``\mathbf{P̄}`` covariance. The computed variables are:
 ```math
 \begin{aligned}
     \mathbf{F} &= \mathbf{G U} + \mathbf{J D} + \mathbf{Y^m} \\
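
Reviewer note on the new `nlp_scaling_max_gradient` behavior documented in both docstrings: since the attribute is applied only "if not already set", a user can opt out of the `10/Cwt` default by pre-setting it on the `JuMP` model. A minimal sketch of that workflow (the value `50.0` and the commented `optim` keyword usage are illustrative assumptions, not part of this diff):

```julia
using JuMP, Ipopt

# Pre-setting the Ipopt option keeps the constructors from overriding it,
# since the new 10/Cwt scaling is applied only when the attribute is unset.
optim = JuMP.Model(Ipopt.Optimizer)
JuMP.set_optimizer_attribute(optim, "nlp_scaling_max_gradient", 50.0)  # Ipopt default: 100.0

# The customized model is then passed to the constructor, e.g.:
# mpc = NonLinMPC(model; Hp=10, Cwt=1e5, optim)  # `model`: an existing NonLinModel
```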
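
Both docstrings now stress that the `f` and `h` functions must be AD-compatible. For reviewers, a sketch of what that means in practice, assuming the `f(x, u, d)`/`h(x, d)` signatures from the `NonLinModel` documentation (the pendulum-like dynamics are purely illustrative):

```julia
# AD-friendly: generic (untyped) arguments and no Float64-only buffers, so the
# dual numbers used by the AD backend can propagate through the computations.
f(x, u, d) = [x[2], -0.1*x[2] - 9.8*sin(x[1]) + u[1]]
h(x, d) = [x[1]]

# A common mistake that breaks AD: over-constraining the argument types.
# f_bad(x::Vector{Float64}, u::Vector{Float64}, d) = ...  # rejects dual numbers
```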
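
The clarified sentence about ``p`` in `execute.jl` can be checked numerically: a constant shifts the objective value ``J`` but not the minimizer ``\mathbf{Z̃}``. A toy sketch with stand-in matrices (not the estimator's actual `H̃`, `q̃` and `p`):

```julia
using LinearAlgebra

H̃ = [2.0 0.5; 0.5 1.0]  # stand-in positive-definite Hessian
q̃ = [-1.0, 0.5]
p = 3.0
Z̃ = -(H̃ \ q̃)                          # unconstrained minimizer: ∇J = H̃*Z̃ + q̃ = 0
J = 0.5*dot(Z̃, H̃, Z̃) + dot(q̃, Z̃) + p  # p shifts J, leaves Z̃ unchanged
```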