doc: comment on nlp_scaling_max_gradient
franckgaga committed Jan 18, 2024
1 parent 3d98242 commit 650e11a
Showing 3 changed files with 14 additions and 6 deletions.
5 changes: 4 additions & 1 deletion src/controller/nonlinmpc.jl
@@ -162,11 +162,14 @@ NonLinMPC controller with a sample time Ts = 10.0 s, Ipopt optimizer, UnscentedK
algebra instead of a `for` loop. This feature can accelerate the optimization, especially
for the constraint handling, and is not available in any other package, to my knowledge.
- The optimization relies on [`JuMP.jl`](https://github.com/jump-dev/JuMP.jl) automatic
+ The optimization relies on [`JuMP`](https://github.com/jump-dev/JuMP.jl) automatic
differentiation (AD) to compute the objective and constraint derivatives. Optimizers
generally benefit from exact derivatives like AD. However, the [`NonLinModel`](@ref) `f`
and `h` functions must be compatible with this feature. See [Automatic differentiation](https://jump.dev/JuMP.jl/stable/manual/nlp/#Automatic-differentiation)
for common mistakes when writing these functions.
+ Note that if `Cwt≠Inf`, the attribute `nlp_scaling_max_gradient` of `Ipopt` is set to
+ `10/Cwt` (if not already set), to scale the small values of ``ϵ``.
"""
function NonLinMPC(
model::SimModel;
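The AD requirement in the docstring above is easiest to see with an example. Below is a minimal sketch of AD-compatible `f` and `h` functions, assuming `f(x, u, d)`/`h(x, d)` signatures and a made-up first-order model:

```julia
# Minimal sketch (hypothetical first-order model). Leaving the arguments
# untyped is what keeps the functions compatible with JuMP's ForwardDiff-based
# AD: dual numbers must be able to propagate through them.
f(x, u, _) = [0.9x[1] + 0.1u[1]]   # discrete-time state update
h(x, _)    = [2.0x[1]]             # measured output
# A common mistake (see the linked JuMP docs) is over-typing the arguments,
# e.g. f(x::Vector{Float64}, u, d), which breaks dual-number propagation.
```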
10 changes: 7 additions & 3 deletions src/estimator/mhe/construct.jl
@@ -250,11 +250,15 @@ MovingHorizonEstimator estimator with a sample time Ts = 10.0 s, Ipopt optimizer
state and sensor noise).
For [`LinModel`](@ref), the optimization is treated as a quadratic program with a
- time-varying Hessian, which is generally cheaper than nonlinear programming. For
- [`NonLinModel`](@ref), the optimization relies on automatic differentiation (AD).
+ time-varying Hessian, which is generally cheaper than nonlinear programming.
+ For [`NonLinModel`](@ref), the optimization relies on automatic differentiation (AD).
Optimizers generally benefit from exact derivatives like AD. However, the `f` and `h`
functions must be compatible with this feature. See [Automatic differentiation](https://jump.dev/JuMP.jl/stable/manual/nlp/#Automatic-differentiation)
- for common mistakes when writing these functions.
+ for common mistakes when writing these functions.
+ Note that if `Cwt≠Inf`, the attribute `nlp_scaling_max_gradient` of `Ipopt` is set to
+ `10/Cwt` (if not already set), to scale the small values of ``ϵ``.
"""
function MovingHorizonEstimator(
model::SM;
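For the `nlp_scaling_max_gradient` note added in both docstrings, here is a hedged sketch of how that attribute can be set manually on a JuMP model with Ipopt; the `Cwt` value is hypothetical, and per the note the constructors only apply this when the attribute is not already set:

```julia
using JuMP, Ipopt

Cwt = 1e5   # hypothetical slack-variable weight (Cwt ≠ Inf)
optim = Model(Ipopt.Optimizer)
# Per the note above: account for the small ϵ values by capping the gradient
# scaling at 10/Cwt (this mimics what the constructors do automatically).
set_optimizer_attribute(optim, "nlp_scaling_max_gradient", 10.0 / Cwt)
```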
5 changes: 3 additions & 2 deletions src/estimator/mhe/execute.jl
@@ -192,9 +192,10 @@ also inits `estim.optim` objective function, expressed as the quadratic general
```math
J = \min_{\mathbf{Z̃}} \frac{1}{2}\mathbf{Z̃' H̃ Z̃} + \mathbf{q̃' Z̃} + p
```
- in which ``\mathbf{Z̃} = [\begin{smallmatrix} ϵ \\ \mathbf{Z} \end{smallmatrix}]``. The
+ in which ``\mathbf{Z̃} = [\begin{smallmatrix} ϵ \\ \mathbf{Z} \end{smallmatrix}]``. Note that
+ ``p`` does not affect the optimization but is required to evaluate the objective minimum ``J``. The
Hessian ``\mathbf{H̃}`` matrix of the quadratic general form is not constant here because
- of the time-varying ``\mathbf{P̄}`` covariance. The computations are:
+ of the time-varying ``\mathbf{P̄}`` covariance. The computed variables are:
```math
\begin{aligned}
\mathbf{F} &= \mathbf{G U} + \mathbf{J D} + \mathbf{Y^m} \\
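As a quick numeric illustration of the constant term discussed above (with made-up `H̃`, `q̃`, and `p` values): `p` shifts the objective value but not the minimizer, which is why the solver can ignore it while it is still needed to report the true minimum `J`:

```julia
using LinearAlgebra

# Hypothetical unconstrained instance of J = min ½ Z̃'H̃Z̃ + q̃'Z̃ + p.
H̃ = [2.0 0.0; 0.0 4.0]
q̃ = [-2.0, -4.0]
p = 3.0
Z̃ = -(H̃ \ q̃)                               # minimizer: solves H̃*Z̃ + q̃ = 0
J = 0.5 * dot(Z̃, H̃ * Z̃) + dot(q̃, Z̃) + p    # p shifts J, not Z̃
```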
