update readme
shtadinada committed Nov 13, 2024
1 parent a161cec commit c0ab37e
Showing 1 changed file with 42 additions and 46 deletions.
Install via `Pkg.add(url="https://github.com/ZIB-IOL/AbsSmoothFrankWolfe.jl", rev="main")`.
Let us consider the minimization of the abs-smooth function $\max(5x_1+x_2,\ -5x_1+x_2,\ x_1^2+x_2^2+4x_2)$ subject to the simple box constraints $-5\leq x_i \leq 5$. Here is what the code looks like:
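Objectives of this form are "abs-smooth" because a pointwise maximum can be rewritten with absolute values via the identity $\max(a,b) = \tfrac{1}{2}(a+b+|a-b|)$; nesting it expresses the whole objective as a composition of smooth functions and $|\cdot|$, which is the structure the abs-linearization below exploits. A quick numerical sketch of this identity on the example's three pieces (plain Python for illustration only; `max_via_abs` is a hypothetical helper name, not part of the package):

```python
# Illustration: max(a, b) = (a + b + |a - b|) / 2, exposing the
# "abs-smooth" structure of a piecewise-smooth max function.

def max_via_abs(a, b):
    return 0.5 * (a + b + abs(a - b))

def f(x1, x2):
    # nested maxima: max(p1, p2, p3) = max(max(p1, p2), p3)
    p1 = 5 * x1 + x2
    p2 = -5 * x1 + x2
    p3 = x1**2 + x2**2 + 4 * x2
    return max_via_abs(max_via_abs(p1, p2), p3)

print(f(1.0, 1.0))   # 6.0 at the starting point x_base
print(f(0.0, -3.0))  # -3.0, where all three pieces coincide
```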

```julia
using AbsSmoothFrankWolfe
using FrankWolfe
using LinearAlgebra
using JuMP
using HiGHS

import MathOptInterface
const MOI = MathOptInterface

# DEM example: pointwise maximum of three smooth pieces
function f(x)
    return max(5*x[1]+x[2], -5*x[1]+x[2], x[1]^2+x[2]^2+4*x[2])
end

# evaluation point x_base
x_base = [1.0,1.0]
n = length(x_base)

# box constraints
lb_x = [-5 for _ in x_base]
ub_x = [5 for _ in x_base]

# call the abs-linear form of f
abs_normal_form = abs_linear(x_base,f)

alf_a = abs_normal_form.Y
alf_b = abs_normal_form.J
z = abs_normal_form.z
s = abs_normal_form.num_switches

sigma_z = signature_vec(s,z)

# gradient formula in terms of the abs-linearization
function grad!(storage, x)
    c = vcat(alf_a', alf_b' .* sigma_z)
    @. storage = c
end

# define the model using JuMP with HiGHS as inner solver and set the bounds
o = Model(HiGHS.Optimizer)
MOI.set(o, MOI.Silent(), true)
@variable(o, lb_x[i] <= x[i=1:n] <= ub_x[i])

# initialise the dual gap
dualgap_asfw = Inf

# abs-smooth LMO
lmo_as = AbsSmoothLMO(o, x_base, f, n, s, lb_x, ub_x, dualgap_asfw)

# define the termination criterion using a Frank-Wolfe `callback` function:
# to stop the frank_wolfe algorithm prematurely once a condition is met,
# the callback returns `false`; here it terminates once the ASFW dual gap
# drops below 1e-2.
function make_termination_callback(state)
    return function callback(state, args...)
        return state.lmo.dualgap_asfw[1] > 1e-2
    end
end

callback = make_termination_callback(FrankWolfe.CallbackState)

# call Abs-Smooth Frank-Wolfe
x, v, primal, dual_gap, traj_data = as_frank_wolfe(
    f,
    grad!,
    lmo_as,
    x_base;
    gradient = ones(n+s),
    line_search = FrankWolfe.FixedStep(1.0),
    callback = callback,
    verbose = true,
    max_iteration = 1e7
)

Vanilla Abs-Smooth Frank-Wolfe Algorithm.
...
LMO: AbsSmoothLMO
...
   I          1   8.500000e+01   2.376593e+00   1.281206e+02   0.000000e+00            Inf
  Last        7   2.000080e+00   2.000000e-05   3.600149e-04   2.885519e+00   2.425907e+00
-------------------------------------------------------------------------------------------------
x_final = [1.00002, 1.0]

```
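In FrankWolfe.jl, a callback that returns `false` stops the run, which is what the dual-gap criterion above relies on. The closure pattern itself can be sketched independently of the package (plain Python; the toy solver and all names here are illustrative only, not FrankWolfe.jl APIs):

```python
# Schematic of the early-termination pattern: the solver calls the
# callback every iteration and stops as soon as it returns False.

def make_termination_callback(tol):
    def callback(state):
        return state["dual_gap"] > tol  # False -> stop
    return callback

def toy_solver(callback):
    gap = 1.0
    it = 0
    while True:
        it += 1
        gap *= 0.5  # pretend each iteration halves the dual gap
        if not callback({"dual_gap": gap}):
            return it, gap

it, gap = toy_solver(make_termination_callback(1e-2))
print(it, gap)  # 7 0.0078125
```

Factoring the tolerance into a closure, as the README does, keeps the stopping rule configurable without changing the solver call itself.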
## Documentation