
UDE integration #65

Open · sebapersson opened this issue Sep 15, 2023 · 4 comments

@sebapersson (Owner)

@m-philipps, @schminin, and I have discussed UDE support (essentially, support for including universal function approximators such as neural networks in a PEtab model).

The first point I think is relevant to discuss is the input format. When the model contains a neural network, I don't think it is possible to write a symbolic equation (ModelingToolkit-style) or to use Catalyst. This means the model has to be provided as an ODEProblem. The challenge then becomes how to keep track of the parameters and states. I suggest we require the user to define the mechanistic parameters as a NamedTuple (e.g. p_dynamic = (a = 3.0, b = 3.0)), which is then combined with the neural-network parameters via a ComponentArray, and the initial values as a state-map (see below):

using ModelingToolkit
using Lux
using ComponentArrays
using OrdinaryDiffEq
using Random
using UnPack

# Define the neural network 
hidden_layers = 2
first_layer = Dense(2, 5, Lux.tanh)
intermediate_layers = [Dense(5, 5, Lux.tanh) for _ in 1:hidden_layers-1]
last_layer = Dense(5, 2)
nn = Lux.Chain(first_layer, intermediate_layers..., last_layer)
rng = Random.default_rng()
Random.seed!(rng, 1)
p_nn, st = Lux.setup(rng, nn)

# Define the initial-value map and parameters
@variables prey, predator
p_mechanistic = (α = 1.3, δ = 1.8)
u0_map = [prey => 0.44249296, predator => 4.6280594]
p_combined = ComponentArray(merge(p_mechanistic, (p_nn=p_nn,)))

# Dynamics 
function lv!(du, u, p, t)
    prey, predator = u
    @unpack α, δ, p_nn = p

    # Lux models return (output, state); take the network output
    du_nn = nn([prey, predator], p_nn, st)[1]

    du[1] = α*prey + du_nn[1]
    du[2] = du_nn[2] - δ*predator
    return nothing
end

By using a ComponentArray to group the neural-network and dynamic parameters, together with the @unpack macro, I think we can achieve quite clean model syntax. I suggest we then let the user define lv_problem, which is then provided to readPEtabModel:

lv_problem = ODEProblem(lv!, u0_map, (0.0, 500), p_combined)
petab_model = readPEtabModel(lv_problem, ...)

Then, in readPEtabModel, I will call modelingtoolkitize(lv_problem), and as long as the initial values are provided via a state-map and the parameters as a ComponentArray, I will be able to keep track of the parameter order and build the correct indices when setting up the PEtabProblem. This should be enough, given that the other PEtab information (like simulation conditions) is available, for me to correctly build a parameter estimation problem.
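For reference, here is a minimal, purely illustrative sketch (not PEtab.jl internals) of how a flat parameter ordering can be recovered from such a ComponentArray, using the labels function that ComponentArrays exports:

using ComponentArrays

# Toy parameter object mirroring p_combined above; the nested p_nn entry
# stands in for the Lux parameter NamedTuple
p = ComponentArray(α = 1.3, δ = 1.8, p_nn = (w = [0.1, 0.2],))

# One string per flat index, fixing the parameter order, e.g.
# ["α", "δ", "p_nn.w[1]", "p_nn.w[2]"]
labels(p)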

Furthermore, I also think the state-map is a good idea, as it allows the user to encode parameter-dependent initial conditions, something like

u0_map = [prey => α, predator => 4.6280594]

What do you two think about the input format?

@TorkelE (Contributor) commented Sep 15, 2023

For reference, while I have not tried it myself (I have been meaning to make sure it works and to write a tutorial; we even mention this in the paper), UDE integration via Catalyst is an intended feature. The syntax should look like this:

unknown_rate = make_neural_network(...)
rn = @reaction_network begin
    ($(unknown_rate), d), 0 <--> X
end

Right now, replacing parameters with a neural network is the only thing that would work; representing fully arbitrary reactions would not be possible (but I would like to implement something like https://pubmed.ncbi.nlm.nih.gov/33471526/ for this).

@TorkelE (Contributor) commented Sep 15, 2023

Following up on my previous point: we discussed this, and we cannot fully support UDEs in Catalyst right now. We aim to have it fully working by the beginning of next year.

@sebapersson (Owner, Author)

It would be great to also have partial UDE support via Catalyst. I therefore think the best approach would be to add UDE support for ODEProblem (as above) for users who want to do more advanced things (e.g. replace a state), and down the line also add UDE support in Catalyst as the more user-friendly option.

@sebapersson (Owner, Author) commented Oct 5, 2023

@m-philipps and @schminin: neural networks in the model are now supported on the UDE_integration branch. I have tried to make it as easy as possible to define the model, and currently everything can be set up via the code below (you can find the full example, and the associated PEtab folder, here):

using Lux
using ComponentArrays
using ModelingToolkit
using Random
using UnPack
using PEtab
using OrdinaryDiffEq
using Plots
using Ipopt


# Define the neural network
hidden_layers = 2
first_layer = Dense(2, 5, Lux.tanh)
intermediate_layers = [Dense(5, 5, Lux.tanh) for _ in 1:hidden_layers-1]
last_layer = Dense(5, 2)
nn = Lux.Chain(first_layer, intermediate_layers..., last_layer)
rng = Random.default_rng()
Random.seed!(rng, 1)
p_nn, st = Lux.setup(rng, nn)

# Define the initial-value map and parameters
@variables prey, predator
p_mechanistic = (α = 1.3, δ = 1.8)
u0_map = [prey => 0.44249296, predator => 4.6280594]
p_combined = ComponentArray(merge(p_mechanistic, (p_nn=p_nn,)))

# Dynamics
function lv!(du, u, p, t)
    prey, predator = u
    @unpack α, δ, p_nn = p

    # Lux models return (output, state); take the network output
    du_nn = nn([prey, predator], p_nn, st)[1]

    du[1] = α*prey + du_nn[1]
    du[2] = du_nn[2] - δ*predator
    return nothing
end

lv_problem = ODEProblem(lv!, u0_map, (0.0, 500), p_combined)
path_yaml = joinpath(@__DIR__, "petab_problem", "petab_problem", "petab_problem.yaml")
petab_model = PEtabModel(path_yaml, lv_problem, write_to_file=true)

Here PEtabModel simply takes the YAML file, the ODEProblem, and any potential additional options. From a PEtabModel, a PEtabODEProblem can now be built, allowing easy objective and gradient computations (with all the methods available in PEtab.jl):

petab_problem = PEtabODEProblem(petab_model,
                                ode_solver=ODESolver(Rodas5P(), abstol=1e-8, reltol=1e-8), 
                                gradient_method=:ForwardDiff)
# p0 is the vector of parameters to estimate (mechanistic and network weights)
f = petab_problem.compute_cost(p0)
∇f = petab_problem.compute_gradient(p0)

and the model can be trained either by wrapping external optimisers, or by using any of those available in PEtab.jl, such as Ipopt:

# A good parameter Vector from training
#  p0 = [0.11753319620828563, 0.2534687584628469, -1.681432507324037, 0.32499151893081296, 0.36914119110141047, 0.050983215410473424, -2.2899272133243, -0.5835703303333487, 0.3833751283738198, -1.593909484316206, 0.4445738185701283, 5.045708632939731, 4.105623325645565, -4.45614380906301, -3.460630382408623, -0.69046931123274, 8.037701925042986, -0.3539866309028288, -2.3031594205870136, -3.594783044128806, -3.864628928761406, 2.003383319966077, 5.4632255243501655, 6.08593910735619, -0.24169351993501445, 1.7724396725892908, 1.9294948705637167, 3.306480256535746, 2.6170036689227136, 3.9881969393180574, 1.6824144346035175, -0.5316005332769452, 1.00200883131416, 0.8730186809895223, 0.4710358410045214, -1.7667191460237879, -0.19589228389519667, 4.1696234012253495, -0.6328667586099389, 2.660393959432052, -7.191306943859559, 1.2884478129986003, 3.013721362259555, 4.435340563440399, -9.556192412905016, -9.878317375651857, 6.485477021070721, -5.387670729732423, 4.724608630907724, -8.60317910366224, 7.676772587610731, 4.589969368229043, -2.0236144646563794, 4.475602341842053, -0.23074168072766651, 0.8621991084021459, 5.791175619766056, -5.807734508945664, 4.364505511305734]
alg = IpoptOptimiser(true)
opt = IpoptOptions(print_level=5)
dir_save = joinpath(petab_model.dir_model, "Opt_results")
res = calibrate_model_multistart(petab_problem, alg, 100, dir_save; seed=123, options=opt)
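
For anyone who prefers wrapping their own optimiser, below is a minimal, purely illustrative sketch of plain gradient descent built on the compute_cost/compute_gradient interface shown above (the step size and iteration count are arbitrary placeholders, not tuned values):

# Hand-rolled gradient descent around the PEtab objective; only the
# compute_cost and compute_gradient calls from above are used
p = copy(p0)   # start from an initial parameter vector
η = 1e-3       # illustrative step size
for _ in 1:1000
    ∇f = petab_problem.compute_gradient(p)
    p .-= η .* ∇f   # in-place gradient-descent update
end
petab_problem.compute_cost(p)   # final objective value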

Any direct feedback on the format?
