This repository provides some basic insights on Physics Informed Neural Networks (PINNs) and their implementation.
PINNs are numerical methods based on the universal approximation capacity of neural networks, aiming to approximate solutions of partial differential equations. Recently, extensive focus has been on approximating solutions of various equations, leading to the introduction of different PINN concepts. See PINN papers for a walkthrough of the classical literature on PINNs.
This repository will show how to solve the 1D heat equation using PINNs. This code can be easily modified to solve a variety of equations in higher-dimensional domains.
Let us consider the heat equation

$$
u_t(x,t) - u_{xx}(x,t) = f(x,t), \qquad (x,t) \in \Omega \times (0,T), \tag{1}
$$

where $\Omega \subset \mathbb{R}$ is a bounded interval and $f$ is a given source term, with a prescribed boundary condition on $\partial\Omega \times (0,T)$ and an initial condition $u(x,0) = u_0(x)$ in $\Omega$.

PINNs aim to find the parameters $\theta$ of a neural network $u_\theta(x,t)$ that approximates the solution $u$ of (1).
Remark 1: Both the architecture and the activation function of the previous neural network can be changed, taking care that the new architecture has reasonable approximation properties and that the activation function is as differentiable as the PINN scheme requires. In the case of the heat equation, the standard theory of parabolic equations gives a smooth solution (for sufficiently regular data), so a smooth activation such as $\tanh$ is appropriate.
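For concreteness, such a network can be a fully connected $\tanh$ network. The following is a generic sketch, not the repository's FCN module (its layer-list format is described in the Tuning Parameters section below):

```python
import torch

class MLP(torch.nn.Module):
    """Fully connected network u_theta(x, t); `layers` lists the width of each layer."""
    def __init__(self, layers):
        super().__init__()
        self.linears = torch.nn.ModuleList(
            torch.nn.Linear(layers[i], layers[i + 1]) for i in range(len(layers) - 1)
        )

    def forward(self, xt):
        # xt has shape (N, 2): the columns are x and t
        for linear in self.linears[:-1]:
            xt = torch.tanh(linear(xt))   # smooth activation, differentiable as needed
        return self.linears[-1](xt)

# u_theta = MLP([2, 32, 32, 32, 1])  # 2 inputs (x, t), three hidden layers, 1 output u
```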
To satisfy (1), the main idea of PINNs is to penalize the neural network so that it satisfies both the PDE and the boundary and initial conditions. For this, we introduce the following loss functions:

$$
\mathcal{L}_{\mathrm{PDE}}(\theta) = \frac{1}{N_{\mathrm{pde}}} \sum_{i=1}^{N_{\mathrm{pde}}} \bigl| \partial_t u_\theta(x_i,t_i) - \partial_{xx} u_\theta(x_i,t_i) - f(x_i,t_i) \bigr|^2,
$$

where $\{(x_i,t_i)\}_{i=1}^{N_{\mathrm{pde}}}$ are collocation points sampled inside the spatio-temporal domain, and

$$
\mathcal{L}_{\mathrm{BC}}(\theta) = \frac{1}{N_{\mathrm{bc}}} \sum_{j=1}^{N_{\mathrm{bc}}} \bigl| u_\theta(x_j,t_j) - u(x_j,t_j) \bigr|^2,
$$

where $\{(x_j,t_j)\}_{j=1}^{N_{\mathrm{bc}}}$ are points sampled on the boundary and at the initial time, where the data are given. The total loss is

$$
\mathcal{L}(\theta) = w_{\mathrm{pde}}\, \mathcal{L}_{\mathrm{PDE}}(\theta) + w_{\mathrm{bc}}\, \mathcal{L}_{\mathrm{BC}}(\theta), \tag{2}
$$

where the constants $w_{\mathrm{pde}}$ and $w_{\mathrm{bc}}$ weight the contribution of each term.
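As a minimal sketch of how these losses can be computed with automatic differentiation in PyTorch (the function names and signatures here are illustrative, not the repository's lossPDE/lossBC API):

```python
import torch

def pde_residual_loss(u_theta, x, t, f):
    """Mean-squared residual of u_t - u_xx = f at interior collocation points."""
    x = x.clone().requires_grad_(True)
    t = t.clone().requires_grad_(True)
    u = u_theta(torch.cat([x, t], dim=1))
    u_t = torch.autograd.grad(u, t, torch.ones_like(u), create_graph=True)[0]
    u_x = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    u_xx = torch.autograd.grad(u_x, x, torch.ones_like(u_x), create_graph=True)[0]
    return torch.mean((u_t - u_xx - f(x, t)) ** 2)

def boundary_loss(u_theta, xt_bc, u_bc):
    """Mean-squared mismatch with the boundary/initial data u_bc at points xt_bc."""
    return torch.mean((u_theta(xt_bc) - u_bc) ** 2)

# Total loss as in (2):
# loss = w_pde * pde_residual_loss(u_theta, x_in, t_in, f) + w_bc * boundary_loss(u_theta, xt_bc, u_bc)
```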
Remark 2: One of the important advantages of PINNs is that it is not necessary to have labels, as is typically the case in supervised learning. It is sufficient to have a sample of points from our spatio-temporal domain.
This code contains two examples.
1) Example_1: The solution of the heat equation on the first example's spatio-temporal domain with its corresponding source function (`case = "example_1"`).
2) Example_2: The solution of the heat equation on a second domain with a different source function (`case = "example_2"`).
The code is divided into seven sections:
1) Libraries: We load the libraries used in the main code, including PyTorch, NumPy, os, DeepXDE, and Matplotlib.
2) Device configuration: General configurations. We define the device and the type of float that will be used in Torch.
3) Tuning Parameters: Here we can tune the parameters of the method (a hedged configuration sketch follows this list). Among them are:
- step: Corresponds to the number of steps in the neural network training process.
- batch_size: To optimize over the dataset, we divide it into batches; this parameter defines the size of each batch.
- w_bc: The weight of the boundary/initial-condition loss, introduced in (2).
- w_pde: The weight of the PDE residual loss, introduced in (2).
- lr: Learning rate for the stochastic gradient-based optimizer.
- layers: A vector representing the architecture of the fully connected neural network. The $i$-th entry of the vector corresponds to the number of neurons in the $i$-th layer.
- case: In our code, we have implemented two examples for the heat equation, so this parameter can take the values "example_1" or "example_2". Each example has a different domain and source function.
- N_test_x: Number of testing points in space.
- N_test_t: Number of testing points in time.
- N_train_x: Number of training points in space.
- N_train_t: Number of training points in time.
- N_bc: Number of training points on the boundary.
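A hedged sketch of what this configuration block might look like; all concrete values below are illustrative choices, not the repository's defaults.

```python
# Hypothetical tuning block -- values are illustrative, not the repository's defaults.
steps      = 10_000               # number of training steps
batch_size = 256                  # size of each training batch
w_bc       = 1.0                  # weight of the boundary/initial-condition loss in (2)
w_pde      = 1.0                  # weight of the PDE residual loss in (2)
lr         = 1e-3                 # learning rate for the optimizer
layers     = [2, 32, 32, 32, 1]   # 2 inputs (x, t), three hidden layers, 1 output u
case       = "example_1"          # or "example_2"
N_train_x, N_train_t = 100, 100   # training points in space and time
N_test_x,  N_test_t  = 200, 200   # testing points in space and time
N_bc       = 100                  # training points on the boundary
```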
4) Generate data: In this section, we use functions from the data_gen module, which generates data within the domain and on its boundary. The functions used are the following (a hedged sampling sketch follows this list):
- get_PDE_dataset: Generates the training data within the domain. It is possible to use Latin hypercube sampling instead of the get_PDE_dataset function.
- get_BC_dataset: Generates training data on the boundary.
- get_test_dataset: Generates testing data.
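The exact signatures of these data_gen functions are not reproduced here; the following is a hedged sketch of how interior collocation points could be sampled, either uniformly at random or with a Latin hypercube design (the domain bounds x_min, x_max, t_max are illustrative assumptions):

```python
import torch
from scipy.stats import qmc  # Latin hypercube sampling

def sample_interior(n_points, x_min=0.0, x_max=1.0, t_max=1.0):
    """Uniform random collocation points (x, t) inside the space-time domain."""
    x = x_min + (x_max - x_min) * torch.rand(n_points, 1)
    t = t_max * torch.rand(n_points, 1)
    return x, t

def sample_interior_lhs(n_points, x_min=0.0, x_max=1.0, t_max=1.0):
    """The same idea with a Latin hypercube design, as mentioned above."""
    sampler = qmc.LatinHypercube(d=2)
    pts = qmc.scale(sampler.random(n_points), [x_min, 0.0], [x_max, t_max])
    pts = torch.tensor(pts, dtype=torch.float32)
    return pts[:, :1], pts[:, 1:]
```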
5) Create Model and Optimizer: We use the FCN module to create the fully connected neural network. This module contains the following functions:
- Forward: Corresponds to the forward step of the neural network.
- lossBC: Calculates the loss function on the boundary and for the initial condition.
- lossPDE: Calculates the loss function within the domain.
- loss: Corresponds to the sum of lossBC and lossPDE.
- relative_error_l2_norm: Calculates the relative error in the $L^2$ norm. This function is used to compare the testing data with the obtained solution.
- Closure: This function is only used when the chosen optimization method is L-BFGS.

The optimizer is also defined in this section. By default, we have selected the ADAM optimizer; it is also possible to use the L-BFGS optimizer (you need to uncomment it). A hedged sketch of the optimizer setup follows.
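A rough sketch of the optimizer setup (the stand-in model and the `compute_total_loss` helper in the L-BFGS closure are assumptions for illustration, not the repository's FCN API):

```python
import torch

# Stand-in model so this snippet runs on its own (the repository uses its FCN module instead).
u_theta = torch.nn.Sequential(
    torch.nn.Linear(2, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)

# Default choice: ADAM.
optimizer = torch.optim.Adam(u_theta.parameters(), lr=1e-3)

# Alternative: L-BFGS, which requires a closure re-evaluating the loss (uncomment to use).
# optimizer = torch.optim.LBFGS(u_theta.parameters(), lr=1.0)
# def closure():
#     optimizer.zero_grad()
#     loss = compute_total_loss(u_theta)   # hypothetical helper combining the PDE and BC losses
#     loss.backward()
#     return loss
# optimizer.step(closure)
```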
6) Training process: In the training process, we iterate over each batch and over all the data in each batch. The batches are selected randomly but without repetition. In each iteration, the solution with the current parameters is saved, along with its relative error with respect to the exact solution, and images are generated with this information.
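A minimal sketch of such a loop, reusing the hypothetical pde_residual_loss and boundary_loss helpers from the earlier sketch (the repository's actual training code may differ):

```python
import torch

def train(u_theta, optimizer, x_in, t_in, f, xt_bc, u_bc,
          steps, batch_size, w_pde, w_bc):
    n = x_in.shape[0]
    for step in range(steps):
        # Random batches without repetition within each pass over the data.
        for idx in torch.randperm(n).split(batch_size):
            optimizer.zero_grad()
            loss = (w_pde * pde_residual_loss(u_theta, x_in[idx], t_in[idx], f)
                    + w_bc * boundary_loss(u_theta, xt_bc, u_bc))
            loss.backward()
            optimizer.step()
        # Here one could also record the current prediction and its relative L2 error
        # on the test set, and save an image of it for the animation.
```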
7) Animation: Using the generate_gif function, we create a .gif and an .mp4 from the images of the training process, illustrating how the approximate solution converges to the real one.
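The generate_gif function itself is not reproduced here; as a rough illustration, frames saved during training could be stitched together with imageio (the frames/ path and file pattern are assumptions):

```python
import glob
import imageio.v2 as imageio

# Collect the images saved during training (hypothetical path/pattern) and stitch them together.
frames = [imageio.imread(p) for p in sorted(glob.glob("frames/step_*.png"))]
imageio.mimsave("training.gif", frames)
# imageio.mimsave("training.mp4", frames, fps=10)   # requires the imageio-ffmpeg backend
```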