# DeepRitz

This section reproduces the DeepRitz method presented by [Weinan E and Bing Yu](https://link.springer.com/article/10.1007/s40304-018-0127-z).

Consider the 2D Poisson equation:

$$
\begin{equation}
\begin{aligned}
-\Delta u=f, & \text { in } \Omega \\
u=0, & \text { on } \partial \Omega
\end{aligned}
\end{equation}
$$

Multiplying both sides by a test function $v \in H_0^1$ (a function that vanishes on the boundary) and integrating over $\Omega$ gives the weak form:

$$
\int f v=-\int v \Delta u=(\nabla u, \nabla v)
$$
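
This is Green's first identity: the boundary term produced by integration by parts vanishes because $v = 0$ on $\partial \Omega$,

$$
-\int_{\Omega} v \, \Delta u = \int_{\Omega} \nabla u \cdot \nabla v - \int_{\partial \Omega} v \, \frac{\partial u}{\partial n} = \int_{\Omega} \nabla u \cdot \nabla v .
$$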

The above equation holds for any $v \in H_0^1$. The right-hand side is symmetric and bilinear in $u$ and $v$, which yields the bilinear form:

$$
a(u, v)=\int \nabla u \cdot \nabla v
$$

By the Poincaré inequality, $a(\cdot, \cdot)$ is positive definite (coercive): there exists $\alpha > 0$ such that

$$
a(u, u) \geq \alpha\|u\|^2, \quad \forall u \in H_0^1
$$

The remaining term, $l(v) = \int f v$, is a linear functional of $v$, which yields the equation:

$$
a(u, v) = l(v)
$$

For this equation, discretizing $u$ and $v$ in the same finite-dimensional subspace yields a symmetric positive definite system of equations, which is the family of Galerkin methods; alternatively, the equation can be transformed into a variational minimization problem.

Finding $u$ that satisfies

$$
a(u, v) = l(v), \quad \forall v \in H_0^1
$$

is, for a symmetric positive definite $a$, equivalent to solving the variational minimization problem: find $u$ that minimizes the functional

$$
J(u) = \frac{1}{2} a(u, u) - l(u)
$$

Specifically,

$$
\min _{u \in H_0^1} J(u)=\frac{1}{2} \int\|\nabla u\|_2^2-\int f u
$$

The DeepRitz method is similar to the PINN approach: $u$ is replaced by a neural network $\hat{u}$, the domain is sampled, and the resulting discrete objective is minimized with an optimizer such as Adam. The discretized objective is written as

$$
\begin{equation}
\min _{\left.\hat{u}\right|_{\partial \Omega}=0} \hat{J}(\hat{u})=\frac{1}{2} \frac{S_{\Omega}}{N_{\Omega}} \sum\left\|\nabla \hat{u}\left(x_i, y_i\right)\right\|_2^2-\frac{S_{\Omega}}{N_{\Omega}} \sum f\left(x_i, y_i\right) \hat{u}\left(x_i, y_i\right)
\end{equation}
$$

Note that the original constraint $u \in H_0^1$, i.e. $u$ is zero on the boundary, is handled by transforming the problem into an unconstrained one with a penalty term:

$$
\begin{equation}
\min \hat{J}(\hat{u})=\frac{1}{2} \frac{S_{\Omega}}{N_{\Omega}} \sum\left\|\nabla \hat{u}\left(x_i, y_i\right)\right\|_2^2-\frac{S_{\Omega}}{N_{\Omega}} \sum f\left(x_i, y_i\right) \hat{u}\left(x_i, y_i\right)+\beta \frac{S_{\partial \Omega}}{N_{\partial \Omega}} \sum \hat{u}^2\left(x_i, y_i\right)
\end{equation}
$$
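
For illustration (this code is not part of the original example), a minimal PyTorch sketch of this penalized objective, assuming a network `u_net`, a torch-compatible source term `f`, uniformly sampled interior points `xy_in` of shape `(N, 2)`, and boundary points `xy_bc`, might look like:

```python
import torch

def deepritz_loss(u_net, xy_in, xy_bc, f, area=4.0, perimeter=8.0, beta=500.0):
    """Monte Carlo estimate of the penalized DeepRitz objective."""
    xy_in = xy_in.requires_grad_(True)
    u = u_net(xy_in)                                    # shape (N, 1)
    grad_u, = torch.autograd.grad(u.sum(), xy_in, create_graph=True)
    # 1/2 * S_Omega/N_Omega * sum |grad u|^2  -  S_Omega/N_Omega * sum f * u
    energy = 0.5 * (grad_u ** 2).sum(dim=1) - f(xy_in[:, 0], xy_in[:, 1]) * u.squeeze(-1)
    interior = area * energy.mean()
    # beta * S_dOmega/N_dOmega * sum u^2  (penalty enforcing u = 0 on the boundary)
    boundary = beta * perimeter * (u_net(xy_bc) ** 2).mean()
    return interior + boundary
```

Here `area` and `perimeter` stand for $S_{\Omega}=4$ and $S_{\partial \Omega}=8$ of the square domain $[-1,1]^2$ used below, and `beta` is the penalty weight $\beta$.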

Consider the 2D Poisson equation defined on $\Omega=[-1,1]\times[-1,1]$, with $f=2 \pi^2 \sin (\pi x) \sin (\pi y)$.

### Define Sampling Methods and Constraints

For this problem, a boundary condition and a PDE (energy) constraint are presented; the interior constraint uses the Identity loss.

```python
@sc.datanode(sigma=1000.0)
class Boundary(sc.SampleDomain):
    def __init__(self):
        # geo is the problem geometry, e.g. geo = sc.Rectangle((-1., -1.), (1., 1.))
        self.points = geo.sample_boundary(100)
        self.constraints = {"u": 0.}

    def sampling(self, *args, **kwargs):
        return self.points, self.constraints


@sc.datanode(loss_fn="Identity")
class Interior(sc.SampleDomain):
    def __init__(self):
        self.points = geo.sample_interior(1000)
        self.constraints = {"integral_dxdy": 0}

    def sampling(self, *args, **kwargs):
        return self.points, self.constraints
```
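
Here `"u": 0.` enforces the homogeneous Dirichlet condition on the sampled boundary points, and `"integral_dxdy": 0` refers to the output of the integral node defined in the next subsection; with the Identity loss, this sampled energy value itself is minimized rather than a squared residual. The `sigma=1000.0` argument weights the boundary loss and plays the role of the penalty coefficient $\beta$ above.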

### Define Neural Networks and PDEs

In the PDE definition part, following the DeepRitz method, we add two types of nodes: an expression node for the energy density and an integral node:

```python
# x, y are sympy Symbols and u = sp.Function('u')(x, y), as in the sections below
def f(x, y):
    return 2 * sp.pi ** 2 * sp.sin(sp.pi * x) * sp.sin(sp.pi * y)


# Pointwise energy density 1/2 |grad u|^2 - f*u, named "dxdy"
dx_exp = sc.ExpressionNode(
    expression=0.5 * (u.diff(x) ** 2 + u.diff(y) ** 2) - u * f(x, y), name="dxdy"
)
net = sc.get_net_node(inputs=("x", "y"), outputs=("u",), name="net", arch=sc.Arch.mlp)

# Integral node that produces "integral_dxdy", used by the Interior constraints
integral = sc.ICNode("dxdy", dim=2, time=False)
```
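
The original example stops here before showing the training step. Assuming the same `sc.Solver` API used in the Navier-Stokes examples below, assembling and training this model might look like the following sketch (the directory name and iteration count are assumptions):

```python
# A sketch only: the domain classes, net node and PDE nodes are the ones defined above.
s = sc.Solver(sample_domains=(Boundary(), Interior()),
              netnodes=[net],
              pdes=[dx_exp, integral],
              network_dir='network_dir',
              max_iter=10000)
s.solve()
```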

The result is shown as follows:

![deepritz](https://github.com/xiangzixuebit/picture/raw/3d73005f3642f10400975659479e856fb99f6518/deepritz.png)

# Navier-Stokes equations

This section reproduces the robust PINN method presented by [Peng et al.](https://deepai.org/publication/robust-regression-with-highly-corrupted-data-via-physics-informed-neural-networks).

## Steady 2D NS equations

The prototype problem of incompressible flow past a circular cylinder is considered.

![image](https://github.com/xiangzixuebit/picture/raw/3d73005f3642f10400975659479e856fb99f6518/NS1.png)

The velocity vector is set to zero at all walls and the pressure is set to $p = 0$ at the outlet. The fluid density is taken as $\rho = 1\,kg/m^3$ and the dynamic viscosity as $\mu = 2 \cdot 10^{-2}\,kg/(m \cdot s)$. The velocity profile at the inlet is set as $u(0, y)=4 \frac{U_M}{H^2}(H-y) y$ with $U_M = 1\,m/s$ and $H = 0.41\,m$.

The two-dimensional steady-state Navier-Stokes equations are equivalently transformed into the following first-order system in terms of the stress components:

$$
\begin{equation}
\begin{aligned}
\sigma^{11} &=-p+2 \mu u_x \\
\sigma^{22} &=-p+2 \mu v_y \\
\sigma^{12} &=\mu\left(u_y+v_x\right) \\
p &=-\frac{1}{2}\left(\sigma^{11}+\sigma^{22}\right) \\
\left(u u_x+v u_y\right) &=\mu\left(\sigma_x^{11}+\sigma_y^{12}\right) \\
\left(u v_x+v v_y\right) &=\mu\left(\sigma_x^{12}+\sigma_y^{22}\right)
\end{aligned}
\end{equation}
$$

We construct a neural network with six outputs to satisfy the PDE constraints above:

$$
\begin{equation}
u, v, p, \sigma^{11}, \sigma^{12}, \sigma^{22}=net(x, y)
\end{equation}
$$

### Define Symbols and Geometric Objects

For the 2D problem, we define two coordinate symbols `x` and `y` and six variables $u, v, p, \sigma^{11}, \sigma^{12}, \sigma^{22}$.

The geometry object is the difference of a rectangle and a circle, constructed with the operator `-`.

```python
import pandas as pd
import sympy as sp
from sympy import Symbol
import idrlnet.shortcut as sc

x = Symbol('x')
y = Symbol('y')
rec = sc.Rectangle((0., 0.), (1.1, 0.41))
cir = sc.Circle((0.2, 0.2), 0.05)
geo = rec - cir
u = sp.Function('u')(x, y)
v = sp.Function('v')(x, y)
p = sp.Function('p')(x, y)
s11 = sp.Function('s11')(x, y)
s22 = sp.Function('s22')(x, y)
s12 = sp.Function('s12')(x, y)
```

### Define Sampling Methods and Constraints

For this problem, three boundary conditions, the PDE constraints, and external data are presented. We use the robust PINN model inspired by the traditional LAD (Least Absolute Deviation) approach, where an L1 loss replaces the squared L2 data loss.
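
For the data-fitting term this means, schematically,

$$
L_{\mathrm{LAD}}=\frac{1}{N} \sum_{i=1}^{N}\left|u_i-\hat{u}\left(x_i, y_i\right)\right| \quad \text { instead of } \quad \frac{1}{N} \sum_{i=1}^{N}\left(u_i-\hat{u}\left(x_i, y_i\right)\right)^2,
$$

which makes the fit much less sensitive to outliers in the measured data.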

```python
@sc.datanode
class Inlet(sc.SampleDomain):
    def sampling(self, *args, **kwargs):
        points = rec.sample_boundary(1000, sieve=(sp.Eq(x, 0.)))
        constraints = sc.Variables({'u': 4 * (0.41 - y) * y / (0.41 * 0.41)})
        return points, constraints


@sc.datanode
class Outlet(sc.SampleDomain):
    def sampling(self, *args, **kwargs):
        points = geo.sample_boundary(1000, sieve=(sp.Eq(x, 1.1)))
        constraints = sc.Variables({'p': 0.})
        return points, constraints


@sc.datanode
class Wall(sc.SampleDomain):
    def sampling(self, *args, **kwargs):
        points = geo.sample_boundary(1000, sieve=((x > 0.) & (x < 1.1)))
        constraints = sc.Variables({'u': 0., 'v': 0.})
        return points, constraints


@sc.datanode(name='NS_external')
class Interior_domain(sc.SampleDomain):
    def __init__(self):
        self.density = 2000

    def sampling(self, *args, **kwargs):
        points = geo.sample_interior(2000)
        constraints = {'f_s11': 0., 'f_s22': 0., 'f_s12': 0., 'f_u': 0., 'f_v': 0., 'f_p': 0.}
        return points, constraints


@sc.datanode(name='NS_domain', loss_fn='L1')
class NSExternal(sc.SampleDomain):
    def __init__(self):
        points = pd.read_csv('NSexternel_sample.csv')
        self.points = {col: points[col].to_numpy().reshape(-1, 1) for col in points.columns}
        self.constraints = {'u': self.points.pop('u'), 'v': self.points.pop('v'), 'p': self.points.pop('p')}

    def sampling(self, *args, **kwargs):
        return self.points, self.constraints
```
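
The external data file `NSexternel_sample.csv` is expected to provide columns `x`, `y`, `u`, `v` and `p`: the coordinate columns remain as inputs, while `u`, `v` and `p` are popped off and used as measurement constraints under the L1 loss.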

### Define Neural Networks and PDEs

In the PDE definition part, we add these PDE nodes:

```python
nu = 2e-2  # dynamic viscosity mu from the problem setup (with rho = 1)
net = sc.MLP([2, 40, 40, 40, 40, 40, 40, 40, 40, 6], activation=sc.Activation.tanh)
net = sc.get_net_node(inputs=('x', 'y'), outputs=('u', 'v', 'p', 's11', 's22', 's12'), name='net', arch=sc.Arch.mlp)
pde1 = sc.ExpressionNode(name='f_s11', expression=-p + 2 * nu * u.diff(x) - s11)
pde2 = sc.ExpressionNode(name='f_s22', expression=-p + 2 * nu * v.diff(y) - s22)
pde3 = sc.ExpressionNode(name='f_s12', expression=nu * (u.diff(y) + v.diff(x)) - s12)
pde4 = sc.ExpressionNode(name='f_u', expression=u * u.diff(x) + v * u.diff(y) - nu * (s11.diff(x) + s12.diff(y)))
pde5 = sc.ExpressionNode(name='f_v', expression=u * v.diff(x) + v * v.diff(y) - nu * (s12.diff(x) + s22.diff(y)))
pde6 = sc.ExpressionNode(name='f_p', expression=p + (s11 + s22) / 2)
```

### Define A Solver

Direct use of Adam optimization alone is less effective here, so the LBFGS method, or a combination of the two (Adam followed by LBFGS), is used for training:

```python
s = sc.Solver(sample_domains=(Inlet(), Outlet(), Wall(), Interior_domain(), NSExternal()),
              netnodes=[net],
              init_network_dirs=['network_dir_adam'],
              pdes=[pde1, pde2, pde3, pde4, pde5, pde6],
              max_iter=300,
              opt_config=dict(optimizer='LBFGS', lr=1))
```
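
The `init_network_dirs=['network_dir_adam']` entry assumes a preceding Adam run whose weights were saved under that directory. A purely illustrative sketch of such a first stage, using the same API (the optimizer name, learning rate and iteration count are assumptions, not taken from the original):

```python
# Hypothetical first stage: train with Adam and save weights to 'network_dir_adam'
s_adam = sc.Solver(sample_domains=(Inlet(), Outlet(), Wall(), Interior_domain(), NSExternal()),
                   netnodes=[net],
                   pdes=[pde1, pde2, pde3, pde4, pde5, pde6],
                   network_dir='network_dir_adam',
                   max_iter=2000,
                   opt_config=dict(optimizer='Adam', lr=1e-3))
s_adam.solve()
```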

The result is shown as follows:

![image](https://github.com/xiangzixuebit/picture/raw/3d73005f3642f10400975659479e856fb99f6518/NS11.png)

## Unsteady 2D N-S equations with unknown parameters

A two-dimensional incompressible flow with dynamic vortex shedding past a circular cylinder is numerically simulated. The Reynolds number of the flow is $Re = 100$, the kinematic viscosity of the fluid is $\nu = 0.01$, and the cylinder diameter $D$ is 1. The simulation domain is $[-15,25] \times [-8,8]$, while the computational domain used here is much smaller: $[1,8] \times [-2,2] \times [0,20]$ in $x$, $y$ and $t$.

![image](https://github.com/xiangzixuebit/picture/raw/3d73005f3642f10400975659479e856fb99f6518/NS2.png)

The governing equations are:

$$
\begin{equation}
\begin{aligned}
&u_t+\lambda_1\left(u u_x+v u_y\right)=-p_x+\lambda_2\left(u_{x x}+u_{y y}\right) \\
&v_t+\lambda_1\left(u v_x+v v_y\right)=-p_y+\lambda_2\left(v_{x x}+v_{y y}\right)
\end{aligned}
\end{equation}
$$

where $\lambda_1$ and $\lambda_2$ are two unknown parameters to be recovered. We make the assumption that $u=\psi_y, \quad v=-\psi_x$

for some stream function $\psi(t, x, y)$. Under this assumption, the continuity equation $u_x + v_y = \psi_{yx} - \psi_{xy} = 0$ is automatically satisfied. The following architecture is used in this example:

$$
\begin{equation}
\psi, p=net\left(t, x, y, \lambda_1, \lambda_2\right)
\end{equation}
$$

### Define Symbols and Geometric Objects

We define three coordinate symbols `x`, `y` and `t`, and three variables $u, v, p$.

```python
x = Symbol('x')
y = Symbol('y')
t = Symbol('t')
geo = sc.Rectangle((1., -2.), (8., 2.))
u = sp.Function('u')(x, y, t)
v = sp.Function('v')(x, y, t)
p = sp.Function('p')(x, y, t)
time_range = {t: (0, 20)}
```

### Define Sampling Methods and Constraints

This example has far fewer equation constraints than the steady case above, which has six. We again use the LAD-PINN model, so the PDE-constrained optimization problem is formulated as follows (in IDRLnet this corresponds to setting `loss_fn='L1'` on the external data node):

$$
\min _{\theta, \lambda} \frac{1}{\# \mathbf{D}_u} \sum_{\left(t_i, x_i, u_i\right) \in \mathbf{D}_u}\left|u_i-u_\theta\left(t_i, x_i ; \lambda\right)\right|+\omega \cdot L_{p d e} .
$$

```python
@sc.datanode(name='NS_domain', loss_fn='L1')
class NSExternal(sc.SampleDomain):
    def __init__(self):
        points = pd.read_csv('NSexternel_sample.csv')
        self.points = {col: points[col].to_numpy().reshape(-1, 1) for col in points.columns}
        self.constraints = {'u': self.points.pop('u'), 'v': self.points.pop('v'), 'p': self.points.pop('p')}

    def sampling(self, *args, **kwargs):
        return self.points, self.constraints


@sc.datanode(name='NS_external')
class NSEq(sc.SampleDomain):
    def sampling(self, *args, **kwargs):
        points = geo.sample_interior(density=2000, param_ranges=time_range)
        constraints = {'continuity': 0, 'momentum_x': 0, 'momentum_y': 0}
        return points, constraints
```

### Define Neural Networks and PDEs

IDRLnet defines a network node to represent the unknown parameters.

```python
net = sc.MLP([3, 20, 20, 20, 20, 20, 20, 20, 20, 3], activation=sc.Activation.tanh)
net = sc.get_net_node(inputs=('x', 'y', 't'), outputs=('u', 'v', 'p'), name='net', arch=sc.Arch.mlp)
var_nr = sc.get_net_node(inputs=('x', 'y'), outputs=('nu', 'rho'), arch=sc.Arch.single_var)
pde = sc.NavierStokesNode(nu='nu', rho='rho', dim=2, time=True, u='u', v='v', p='p')
```

### Define A Solver

The two network nodes are trained together:

```python
s = sc.Solver(sample_domains=(NSExternal(), NSEq()),
              netnodes=[net, var_nr],
              pdes=[pde],
              network_dir='network_dir',
              max_iter=10000)
```

Finally, the real velocity and pressure fields at $t = 10s$ are compared with the predicted results:

![image](https://github.com/xiangzixuebit/picture/raw/3d73005f3642f10400975659479e856fb99f6518/NS22.png)