Add pinnx submodule #1932

Conversation
Hi @chaoming0625, thank you for adding the code. It is nice. My first suggestion: since there is some overlapping code between DeepXDE and PINNx, such as geometry, we should reduce the duplication as much as possible. You can import and reuse the code in DeepXDE.
Hi @lululxvi, I have made some changes. However, note that with

```python
import brainstate as bst

bst.environ.set(precision='b16')
```

models will be trained using `b16` (bfloat16) precision. Here is an example:

```python
import brainstate as bst
import brainunit as u
import optax

from deepxde import pinnx

bst.environ.set(precision='b16')  # or '32', '64', '16'

# Geometry: 1D spatial interval plus a time domain, with physical units
geometry = pinnx.geometry.GeometryXTime(
    geometry=pinnx.geometry.Interval(-1, 1.),
    timedomain=pinnx.geometry.TimeDomain(0, 0.99)
).to_dict_point(x=u.meter, t=u.second)

uy = u.meter / u.second  # unit of the solution y

# Boundary and initial conditions
bc = pinnx.icbc.DirichletBC(lambda x: {'y': 0. * uy})
ic = pinnx.icbc.IC(lambda x: {'y': -u.math.sin(u.math.pi * x['x'] / u.meter) * uy})

# Viscosity coefficient
v = 0.01 / u.math.pi * u.meter ** 2 / u.second

def pde(x, y):
    jacobian = approximator.jacobian(x)
    hessian = approximator.hessian(x)
    dy_x = jacobian['y']['x']
    dy_t = jacobian['y']['t']
    dy_xx = hessian['y']['x']['x']
    residual = dy_t + y['y'] * dy_x - v * dy_xx
    return residual

# Network: dict inputs with units -> array -> FNN -> dict outputs with units
approximator = pinnx.nn.Model(
    pinnx.nn.DictToArray(x=u.meter, t=u.second),
    pinnx.nn.FNN(
        [geometry.dim] + [20] * 3 + [1],
        "tanh",
    ),
    pinnx.nn.ArrayToDict(y=uy)
)

problem = pinnx.problem.TimePDE(
    geometry,
    pde,
    [bc, ic],
    approximator,
    num_domain=2540,
    num_boundary=80,
    num_initial=160,
)

trainer = pinnx.Trainer(problem)
trainer.compile(bst.optim.OptaxOptimizer(optax.adamw(1e-3)))
trainer.train(iterations=10000)
```
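For intuition about what a reduced-precision mode like `precision='b16'` implies, here is a small standalone sketch using NumPy's `float16` as a stand-in (NumPy has no bfloat16, and the exact dtype `brainstate` selects is not shown in this thread): increments below the half-precision machine epsilon are silently lost.

```python
import numpy as np

# Stand-in illustration of reduced-precision training modes such as
# precision='b16' above. NumPy lacks bfloat16, so float16 is used here;
# both formats keep far fewer mantissa bits than float32.
x32 = np.float32(1.0) + np.float32(1e-4)  # increment representable in float32
x16 = np.float16(1.0) + np.float16(1e-4)  # 1e-4 is below float16 eps/2, lost

print(x32 > 1.0)   # True: float32 resolves the increment
print(x16 == 1.0)  # True: float16 rounds it away
```

This loss of small gradient updates is why low-precision training usually needs care (e.g. loss scaling), even though it roughly halves memory use.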
Therefore, hosting
Hi @lululxvi, next, my colleague will continue the integration of `pinnx` into `deepxde`, and he will focus on resolving every problem that arises during the integration.
No problem.
deepxde/pinnx/__init__.py (Outdated)

```
@@ -0,0 +1,39 @@
# Copyright 2024 BDP Ecosystem Limited. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
```
DeepXDE uses LGPL-2.1 license. Can we also use LGPL-2.1 license in these files?
Sorry for this line. It should be changed to DeepXDE's license.
Then we can just remove these comments.
Hi @lululxvi, I have just changed to LGPL-2.1 license in these files.
How about we simply remove the license info from the source code? DeepXDE already has the license file at https://github.com/lululxvi/deepxde/blob/master/LICENSE.
Sure, I have removed the license info from the source code.
In response to chaobrain/pinnx#19, #1904, and #1899, I am trying to merge pinnx into DeepXDE. This pull request includes a submodule `deepxde.pinnx`, which enables explicit variables and physical units in physics-informed neural networks. A code example is given in the conversation above. This new submodule supports most of the PINN examples in DeepXDE. The documentation is hosted in the `docs/pinnx_docs` directory, and ample examples are included in `examples/pinnx_examples`.
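To make the "explicit variables" idea concrete, here is a hypothetical, dependency-free sketch of what layers like `pinnx.nn.DictToArray` and `pinnx.nn.ArrayToDict` conceptually do; the function names below are illustrative only, and the real pinnx layers additionally check and convert physical units.

```python
import numpy as np

# Hypothetical stand-in for the DictToArray / ArrayToDict idea:
# stack named inputs {'x': ..., 't': ...} into one feature array for the
# network, then re-attach names to the network's output columns.
def dict_to_array(point, order=('x', 't')):
    # Stack each named coordinate as one feature column, in a fixed order.
    return np.stack([np.asarray(point[k]) for k in order], axis=-1)

def array_to_dict(arr, names=('y',)):
    # Name each output column so downstream code can index by variable name.
    return {name: arr[..., i] for i, name in enumerate(names)}

batch = {'x': np.array([0.0, 0.5]), 't': np.array([0.1, 0.2])}
features = dict_to_array(batch)           # shape (2, 2): columns x, t
outputs = array_to_dict(features[:, :1])  # pretend this is the net output
```

Naming inputs and outputs this way lets conditions such as the `ic` above refer to `x['x']` and return `{'y': ...}` without relying on implicit column order.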