
Fix bugs in DataLoader and tutorial5 notebook #376

Draft · wants to merge 7 commits into base: 0.2
Conversation

FilippoOlivo

No description provided.

@FilippoOlivo FilippoOlivo marked this pull request as draft November 4, 2024 14:08
@FilippoOlivo FilippoOlivo marked this pull request as ready for review November 7, 2024 12:58
pina/solvers/pinns/basepinn.py
@@ -53,11 +55,20 @@ def __init__(
:param torch.nn.Module loss: The loss function used as minimizer,
default :class:`torch.nn.MSELoss`.
"""
if optimizers is None:
optimizers = TorchOptimizer(torch.optim.Adam, lr=0.001)
@dario-coscia (Collaborator) Nov 11, 2024

why are you not passing kwargs?
@FilippoOlivo (Author) Nov 11, 2024

kwargs are passed when TorchOptimizer is initialized outside the Solver class:

import torch
from pina import LabelTensor, Condition
from pina.problem import AbstractProblem
from pina.optim import TorchOptimizer
from pina.solver import SupervisedSolver
from torch.optim import SGD

# extra optimizer kwargs (momentum, weight_decay, ...) go here
optimizer = TorchOptimizer(SGD, lr=0.1)

in_ = LabelTensor(torch.rand((10, 2)), ['u_0', 'u_1'])
out_ = LabelTensor(torch.rand((10, 1)), ['u'])

class TestProblem(AbstractProblem):
    input_variables = ['u_0', 'u_1']
    output_variables = ['u']

    conditions = {
        'data': Condition(input_points=in_, output_points=out_),
    }

problem = TestProblem()
model = torch.nn.Linear(2, 1)
solver = SupervisedSolver(problem=problem, model=model, optimizer=optimizer)

Collaborator

Can we pass the standard torch optimizer and inside solvers do the wrapper? To me, for the user, this is more intuitive.
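The suggestion could look roughly like this inside the solver (the `as_wrapped` helper and `TorchOptimizer` stand-in below are assumptions for illustration, not PINA's API): accept either a raw `torch.optim` class or an already-wrapped optimizer, and wrap the former transparently so the user never has to touch the wrapper.

```python
import torch

# Illustrative stand-in for PINA's wrapper class (not the real one).
class TorchOptimizer:
    def __init__(self, optimizer_class, **kwargs):
        self.optimizer_class = optimizer_class
        self.kwargs = kwargs

def as_wrapped(optimizer, **kwargs):
    """Hypothetical solver-side normalization: wrap raw torch
    optimizer classes, pass already-wrapped ones through unchanged."""
    if isinstance(optimizer, TorchOptimizer):
        return optimizer
    if isinstance(optimizer, type) and issubclass(optimizer, torch.optim.Optimizer):
        return TorchOptimizer(optimizer, **kwargs)
    raise TypeError(f"unsupported optimizer: {optimizer!r}")

# user passes the plain torch class; the solver wraps it internally
wrapped = as_wrapped(torch.optim.SGD, lr=0.1)
print(wrapped.optimizer_class is torch.optim.SGD)  # True

# an already-wrapped optimizer passes through untouched
assert as_wrapped(wrapped) is wrapped
```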

@dario-coscia (Collaborator) Nov 15, 2024

@ndem0 what do you think?

pina/solvers/pinns/basepinn.py (outdated)
pina/solvers/pinns/basepinn.py (outdated)
pina/solvers/pinns/basepinn.py
pina/problem/inverse_problem.py
…LabelParameter class (equivalent of LabelTensor for torch.nn.Parameters)
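The commit above introduces a LabelParameter class described as the equivalent of LabelTensor for `torch.nn.Parameter`. A rough sketch of what such a class could look like (hypothetical, not the PR's actual implementation) is a `Parameter` subclass whose columns carry string labels:

```python
import torch

# Hypothetical sketch: a torch.nn.Parameter whose columns carry string
# labels, mirroring what LabelTensor does for plain tensors.
class LabelParameter(torch.nn.Parameter):
    def __new__(cls, data, labels, requires_grad=True):
        instance = super().__new__(cls, data, requires_grad=requires_grad)
        instance.labels = list(labels)
        return instance

    def extract(self, label):
        # select the column associated with a given label
        return self[:, self.labels.index(label)]

p = LabelParameter(torch.zeros(4, 2), labels=['mu', 'nu'])
print(p.labels)  # ['mu', 'nu']
print(p.extract('nu').shape)
```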
@FilippoOlivo FilippoOlivo marked this pull request as draft November 14, 2024 13:51

2 participants