Weird behaviour in terms of training the normalizing flow. #6

Open
kazewong opened this issue Apr 12, 2023 · 0 comments
Comments

@kazewong (Owner)

There is a report of the following failure mode:

When analyzing GW170817 as shown in the example, if the prior of cos(iota) is changed to [-1, -0.9] and the number of chains is set to 800, the run gives nonsensical results, such as NaN in the log_prob.

Further investigation shows that 601 and 801 chains give the same behavior, while 100, 500, and 501 chains perform normally.

This is very strange; I suspect it has to do with the training of the normalizing flow and the thinning.
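Since the only reported symptom is NaN appearing in the log_prob, a generic diagnostic along these lines may help locate where the non-finite values first show up. This is a minimal sketch in plain JAX, not tied to jim's actual API: `log_prob_fn`, the chain count, and the dimensionality are placeholders standing in for the trained flow (or posterior) and the sampler state.

```python
import jax
import jax.numpy as jnp


def check_finite_log_prob(log_prob_fn, samples):
    """Evaluate log_prob on a batch of samples and report any non-finite values.

    `log_prob_fn` is a placeholder for whatever log-density is being checked
    (e.g. the trained flow's log_prob); `samples` is an (n_chains, n_dim) array.
    """
    log_probs = jax.vmap(log_prob_fn)(samples)
    bad = ~jnp.isfinite(log_probs)
    if bool(bad.any()):
        print(f"{int(bad.sum())} / {log_probs.shape[0]} samples have non-finite log_prob")
        print("first offending indices:", jnp.where(bad)[0][:10])
    return log_probs


if __name__ == "__main__":
    # Toy standard-normal log density in place of the trained flow;
    # 800 chains and 11 dimensions are illustrative numbers only.
    toy_log_prob = lambda x: -0.5 * jnp.sum(x**2)
    x = jax.random.normal(jax.random.PRNGKey(0), (800, 11))
    check_finite_log_prob(toy_log_prob, x)
```

Running a check like this right after the flow-training step, versus right after thinning, could narrow down which stage first produces the NaNs.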

kazewong pushed a commit that referenced this issue on Aug 22, 2024:
…m-class-from-prior-class (98 moving naming tracking into jim class from prior class)