
Perform likelihood calculations on log scale to improve accuracy in edge cases #24

Open
2 of 7 tasks
AngusMcLure opened this issue Nov 30, 2023 · 0 comments
AngusMcLure commented Nov 30, 2023

fi_pool_cluster() currently performs a lot of integration over integrands involving many subtractions, powers, and multiplications, which begin to suffer when prevalence (the variable of integration) is close to 0 or 1. Making these calculations robust at these edge cases might avoid some of the convergence issues we are getting with extreme values of beta. I have already done this for logitnorm and it looks promising, but we should apply it more broadly. A list of functions to consider this for:

  • FI Integrand functions (link-norm)
  • phi and 1-phi
  • FI Integrand functions (beta)
  • Change of variable integrand functions (link-norm only)
  • link functions
  • inverse link functions?
  • derivative of inverse link functions?
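To illustrate the kind of rewrite proposed for phi and 1-phi: for a pool of size s with prevalence p, phi = 1 - (1-p)^s, and the naive form underflows or loses precision when p is near 0 or 1. A minimal sketch of the log-scale versions (shown here in Python for illustration; the package itself is R, where `log1p`/`expm1` are equally available):

```python
import math

def log_phi(p: float, s: int) -> float:
    # log P(pool positive) = log(1 - (1-p)^s)
    # Stable form: (1-p)^s is exp(s * log1p(-p)), and 1 - exp(x) is -expm1(x),
    # so the whole expression is log(-expm1(s * log1p(-p))).
    # This stays accurate even when p is tiny and the naive 1 - (1-p)**s rounds to 0.
    return math.log(-math.expm1(s * math.log1p(-p)))

def log_one_minus_phi(p: float, s: int) -> float:
    # log P(pool negative) = s * log(1-p), computed via log1p to avoid
    # cancellation in 1-p when p is close to 0.
    return s * math.log1p(-p)
```

For moderate p the stable and naive forms agree, but for example at p = 1e-17 with s = 10 the naive `math.log(1 - (1 - p)**s)` is `log(0) = -inf`, while `log_phi` returns the correct value near log(1e-16).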
@AngusMcLure AngusMcLure self-assigned this Nov 30, 2023