
Big-O notation, find a resolution #50

Open
MatthewCaseres opened this issue Oct 8, 2023 · 2 comments


@MatthewCaseres
Contributor

We have an approach for deriving Big-O memory complexity for our iterative Julia models (#34). I wanted a way of doing the same for the lifelib models and had an idea I thought would work, but it doesn't seem like we should do it (#49).

We want to classify the common approaches to actuarial modeling by this Big-O notation:

  • Seriatim memoized
  • Vectorized memoized
  • Iterative

One option is to simplify the model while maintaining its fundamental structure, and then make an appeal to logic/math the way a computer scientist might. That way the Big-O argument wouldn't depend on a specific implementation.

Anyhow, we don't yet have an argument that the lifelib models satisfy particular complexity properties. That a seriatim model has different memory complexity than a vectorized one is obvious in some sense, but stating it precisely seems necessary to really do the topic justice, in my opinion.
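As a rough illustration of why the three approaches should differ, here is a minimal Python sketch (not lifelib's actual code; the flat-mortality survival model and all names are hypothetical): the iterative version carries O(1) state, the seriatim memoized version caches O(T) scalars while a policy is projected, and the vectorized memoized version caches O(N·T) values.

```python
# Toy survival projection, for illustration only.
from functools import lru_cache
import numpy as np

T = 120          # projection horizon in months
N = 10_000      # number of policies
q = 0.001        # flat monthly mortality rate (hypothetical)

# 1. Iterative: carry only the current state forward -> O(1) memory per policy.
def pols_if_iterative(t_end: int) -> float:
    lives = 1.0
    for _ in range(t_end):
        lives *= (1 - q)
    return lives

# 2. Seriatim memoized: recursion + cache stores every time step
#    -> O(T) memory per policy while that policy is being projected.
@lru_cache(maxsize=None)
def pols_if_memoized(t: int) -> float:
    if t == 0:
        return 1.0
    return pols_if_memoized(t - 1) * (1 - q)

# 3. Vectorized memoized: each cached entry is an array over all N policies,
#    so the cache grows as O(N * T).
@lru_cache(maxsize=None)
def pols_if_vectorized(t: int) -> np.ndarray:
    if t == 0:
        return np.ones(N)
    return pols_if_vectorized(t - 1) * (1 - q)

# pols_if_iterative(T) holds one float; pols_if_memoized(T) ends up caching
# T+1 floats; pols_if_vectorized(T) caches T+1 arrays of length N.
```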

@serenity4
Contributor

Yeah, I don't know how computer scientists typically justify claims of time or memory complexity. For the iterative Julia implementations it's usually obvious: the complexity can be read off from how many for loops you have and what they iterate over. But when caching is involved (which is the case in many real-world domains outside of math-heavy algorithms, and in the memoization-based implementations here), things get more complicated. We could investigate how other computer scientists approach the problem, e.g. read research articles from implementation domains that face similar challenges.
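For the iterative case, a hypothetical toy projection shows what I mean by reading the complexity off the loop structure:

```python
# For iterative code, time complexity usually falls out of the loop nest.
# This toy projection (all numbers are placeholders) loops over N policies
# and T periods -> O(N * T) time, keeping only running totals -> O(1)
# auxiliary memory.
def project_cashflows(n_policies: int, horizon: int) -> float:
    total = 0.0
    for _ in range(n_policies):       # O(N)
        lives = 1.0
        for _ in range(horizon):      # O(T) per policy
            lives *= (1 - 0.001)      # flat mortality, illustration only
            total += lives * 100.0    # premium of 100, illustration only
    return total
```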

@serenity4
Contributor

I guess we could either reason it out theoretically (perhaps simplifying the model to obtain a formal result), or find ways to estimate complexity empirically.

Time complexity is generally easy to estimate empirically, since it's just a matter of benchmarking at different parameter sizes, so the harder question is how to derive memory complexity. A typical approach is to measure the memory consumption of the process, which works well for languages without a garbage collector; but maybe we can find studies on how such estimation is done for Python algorithms (or maybe JavaScript?).
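For Python specifically, one possibility (an assumption on my part, not something settled above) is the standard-library `tracemalloc` module, which reports peak allocations by Python objects independently of when the garbage collector returns memory to the OS. Fitting peak usage against input size then gives an empirical growth exponent; the workload below is just a stand-in.

```python
# Empirical memory-complexity estimate: measure peak traced allocations at
# several input sizes and fit the slope of log(peak) vs log(n).
import math
import tracemalloc

def workload(n: int) -> list:
    return [0.0] * n  # stand-in for running a model of size n

def peak_bytes(n: int) -> int:
    tracemalloc.start()
    workload(n)
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return peak

sizes = [10_000, 20_000, 40_000, 80_000]
peaks = [peak_bytes(n) for n in sizes]

# Slope ~1.0 here, i.e. linear O(n) memory for this stand-in workload.
slope = (math.log(peaks[-1]) - math.log(peaks[0])) / (
    math.log(sizes[-1]) - math.log(sizes[0]))
print(f"estimated memory-complexity exponent: {slope:.2f}")
```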
