
Change the R_MAX of CMRR.rs #219

Merged · 2 commits · Sep 8, 2024

Conversation

@Expertium (Contributor) commented on Sep 7, 2024

open-spaced-repetition/fsrs4anki#686 (comment)
It appears that our previous estimates of optimal retention were underestimates. I think it makes sense to expand the range.
[image: graph labeled "workload"]
Note that although the graph says "workload", it's not just workload; it's workload/knowledge.
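To make the workload/knowledge idea concrete, here is a toy sketch (my own illustration, not the actual CMRR.rs code): assuming the classic exponential forgetting curve R(t) = 0.9^(t/s), scheduling at desired retention DR gives an interval of s · ln(DR)/ln(0.9), so workload per card scales as 1/interval while retained knowledge scales roughly with DR.

```rust
/// Hypothetical toy model, NOT the real CMRR.rs implementation:
/// interval at desired retention `dr` under R(t) = 0.9^(t/s),
/// with workload ~ reviews/day ~ 1/interval and knowledge ~ dr.
fn cost_per_knowledge(dr: f64, stability: f64) -> f64 {
    // Interval at which retrievability drops to `dr`.
    let interval = stability * dr.ln() / 0.9_f64.ln();
    let workload = 1.0 / interval; // reviews per day per card
    let knowledge = dr;            // fraction of material retained
    workload / knowledge
}

fn main() {
    // The cost metric grows quickly as DR approaches 1.0,
    // which is the diminishing-returns effect discussed here.
    for dr in [0.85, 0.90, 0.94, 0.96, 0.98] {
        println!("DR = {dr:.2} -> cost = {:.4}", cost_per_knowledge(dr, 100.0));
    }
}
```

Even in this crude model the cost blows up near DR = 1.0, since the interval shrinks toward zero while knowledge is capped at 1.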

@L-M-Sherlock (Member) left a comment

LGTM

@L-M-Sherlock merged commit 479c387 into open-spaced-repetition:main on Sep 8, 2024
3 checks passed
@user1823 (Contributor) commented on Sep 8, 2024

I think that DR above 0.96 doesn't make much sense because the returns are diminishing.

[image: slope of the workload curve vs. DR]

In this image, note the drastic increase in slope above DR = 0.96. (There are occasional high slopes below DR = 0.96 too, but above 0.96 most of the values are too high.)

Data: open-spaced-repetition/fsrs4anki#686 (comment)
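The slope check described above can be sketched numerically (the values below are illustrative, not the data from the linked comment): compute finite-difference slopes of the workload series over the DR grid and look for where they jump.

```rust
/// Finite-difference slopes between consecutive (dr, workload) points.
/// A sketch of the check described above; the inputs in main() are
/// made-up illustrative numbers, not the linked dataset.
fn slopes(dr: &[f64], workload: &[f64]) -> Vec<f64> {
    dr.windows(2)
        .zip(workload.windows(2))
        .map(|(d, w)| (w[1] - w[0]) / (d[1] - d[0]))
        .collect()
}

fn main() {
    // Illustrative workload values on a DR grid from 0.90 to 0.98.
    let dr = [0.90, 0.92, 0.94, 0.96, 0.98];
    let workload = [1.0, 1.1, 1.3, 1.7, 3.0];
    for (d, s) in dr.windows(2).zip(slopes(&dr, &workload)) {
        println!("DR {:.2}-{:.2}: slope {:.1}", d[0], d[1], s);
    }
}
```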

@L-M-Sherlock (Member) commented

The data is based on default parameters.

@user1823 (Contributor) commented on Sep 8, 2024

Well, it's OK if you think that some people can have such a high value of MRR, but I'm not really convinced. Nonetheless, let's set this aside for now; we can always decrease the value if someone complains.

@Expertium (Contributor, Author) commented on Sep 8, 2024

We could change it slightly, to 0.97 instead of 0.98.

@Expertium (Contributor, Author) commented on Sep 9, 2024

@L-M-Sherlock do you think we should revert this (or set R_MAX to 0.96 or 0.97)? It seems like the idea that the previous values were underestimates was based on the wrong data.
Btw, here's the most recent graph, but with workload divided by retention. This is with loss_aversion = 1.
[image: workload divided by retention vs. desired retention, loss_aversion = 1]

L-M-Sherlock added a commit that referenced this pull request Sep 9, 2024
L-M-Sherlock added a commit that referenced this pull request Sep 9, 2024