Optimized w[3] too low #69
Because we use …
But the orange line doesn't match any of the data points.
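For context, here is a minimal sketch of the kind of count-weighted curve fit being discussed, assuming the FSRS-4.5 power forgetting curve and made-up data; this is not the optimizer's exact code. A weighted fit minimizes overall error across all points, so the fitted (orange) line can legitimately pass through none of the individual data points:

```python
# Sketch of a count-weighted fit of the FSRS-4.5 power forgetting curve to
# (delta_t, retention) points. All data below is made up for illustration.
import numpy as np
from scipy.optimize import curve_fit

DECAY = -0.5
FACTOR = 19 / 81  # chosen so that R(t = S) = 0.9

def power_forgetting_curve(t, s):
    return (1 + FACTOR * t / s) ** DECAY

delta_t = np.array([1, 2, 3, 5, 8])            # days until first review
retention = np.array([0.98, 0.95, 0.96, 0.90, 0.89])
counts = np.array([40, 30, 25, 20, 14])        # reviews behind each point

(s0,), _ = curve_fit(
    power_forgetting_curve,
    delta_t,
    retention,
    sigma=1 / np.sqrt(counts),  # higher-count points get more weight
    bounds=(0.1, 365),
)
print(f"fitted initial stability S0 = {s0:.4f}")
```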
Unrelated, but in 8ac5d5b#diff-66ddf16d3b863ea428c3d0d49c515150cb2c4cd81dc1abe88123b4c08824d550R831, shouldn't the range be (1, 4) rather than (1, 5)?
In Python, `range(1, 4)` doesn't include 4, so `range(1, 5)` is needed to cover all four ratings.
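A quick check of the endpoint behaviour:

```python
>>> list(range(1, 4))
[1, 2, 3]
>>> list(range(1, 5))  # covers all four ratings 1-4
[1, 2, 3, 4]
```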
I think 129 data points are not enough to allow the parameter to deviate much from the default value.
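As a rough illustration of that intuition (a hypothetical sketch, not the optimizer's actual formula), count-weighted shrinkage toward a default keeps an estimate near the default when the sample is small:

```python
# Hypothetical shrinkage toward a default value; `prior_strength` is an
# assumed constant, not an FSRS parameter.
def shrink_toward_default(fitted: float, default: float, n: int,
                          prior_strength: int = 200) -> float:
    w = n / (n + prior_strength)
    return w * fitted + (1 - w) * default

# With only 129 datapoints, the estimate stays much closer to the default:
print(shrink_toward_default(fitted=60.0, default=20.0, n=129))  # ~35.7
```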
But I think that w[3] for my collection should still be somewhat higher. Even if it doesn't reach 60, it should be at least 40.
I prefer to be conservative here, and a shorter interval allows FSRS to collect data more quickly.
In that case, I will have to manually increase the value of w[3], because such a low value is not acceptable to me (especially when it is equal to w[2], i.e., Easy gets the same initial stability as Good). By the way, after manually increasing the value of w[3] to 60 and clicking …
@Expertium, what do you think?
Well, according to my testing, the new implementation is more accurate. That's all I can say.
And the default w[3] has decreased in the recent update, which could also contribute to this issue. I will update the default weights this week.
I think that the real issue is 8ac5d5b#diff-66ddf16d3b863ea428c3d0d49c515150cb2c4cd81dc1abe88123b4c08824d550R656. I used Python optimizer version 4.12.0 to obtain a more complete S0 dataset from my deck file. A screenshot of the relevant portion is here:

[screenshot of the S0 dataset]

As you can see, I don't have only 129 datapoints with initial_rating = 4, but 294. That change wiped out more than half of the available data.

At one point, I myself suggested filtering out such datapoints (open-spaced-repetition/fsrs4anki#282 (comment)). But now I am quite sure that filtering this data is not a good idea.
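To make the effect concrete, here is a hypothetical illustration; the filter condition and column names below are assumptions, not the actual line at R656:

```python
# Hypothetical: one pre-filter on the grouped S0 data can silently drop over
# half of the reviews behind a single first rating (294 -> 129 above),
# leaving the curve fit with much less evidence. The counts are constructed
# to match the numbers in this thread.
import pandas as pd

s0 = pd.DataFrame(
    {
        "first_rating": [4] * 6,
        "delta_t": [1, 2, 3, 5, 8, 13],
        "retention": [0.99, 1.0, 0.97, 1.0, 0.95, 1.0],
        "count": [55, 70, 44, 55, 30, 40],
    }
)

# assumed example condition: drop datapoints with perfect observed retention
filtered = s0[s0["retention"] < 1.0]

print(f"reviews before: {s0['count'].sum()}, after: {filtered['count'].sum()}")
# reviews before: 294, after: 129
```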
I decided to remove that line myself and see what happens. (user1823@803d4b2)
With Anki 23.10.1, the optimized w[3] for my collection is 60.9883.
With Anki 23.12, the optimized w[3] for my collection is just 21.8133 (which is the same as w[2]).
With the Python optimizer (v4.20.2), the optimized w[3] is 21.2803. So, the problem is common to both Python and Rust optimizers.
Note: all three values above were obtained using exactly the same collection.
S0 dataset:

Forgetting curve for Easy first rating:

Optimized parameters:
- Anki 23.10.1: 1.2447, 1.8229, 18.9728, 60.9883, 4.4507, 1.3387, 1.9628, 0.0193, 1.7256, 0.1265, 1.1219, 1.7763, 0.1475, 0.6263, 0.3309, 0.0095, 6.5533
- Anki 23.12: 1.2404, 3.9049, 21.8133, 21.8133, 4.8042, 1.3335, 1.7620, 0.0061, 1.7545, 0.1255, 1.0512, 1.7787, 0.1733, 0.5952, 0.1943, 0.0207, 3.9980
- Python optimizer v4.20.2: 1.1968, 3.7785, 21.1966, 21.2803, 4.4494, 1.733, 2.1947, 0.0, 1.8094, 0.1566, 1.1751, 1.4114, 0.1838, 0.7023, 0.0132, 0.0, 4.0