- Consider caching request results so a request does not have to be sent every time. This is particularly useful with a large number of tags, since it avoids fetching and re-classifying all tags on every grouping (a rough sketch follows below).
- This approach also safeguards user interactions, so their manual actions are not overwritten.
- Consider offering an option to re-sort all tags (maybe not needed, since Ungroup All can already achieve this).
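A minimal sketch of what per-tag caching could look like, assuming a TypeScript codebase and a hypothetical `classifyTags` wrapper around the existing OpenAI call (all names here are illustrative, not the project's actual API):

```ts
// Rough sketch: cache OpenAI classification results per tag, so only
// tags that have never been classified before trigger a new API request.

type Group = string;

const cache = new Map<string, Group>(); // tag -> assigned group

async function groupTags(
  tags: string[],
  classifyTags: (tags: string[]) => Promise<Record<string, Group>>
): Promise<Record<string, Group>> {
  const unknown = tags.filter((t) => !cache.has(t));

  // Only send the tags we have not seen before.
  if (unknown.length > 0) {
    const fresh = await classifyTags(unknown);
    for (const [tag, group] of Object.entries(fresh)) {
      cache.set(tag, group);
    }
  }

  // Assemble the result from the cache (covers both old and new tags).
  const result: Record<string, Group> = {};
  for (const tag of tags) {
    const group = cache.get(tag);
    if (group !== undefined) result[tag] = group;
  }
  return result;
}
```

If this runs in a browser extension, the in-memory Map could additionally be persisted to extension storage so the cache survives restarts, but that is an implementation detail.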
I have no experience with caching OpenAI results, but I'll try to learn and implement it.
It would also be great to have an option to sort groups.
Though I use the OpenAI API every day to boost my workflow, my invoice is still under $2 every month. Caching sounds more like a good-to-have feature, but it's really interesting. Let me know if you want it badly and I can contribute.
The best way to save credits: don't use gpt-4 :D
I think this feature needs more discussion since it may involve a lot of work, and it might conflict with #35. I'm also not sure whether the caching approach would be the same for different models or other classification solutions.
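One possible way to sidestep that concern would be to namespace cached entries by the model and classification backend, so results from different models or solutions never collide. A tiny illustrative sketch (names are hypothetical, not the project's actual API):

```ts
// Include the classification solution and model in the cache key so
// switching models does not reuse stale results from another model.
function cacheKey(tag: string, model: string, solution: string): string {
  return `${solution}:${model}:${tag}`;
}

// e.g. cacheKey("reading-list", "gpt-3.5-turbo", "openai")
//   -> "openai:gpt-3.5-turbo:reading-list"
```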