GraphQL Caching #2599

Open
SachaG opened this issue Jul 9, 2020 · 4 comments

@SachaG
Contributor

SachaG commented Jul 9, 2020

I'd like to explore how we could handle caching better. What makes this issue complex is that there are many different solutions available.

Patterns

Persisted Queries

https://www.apollographql.com/docs/apollo-server/performance/apq/

This seems like a good idea no matter what.
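
For reference, on the client side this should mostly be a wrapper around the HTTP link (a sketch, assuming Apollo Client 3 and the crypto-hash package; Apollo Server accepts persisted queries out of the box):

```js
import { ApolloClient, InMemoryCache, HttpLink } from "@apollo/client";
import { createPersistedQueryLink } from "@apollo/client/link/persisted-queries";
import { sha256 } from "crypto-hash";

// Hash queries client-side; once the server knows a query, only the hash is sent.
const link = createPersistedQueryLink({ sha256 }).concat(
  new HttpLink({ uri: "/graphql" })
);

const client = new ApolloClient({ link, cache: new InMemoryCache() });
```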

Resolver-Level Caching

https://www.apollographql.com/docs/apollo-server/performance/caching/#adding-cache-hints-dynamically-in-your-resolvers

Vulcan currently implements this (by passing enableCache: true as a resolver argument), following the Apollo Server docs, but as far as I can tell it's not working. I'm not sure if the issue is on our side or with Apollo Server.
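
For reference, the dynamic hint API from the docs looks like this (a sketch; the posts resolver and fetchPosts helper are made up, and cacheControl has to be enabled on the ApolloServer instance):

```js
const resolvers = {
  Query: {
    posts: async (root, args, context, info) => {
      // Tell Apollo Server this field's result may be cached for 60 seconds.
      info.cacheControl.setCacheHint({ maxAge: 60 });
      return fetchPosts(args); // hypothetical data-fetching helper
    },
  },
};
```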

Another way to implement this directly inside Vulcan would be to add caching with node-cache inside the default resolvers. This would have the advantage that we control everything directly, including invalidation. This could maybe be an enableVulcanCache option, although maybe we don't want to maintain two similar caching solutions…
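
Something along these lines, maybe (a rough sketch of what enableVulcanCache could do inside a default "multi" resolver; runMultiQuery and the key scheme are made up):

```js
const NodeCache = require("node-cache");

// One shared cache for resolver results; entries expire after 60 seconds.
const queryCache = new NodeCache({ stdTTL: 60 });

const multiResolver = async (root, { input }, context) => {
  const key = JSON.stringify(input);
  const cached = queryCache.get(key);
  if (cached !== undefined) return cached;

  const results = await runMultiQuery(input, context); // hypothetical helper
  queryCache.set(key, results);
  return results;
};
```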

Schema-Level Cache Hints

https://www.apollographql.com/docs/apollo-server/performance/caching/#adding-cache-hints-statically-in-your-schema

I don't fully understand how this works yet, but we could probably add a cache: true option to schema field definitions without too much trouble.
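
From the docs, the static hints are just a directive in the SDL, so a cache: true field option could conceivably compile down to something like this (the Post type below is made up):

```js
const { gql } = require("apollo-server");

const typeDefs = gql`
  type Post @cacheControl(maxAge: 60) {
    _id: String
    title: String
    viewCount: Int @cacheControl(maxAge: 30)
  }
`;
```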

Full Response Caching

https://www.apollographql.com/docs/apollo-server/performance/caching/#saving-full-responses-to-a-cache

This seems to me like the simplest/best solution: you just cache the full response whenever you can.
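
For reference, this is a plugin in Apollo Server (a sketch; typeDefs and resolvers are assumed to exist, and what gets cached is still driven by the cacheControl hints above):

```js
const { ApolloServer } = require("apollo-server");
const responseCachePlugin = require("apollo-server-plugin-response-cache");

const server = new ApolloServer({
  typeDefs,
  resolvers,
  plugins: [responseCachePlugin()],
  // Cache hints (static or dynamic) decide which responses get cached, and for how long.
  cacheControl: { defaultMaxAge: 5 },
});
```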

Issues

Invalidation

The main issue with all Apollo Server solutions is that (afaik) there is no way to manually invalidate the cache when, for example, you add or modify a document. Also, the fact that Apollo Client also has a cache makes it very hard to google any info about how Apollo Server's cache works.
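
With the node-cache approach sketched above, though, we could invalidate manually from the mutation resolvers (a sketch; updatePostMutator is made up):

```js
// "queryCache" is the NodeCache instance from the sketch above.
const updatePostResolver = async (root, { input }, context) => {
  const updatedPost = await updatePostMutator(input, context); // hypothetical mutator
  // Drop all cached query results so the next read is fresh.
  queryCache.flushAll();
  return updatedPost;
};
```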

Storage

I think an in-memory cache is fine to begin with, but eventually it'd be nice to use Redis or something similar.
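
For reference, Apollo Server can already be pointed at Redis as its cache backend (a sketch, assuming the apollo-server-cache-redis package; the host is a placeholder):

```js
const { ApolloServer } = require("apollo-server");
const { RedisCache } = require("apollo-server-cache-redis");
const responseCachePlugin = require("apollo-server-plugin-response-cache");

const server = new ApolloServer({
  typeDefs,
  resolvers,
  // Store cached responses in Redis instead of the default in-memory store.
  cache: new RedisCache({ host: "localhost" }),
  plugins: [responseCachePlugin()],
});
```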

@eric-burel
Contributor

I've played around with Redis in the past. I'm not sure you will gain a lot in this scenario, as Redis is really just… an in-memory cache, but running as a server.
The advantage of Redis is the powerful methods it has, like fetching keys based on a custom range. You can add powerful features such as rate limiting with a sliding window. I had an extremely positive experience with it (like, everything worked immediately and intuitively, from install to deploy, including unit testing), but in our scenario that may be slightly overkill?
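
For illustration, the sliding-window idea is just a sorted set keyed by timestamp (a rough sketch using ioredis; the limit and window values are arbitrary):

```js
const Redis = require("ioredis");
const redis = new Redis();

// Allow at most `limit` requests per `windowMs` for a given user.
const allowRequest = async (userId, limit = 100, windowMs = 60000) => {
  const key = `rate:${userId}`;
  const now = Date.now();
  await redis.zremrangebyscore(key, 0, now - windowMs); // drop entries outside the window
  await redis.zadd(key, now, `${now}:${Math.random()}`); // record this request
  return (await redis.zcard(key)) <= limit;
};
```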

@SachaG
Contributor Author

SachaG commented Jul 10, 2020

Yeah… maybe another advantage with Redis is that you don't risk overloading your main server's memory? I could see that happening if your cache is not well configured and just keeps getting bigger and bigger.

@eric-burel
Contributor

eric-burel commented Jul 10, 2020

Yeah, that's totally right. I'll read the docs you listed to be more knowledgeable on this part, that's very interesting. I tend to trust the DB and use indexing in those scenarios, but at a certain scale every company I meet ends up introducing such key-based caches.

I know Kuzzle (https://kuzzle.io/) is extremely good at performance and scaling, that could be a source of inspiration (not GraphQL, but in the end the same issues apply to all Node devs).

Edit: they use Redis and Elasticsearch out of the box, for instance.

@lorensr

lorensr commented Aug 16, 2020

no way to manually invalidate the cache

Correct, at least not out of the box. apollographql/apollo-server#3228

So I think of maxAge as "how long I'm okay with stale data being read for."

Yeah… maybe another advantage with Redis is that you don't risk to overload your main server's memory? I could see that happening if your cache is not well-configured and just keeps getting bigger and bigger maybe?

Ha! I'm actually surprised that the default Apollo Server cache apparently has no size limit 😄. It is, however, easily customizable:

https://github.com/apollographql/apollo-server/blob/main/packages/apollo-server-caching/src/InMemoryLRUCache.ts#L19
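
e.g. something like this should cap it (a sketch; maxSize is measured in the cache's approximate item sizes, roughly bytes for string values):

```js
const { ApolloServer } = require("apollo-server");
const { InMemoryLRUCache } = require("apollo-server-caching");

const server = new ApolloServer({
  typeDefs,
  resolvers,
  // Cap the cache at ~100 MiB instead of the unbounded default.
  cache: new InMemoryLRUCache({ maxSize: Math.pow(2, 20) * 100 }),
});
```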

But for the Next.js version of Vulcan, I think only Redis makes sense, since it's serverless.
