Title: My Intro to Caching Notes
At work we've reached the point where we have the opportunity (privilege?) to think about optimizing our services by reducing load on the origin and reducing latency. As part of this, I've finally had time to ask questions like: What is caching? How does it work? And who can cache?
Caching is the act of taking responses to HTTP requests and storing them . . . somewhere. The reason we cache is to improve responsiveness by storing a pre-calculated response closer to the client, and thereby reduce load on the origin. Lastly, caching can happen, seemingly, at just about any stage of a request's life cycle.
Caching can happen in the client's browser, at the CDN, or on the server itself. Caching at the browser is managed via HTTP headers like Cache-Control and Expires, among others. Caching at the CDN is your neighborhood middleman who can return cached responses before they make it all the way to the server. Server caching is also possible; every request still reaches the origin, but the origin can skip recomputing the response, which reduces latency (and load on whatever sits behind it).
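As a concrete (and simplified) sketch of what browser-side caching does with those headers, here is how a client might decide whether a stored response is still fresh from Cache-Control alone. This is a toy model: it only handles the max-age and no-store directives, and real browsers consider many more (s-maxage, no-cache, the Expires fallback, heuristics, etc.).

```python
def is_fresh(cache_control: str, age_seconds: int) -> bool:
    """Decide if a cached response is still fresh, per Cache-Control.

    Toy model: only handles max-age and no-store. Real clients also
    honor no-cache, s-maxage, must-revalidate, Expires, and more.
    """
    directives: dict[str, str | None] = {}
    for part in cache_control.split(","):
        part = part.strip().lower()
        if "=" in part:
            key, _, value = part.partition("=")
            directives[key] = value
        else:
            directives[part] = None

    # no-store means the response should never have been cached at all.
    if "no-store" in directives:
        return False

    # Fresh while the response's age is under max-age (default 0).
    max_age = int(directives.get("max-age") or 0)
    return age_seconds < max_age


# A response cached 100s ago with max-age=3600 can be reused:
print(is_fresh("max-age=3600", 100))            # True
# One cached 120s ago with max-age=60 must be revalidated:
print(is_fresh("max-age=60", 120))              # False
```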
For my work, I'm most interested in caching at the CDN, which in AWS means CloudFront. CloudFront provides a lot of flexibility for managing the cache. By default, it will create a cache key consisting of the domain name (the Host header) and the URL path for GET and HEAD requests; the query string is not included. For example, if the GET request is to www.example.com/api/orders?order_id=123, only www.example.com/api/orders goes into the cache key. I believe this means that if another request is sent with order_id=456, it will return the cached response for order_id=123, i.e. the wrong response. Thus, one will likely have to use a custom cache policy that adds the relevant query string parameters to the cache key.
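To make the collision concrete, here is a small sketch of cache-key construction. This is my own toy model, not CloudFront's actual key derivation: the empty default mimics a policy that drops the query string, and passing query_keys mimics a custom cache policy that whitelists those parameters.

```python
from urllib.parse import parse_qsl, urlencode, urlsplit


def cache_key(url: str, query_keys: tuple[str, ...] = ()) -> str:
    """Build a CloudFront-style cache key from a request URL (toy model).

    With no query_keys, the query string is dropped entirely, like the
    default policy described above. Listing keys in query_keys keeps
    those parameters, like a custom cache policy would.
    """
    parts = urlsplit(url)
    key = parts.netloc + parts.path  # host + path are always included
    if query_keys:
        kept = [(k, v) for k, v in parse_qsl(parts.query) if k in query_keys]
        if kept:
            # Sort so parameter order doesn't create distinct keys.
            key += "?" + urlencode(sorted(kept))
    return key


# Default policy: both orders collide on the same cache key.
a = cache_key("https://www.example.com/api/orders?order_id=123")
b = cache_key("https://www.example.com/api/orders?order_id=456")
print(a == b)  # True

# Custom policy including order_id: the keys now differ.
c = cache_key("https://www.example.com/api/orders?order_id=123", ("order_id",))
d = cache_key("https://www.example.com/api/orders?order_id=456", ("order_id",))
print(c == d)  # False
```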
Questions:
1. For CloudFront, can you cache only some GET requests but not others?
References:
1. https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/understanding-the-cache-key.html
2. https://developer.mozilla.org/en-US/docs/Web/HTTP/Caching