- TTL is a no-op on foyer. Remove it.
- Optional sentry instrumentation.
- Prometheus metrics.
- Attributes on certain metrics. Move to using a prometheus lib.
- Wire up foyer metrics with mixtricks.
- Distributed tracing: accept sentry trace baggage (?)
- Stream object instead of fetch -> save -> serve.
- What happens if the client disconnects early? I want cache writes to still go through.
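One way to make this concrete is to tee upstream chunks to both the cache and the client, and keep draining the upstream into the cache even after a client write fails. A minimal synchronous sketch (all names hypothetical; the real server would do this over async streams):

```rust
// Sketch: keep caching even if the client disconnects mid-download.
// `upstream` yields chunks from object storage; `client_write` returns
// false once the client has gone away.
fn tee_to_cache_and_client<I, W>(
    upstream: I,
    cache_buf: &mut Vec<u8>,
    mut client_write: W,
) -> bool
where
    I: Iterator<Item = Vec<u8>>,
    W: FnMut(&[u8]) -> bool,
{
    let mut client_alive = true;
    for chunk in upstream {
        // The cache write is unconditional: a disconnect must not abort it.
        cache_buf.extend_from_slice(&chunk);
        if client_alive {
            client_alive = client_write(&chunk);
        }
    }
    client_alive
}
```

The key property: the loop runs to completion regardless of the client, so the next request for the same object is a full cache hit.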
- More store types (GCS, filesystem, etc).
- Fixed-token auth, when pre-signing is not necessary.
- /populate/{bucket}/{path} endpoint to pre-populate the cache. Useful for objects that are expected to be hot but haven't been accessed yet.
- Multiple config files. If given, merge all in order before parsing the final config. Useful when secrets are split out in some environments, e.g. Kubernetes with ConfigMaps and Secrets.
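The merge itself can be a left-to-right fold over the parsed files, with later files overriding earlier keys. A shallow sketch over plain string maps (a real implementation would deep-merge the parsed config tree, e.g. TOML or YAML values):

```rust
use std::collections::BTreeMap;

// Sketch: merge config files in the order given; later files win.
fn merge_configs(files: Vec<BTreeMap<String, String>>) -> BTreeMap<String, String> {
    let mut merged = BTreeMap::new();
    for file in files {
        // `extend` overwrites existing keys, so the last file takes precedence.
        merged.extend(file);
    }
    merged
}
```

This ordering is what makes the ConfigMap + Secret split work: the base config ships without credentials and the secret file layered on top fills them in.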
- Stream through if the object is too large.
- Hybrid in-memory + disk cache. In-memory is faster but more expensive, so use it for hot objects and fall back to disk for less popular ones. This lets the cache scale beyond available memory while still on a single node. foyer can be the backbone.
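To illustrate the two-tier behavior (this is a toy, not foyer's actual API; in the real design foyer's hybrid cache would manage both tiers): a small FIFO-bounded memory tier demotes evictions to a disk tier, and a disk hit promotes the object back into memory.

```rust
use std::collections::{HashMap, VecDeque};

// Toy two-tier cache: a capacity-bounded memory tier over an unbounded
// "disk" tier (a HashMap stands in for the on-disk store here).
struct TieredCache {
    mem_cap: usize,
    mem: HashMap<String, Vec<u8>>,
    mem_order: VecDeque<String>, // FIFO eviction order for the memory tier
    disk: HashMap<String, Vec<u8>>,
}

impl TieredCache {
    fn new(mem_cap: usize) -> Self {
        Self { mem_cap, mem: HashMap::new(), mem_order: VecDeque::new(), disk: HashMap::new() }
    }

    fn insert(&mut self, key: &str, value: Vec<u8>) {
        if self.mem.len() >= self.mem_cap {
            // Demote the oldest in-memory entry to disk instead of dropping it.
            if let Some(old) = self.mem_order.pop_front() {
                if let Some(v) = self.mem.remove(&old) {
                    self.disk.insert(old, v);
                }
            }
        }
        self.mem.insert(key.to_string(), value);
        self.mem_order.push_back(key.to_string());
    }

    fn get(&mut self, key: &str) -> Option<Vec<u8>> {
        if let Some(v) = self.mem.get(key) {
            return Some(v.clone());
        }
        // Disk hit: promote back into memory, since the object is hot again.
        if let Some(v) = self.disk.remove(key) {
            self.insert(key, v.clone());
            return Some(v);
        }
        None
    }
}
```

A production version would use a real eviction policy (LRU/S3-FIFO) and bound the disk tier too; foyer provides both out of the box.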
- Gateway node. If a single node can't handle the workload, we'll want to scale out. At that point, to maintain cache hits, we'll want consistent-ish routing of requests to the available nodes. Note: we might just want to use rendezvous hashing to sidestep the cascading-overload problem, but for the use case I'm writing this for that's not so relevant.
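Rendezvous (highest-random-weight) hashing is easy to sketch: score every (key, node) pair with a hash and route the key to the highest-scoring node. When a node disappears, only its keys move, and they spread over all remaining nodes instead of cascading onto one neighbor. A sketch using std's hasher (a real gateway should use an explicitly seeded hash, since `DefaultHasher`'s output isn't guaranteed stable across Rust releases):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Rendezvous hashing: route `key` to the node with the highest hash score
// for the (key, node) pair.
fn route<'a>(key: &str, nodes: &'a [&'a str]) -> Option<&'a str> {
    nodes.iter().copied().max_by_key(|node| {
        let mut h = DefaultHasher::new();
        (key, *node).hash(&mut h);
        h.finish()
    })
}
```

Because the winning score depends only on the (key, node) pair, removing a non-winning node never reshuffles a key's assignment.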
- Multiple buckets to serve the same logical bucket. Could be useful to fulfill QoS requirements.
- Write-through: expose PUT. This is kind of important for fresh objects, since object storage is eventually consistent.
- Object paging. Cachey has this as a requirement. We could have an optional version of this design.