Caching aggregates to avoid repeated loads #157
Stating the obvious: this requires the state (aka the aggregate instance) to be thread-safe, to cover the case where two callers take it from the cache for an overlapping period. I'd say that having the state meet this requirement is a good pattern in general, but it's definitely another concept that needs to be covered in the high-level docs (i.e. referring to using …).

When designing the API, it's sometimes useful to be able to specify that you want to skip reading newer events: if the cache has a value, assume it's sufficiently up to date and avoid the round trip (in Equinox, I call that …).
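A minimal sketch of that "skip the round trip when the cache has a value" idea, assuming a plain IMemoryCache and a state built by folding the stream. All type and member names here are hypothetical; this is not the Equinox API nor this repository's, just the concept it describes.

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.Extensions.Caching.Memory;

// Hypothetical names throughout - this only illustrates the "allow stale" idea.
public enum StateFreshness { RequireLatest, AllowStale }

public class CachedStateLoader<T> where T : class {
    readonly IMemoryCache          _cache;
    readonly Func<string, Task<T>> _loadFromStream; // reads the stream and folds events into state

    public CachedStateLoader(IMemoryCache cache, Func<string, Task<T>> loadFromStream) {
        _cache          = cache;
        _loadFromStream = loadFromStream;
    }

    public async Task<T> Load(string streamName, StateFreshness freshness) {
        // AllowStale: if the cache already has a value, skip the round trip and accept possibly stale state.
        if (freshness == StateFreshness.AllowStale && _cache.TryGetValue(streamName, out T cached))
            return cached;

        var state = await _loadFromStream(streamName);
        _cache.Set(streamName, state);
        return state;
    }
}
```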
You're right, I hadn't considered concurrent calls for the same object. The expected scenario, however, was a bit different and doesn't involve concurrency: in many cases a user opens a single page and uses some task-based UI to execute several operations sequentially, in a relatively short time (for a computer). Regarding the implementation, I planned to use the ASP.NET Core caching extensions and avoid thinking about concurrency. When the cached state is loaded, I'd expect it to be a deserialised instance (I might be wrong), so concurrent calls wouldn't get the same instance.
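For the sequential, task-based UI scenario described above, the ASP.NET Core caching extensions could be used roughly as sketched here, with a short sliding expiration keeping the state warm between the user's commands. `BookingState`, `LoadBookingState`, and the expiration value are illustrative assumptions, not part of the library.

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.Extensions.Caching.Memory;

public static class BookingCache {
    // Returns the cached state if present, otherwise loads it and caches it for a short while.
    public static Task<BookingState> GetState(IMemoryCache cache, string bookingId)
        => cache.GetOrCreateAsync(bookingId, entry => {
            entry.SlidingExpiration = TimeSpan.FromMinutes(2); // tuned to the task-based UI flow
            return LoadBookingState(bookingId);                // reads the stream and folds events
        });

    static Task<BookingState> LoadBookingState(string bookingId)
        => throw new NotImplementedException("placeholder for the real stream read");
}

public class BookingState { }
```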
If you look at the Equinox impl, it uses …
I planned to use …
I'm in need of this - any thoughts on an ETA? :-) (Sorry for asking...)
It does not rely on serialization - it holds onto the actual object, in your single process (only).
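A quick way to see this point: two reads of the same key return the very same object reference, which is exactly why the thread-safety concern above applies.

```csharp
using Microsoft.Extensions.Caching.Memory;

var cache = new MemoryCache(new MemoryCacheOptions());
var state = new object();
cache.Set("Booking-123", state);

// Both lookups return the same instance - nothing is serialized or copied.
var a = cache.Get<object>("Booking-123");
var b = cache.Get<object>("Booking-123");
System.Console.WriteLine(ReferenceEquals(a, b)); // True
```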
As I am currently working on subscriptions (splitting the physical subscription from the consumer pipeline), this one is now up for grabs. It's relatively straightforward, and I think a good first step would be to add it at the lowest level (the stream read in the event reader). Composition would work nicely there.
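A sketch of what composition at the event-reader level could look like: a decorator that caches the events already read for a stream. `IEventReader` and `StreamEvent` are simplified stand-ins, not the library's actual types, and the "cache the whole stream" approach is deliberately naive.

```csharp
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.Caching.Memory;

public record StreamEvent(long Position, object Payload);

public interface IEventReader {
    Task<StreamEvent[]> ReadStream(string streamName, CancellationToken ct);
}

public class CachingEventReader : IEventReader {
    readonly IEventReader _inner;
    readonly IMemoryCache _cache;

    public CachingEventReader(IEventReader inner, IMemoryCache cache) {
        _inner = inner;
        _cache = cache;
    }

    public async Task<StreamEvent[]> ReadStream(string streamName, CancellationToken ct) {
        if (_cache.TryGetValue(streamName, out StreamEvent[] cached))
            return cached; // naive: a fuller version would only read events after the cached position

        var events = await _inner.ReadStream(streamName, ct);
        _cache.Set(streamName, events);
        return events;
    }
}
```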
#218, #219, and #220 together add implicit support for IMemoryCache (if injected). Tests in my own application indicate that more than 10 times as many (basic) commands can be processed per second than without caching. With longer intervals, the difference should be even more pronounced. I've ignored composability, but I'll look into it next - use of IMemoryCache must at least be optional.
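On the application side, the "if injected" part would presumably only require registering the memory cache in the service collection, e.g.:

```csharp
using Microsoft.Extensions.DependencyInjection;

var services = new ServiceCollection();
services.AddMemoryCache(); // makes IMemoryCache available; caching would then presumably be picked up
```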
Proposal: it can be done in a composable version of the AggregateStore. EVE-35
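A possible shape for that composable version, sketched as a caching decorator. `IAggregateStoreLike` is a simplified stand-in; the real `AggregateStore` signatures will differ.

```csharp
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.Caching.Memory;

public interface IAggregateStoreLike {
    Task<T> Load<T>(string id, CancellationToken ct) where T : class;
    Task Store<T>(string id, T aggregate, CancellationToken ct) where T : class;
}

public class CachingAggregateStore : IAggregateStoreLike {
    readonly IAggregateStoreLike _inner;
    readonly IMemoryCache        _cache;

    public CachingAggregateStore(IAggregateStoreLike inner, IMemoryCache cache) {
        _inner = inner;
        _cache = cache;
    }

    public async Task<T> Load<T>(string id, CancellationToken ct) where T : class {
        if (_cache.TryGetValue(id, out T cached)) return cached;
        var aggregate = await _inner.Load<T>(id, ct);
        _cache.Set(id, aggregate);
        return aggregate;
    }

    public async Task Store<T>(string id, T aggregate, CancellationToken ct) where T : class {
        await _inner.Store(id, aggregate, ct);
        _cache.Set(id, aggregate); // refresh the cache after a successful append
    }
}
```

Composing it as a decorator would also keep the IMemoryCache dependency optional: when no cache is registered, the plain store is used unchanged.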