
Add throttle caching #17

Merged: 4 commits into main on Feb 11, 2024
Conversation

julik (Contributor) commented Feb 11, 2024

Cached throttles can be used when you want to lift your throttle blocks into a higher-level cache. If clients are hammering your throttles a lot, it is useful to keep a process-local cache of the timestamp at which a block that has been set will expire. If you are running, say, 10 web app containers and someone is hammering an endpoint that has started blocking, you don't really need to query your DB for every request. The first request that Pecorino reports as "blocked" can write a cache entry into a shared in-memory table, and all subsequent calls within the same process can reuse that blocked_until value to quickly refuse the request.
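For illustration, here is a minimal sketch of how such a cached throttle might be wired up. The wrapper name (`Pecorino::CachedThrottle`), its constructor arguments, and the throttle parameters shown are assumptions made for the example rather than details taken from the diff; the point is only that an existing database-backed throttle gets wrapped with an in-memory cache store:

```ruby
# Illustrative sketch only: the wrapper name, its constructor, and the throttle
# parameters below are assumptions about the API added in this PR.
require "active_support"
require "active_support/cache"
require "active_support/core_ext/integer/time"
require "pecorino"

# The underlying throttle keeps its authoritative state in the database.
throttle = Pecorino::Throttle.new(
  key: "videos-per-client-12345",
  limit: 10,
  over_time: 1.minute,
  block_for: 30.minutes
)

# Wrap it in a process-local, in-memory cache. Once Pecorino reports a block,
# the blocked_until timestamp is kept in memory, so later requests handled by
# the same process are refused without another database query.
memory_store = ActiveSupport::Cache::MemoryStore.new
cached_throttle = Pecorino::CachedThrottle.new(memory_store, throttle)

# Used like a regular throttle: raises when the block is in effect.
cached_throttle.request!
```

A process-local MemoryStore is sufficient here because the cache only needs to short-circuit repeat requests hitting the same container while a block is active; the database remains the source of truth across containers.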

This can be used while a block is in effect, when a faster, memory-based cache store is available
julik marked this pull request as ready for review February 11, 2024 18:23
julik merged commit b1bf06c into main Feb 11, 2024
3 checks passed