Aside from security best practices, there may be another reason to use separate API tokens for separate people and, especially, services: rate limits. Crossref states for Metadata Plus that:

> Rate limiting of the API is primarily on a per access token basis. If a method allows, for example, for 75 requests per rate limit window, then it allows 75 requests per window per access token. This number can depend on the system state and may need to change. If it does, Crossref will publish it in the response headers.
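For reference, here is a minimal sketch (Python with `requests`) of how a client can inspect the limit advertised in those response headers. The header names `X-Rate-Limit-Limit` and `X-Rate-Limit-Interval` are the ones documented for the public REST API; whether they apply unchanged to Plus traffic, and the `Crossref-Plus-API-Token` header shown, should be double-checked against the Plus docs:

```python
import requests

# Hypothetical Plus request; the token value is a placeholder.
resp = requests.get(
    "https://api.crossref.org/works",
    params={"query": "metascience", "rows": 5},
    headers={"Crossref-Plus-API-Token": "Bearer <PLUS_TOKEN>"},
)

# Crossref publishes the current per-token limit in the response headers,
# so it can change with system state without breaking clients.
limit = resp.headers.get("X-Rate-Limit-Limit")        # e.g. "75"
interval = resp.headers.get("X-Rate-Limit-Interval")  # e.g. "1s"
print(f"{limit} requests per {interval} allowed for this access token")
```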
The problem with us (people) and several services (Azure, GitHub Actions) sharing the same token is that any one of these users might (accidentally) exhaust the rate limit at the expense of another user or machine.
This can easily happen, because it is generally fine to go up to the rate limit on any individual machine or service.
As a result, seemingly unrelated services or other users' queries may break intermittently, which could be quite surprising and hard to debug.
This is unlikely to be an issue initially, but may well become one eventually and should be addressed head-on with at least one token per user and per service.
Depending on our scaling, Azure may even need several tokens, or the Shiny app must ask Azure how many instances are currently running and then divide the rate limit accordingly (see the sketch below).
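To illustrate the division idea, a rough sketch (Python; `get_instance_count()` is a hypothetical helper, the 1-second window length is an assumption, and the 75-requests figure is taken from the example in the quote above):

```python
import time

SHARED_LIMIT = 75     # example per-token limit from the Crossref quote above
WINDOW_SECONDS = 1.0  # assumed length of the rate-limit window

def get_instance_count() -> int:
    """Hypothetical helper: ask Azure how many app instances are running."""
    return 3  # placeholder; in practice this would query the Azure management API

class PerInstanceLimiter:
    """Token bucket sized to this instance's share of the shared per-token budget."""

    def __init__(self) -> None:
        self._refill()

    def _refill(self) -> None:
        # Each instance only spends its share of the shared limit per window.
        self.budget = max(1, SHARED_LIMIT // get_instance_count())
        self.window_start = time.monotonic()

    def acquire(self) -> None:
        """Block until this instance may send one more request in the current window."""
        if time.monotonic() - self.window_start >= WINDOW_SECONDS:
            self._refill()
        while self.budget <= 0:
            time.sleep(max(0.0, WINDOW_SECONDS - (time.monotonic() - self.window_start)))
            self._refill()
        self.budget -= 1
```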
As a (slow) workaround, falling back to the open API might help (#36).
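That fallback could look roughly like this (a sketch; it assumes the API signals an exhausted limit with HTTP 429 and that simply omitting the Plus token routes the request to the public pool):

```python
import requests

API = "https://api.crossref.org/works"
PLUS_HEADERS = {"Crossref-Plus-API-Token": "Bearer <PLUS_TOKEN>"}  # placeholder token

def fetch_works(params: dict) -> requests.Response:
    """Try the Plus token first; fall back to the (slower) open API when throttled."""
    resp = requests.get(API, params=params, headers=PLUS_HEADERS)
    if resp.status_code == 429:  # assumed "rate limit exceeded" response
        # Retry without the Plus token, i.e. against the open API (#36).
        resp = requests.get(API, params=params)
    return resp
```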
Another alternative (if licensing/cost prohibit additional tokens) would be, as we previously considered, to upload the dumps into our own database (BigQuery or similar) and to run the queries against that.
Then we could administer our own access credentials and rate limits.
However, this would carry quite a lot of overhead, so hopefully we won't need it.
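If we ever went down that route, querying our own copy could look roughly like this (a sketch using the `google-cloud-bigquery` client; the project/dataset/table names are made up):

```python
from google.cloud import bigquery

# Our own GCP credentials apply here, so the rate limits are ours to manage.
client = bigquery.Client()

# Hypothetical table holding the ingested Crossref metadata dump.
sql = """
    SELECT DOI, title
    FROM `our-project.crossref_dump.works`
    WHERE type = 'journal-article'
    LIMIT 10
"""
for row in client.query(sql).result():
    print(row["DOI"], row["title"])
```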