Hi there. I tried looking in the docs to see if there is any support for explicitly setting the number of retries for the Google Storage client when writing to a cloud bucket using a `CloudPath` object. Specifically, my use case reads from and writes to a GCS bucket many times, and I sometimes get a 503 PUT error from GCS; our infra wants to retry more aggressively than the default GCS retry policy.

It seems this problem came up before in this issue: #267
The other option would be to implement retries in your own code using a library like tenacity, wrapping the methods that use `CloudPath`s to do reading/writing (see the sketch below).
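A minimal sketch of that approach, assuming the 503 surfaces as `google.api_core.exceptions.ServiceUnavailable` (the bucket path and retry settings below are hypothetical, not recommendations):

```python
from cloudpathlib import CloudPath
from google.api_core.exceptions import ServiceUnavailable
from tenacity import retry, retry_if_exception_type, stop_after_attempt, wait_exponential


# Retry up to 5 times with exponential backoff whenever GCS returns a 503.
@retry(
    retry=retry_if_exception_type(ServiceUnavailable),
    stop=stop_after_attempt(5),
    wait=wait_exponential(multiplier=1, min=1, max=30),
)
def write_with_retries(path: CloudPath, data: bytes) -> None:
    path.write_bytes(data)


write_with_retries(CloudPath("gs://my-bucket/some/key.bin"), b"payload")
```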
pjbull changed the title from "Is there currently any support for handling custom retries?" to "Support SDK-specific retry parameters" on Oct 9, 2024
As @jayqi said, we likely won't implement an independent retry mechanism in cloudpathlib. We recommend tenacity or similar.
We would, however, support passing retry parameters through to the provider SDKs, as discussed here. In addition to GCS, I think we could similarly support custom retries using boto's retry functionality for S3. In fact, on S3 the approach that uses the AWS configuration file may just work as is.
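For reference, here is roughly what those SDK-level knobs look like today, outside of cloudpathlib. For GCS, google-cloud-storage accepts a `google.api_core.retry.Retry` object per operation; the names and timings below are illustrative only:

```python
from google.cloud import storage
from google.cloud.storage.retry import DEFAULT_RETRY

# Hypothetical tuning: start retrying sooner and back off more gently
# than the default retry policy.
aggressive_retry = DEFAULT_RETRY.with_delay(initial=0.5, maximum=10.0, multiplier=1.5)

client = storage.Client()
blob = client.bucket("my-bucket").blob("some/key.bin")  # hypothetical bucket/key
blob.upload_from_string(b"payload", retry=aggressive_retry)
```

For S3, boto picks up retry settings from the AWS configuration file (or the `AWS_MAX_ATTEMPTS` / `AWS_RETRY_MODE` environment variables), e.g.:

```ini
# ~/.aws/config -- example values only
[default]
retry_mode = standard
max_attempts = 10
```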