Retry Helpers#
- class redis.retry.AbstractRetry(backoff, retries, supported_errors)[source]#
Retry a specific number of times after a failure
- Parameters
backoff (AbstractBackoff) –
retries (int) –
supported_errors (Tuple[Type[E], ...]) –
- class redis.retry.Retry(backoff, retries, supported_errors=(<class 'redis.exceptions.ConnectionError'>, <class 'redis.exceptions.TimeoutError'>, <class 'TimeoutError'>))[source]#
- Parameters
backoff (AbstractBackoff) –
retries (int) –
supported_errors (Tuple[Type[Exception], ...]) –
- call_with_retry(do, fail, is_retryable=None, with_failure_count=False)[source]#
Execute an operation that might fail and return its result, or raise the exception that was thrown, depending on the Backoff object.
do: the operation to call. Expects no argument.
fail: the failure handler, expects the last error that was thrown.
is_retryable: optional function to determine if an error is retryable.
with_failure_count: if True, the failure count is passed to the failure handler.
- Parameters
do (Callable[[], T]) –
fail (Union[Callable[[Exception], Any], Callable[[Exception, int], Any]]) –
is_retryable (Optional[Callable[[Exception], bool]]) –
with_failure_count (bool) –
- Return type
T
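The retry loop behind call_with_retry can be sketched in plain Python. This is a simplified stand-in for illustration, assuming a fixed sleep between attempts; the real Retry class delegates delays to a Backoff object and filters errors through supported_errors and is_retryable.

```python
import time
from typing import Callable, TypeVar

T = TypeVar("T")

def call_with_retry(do: Callable[[], T],
                    fail: Callable[[Exception], None],
                    retries: int = 3,
                    delay: float = 0.0) -> T:
    # Simplified sketch: call `do` until it succeeds or the retry budget
    # is exhausted, invoking `fail` with the last error on every failure.
    failures = 0
    while True:
        try:
            return do()
        except Exception as error:
            failures += 1
            fail(error)            # failure handler sees the last error
            if failures > retries:
                raise              # budget exhausted: re-raise last error
            time.sleep(delay)      # stand-in for the Backoff delay

# Usage: an operation that succeeds on the third attempt.
attempts = []

def flaky():
    attempts.append(1)
    if len(attempts) < 3:
        raise ConnectionError("transient failure")
    return "ok"

result = call_with_retry(flaky, lambda error: None, retries=3)
```

With retries=3, the two transient failures are absorbed and the third attempt's result is returned; a fourth consecutive failure would have re-raised the error instead.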
Retry in Redis Standalone#
>>> from redis.backoff import ExponentialBackoff
>>> from redis.retry import Retry
>>> from redis.client import Redis
>>> from redis.exceptions import (
>>> BusyLoadingError,
>>> RedisError,
>>> )
>>>
>>> # Run 3 retries with exponential backoff strategy
>>> retry = Retry(ExponentialBackoff(), 3)
>>> # Redis client with retries on custom errors in addition to the errors
>>> # that are already retried by default
>>> r = Redis(host='localhost', port=6379, retry=retry, retry_on_error=[BusyLoadingError, RedisError])
As you can see from the example above, the Redis client supports two parameters to configure the retry behaviour:
- retry: Retry instance with a Backoff strategy and the max number of retries. The Retry instance has a default set of exceptions to retry on, which can be overridden by passing a tuple of exceptions to the supported_errors parameter.
- retry_on_error: list of additional exceptions to retry on
If no retry is provided, a default one is created with ExponentialWithJitterBackoff as the backoff strategy and 3 retries.
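The interaction between the Retry instance's supported errors and the retry_on_error list can be sketched as a simple union check. The class names and the is_retryable helper below are illustrative stand-ins, not the library's actual implementation:

```python
# Illustrative stand-ins for the real classes in redis.exceptions.
class RedisConnectionError(Exception): ...
class RedisError(Exception): ...

# Default errors retried by a Retry instance (mirrors the documented
# default of ConnectionError and TimeoutError).
DEFAULT_SUPPORTED = (RedisConnectionError, TimeoutError)

def is_retryable(error, retry_on_error=()):
    # An error is retried if it matches the Retry instance's supported
    # errors or any of the additional retry_on_error classes.
    return isinstance(error, DEFAULT_SUPPORTED + tuple(retry_on_error))
```

With retry_on_error=[RedisError], a RedisError becomes retryable even though it is not in the default set.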
Retry in Redis Cluster#
>>> from redis.backoff import ExponentialBackoff
>>> from redis.retry import Retry
>>> from redis.cluster import RedisCluster
>>>
>>> # Run 3 retries with exponential backoff strategy
>>> retry = Retry(ExponentialBackoff(), 3)
>>> # Redis Cluster client with retries
>>> rc = RedisCluster(host='localhost', port=6379, retry=retry)
Retry behaviour in Redis Cluster differs slightly from Standalone:
- retry: Retry instance with a Backoff strategy and the max number of retries; the default value is Retry(ExponentialWithJitterBackoff(base=1, cap=10), cluster_error_retry_attempts)
- cluster_error_retry_attempts: number of times to retry before raising an error when TimeoutError, ConnectionError, ClusterDownError or SlotNotCoveredError are encountered; the default value is 3. This argument is deprecated - it is only used to initialize the number of retries for the retry object when no retry object is provided. When the retry argument is provided, the cluster_error_retry_attempts argument is ignored!
The retry object is not yet fully utilized in the cluster client. The retry object is used only to determine the number of retries for the cluster level calls.
Let’s consider the following example:
>>> from redis.backoff import ExponentialBackoff
>>> from redis.retry import Retry
>>> from redis.cluster import RedisCluster
>>>
>>> rc = RedisCluster(host='localhost', port=6379, retry=Retry(ExponentialBackoff(), 6))
>>> rc.set('foo', 'bar')
- the client library calculates the hash slot for key 'foo'.
- given the hash slot, it then determines which node to connect to, in order to execute the command.
- during the connection, a ConnectionError is raised.
- because we set retry=Retry(ExponentialBackoff(), 6), the cluster client starts a cluster update, removes the failed node from the startup nodes, and re-initializes the cluster.
- the cluster client retries the command until it either succeeds or the max number of retries is reached.
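The steps above can be sketched as a driver loop. The slot formula (CRC16 of the key modulo 16384) is the documented Redis Cluster hashing rule; pick_node and refresh are hypothetical stand-ins for the client's internal node lookup and cluster re-initialization.

```python
import binascii

SLOT_COUNT = 16384

def key_slot(key: bytes) -> int:
    # Redis Cluster assigns keys to slots via CRC16 (XMODEM) modulo 16384;
    # binascii.crc_hqx implements that CRC16 variant.
    return binascii.crc_hqx(key, 0) % SLOT_COUNT

def execute_with_retry(command, key, pick_node, refresh, retries=6):
    failures = 0
    while True:
        node = pick_node(key_slot(key))   # steps 1-2: slot -> target node
        try:
            return node(command)          # step 3: run the command
        except ConnectionError:
            failures += 1
            if failures > retries:
                raise                     # retry budget exhausted
            refresh()                     # step 4: re-initialize topology

# Usage: a node that fails once, then succeeds on the retry.
calls, refreshes = [], []

def node(command):
    calls.append(command)
    if len(calls) == 1:
        raise ConnectionError("node down")
    return "OK"

result = execute_with_retry("SET foo bar", b"foo",
                            pick_node=lambda slot: node,
                            refresh=lambda: refreshes.append(1))
```

The first attempt hits the failing node, the sketch "re-initializes" once, and the retried command succeeds within the budget of 6 retries.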