Redis Modules Commands#

Accessing Redis module commands requires the installation of the corresponding Redis module. For a quick start with Redis modules, try the Redismod Docker image.

RedisBloom Commands#

These are the commands for interacting with the RedisBloom module. Below is a brief example, as well as documentation on the commands themselves.

Create and add to a bloom filter

import redis
r = redis.Redis()
r.bf().create("bloom", 0.01, 1000)
r.bf().add("bloom", "foo")

Create and add to a cuckoo filter

import redis
r = redis.Redis()
r.cf().create("cuckoo", 1000)
r.cf().add("cuckoo", "filter")

Create Count-Min Sketch and get information

import redis
r = redis.Redis()
r.cms().initbydim("dim", 1000, 5)
r.cms().incrby("dim", ["foo"], [5])
r.cms().info("dim")

Create a topk list, and access the results

import redis
r = redis.Redis()
r.topk().reserve("mytopk", 3, 50, 4, 0.9)
r.topk().info("mytopk")
class redis.commands.bf.commands.BFCommands[source]#

Bloom Filter commands.

add(key, item) int[source]#
add(key, item) Awaitable[int]

Add an item to a Bloom Filter key. For more information see BF.ADD.

card(key) int[source]#
card(key) Awaitable[int]

Returns the cardinality of a Bloom filter - number of items that were added to a Bloom filter and detected as unique (items that caused at least one bit to be set in at least one sub-filter). For more information see BF.CARD.

create(key, errorRate, capacity, expansion=None, noScale=None) bool[source]#
create(key, errorRate, capacity, expansion=None, noScale=None) Awaitable[bool]

Create a new Bloom Filter key with the desired probability of false positives errorRate and the expected number of entries to be inserted as capacity. The default expansion value is 2. By default, the filter is auto-scaling. For more information see BF.RESERVE.
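
As a back-of-envelope illustration of how errorRate and capacity interact, the classic textbook Bloom filter sizing formulas can be computed in plain Python. Note this is only an estimate for intuition; RedisBloom's internal sizing may round differently.

```python
import math

def bloom_sizing(capacity, error_rate):
    # Textbook Bloom filter sizing: bits m = -n*ln(p)/(ln 2)^2,
    # hash functions k = (m/n)*ln 2. This approximates, but need not
    # match, RedisBloom's exact internal sizing.
    bits = math.ceil(-capacity * math.log(error_rate) / math.log(2) ** 2)
    hashes = math.ceil((bits / capacity) * math.log(2))
    return bits, hashes

# For the quick-start example above (capacity=1000, errorRate=0.01):
bits, hashes = bloom_sizing(1000, 0.01)
```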

exists(key, item) int[source]#
exists(key, item) Awaitable[int]

Check whether an item exists in Bloom Filter key. For more information see BF.EXISTS.

info(key) BFInfo | dict[str, Any][source]#
info(key) Awaitable[BFInfo | dict[str, Any]]

Return capacity, size, number of filters, number of items inserted, and expansion rate. For more information see BF.INFO.

insert(key, items, capacity=None, error=None, noCreate=None, expansion=None, noScale=None) list[int][source]#
insert(key, items, capacity=None, error=None, noCreate=None, expansion=None, noScale=None) Awaitable[list[int]]

Add multiple items to a Bloom Filter key.

If nocreate remains None and key does not exist, a new Bloom Filter key will be created with the desired probability of false positives error and the expected number of entries to be inserted as capacity. For more information see BF.INSERT.

loadchunk(key, iter, data) bytes | str[source]#
loadchunk(key, iter, data) Awaitable[bytes | str]

Restore a filter previously saved using SCANDUMP.

See the SCANDUMP command for example usage. This command will overwrite any bloom filter stored under key. Ensure that the bloom filter will not be modified between invocations. For more information see BF.LOADCHUNK.

madd(key, *items) list[int][source]#
madd(key, *items) Awaitable[list[int]]

Add multiple items to a Bloom Filter key. For more information see BF.MADD.

mexists(key, *items) list[int][source]#
mexists(key, *items) Awaitable[list[int]]

Check whether items exist in Bloom Filter key. For more information see BF.MEXISTS.

reserve(key, errorRate, capacity, expansion=None, noScale=None)#

Create a new Bloom Filter key with the desired probability of false positives errorRate and the expected number of entries to be inserted as capacity. The default expansion value is 2. By default, the filter is auto-scaling. For more information see BF.RESERVE.

Return type

Union[bool, Awaitable[bool]]

scandump(key, iter) BloomScanDumpResponse[source]#
scandump(key, iter) Awaitable[BloomScanDumpResponse]

Begin an incremental save of the bloom filter key.

This is useful for large bloom filters which cannot fit into the normal SAVE and RESTORE model. The first time this command is called, the value of iter should be 0. This command will return successive (iter, data) pairs until (0, NULL) to indicate completion. For more information see BF.SCANDUMP.
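
The (iter, data) loop described above can be wrapped in a small helper. This is a sketch, assuming src and dst are redis.Redis clients whose servers have RedisBloom loaded:

```python
def copy_bloom_filter(src, dst, key):
    # Stream a Bloom filter from src to dst using BF.SCANDUMP /
    # BF.LOADCHUNK. The first call uses iter=0; the server returns
    # successive (iter, data) chunks until (0, NULL) signals completion.
    it = 0
    while True:
        it, data = src.bf().scandump(key, it)
        if it == 0:
            return  # dump complete
        dst.bf().loadchunk(key, it, data)
```

As noted for BF.LOADCHUNK, ensure the filter is not modified between invocations.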

class redis.commands.bf.commands.CFCommands[source]#

Cuckoo Filter commands.

add(key, item) int[source]#
add(key, item) Awaitable[int]

Add an item to a Cuckoo Filter key. For more information see CF.ADD.

addnx(key, item) int[source]#
addnx(key, item) Awaitable[int]

Add an item to a Cuckoo Filter key only if item does not yet exist. This command might be slower than add. For more information see CF.ADDNX.

count(key, item) int[source]#
count(key, item) Awaitable[int]

Return the number of times an item may be in the key. For more information see CF.COUNT.

create(key, capacity, expansion=None, bucket_size=None, max_iterations=None) bool[source]#
create(key, capacity, expansion=None, bucket_size=None, max_iterations=None) Awaitable[bool]

Create a new Cuckoo Filter key with an initial capacity of capacity items. For more information see CF.RESERVE.

delete(key, item) int[source]#
delete(key, item) Awaitable[int]

Delete item from key. For more information see CF.DEL.

exists(key, item) int[source]#
exists(key, item) Awaitable[int]

Check whether an item exists in Cuckoo Filter key. For more information see CF.EXISTS.

info(key) CFInfo | dict[str, Any][source]#
info(key) Awaitable[CFInfo | dict[str, Any]]

Return size, number of buckets, number of filters, number of items inserted, number of items deleted, bucket size, expansion rate, and max iterations. For more information see CF.INFO.

insert(key, items, capacity=None, nocreate=None) list[int][source]#
insert(key, items, capacity=None, nocreate=None) Awaitable[list[int]]

Add multiple items to a Cuckoo Filter key, allowing the filter to be created with a custom capacity if it does not yet exist. items must be provided as a list. For more information see CF.INSERT.

insertnx(key, items, capacity=None, nocreate=None) list[int][source]#
insertnx(key, items, capacity=None, nocreate=None) Awaitable[list[int]]

Add multiple items to a Cuckoo Filter key only if they do not exist yet, allowing the filter to be created with a custom capacity if it does not yet exist. items must be provided as a list. For more information see CF.INSERTNX.

loadchunk(key, iter, data) bytes | str[source]#
loadchunk(key, iter, data) Awaitable[bytes | str]

Restore a filter previously saved using SCANDUMP. See the SCANDUMP command for example usage.

This command will overwrite any Cuckoo filter stored under key. Ensure that the Cuckoo filter will not be modified between invocations. For more information see CF.LOADCHUNK.

mexists(key, *items) list[int][source]#
mexists(key, *items) Awaitable[list[int]]

Check whether items exist in a Cuckoo Filter key. For more information see CF.MEXISTS.

reserve(key, capacity, expansion=None, bucket_size=None, max_iterations=None)#

Create a new Cuckoo Filter key with an initial capacity of capacity items. For more information see CF.RESERVE.

Return type

Union[bool, Awaitable[bool]]

scandump(key, iter) BloomScanDumpResponse[source]#
scandump(key, iter) Awaitable[BloomScanDumpResponse]

Begin an incremental save of the Cuckoo filter key.

This is useful for large Cuckoo filters which cannot fit into the normal SAVE and RESTORE model. The first time this command is called, the value of iter should be 0. This command will return successive (iter, data) pairs until (0, NULL) to indicate completion. For more information see CF.SCANDUMP.

class redis.commands.bf.commands.CMSCommands[source]#

Count-Min Sketch Commands

incrby(key, items, increments) list[int][source]#
incrby(key, items, increments) Awaitable[list[int]]

Add/increase items in a Count-Min Sketch key by increments. Both items and increments are lists. For more information see CMS.INCRBY.

Example:

>>> r.cms().incrby('A', ['foo'], [1])

info(key) CMSInfo | dict[str, Any][source]#
info(key) Awaitable[CMSInfo | dict[str, Any]]

Return width, depth and total count of the sketch. For more information see CMS.INFO.

initbydim(key, width, depth) bool[source]#
initbydim(key, width, depth) Awaitable[bool]

Initialize a Count-Min Sketch key to dimensions (width, depth) specified by user. For more information see CMS.INITBYDIM.

initbyprob(key, error, probability) bool[source]#
initbyprob(key, error, probability) Awaitable[bool]

Initialize a Count-Min Sketch key to characteristics (error, probability) specified by user. For more information see CMS.INITBYPROB.
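
For intuition about how the error and probability characteristics map to sketch dimensions, the standard Count-Min Sketch bounds are width = ceil(e / error) and depth = ceil(ln(1 / probability)). This is the textbook derivation; RedisBloom's exact rounding may differ.

```python
import math

def cms_dimensions(error, probability):
    # Classic Count-Min Sketch bounds: with these dimensions,
    # overestimates stay within error * total_count with
    # probability 1 - probability.
    width = math.ceil(math.e / error)
    depth = math.ceil(math.log(1 / probability))
    return width, depth

# e.g. cms_dimensions(0.001, 0.01) yields a ~2719-wide, 5-deep sketch
```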

merge(destKey, numKeys, srcKeys, weights=[]) bool[source]#
merge(destKey, numKeys, srcKeys, weights=[]) Awaitable[bool]

Merge numKeys of sketches into destKey. Sketches specified in srcKeys. All sketches must have identical width and depth. Weights can be used to multiply certain sketches. Default weight is 1. Both srcKeys and weights are lists. For more information see CMS.MERGE.
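
The weighted-merge semantics can be illustrated on plain Python lists standing in for same-sized sketch tables (a conceptual sketch only, not the server implementation):

```python
def cms_merge(sketches, weights=None):
    # Cell-wise weighted sum of equally-dimensioned counter tables,
    # mirroring CMS.MERGE semantics: the default weight is 1 per sketch.
    weights = weights or [1] * len(sketches)
    if len({len(s) for s in sketches}) != 1:
        raise ValueError("all sketches must have identical dimensions")
    return [sum(w * s[i] for s, w in zip(sketches, weights))
            for i in range(len(sketches[0]))]

# cms_merge([[1, 2], [3, 4]], weights=[1, 2]) -> [7, 10]
```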

query(key, *items) list[int][source]#
query(key, *items) Awaitable[list[int]]

Return count for an item from key. Multiple items can be queried with one call. For more information see CMS.QUERY.

class redis.commands.bf.commands.TOPKCommands[source]#

Top-K Filter commands.

add(key, *items) ModuleListResponse[source]#
add(key, *items) Awaitable[ModuleListResponse]

Add one or more items to a Top-K Filter key. For more information see TOPK.ADD.

count(key, *items) list[int][source]#
count(key, *items) Awaitable[list[int]]

Return the count for one or more items from key. For more information see TOPK.COUNT.

incrby(key, items, increments) ModuleListResponse[source]#
incrby(key, items, increments) Awaitable[ModuleListResponse]

Add/increase items in a Top-K Sketch key by increments. Both items and increments are lists. For more information see TOPK.INCRBY.

Example:

>>> r.topk().incrby('A', ['foo'], [1])

info(key) TopKInfo | dict[str, Any][source]#
info(key) Awaitable[TopKInfo | dict[str, Any]]

Return k, width, depth and decay values of key. For more information see TOPK.INFO.

list(key, withcount: bool = False) ModuleListResponse[source]#
list(key, withcount: bool = False) Awaitable[ModuleListResponse]

Return the full list of items in the Top-K list of key. If withcount is set to True, also return each item's probabilistic count. For more information see TOPK.LIST.

query(key, *items) list[int][source]#
query(key, *items) Awaitable[list[int]]

Check whether one or more items are Top-K items at key. For more information see TOPK.QUERY.

reserve(key, k, width, depth, decay) bool[source]#
reserve(key, k, width, depth, decay) Awaitable[bool]

Create a new Top-K Filter key that keeps track of the top k items, using a sketch of the given width and depth and the given counter decay rate. For more information see TOPK.RESERVE.


RedisJSON Commands#

These are the commands for interacting with the RedisJSON module. Below is a brief example, as well as documentation on the commands themselves.

Create a json object

import redis
r = redis.Redis()
r.json().set("mykey", ".", {"hello": "world", "i am": ["a", "json", "object!"]})

Examples of how to combine search and json can be found here.

class redis.commands.json.commands.JSONCommands[source]#

json commands.

arrappend(name: str, path: str | None = Path.root_path(), *args: JsonType) int | list[int | None] | None[source]#
arrappend(name: str, path: str | None = Path.root_path(), *args: JsonType) Awaitable[int | list[int | None] | None]

Append the objects args to the array under path in key name.

For more information see JSON.ARRAPPEND.

arrindex(name: str, path: str, scalar: int, start: int | None = None, stop: int | None = None) int | list[int | None] | None[source]#
arrindex(name: str, path: str, scalar: int, start: int | None = None, stop: int | None = None) Awaitable[int | list[int | None] | None]

Return the index of scalar in the JSON array under path at key name.

The search can be limited using the optional inclusive start and exclusive stop indices.

For more information see JSON.ARRINDEX.
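
The inclusive-start, exclusive-stop semantics match Python's own list.index. A plain-Python analogue for intuition (illustrative only; the server command operates on JSON arrays, and returns -1 rather than raising when the value is absent):

```python
def arrindex(arr, scalar, start=0, stop=None):
    # Inclusive start, exclusive stop, -1 when not found,
    # mirroring the JSON.ARRINDEX contract described above.
    stop = len(arr) if stop is None else stop
    try:
        return arr.index(scalar, start, stop)
    except ValueError:
        return -1

# arrindex(["a", "b", "c", "b"], "b", 2) -> 3
```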

arrinsert(name: str, path: str, index: int, *args: JsonType) int | list[int | None] | None[source]#
arrinsert(name: str, path: str, index: int, *args: JsonType) Awaitable[int | list[int | None] | None]

Insert the objects args into the array at index index under path in key name.

For more information see JSON.ARRINSERT.

arrlen(name: str, path: str | None = Path.root_path()) int | list[int | None] | None[source]#
arrlen(name: str, path: str | None = Path.root_path()) Awaitable[int | list[int | None] | None]

Return the length of the array JSON value under path at key name.

For more information see JSON.ARRLEN.

arrpop(name: str, path: str | None = Path.root_path(), index: int | None = -1) JsonType | str | list[Any] | None[source]#
arrpop(name: str, path: str | None = Path.root_path(), index: int | None = -1) Awaitable[JsonType | str | list[Any] | None]

Pop the element at index in the array JSON value under path at key name.

For more information see JSON.ARRPOP.

arrtrim(name: str, path: str, start: int, stop: int) int | list[int | None] | None[source]#
arrtrim(name: str, path: str, start: int, stop: int) Awaitable[int | list[int | None] | None]

Trim the array JSON value under path at key name to the inclusive range given by start and stop.

For more information see JSON.ARRTRIM.
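
Because start and stop are inclusive, trimming corresponds to keeping the slice arr[start:stop + 1]. A plain-Python analogue for intuition (illustrative only, ignoring the server's out-of-range index clamping):

```python
def arrtrim(arr, start, stop):
    # Keep only the inclusive [start, stop] range, in place,
    # and return the new length (as JSON.ARRTRIM does).
    arr[:] = arr[start:stop + 1]
    return len(arr)

# a = [0, 1, 2, 3, 4]; arrtrim(a, 1, 3) -> 3, leaving a == [1, 2, 3]
```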

clear(name: str, path: str | None = Path.root_path()) int[source]#
clear(name: str, path: str | None = Path.root_path()) Awaitable[int]

Empty arrays and objects (to have zero slots/keys without deleting the array/object).

Return the count of cleared paths (ignoring non-array and non-objects paths).

For more information see JSON.CLEAR.

debug(subcommand: str, key: str | None = None, path: str | None = Path.root_path()) int | list[str][source]#
debug(subcommand: str, key: str | None = None, path: str | None = Path.root_path()) Awaitable[int | list[str]]

Return the memory usage in bytes of a value under path from key name.

For more information see JSON.DEBUG.

delete(key: str, path: str | None = Path.root_path()) int[source]#
delete(key: str, path: str | None = Path.root_path()) Awaitable[int]

Delete the JSON value stored at key key under path.

For more information see JSON.DEL.

forget(key, path='.')#

Delete the JSON value stored at key key under path.

For more information see JSON.DEL.

Parameters
  • key (str) –

  • path (str | None) –

Return type

Union[int, Awaitable[int]]

get(name: str, *args, no_escape: bool | None = False) Any | None[source]#
get(name: str, *args, no_escape: bool | None = False) Awaitable[Any | None]

Get the object stored as a JSON value at key name.

args is zero or more paths, defaulting to the root path. no_escape is a boolean flag that adds the no_escape option, allowing non-ASCII characters to be returned unescaped.

For more information see JSON.GET.

merge(name: str, path: str, obj: JsonType, decode_keys: bool | None = False) bool[source]#
merge(name: str, path: str, obj: JsonType, decode_keys: bool | None = False) Awaitable[bool]

Merges a given JSON value into matching paths. Consequently, JSON values at matching paths are updated, deleted, or expanded with new children.

decode_keys If set to True, the keys of obj will be decoded with utf-8.

For more information see JSON.MERGE.

mget(keys: list[str], path: str) list[JsonType | None][source]#
mget(keys: list[str], path: str) Awaitable[list[JsonType | None]]

Get the objects stored as a JSON values under path. keys is a list of one or more keys.

For more information see JSON.MGET.

mset(triplets: list[tuple[str, str, JsonType]]) bool[source]#
mset(triplets: list[tuple[str, str, JsonType]]) Awaitable[bool]

Set the JSON value for one or more keys, each at its own path.

triplets is a list of one or more triplets of key, path, value.

For the purpose of using this within a pipeline, this command is also aliased to JSON.MSET.

For more information see JSON.MSET.

numincrby(name: str, path: str, number: int) int | float | list[int | float | None][source]#
numincrby(name: str, path: str, number: int) Awaitable[int | float | list[int | float | None]]

Increment the numeric (integer or floating point) JSON value under path at key name by the provided number.

For more information see JSON.NUMINCRBY.

nummultby(name: str, path: str, number: int) int | float | list[int | float | None][source]#
nummultby(name: str, path: str, number: int) Awaitable[int | float | list[int | float | None]]

Multiply the numeric (integer or floating point) JSON value under path at key name with the provided number.

For more information see JSON.NUMMULTBY.

objkeys(name: str, path: str | None = Path.root_path()) list[str] | list[list[str] | None] | None[source]#
objkeys(name: str, path: str | None = Path.root_path()) Awaitable[list[str] | list[list[str] | None] | None]

Return the key names in the dictionary JSON value under path at key name.

For more information see JSON.OBJKEYS.

objlen(name: str, path: str | None = Path.root_path()) int | list[int | None] | None[source]#
objlen(name: str, path: str | None = Path.root_path()) Awaitable[int | list[int | None] | None]

Return the length of the dictionary JSON value under path at key name.

For more information see JSON.OBJLEN.

resp(name: str, path: str | None = Path.root_path()) Any[source]#
resp(name: str, path: str | None = Path.root_path()) Awaitable[Any]

Return the JSON value under path at key name.

For more information see JSON.RESP.

set(name: str, path: str, obj: JsonType, nx: bool | None = False, xx: bool | None = False, decode_keys: bool | None = False, fpha: FPHAType | str | None = None) bool | None[source]#
set(name: str, path: str, obj: JsonType, nx: bool | None = False, xx: bool | None = False, decode_keys: bool | None = False, fpha: FPHAType | str | None = None) Awaitable[bool | None]

Set the JSON value at key name under the path to obj.

nx if set to True, set value only if it does not exist. xx if set to True, set value only if it exists. decode_keys If set to True, the keys of obj will be decoded with utf-8. fpha if set, forces Redis to use the specified floating-point type for storing all FP homogeneous arrays in obj. Accepts a FPHAType enum value or a string ("BF16", "FP16", "FP32", "FP64").

For the purpose of using this within a pipeline, this command is also aliased to JSON.SET.

For more information see JSON.SET.
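
The nx/xx flags implement conditional-set semantics. A minimal sketch of the root-path behavior using a plain dict (illustrative only; the server command operates on JSON documents and paths):

```python
def json_set(store, key, value, nx=False, xx=False):
    # nx: set only if the key is absent; xx: set only if it exists.
    # Returns True on success, None when the condition fails
    # (mirroring JSON.SET's nil reply).
    exists = key in store
    if (nx and exists) or (xx and not exists):
        return None
    store[key] = value
    return True

# store = {}; json_set(store, "k", 1, nx=True) -> True
# json_set(store, "k", 2, nx=True) -> None (key already present)
```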

set_file(name: str, path: str, file_name: str, nx: bool | None = False, xx: bool | None = False, decode_keys: bool | None = False, fpha: FPHAType | str | None = None) bool | None[source]#
set_file(name: str, path: str, file_name: str, nx: bool | None = False, xx: bool | None = False, decode_keys: bool | None = False, fpha: FPHAType | str | None = None) Awaitable[bool | None]

Set the JSON value at key name under the path to the content of the json file file_name.

nx if set to True, set value only if it does not exist. xx if set to True, set value only if it exists. decode_keys If set to True, the keys of obj will be decoded with utf-8. fpha if set, forces Redis to use the specified floating-point type for storing all FP homogeneous arrays in the file content. Accepts a FPHAType enum value or a string ("BF16", "FP16", "FP32", "FP64").

set_path(json_path: str, root_folder: str, nx: bool | None = False, xx: bool | None = False, decode_keys: bool | None = False, fpha: FPHAType | str | None = None) dict[str, bool][source]#
set_path(json_path: str, root_folder: str, nx: bool | None = False, xx: bool | None = False, decode_keys: bool | None = False, fpha: FPHAType | str | None = None) Awaitable[dict[str, bool]]

Iterate over root_folder and set each JSON file to a value under json_path with the file name as the key.

nx if set to True, set value only if it does not exist. xx if set to True, set value only if it exists. decode_keys If set to True, the keys of obj will be decoded with utf-8. fpha if set, forces Redis to use the specified floating-point type for storing all FP homogeneous arrays in the file content. Accepts a FPHAType enum value or a string ("BF16", "FP16", "FP32", "FP64").

strappend(name: str, value: str, path: str | None = Path.root_path()) int | list[int | None] | None[source]#
strappend(name: str, value: str, path: str | None = Path.root_path()) Awaitable[int | list[int | None] | None]

Append to the string JSON value. If two options are specified after the key name, the path is determined to be the first. If a single option is passed, then the root path (i.e. Path.root_path()) is used.

For more information see JSON.STRAPPEND.

strlen(name: str, path: str | None = None) int | list[int | None] | None[source]#
strlen(name: str, path: str | None = None) Awaitable[int | list[int | None] | None]

Return the length of the string JSON value under path at key name.

For more information see JSON.STRLEN.

toggle(name: str, path: str | None = Path.root_path()) bool | list[int | None] | None[source]#
toggle(name: str, path: str | None = Path.root_path()) Awaitable[bool | list[int | None] | None]

Toggle the boolean value under path at key name, returning the new value.

For more information see JSON.TOGGLE.

type(name: str, path: str | None = Path.root_path()) str | None | list[str | None] | list[list[str]][source]#
type(name: str, path: str | None = Path.root_path()) Awaitable[str | None | list[str | None] | list[list[str]]]

Get the type of the JSON value under path from key name.

For more information see JSON.TYPE.


RediSearch Commands#

These are the commands for interacting with the RediSearch module. Below is a brief example, as well as documentation on the commands themselves. In the example below, an index named my_index is being created. When an index name is not specified, an index named idx is created.

Create a search index, and display its information

import redis
from redis.commands.search.field import TextField

r = redis.Redis()
index_name = "my_index"
schema = (
    TextField("play", weight=5.0),
    TextField("ball"),
)
r.ft(index_name).create_index(schema)
print(r.ft(index_name).info())
class redis.commands.search.commands.SearchCommands[source]#

Search commands.

add_document(doc_id, nosave=False, score=1.0, payload=None, replace=False, partial=False, language=None, no_create=False, **fields)[source]#

Add a single document to the index.

Parameters
  • doc_id (str) – the id of the saved document.

  • nosave (bool) – if set to true, we just index the document, and don’t save a copy of it. This means that searches will just return ids.

  • score (float) – the document ranking, between 0.0 and 1.0

  • payload (Optional[bool]) – optional inner-index payload we can save for fast access in scoring functions

  • replace (bool) – if True, and the document already is in the index, we perform an update and reindex the document

  • partial (bool) – if True, the fields specified will be added to the existing document. This has the added benefit that any fields specified with no_index will not be reindexed again. Implies replace

  • language (Optional[str]) – Specify the language used for document tokenization.

  • no_create (bool) – if True, the document is only updated and reindexed if it already exists. If the document does not exist, an error will be returned. Implies replace

  • fields (List[str]) – kwargs dictionary of the document fields to be saved and/or indexed. NOTE: Geo points should be encoded as strings of “lon,lat”

add_document_hash(doc_id, score=1.0, language=None, replace=False)[source]#

Add a hash document to the index.

Parameters

  • doc_id: the document’s id. This has to be an existing HASH key in Redis that will hold the fields the index needs.

  • score: the document ranking, between 0.0 and 1.0

  • replace: if True, and the document already is in the index, we perform an update and reindex the document

  • language: Specify the language used for document tokenization.

aggregate(query, query_params=None)[source]#

Issue an aggregation query.

Parameters

query: This can be either an AggregateRequest, or a Cursor

An AggregateResult object is returned. You can access the rows from its rows property, which will always yield the rows of the result.

For more information see FT.AGGREGATE.

Parameters
  • query (Union[AggregateRequest, Cursor]) –

  • query_params (Optional[Dict[str, Union[str, int, float, bytes]]]) –

aliasadd(alias)[source]#

Alias a search index - will fail if alias already exists

Parameters

  • alias: Name of the alias to create

For more information see FT.ALIASADD.

Parameters

alias (str) –

aliasdel(alias)[source]#

Removes an alias from a search index

Parameters

  • alias: Name of the alias to delete

For more information see FT.ALIASDEL.

Parameters

alias (str) –

aliasupdate(alias)[source]#

Updates an alias - will fail if alias does not already exist

Parameters

  • alias: Name of the alias to update

For more information see FT.ALIASUPDATE.

Parameters

alias (str) –

alter_schema_add(fields)[source]#

Alter the existing search index by adding new fields. The index must already exist.

Parameters

  • fields: a list of Field objects to add for the index

For more information see FT.ALTER.

Parameters

fields (Union[Field, List[Field]]) –

batch_indexer(chunk_size=100)[source]#

Create a new batch indexer from the client with a given chunk size

config_get(option)[source]#

Get runtime configuration option value.

Parameters

  • option: the name of the configuration option.

For more information see FT.CONFIG GET.

Parameters

option (str) –

Return type

str

config_set(option, value)[source]#

Set runtime configuration option.

Parameters

  • option: the name of the configuration option.

  • value: a value for the configuration option.

For more information see FT.CONFIG SET.

Parameters
  • option (str) –

  • value (str) –

Return type

bool

create_index(fields, no_term_offsets=False, no_field_flags=False, stopwords=None, definition=None, max_text_fields=False, temporary=None, no_highlight=False, no_term_frequencies=False, skip_initial_scan=False)[source]#

Creates the search index. The index must not already exist.

For more information, see https://redis.io/commands/ft.create/

Parameters
  • fields (List[Field]) – A list of Field objects.

  • no_term_offsets (bool) – If true, term offsets will not be saved in the index.

  • no_field_flags (bool) – If true, field flags that allow searching in specific fields will not be saved.

  • stopwords (Optional[List[str]]) – If provided, the index will be created with this custom stopword list. The list can be empty.

  • definition (Optional[IndexDefinition]) – If provided, the index will be created with this custom index definition.

  • max_text_fields – If true, indexes will be encoded as if there were more than 32 text fields, allowing for additional fields beyond 32.

  • temporary – Creates a lightweight temporary index which will expire after the specified period of inactivity. The internal idle timer is reset whenever the index is searched or added to.

  • no_highlight (bool) – If true, disables highlighting support. Also implied by no_term_offsets.

  • no_term_frequencies (bool) – If true, term frequencies will not be saved in the index.

  • skip_initial_scan (bool) – If true, the initial scan and indexing will be skipped.

delete_document(doc_id, conn=None, delete_actual_document=False)[source]#

Delete a document from the index. Returns 1 if the document was deleted, 0 if not.

Parameters

  • delete_actual_document: if set to True, RediSearch also deletes the actual document if it is in the index

dict_add(name, *terms)[source]#

Adds terms to a dictionary.

Parameters

  • name: Dictionary name.

  • terms: List of items for adding to the dictionary.

For more information see FT.DICTADD.

Parameters
  • name (str) –

  • terms (List[str]) –

dict_del(name, *terms)[source]#

Deletes terms from a dictionary.

Parameters

  • name: Dictionary name.

  • terms: List of items for removing from the dictionary.

For more information see FT.DICTDEL.

Parameters
  • name (str) –

  • terms (List[str]) –

dict_dump(name)[source]#

Dumps all terms in the given dictionary.

Parameters

  • name: Dictionary name.

For more information see FT.DICTDUMP.

Parameters

name (str) –

dropindex(delete_documents=False)[source]#

Drop the index if it exists. Replaces drop_index in RediSearch 2.0. The default behavior was changed to not delete the indexed documents.

Parameters

  • delete_documents: If True, all documents will be deleted.

For more information see FT.DROPINDEX.

Parameters

delete_documents (bool) –

explain(query, query_params=None)[source]#

Returns the execution plan for a complex query.

For more information see FT.EXPLAIN.

Parameters
  • query (Union[str, Query]) –

  • query_params (Optional[Dict[str, Union[str, int, float, bytes]]]) –

get(*ids)[source]#

Returns the full contents of multiple documents.

Parameters

  • ids: the ids of the saved documents.

Execute a hybrid search using both text and vector queries.

Parameters
  • query (HybridQuery) – contains the text and vector queries

  • combine_method (CombineResultsMethod) – contains the combine method and parameters

  • post_processing (HybridPostProcessingConfig) – contains the post-processing configuration

  • params_substitution (Dict[str, Union[str, int, float, bytes]]) – contains the parameter substitutions

  • timeout (int) – the timeout in milliseconds

  • cursor (HybridCursorQuery) – contains the cursor configuration

Return type

Union[HybridResult, HybridCursorResult, Pipeline]

For more information see FT.HYBRID <https://redis.io/commands/ft.hybrid>.

info()[source]#

Get info and stats about the current index, including the number of documents, memory consumption, etc.

For more information see FT.INFO.

load_document(id, field_encodings=None)[source]#

Load a single document by id

  • field_encodings: optional dict mapping field names to encodings. If a field’s encoding is None the raw bytes value is preserved (useful for binary data such as vectors).

Parameters

field_encodings (Optional[Dict[str, Any]]) –

profile(query, limited=False, query_params=None)[source]#

Performs a search or aggregate command and collects performance information.

Parameters

  • query: This can be either an AggregateRequest or Query.

  • limited: If set to True, removes details of reader iterator.

  • query_params: Define one or more value parameters. Each parameter has a name and a value.

Parameters
  • query (Union[Query, AggregateRequest]) –

  • limited (bool) –

  • query_params (Optional[Dict[str, Union[str, int, float, bytes]]]) –

Return type

Union[tuple[Union[redis.commands.search.result.Result, redis.commands.search.aggregation.AggregateResult], redis.commands.search.profile_information.ProfileInformation], ProfileInformation]

search(query, query_params=None)[source]#

Search the index for a given query, and return a result of documents

Parameters

  • query: the search query. Either a text string for simple queries with default parameters, or a Query object for complex queries. See RediSearch’s documentation on query format.

For more information see FT.SEARCH.

Parameters
  • query (Union[str, Query]) –

  • query_params (Optional[Dict[str, Union[str, int, float, bytes]]]) –

spellcheck(query, distance=None, include=None, exclude=None)[source]#

Issue a spellcheck query.

Parameters
  • query – search query.

  • distance – the maximal Levenshtein distance for spelling suggestions (default: 1, max: 4).

  • include – specifies an inclusion custom dictionary.

  • exclude – specifies an exclusion custom dictionary.

For more information see FT.SPELLCHECK.

sugadd(key, *suggestions, **kwargs)[source]#

Add suggestion terms to the AutoCompleter engine. Each suggestion has a score and string. If kwargs[“increment”] is true and the terms are already in the server’s dictionary, we increment their scores.

For more information see FT.SUGADD.

sugdel(key, string)[source]#

Delete a string from the AutoCompleter index. Returns 1 if the string was found and deleted, 0 otherwise.

For more information see FT.SUGDEL.

Parameters
  • key (str) –

  • string (str) –

Return type

int

sugget(key, prefix, fuzzy=False, num=10, with_scores=False, with_payloads=False)[source]#

Get a list of suggestions from the AutoCompleter, for a given prefix.

Parameters
  • prefix (str) – The prefix we are searching. Must be valid ascii or utf-8.

  • fuzzy (bool) – If set to True, the prefix search is done in fuzzy mode. NOTE: Running fuzzy searches on short (<3 letters) prefixes can be very slow, and may even scan the entire index.

  • num (int) – The maximum number of results to return. Note that fewer may be returned, since the algorithm trims irrelevant suggestions.

  • with_scores (bool) – If set to True, also return the (refactored) score of each suggestion. This is normally not needed, and is NOT the original score inserted into the index.

  • with_payloads (bool) – Return suggestion payloads.

Returns

A list of Suggestion objects. If with_scores was False, the score of all suggestions is 1.

For more information see FT.SUGGET.

Parameters
  • key (str) –

  • prefix (str) –

  • fuzzy (bool) –

  • num (int) –

  • with_scores (bool) –

  • with_payloads (bool) –

Return type

List[SuggestionParser]

suglen(key)[source]#

Return the number of entries in the AutoCompleter index.

For more information see FT.SUGLEN.

Parameters

key (str) –

Return type

int

syndump()[source]#

Dumps the contents of a synonym group.

The command is used to dump the synonyms data structure. Returns a list of synonym terms and their synonym group ids.

For more information see FT.SYNDUMP.

synupdate(groupid, skipinitial=False, *terms)[source]#

Updates a synonym group. The command is used to create or update a synonym group with additional terms. Only documents which were indexed after the update will be affected.

Parameters
  • groupid – Synonym group id.

  • skipinitial (bool) – If set to True, we do not scan and index.

  • terms – The terms.

For more information see FT.SYNUPDATE.

Parameters
  • groupid (str) –

  • skipinitial (bool) –

  • terms (List[str]) –

tagvals(tagfield)[source]#

Return a list of all possible tag values

Parameters

  • tagfield: Tag field name

For more information see FT.TAGVALS.

Parameters

tagfield (str) –


RedisTimeSeries Commands#

These are the commands for interacting with the RedisTimeSeries module. Below is a brief example, as well as documentation on the commands themselves.

Create a timeseries object with 5 second retention

import redis
r = redis.Redis()
r.ts().create(2, retention_msecs=5000)
class redis.commands.timeseries.commands.TimeSeriesCommands[source]#

RedisTimeSeries Commands.

add(key: KeyT, timestamp: int | str, value: Number | str, retention_msecs: int | None = None, uncompressed: bool | None = False, labels: Dict[str, str] | None = None, chunk_size: int | None = None, duplicate_policy: str | None = None, ignore_max_time_diff: int | None = None, ignore_max_val_diff: Number | None = None, on_duplicate: str | None = None) int[source]#
add(key: KeyT, timestamp: int | str, value: Number | str, retention_msecs: int | None = None, uncompressed: bool | None = False, labels: Dict[str, str] | None = None, chunk_size: int | None = None, duplicate_policy: str | None = None, ignore_max_time_diff: int | None = None, ignore_max_val_diff: Number | None = None, on_duplicate: str | None = None) Awaitable[int]

Append a sample to a time series. When the specified key does not exist, a new time series is created.

For more information see https://redis.io/commands/ts.add/

Parameters
  • key – The time-series key.

  • timestamp – Timestamp of the sample. * can be used for automatic timestamp (using the system clock).

  • value – Numeric data value of the sample.

  • retention_msecs – Maximum age for samples, compared to the highest reported timestamp in milliseconds. If None or 0 is passed, the series is not trimmed at all.

  • uncompressed – Changes data storage from compressed (default) to uncompressed.

  • labels – A dictionary of label-value pairs that represent metadata labels of the key.

  • chunk_size – Memory size, in bytes, allocated for each data chunk. Must be a multiple of 8 in the range [48..1048576]. In earlier versions of the module the minimum value was different.

  • duplicate_policy

    Policy for handling multiple samples with identical timestamps. Can be one of:

    • ’block’: An error will occur and the new value will be ignored.

    • ’first’: Ignore the new value.

    • ’last’: Override with the latest value.

    • ’min’: Only override if the value is lower than the existing value.

    • ’max’: Only override if the value is higher than the existing value.

    • ’sum’: If a previous sample exists, add the new sample to it so that the updated value is equal to (previous + new). If no previous sample exists, set the updated value equal to the new value.

  • ignore_max_time_diff – A non-negative integer value, in milliseconds, that sets an ignore threshold for added timestamps. If the difference between the last timestamp and the new timestamp is lower than this threshold, the new entry is ignored. Only applicable if duplicate_policy is set to last, and if ignore_max_val_diff is also set. Available since RedisTimeSeries version 1.12.0.

  • ignore_max_val_diff – A non-negative floating point value, that sets an ignore threshold for added values. If the difference between the last value and the new value is lower than this threshold, the new entry is ignored. Only applicable if duplicate_policy is set to last, and if ignore_max_time_diff is also set. Available since RedisTimeSeries version 1.12.0.

  • on_duplicate – Use a specific duplicate policy for the specified timestamp. Overrides the duplicate policy set by duplicate_policy.

alter(key: KeyT, retention_msecs: int | None = None, labels: Dict[str, str] | None = None, chunk_size: int | None = None, duplicate_policy: str | None = None, ignore_max_time_diff: int | None = None, ignore_max_val_diff: Number | None = None) bool[source]#
alter(key: KeyT, retention_msecs: int | None = None, labels: Dict[str, str] | None = None, chunk_size: int | None = None, duplicate_policy: str | None = None, ignore_max_time_diff: int | None = None, ignore_max_val_diff: Number | None = None) Awaitable[bool]

Update an existing time series.

For more information see https://redis.io/commands/ts.alter/

Parameters
  • key – The time-series key.

  • retention_msecs – Maximum age for samples, compared to the highest reported timestamp in milliseconds. If None or 0 is passed, the series is not trimmed at all.

  • labels – A dictionary of label-value pairs that represent metadata labels of the key.

  • chunk_size – Memory size, in bytes, allocated for each data chunk. Must be a multiple of 8 in the range [48..1048576]. In earlier versions of the module the minimum value was different. Changing this value does not affect existing chunks.

  • duplicate_policy

    Policy for handling multiple samples with identical timestamps. Can be one of:

    • ’block’: An error will occur and the new value will be ignored.

    • ’first’: Ignore the new value.

    • ’last’: Override with the latest value.

    • ’min’: Only override if the value is lower than the existing value.

    • ’max’: Only override if the value is higher than the existing value.

    • ’sum’: If a previous sample exists, add the new sample to it so that the updated value is equal to (previous + new). If no previous sample exists, set the updated value equal to the new value.

  • ignore_max_time_diff – A non-negative integer value, in milliseconds, that sets an ignore threshold for added timestamps. If the difference between the last timestamp and the new timestamp is lower than this threshold, the new entry is ignored. Only applicable if duplicate_policy is set to last, and if ignore_max_val_diff is also set. Available since RedisTimeSeries version 1.12.0.

  • ignore_max_val_diff – A non-negative floating point value, that sets an ignore threshold for added values. If the difference between the last value and the new value is lower than this threshold, the new entry is ignored. Only applicable if duplicate_policy is set to last, and if ignore_max_time_diff is also set. Available since RedisTimeSeries version 1.12.0.

create(key: KeyT, retention_msecs: int | None = None, uncompressed: bool | None = False, labels: Dict[str, str] | None = None, chunk_size: int | None = None, duplicate_policy: str | None = None, ignore_max_time_diff: int | None = None, ignore_max_val_diff: Number | None = None) bool[source]#
create(key: KeyT, retention_msecs: int | None = None, uncompressed: bool | None = False, labels: Dict[str, str] | None = None, chunk_size: int | None = None, duplicate_policy: str | None = None, ignore_max_time_diff: int | None = None, ignore_max_val_diff: Number | None = None) Awaitable[bool]

Create a new time-series.

For more information see https://redis.io/commands/ts.create/

Parameters
  • key – The time-series key.

  • retention_msecs – Maximum age for samples, compared to the highest reported timestamp in milliseconds. If None or 0 is passed, the series is not trimmed at all.

  • uncompressed – Changes data storage from compressed (default) to uncompressed.

  • labels – A dictionary of label-value pairs that represent metadata labels of the key.

  • chunk_size – Memory size, in bytes, allocated for each data chunk. Must be a multiple of 8 in the range [48..1048576]. In earlier versions of the module the minimum value was different.

  • duplicate_policy

    Policy for handling multiple samples with identical timestamps. Can be one of:

    • ’block’: An error will occur and the new value will be ignored.

    • ’first’: Ignore the new value.

    • ’last’: Override with the latest value.

    • ’min’: Only override if the value is lower than the existing value.

    • ’max’: Only override if the value is higher than the existing value.

    • ’sum’: If a previous sample exists, add the new sample to it so that the updated value is equal to (previous + new). If no previous sample exists, set the updated value equal to the new value.

  • ignore_max_time_diff – A non-negative integer value, in milliseconds, that sets an ignore threshold for added timestamps. If the difference between the last timestamp and the new timestamp is lower than this threshold, the new entry is ignored. Only applicable if duplicate_policy is set to last, and if ignore_max_val_diff is also set. Available since RedisTimeSeries version 1.12.0.

  • ignore_max_val_diff – A non-negative floating point value, that sets an ignore threshold for added values. If the difference between the last value and the new value is lower than this threshold, the new entry is ignored. Only applicable if duplicate_policy is set to last, and if ignore_max_time_diff is also set. Available since RedisTimeSeries version 1.12.0.

createrule(source_key: KeyT, dest_key: KeyT, aggregation_type: str, bucket_size_msec: int, align_timestamp: int | None = None) bool[source]#
createrule(source_key: KeyT, dest_key: KeyT, aggregation_type: str, bucket_size_msec: int, align_timestamp: int | None = None) Awaitable[bool]

Create a compaction rule from values added to source_key into dest_key.

For more information see https://redis.io/commands/ts.createrule/

Parameters
  • source_key – Key name for source time series.

  • dest_key – Key name for destination (compacted) time series.

  • aggregation_type – Aggregation type. One of the following: [avg, sum, min, max, range, count, first, last, std.p, std.s, var.p, var.s, twa, countNaN, countAll]

  • bucket_size_msec – Duration of each bucket, in milliseconds.

  • align_timestamp – Assure that there is a bucket that starts at exactly align_timestamp and align all other buckets accordingly.

decrby(key: KeyT, value: Number, timestamp: int | str | None = None, retention_msecs: int | None = None, uncompressed: bool | None = False, labels: Dict[str, str] | None = None, chunk_size: int | None = None, duplicate_policy: str | None = None, ignore_max_time_diff: int | None = None, ignore_max_val_diff: Number | None = None) int[source]#
decrby(key: KeyT, value: Number, timestamp: int | str | None = None, retention_msecs: int | None = None, uncompressed: bool | None = False, labels: Dict[str, str] | None = None, chunk_size: int | None = None, duplicate_policy: str | None = None, ignore_max_time_diff: int | None = None, ignore_max_val_diff: Number | None = None) Awaitable[int]

Decrement the value of the latest sample of a series. When the specified key does not exist, a new time series is created.

This command can be used as a counter or gauge that automatically gets history as a time series.

For more information see https://redis.io/commands/ts.decrby/

Parameters
  • key – The time-series key.

  • value – Numeric value to subtract (subtrahend).

  • timestamp – Timestamp of the sample. * can be used for automatic timestamp (using the system clock). timestamp must be equal to or higher than the maximum existing timestamp in the series. When equal, the value of the sample with the maximum existing timestamp is decreased. If it is higher, a new sample with a timestamp set to timestamp is created, and its value is set to the value of the sample with the maximum existing timestamp minus subtrahend.

  • retention_msecs – Maximum age for samples, compared to the highest reported timestamp in milliseconds. If None or 0 is passed, the series is not trimmed at all.

  • uncompressed – Changes data storage from compressed (default) to uncompressed.

  • labels – A dictionary of label-value pairs that represent metadata labels of the key.

  • chunk_size – Memory size, in bytes, allocated for each data chunk. Must be a multiple of 8 in the range [48..1048576]. In earlier versions of the module the minimum value was different.

  • duplicate_policy

    Policy for handling multiple samples with identical timestamps. Can be one of:

    • ’block’: An error will occur and the new value will be ignored.

    • ’first’: Ignore the new value.

    • ’last’: Override with the latest value.

    • ’min’: Only override if the value is lower than the existing value.

    • ’max’: Only override if the value is higher than the existing value.

    • ’sum’: If a previous sample exists, add the new sample to it so that the updated value is equal to (previous + new). If no previous sample exists, set the updated value equal to the new value.

  • ignore_max_time_diff – A non-negative integer value, in milliseconds, that sets an ignore threshold for added timestamps. If the difference between the last timestamp and the new timestamp is lower than this threshold, the new entry is ignored. Only applicable if duplicate_policy is set to last, and if ignore_max_val_diff is also set. Available since RedisTimeSeries version 1.12.0.

  • ignore_max_val_diff – A non-negative floating point value, that sets an ignore threshold for added values. If the difference between the last value and the new value is lower than this threshold, the new entry is ignored. Only applicable if duplicate_policy is set to last, and if ignore_max_time_diff is also set. Available since RedisTimeSeries version 1.12.0.

Returns

The timestamp of the sample that was modified or added.

delete(key: KeyT, from_time: int, to_time: int) int[source]#
delete(key: KeyT, from_time: int, to_time: int) Awaitable[int]

Delete all samples between two timestamps for a given time series.

The given timestamp interval is closed (inclusive), meaning that samples whose timestamp equals from_time or to_time are also deleted.

For more information see https://redis.io/commands/ts.del/

Parameters
  • key – The time-series key.

  • from_time – Start timestamp for the range deletion.

  • to_time – End timestamp for the range deletion.

Returns

The number of samples deleted.

deleterule(source_key: KeyT, dest_key: KeyT) bool[source]#
deleterule(source_key: KeyT, dest_key: KeyT) Awaitable[bool]

Delete a compaction rule from source_key to dest_key.

For more information see https://redis.io/commands/ts.deleterule/

get(key: KeyT, latest: bool | None = False) TimeSeriesSample | None[source]#
get(key: KeyT, latest: bool | None = False) Awaitable[TimeSeriesSample | None]

Get the last sample of key.

For more information see https://redis.io/commands/ts.get/

Parameters

latest – Used when a time series is a compaction, reports the compacted value of the latest (possibly partial) bucket.

incrby(key: KeyT, value: Number, timestamp: int | str | None = None, retention_msecs: int | None = None, uncompressed: bool | None = False, labels: Dict[str, str] | None = None, chunk_size: int | None = None, duplicate_policy: str | None = None, ignore_max_time_diff: int | None = None, ignore_max_val_diff: Number | None = None) int[source]#
incrby(key: KeyT, value: Number, timestamp: int | str | None = None, retention_msecs: int | None = None, uncompressed: bool | None = False, labels: Dict[str, str] | None = None, chunk_size: int | None = None, duplicate_policy: str | None = None, ignore_max_time_diff: int | None = None, ignore_max_val_diff: Number | None = None) Awaitable[int]

Increment the value of the latest sample of a series. When the specified key does not exist, a new time series is created.

This command can be used as a counter or gauge that automatically gets history as a time series.

For more information see https://redis.io/commands/ts.incrby/

Parameters
  • key – The time-series key.

  • value – Numeric value to be added (addend).

  • timestamp – Timestamp of the sample. * can be used for automatic timestamp (using the system clock). timestamp must be equal to or higher than the maximum existing timestamp in the series. When equal, the value of the sample with the maximum existing timestamp is increased. If it is higher, a new sample with a timestamp set to timestamp is created, and its value is set to the value of the sample with the maximum existing timestamp plus the addend.

  • retention_msecs – Maximum age for samples, compared to the highest reported timestamp in milliseconds. If None or 0 is passed, the series is not trimmed at all.

  • uncompressed – Changes data storage from compressed (default) to uncompressed.

  • labels – A dictionary of label-value pairs that represent metadata labels of the key.

  • chunk_size – Memory size, in bytes, allocated for each data chunk. Must be a multiple of 8 in the range [48..1048576]. In earlier versions of the module the minimum value was different.

  • duplicate_policy

    Policy for handling multiple samples with identical timestamps. Can be one of:

    • ’block’: An error will occur and the new value will be ignored.

    • ’first’: Ignore the new value.

    • ’last’: Override with the latest value.

    • ’min’: Only override if the value is lower than the existing value.

    • ’max’: Only override if the value is higher than the existing value.

    • ’sum’: If a previous sample exists, add the new sample to it so that the updated value is equal to (previous + new). If no previous sample exists, set the updated value equal to the new value.

  • ignore_max_time_diff – A non-negative integer value, in milliseconds, that sets an ignore threshold for added timestamps. If the difference between the last timestamp and the new timestamp is lower than this threshold, the new entry is ignored. Only applicable if duplicate_policy is set to last, and if ignore_max_val_diff is also set. Available since RedisTimeSeries version 1.12.0.

  • ignore_max_val_diff – A non-negative floating point value, that sets an ignore threshold for added values. If the difference between the last value and the new value is lower than this threshold, the new entry is ignored. Only applicable if duplicate_policy is set to last, and if ignore_max_time_diff is also set. Available since RedisTimeSeries version 1.12.0.

Returns

The timestamp of the sample that was modified or added.

info(key: KeyT) TSInfo | dict[str, Any][source]#
info(key: KeyT) Awaitable[TSInfo | dict[str, Any]]

Get information of key.

For more information see https://redis.io/commands/ts.info/

madd(ktv_tuples: List[Tuple[KeyT, int | str, Number | str]]) list[int][source]#
madd(ktv_tuples: List[Tuple[KeyT, int | str, Number | str]]) Awaitable[list[int]]

Append new samples to one or more time series.

Each time series must already exist.

The method expects a list of tuples. Each tuple should contain three elements: (key, timestamp, value). The value will be appended to the time series identified by ‘key’, at the given ‘timestamp’.

For more information see https://redis.io/commands/ts.madd/

Parameters

ktv_tuples

A list of tuples, where each tuple contains:
  • key: The key of the time series.

  • timestamp: The timestamp at which the value should be appended.

  • value: The value to append to the time series.

Returns

A list that contains, for each sample, either the timestamp that was used, or an error, if the sample could not be added.

mget(filters: List[str], with_labels: bool | None = False, select_labels: List[str] | None = None, latest: bool | None = False) list[Any] | dict[str, list[Any]][source]#
mget(filters: List[str], with_labels: bool | None = False, select_labels: List[str] | None = None, latest: bool | None = False) Awaitable[list[Any] | dict[str, list[Any]]]

Get the last samples matching the specific filter.

For more information see https://redis.io/commands/ts.mget/

Parameters
  • filters – Filter to match the time-series labels.

  • with_labels – Include in the reply all label-value pairs representing metadata labels of the time series.

  • select_labels – Include in the reply only a subset of the key-value pair labels of the time series.

  • latest – Used when a time series is a compaction, reports the compacted value of the latest possibly partial bucket.

mrange(from_time: int | str, to_time: int | str, filters: List[str], count: int | None = None, aggregation_type: str | list[str] | None = None, bucket_size_msec: int | None = 0, with_labels: bool | None = False, filter_by_ts: List[int] | None = None, filter_by_min_value: int | None = None, filter_by_max_value: int | None = None, groupby: str | None = None, reduce: str | None = None, select_labels: List[str] | None = None, align: int | str | None = None, latest: bool | None = False, bucket_timestamp: str | None = None, empty: bool | None = False) TimeSeriesMRangeResponse[source]#
mrange(from_time: int | str, to_time: int | str, filters: List[str], count: int | None = None, aggregation_type: str | list[str] | None = None, bucket_size_msec: int | None = 0, with_labels: bool | None = False, filter_by_ts: List[int] | None = None, filter_by_min_value: int | None = None, filter_by_max_value: int | None = None, groupby: str | None = None, reduce: str | None = None, select_labels: List[str] | None = None, align: int | str | None = None, latest: bool | None = False, bucket_timestamp: str | None = None, empty: bool | None = False) Awaitable[TimeSeriesMRangeResponse]

Query a range across multiple time-series by filters in forward direction.

For more information see https://redis.io/commands/ts.mrange/

Parameters
  • from_time – Start timestamp for the range query. - can be used to express the minimum possible timestamp (0).

  • to_time – End timestamp for range query, + can be used to express the maximum possible timestamp.

  • filters – Filter to match the time-series labels.

  • count – Limits the number of returned samples.

  • aggregation_type – Optional aggregation type. Can be a single string or a list of strings for multiple aggregators (requires Redis 8.8+). Valid values: [avg, sum, min, max, range, count, first, last, std.p, std.s, var.p, var.s, twa, countNaN, countAll]. When a list is passed, each sample in the response contains values in the same order as the specified aggregators. Note: GROUPBY is not allowed when multiple aggregators are specified.

  • bucket_size_msec – Time bucket for aggregation in milliseconds.

  • with_labels – Include in the reply all label-value pairs representing metadata labels of the time series.

  • filter_by_ts – List of timestamps to filter the result by specific timestamps.

  • filter_by_min_value – Filter result by minimum value (must also specify filter_by_max_value).

  • filter_by_max_value – Filter result by maximum value (must also specify filter_by_min_value).

  • groupby – Group the results by this label (must also specify reduce).

  • reduce – Reducer function to apply on each group. Can be one of [avg, sum, min, max, range, count, std.p, std.s, var.p, var.s].

  • select_labels – Include in the reply only a subset of the key-value pair labels of a series.

  • align – Timestamp for alignment control for aggregation.

  • latest – Used when a time series is a compaction, reports the compacted value of the latest possibly partial bucket.

  • bucket_timestamp – Controls how bucket timestamps are reported. Can be one of [-, low, +, high, ~, mid].

  • empty – Reports aggregations for empty buckets.

mrevrange(from_time: int | str, to_time: int | str, filters: List[str], count: int | None = None, aggregation_type: str | list[str] | None = None, bucket_size_msec: int | None = 0, with_labels: bool | None = False, filter_by_ts: List[int] | None = None, filter_by_min_value: int | None = None, filter_by_max_value: int | None = None, groupby: str | None = None, reduce: str | None = None, select_labels: List[str] | None = None, align: int | str | None = None, latest: bool | None = False, bucket_timestamp: str | None = None, empty: bool | None = False) TimeSeriesMRangeResponse[source]#
mrevrange(from_time: int | str, to_time: int | str, filters: List[str], count: int | None = None, aggregation_type: str | list[str] | None = None, bucket_size_msec: int | None = 0, with_labels: bool | None = False, filter_by_ts: List[int] | None = None, filter_by_min_value: int | None = None, filter_by_max_value: int | None = None, groupby: str | None = None, reduce: str | None = None, select_labels: List[str] | None = None, align: int | str | None = None, latest: bool | None = False, bucket_timestamp: str | None = None, empty: bool | None = False) Awaitable[TimeSeriesMRangeResponse]

Query a range across multiple time-series by filters in reverse direction.

For more information see https://redis.io/commands/ts.mrevrange/

Parameters
  • from_time – Start timestamp for the range query. ‘-’ can be used to express the minimum possible timestamp (0).

  • to_time – End timestamp for range query, ‘+’ can be used to express the maximum possible timestamp.

  • filters – Filter to match the time-series labels.

  • count – Limits the number of returned samples.

  • aggregation_type – Optional aggregation type. Can be a single string or a list of strings for multiple aggregators (requires Redis 8.8+). Valid values: [avg, sum, min, max, range, count, first, last, std.p, std.s, var.p, var.s, twa, countNaN, countAll]. When a list is passed, each sample in the response contains values in the same order as the specified aggregators. Note: GROUPBY is not allowed when multiple aggregators are specified.

  • bucket_size_msec – Time bucket for aggregation in milliseconds.

  • with_labels – Include in the reply all label-value pairs representing metadata labels of the time series.

  • filter_by_ts – List of timestamps to filter the result by specific timestamps.

  • filter_by_min_value – Filter result by minimum value (must also specify filter_by_max_value).

  • filter_by_max_value – Filter result by maximum value (must also specify filter_by_min_value).

  • groupby – Group the results by this label (must also specify reduce).

  • reduce – Reducer function to apply on each group. Can be one of [avg, sum, min, max, range, count, std.p, std.s, var.p, var.s].

  • select_labels – Include in the reply only a subset of the key-value pair labels of a series.

  • align – Timestamp for alignment control for aggregation.

  • latest – Used when a time series is a compaction, reports the compacted value of the latest possibly partial bucket.

  • bucket_timestamp – Controls how bucket timestamps are reported. Can be one of [-, low, +, high, ~, mid].

  • empty – Reports aggregations for empty buckets.

queryindex(filters: List[str]) list[bytes | str][source]#
queryindex(filters: List[str]) Awaitable[list[bytes | str]]

Get all time series keys matching the filter list.

For more information see https://redis.io/commands/ts.queryindex/

range(key: KeyT, from_time: int | str, to_time: int | str, count: int | None = None, aggregation_type: str | list[str] | None = None, bucket_size_msec: int | None = 0, filter_by_ts: List[int] | None = None, filter_by_min_value: int | None = None, filter_by_max_value: int | None = None, align: int | str | None = None, latest: bool | None = False, bucket_timestamp: str | None = None, empty: bool | None = False) TimeSeriesRangeResponse[source]#
range(key: KeyT, from_time: int | str, to_time: int | str, count: int | None = None, aggregation_type: str | list[str] | None = None, bucket_size_msec: int | None = 0, filter_by_ts: List[int] | None = None, filter_by_min_value: int | None = None, filter_by_max_value: int | None = None, align: int | str | None = None, latest: bool | None = False, bucket_timestamp: str | None = None, empty: bool | None = False) Awaitable[TimeSeriesRangeResponse]

Query a range in forward direction for a specific time-series.

For more information see https://redis.io/commands/ts.range/

Parameters
  • key – Key name for timeseries.

  • from_time – Start timestamp for the range query. - can be used to express the minimum possible timestamp (0).

  • to_time – End timestamp for range query, + can be used to express the maximum possible timestamp.

  • count – Limits the number of returned samples.

  • aggregation_type – Optional aggregation type. Can be a single string or a list of strings for multiple aggregators (requires Redis 8.8+). Valid values: [avg, sum, min, max, range, count, first, last, std.p, std.s, var.p, var.s, twa, countNaN, countAll]. When a list is passed, each sample in the response contains values in the same order as the specified aggregators.

  • bucket_size_msec – Time bucket for aggregation in milliseconds.

  • filter_by_ts – List of timestamps to filter the result by specific timestamps.

  • filter_by_min_value – Filter result by minimum value (must also specify filter_by_max_value).

  • filter_by_max_value – Filter result by maximum value (must also specify filter_by_min_value).

  • align – Timestamp for alignment control for aggregation.

  • latest – Used when a time series is a compaction, reports the compacted value of the latest possibly partial bucket.

  • bucket_timestamp – Controls how bucket timestamps are reported. Can be one of [-, low, +, high, ~, mid].

  • empty – Reports aggregations for empty buckets.

revrange(key: KeyT, from_time: int | str, to_time: int | str, count: int | None = None, aggregation_type: str | list[str] | None = None, bucket_size_msec: int | None = 0, filter_by_ts: List[int] | None = None, filter_by_min_value: int | None = None, filter_by_max_value: int | None = None, align: int | str | None = None, latest: bool | None = False, bucket_timestamp: str | None = None, empty: bool | None = False) TimeSeriesRangeResponse[source]#
revrange(key: KeyT, from_time: int | str, to_time: int | str, count: int | None = None, aggregation_type: str | list[str] | None = None, bucket_size_msec: int | None = 0, filter_by_ts: List[int] | None = None, filter_by_min_value: int | None = None, filter_by_max_value: int | None = None, align: int | str | None = None, latest: bool | None = False, bucket_timestamp: str | None = None, empty: bool | None = False) Awaitable[TimeSeriesRangeResponse]

Query a range in reverse direction for a specific time-series.

Note: This command is only available since RedisTimeSeries >= v1.4

For more information see https://redis.io/commands/ts.revrange/

Parameters
  • key – Key name for timeseries.

  • from_time – Start timestamp for the range query. - can be used to express the minimum possible timestamp (0).

  • to_time – End timestamp for range query, + can be used to express the maximum possible timestamp.

  • count – Limits the number of returned samples.

  • aggregation_type – Optional aggregation type. Can be a single string or a list of strings for multiple aggregators (requires Redis 8.8+). Valid values: [avg, sum, min, max, range, count, first, last, std.p, std.s, var.p, var.s, twa, countNaN, countAll]. When a list is passed, each sample in the response contains values in the same order as the specified aggregators.

  • bucket_size_msec – Time bucket for aggregation in milliseconds.

  • filter_by_ts – List of timestamps to filter the result by specific timestamps.

  • filter_by_min_value – Filter result by minimum value (must also specify filter_by_max_value).

  • filter_by_max_value – Filter result by maximum value (must also specify filter_by_min_value).

  • align – Timestamp for alignment control for aggregation.

  • latest – Used when a time series is a compaction, reports the compacted value of the latest possibly partial bucket.

  • bucket_timestamp – Controls how bucket timestamps are reported. Can be one of [-, low, +, high, ~, mid].

  • empty – Reports aggregations for empty buckets.