Commands

1 - ACL

A container for Access Control List commands

This is a container command for Access Control List commands.

To see the list of available commands you can call ACL HELP.

2 - ACL CAT

List the ACL categories or the commands inside a category

The command shows the available ACL categories if called without arguments. If a category name is given, the command shows all the Redis commands in the specified category.

ACL categories are useful for creating ACL rules that include or exclude large sets of commands at once, without specifying every single command. For instance, the following rule lets the user karin perform everything but the most dangerous operations that may affect server stability:

ACL SETUSER karin on +@all -@dangerous

We first add all the commands to the set of commands that karin is able to execute, but then we remove all the dangerous commands.

Checking for all the available categories is as simple as:

> ACL CAT
 1) "keyspace"
 2) "read"
 3) "write"
 4) "set"
 5) "sortedset"
 6) "list"
 7) "hash"
 8) "string"
 9) "bitmap"
10) "hyperloglog"
11) "geo"
12) "stream"
13) "pubsub"
14) "admin"
15) "fast"
16) "slow"
17) "blocking"
18) "dangerous"
19) "connection"
20) "transaction"
21) "scripting"

Then we may want to know what commands are part of a given category:

> ACL CAT dangerous
 1) "flushdb"
 2) "acl"
 3) "slowlog"
 4) "debug"
 5) "role"
 6) "keys"
 7) "pfselftest"
 8) "client"
 9) "bgrewriteaof"
10) "replicaof"
11) "monitor"
12) "restore-asking"
13) "latency"
14) "replconf"
15) "pfdebug"
16) "bgsave"
17) "sync"
18) "config"
19) "flushall"
20) "cluster"
21) "info"
22) "lastsave"
23) "slaveof"
24) "swapdb"
25) "module"
26) "restore"
27) "migrate"
28) "save"
29) "shutdown"
30) "psync"
31) "sort"

Return

Array reply: a list of ACL categories or a list of commands inside a given category. The command may return an error if an invalid category name is given as argument.

3 - ACL DELUSER

Remove the specified ACL users and the associated rules

Delete all the specified ACL users and terminate all the connections that are authenticated with those users. Note: the special default user cannot be removed from the system; this is the user that every new connection is authenticated with. The list of users may include usernames that do not exist, in which case no operation is performed for the non-existing users.

Return

Integer reply: The number of users that were deleted. This number will not always match the number of arguments since certain users may not exist.

Examples

> ACL DELUSER antirez
1
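
Deleting multiple users, one of which does not exist (a sketch, assuming the user anna exists and ghost does not): only the existing user counts toward the reply.

> ACL DELUSER anna ghost
1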

4 - ACL DRYRUN

Determine whether a user can execute a given command, without actually executing the command.

Simulate the execution of a given command by a given user. This command can be used to test the permissions of a given user without having to enable the user or cause the side effects of running the command.

Return

Simple string reply: OK on success. Bulk string reply: An error describing why the user can't execute the command.

Examples

> ACL SETUSER VIRGINIA +SET ~*
"OK"
> ACL DRYRUN VIRGINIA SET foo bar
"OK"
> ACL DRYRUN VIRGINIA GET foo bar
"This user has no permissions to run the 'GET' command"

5 - ACL GENPASS

Generate a pseudorandom secure password to use for ACL users

ACL users need a solid password in order to authenticate to the server without security risks. Such a password does not need to be remembered by humans, but only by computers, so it can be very long and strong (unguessable by an external attacker). The ACL GENPASS command generates a password starting from /dev/urandom if available; otherwise (on systems without /dev/urandom) it uses a weaker system that is likely still better than picking a weak password by hand.

By default (if /dev/urandom is available) the password is strong and can be used for other purposes in the context of a Redis application, for instance to create unique session identifiers or other kinds of unguessable, non-colliding IDs. The password generation is also very cheap because we don't actually ask /dev/urandom for bits at every execution. At startup Redis creates a seed using /dev/urandom, then uses SHA256 in counter mode, with HMAC-SHA256(seed,counter) as the primitive, to create more random bytes as needed. This means the application developer should feel free to use ACL GENPASS to create as many secure pseudorandom strings as needed.

The command output is a hexadecimal representation of a binary string. By default it emits 256 bits (64 hex characters). The user can provide an argument, in the form of a number of bits from 1 to 1024, to change the output length. Note that the number of bits provided is always rounded up to the next multiple of 4. So, for instance, asking for just a 1-bit password will result in 4 bits being emitted, in the form of a single hex character.

Return

Bulk string reply: by default a 64-character string representing 256 bits of pseudorandom data. Otherwise, if an argument is given, the output string length is the number of specified bits (rounded up to the next multiple of 4) divided by 4.

Examples

> ACL GENPASS
"dd721260bfe1b3d9601e7fbab36de6d04e2e67b0ef1c53de59d45950db0dd3cc"

> ACL GENPASS 32
"355ef3dd"

> ACL GENPASS 5
"90"

6 - ACL GETUSER

Get the rules for a specific ACL user

The command returns all the rules defined for an existing ACL user.

Specifically, it lists the user's ACL flags, password hashes, commands, key patterns, channel patterns (Added in version 6.2) and selectors (Added in version 7.0). Additional information may be returned in the future if more metadata is added to the user.

Command rules are always returned in the same format as the one used in the ACL SETUSER command. Before version 7.0, keys and channels were returned as an array of patterns; however, as of version 7.0 they are also returned in the same format as the one used in the ACL SETUSER command. Note: This description of command rules reflects the user's effective permissions, so while it may not be identical to the set of rules used to configure the user, it is still functionally identical.

Selectors are listed in the order they were applied to the user, and include information about commands, key patterns, and channel patterns.

Return

Array reply: a list of ACL rule definitions for the user.

If the user does not exist, a Null reply is returned.

Examples

Here's an example configuration for a user

> ACL SETUSER sample on nopass +GET allkeys &* (+SET ~key2)
"OK"
> ACL GETUSER sample
1) "flags"
2) 1) "on"
   2) "allkeys"
   3) "nopass"
3) "passwords"
4) (empty array)
5) "commands"
6) "+@all"
7) "keys"
8) "~*"
9) "channels"
10) "&*"
11) "selectors"
12) 1) 1) "commands"
       6) "+SET"
       7) "keys"
       8) "~key2"
       9) "channels"
       10) "&*"

7 - ACL HELP

Show helpful text about the different subcommands

The ACL HELP command returns a helpful text describing the different subcommands.

Return

Array reply: a list of subcommands and their descriptions

8 - ACL LIST

List the current ACL rules in ACL config file format

The command shows the currently active ACL rules in the Redis server. Each line in the returned array defines a different user, and the format is the same used in the redis.conf file or the external ACL file, so you can cut and paste what is returned by the ACL LIST command directly inside a configuration file if you wish (but make sure to check ACL SAVE).

Return

An array of strings.

Examples

> ACL LIST
1) "user antirez on #9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08 ~objects:* &* +@all -@admin -@dangerous"
2) "user default on nopass ~* &* +@all"

9 - ACL LOAD

Reload the ACLs from the configured ACL file

When Redis is configured to use an ACL file (with the aclfile configuration option), this command will reload the ACLs from the file, replacing all the current ACL rules with the ones defined in the file. The command makes sure to have an all or nothing behavior, that is:

  • If every line in the file is valid, all the ACLs are loaded.
  • If one or more lines in the file are not valid, nothing is loaded, and the old ACL rules defined in the server memory continue to be used.

Return

Simple string reply: OK on success.

The command may fail with an error for several reasons: if the file is not readable, or if there is an error inside the file, in which case the error is reported to the user. Finally, the command will fail if the server is not configured to use an external ACL file.

Examples

> ACL LOAD
+OK

> ACL LOAD
-ERR /tmp/foo:1: Unknown command or category name in ACL...

10 - ACL LOG

List latest events denied because of ACLs in place

The command shows a list of recent ACL security events:

  1. Failures to authenticate a connection with AUTH or HELLO.
  2. Commands denied because they are against the current ACL rules.
  3. Commands denied because they access keys that are not allowed by the current ACL rules.

The optional argument specifies how many entries to show. By default up to ten failures are returned. The special RESET argument clears the log. Entries are displayed starting from the most recent.

Return

When called to show security events:

Array reply: a list of ACL security events.

When called with RESET:

Simple string reply: OK if the security log was cleared.

Examples

> AUTH someuser wrongpassword
(error) WRONGPASS invalid username-password pair
> ACL LOG 1
1)  1) "count"
    2) (integer) 1
    3) "reason"
    4) "auth"
    5) "context"
    6) "toplevel"
    7) "object"
    8) "AUTH"
    9) "username"
   10) "someuser"
   11) "age-seconds"
   12) "4.0960000000000001"
   13) "client-info"
   14) "id=6 addr=127.0.0.1:63026 fd=8 name= age=9 idle=0 flags=N db=0 sub=0 psub=0 multi=-1 qbuf=48 qbuf-free=32720 obl=0 oll=0 omem=0 events=r cmd=auth user=default"

11 - ACL SAVE

Save the current ACL rules in the configured ACL file

When Redis is configured to use an ACL file (with the aclfile configuration option), this command will save the currently defined ACLs from the server memory to the ACL file.

Return

Simple string reply: OK on success.

The command may fail with an error for several reasons: if the file cannot be written or if the server is not configured to use an external ACL file.

Examples

> ACL SAVE
+OK

> ACL SAVE
-ERR There was an error trying to save the ACLs. Please check the server logs for more information

12 - ACL SETUSER

Modify or create the rules for a specific ACL user

Create an ACL user with the specified rules or modify the rules of an existing user. This is the main interface for manipulating Redis ACL users interactively: if the username does not exist, the command creates the username without any privilege, then reads from left to right all the rules provided as successive arguments, setting the user ACL rules as specified.

If the user already exists, the provided ACL rules are simply applied in addition to the rules already set. For example:

ACL SETUSER virginia on allkeys +set

The above command will create a user called virginia that is active (the on rule), can access any key (allkeys rule), and can call the set command (+set rule). Then another SETUSER call can modify the user rules:

ACL SETUSER virginia +get

The above command applies the new rule to the user virginia, so in addition to SET, the user virginia will now also be able to use the GET command.

Starting from Redis 7.0, ACL rules can also be grouped into multiple distinct sets of rules, called selectors. Selectors are added by wrapping the rules in parentheses and providing them just like any other rule. In order to execute a command, either the root permissions (rules defined outside of parentheses) or any of the selectors (rules defined inside parentheses) must match the given command. For example:

ACL SETUSER virginia on +GET allkeys (+SET ~app1*)

This sets a user with two sets of permissions: one defined on the user and one defined with a selector. The root permissions only allow executing the GET command, but on any key. The selector then grants a secondary set of permissions: access to the SET command, to be executed on any key that starts with "app1". Using multiple selectors allows you to grant permissions that differ depending on what keys are being accessed.
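
The effect can be checked with ACL DRYRUN; a sketch (the key names are illustrative, and the exact error message may vary between versions):

> ACL SETUSER virginia on +GET allkeys (+SET ~app1*)
+OK
> ACL DRYRUN virginia GET anykey
"OK"
> ACL DRYRUN virginia SET app1:config somevalue
"OK"
> ACL DRYRUN virginia SET otherkey somevalue
"This user has no permissions to access the 'otherkey' key"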

When we want to be sure to define a user from scratch, without caring whether it had previously defined rules associated with it, we can use the special rule reset as the first rule, in order to flush all the other existing rules:

ACL SETUSER antirez reset [... other rules ...]

After resetting a user, it returns to the status it had when it was just created: not active (off rule), unable to execute any command, unable to access any key:

> ACL SETUSER antirez reset
+OK
> ACL LIST
1) "user antirez off -@all"

ACL rules are either words like "on", "off", "reset", and "allkeys", or special rules that start with a special character and are followed by another string (without any space in between), like "+SET".

The following documentation is a reference manual about the capabilities of this command, however our ACL tutorial may be a more gentle introduction to how the ACL system works in general.

List of rules

Redis ACL rules are split into two categories: rules that define command permissions, "Command rules", and rules that define user state, "User management rules". This is a list of all the supported Redis ACL rules:

Command rules

  • ~<pattern>: add the specified key pattern (glob style pattern, like in the KEYS command), to the list of key patterns accessible by the user. This grants both read and write permissions to keys that match the pattern. You can add multiple key patterns to the same user. Example: ~objects:*
  • %R~<pattern>: (Available in Redis 7.0 and later) Add the specified read key pattern. This behaves similar to the regular key pattern but only grants permission to read from keys that match the given pattern. See key permissions for more information.
  • %W~<pattern>: (Available in Redis 7.0 and later) Add the specified write key pattern. This behaves similar to the regular key pattern but only grants permission to write to keys that match the given pattern. See key permissions for more information.
  • %RW~<pattern>: (Available in Redis 7.0 and later) Alias for ~<pattern>.
  • allkeys: alias for ~*, it allows the user to access all the keys.
  • resetkeys: removes all the key patterns from the list of key patterns the user can access.
  • &<pattern>: (Available in Redis 6.2 and later) add the specified glob style pattern to the list of Pub/Sub channel patterns accessible by the user. You can add multiple channel patterns to the same user. Example: &chatroom:*
  • allchannels: alias for &*, it allows the user to access all Pub/Sub channels.
  • resetchannels: removes all channel patterns from the list of Pub/Sub channel patterns the user can access.
  • +<command>: Add the command to the list of commands the user can call. Can be used with | for allowing subcommands (e.g. "+config|get").
  • +@<category>: add all the commands in the specified category to the list of commands the user is able to execute. Example: +@string (adds all the string commands). For a list of categories check the ACL CAT command.
  • +<command>|first-arg: Allow a specific first argument of an otherwise disabled command. It is only supported on commands with no sub-commands, and is not allowed as negative form like -SELECT|1, only additive starting with "+". This feature is deprecated and may be removed in the future.
  • allcommands: alias of +@all. Adds all the commands present in the server, including future commands loaded via modules, to the set of commands the user can execute.
  • -<command>: Remove the command from the list of commands the user can call. Starting with Redis 7.0, it can be used with | for blocking subcommands (e.g. "-config|set").
  • -@<category>: Like +@<category> but removes all the commands in the category instead of adding them.
  • nocommands: alias for -@all. Removes all the commands, so the user will no longer be able to execute anything.
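
Putting several of the above command rules together, here is a sketch of a read-only reporting user (the username, password, and key pattern are only illustrative):

ACL SETUSER reporting on >reportpass %R~stats:* +@read -keys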

User management rules

  • on: set the user as active, it will be possible to authenticate as this user using AUTH <username> <password>.
  • off: set the user as not active; it will be impossible to log in as this user. Please note that if a user gets disabled (set to off) after there are connections already authenticated with that user, the connections will continue to work as expected. To also kill the old connections you can use CLIENT KILL with the user option. An alternative is to delete the user with ACL DELUSER, which will result in all the connections authenticated as the deleted user being disconnected.
  • nopass: the user is set as a "no password" user. It means that it will be possible to authenticate as this user with any password. By default, the special user default is set as "nopass". The nopass rule will also reset all the configured passwords for the user.
  • >password: Add the specified clear text password as a hashed password in the list of the user's passwords. Every user can have many active passwords, so that password rotation is simpler. The specified password is not stored as clear text inside the server. Example: >mypassword.
  • #<hashedpassword>: Add the specified hashed password to the list of user passwords. A Redis hashed password is hashed with SHA256 and translated into a hexadecimal string. Example: #c3ab8ff13720e8ad9047dd39466b3c8974e592c2fa383d4a3960714caef0c4f2.
  • <password: Like >password but removes the password instead of adding it.
  • !<hashedpassword>: Like #<hashedpassword> but removes the password instead of adding it.
  • (<rule list>): (Available in Redis 7.0 and later) Create a new selector to match rules against. Selectors are evaluated after the user permissions, and are evaluated according to the order they are defined. If a command matches either the user permissions or any selector, it is allowed. See selectors for more information.
  • clearselectors: (Available in Redis 7.0 and later) Delete all of the selectors attached to the user.
  • reset: Remove any capability from the user. It is set to off, without passwords, unable to execute any command, unable to access any key.
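
For example, rotating a password using >password and <password might look like this (a sketch with hypothetical credentials, assuming oldpassword was previously added):

> ACL SETUSER app1 >newpassword
+OK
> ACL SETUSER app1 <oldpassword
+OK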

Return

Simple string reply: OK on success.

If the rules contain errors, the error is returned.

Examples

> ACL SETUSER alan allkeys +@string +@set -SADD >alanpassword
+OK

> ACL SETUSER antirez heeyyyy
(error) ERR Error in ACL SETUSER modifier 'heeyyyy': Syntax error

13 - ACL USERS

List the usernames of all the configured ACL users

The command shows a list of all the usernames of the currently configured users in the Redis ACL system.

Return

An array of strings.

Examples

> ACL USERS
1) "anna"
2) "antirez"
3) "default"

14 - ACL WHOAMI

Return the name of the user associated with the current connection

Return the username the current connection is authenticated with. New connections are authenticated with the "default" user. They can change user using AUTH.

Return

Bulk string reply: the username of the current connection.

Examples

> ACL WHOAMI
"default"

15 - APPEND

Append a value to a key

If key already exists and is a string, this command appends the value at the end of the string. If key does not exist it is created and set as an empty string, so APPEND will be similar to SET in this special case.

Return

Integer reply: the length of the string after the append operation.

Examples

EXISTS mykey
APPEND mykey "Hello"
APPEND mykey " World"
GET mykey

Pattern: Time series

The APPEND command can be used to create a very compact representation of a list of fixed-size samples, usually referred to as a time series. Every time a new sample arrives we can store it using the command:

APPEND timeseries "fixed-size sample"

Accessing individual elements in the time series is not hard:

  • STRLEN can be used in order to obtain the number of samples.
  • GETRANGE allows for random access of elements. If our time series has associated time information, we can easily implement a binary search to get a range, combining GETRANGE with the Lua scripting engine available in Redis 2.6.
  • SETRANGE can be used to overwrite an existing time series.

The limitation of this pattern is that we are forced into an append-only mode of operation; there is no way to cut the time series to a given size easily, because Redis currently lacks a command able to trim string objects. However, the space efficiency of time series stored in this way is remarkable.

Hint: it is possible to switch to a different key based on the current Unix time. This way it is possible to have just a relatively small number of samples per key, to avoid dealing with very big keys, and to make this pattern more friendly to distribute across many Redis instances.
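
For instance, bucketing samples into one key per day (the key naming scheme is only illustrative):

APPEND ts:20230101 "0043"
APPEND ts:20230102 "0035"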

An example sampling the temperature of a sensor using fixed-size strings (using a binary format is better in real implementations).

APPEND ts "0043" APPEND ts "0035" GETRANGE ts 0 3 GETRANGE ts 4 7

16 - ASKING

Sent by cluster clients after an -ASK redirect

When a cluster client receives an -ASK redirect, the ASKING command is sent to the target node followed by the command which was redirected. This is normally done automatically by cluster clients.

If an -ASK redirect is received during a transaction, only one ASKING command needs to be sent to the target node before sending the complete transaction to the target node.
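
A sketch of the redirect flow (the slot number and addresses are only illustrative):

> GET foo
(error) ASK 3999 127.0.0.1:6381

Then, on 127.0.0.1:6381:

> ASKING
+OK
> GET foo
"bar"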

See ASK redirection in the Redis Cluster Specification for details.

Return

Simple string reply: OK.

17 - AUTH

Authenticate to the server

The AUTH command authenticates the current connection in two cases:

  1. If the Redis server is password protected via the requirepass option.
  2. If a Redis 6.0 instance, or greater, is using the Redis ACL system.

Redis versions prior to Redis 6 understood only the one-argument version of the command:

AUTH <password>

This form just authenticates against the password set with requirepass. In this configuration Redis will deny any command executed by the just connected clients, unless the connection gets authenticated via AUTH.

If the password provided via AUTH matches the password in the configuration file, the server replies with the OK status code and starts accepting commands. Otherwise, an error is returned and the client needs to try a new password.

When Redis ACLs are used, the command should be given in an extended way:

AUTH <username> <password>

This authenticates the current connection with one of the users defined in the ACL list (see ACL SETUSER and the official ACL guide for more information).

When ACLs are used, the single argument form of the command, where only the password is specified, assumes that the implicit username is "default".
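
A sketch of both forms (assuming requirepass is set to mypass, and an ACL user myuser exists with password mypass2):

AUTH mypass
AUTH myuser mypass2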

Security notice

Because of the high performance nature of Redis, it is possible to try a lot of passwords in parallel in very short time, so make sure to generate a strong and very long password so that this attack is infeasible. A good way to generate strong passwords is via the ACL GENPASS command.

Return

Simple string reply or an error if the password, or username/password pair, is invalid.

18 - BF.ADD

Adds an item to a Bloom Filter

Adds an item to the Bloom Filter, creating the filter if it does not exist yet. This command operates identically to BF.MADD, except that it takes a single item and returns a single value. To control the capacity, error rate, and scaling behavior of the filter created this way, create the filter with BF.RESERVE before adding items.

Parameters:

  • key: The name of the filter
  • item: The item to add

Return

Integer reply - "1" if the item did not exist in the filter, "0" otherwise.

Examples

redis> BF.ADD bf item1
(integer) 0
redis> BF.ADD bf item_new
(integer) 1

19 - BF.EXISTS

Checks whether an item exists in a Bloom Filter

Determines whether an item may exist in the Bloom Filter or not.

Parameters

  • key: The name of the filter
  • item: The item to check for

Return

Integer reply - where "1" means the item may exist in the filter, and "0" means it definitely does not exist in the filter.

Examples

redis> BF.EXISTS bf item1
(integer) 1
redis> BF.EXISTS bf item_new
(integer) 0

20 - BF.INFO

Returns information about a Bloom Filter

Returns information about the filter stored under key.

Parameters

  • key: Name of the key to return information about

Return

Array reply with information of the filter.

Examples

redis> BF.INFO bf
1) Capacity
2) (integer) 1709
3) Size
4) (integer) 2200
5) Number of filters
6) (integer) 1
7) Number of items inserted
8) (integer) 0
9) Expansion rate
10) (integer) 1

21 - BF.INSERT

Adds one or more items to a Bloom Filter. A filter will be created if it does not exist

BF.INSERT is a sugarcoated combination of BF.RESERVE and BF.ADD. It creates a new filter if the key does not exist using the relevant arguments (see BF.RESERVE). Next, all ITEMS are inserted.

Parameters

  • key: The name of the filter
  • item: One or more items to add. The ITEMS keyword must precede the list of items to add.

Optional parameters:

  • NOCREATE: (Optional) Indicates that the filter should not be created if it does not already exist. If the filter does not yet exist, an error is returned rather than creating it automatically. This may be used where a strict separation between filter creation and filter addition is desired. It is an error to specify NOCREATE together with either CAPACITY or ERROR.
  • capacity: (Optional) Specifies the desired capacity for the filter to be created. This parameter is ignored if the filter already exists. If the filter is automatically created and this parameter is absent, then the module-level capacity is used. See BF.RESERVE for more information about the impact of this value.
  • error: (Optional) Specifies the error ratio of the newly created filter if it does not yet exist. If the filter is automatically created and error is not specified then the module-level error rate is used. See BF.RESERVE for more information about the format of this value.
  • NONSCALING: Prevents the filter from creating additional sub-filters if initial capacity is reached. Non-scaling filters require slightly less memory than their scaling counterparts. The filter returns an error when capacity is reached.
  • expansion: When capacity is reached, an additional sub-filter is created. The size of the new sub-filter is the size of the last sub-filter multiplied by expansion. If the number of elements to be stored in the filter is unknown, we recommend that you use an expansion of 2 or more to reduce the number of sub-filters. Otherwise, we recommend that you use an expansion of 1 to reduce memory consumption. The default expansion value is 2.

Return

An array of booleans (integers). Each element is either true or false depending on whether the corresponding input element was newly added to the filter or may have previously existed.

Examples

Add three items to a filter with default parameters if the filter does not already exist:

BF.INSERT filter ITEMS foo bar baz

Add one item to a filter with a capacity of 10000 if the filter does not already exist:

BF.INSERT filter CAPACITY 10000 ITEMS hello

Add two items to a filter, returning an error if the filter does not already exist:

BF.INSERT filter NOCREATE ITEMS foo bar

22 - BF.LOADCHUNK

Restores a filter previously saved using SCANDUMP

Restores a filter previously saved using SCANDUMP. See the SCANDUMP command for example usage.

This command overwrites any Bloom filter stored under key. Make sure that the Bloom filter is not changed between invocations.

Parameters

  • key: Name of the key to restore
  • iter: Iterator value associated with data (returned by SCANDUMP)
  • data: Current data chunk (returned by SCANDUMP)

Return

Simple string reply: OK on success, an error otherwise.

Examples

See BF.SCANDUMP for an example.

23 - BF.MADD

Adds one or more items to a Bloom Filter. A filter will be created if it does not exist

Adds one or more items to the Bloom Filter and creates the filter if it does not exist yet. This command operates identically to BF.ADD except that it allows multiple inputs and returns multiple values.

Parameters

  • key: The name of the filter
  • item: One or more items to add

Return

Array reply of Integer replies - one for each item: "1" if the corresponding input element was newly added to the filter, "0" if it may have previously existed.

Examples

redis> BF.MADD bf item1 item2
1) (integer) 0
2) (integer) 1

24 - BF.MEXISTS

Checks whether one or more items exist in a Bloom Filter

Determines if one or more items may exist in the filter or not.

Parameters

  • key: The name of the filter
  • items: One or more items to check

Return

Array reply of Integer replies - one for each item: "1" means the corresponding item may exist in the filter, "0" means it definitely does not exist in the filter.

Examples

redis> BF.MEXISTS bf item1 item_new
1) (integer) 1
2) (integer) 0

25 - BF.RESERVE

Creates a new Bloom Filter

Creates an empty Bloom Filter with a single sub-filter for the initial capacity requested and with an upper bound error_rate. By default, the filter auto-scales by creating additional sub-filters when capacity is reached. The new sub-filter is created with size of the previous sub-filter multiplied by expansion.

Though the filter can scale up by creating sub-filters, it is recommended to reserve the estimated required capacity, since maintaining and querying sub-filters requires additional memory (each sub-filter uses extra bits and an additional hash function) and consumes more CPU time than an equivalent filter that had the right capacity at creation time.

The optimal number of hash functions is -log2(error), rounded up to an integer. The number of bits per item is the number of hash functions multiplied by ≈ 1.44.

  • 1% error rate requires 7 hash functions and 10.08 bits per item.
  • 0.1% error rate requires 10 hash functions and 14.4 bits per item.
  • 0.01% error rate requires 14 hash functions and 20.16 bits per item.

Parameters:

  • key: The key under which the filter is found
  • error_rate: The desired probability for false positives. The rate is a decimal value between 0 and 1. For example, for a desired false positive rate of 0.1% (1 in 1000), error_rate should be set to 0.001.
  • capacity: The number of entries intended to be added to the filter. If your filter allows scaling, performance will begin to degrade after adding more items than this number. The actual degradation depends on how far the limit has been exceeded. Performance degrades linearly with the number of sub-filters.

Optional parameters:

  • NONSCALING: Prevents the filter from creating additional sub-filters if initial capacity is reached. Non-scaling filters require slightly less memory than their scaling counterparts. The filter returns an error when capacity is reached.
  • EXPANSION: When capacity is reached, an additional sub-filter is created. The size of the new sub-filter is the size of the last sub-filter multiplied by expansion. If the number of elements to be stored in the filter is unknown, we recommend that you use an expansion of 2 or more to reduce the number of sub-filters. Otherwise, we recommend that you use an expansion of 1 to reduce memory consumption. The default expansion value is 2.

Return

Simple string reply: OK on success, an error otherwise (for example, when the key already exists).

Examples

redis> BF.RESERVE bf 0.01 1000
OK
redis> BF.RESERVE bf 0.01 1000
(error) ERR item exists
redis> BF.RESERVE bf_exp 0.01 1000 EXPANSION 2
OK
redis> BF.RESERVE bf_non 0.01 1000 NONSCALING
OK

26 - BF.SCANDUMP

Begins an incremental save of the bloom filter

Begins an incremental save of the bloom filter. This is useful for large bloom filters which cannot fit into the normal SAVE and RESTORE model.

The first time this command is called, the value of iter should be 0. This command returns successive (iter, data) pairs until (0, NULL) to indicate completion.

Parameters

  • key: Name of the filter
  • iter: Iterator value; either 0 or the iterator from a previous invocation of this command

Return

Array reply of (iter, data): the iterator is passed as input to the next invocation of SCANDUMP. If the iterator is 0, iteration has completed.

The iterator-data pair should also be passed to LOADCHUNK when restoring the filter.

Examples

redis> BF.RESERVE bf 0.1 10
OK
redis> BF.ADD bf item1
(integer) 1
redis> BF.SCANDUMP bf 0
1) (integer) 1
2) "\x01\x00\x00\x00\x00\x00\x00\x00\x01\x00\x00\x00\x05\x00\x00\x00\x02\x00\x00\x00\b\x00\x00\x00\x00\x00\x00\x00@\x00\x00\x00\x00\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x9a\x99\x99\x99\x99\x99\xa9?J\xf7\xd4\x9e\xde\xf0\x18@\x05\x00\x00\x00\n\x00\x00\x00\x00\x00\x00\x00\x00"
redis> BF.SCANDUMP bf 1
1) (integer) 9
2) "\x01\b\x00\x80\x00\x04 \x00"
redis> BF.SCANDUMP bf 9
1) (integer) 0
2) ""
redis> FLUSHALL
OK
redis> BF.LOADCHUNK bf 1 "\x01\x00\x00\x00\x00\x00\x00\x00\x01\x00\x00\x00\x05\x00\x00\x00\x02\x00\x00\x00\b\x00\x00\x00\x00\x00\x00\x00@\x00\x00\x00\x00\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x9a\x99\x99\x99\x99\x99\xa9?J\xf7\xd4\x9e\xde\xf0\x18@\x05\x00\x00\x00\n\x00\x00\x00\x00\x00\x00\x00\x00"
OK
redis> BF.LOADCHUNK bf 9 "\x01\b\x00\x80\x00\x04 \x00"
OK
redis> BF.EXISTS bf item1
(integer) 1

Python code (a sketch using redis-py's generic execute_command):

import redis

r = redis.Redis()

# Dump the filter in chunks
chunks = []
iter = 0
while True:
    iter, data = r.execute_command('BF.SCANDUMP', 'bf', iter)
    if iter == 0:
        break
    chunks.append((iter, data))

# Load the chunks back, restoring the filter
for iter, data in chunks:
    r.execute_command('BF.LOADCHUNK', 'bf', iter, data)

27 - BGREWRITEAOF

Asynchronously rewrite the append-only file

Instruct Redis to start an Append Only File rewrite process. The rewrite will create a small optimized version of the current Append Only File.

If BGREWRITEAOF fails, no data gets lost as the old AOF will be untouched.

The rewrite will be only triggered by Redis if there is not already a background process doing persistence.

Specifically:

  • If a Redis child is creating a snapshot on disk, the AOF rewrite is scheduled but not started until the saving child producing the RDB file terminates. In this case the BGREWRITEAOF will still return a positive status reply, but with an appropriate message. You can check whether an AOF rewrite is scheduled by looking at the INFO command as of Redis 2.6 or successive versions.
  • If an AOF rewrite is already in progress the command returns an error and no AOF rewrite will be scheduled for a later time.
  • If the AOF rewrite could start, but the attempt at starting it fails (for instance because of an error in creating the child process), an error is returned to the caller.

Since Redis 2.4 the AOF rewrite is automatically triggered by Redis, however the BGREWRITEAOF command can be used to trigger a rewrite at any time.

Please refer to the persistence documentation for detailed information.

Return

Simple string reply: A simple string reply indicating that the rewriting started or is about to start ASAP, when the call is executed with success.

The command may reply with an error in certain cases, as documented above.

28 - BGSAVE

Asynchronously save the dataset to disk

Save the DB in background.

Normally the OK code is immediately returned. Redis forks, the parent continues to serve the clients, the child saves the DB on disk then exits.

An error is returned if there is already a background save running or if there is another non-background-save process running, specifically an in-progress AOF rewrite.

If BGSAVE SCHEDULE is used, the command will immediately return OK when an AOF rewrite is in progress and schedule the background save to run at the next opportunity.

A client may be able to check if the operation succeeded using the LASTSAVE command.

Please refer to the persistence documentation for detailed information.

Return

Simple string reply: Background saving started if BGSAVE started correctly or Background saving scheduled when used with the SCHEDULE subcommand.

29 - BITCOUNT

Count set bits in a string

Count the number of set bits (population counting) in a string.

By default all the bytes contained in the string are examined. It is possible to restrict the counting operation to an interval by passing the additional arguments start and end.

Like for the GETRANGE command start and end can contain negative values in order to index bytes starting from the end of the string, where -1 is the last byte, -2 is the penultimate, and so forth.

Non-existent keys are treated as empty strings, so the command will return zero.

By default, the additional arguments start and end specify a byte index. We can use an additional argument BIT to specify a bit index. So 0 is the first bit, 1 is the second bit, and so forth. For negative values, -1 is the last bit, -2 is the penultimate, and so forth.

Return

Integer reply

The number of bits set to 1.

Examples

SET mykey "foobar" BITCOUNT mykey BITCOUNT mykey 0 0 BITCOUNT mykey 1 1 BITCOUNT mykey 1 1 BYTE BITCOUNT mykey 5 30 BIT

Pattern: real-time metrics using bitmaps

Bitmaps are a very space-efficient representation of certain kinds of information. One example is a Web application that needs the history of user visits, so that for instance it is possible to determine what users are good targets of beta features.

Using the SETBIT command this is trivial to accomplish, identifying every day with a small progressive integer. For instance day 0 is the first day the application was put online, day 1 the next day, and so forth.

Every time a user performs a page view, the application can register that in the current day the user visited the web site using the SETBIT command setting the bit corresponding to the current day.

Later it will be trivial to know the number of single days the user visited the web site, simply by calling the BITCOUNT command against the bitmap.

A similar pattern where user IDs are used instead of days is described in the article called "Fast easy realtime metrics using Redis bitmaps".

Performance considerations

In the above example of counting days, even after the application has been online for 10 years, we still have just 365*10 bits of data per user, that is just 456 bytes per user. With this amount of data BITCOUNT is still as fast as any other O(1) Redis command like GET or INCR.

When the bitmap is big, there are two alternatives:

  • Taking a separate key that is incremented every time the bitmap is modified. This can be very efficient and atomic using a small Redis Lua script.
  • Running the bitmap incrementally using the BITCOUNT start and end optional parameters, accumulating the results client-side, and optionally caching the result into a key.
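
A sketch of the incremental approach, counting a large bitmap one byte range at a time and summing the replies client-side (the key name is only illustrative):

BITCOUNT visits 0 1023
BITCOUNT visits 1024 2047
BITCOUNT visits 2048 -1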

30 - BITFIELD

Perform arbitrary bitfield integer operations on strings

The command treats a Redis string as an array of bits, and is capable of addressing specific integer fields of varying bit widths at arbitrary, not necessarily aligned, offsets. In practical terms, using this command you can set, for example, a signed 5-bit integer at bit offset 1234 to a specific value, or retrieve a 31-bit unsigned integer from offset 4567. Similarly, the command handles increments and decrements of the specified integers, providing guaranteed and well-specified overflow and underflow behavior that the user can configure.

BITFIELD is able to operate with multiple bit fields in the same command call. It takes a list of operations to perform, and returns an array of replies, where each array matches the corresponding operation in the list of arguments.

For example, the following command increments a 5-bit signed integer at bit offset 100, and gets the value of the 4-bit unsigned integer at bit offset 0:

> BITFIELD mykey INCRBY i5 100 1 GET u4 0
1) (integer) 1
2) (integer) 0

Note that:

  1. Addressing with GET bits outside the current string length (including the case the key does not exist at all) results in the operation being performed as if the missing part consisted entirely of bits set to 0.
  2. Addressing with SET or INCRBY bits outside the current string length will enlarge the string, zero-padding it as needed, to the minimal length required by the farthest bit touched.

Supported subcommands and integer encoding

The following is the list of supported commands.

  • GET <encoding> <offset> -- Returns the specified bit field.
  • SET <encoding> <offset> <value> -- Set the specified bit field and returns its old value.
  • INCRBY <encoding> <offset> <increment> -- Increments or decrements (if a negative increment is given) the specified bit field and returns the new value.

There is another subcommand that only changes the behavior of successive INCRBY and SET subcommands calls by setting the overflow behavior:

  • OVERFLOW [WRAP|SAT|FAIL]

Where an integer encoding is expected, it is composed of the letter i for signed integers or u for unsigned integers, followed by the number of bits of our integer encoding. So for example u8 is an unsigned integer of 8 bits and i16 is a signed integer of 16 bits.

The supported encodings are up to 64 bits for signed integers, and up to 63 bits for unsigned integers. This limitation with unsigned integers is due to the fact that currently the Redis protocol is unable to return 64 bit unsigned integers as replies.

Bits and positional offsets

There are two ways in order to specify offsets in the bitfield command. If a number without any prefix is specified, it is used just as a zero based bit offset inside the string.

However if the offset is prefixed with a # character, the specified offset is multiplied by the integer encoding's width, so for example:

BITFIELD mystring SET i8 #0 100 SET i8 #1 200

Will set the first i8 integer at offset 0 and the second at offset 8. This way you don't have to do the math yourself inside your client if what you want is a plain array of integers of a given size.
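
The same positional addressing works for reading the array back:

BITFIELD mystring GET i8 #0 GET i8 #1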

Overflow control

Using the OVERFLOW command the user is able to fine-tune the behavior of the increment or decrement overflow (or underflow) by specifying one of the following behaviors:

  • WRAP: wrap around, both with signed and unsigned integers. In the case of unsigned integers, wrapping is like performing the operation modulo the maximum value the integer can contain (the C standard behavior). With signed integers instead wrapping means that overflows restart towards the most negative value and underflows towards the most positive ones, so for example if an i8 integer is set to the value 127, incrementing it by 1 will yield -128.
  • SAT: uses saturation arithmetic, that is, on underflows the value is set to the minimum integer value, and on overflows to the maximum integer value. For example, incrementing an i8 integer starting from value 120 with an increment of 10 will result in the value 127, and further increments will always keep the value at 127. The same happens on underflows, but with the value blocked at the most negative value.
  • FAIL: in this mode no operation is performed on overflows or underflows detected. The corresponding return value is set to NULL to signal the condition to the caller.

Note that each OVERFLOW statement only affects the INCRBY and SET commands that follow it in the list of subcommands, up to the next OVERFLOW statement.

By default, WRAP is used if not otherwise specified.

> BITFIELD mykey incrby u2 100 1 OVERFLOW SAT incrby u2 102 1
1) (integer) 1
2) (integer) 1
> BITFIELD mykey incrby u2 100 1 OVERFLOW SAT incrby u2 102 1
1) (integer) 2
2) (integer) 2
> BITFIELD mykey incrby u2 100 1 OVERFLOW SAT incrby u2 102 1
1) (integer) 3
2) (integer) 3
> BITFIELD mykey incrby u2 100 1 OVERFLOW SAT incrby u2 102 1
1) (integer) 0
2) (integer) 3

Return value

The command returns an array with each entry being the corresponding result of the sub command given at the same position. OVERFLOW subcommands don't count as generating a reply.

The following is an example of OVERFLOW FAIL returning NULL.

> BITFIELD mykey OVERFLOW FAIL incrby u2 102 1
1) (nil)

Motivations

The motivation for this command is that the ability to store many small integers as a single large bitmap (or segmented over a few keys to avoid having huge keys) is extremely memory efficient, and opens new use cases for Redis, especially in the field of real-time analytics. These use cases are supported by the ability to specify the overflow behavior in a controlled way.

Fun fact: Reddit's 2017 April Fools' project r/place was built using the Redis BITFIELD command, in order to store the in-memory representation of the collaborative canvas.

Performance considerations

Usually BITFIELD is a fast command, however note that addressing far bits of currently short strings will trigger an allocation that may be more costly than executing the command on bits already existing.

Orders of bits

The representation used by BITFIELD considers the bitmap as having bit number 0 as the most significant bit of the first byte, and so forth. For example, setting a 5-bit unsigned integer to value 23 at offset 7 into a bitmap previously set to all zeroes will produce the following representation:

+--------+--------+
|00000001|01110000|
+--------+--------+

When offsets and integer sizes are aligned to byte boundaries, this is the same as big endian; however, when such alignment does not exist, it's important to also understand how the bits inside a byte are ordered.

31 - BITFIELD_RO

Perform arbitrary bitfield integer operations on strings. Read-only variant of BITFIELD

Read-only variant of the BITFIELD command. It is like the original BITFIELD but only accepts the GET subcommand, and can safely be used in read-only replicas.

Since the original BITFIELD has SET and INCRBY options it is technically flagged as a writing command in the Redis command table. For this reason read-only replicas in a Redis Cluster will redirect it to the master instance even if the connection is in read-only mode (see the READONLY command of Redis Cluster).

Since Redis 6.2, the BITFIELD_RO variant was introduced in order to allow BITFIELD behavior in read-only replicas without breaking compatibility on command flags.

See original BITFIELD for more details.

Examples

BITFIELD_RO hello GET i8 16

Return

Array reply: An array with each entry being the corresponding result of the subcommand given at the same position.

32 - BITOP

Perform bitwise operations between strings

Perform a bitwise operation between multiple keys (containing string values) and store the result in the destination key.

The BITOP command supports four bitwise operations: AND, OR, XOR and NOT, thus the valid forms to call the command are:

  • BITOP AND destkey srckey1 srckey2 srckey3 ... srckeyN
  • BITOP OR destkey srckey1 srckey2 srckey3 ... srckeyN
  • BITOP XOR destkey srckey1 srckey2 srckey3 ... srckeyN
  • BITOP NOT destkey srckey

As you can see, NOT is special as it only takes one input key, because it performs the inversion of bits, so it only makes sense as a unary operator.

The result of the operation is always stored at destkey.

Handling of strings with different lengths

When an operation is performed between strings having different lengths, all the strings shorter than the longest string in the set are treated as if they were zero-padded up to the length of the longest string.

The same holds true for non-existent keys, that are considered as a stream of zero bytes up to the length of the longest string.

Return

Integer reply

The size of the string stored in the destination key, that is equal to the size of the longest input string.

Examples

SET key1 "foobar" SET key2 "abcdef" BITOP AND dest key1 key2 GET dest

Pattern: real time metrics using bitmaps

BITOP is a good complement to the pattern documented in the BITCOUNT command documentation. Different bitmaps can be combined in order to obtain a target bitmap where the population counting operation is performed.

See the article called "Fast easy realtime metrics using Redis bitmaps" for interesting use cases.

Performance considerations

BITOP is a potentially slow command as it runs in O(N) time. Care should be taken when running it against long input strings.

For real-time metrics and statistics involving large inputs a good approach is to use a replica (with read-only option disabled) where the bit-wise operations are performed to avoid blocking the master instance.

33 - BITPOS

Find first bit set or clear in a string

Return the position of the first bit set to 1 or 0 in a string.

The position is returned, thinking of the string as an array of bits from left to right, where the first byte's most significant bit is at position 0, the second byte's most significant bit is at position 8, and so forth.

The same bit position convention is followed by GETBIT and SETBIT.

By default, all the bytes contained in the string are examined. It is possible to look for bits only in a specified interval passing the additional arguments start and end (it is possible to just pass start, the operation will assume that the end is the last byte of the string. However there are semantic differences as explained later). By default, the range is interpreted as a range of bytes and not a range of bits, so start=0 and end=2 means to look at the first three bytes.

You can use the optional BIT modifier to specify that the range should be interpreted as a range of bits. So start=0 and end=2 means to look at the first three bits.

Note that bit positions are returned always as absolute values starting from bit zero even when start and end are used to specify a range.

Like for the GETRANGE command start and end can contain negative values in order to index bytes starting from the end of the string, where -1 is the last byte, -2 is the penultimate, and so forth. When BIT is specified, -1 is the last bit, -2 is the penultimate, and so forth.

Non-existent keys are treated as empty strings.

Return

Integer reply

The command returns the position of the first bit set to 1 or 0 according to the request.

If we look for set bits (the bit argument is 1) and the string is empty or composed of just zero bytes, -1 is returned.

If we look for clear bits (the bit argument is 0) and the string only contains bits set to 1, the function returns the first bit not part of the string on the right. So if the string is three bytes set to the value 0xff, the command BITPOS key 0 will return 24, since up to bit 23 all the bits are 1.

Basically, the function considers the right of the string as padded with zeros if you look for clear bits and specify no range or the start argument only.

However, this behavior changes if you are looking for clear bits and specify a range with both start and end. If no clear bit is found in the specified range, the function returns -1 as the user specified a clear range and there are no 0 bits in that range.

Examples

SET mykey "\xff\xf0\x00" BITPOS mykey 0 SET mykey "\x00\xff\xf0" BITPOS mykey 1 0 BITPOS mykey 1 2 BITPOS mykey 1 2 -1 BYTE BITPOS mykey 1 7 15 BIT set mykey "\x00\x00\x00" BITPOS mykey 1 BITPOS mykey 1 7 -3 BIT

34 - BLMOVE

Pop an element from a list, push it to another list and return it; or block until one is available

BLMOVE is the blocking variant of LMOVE. When source contains elements, this command behaves exactly like LMOVE. When used inside a MULTI/EXEC block, this command behaves exactly like LMOVE. When source is empty, Redis will block the connection until another client pushes to it or until timeout (a double value specifying the maximum number of seconds to block) is reached. A timeout of zero can be used to block indefinitely.

This command comes in place of the now deprecated BRPOPLPUSH. Doing BLMOVE RIGHT LEFT is equivalent.

See LMOVE for more information.

Return

Bulk string reply: the element being popped from source and pushed to destination. If timeout is reached, a Null reply is returned.

Pattern: Reliable queue

Please see the pattern description in the LMOVE documentation.

Pattern: Circular list

Please see the pattern description in the LMOVE documentation.

35 - BLMPOP

Pop elements from a list, or block until one is available

BLMPOP is the blocking variant of LMPOP.

When any of the lists contains elements, this command behaves exactly like LMPOP. When used inside a MULTI/EXEC block, this command behaves exactly like LMPOP. When all lists are empty, Redis will block the connection until another client pushes to it or until the timeout (a double value specifying the maximum number of seconds to block) elapses. A timeout of zero can be used to block indefinitely.

See LMPOP for more information.

Return

Array reply: specifically:

  • A nil when no element could be popped and the timeout is reached.
  • A two-element array with the first element being the name of the key from which elements were popped, and the second element being an array of elements.

36 - BLPOP

Remove and get the first element in a list, or block until one is available

BLPOP is a blocking list pop primitive. It is the blocking version of LPOP because it blocks the connection when there are no elements to pop from any of the given lists. An element is popped from the head of the first list that is non-empty, with the given keys being checked in the order that they are given.

Non-blocking behavior

When BLPOP is called, if at least one of the specified keys contains a non-empty list, an element is popped from the head of the list and returned to the caller together with the key it was popped from.

Keys are checked in the order that they are given. Let's say that the key list1 doesn't exist and list2 and list3 hold non-empty lists. Consider the following command:

BLPOP list1 list2 list3 0

BLPOP guarantees to return an element from the list stored at list2 (since it is the first non empty list when checking list1, list2 and list3 in that order).
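
A sketch of this behavior (assuming list1 does not exist and list2 holds the elements a and b):

> RPUSH list2 a b
(integer) 2
> BLPOP list1 list2 list3 0
1) "list2"
2) "a"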

Blocking behavior

If none of the specified keys exist, BLPOP blocks the connection until another client performs an LPUSH or RPUSH operation against one of the keys.

Once new data is present on one of the lists, the client returns with the name of the key unblocking it and the popped value.

When BLPOP causes a client to block and a non-zero timeout is specified, the client will unblock returning a nil multi-bulk value when the specified timeout has expired without a push operation against at least one of the specified keys.

The timeout argument is interpreted as a double value specifying the maximum number of seconds to block. A timeout of zero can be used to block indefinitely.

What key is served first? What client? What element? Priority ordering details.

  • If the client tries to block for multiple keys, but at least one key contains elements, the returned key/element pair is the first key from left to right that has one or more elements. In this case the client is not blocked. So for instance BLPOP key1 key2 key3 key4 0, assuming that both key2 and key4 are non-empty, will always return an element from key2.
  • If multiple clients are blocked for the same key, the first client to be served is the one that was waiting for more time (the first that blocked for the key). Once a client is unblocked it does not retain any priority, when it blocks again with the next call to BLPOP it will be served accordingly to the number of clients already blocked for the same key, that will all be served before it (from the first to the last that blocked).
  • When a client is blocking for multiple keys at the same time, and elements are available at the same time in multiple keys (because of a transaction or a Lua script added elements to multiple lists), the client will be unblocked using the first key that received a push operation (assuming it has enough elements to serve our client, as there may be other clients as well waiting for this key). Basically after the execution of every command Redis will run a list of all the keys that received data AND that have at least a client blocked. The list is ordered by new element arrival time, from the first key that received data to the last. For every key processed, Redis will serve all the clients waiting for that key in a FIFO fashion, as long as there are elements in this key. When the key is empty or there are no longer clients waiting for this key, the next key that received new data in the previous command / transaction / script is processed, and so forth.

Behavior of BLPOP when multiple elements are pushed inside a list.

There are times when a list can receive multiple elements in the context of the same conceptual command:

  • Variadic push operations such as LPUSH mylist a b c.
  • After an EXEC of a MULTI block with multiple push operations against the same list.
  • Executing a Lua Script with Redis 2.6 or newer.

When multiple elements are pushed inside a list where there are clients blocking, the behavior is different for Redis 2.4 and Redis 2.6 or newer.

For Redis 2.6 what happens is that the command performing multiple pushes is executed, and only after the execution of the command the blocked clients are served. Consider this sequence of commands.

Client A:   BLPOP foo 0
Client B:   LPUSH foo a b c

If the above condition happens using a Redis 2.6 server or greater, Client A will be served with the c element, because after the LPUSH command the list contains c,b,a, so taking an element from the left means returning c.

Instead Redis 2.4 works in a different way: clients are served in the context of the push operation, so as soon as LPUSH foo a b c starts pushing the first element to the list, it is delivered to Client A, which receives a (the first element pushed).

The behavior of Redis 2.4 creates a lot of problems when replicating or persisting data into the AOF file, so the much more generic and semantically simpler behavior was introduced into Redis 2.6 to prevent problems.

Note that for the same reason a Lua script or a MULTI/EXEC block may push elements into a list and afterward delete the list. In this case the blocked clients will not be served at all and will continue to be blocked as long as no data is present on the list after the execution of a single command, transaction, or script.

BLPOP inside a MULTI / EXEC transaction

BLPOP can be used with pipelining (sending multiple commands and reading the replies in batch), however this setup makes sense almost solely when it is the last command of the pipeline.

Using BLPOP inside a MULTI / EXEC block does not make a lot of sense as it would require blocking the entire server in order to execute the block atomically, which in turn does not allow other clients to perform a push operation. For this reason the behavior of BLPOP inside MULTI / EXEC when the list is empty is to return a nil multi-bulk reply, which is the same thing that happens when the timeout is reached.

If you like science fiction, think of time flowing at infinite speed inside a MULTI / EXEC block...

Return

Array reply: specifically:

  • A nil multi-bulk when no element could be popped and the timeout expired.
  • A two-element multi-bulk with the first element being the name of the key where an element was popped and the second element being the value of the popped element.

Examples

redis> DEL list1 list2
(integer) 0
redis> RPUSH list1 a b c
(integer) 3
redis> BLPOP list1 list2 0
1) "list1"
2) "a"

Reliable queues

When BLPOP returns an element to the client, it also removes the element from the list. This means that the element only exists in the context of the client: if the client crashes while processing the returned element, it is lost forever.

This can be a problem with some applications where we want a more reliable messaging system. When this is the case, please check the BRPOPLPUSH command, which is a variant of BLPOP that adds the returned element to a target list before returning it to the client.

Pattern: Event notification

Using blocking list operations it is possible to mount different blocking primitives. For instance for some applications you may need to block waiting for elements in a Redis Set, so that as soon as a new element is added to the Set, it is possible to retrieve it without resorting to polling. This would require a blocking version of SPOP, which is not available, but using blocking list operations we can easily accomplish this task.

The consumer will do:

LOOP forever
    WHILE SPOP(key) returns elements
        ... process elements ...
    END
    BRPOP helper_key
END

While in the producer side we'll use simply:

MULTI
SADD key element
LPUSH helper_key x
EXEC

37 - BRPOP

Remove and get the last element in a list, or block until one is available

BRPOP is a blocking list pop primitive. It is the blocking version of RPOP because it blocks the connection when there are no elements to pop from any of the given lists. An element is popped from the tail of the first list that is non-empty, with the given keys being checked in the order that they are given.

See the BLPOP documentation for the exact semantics, since BRPOP is identical to BLPOP with the only difference being that it pops elements from the tail of a list instead of popping from the head.

Return

Array reply: specifically:

  • A nil multi-bulk when no element could be popped and the timeout expired.
  • A two-element multi-bulk with the first element being the name of the key where an element was popped and the second element being the value of the popped element.

Examples

redis> DEL list1 list2
(integer) 0
redis> RPUSH list1 a b c
(integer) 3
redis> BRPOP list1 list2 0
1) "list1"
2) "c"

38 - BRPOPLPUSH

Pop an element from a list, push it to another list and return it; or block until one is available

BRPOPLPUSH is the blocking variant of RPOPLPUSH. When source contains elements, this command behaves exactly like RPOPLPUSH. When used inside a MULTI/EXEC block, this command behaves exactly like RPOPLPUSH. When source is empty, Redis will block the connection until another client pushes to it or until timeout is reached. A timeout of zero can be used to block indefinitely.

See RPOPLPUSH for more information.

Return

Bulk string reply: the element being popped from source and pushed to destination. If timeout is reached, a Null reply is returned.

Pattern: Reliable queue

Please see the pattern description in the RPOPLPUSH documentation.

Pattern: Circular list

Please see the pattern description in the RPOPLPUSH documentation.

39 - BZMPOP

Remove and return members with scores in a sorted set or block until one is available

BZMPOP is the blocking variant of ZMPOP.

When any of the sorted sets contains elements, this command behaves exactly like ZMPOP. When used inside a MULTI/EXEC block, this command behaves exactly like ZMPOP. When all sorted sets are empty, Redis will block the connection until another client adds members to one of the keys or until the timeout (a double value specifying the maximum number of seconds to block) elapses. A timeout of zero can be used to block indefinitely.

See ZMPOP for more information.

Return

Array reply: specifically:

  • A nil when no element could be popped.
  • A two-element array with the first element being the name of the key from which elements were popped, and the second element is an array of the popped elements. Every entry in the elements array is also an array that contains the member and its score.
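
Examples

A minimal sketch (key names are hypothetical): popping the lowest-scored member from the first non-empty sorted set among two keys:

redis> ZADD myzset 1 "a" 2 "b"
(integer) 2
redis> BZMPOP 0 2 myzset myotherzset MIN
1) "myzset"
2) 1) 1) "a"
      2) "1"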

40 - BZPOPMAX

Remove and return the member with the highest score from one or more sorted sets, or block until one is available

BZPOPMAX is the blocking variant of the sorted set ZPOPMAX primitive.

It is the blocking version because it blocks the connection when there are no members to pop from any of the given sorted sets. A member with the highest score is popped from the first sorted set that is non-empty, with the given keys being checked in the order that they are given.

The timeout argument is interpreted as a double value specifying the maximum number of seconds to block. A timeout of zero can be used to block indefinitely.

See the BZPOPMIN documentation for the exact semantics, since BZPOPMAX is identical to BZPOPMIN with the only difference being that it pops members with the highest scores instead of popping the ones with the lowest scores.

Return

Array reply: specifically:

  • A nil multi-bulk when no element could be popped and the timeout expired.
  • A three-element multi-bulk with the first element being the name of the key where a member was popped, the second element is the popped member itself, and the third element is the score of the popped element.

Examples

redis> DEL zset1 zset2
(integer) 0
redis> ZADD zset1 0 a 1 b 2 c
(integer) 3
redis> BZPOPMAX zset1 zset2 0
1) "zset1"
2) "c"
3) "2"

41 - BZPOPMIN

Remove and return the member with the lowest score from one or more sorted sets, or block until one is available

BZPOPMIN is the blocking variant of the sorted set ZPOPMIN primitive.

It is the blocking version because it blocks the connection when there are no members to pop from any of the given sorted sets. A member with the lowest score is popped from the first sorted set that is non-empty, with the given keys being checked in the order that they are given.

The timeout argument is interpreted as a double value specifying the maximum number of seconds to block. A timeout of zero can be used to block indefinitely.

See the BLPOP documentation for the exact semantics, since BZPOPMIN is identical to BLPOP with the only difference being the data structure being popped from.

Return

Array reply: specifically:

  • A nil multi-bulk when no element could be popped and the timeout expired.
  • A three-element multi-bulk with the first element being the name of the key where a member was popped, the second element is the popped member itself, and the third element is the score of the popped element.

Examples

redis> DEL zset1 zset2
(integer) 0
redis> ZADD zset1 0 a 1 b 2 c
(integer) 3
redis> BZPOPMIN zset1 zset2 0
1) "zset1"
2) "a"
3) "0"

42 - CF.ADD

Adds an item to a Cuckoo Filter

Adds an item to the cuckoo filter, creating the filter if it does not exist.

Cuckoo filters can contain the same item multiple times, and consider each insert as separate. You can use CF.ADDNX to only add the item if it does not exist yet. Keep in mind that deleting an element inserted using CF.ADDNX may cause false-negative errors.

Parameters

  • key: The name of the filter
  • item: The item to add

Complexity

O(n + i), where n is the number of sub-filters and i is maxIterations. Adding items requires up to 2 memory accesses per sub-filter. But as the filter fills up, both locations for an item might be full. The filter attempts to Cuckoo swap items up to maxIterations times.

Return

Integer reply - where "1" means the item has been added to the filter. An error is returned when a problem occurred.

redis> CF.ADD cf item
(integer) 1

43 - CF.ADDNX

Adds an item to a Cuckoo Filter if the item did not exist previously.

Adds an item to a cuckoo filter if the item did not exist previously. See documentation on CF.ADD for more information on this command.

This command is equivalent to a CF.EXISTS + CF.ADD command. It does not insert an element into the filter if its fingerprint already exists, in order to use the available capacity more efficiently. However, deleting elements can introduce a false negative error rate!

Note that this command is slower than CF.ADD because it first checks whether the item exists.

Parameters

  • key: The name of the filter
  • item: The item to add

Return

Integer reply - where "1" means the item has been added to the filter, and "0" means the item already existed.

Examples

redis> CF.ADDNX cf item1
(integer) 0
redis> CF.ADDNX cf item_new
(integer) 1

44 - CF.COUNT

Return the number of times an item might be in a Cuckoo Filter

Returns the number of times an item may be in the filter. Because this is a probabilistic data structure, this may not necessarily be accurate.

If you just want to know if an item exists in the filter, use CF.EXISTS because it is more efficient for that purpose.

Parameters

  • key: The name of the filter
  • item: The item to count

Return

Integer reply - with the count of possible matching copies of the item in the filter.

Examples

redis> CF.COUNT cf item1
(integer) 42
redis> CF.COUNT cf item_new
(integer) 0

45 - CF.DEL

Deletes an item from a Cuckoo Filter

CF.DEL {key} {item}

Deletes an item once from the filter. If the item exists only once, it will be removed from the filter. If the item was added multiple times, it will still be present.

Danger: deleting elements that are not in the filter may delete a different item, resulting in false negatives!

Parameters

  • key: The name of the filter
  • item: The item to delete from the filter

Complexity

O(n), where n is the number of sub-filters. Both alternative locations are checked on all sub-filters.

Return

Integer reply - where "1" means the item has been deleted from the filter, and "0" means the item was not found.

Examples

redis> CF.DEL cf item1
(integer) 1
redis> CF.DEL cf item_new
(integer) 0
redis> CF.DEL cf1 item_new
(error) Not found

46 - CF.EXISTS

Checks whether an item exists in a Cuckoo Filter

Check if an item exists in a Cuckoo Filter key

Parameters

  • key: The name of the filter
  • item: The item to check for

Return

Integer reply - where a "1" value means the item may exist in the filter, and a "0" value means it does not exist in the filter.

Examples

redis> CF.EXISTS cf item1
(integer) 1
redis> CF.EXISTS cf item_new
(integer) 0

47 - CF.INFO

Returns information about a Cuckoo Filter

Return information about key

Parameters

  • key: The name of the filter

Return

Array reply with information of the filter.

Examples

redis> CF.INFO cf
 1) Size
 2) (integer) 1080
 3) Number of buckets
 4) (integer) 512
 5) Number of filter
 6) (integer) 1
 7) Number of items inserted
 8) (integer) 0
 9) Number of items deleted
10) (integer) 0
11) Bucket size
12) (integer) 2
13) Expansion rate
14) (integer) 1
15) Max iteration
16) (integer) 20

48 - CF.INSERT

Adds one or more items to a Cuckoo Filter. A filter will be created if it does not exist

Adds one or more items to a cuckoo filter, allowing the filter to be created with a custom capacity if it does not exist yet.

This command offers more flexibility than the CF.ADD command, at the cost of more verbosity.

Parameters

  • key: The name of the filter
  • capacity: Specifies the desired capacity of the new filter, if this filter does not exist yet. If the filter already exists, then this parameter is ignored. If the filter does not exist yet and this parameter is not specified, then the filter is created with the module-level default capacity which is 1024. See CF.RESERVE for more information on cuckoo filter capacities.
  • NOCREATE: If specified, prevents automatic filter creation if the filter does not exist. Instead, an error is returned if the filter does not already exist. This option is mutually exclusive with CAPACITY.
  • item: One or more items to add. The ITEMS keyword must precede the list of items to add.

Return

An array of Integer replies, one per item - where "1" means the corresponding item has been added to the filter. An error is returned when a problem occurred.

Examples

redis> CF.INSERT cf CAPACITY 1000 ITEMS item1 item2 
1) (integer) 1
2) (integer) 1
redis> CF.INSERT cf1 CAPACITY 1000 NOCREATE ITEMS item1 item2 
(error) ERR not found

49 - CF.INSERTNX

Adds one or more items to a Cuckoo Filter if the items did not exist previously. A filter will be created if it does not exist

Note: CF.INSERTNX is an advanced command that can have unintended impact if used incorrectly.

CF.INSERTNX {key} [CAPACITY {capacity}] [NOCREATE] ITEMS {item ...}

Adds one or more items to a cuckoo filter, allowing the filter to be created with a custom capacity if it does not exist yet.

This command is equivalent to a CF.EXISTS + CF.ADD command. It does not insert an element into the filter if its fingerprint already exists and therefore makes better use of the available capacity. However, if you delete elements it might introduce a false negative error rate!

These commands offer more flexibility than the ADD and ADDNX commands, at the cost of more verbosity.

Parameters

  • key: The name of the filter
  • capacity: Specifies the desired capacity of the new filter, if this filter does not exist yet. If the filter already exists, then this parameter is ignored. If the filter does not exist yet and this parameter is not specified, then the filter is created with the module-level default capacity which is 1024. See CF.RESERVE for more information on cuckoo filter capacities.
  • NOCREATE: If specified, prevents automatic filter creation if the filter does not exist. Instead, an error is returned if the filter does not already exist. This option is mutually exclusive with CAPACITY.
  • item: One or more items to add. The ITEMS keyword must precede the list of items to add.

Complexity

O(n + i), where n is the number of sub-filters and i is maxIterations. Adding items requires up to 2 memory accesses per sub-filter. But as the filter fills up, both locations for an item might be full. The filter attempts to Cuckoo swap items up to maxIterations times.

Returns

An array of booleans (as integers) corresponding to the items specified. Possible values for each element are:

  • > 0 if the item was successfully inserted
  • 0 if the item already existed and INSERTNX is used.
  • < 0 if an error occurred

Note that for CF.INSERT, the return value is always an array of > 0 values, unless an error occurs.

Return

An array of Integer replies - where "1" means the item has been added to the filter, and "0" means the item already existed. An error is returned when the filter parameters are erroneous.

Examples

redis> CF.INSERTNX cf CAPACITY 1000 ITEMS item1 item2 
1) (integer) 1
2) (integer) 1
redis> CF.INSERTNX cf CAPACITY 1000 ITEMS item1 item2 item3
1) (integer) 0
2) (integer) 0
3) (integer) 1
redis> CF.INSERTNX cf_new CAPACITY 1000 NOCREATE ITEMS item1 item2 
(error) ERR not found

50 - CF.LOADCHUNK

Restores a filter previously saved using SCANDUMP

Restores a filter previously saved using SCANDUMP. See the SCANDUMP command for example usage.

This command overwrites any cuckoo filter stored under key. Make sure that the cuckoo filter is not modified between invocations.

Parameters

  • key: Name of the key to restore
  • iter: Iterator value associated with data (returned by SCANDUMP)
  • data: Current data chunk (returned by SCANDUMP)

Return

Simple string reply - OK if the chunk was restored successfully. An error is returned when a problem occurred.

Examples

See CF.SCANDUMP for an example.

51 - CF.MEXISTS

Checks whether one or more items exist in a Cuckoo Filter

Check if one or more items exist in a Cuckoo Filter key

Parameters

  • key: The name of the filter
  • items: One or more items to check for

Return

An array of Integer replies - for each item, where a "1" value means the corresponding item may exist in the filter, and a "0" value means it does not exist in the filter.

Examples

redis> CF.MEXISTS cf item1 item_new
1) (integer) 1
2) (integer) 0

52 - CF.RESERVE

Creates a new Cuckoo Filter

Create a Cuckoo Filter as key with a single sub-filter for the initial amount of capacity for items. Because of how Cuckoo Filters work, the filter is likely to declare itself full before capacity is reached; therefore, the fill rate will likely never reach 100%. The fill rate can be improved by using a larger bucketsize at the cost of a higher error rate. When the filter declares itself full, it auto-expands by generating additional sub-filters, at the cost of reduced performance and an increased error rate. The new sub-filter is created with the size of the previous sub-filter multiplied by expansion. Like bucket size, additional sub-filters grow the error rate linearly. The default expansion value is 1.

The minimal false positive error rate is 2/255 ≈ 0.78% when bucket size of 1 is used. Larger buckets increase the error rate linearly (for example, a bucket size of 3 yields a 2.35% error rate) but improve the fill rate of the filter.

maxiterations dictates the number of attempts to find a slot for the incoming fingerprint. Once the filter gets full, a high maxIterations value will slow down insertions. The default value is 20.

Unused capacity in prior sub-filters is automatically used when possible. The filter can grow up to 32 times.

Parameters:

  • key: The key under which the filter is found.
  • capacity: Estimated capacity for the filter. Capacity is rounded to the next 2^n number. The filter will likely not fill up to 100% of its capacity. Make sure to reserve extra capacity if you want to avoid expansions.

Optional parameters:

  • bucketsize: Number of items in each bucket. A higher bucket size value improves the fill rate but also causes a higher error rate and slightly slower performance.
  • maxiterations: Number of attempts to swap items between buckets before declaring filter as full and creating an additional filter. A low value is better for performance and a higher number is better for filter fill rate.
  • expansion: When a new filter is created, its size is the size of the current filter multiplied by expansion. Expansion is rounded to the next 2^n number.

Return

Simple string reply - OK if the filter was created successfully. An error is returned when a problem occurred (for example, when the key already exists).

Examples

redis> CF.RESERVE cf 1000
OK
redis> CF.RESERVE cf 1000
(error) ERR item exists
redis> CF.RESERVE cf_params 1000 BUCKETSIZE 8 MAXITERATIONS 20 EXPANSION 2
OK

53 - CF.SCANDUMP

Begins an incremental save of the cuckoo filter

Begins an incremental save of the cuckoo filter. This is useful for large cuckoo filters which cannot fit into the normal SAVE and RESTORE model.

The first time this command is called, the value of iter should be 0. This command returns successive (iter, data) pairs until (0, NULL) indicates completion.

Parameters

  • key: Name of the filter
  • iter: Iterator value. This is either 0, or the iterator from a previous invocation of this command

Return

An array reply containing the Iterator and the Data. The Iterator is passed as input to the next invocation of SCANDUMP. If the Iterator is 0, then it means iteration has completed.

The iterator-data pair should also be passed to LOADCHUNK when restoring the filter.

Examples

redis> CF.RESERVE cf 8
OK
redis> CF.ADD cf item1
(integer) 1
redis> CF.SCANDUMP cf 0
1) (integer) 1
2) "\x01\x00\x00\x00\x00\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x02\x00\x14\x00\x01\x008\x9a\xe0\xd8\xc3\x7f\x00\x00"
redis> CF.SCANDUMP cf 1
1) (integer) 9
2) "\x00\x00\x00\x00\a\x00\x00\x00"
redis> CF.SCANDUMP cf 9
1) (integer) 0
2) (nil)
redis> FLUSHALL
OK
redis> CF.LOADCHUNK cf 1 "\x01\x00\x00\x00\x00\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x02\x00\x14\x00\x01\x008\x9a\xe0\xd8\xc3\x7f\x00\x00"
OK
redis> CF.LOADCHUNK cf 9 "\x00\x00\x00\x00\a\x00\x00\x00"
OK
redis> CF.EXISTS cf item1
(integer) 1

Python code (a sketch using redis-py's generic execute_command; the client object r and the key name cf are assumptions):

import redis

r = redis.Redis()
key = "cf"

# Dump the filter in successive chunks
chunks = []
iter = 0
while True:
    iter, data = r.execute_command("CF.SCANDUMP", key, iter)
    if iter == 0:
        break
    chunks.append((iter, data))

# Load it back
for iter, data in chunks:
    r.execute_command("CF.LOADCHUNK", key, iter, data)

54 - CLIENT

A container for client connection commands

This is a container command for client connection commands.

To see the list of available commands you can call CLIENT HELP.

55 - CLIENT CACHING

Instruct the server about tracking or not keys in the next request

This command controls the tracking of the keys in the next command executed by the connection, when tracking is enabled in OPTIN or OPTOUT mode. Please check the client side caching documentation for background information.

When tracking is enabled in Redis using the CLIENT TRACKING command, it is possible to specify the OPTIN or OPTOUT options, so that keys in read only commands are not automatically remembered by the server to be invalidated later. When we are in OPTIN mode, we can enable the tracking of the keys in the next command by calling CLIENT CACHING yes immediately before it. Similarly, when we are in OPTOUT mode, and keys are normally tracked, we can avoid the keys in the next command being tracked by using CLIENT CACHING no.

Basically the command sets a state in the connection, valid only for the next command execution, that modifies the behavior of client tracking.
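
Examples

A minimal sketch, assuming OPTIN mode and a RESP3 connection (for example, redis-cli -3); the key foo and its value are hypothetical:

> CLIENT TRACKING on OPTIN
OK
> CLIENT CACHING yes
OK
> GET foo
"bar"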

Return

Simple string reply: OK or an error if the argument is not yes or no.

56 - CLIENT GETNAME

Get the current connection name

The CLIENT GETNAME returns the name of the current connection as set by CLIENT SETNAME. Since every new connection starts without an associated name, if no name was assigned a null bulk reply is returned.
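
For illustration, a sketch of assigning a name and reading it back (the name myconn is made up):

> CLIENT SETNAME myconn
OK
> CLIENT GETNAME
"myconn"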

Return

Bulk string reply: The connection name, or a null bulk reply if no name is set.

57 - CLIENT GETREDIR

Get tracking notifications redirection client ID if any

This command returns the client ID we are redirecting our tracking notifications to. We set a client to redirect to when using CLIENT TRACKING to enable tracking. However, in order to avoid forcing client library implementations to remember the ID notifications are redirected to, this command exists in order to improve introspection and allow clients to check later if redirection is active and towards which client ID.

Return

Integer reply: the ID of the client we are redirecting the notifications to. The command returns -1 if client tracking is not enabled, or 0 if client tracking is enabled but we are not redirecting the notifications to any client.

58 - CLIENT HELP

Show helpful text about the different subcommands

The CLIENT HELP command returns a helpful text describing the different subcommands.

Return

Array reply: a list of subcommands and their descriptions

59 - CLIENT ID

Returns the client ID for the current connection

The command just returns the ID of the current connection. Every connection ID has certain guarantees:

  1. It is never repeated, so if CLIENT ID returns the same number, the caller can be sure that the underlying client did not disconnect and reconnect the connection, but it is still the same connection.
  2. The ID is monotonically increasing. If the ID of a connection is greater than the ID of another connection, it is guaranteed that the second connection was established with the server at a later time.

This command is especially useful together with CLIENT UNBLOCK which was introduced also in Redis 5 together with CLIENT ID. Check the CLIENT UNBLOCK command page for a pattern involving the two commands.

Examples

CLIENT ID

Return

Integer reply

The id of the client.

60 - CLIENT INFO

Returns information about the current client connection.

The command returns information and statistics about the current client connection in a mostly human readable format.

The reply format is identical to that of CLIENT LIST, and the content consists only of information about the current client.

Examples

CLIENT INFO

Return

Bulk string reply: a unique string, as described at the CLIENT LIST page, for the current client.

61 - CLIENT KILL

Kill the connection of a client

The CLIENT KILL command closes a given client connection. This command supports two formats, the old format:

CLIENT KILL addr:port

The ip:port should match a line returned by the CLIENT LIST command (addr field).

The new format:

CLIENT KILL <filter> <value> ... ... <filter> <value>

With the new form it is possible to kill clients by different attributes instead of killing just by address. The following filters are available:

  • CLIENT KILL ADDR ip:port. This is exactly the same as the old three-arguments behavior.
  • CLIENT KILL LADDR ip:port. Kill all clients connected to specified local (bind) address.
  • CLIENT KILL ID client-id. Allows killing a client by its unique ID field. Client IDs are retrieved using the CLIENT LIST command.
  • CLIENT KILL TYPE type, where type is one of normal, master, replica and pubsub. This closes the connections of all the clients in the specified class. Note that clients blocked into the MONITOR command are considered to belong to the normal class.
  • CLIENT KILL USER username. Closes all the connections that are authenticated with the specified ACL username, however it returns an error if the username does not map to an existing ACL user.
  • CLIENT KILL SKIPME yes/no. By default this option is set to yes, that is, the client calling the command will not get killed, however setting this option to no will have the effect of also killing the client calling the command.

It is possible to provide multiple filters at the same time. The command will handle multiple filters via logical AND. For example:

CLIENT KILL addr 127.0.0.1:12345 type pubsub

is valid and will kill only a pubsub client with the specified address. This format containing multiple filters is rarely useful currently.

When the new form is used the command no longer returns OK or an error, but instead the number of killed clients, that may be zero.

CLIENT KILL and Redis Sentinel

Recent versions of Redis Sentinel (Redis 2.8.12 or greater) use CLIENT KILL in order to kill clients when an instance is reconfigured, in order to force clients to perform the handshake with one Sentinel again and update its configuration.

Notes

Due to the single-threaded nature of Redis, it is not possible to kill a client connection while it is executing a command. From the client point of view, the connection can never be closed in the middle of the execution of a command. However, the client will notice the connection has been closed only when the next command is sent (and results in network error).

Return

When called with the three arguments format:

Simple string reply: OK if the connection exists and has been closed

When called with the filter / value format:

Integer reply: the number of clients killed.

62 - CLIENT LIST

Get the list of client connections

The CLIENT LIST command returns information and statistics about the client connections to the server in a mostly human readable format.

You can use one of the optional subcommands to filter the list. The TYPE type subcommand filters the list by clients' type, where type is one of normal, master, replica, and pubsub. Note that clients blocked by the MONITOR command belong to the normal class.

The ID filter only returns entries for clients with IDs matching the client-id arguments.

Return

Bulk string reply: a unique string, formatted as follows:

  • One client connection per line (separated by LF)
  • Each line is composed of a succession of property=value fields separated by a space character.

Here is the meaning of the fields:

  • id: a unique 64-bit client ID
  • addr: address/port of the client
  • laddr: address/port of local address client connected to (bind address)
  • fd: file descriptor corresponding to the socket
  • name: the name set by the client with CLIENT SETNAME
  • age: total duration of the connection in seconds
  • idle: idle time of the connection in seconds
  • flags: client flags (see below)
  • db: current database ID
  • sub: number of channel subscriptions
  • psub: number of pattern matching subscriptions
  • multi: number of commands in a MULTI/EXEC context
  • qbuf: query buffer length (0 means no query pending)
  • qbuf-free: free space of the query buffer (0 means the buffer is full)
  • argv-mem: incomplete arguments for the next command (already extracted from query buffer)
  • multi-mem: memory used up by buffered MULTI commands. Added in Redis 7.0
  • obl: output buffer length
  • oll: output list length (replies are queued in this list when the buffer is full)
  • omem: output buffer memory usage
  • tot-mem: total memory consumed by this client in its various buffers
  • events: file descriptor events (see below)
  • cmd: last command played
  • user: the authenticated username of the client
  • redir: client id of current client tracking redirection
  • resp: client RESP protocol version. Added in Redis 7.0

The client flags can be a combination of:

A: connection to be closed ASAP
b: the client is waiting in a blocking operation
c: connection to be closed after writing entire reply
d: a watched key has been modified - EXEC will fail
i: the client is waiting for a VM I/O (deprecated)
M: the client is a master
N: no specific flag set
O: the client is a client in MONITOR mode
P: the client is a Pub/Sub subscriber
r: the client is in readonly mode against a cluster node
S: the client is a replica node connection to this instance
u: the client is unblocked
U: the client is connected via a Unix domain socket
x: the client is in a MULTI/EXEC context
t: the client enabled keys tracking in order to perform client side caching
R: the client tracking target client is invalid
B: the client enabled broadcast tracking mode 

The file descriptor events can be:

r: the client socket is readable (event loop)
w: the client socket is writable (event loop)
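
For illustration, a single line of CLIENT LIST output might look as follows (all values here are hypothetical):

id=3 addr=127.0.0.1:50482 laddr=127.0.0.1:6379 fd=8 name= age=12 idle=0 flags=N db=0 sub=0 psub=0 multi=-1 qbuf=26 qbuf-free=20448 argv-mem=10 multi-mem=0 obl=0 oll=0 omem=0 tot-mem=0 events=r cmd=client|list user=default redir=-1 resp=2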

Notes

New fields are regularly added for debugging purposes. Some could be removed in the future. A version safe Redis client using this command should parse the output accordingly (i.e. gracefully handle missing fields and skip unknown fields).

63 - CLIENT NO-EVICT

Set client eviction mode for the current connection

The CLIENT NO-EVICT command sets the client eviction mode for the current connection.

When turned on and client eviction is configured, the current connection will be excluded from the client eviction process even if we're above the configured client eviction threshold.

When turned off, the current client will be re-included in the pool of potential clients to be evicted (and evicted if needed).

See client eviction for more details.
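
For example, excluding the current connection from client eviction is a single call (a sketch):

> CLIENT NO-EVICT on
OK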

Return

Simple string reply: OK.

64 - CLIENT PAUSE

Stop processing commands from clients for some time

CLIENT PAUSE is a connections control command able to suspend all the Redis clients for the specified amount of time (in milliseconds).

The command performs the following actions:

  • It stops processing all the pending commands from normal and pub/sub clients for the given mode. However interactions with replicas will continue normally. Note that clients are formally paused when they try to execute a command, so no work is done on the server side for inactive clients.
  • However it returns OK to the caller ASAP, so the execution of CLIENT PAUSE itself is not paused.
  • When the specified amount of time has elapsed, all the clients are unblocked: this will trigger the processing of all the commands accumulated in the query buffer of every client during the pause.

Client pause currently supports two modes:

  • ALL: This is the default mode. All client commands are blocked.
  • WRITE: Clients are only blocked if they attempt to execute a write command.

For the WRITE mode, some commands have special behavior:

  • EVAL/EVALSHA: Will block client for all scripts.
  • PUBLISH: Will block client.
  • PFCOUNT: Will block client.
  • WAIT: Acknowledgments will be delayed, so this command will appear blocked.

This command is useful as it makes it possible to switch clients from one Redis instance to another in a controlled way. For example during an instance upgrade the system administrator could do the following:

  • Pause the clients using CLIENT PAUSE
  • Wait a few seconds to make sure the replicas processed the latest replication stream from the master.
  • Turn one of the replicas into a master.
  • Reconfigure clients to connect with the new master.

Since Redis 6.2, the recommended mode for client pause is WRITE. This mode will stop all replication traffic, can be aborted with the CLIENT UNPAUSE command, and allows reconfiguring the old master without risking accepting writes after the failover. This is also the mode used during cluster failover.

For versions before 6.2, it is possible to send CLIENT PAUSE in a MULTI/EXEC block together with the INFO replication command in order to get the current master offset at the time the clients are blocked. This way it is possible to wait for a specific offset in the replica side in order to make sure all the replication stream was processed.

Since Redis 3.2.10 / 4.0.0, this command also prevents keys from being evicted or expired during the time clients are paused. This way the dataset is guaranteed to be static not just from the point of view of clients not being able to write, but also from the point of view of internal operations.
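
Examples

A minimal sketch that pauses write commands for five seconds (the timeout value is arbitrary):

> CLIENT PAUSE 5000 WRITE
OK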

Return

Simple string reply: The command returns OK or an error if the timeout is invalid.

Behavior change history

  • >= 3.2.0: Client pause prevents key eviction and expiration as well.

65 - CLIENT REPLY

Instruct the server whether to reply to commands

Sometimes it can be useful for clients to completely disable replies from the Redis server. For example when the client sends fire and forget commands or performs a mass loading of data, or in caching contexts where new data is streamed constantly. In such contexts, using server time and bandwidth to send back replies to clients, which are going to be ignored, is considered wasteful.

The CLIENT REPLY command controls whether the server will reply to the client's commands. The following modes are available:

  • ON. This is the default mode in which the server returns a reply to every command.
  • OFF. In this mode the server will not reply to client commands.
  • SKIP. This mode skips the reply of the command immediately after it.

Return

When called with either OFF or SKIP subcommands, no reply is made. When called with ON:

Simple string reply: OK.

66 - CLIENT SETNAME

Set the current connection name

The CLIENT SETNAME command assigns a name to the current connection.

The assigned name is displayed in the output of CLIENT LIST so that it is possible to identify the client that performed a given connection.

For instance when Redis is used in order to implement a queue, producers and consumers of messages may want to set the name of the connection according to their role.

There is no limit to the length of the name that can be assigned, other than the usual limits of the Redis string type (512 MB). However it is not possible to use spaces in the connection name as this would violate the format of the CLIENT LIST reply.

It is possible to entirely remove the connection name by setting it to the empty string. The empty string is not a valid connection name, since it serves this specific purpose.

The connection name can be inspected using CLIENT GETNAME.

Every new connection starts without an assigned name.

Tip: setting names to connections is a good way to debug connection leaks due to bugs in the application using Redis.

Return

Simple string reply: OK if the connection name was successfully set.

67 - CLIENT TRACKING

Enable or disable server assisted client side caching support

This command enables the tracking feature of the Redis server, that is used for server assisted client side caching.

When tracking is enabled Redis remembers the keys that the connection requested, in order to send later invalidation messages when such keys are modified. Invalidation messages are sent in the same connection (only available when the RESP3 protocol is used) or redirected in a different connection (available also with RESP2 and Pub/Sub). A special broadcasting mode is available where clients participating in this protocol receive every notification just subscribing to given key prefixes, regardless of the keys that they requested. Given the complexity of the argument please refer to the main client side caching documentation for the details. This manual page is only a reference for the options of this subcommand.

In order to enable tracking, use:

CLIENT TRACKING on ... options ...

The feature will remain active in the current connection for all its life, unless tracking is turned off with CLIENT TRACKING off at some point.

The following are the list of options that modify the behavior of the command when enabling tracking:

  • REDIRECT <id>: send invalidation messages to the connection with the specified ID. The connection must exist. You can get the ID of a connection using CLIENT ID. If the connection we are redirecting to is terminated, when in RESP3 mode the connection with tracking enabled will receive tracking-redir-broken push messages in order to signal the condition.
  • BCAST: enable tracking in broadcasting mode. In this mode invalidation messages are reported for all the prefixes specified, regardless of the keys requested by the connection. Instead when the broadcasting mode is not enabled, Redis will track which keys are fetched using read-only commands, and will report invalidation messages only for such keys.
  • PREFIX <prefix>: for broadcasting, register a given key prefix, so that notifications will be provided only for keys starting with this string. This option can be given multiple times to register multiple prefixes. If broadcasting is enabled without this option, Redis will send notifications for every key. You can't delete a single prefix, but you can delete all prefixes by disabling and re-enabling tracking. Using this option adds the additional time complexity of O(N^2), where N is the total number of prefixes tracked.
  • OPTIN: when broadcasting is NOT active, normally don't track keys in read only commands, unless they are called immediately after a CLIENT CACHING yes command.
  • OPTOUT: when broadcasting is NOT active, normally track keys in read only commands, unless they are called immediately after a CLIENT CACHING no command.
  • NOLOOP: don't send notifications about keys modified by this connection itself.
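
For illustration, enabling broadcasting mode for two hypothetical key prefixes might look like the following (assuming a RESP3 connection):

> CLIENT TRACKING on BCAST PREFIX user: PREFIX session:
OK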

Return

Simple string reply: OK if the connection was successfully put in tracking mode or if the tracking mode was successfully disabled. Otherwise an error is returned.

68 - CLIENT TRACKINGINFO

Return information about server assisted client side caching for the current connection

The command returns information about the current client connection's use of the server assisted client side caching feature.

Return

Array reply: a list of tracking information sections and their respective values, specifically:

  • flags: A list of tracking flags used by the connection. The flags and their meanings are as follows:
    • off: The connection isn't using server assisted client side caching.
    • on: Server assisted client side caching is enabled for the connection.
    • bcast: The client uses broadcasting mode.
    • optin: The client does not cache keys by default.
    • optout: The client caches keys by default.
    • caching-yes: The next command will cache keys (exists only together with optin).
    • caching-no: The next command won't cache keys (exists only together with optout).
    • noloop: The client isn't notified about keys modified by itself.
    • broken_redirect: The client ID used for redirection isn't valid anymore.
  • redirect: The client ID used for notifications redirection, or -1 when none.
  • prefixes: A list of key prefixes for which notifications are sent to the client.
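
Examples

A sketch of the reply for a connection with tracking disabled (the exact values depend on the connection state):

> CLIENT TRACKINGINFO
1) "flags"
2) 1) "off"
3) "redirect"
4) (integer) -1
5) "prefixes"
6) (empty array)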

69 - CLIENT UNBLOCK

Unblock a client blocked in a blocking command from a different connection

This command can unblock, from a different connection, a client blocked in a blocking operation, such as for instance BRPOP or XREAD or WAIT.

By default the client is unblocked as if the timeout of the command was reached, however if an additional (and optional) argument is passed, it is possible to specify the unblocking behavior: TIMEOUT (the default) or ERROR. If ERROR is specified, the client is unblocked by returning an error stating that it was force-unblocked. Specifically the client will receive the following error:

-UNBLOCKED client unblocked via CLIENT UNBLOCK

Note: of course, as usual, it is not guaranteed that the error text remains the same, however the error code will remain -UNBLOCKED.

This command is useful especially when we are monitoring many keys with a limited number of connections. For instance we may want to monitor multiple streams with XREAD without using more than N connections. However at some point the consumer process is informed that there is one more stream key to monitor. In order to avoid using more connections, the best behavior would be to stop the blocking command from one of the connections in the pool, add the new key, and issue the blocking command again.

To obtain this behavior the following pattern is used. The process uses an additional control connection in order to send the CLIENT UNBLOCK command if needed. In the meantime, before running the blocking operation on the other connections, the process runs CLIENT ID in order to get the ID associated with that connection. When a new key should be added, or when a key should no longer be monitored, the relevant connection blocking command is aborted by sending CLIENT UNBLOCK in the control connection. The blocking command will return and can be finally reissued.

This example shows the application in the context of Redis streams, however the pattern is a general one and can be applied to other cases.

Examples

Connection A (blocking connection):
> CLIENT ID
2934
> BRPOP key1 key2 key3 0
(client is blocked)

... Now we want to add a new key ...

Connection B (control connection):
> CLIENT UNBLOCK 2934
1

Connection A (blocking connection):
... BRPOP reply with timeout ...
NULL
> BRPOP key1 key2 key3 key4 0
(client is blocked again)

Return

Integer reply, specifically:

  • 1 if the client was unblocked successfully.
  • 0 if the client wasn't unblocked.

70 - CLIENT UNPAUSE

Resume processing of clients that were paused

CLIENT UNPAUSE is used to resume command processing for all clients that were paused by CLIENT PAUSE.

Return

Simple string reply: The command returns OK.

71 - CLUSTER

A container for cluster commands

This is a container command for Redis Cluster commands.

To see the list of available commands you can call CLUSTER HELP.

72 - CLUSTER ADDSLOTS

Assign new hash slots to receiving node

This command is useful in order to modify a node's view of the cluster configuration. Specifically it assigns a set of hash slots to the node receiving the command. If the command is successful, the node will map the specified hash slots to itself, and will start broadcasting the new configuration.

However note that:

  1. The command only works if all the specified slots are, from the point of view of the node receiving the command, currently not assigned. A node will refuse to take ownership for slots that already belong to some other node (including itself).
  2. The command fails if the same slot is specified multiple times.
  3. As a side effect of the command execution, if a slot among the ones specified as argument is set as importing, this state gets cleared once the node assigns the (previously unbound) slot to itself.

Example

For example the following command assigns slots 1 2 3 to the node receiving the command:

> CLUSTER ADDSLOTS 1 2 3
OK

However trying to execute it again results into an error since the slots are already assigned:

> CLUSTER ADDSLOTS 1 2 3
ERR Slot 1 is already busy

Usage in Redis Cluster

This command only works in cluster mode and is useful in the following Redis Cluster operations:

  1. To create a new cluster, ADDSLOTS is used in order to initially set up master nodes, splitting the available hash slots among them.
  2. In order to fix a broken cluster where certain slots are unassigned.

Information about slots propagation and warnings

Note that once a node assigns a set of slots to itself, it will start propagating this information in heartbeat packet headers. However the other nodes will accept the information only if they have the slot as not already bound with another node, or if the configuration epoch of the node advertising the new hash slot is greater than that of the node currently listed in the table.

This means that this command should be used with care, only by applications orchestrating Redis Cluster, like redis-cli; if used out of the right context, it can leave the cluster in a wrong state or cause data loss.

Return

Simple string reply: OK if the command was successful. Otherwise an error is returned.

73 - CLUSTER ADDSLOTSRANGE

Assign new hash slots to receiving node

The CLUSTER ADDSLOTSRANGE command is similar to the CLUSTER ADDSLOTS command in that both assign hash slots to nodes.

The difference between the two commands is that ADDSLOTS takes a list of slots to assign to the node, while ADDSLOTSRANGE takes a list of slot ranges (specified by start and end slots) to assign to the node.

Example

To assign slots 1 2 3 4 5 to the node, the ADDSLOTS command is:

> CLUSTER ADDSLOTS 1 2 3 4 5
OK

The same operation can be completed with the following ADDSLOTSRANGE command:

> CLUSTER ADDSLOTSRANGE 1 5
OK

Usage in Redis Cluster

This command only works in cluster mode and is useful in the following Redis Cluster operations:

  1. To create a new cluster, ADDSLOTSRANGE is used in order to initially set up master nodes, splitting the available hash slots among them.
  2. In order to fix a broken cluster where certain slots are unassigned.

Return

Simple string reply: OK if the command was successful. Otherwise an error is returned.

74 - CLUSTER BUMPEPOCH

Advance the cluster config epoch

Advances the cluster config epoch.

The CLUSTER BUMPEPOCH command triggers an increment to the cluster's config epoch from the connected node. The epoch will be incremented if the node's config epoch is zero, or if it is less than the cluster's greatest epoch.

Note: config epoch management is performed internally by the cluster, and relies on obtaining a consensus of nodes. The CLUSTER BUMPEPOCH attempts to increment the config epoch WITHOUT getting the consensus, so using it may violate the "last failover wins" rule. Use it with caution.

Return

Simple string reply: BUMPED if the epoch was incremented, or STILL if the node already has the greatest config epoch in the cluster.

75 - CLUSTER COUNT-FAILURE-REPORTS

Return the number of failure reports active for a given node

The command returns the number of failure reports for the specified node. Failure reports are the mechanism Redis Cluster uses to promote a PFAIL state, which means a node is not reachable, to a FAIL state, which means that the majority of masters in the cluster agreed within a window of time that the node is not reachable.

A few more details:

  • A node flags another node with PFAIL when the node is not reachable for a time greater than the configured node timeout, which is a fundamental configuration parameter of a Redis Cluster.
  • Nodes in PFAIL state are provided in gossip sections of heartbeat packets.
  • Every time a node processes gossip packets from other nodes, it creates (and refreshes the TTL if needed) failure reports, remembering that a given node said another given node is in PFAIL condition.
  • Each failure report has a time to live of two times the node timeout time.
  • If at a given time a node has another node flagged with PFAIL, and at the same time collected the majority of other master nodes failure reports about this node (including itself if it is a master), then it elevates the failure state of the node from PFAIL to FAIL, and broadcasts a message forcing all the nodes that can be reached to flag the node as FAIL.

This command returns the number of failure reports for the current node which are currently not expired (so received within two times the node timeout time). The count does not include what the node we are asking this count believes about the node ID we pass as argument, the count only includes the failure reports the node received from other nodes.

This command is mainly useful for debugging, when the failure detector of Redis Cluster is not operating as we believe it should.
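
Examples

A sketch, using a made-up node ID; a healthy node typically reports zero active failure reports:

> CLUSTER COUNT-FAILURE-REPORTS 07c37dfeb235213a872192d90877d0cd55635b91
(integer) 0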

Return

Integer reply: the number of active failure reports for the node.

76 - CLUSTER COUNTKEYSINSLOT

Return the number of local keys in the specified hash slot

Returns the number of keys in the specified Redis Cluster hash slot. The command only queries the local data set, so contacting a node that is not serving the specified hash slot will always result in a count of zero being returned.

> CLUSTER COUNTKEYSINSLOT 7000
(integer) 50341

Return

Integer reply: The number of keys in the specified hash slot, or an error if the hash slot is invalid.

77 - CLUSTER DELSLOTS

Set hash slots as unbound in receiving node

In Redis Cluster, each node keeps track of which master is serving a particular hash slot.

The CLUSTER DELSLOTS command asks a particular Redis Cluster node to forget which master is serving the hash slots specified as arguments.

In the context of a node that has received a CLUSTER DELSLOTS command and has consequently removed the associations for the passed hash slots, we say those hash slots are unbound. Note that the existence of unbound hash slots occurs naturally when a node has not been configured to handle them (something that can be done with the CLUSTER ADDSLOTS command) and if it has not received any information about who owns those hash slots (something that it can learn from heartbeat or update messages).

If a node with unbound hash slots receives a heartbeat packet from another node that claims to be the owner of some of those hash slots, the association is established instantly. Moreover, if a heartbeat or update message is received with a configuration epoch greater than the node's own, the association is re-established.

However, note that:

  1. The command only works if all the specified slots are already associated with some node.
  2. The command fails if the same slot is specified multiple times.
  3. As a side effect of the command execution, the node may go into down state because not all hash slots are covered.

Example

The following command removes the association for slots 5000 and 5001 from the node receiving the command:

> CLUSTER DELSLOTS 5000 5001
OK

Usage in Redis Cluster

This command only works in cluster mode and may be useful for debugging and in order to manually orchestrate a cluster configuration when a new cluster is created. It is currently not used by redis-cli, and mainly exists for API completeness.

Return

Simple string reply: OK if the command was successful. Otherwise an error is returned.

78 - CLUSTER DELSLOTSRANGE

Set hash slots as unbound in receiving node

The CLUSTER DELSLOTSRANGE command is similar to the CLUSTER DELSLOTS command in that they both remove hash slots from the node. The difference is that CLUSTER DELSLOTS takes a list of hash slots to remove from the node, while CLUSTER DELSLOTSRANGE takes a list of slot ranges (specified by start and end slots) to remove from the node.

Example

To remove slots 1 2 3 4 5 from the node, the CLUSTER DELSLOTS command is:

> CLUSTER DELSLOTS 1 2 3 4 5
OK

The same operation can be completed with the following CLUSTER DELSLOTSRANGE command:

> CLUSTER DELSLOTSRANGE 1 5
OK

However, note that:

  1. The command only works if all the specified slots are already associated with the node.
  2. The command fails if the same slot is specified multiple times.
  3. As a side effect of the command execution, the node may go into down state because not all hash slots are covered.

Usage in Redis Cluster

This command only works in cluster mode and may be useful for debugging and in order to manually orchestrate a cluster configuration when a new cluster is created. It is currently not used by redis-cli, and mainly exists for API completeness.

Return

Simple string reply: OK if the command was successful. Otherwise an error is returned.

79 - CLUSTER FAILOVER

Forces a replica to perform a manual failover of its master.

This command, which can only be sent to a Redis Cluster replica node, forces the replica to start a manual failover of its master instance.

A manual failover is a special kind of failover that is usually executed when there are no actual failures, but we wish to swap the current master with one of its replicas (which is the node we send the command to), in a safe way, without any window for data loss. It works in the following way:

  1. The replica tells the master to stop processing queries from clients.
  2. The master replies to the replica with the current replication offset.
  3. The replica waits for the replication offset to match on its side, to make sure it processed all the data from the master before it continues.
  4. The replica starts a failover, obtains a new configuration epoch from the majority of the masters, and broadcasts the new configuration.
  5. The old master receives the configuration update: unblocks its clients and starts replying with redirection messages so that they'll continue the chat with the new master.

This way clients are moved away from the old master to the new master atomically and only when the replica that is turning into the new master has processed all of the replication stream from the old master.

FORCE option: manual failover when the master is down

The command behavior can be modified by two options: FORCE and TAKEOVER.

If the FORCE option is given, the replica does not perform any handshake with the master, which may not be reachable, but instead just starts a failover ASAP, starting from point 4. This is useful when we want to start a manual failover while the master is no longer reachable.

However, even when using FORCE, we still need the majority of masters to be available in order to authorize the failover and generate a new configuration epoch for the replica that is going to become master.

TAKEOVER option: manual failover without cluster consensus

There are situations where this is not enough, and we want a replica to failover without any agreement with the rest of the cluster. A real world use case for this is to mass promote replicas in a different data center to masters in order to perform a data center switch, while all the masters are down or partitioned away.

The TAKEOVER option implies everything FORCE implies, but also does not use any cluster authorization in order to failover. A replica receiving CLUSTER FAILOVER TAKEOVER will instead:

  1. Generate a new configEpoch unilaterally, just taking the current greatest epoch available and incrementing it if its local configuration epoch is not already the greatest.
  2. Assign itself all the hash slots of its master, and propagate the new configuration to every node which is reachable ASAP, and eventually to every other node.

Note that TAKEOVER violates the last-failover-wins principle of Redis Cluster, since the configuration epoch generated by the replica violates the normal generation of configuration epochs in several ways:

  1. There is no guarantee that it is actually the highest configuration epoch in the cluster, since, for example, the TAKEOVER option can be used within a minority partition, and no message exchange is performed to generate the new configuration epoch.
  2. If we generate a configuration epoch which happens to collide with another instance, eventually our configuration epoch, or the one of another instance with our same epoch, will be moved away using the configuration epoch collision resolution algorithm.

Because of this the TAKEOVER option should be used with care.

Implementation details and notes

  • CLUSTER FAILOVER, unless the TAKEOVER option is specified, does not execute a failover synchronously. It only schedules a manual failover, bypassing the failure detection stage.
  • An OK reply is no guarantee that the failover will succeed.
  • A replica can only be promoted to a master if it is known as a replica by a majority of the masters in the cluster. If the replica is a new node that has just been added to the cluster (for example after upgrading it), it may not yet be known to all the masters in the cluster. To check that the masters are aware of a new replica, you can send CLUSTER NODES or CLUSTER REPLICAS to each of the master nodes and check that it appears as a replica, before sending CLUSTER FAILOVER to the replica.
  • To check that the failover has actually happened you can use ROLE, INFO REPLICATION (which indicates "role:master" after a successful failover), or CLUSTER NODES to verify that the state of the cluster has changed sometime after the command was sent (a polling sketch follows this list).
  • To check if the failover has failed, check the replica's log for "Manual failover timed out", which is logged if the replica has given up after a few seconds.
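
The verification step can be scripted. The following is a minimal sketch, not an official procedure, assuming redis-py and a hypothetical connection replica to the node being promoted:

import time

import redis

def manual_failover(replica: redis.Redis, timeout: float = 10.0) -> bool:
    """Trigger a manual failover and poll ROLE until the node reports master."""
    replica.execute_command("CLUSTER", "FAILOVER")  # OK only means "scheduled"
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        # The first element of the ROLE reply is the role name (bytes by default).
        if replica.execute_command("ROLE")[0] == b"master":
            return True
        time.sleep(0.5)
    return False  # check the replica's log for "Manual failover timed out"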

Return

Simple string reply: OK if the command was accepted and a manual failover is going to be attempted. An error if the operation cannot be executed, for example if we are talking with a node which is already a master.

80 - CLUSTER FLUSHSLOTS

Delete a node's own slots information

Deletes all slots from a node.

The CLUSTER FLUSHSLOTS deletes all information about slots from the connected node. It can only be called when the database is empty.

Return

Simple string reply: OK

81 - CLUSTER FORGET

Remove a node from the nodes table

The command is used in order to remove a node, specified via its node ID, from the set of known nodes of the Redis Cluster node receiving the command. In other words the specified node is removed from the nodes table of the node receiving the command.

Because all the other nodes participating in the cluster know about a node that is part of it, in order for a node to be completely removed from a cluster, the CLUSTER FORGET command must be sent to all the remaining nodes, regardless of whether they are masters or replicas.

However the command cannot simply drop the node from the internal node table of the node receiving the command; it also implements a ban-list that prevents the same node from being added again as a side effect of processing the gossip section of the heartbeat packets received from other nodes.

Details on why the ban-list is needed

In the following example we'll show why the command must not just remove a given node from the nodes table, but also prevent it from being re-inserted for some time.

Let's assume we have four nodes, A, B, C and D. In order to end up with just a three-node cluster A, B, C we may follow these steps:

  1. Reshard all the hash slots from D to nodes A, B, C.
  2. D is now empty, but still listed in the nodes table of A, B and C.
  3. We contact A, and send CLUSTER FORGET D.
  4. B sends node A a heartbeat packet, where node D is listed.
  5. A no longer knows node D (see step 3), so it starts a handshake with D.
  6. D ends up re-added to the nodes table of A.

As you can see, removing a node this way is fragile: we would need to send CLUSTER FORGET commands to all the nodes ASAP, hoping no gossip section mentioning the removed node is processed in the meantime. Because of this problem the command implements a ban-list with an expiry time for each entry.

So what the command really does is:

  1. The specified node gets removed from the nodes table.
  2. The node ID of the removed node gets added to the ban-list, for 1 minute.
  3. The node will skip all the node IDs listed in the ban-list when processing gossip sections received in heartbeat packets from other nodes.

This way we have a 60 second window to inform all the nodes in the cluster that we want to remove a node.
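
A minimal sketch of that procedure, assuming redis-py and a hypothetical list nodes holding a connection to every remaining cluster node:

import redis

def forget_node(nodes: list, node_id: str) -> None:
    """Send CLUSTER FORGET to every remaining node within the 60-second window."""
    for node in nodes:  # each entry is assumed to be a redis.Redis connection
        node.execute_command("CLUSTER", "FORGET", node_id)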

Special conditions not allowing the command execution

The command does not succeed and returns an error in the following cases:

  1. The specified node ID is not found in the nodes table.
  2. The node receiving the command is a replica, and the specified node ID identifies its current master.
  3. The node ID identifies the same node we are sending the command to.

Return

Simple string reply: OK if the command was executed successfully, otherwise an error is returned.

82 - CLUSTER GETKEYSINSLOT

Return local key names in the specified hash slot

The command returns an array of key names stored in the contacted node and hashing to the specified hash slot. The maximum number of keys to return is specified via the count argument, so that it is possible for the user of this API to batch-process keys.

The main usage of this command is during the rehashing of cluster slots from one node to another. The way the rehashing is performed is exposed in the Redis Cluster specification, or in a simpler-to-digest form, as an appendix of the CLUSTER SETSLOT command documentation.

> CLUSTER GETKEYSINSLOT 7000 3
1) "key_39015"
2) "key_89793"
3) "key_92937"

Return

Array reply: From 0 to count key names.

83 - CLUSTER HELP

Show helpful text about the different subcommands

The CLUSTER HELP command returns a helpful text describing the different subcommands.

Return

Array reply: a list of subcommands and their descriptions

84 - CLUSTER INFO

Provides info about Redis Cluster node state

CLUSTER INFO provides INFO style information about Redis Cluster vital parameters. The following is a sample output, followed by the description of each field reported.

cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:6
cluster_my_epoch:2
cluster_stats_messages_sent:1483972
cluster_stats_messages_received:1483968
total_cluster_links_buffer_limit_exceeded:0
  • cluster_state: State is ok if the node is able to receive queries. fail if there is at least one hash slot which is unbound (no node associated), in error state (node serving it is flagged with FAIL flag), or if the majority of masters can't be reached by this node.
  • cluster_slots_assigned: Number of slots which are associated to some node (not unbound). This number should be 16384 for the node to work properly, which means that each hash slot should be mapped to a node.
  • cluster_slots_ok: Number of hash slots mapping to a node not in FAIL or PFAIL state.
  • cluster_slots_pfail: Number of hash slots mapping to a node in PFAIL state. Note that those hash slots still work correctly, as long as the PFAIL state is not promoted to FAIL by the failure detection algorithm. PFAIL only means that we are currently not able to talk with the node, but may be just a transient error.
  • cluster_slots_fail: Number of hash slots mapping to a node in FAIL state. If this number is not zero the node is not able to serve queries unless cluster-require-full-coverage is set to no in the configuration.
  • cluster_known_nodes: The total number of known nodes in the cluster, including nodes in HANDSHAKE state that may not currently be proper members of the cluster.
  • cluster_size: The number of master nodes serving at least one hash slot in the cluster.
  • cluster_current_epoch: The local Current Epoch variable. This is used in order to create unique increasing version numbers during fail overs.
  • cluster_my_epoch: The Config Epoch of the node we are talking with. This is the current configuration version assigned to this node.
  • cluster_stats_messages_sent: Number of messages sent via the cluster node-to-node binary bus.
  • cluster_stats_messages_received: Number of messages received via the cluster node-to-node binary bus.
  • total_cluster_links_buffer_limit_exceeded: Accumulated count of cluster links freed due to exceeding the cluster-link-sendbuf-limit configuration.

More information about the Current Epoch and Config Epoch variables is available in the Redis Cluster specification document.
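
The payload is easy to consume programmatically. A minimal Python sketch that turns the raw bulk string into a dictionary (helper name hypothetical):

def parse_cluster_info(payload: str) -> dict:
    """Turn the <field>:<value> lines of CLUSTER INFO into a dictionary."""
    return dict(
        line.split(":", 1)
        for line in payload.splitlines()
        if ":" in line
    )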

Return

Bulk string reply: A map between named fields and values, in the form of <field>:<value> lines separated by newlines (the two bytes CRLF).

85 - CLUSTER KEYSLOT

Returns the hash slot of the specified key

Returns an integer identifying the hash slot the specified key hashes to. This command is mainly useful for debugging and testing, since it exposes via an API the underlying Redis implementation of the hashing algorithm. Example use cases for this command:

  1. Client libraries may use Redis in order to test their own hashing algorithm, generating random keys and hashing them both with their local implementation and with the Redis CLUSTER KEYSLOT command, then checking that the results match.
  2. Humans may use this command in order to check which hash slot, and therefore which Redis Cluster node, is responsible for a given key.

Example

> CLUSTER KEYSLOT somekey
(integer) 11058
> CLUSTER KEYSLOT foo{hash_tag}
(integer) 2515
> CLUSTER KEYSLOT bar{hash_tag}
(integer) 2515

Note that the command implements the full hashing algorithm, including support for hash tags: the special property of the Redis Cluster key hashing algorithm whereby only the part of the key between { and } is hashed, if such a pattern is found inside the key name, in order to force multiple keys to be handled by the same node.
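
For reference, the slot computation can be reproduced outside of Redis. The following Python sketch follows the Redis Cluster specification (CRC16, XMODEM variant, of the key or of its hash tag, modulo 16384); it is illustrative, not the server's actual C code:

def crc16_xmodem(data: bytes) -> int:
    """Bitwise CRC16 (XMODEM variant, polynomial 0x1021), per the cluster spec."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) if crc & 0x8000 else crc << 1
            crc &= 0xFFFF
    return crc

def key_hash_slot(key: bytes) -> int:
    """Hash only the substring between the first '{' and the next '}', if non-empty."""
    start = key.find(b"{")
    if start != -1:
        end = key.find(b"}", start + 1)
        if end > start + 1:  # a non-empty hash tag was found
            key = key[start + 1:end]
    return crc16_xmodem(key) % 16384

# key_hash_slot(b"foo{hash_tag}") == key_hash_slot(b"bar{hash_tag}")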

Return

Integer reply: The hash slot number.

86 - CLUSTER LINKS

Returns a list of all TCP links to and from peer nodes in cluster

Each node in a Redis Cluster maintains a pair of long-lived TCP links with each peer in the cluster: one for sending outbound messages towards the peer and one for receiving inbound messages from the peer.

CLUSTER LINKS outputs information about all such peer links as an array, where each array element is a map that contains attributes and their values for an individual link.

Examples

The following is an example output:

> CLUSTER LINKS
1)  1) "direction"
    2) "to"
    3) "node"
    4) "8149d745fa551e40764fecaf7cab9dbdf6b659ae"
    5) "create-time"
    6) (integer) 1639442739375
    7) "events"
    8) "rw"
    9) "send-buffer-allocated"
   10) (integer) 4512
   11) "send-buffer-used"
   12) (integer) 0
2)  1) "direction"
    2) "from"
    3) "node"
    4) "8149d745fa551e40764fecaf7cab9dbdf6b659ae"
    5) "create-time"
    6) (integer) 1639442739411
    7) "events"
    8) "r"
    9) "send-buffer-allocated"
   10) (integer) 0
   11) "send-buffer-used"
   12) (integer) 0

Each map is composed of the following attributes of the corresponding cluster link and their values:

  1. direction: This link is established by the local node to the peer, or accepted by the local node from the peer.
  2. node: The node id of the peer.
  3. create-time: Creation time of the link. (In the case of a to link, this is the time when the TCP link is created by the local node, not the time when it is actually established.)
  4. events: Events currently registered for the link. r means readable event, w means writable event.
  5. send-buffer-allocated: Allocated size of the link's send buffer, which is used to buffer outgoing messages toward the peer.
  6. send-buffer-used: Size of the portion of the link's send buffer that is currently holding data (messages).

Return

Array reply: An array of maps where each map contains various attributes and their values of a cluster link.

87 - CLUSTER MEET

Force a node to handshake with another node

CLUSTER MEET is used in order to connect different Redis nodes with cluster support enabled into a working cluster.

The basic idea is that nodes by default don't trust each other, and are considered unknown, so that it is unlikely that different cluster nodes will mix into a single one because of system administration errors or network address modifications.

So in order for a given node to accept another one into the list of nodes composing a Redis Cluster, there are only two ways:

  1. The system administrator sends a CLUSTER MEET command to force a node to meet another one.
  2. An already known node sends a list of nodes in the gossip section that we are not aware of. If the receiving node trusts the sending node as a known node, it will process the gossip section and send a handshake to the nodes that are still not known.

Note that Redis Cluster needs to form a full mesh (each node is connected with each other node), but in order to create a cluster, there is no need to send all the CLUSTER MEET commands needed to form the full mesh. What matters is to send enough CLUSTER MEET messages so that each node can reach each other node through a chain of known nodes. Thanks to the exchange of gossip information in heartbeat packets, the missing links will be created.

So, if we link node A with node B via CLUSTER MEET, and B with C, A and C will find their way to handshake and create a link.

Another example: if we imagine a cluster formed of the following four nodes called A, B, C and D, we may send just the following set of commands to A:

  1. CLUSTER MEET B-ip B-port
  2. CLUSTER MEET C-ip C-port
  3. CLUSTER MEET D-ip D-port

As a side effect of A knowing and being known by all the other nodes, it will send gossip sections in the heartbeat packets that will allow each other node to create a link with each other one, forming a full mesh in a matter of seconds, even if the cluster is large.

Moreover CLUSTER MEET does not need to be reciprocal. If I send the command to A in order to join B, I don't need to also send it to B in order to join A.
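
Bootstrapping the handshake phase can therefore be as simple as introducing every node to a single seed node. A sketch assuming redis-py, with hypothetical addresses:

import redis

def bootstrap_cluster(seed: redis.Redis, peers: list) -> None:
    """Introduce every (host, port) peer to one seed node; gossip completes the mesh."""
    for host, port in peers:
        seed.execute_command("CLUSTER", "MEET", host, port)

# bootstrap_cluster(redis.Redis(port=30001),
#                   [("127.0.0.1", 30002), ("127.0.0.1", 30003)])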

Implementation details: MEET and PING packets

When a given node receives a CLUSTER MEET message, the node specified in the command still does not know the node we sent the command to. So in order for the node to force the receiver to accept it as a trusted node, it sends a MEET packet instead of a PING packet. The two packets have exactly the same format, but the former forces the receiver to acknowledge the node as trusted.

Return

Simple string reply: OK if the command was successful. If the address or port specified are invalid an error is returned.

88 - CLUSTER MYID

Return the node id

Returns the node's id.

The CLUSTER MYID command returns the unique, auto-generated identifier that is associated with the connected cluster node.

Return

Bulk string reply: The node id.

89 - CLUSTER NODES

Get Cluster config for the node

Each node in a Redis Cluster has its view of the current cluster configuration, given by the set of known nodes, the state of the connection we have with such nodes, their flags, properties and assigned slots, and so forth.

CLUSTER NODES provides all this information, that is, the current cluster configuration of the node we are contacting, in a serialization format which happens to be exactly the same as the one used by Redis Cluster itself in order to store the cluster state on disk (however the on-disk cluster state has a few additional pieces of information appended at the end).

Note that normally clients wanting to fetch the map between Cluster hash slots and node addresses should use CLUSTER SLOTS instead. CLUSTER NODES, which provides more information, should be used for administrative tasks, debugging, and configuration inspections. It is also used by redis-cli in order to manage a cluster.

Serialization format

The output of the command is just a space-separated, CSV-like string, where each line represents a node in the cluster. The following is an example of output:

07c37dfeb235213a872192d90877d0cd55635b91 127.0.0.1:30004@31004 slave e7d1eecce10fd6bb5eb35b9f99a514335d9ba9ca 0 1426238317239 4 connected
67ed2db8d677e59ec4a4cefb06858cf2a1a89fa1 127.0.0.1:30002@31002 master - 0 1426238316232 2 connected 5461-10922
292f8b365bb7edb5e285caf0b7e6ddc7265d2f4f 127.0.0.1:30003@31003 master - 0 1426238318243 3 connected 10923-16383
6ec23923021cf3ffec47632106199cb7f496ce01 127.0.0.1:30005@31005 slave 67ed2db8d677e59ec4a4cefb06858cf2a1a89fa1 0 1426238316232 5 connected
824fe116063bc5fcf9f4ffd895bc17aee7731ac3 127.0.0.1:30006@31006 slave 292f8b365bb7edb5e285caf0b7e6ddc7265d2f4f 0 1426238317741 6 connected
e7d1eecce10fd6bb5eb35b9f99a514335d9ba9ca 127.0.0.1:30001@31001 myself,master - 0 0 1 connected 0-5460

Each line is composed of the following fields:

<id> <ip:port@cport> <flags> <master> <ping-sent> <pong-recv> <config-epoch> <link-state> <slot> <slot> ... <slot>

The meaning of each field is the following (a parsing sketch follows the list):

  1. id: The node ID, a 40 characters random string generated when a node is created and never changed again (unless CLUSTER RESET HARD is used).
  2. ip:port@cport: The node address where clients should contact the node to run queries.
  3. flags: A list of comma separated flags: myself, master, slave, fail?, fail, handshake, noaddr, nofailover, noflags. Flags are explained in detail in the next section.
  4. master: If the node is a replica, and the master is known, the master node ID, otherwise the "-" character.
  5. ping-sent: Milliseconds unix time at which the currently active ping was sent, or zero if there are no pending pings.
  6. pong-recv: Milliseconds unix time the last pong was received.
  7. config-epoch: The configuration epoch (or version) of the current node (or of the current master if the node is a replica). Each time there is a failover, a new, unique, monotonically increasing configuration epoch is created. If multiple nodes claim to serve the same hash slots, the one with higher configuration epoch wins.
  8. link-state: The state of the link used for the node-to-node cluster bus. We use this link to communicate with the node. Can be connected or disconnected.
  9. slot: A hash slot number or range. Starting from argument number 9, there may be up to 16384 entries in total (a limit never reached in practice). This is the list of hash slots served by this node. If the entry is just a number, it is parsed as such. If it is a range, it is in the form start-end, and means that the node is responsible for all the hash slots from start to end, including the start and end values.
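
A minimal Python sketch of parsing this format (dictionary field names chosen here for illustration):

def parse_cluster_nodes(payload: str) -> list:
    """Split the CLUSTER NODES bulk string into one dictionary per node."""
    nodes = []
    for line in payload.splitlines():
        if not line:
            continue
        parts = line.split(" ")
        addr, _, cluster_port = parts[1].partition("@")
        nodes.append({
            "id": parts[0],
            "addr": addr,                  # ip:port used by clients
            "cluster_bus_port": cluster_port,
            "flags": parts[2].split(","),
            "master": None if parts[3] == "-" else parts[3],
            "ping_sent": int(parts[4]),
            "pong_recv": int(parts[5]),
            "config_epoch": int(parts[6]),
            "link_state": parts[7],
            "slots": parts[8:],            # numbers, ranges, or special [...] entries
        })
    return nodes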

Meaning of the flags (field number 3):

  • myself: The node you are contacting.
  • master: Node is a master.
  • slave: Node is a replica.
  • fail?: Node is in PFAIL state. Not reachable for the node you are contacting, but still logically reachable (not in FAIL state).
  • fail: Node is in FAIL state. It was not reachable for multiple nodes that promoted the PFAIL state to FAIL.
  • handshake: Untrusted node, we are handshaking.
  • noaddr: No address known for this node.
  • nofailover: Replica will not try to failover.
  • noflags: No flags at all.

Notes on published config epochs

Replicas broadcast their master's config epochs (in order to get an UPDATE message if they are found to be stale), so the real config epoch of a replica (which is more or less meaningless, since replicas don't serve hash slots) can only be obtained by checking the node flagged as myself, which is the entry of the node we are asking to generate the CLUSTER NODES output. The other replicas' epochs reflect what they publish in heartbeat packets, which is the configuration epoch of the masters they are currently replicating.

Special slot entries

Normally hash slots associated to a given node are in one of the following formats, as already explained above:

  1. Single number: 3894
  2. Range: 3900-4000

However node hash slots can be in a special state, used in order to communicate errors after a node restart (mismatch between the keys in the AOF/RDB file and the node hash slots configuration), or when there is a resharding operation in progress. These two states are importing and migrating.

The meaning of the two states is explained in the Redis Specification, however the gist of the two states is the following:

  • Importing slots are not yet part of the node's hash slots; there is a migration in progress. The node will accept queries about these slots only when they are preceded by an ASKING command.
  • Migrating slots are assigned to the node, but are being migrated to some other node. The node will accept queries if all the keys in the command exist already, otherwise it will emit what is called an ASK redirection, to force new key creation directly in the importing node.

Importing and migrating slots are emitted in the CLUSTER NODES output as follows:

  • Importing slot: [slot_number-<-importing_from_node_id]
  • Migrating slot: [slot_number->-migrating_to_node_id]

The following are a few examples of importing and migrating slots:

  • [93-<-292f8b365bb7edb5e285caf0b7e6ddc7265d2f4f]
  • [1002-<-67ed2db8d677e59ec4a4cefb06858cf2a1a89fa1]
  • [77->-e7d1eecce10fd6bb5eb35b9f99a514335d9ba9ca]
  • [16311->-292f8b365bb7edb5e285caf0b7e6ddc7265d2f4f]

Note that these special entries contain no spaces, so the CLUSTER NODES output remains plain space-separated even when these special slot states are emitted. However, a complete parser for the format should be able to handle them.

Note that:

  1. Migrating and importing slots are only added to the node flagged as myself. This information is local to a node, for its own slots.
  2. Importing and migrating slots are provided as additional info. If the node has a given hash slot assigned, it will also be listed as a plain number in the list of hash slots, so clients that don't know about hash slot migrations can simply skip these special fields.

Return

Bulk string reply: The serialized cluster configuration.

A note about the word slave used in this man page and command name: Starting with Redis 5, if not for backward compatibility, the Redis project no longer uses the word slave. Unfortunately in this command the word slave is part of the protocol, so we'll be able to remove such occurrences only when this API will be naturally deprecated.

90 - CLUSTER REPLICAS

List replica nodes of the specified master node

The command provides a list of replica nodes replicating from the specified master node. The list is provided in the same format used by CLUSTER NODES (please refer to its documentation for the specification of the format).

The command will fail if the specified node is not known or if it is not a master according to the node table of the node receiving the command.

Note that if a replica is added, moved, or removed from a given master node, and we send CLUSTER REPLICAS to a node that has not yet received the configuration update, it may show stale information. However eventually (in a matter of seconds if there are no network partitions) all the nodes will agree about the set of nodes associated with a given master.

Return

The command returns data in the same format as CLUSTER NODES.

91 - CLUSTER REPLICATE

Reconfigure a node as a replica of the specified master node

The command reconfigures a node as a replica of the specified master. If the node receiving the command is an empty master, as a side effect of the command, the node role is changed from master to replica.

Once a node is turned into the replica of another master node, there is no need to inform the other cluster nodes about the change: heartbeat packets exchanged between nodes will propagate the new configuration automatically.

A replica will always accept the command, assuming that:

  1. The specified node ID exists in its nodes table.
  2. The specified node ID does not identify the instance we are sending the command to.
  3. The specified node ID is a master.

If the node receiving the command is not already a replica, but is a master, the command will succeed, and the node will be converted into a replica, only if the following additional conditions are met:

  1. The node is not serving any hash slots.
  2. The node is empty, no keys are stored at all in the key space.

If the command succeeds the new replica will immediately try to contact its master in order to replicate from it.

Return

Simple string reply: OK if the command was executed successfully, otherwise an error is returned.

92 - CLUSTER RESET

Reset a Redis Cluster node

Reset a Redis Cluster node, in a more or less drastic way depending on the reset type, which can be hard or soft. Note that this command does not work for masters if they hold one or more keys; in that case, to completely reset a master node, the keys must be removed first, e.g. by using FLUSHALL, and then calling CLUSTER RESET.

Effects on the node:

  1. All the other nodes in the cluster are forgotten.
  2. All the assigned / open slots are reset, so the slots-to-nodes mapping is totally cleared.
  3. If the node is a replica it is turned into an (empty) master. Its dataset is flushed, so at the end the node will be an empty master.
  4. Hard reset only: a new Node ID is generated.
  5. Hard reset only: currentEpoch and configEpoch vars are set to 0.
  6. The new configuration is persisted on disk in the node cluster configuration file.

This command is mainly useful to re-provision a Redis Cluster node in order to be used in the context of a new, different cluster. The command is also extensively used by the Redis Cluster testing framework in order to reset the state of the cluster every time a new test unit is executed.

If no reset type is specified, the default is soft.

Return

Simple string reply: OK if the command was successful. Otherwise an error is returned.

93 - CLUSTER SAVECONFIG

Forces the node to save cluster state on disk

Forces a node to save the nodes.conf configuration on disk. Before returning, the command calls fsync(2) in order to make sure the configuration is flushed to disk.

This command is mainly used in the event a nodes.conf node state file gets lost or deleted for some reason, and we want to generate it again from scratch. It can also be useful in case of mundane alterations of a node's cluster configuration via the CLUSTER command, in order to ensure the new configuration is persisted on disk. However, all such commands should normally be able to schedule persisting the configuration on disk automatically whenever doing so is important for the correctness of the system in the event of a restart.

Return

Simple string reply: OK or an error if the operation fails.

94 - CLUSTER SET-CONFIG-EPOCH

Set the configuration epoch in a new node

This command sets a specific config epoch in a fresh node. It only works when:

  1. The nodes table of the node is empty.
  2. The node current config epoch is zero.

These prerequisites are needed because manually altering the configuration epoch of a node is usually unsafe: we want to be sure that the node with the highest configuration epoch value (that is, the last that failed over) wins over other nodes in claiming hash slot ownership.

However there is an exception to this rule, and it is when a new cluster is created from scratch. The Redis Cluster config epoch collision resolution algorithm can deal with new nodes all configured with the same configuration epoch at startup, but this process is slow and should be the exception, existing only to make sure that whatever happens, any two nodes eventually move away from the state of having the same configuration epoch.

So, using CLUSTER SET-CONFIG-EPOCH, when a new cluster is created, we can assign a different progressive configuration epoch to each node before joining the cluster together.

Return

Simple string reply: OK if the command was executed successfully, otherwise an error is returned.

95 - CLUSTER SETSLOT

Bind a hash slot to a specific node

CLUSTER SETSLOT is responsible for changing the state of a hash slot in the receiving node in different ways. Depending on the subcommand used, it can:

  1. MIGRATING subcommand: Set a hash slot in migrating state.
  2. IMPORTING subcommand: Set a hash slot in importing state.
  3. STABLE subcommand: Clear any importing / migrating state from hash slot.
  4. NODE subcommand: Bind the hash slot to a different node.

The command with its set of subcommands is useful in order to start and end cluster live resharding operations, which are accomplished by setting a hash slot in migrating state in the source node, and importing state in the destination node.

Each subcommand is documented below. At the end you'll find a description of how live resharding is performed using this command and other related commands.

CLUSTER SETSLOT <slot> MIGRATING <destination-node-id>

This subcommand sets a slot to migrating state. In order to set a slot in this state, the node receiving the command must be the hash slot owner, otherwise an error is returned.

When a slot is set in migrating state, the node changes behavior in the following way:

  1. If a command is received about an existing key, the command is processed as usual.
  2. If a command is received about a key that does not exist, an ASK redirection is emitted by the node, asking the client to retry only that specific query against the destination node. In this case the client should not update its hash slot to node mapping.
  3. If the command contains multiple keys: if none exist, the behavior is the same as point 2; if all exist, it is the same as point 1; however, if only some of the keys exist, the command emits a TRYAGAIN error, so that the keys involved can finish migrating to the target node and the multi-key command can then be executed.

CLUSTER SETSLOT <slot> IMPORTING <source-node-id>

This subcommand is the reverse of MIGRATING, and prepares the destination node to import keys from the specified source node. The command only works if the node is not already the owner of the specified hash slot.

When a slot is set in importing state, the node changes behavior in the following way:

  1. Commands about this hash slot are refused and a MOVED redirection is generated as usual, unless the command is preceded by an ASKING command, in which case the command is executed.

In this way when a node in migrating state generates an ASK redirection, the client contacts the target node, sends ASKING, and immediately after sends the command. This way commands about non-existing keys in the old node or keys already migrated to the target node are executed in the target node, so that:

  1. New keys are always created in the target node. During a hash slot migration we'll have to move only old keys, not new ones.
  2. Commands about keys already migrated are correctly processed in the context of the node which is the target of the migration, the new hash slot owner, in order to guarantee consistency.
  3. Without ASKING the behavior is the same as usual. This guarantees that clients with a stale hash slot mapping will not mistakenly write to the target node, creating a new version of a key that has yet to be migrated.

CLUSTER SETSLOT <slot> STABLE

This subcommand just clears migrating / importing state from the slot. It is mainly used to fix a cluster stuck in a wrong state by redis-cli --cluster fix. Normally the two states are cleared automatically at the end of the migration using the SETSLOT ... NODE ... subcommand as explained in the next section.

CLUSTER SETSLOT <slot> NODE <node-id>

The NODE subcommand is the one with the most complex semantics. It associates the hash slot with the specified node, however the command works only in specific situations and has different side effects depending on the slot state. The following is the set of pre-conditions and side effects of the command:

  1. If the current hash slot owner is the node receiving the command, but as an effect of the command the slot would be assigned to a different node, the command will return an error if there are still keys for that hash slot in the node receiving the command.
  2. If the slot is in migrating state, the state gets cleared when the slot is assigned to another node.
  3. If the slot was in importing state in the node receiving the command, and the command assigns the slot to this node (which happens in the target node at the end of the resharding of a hash slot from one node to another), the command has the following side effects: A) the importing state is cleared. B) If the node config epoch is not already the greatest of the cluster, it generates a new one and assigns the new config epoch to itself. This way its new hash slot ownership will win over any past configuration created by previous failovers or slot migrations.

It is important to note that step 3 is the only time when a Redis Cluster node will create a new config epoch without agreement from other nodes. This only happens when a manual configuration is operated. However it is impossible for this to create a non-transient setup where two nodes have the same config epoch, since Redis Cluster uses a config epoch collision resolution algorithm.

Return

Simple string reply: All the subcommands return OK if the command was successful. Otherwise an error is returned.

Redis Cluster live resharding explained

The CLUSTER SETSLOT command is an important piece used by Redis Cluster in order to migrate all the keys contained in one hash slot from one node to another. This is how the migration is orchestrated, with the help of other commands as well (a code sketch follows the notes below). We'll call the node that has the current ownership of the hash slot the source node, and the node where we want to migrate the destination node.

  1. Set the destination node slot to importing state using CLUSTER SETSLOT <slot> IMPORTING <source-node-id>.
  2. Set the source node slot to migrating state using CLUSTER SETSLOT <slot> MIGRATING <destination-node-id>.
  3. Get keys from the source node with CLUSTER GETKEYSINSLOT command and move them into the destination node using the MIGRATE command.
  4. Send CLUSTER SETSLOT <slot> NODE <destination-node-id> to the destination node.
  5. Send CLUSTER SETSLOT <slot> NODE <destination-node-id> to the source node.
  6. Send CLUSTER SETSLOT <slot> NODE <destination-node-id> to the other master nodes (optional).

Notes:

  • The order of step 1 and 2 is important. We want the destination node to be ready to accept ASK redirections when the source node is configured to redirect.
  • The order of step 4 and 5 is important. The destination node is responsible for propagating the change to the rest of the cluster. If the source node is informed before the destination node and the destination node crashes before it is set as new slot owner, the slot is left with no owner, even after a successful failover.
  • Step 6, sending SETSLOT to the nodes not involved in the resharding, is not technically necessary since the configuration will eventually propagate itself. However, it is a good idea to do so in order to stop nodes from pointing to the wrong node for the moved hash slot as soon as possible, resulting in fewer redirections to find the right node.
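
The sketch below expresses steps 1-5, assuming redis-py, with src/dst being hypothetical connections to the source and destination nodes, src_id/dst_id their node IDs, and dst_host/dst_port the destination address:

import redis

def reshard_slot(slot: int, src: redis.Redis, dst: redis.Redis,
                 src_id: str, dst_id: str,
                 dst_host: str, dst_port: int, batch: int = 100) -> None:
    """Move one hash slot from src to dst following the steps above."""
    dst.execute_command("CLUSTER", "SETSLOT", slot, "IMPORTING", src_id)  # step 1
    src.execute_command("CLUSTER", "SETSLOT", slot, "MIGRATING", dst_id)  # step 2
    while True:                                                           # step 3
        keys = src.execute_command("CLUSTER", "GETKEYSINSLOT", slot, batch)
        if not keys:
            break
        # An empty key name plus the KEYS option migrates a whole batch.
        src.execute_command("MIGRATE", dst_host, dst_port, "", 0, 5000,
                            "KEYS", *keys)
    dst.execute_command("CLUSTER", "SETSLOT", slot, "NODE", dst_id)       # step 4
    src.execute_command("CLUSTER", "SETSLOT", slot, "NODE", dst_id)       # step 5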

96 - CLUSTER SHARDS

Get array of cluster slots to node mappings

CLUSTER SHARDS returns details about the shards of the cluster. A shard is defined as a collection of nodes that serve the same set of slots and that replicate from each other. A shard may only have a single master at a given time, but may have multiple or no replicas. It is possible for a shard to not be serving any slots while still having replicas.

This command replaces the CLUSTER SLOTS command, by providing a more efficient and extensible representation of the cluster.

The command is suitable to be used by Redis Cluster client libraries in order to understand the topology of the cluster. A client should issue this command on startup in order to retrieve the map associating cluster hash slots with actual node information. This map should be used to direct commands to the node that is likely serving the slot associated with a given command. If a command is sent to the wrong node and a '-MOVED' redirect is received, this command can then be used to update the topology of the cluster.

The command returns an array of shards, with each shard containing two fields, 'slots' and 'nodes'.

The 'slots' field is a list of slot ranges served by this shard, stored as pairs of integers representing the inclusive start and end slots of the ranges. For example, if a node owns the slots 1, 2, 3, 5, 7, 8 and 9, the slot ranges would be stored as [1-3], [5-5], [7-9]. The slots field would therefore be represented by the following list of integers.

1) 1) "slots"
   2) 1) (integer) 1
      2) (integer) 3
      3) (integer) 5
      4) (integer) 5
      5) (integer) 7
      6) (integer) 9

The 'nodes' field contains a list of all nodes within the shard. Each individual node is a map of attributes that describe the node. Some attributes are optional and more attributes may be added in the future. The current list of attributes:

  • id: The unique node id for this particular node.
  • endpoint: The preferred endpoint to reach the node, see below for more information about the possible values of this field.
  • ip: The IP address to send requests to for this node.
  • hostname (optional): The announced hostname to send requests to for this node.
  • port (optional): The TCP (non-TLS) port of the node. At least one of port or tls-port will be present.
  • tls-port (optional): The TLS port of the node. At least one of port or tls-port will be present.
  • role: The replication role of this node.
  • replication-offset: The replication offset of this node. This information can be used to send commands to the most up to date replicas.
  • health: Either online, failed, or loading. This information should be used to determine which nodes should be sent traffic. The loading health state should be used to know that a node is not currently eligible to serve traffic, but may be eligible in the future.

The endpoint, along with the port, defines the location that clients should use to send requests for a given slot. A NULL value for the endpoint indicates the node has an unknown endpoint and the client should connect to the same endpoint it used to send the CLUSTER SHARDS command but with the port returned from the command. This unknown endpoint configuration is useful when the Redis nodes are behind a load balancer that Redis doesn't know the endpoint of. Which endpoint is set is determined by the cluster-preferred-endpoint-type config.
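
In RESP2 each map is returned as a flat array of alternating field names and values. A Python sketch of normalizing an already-received reply client-side (values may be bytes or strings depending on the client):

def parse_shards(reply: list) -> list:
    """Normalize a RESP2 CLUSTER SHARDS reply (alternating field/value arrays)."""
    shards = []
    for shard in reply:
        fields = dict(zip(shard[::2], shard[1::2]))      # "slots", "nodes"
        slots = fields["slots"]
        ranges = list(zip(slots[::2], slots[1::2]))      # inclusive (start, end) pairs
        nodes = [dict(zip(node[::2], node[1::2])) for node in fields["nodes"]]
        shards.append({"ranges": ranges, "nodes": nodes})
    return shards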

Return

Array reply: nested list of a map of hash ranges and shard nodes.

Examples

> CLUSTER SHARDS
1) 1) "slots"
   2) 1) (integer) 10923
      2) (integer) 11110
      3) (integer) 11113
      4) (integer) 16111
      5) (integer) 16113
      6) (integer) 16383
   3) "nodes"
   4) 1)  1) "id"
          2) "71f058078c142a73b94767a4e78e9033d195dfb4"
          3) "port"
          4) (integer) 6381
          5) "ip"
          6) "127.0.0.1"
          7) "role"
          8) "primary"
          9) "replication-offset"
         10) (integer) 1500
         11) "health"
         12) "online"
      2)  1) "id"
          2) "1461967c62eab0e821ed54f2c98e594fccfd8736"
          3) "port"
          4) (integer) 7381
          5) "ip"
          6) "127.0.0.1"
          7) "role"
          8) "replica"
          9) "replication-offset"
         10) (integer) 700
         11) "health"
         12) "fail"
2) 1) "slots"
   2) 1) (integer) 5461
      2) (integer) 10922
   3) "nodes"
   4) 1)  1) "id"
          2) "9215e30cd4a71070088778080565de6ef75fd459"
          3) "port"
          4) (integer) 6380
          5) "ip"
          6) "127.0.0.1"
          7) "role"
          8) "primary"
          9) "replication-offset"
         10) (integer) 1200
         11) "health"
         12) "online"
      2)  1) "id"
          2) "877fa59da72cb902d0563d3d8def3437fc3a0196"
          3) "port"
          4) (integer) 7380
          5) "ip"
          6) "127.0.0.1"
          7) "role"
          8) "replica"
          9) "replication-offset"
         10) (integer) 1100
         11) "health"
         12) "loading"
3) 1) "slots"
   2) 1) (integer) 0
      2) (integer) 5460
      3) (integer) 11111
      4) (integer) 11112
      5) (integer) 16112
      6) (integer) 16112
   3) "nodes"
   4) 1)  1) "id"
          2) "b7e9acc0def782aabe6b596f67f06c73c2ffff93"
          3) "port"
          4) (integer) 7379
          5) "ip"
          6) "127.0.0.1"
          7) "hostname"
          8) "example.com"
          9) "role"
         10) "replica"
         11) "replication-offset"
         12) "primary"
         13) "health"
         14) "online"
      2)  1) "id"
          2) "e2acf1a97c055fd09dcc2c0dcc62b19a6905dbc8"
          3) "port"
          4) (integer) 6379
          5) "ip"
          6) "127.0.0.1"
          7) "hostname"
          8) "example.com"
          9) "role"
         10) "replica"
         11) "replication-offset"
         12) (integer) 0
         13) "health"
         14) "loading"

97 - CLUSTER SLAVES

List replica nodes of the specified master node

A note about the word slave used in this man page and command name: starting with Redis version 5, if not for backward compatibility, the Redis project no longer uses the word slave. Please use the new command CLUSTER REPLICAS. The command CLUSTER SLAVES will continue to work for backward compatibility.

The command provides a list of replica nodes replicating from the specified master node. The list is provided in the same format used by CLUSTER NODES (please refer to its documentation for the specification of the format).

The command will fail if the specified node is not known or if it is not a master according to the node table of the node receiving the command.

Note that if a replica is added, moved, or removed from a given master node, and we send CLUSTER SLAVES to a node that has not yet received the configuration update, it may show stale information. However eventually (in a matter of seconds if there are no network partitions) all the nodes will agree about the set of nodes associated with a given master.

Return

The command returns data in the same format as CLUSTER NODES.

98 - CLUSTER SLOTS

Get array of Cluster slot to node mappings

CLUSTER SLOTS returns details about which cluster slots map to which Redis instances. The command is suitable to be used by Redis Cluster client library implementations in order to retrieve (or update, when a redirection is received) the map associating cluster hash slots with actual node network information, so that when a command is received, it can be sent to what is likely the right instance for the keys specified in the command.

The networking information for each node is an array containing the following elements:

  • Preferred endpoint (Either an IP address, hostname, or NULL)
  • Port number
  • The node ID
  • A map of additional networking metadata

The preferred endpoint, along with the port, defines the location that clients should use to send requests for a given slot. A NULL value for the endpoint indicates the node has an unknown endpoint and the client should connect to the same endpoint it used to send the CLUSTER SLOTS command but with the port returned from the command. This unknown endpoint configuration is useful when the Redis nodes are behind a load balancer that Redis doesn't know the endpoint of. Which endpoint is set as preferred is determined by the cluster-preferred-endpoint-type config.

Additional networking metadata is provided as a map on the fourth argument for each node. The following networking metadata may be returned:

  • IP: When the preferred endpoint is not set to IP.
  • Hostname: When a node has an announced hostname but the primary endpoint is not set to hostname.

Nested Result Array

Each nested result is:

  • Start slot range
  • End slot range
  • Master for slot range represented as nested networking information
  • First replica of master for slot range
  • Second replica
  • ...continues until all replicas for this master are returned.

Each result includes all active replicas of the master instance for the listed slot range. Failed replicas are not returned.

The third nested reply is guaranteed to be the networking information of the master instance for the slot range. All networking information entries after the third nested reply describe replicas of the master.

If a cluster instance has non-contiguous slots (e.g. 1-400,900,1800-6000) then master and replica networking information results will be duplicated for each top-level slot range reply.
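
Client libraries typically expand the reply into a routing table. A minimal Python sketch over an already-received reply of the shape documented above:

def build_slot_table(cluster_slots_reply: list) -> dict:
    """Map every slot number to the (endpoint, port) of its master."""
    table = {}
    for start, end, master, *replicas in cluster_slots_reply:
        endpoint, port = master[0], master[1]
        for slot in range(start, end + 1):
            table[slot] = (endpoint, port)
    return table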

Return

Array reply: nested list of slot ranges with networking information.

Examples

> CLUSTER SLOTS
1) 1) (integer) 0
   2) (integer) 5460
   3) 1) "127.0.0.1"
      2) (integer) 30001
      3) "09dbe9720cda62f7865eabc5fd8857c5d2678366"
      4) 1) hostname
         2) "host-1.redis.example.com"
   4) 1) "127.0.0.1"
      2) (integer) 30004
      3) "821d8ca00d7ccf931ed3ffc7e3db0599d2271abf"
      4) 1) hostname
         2) "host-2.redis.example.com"
2) 1) (integer) 5461
   2) (integer) 10922
   3) 1) "127.0.0.1"
      2) (integer) 30002
      3) "c9d93d9f2c0c524ff34cc11838c2003d8c29e013"
      4) 1) hostname
         2) "host-3.redis.example.com"
   4) 1) "127.0.0.1"
      2) (integer) 30005
      3) "faadb3eb99009de4ab72ad6b6ed87634c7ee410f"
      4) 1) hostname
         2) "host-4.redis.example.com"
3) 1) (integer) 10923
   2) (integer) 16383
   3) 1) "127.0.0.1"
      2) (integer) 30003
      3) "044ec91f325b7595e76dbcb18cc688b6a5b434a1"
      4) 1) hostname
         2) "host-5.redis.example.com"
   4) 1) "127.0.0.1"
      2) (integer) 30006
      3) "58e6e48d41228013e5d9c1c37c5060693925e97e"
      4) 1) hostname
         2) "host-6.redis.example.com"

Warning: In future versions there could be more elements describing the node better. In general a client implementation should just rely on the fact that certain parameters are at fixed positions as specified, but more parameters may follow and should be ignored. Similarly a client library should try, if possible, to cope with the fact that older versions may only provide the primary endpoint and port parameters.

Behavior change history

  • >= 7.0.0: Added support for hostnames and unknown endpoints in first field of node response.

99 - CMS.INCRBY

Increases the count of one or more items by increment

Increases the count of an item by increment. Multiple items can be increased with one call.

Parameters:

  • key: The name of the sketch.
  • item: The item whose counter is to be increased.
  • increment: Amount by which the item counter is to be increased.

Return

Array reply of integers: the updated min-count of each of the items in the sketch, i.e. the count of each item after the increment.

Examples

redis> CMS.INCRBY test foo 10 bar 42
1) (integer) 10
2) (integer) 42

100 - CMS.INFO

Returns information about a sketch

Returns width, depth and total count of the sketch.

Parameters:

  • key: The name of the sketch.

Return

Array reply with information about the sketch.

Examples

redis> CMS.INFO test
 1) width
 2) (integer) 2000
 3) depth
 4) (integer) 7
 5) count
 6) (integer) 0

101 - CMS.INITBYDIM

Initializes a Count-Min Sketch to dimensions specified by user

Initializes a Count-Min Sketch to dimensions specified by user.

Parameters:

  • key: The name of the sketch.
  • width: Number of counters in each array. Reduces the error size.
  • depth: Number of counter-arrays. Reduces the probability for an error of a certain size (percentage of total count).

Return

Simple string reply: OK if executed correctly, or an error otherwise.

Examples

redis> CMS.INITBYDIM test 2000 5
OK

102 - CMS.INITBYPROB

Initializes a Count-Min Sketch to accommodate requested tolerances.

Initializes a Count-Min Sketch to accommodate requested tolerances.

Parameters:

  • key: The name of the sketch.
  • error: Estimated size of the error. The error is a percent of total counted items. This affects the width of the sketch.
  • probability: The desired probability for an inflated count. This should be a decimal value between 0 and 1, and it affects the depth of the sketch. For example, for a desired false-positive rate of 0.1% (1 in 1000), the probability should be set to 0.001. The closer this number is to zero, the greater the memory consumption per item and the more CPU usage per operation. (A sizing sketch follows this list.)
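
For intuition, the textbook Count-Min Sketch analysis relates the two parameters to the dimensions roughly as follows; this is a sketch of the classic formulas, and the module's exact rounding may differ:

import math

def cms_dimensions(error: float, probability: float) -> tuple:
    """Classic Count-Min sizing: width ~ e/error, depth ~ ln(1/probability)."""
    width = math.ceil(math.e / error)
    depth = math.ceil(math.log(1 / probability))
    return width, depth

# cms_dimensions(0.001, 0.01) -> (2719, 5)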

Return

Simple string reply: OK if executed correctly, or an error otherwise.

Examples

redis> CMS.INITBYPROB test 0.001 0.01
OK

103 - CMS.MERGE

Merges several sketches into one sketch

Merges several sketches into one sketch. All sketches must have identical width and depth. Weights can be used to multiply certain sketches. Default weight is 1.

Parameters:

  • dest: The name of destination sketch. Must be initialized.
  • numKeys: Number of sketches to be merged.
  • src: Names of source sketches to be merged.
  • weight: Multiplier applied to each source sketch. Default is 1.

Return

Simple string reply: OK if executed correctly, or an error otherwise.

Examples

redis> CMS.MERGE dest 2 test1 test2 WEIGHTS 1 3
OK

104 - CMS.QUERY

Returns the count for one or more items in a sketch

Returns the count for one or more items in a sketch.

Parameters:

  • key: The name of the sketch.
  • item: One or more items for which to return the count.

Return

Array reply of integers: the min-count of each of the items in the sketch.

Examples

redis> CMS.QUERY test foo bar
1) (integer) 10
2) (integer) 42

105 - COMMAND

Get array of Redis command details

Return an array with details about every Redis command.

The COMMAND command is introspective. Its reply describes all commands that the server can process. Redis clients can call it to obtain the server's runtime capabilities during the handshake.

COMMAND also has several subcommands. Please refer to its subcommands for further details.

Cluster note: this command is especially beneficial for cluster-aware clients. Such clients must identify the names of keys in commands to route requests to the correct shard. Although most commands accept a single key as their first argument, there are many exceptions to this rule. You can call COMMAND and then keep the mapping between commands and their respective key specification rules cached in the client.

The reply it returns is an array with an element per command. Each element that describes a Redis command is represented as an array by itself.

The command's array consists of a fixed number of elements. The exact number of elements in the array depends on the server's version.

  1. Name
  2. Arity
  3. Flags
  4. First key
  5. Last key
  6. Step
  7. ACL categories (as of Redis 6.0)
  8. Tips (as of Redis 7.0)
  9. Key specifications (as of Redis 7.0)
  10. Subcommands (as of Redis 7.0)

Name

This is the command's name in lowercase.

Note: Redis command names are case-insensitive.

Arity

Arity is the number of arguments a command expects. It follows a simple pattern:

  • A positive integer means a fixed number of arguments.
  • A negative integer means a minimal number of arguments.

Command arity always includes the command's name itself (and the subcommand when applicable).

Examples:

  • GET's arity is 2 since the command only accepts one argument and always has the format GET _key_.
  • MGET's arity is -2 since the command accepts at least one argument, but possibly multiple ones: MGET _key1_ [key2] [key3] ....

Flags

Command flags are an array. It can contain the following simple strings (status reply):

  • admin: the command is an administrative command.
  • asking: the command is allowed even during hash slot migration. This flag is relevant in Redis Cluster deployments.
  • blocking: the command may block the requesting client.
  • denyoom: the command is rejected if the server's memory usage is too high (see the maxmemory configuration directive).
  • fast: the command operates in constant or log(N) time. This flag is used for monitoring latency with the LATENCY command.
  • loading: the command is allowed while the database is loading.
  • may_replicate: the command may be replicated to replicas and the AOF.
  • movablekeys: the first key, last key, and step values don't determine all key positions. Clients need to use COMMAND GETKEYS or key specifications in this case. See below for more details.
  • no_auth: executing the command doesn't require authentication.
  • no_async_loading: the command is denied during asynchronous loading (that is when a replica uses disk-less SWAPDB SYNC, and allows access to the old dataset).
  • no_mandatory_keys: the command may accept key name arguments, but these aren't mandatory.
  • no_multi: the command isn't allowed inside the context of a transaction.
  • noscript: the command can't be called from scripts or functions.
  • pubsub: the command is related to Redis Pub/Sub.
  • random: the command returns random results, which is a concern with verbatim script replication. As of Redis 7.0, this flag is a command tip.
  • readonly: the command doesn't modify data.
  • sort_for_script: the command's output is sorted when called from a script.
  • skip_monitor: the command is not shown in MONITOR's output.
  • skip_slowlog: the command is not shown in SLOWLOG's output. As of Redis 7.0, this flag is a command tip.
  • stale: the command is allowed while a replica has stale data.
  • write: the command may modify data.

Movablekeys

Consider SORT:

1) 1) "sort"
   2) (integer) -2
   3) 1) write
      2) denyoom
      3) movablekeys
   4) (integer) 1
   5) (integer) 1
   6) (integer) 1
   ...

Some Redis commands have no predetermined key locations or are not easy to find. For those commands, the movablekeys flag indicates that the first key, last key, and step values are insufficient to find all the keys.

Here are several examples of commands that have the movablekeys flag:

  • SORT: the optional STORE, BY, and GET modifiers are followed by names of keys.
  • ZUNION: the numkeys argument specifies the number of key name arguments.
  • MIGRATE: the keys appear after the KEYS keyword, and only when the second argument is the empty string.

Redis Cluster clients need to use other measures, as follows, to locate the keys for such commands.

You can use the COMMAND GETKEYS command and have your Redis server report all keys of a given command's invocation.

As of Redis 7.0, clients can use the key specifications to identify the positions of key names. For clients that parse key specifications, the only commands that still require using COMMAND GETKEYS are SORT and MIGRATE.

For more information, please refer to the key specifications page.
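For example, you can ask the server to extract the keys of a SORT invocation (the key names here are arbitrary; they don't need to exist, since only the command's arguments are parsed):

> COMMAND GETKEYS SORT mylist ALPHA STORE outlist
1) "mylist"
2) "outlist"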

First key

This value identifies the position of the command's first key name argument. For most commands, the first key's position is 1. Position 0 is always the command name itself.

Last key

This value identifies the position of the command's last key name argument. Redis commands usually accept one key, two keys, or a variable number of keys.

Commands that accept a single key have both first key and last key set to 1.

Commands that accept two key name arguments, e.g. BRPOPLPUSH, SMOVE and RENAME, have this value set to the position of their second key.

Multi-key commands that accept an arbitrary number of keys, such as MSET, use the value -1.

Step

This value is the step, or increment, between key positions, going from the first key to the last key.

Consider the following two examples:

1) 1) "mset"
   2) (integer) -3
   3) 1) write
      2) denyoom
   4) (integer) 1
   5) (integer) -1
   6) (integer) 2
   ...
1) 1) "mget"
   2) (integer) -2
   3) 1) readonly
      2) fast
   4) (integer) 1
   5) (integer) -1
   6) (integer) 1
   ...

The step count allows us to find keys' positions for commands like MSET. Its syntax is MSET key1 val1 [key2 val2] [key3 val3] ..., so the keys are at every other position. Therefore, unlike MGET, which uses a step value of 1, MSET uses 2.
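You can verify the effect of the step value by asking the server to extract the keys of an MSET call (the key and value names here are arbitrary):

> COMMAND GETKEYS MSET key1 val1 key2 val2
1) "key1"
2) "key2"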

ACL categories

This is an array of simple strings that are the ACL categories to which the command belongs. Please refer to the Access Control List page for more information.

Command tips

Helpful information about the command. To be used by clients/proxies.

Please check the Command tips page for more information.

Key specifications

This is an array consisting of the command's key specifications. Each element in the array is a map describing a method for locating keys in the command's arguments.

For more information please check the key specifications page.

Subcommands

This is an array containing all of the command's subcommands, if any. Some Redis commands have subcommands (e.g., the REWRITE subcommand of CONFIG). Each element in the array represents one subcommand and follows the same specifications as those of COMMAND's reply.

Return

Array reply: a nested list of command details.

The order of commands in the array is random.

Examples

The following is COMMAND's output for the GET command:

1)  1) "get"
    2) (integer) 2
    3) 1) readonly
       2) fast
    4) (integer) 1
    5) (integer) 1
    6) (integer) 1
    7) 1) @read
       2) @string
       3) @fast
    8) (empty array)
    9) 1) 1) "flags"
          2) 1) read
          3) "begin_search"
          4) 1) "type"
             2) "index"
             3) "spec"
             4) 1) "index"
                2) (integer) 1
          5) "find_keys"
          6) 1) "type"
             2) "range"
             3) "spec"
             4) 1) "lastkey"
                2) (integer) 0
                3) "keystep"
                4) (integer) 1
                5) "limit"
                6) (integer) 0
   10) (empty array)
...

106 - COMMAND COUNT

Get total number of Redis commands

Returns an Integer reply of the total number of commands in this Redis server.

Return

Integer reply: number of commands returned by COMMAND

Examples

COMMAND COUNT

107 - COMMAND DOCS

Get array of specific Redis command documentation

Return documentary information about commands.

By default, the reply includes all of the server's commands. You can use the optional command-name argument to specify the names of one or more commands.

The reply includes a map for each returned command. The following keys may be included in the mapped reply:

  • summary: short command description.
  • since: the Redis version that added the command (or for module commands, the module version).
  • group: the functional group to which the command belongs. Possible values are:
    • bitmap
    • cluster
    • connection
    • generic
    • geo
    • hash
    • hyperloglog
    • list
    • module
    • pubsub
    • scripting
    • sentinel
    • server
    • set
    • sorted-set
    • stream
    • string
    • transactions
  • complexity: a short explanation about the command's time complexity.
  • doc_flags: an array of documentation flags. Possible values are:
    • deprecated: the command is deprecated.
    • syscmd: a system command that isn't meant to be called by users.
  • deprecated_since: the Redis version that deprecated the command (or for module commands, the module version).
  • replaced_by: the alternative for a deprecated command.
  • history: an array of historical notes describing changes to the command's behavior or arguments. Each entry is an array itself, made up of two elements:
    1. The Redis version that the entry applies to.
    2. The description of the change.
  • arguments: an array of maps that describe the command's arguments. Please refer to the Redis command arguments page for more information.

Return

Array reply: a map as a flattened array as described above.

Examples

COMMAND DOCS SET

108 - COMMAND GETKEYS

Extract keys given a full Redis command

Returns Array reply of keys from a full Redis command.

COMMAND GETKEYS is a helper command to let you find the keys from a full Redis command.

COMMAND provides information on how to find the key names of each command (see firstkey, key specifications, and movablekeys), but in some cases it's not possible to find keys of certain commands and then the entire command must be parsed to discover some / all key names. You can use COMMAND GETKEYS or COMMAND GETKEYSANDFLAGS to discover key names directly from how Redis parses the commands.

Return

Array reply: list of keys from your command.

Examples

COMMAND GETKEYS MSET a b c d e f
COMMAND GETKEYS EVAL "not consulted" 3 key1 key2 key3 arg1 arg2 arg3 argN
COMMAND GETKEYS SORT mylist ALPHA STORE outlist
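For instance, the EVAL invocation above reports exactly the three declared key names (the keys don't need to exist, since only the command's arguments are parsed):

> COMMAND GETKEYS EVAL "not consulted" 3 key1 key2 key3 arg1 arg2 arg3 argN
1) "key1"
2) "key2"
3) "key3"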

109 - COMMAND GETKEYSANDFLAGS

Extract keys and access flags given a full Redis command

Returns Array reply of keys from a full Redis command and their usage flags.

COMMAND GETKEYSANDFLAGS is a helper command to let you find the keys from a full Redis command together with flags indicating what each key is used for.

COMMAND provides information on how to find the key names of each command (see firstkey, key specifications, and movablekeys), but in some cases it's not possible to find keys of certain commands and then the entire command must be parsed to discover some / all key names. You can use COMMAND GETKEYS or COMMAND GETKEYSANDFLAGS to discover key names directly from how Redis parses the commands.

Refer to key specifications for information about the meaning of the key flags.

Return

Array reply: list of keys from your command. Each element of the array is an array containing key name in the first entry, and flags in the second.

Examples

COMMAND GETKEYS MSET a b c d e f
COMMAND GETKEYS EVAL "not consulted" 3 key1 key2 key3 arg1 arg2 arg3 argN
COMMAND GETKEYSANDFLAGS LMPOP 2 mylist1 mylist2 LEFT
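For the LMPOP invocation, here is a sketch of the expected reply shape, pairing each key with its key-specification flags (the exact flags reported may vary by server version):

> COMMAND GETKEYSANDFLAGS LMPOP 2 mylist1 mylist2 LEFT
1) 1) "mylist1"
   2) 1) RW
      2) access
      3) delete
2) 1) "mylist2"
   2) 1) RW
      2) access
      3) delete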

110 - COMMAND HELP

Show helpful text about the different subcommands

The COMMAND HELP command returns a helpful text describing the different subcommands.

Return

Array reply: a list of subcommands and their descriptions

111 - COMMAND INFO

Get array of specific Redis command details, or all when no argument is given.

Returns Array reply of details about multiple Redis commands.

Same result format as COMMAND except you can specify which commands get returned.

If you request details about non-existing commands, their return position will be nil.

Return

Array reply: nested list of command details.

Examples

COMMAND INFO get set eval
COMMAND INFO foo evalsha config bar

112 - COMMAND LIST

Get an array of Redis command names

Return an array of the server's command names.

You can use the optional FILTERBY modifier to apply one of the following filters:

  • MODULE module-name: get the commands that belong to the module specified by module-name.
  • ACLCAT category: get the commands in the ACL category specified by category.
  • PATTERN pattern: get the commands that match the given glob-like pattern.

Return

Array reply: a list of command names.

113 - CONFIG

A container for server configuration commands

This is a container command for runtime configuration commands.

To see the list of available commands you can call CONFIG HELP.

114 - CONFIG GET

Get the values of configuration parameters

The CONFIG GET command is used to read the configuration parameters of a running Redis server. Not all the configuration parameters are supported in Redis 2.4, while Redis 2.6 can read the whole configuration of a server using this command.

The symmetric command used to alter the configuration at run time is CONFIG SET.

CONFIG GET takes multiple arguments, which are glob-style patterns. Any configuration parameter matching any of the patterns is reported as a list of key-value pairs. Example:

redis> config get *max-*-entries* maxmemory
 1) "maxmemory"
 2) "0"
 3) "hash-max-listpack-entries"
 4) "512"
 5) "hash-max-ziplist-entries"
 6) "512"
 7) "set-max-intset-entries"
 8) "512"
 9) "zset-max-listpack-entries"
10) "128"
11) "zset-max-ziplist-entries"
12) "128"

You can obtain a list of all the supported configuration parameters by typing CONFIG GET * in an open redis-cli prompt.

All the supported parameters have the same meaning as the equivalent configuration parameter used in the redis.conf file:

Note that you should look at the redis.conf file relevant to the version you're working with, as configuration options might change between versions. The link above is to the latest development version.

Return

The return type of the command is an Array reply.

115 - CONFIG HELP

Show helpful text about the different subcommands

The CONFIG HELP command returns a helpful text describing the different subcommands.

Return

Array reply: a list of subcommands and their descriptions

116 - CONFIG RESETSTAT

Reset the stats returned by INFO

Resets the statistics reported by Redis using the INFO command.

These are the counters that are reset:

  • Keyspace hits
  • Keyspace misses
  • Number of commands processed
  • Number of connections received
  • Number of expired keys
  • Number of rejected connections
  • Latest fork(2) time
  • The aof_delayed_fsync counter

Return

Simple string reply: always OK.

117 - CONFIG REWRITE

Rewrite the configuration file with the in memory configuration

The CONFIG REWRITE command rewrites the redis.conf file the server was started with, applying the minimal changes needed to make it reflect the configuration currently used by the server, which may be different compared to the original one because of the use of the CONFIG SET command.

The rewrite is performed in a very conservative way:

  • Comments and the overall structure of the original redis.conf are preserved as much as possible.
  • If an option already exists in the old redis.conf file, it will be rewritten at the same position (line number).
  • If an option was not already present, but it is set to its default value, it is not added by the rewrite process.
  • If an option was not already present, but it is set to a non-default value, it is appended at the end of the file.
  • Unused lines are blanked. For instance, if you used to have multiple save directives, but the current configuration has fewer or none because you disabled RDB persistence, all those lines will be blanked.

CONFIG REWRITE is also able to rewrite the configuration file from scratch if the original one no longer exists for some reason. However, if the server was started without a configuration file at all, CONFIG REWRITE will just return an error.

Atomic rewrite process

In order to make sure the redis.conf file is always consistent, that is, on errors or crashes you always end with the old file, or the new one, the rewrite is performed with a single write(2) call that has enough content to be at least as big as the old file. Sometimes additional padding in the form of comments is added in order to make sure the resulting file is big enough, and later the file gets truncated to remove the padding at the end.

Return

Simple string reply: OK when the configuration was rewritten properly. Otherwise an error is returned.

118 - CONFIG SET

Set configuration parameters to the given values

The CONFIG SET command is used in order to reconfigure the server at run time without the need to restart Redis. You can change both trivial parameters and switch from one persistence option to another using this command.

The list of configuration parameters supported by CONFIG SET can be obtained by issuing a CONFIG GET * command, which is the symmetrical command used to obtain information about the configuration of a running Redis instance.

All the configuration parameters set using CONFIG SET are immediately loaded by Redis and will take effect starting with the next command executed.

All the supported parameters have the same meaning as the equivalent configuration parameter used in the redis.conf file.

Note that you should look at the redis.conf file relevant to the version you're working with as configuration options might change between versions. The link above is to the latest development version.

It is possible to switch persistence from RDB snapshotting to append-only file (and the other way around) using the CONFIG SET command. For more information about how to do that please check the persistence page.

In general what you should know is that setting the appendonly parameter to yes will start a background process to save the initial append-only file (obtained from the in-memory data set), and will append all subsequent commands to the append-only file, thus obtaining exactly the same effect as a Redis server that started with AOF turned on from the start.

You can have both AOF enabled and RDB snapshotting at the same time if you want; the two options are not mutually exclusive.
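For example, changing the memory limit at run time and reading it back (a minimal sketch; the maxmemory value here is arbitrary):

redis> CONFIG SET maxmemory 100mb
OK
redis> CONFIG GET maxmemory
1) "maxmemory"
2) "104857600"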

Return

Simple string reply: OK when the configuration was set properly. Otherwise an error is returned.

119 - COPY

Copy a key

This command copies the value stored at the source key to the destination key.

By default, the destination key is created in the logical database used by the connection. The DB option allows specifying an alternative logical database index for the destination key.

The command returns an error when the destination key already exists. The REPLACE option removes the destination key before copying the value to it.

Return

Integer reply, specifically:

  • 1 if source was copied.
  • 0 if source was not copied.

Examples

SET dolly "sheep"
COPY dolly clone
GET clone
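Assuming the clone key does not already exist, the expected replies are:

redis> SET dolly "sheep"
OK
redis> COPY dolly clone
(integer) 1
redis> GET clone
"sheep"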

120 - DBSIZE

Return the number of keys in the selected database

Return the number of keys in the currently-selected database.

Return

Integer reply

121 - DEBUG

A container for debugging commands

The DEBUG command is an internal command. It is meant to be used for developing and testing Redis.

122 - DECR

Decrement the integer value of a key by one

Decrements the number stored at key by one. If the key does not exist, it is set to 0 before performing the operation. An error is returned if the key contains a value of the wrong type or contains a string that can not be represented as integer. This operation is limited to 64 bit signed integers.

See INCR for extra information on increment/decrement operations.

Return

Integer reply: the value of key after the decrement

Examples

SET mykey "10" DECR mykey SET mykey "234293482390480948029348230948" DECR mykey

123 - DECRBY

Decrement the integer value of a key by the given number

Decrements the number stored at key by decrement. If the key does not exist, it is set to 0 before performing the operation. An error is returned if the key contains a value of the wrong type or contains a string that can not be represented as integer. This operation is limited to 64 bit signed integers.

See INCR for extra information on increment/decrement operations.

Return

Integer reply: the value of key after the decrement

Examples

SET mykey "10" DECRBY mykey 3

124 - DEL

Delete a key

Removes the specified keys. A key is ignored if it does not exist.

Return

Integer reply: The number of keys that were removed.

Examples

SET key1 "Hello" SET key2 "World" DEL key1 key2 key3

125 - DISCARD

Discard all commands issued after MULTI

Flushes all previously queued commands in a transaction and restores the connection state to normal.

If WATCH was used, DISCARD unwatches all keys watched by the connection.

Return

Simple string reply: always OK.

126 - DUMP

Return a serialized version of the value stored at the specified key.

Serialize the value stored at key in a Redis-specific format and return it to the user. The returned value can be synthesized back into a Redis key using the RESTORE command.

The serialization format is opaque and non-standard, however it has a few semantic characteristics:

  • It contains a 64-bit checksum that is used to make sure errors will be detected. The RESTORE command makes sure to check the checksum before synthesizing a key using the serialized value.
  • Values are encoded in the same format used by RDB.
  • An RDB version is encoded inside the serialized value, so that different Redis versions with incompatible RDB formats will refuse to process the serialized value.

The serialized value does NOT contain expire information. In order to capture the time to live of the current value the PTTL command should be used.

If key does not exist a nil bulk reply is returned.

Return

Bulk string reply: the serialized value.

Examples

SET mykey 10
DUMP mykey

127 - ECHO

Echo the given string

Returns message.

Return

Bulk string reply

Examples

ECHO "Hello World!"

128 - EVAL

Execute a Lua script server side

Invoke the execution of a server-side Lua script.

The first argument is the script's source code. Scripts are written in Lua and executed by the embedded Lua 5.1 interpreter in Redis.

The second argument is the number of input key name arguments, followed by all the keys accessed by the script. These names of input keys are available to the script as the KEYS global runtime variable. Any additional input arguments should not represent names of keys.

Important: to ensure the correct execution of scripts, both in standalone and clustered deployments, all names of keys that a script accesses must be explicitly provided as input key arguments. The script should only access keys whose names are given as input arguments. Scripts should never access keys with programmatically-generated names or based on the contents of data structures stored in the database.

Please refer to the Redis Programmability and Introduction to Eval Scripts for more information about Lua scripts.

Examples

The following example will run a script that returns the first argument that it gets.

> EVAL "return ARGV[1]" 0 hello
"hello"

129 - EVAL_RO

Execute a read-only Lua script server side

This is a read-only variant of the EVAL command that cannot execute commands that modify data.

Unlike EVAL, scripts executed with this command can always be killed and never affect the replication stream. Because the script can only read data, this command can always be executed on a master or a replica.

For more information about EVAL scripts please refer to Introduction to Eval Scripts.

Examples

> SET mykey "Hello"
OK

> EVAL_RO "return redis.call('GET', KEYS[1])" 1 mykey
"Hello"

> EVAL_RO "return redis.call('DEL', KEYS[1])" 1 mykey
(error) ERR Error running script (call to b0d697da25b13e49157b2c214a4033546aba2104): @user_script:1: @user_script: 1: Write commands are not allowed from read-only scripts.

130 - EVALSHA

Execute a Lua script server side

Evaluate a script from the server's cache by its SHA1 digest.

The server caches scripts by using the SCRIPT LOAD command. The command is otherwise identical to EVAL.

Please refer to the Redis Programmability and Introduction to Eval Scripts for more information about Lua scripts.

131 - EVALSHA_RO

Execute a read-only Lua script server side

This is a read-only variant of the EVALSHA command that cannot execute commands that modify data.

Unlike EVALSHA, scripts executed with this command can always be killed and never affect the replication stream. Because it can only read data, this command can always be executed on a master or a replica.

For more information about EVALSHA scripts please refer to Introduction to Eval Scripts.

132 - EXEC

Execute all commands issued after MULTI

Executes all previously queued commands in a transaction and restores the connection state to normal.

When using WATCH, EXEC will execute commands only if the watched keys were not modified, allowing for a check-and-set mechanism.

Return

Array reply: each element being the reply to each of the commands in the atomic transaction.

When using WATCH, EXEC can return a Null reply if the execution was aborted.

133 - EXISTS

Determine if a key exists

Returns whether key exists.

The user should be aware that if the same existing key is mentioned in the arguments multiple times, it will be counted multiple times. So if somekey exists, EXISTS somekey somekey will return 2.

Return

Integer reply, specifically the number of keys that exist from those specified as arguments.

Examples

SET key1 "Hello" EXISTS key1 EXISTS nosuchkey SET key2 "World" EXISTS key1 key2 nosuchkey

134 - EXPIRE

Set a key's time to live in seconds

Set a timeout on key. After the timeout has expired, the key will automatically be deleted. A key with an associated timeout is often said to be volatile in Redis terminology.

The timeout will only be cleared by commands that delete or overwrite the contents of the key, including DEL, SET, GETSET and all the *STORE commands. This means that all the operations that conceptually alter the value stored at the key without replacing it with a new one will leave the timeout untouched. For instance, incrementing the value of a key with INCR, pushing a new value into a list with LPUSH, or altering the field value of a hash with HSET are all operations that will leave the timeout untouched.

The timeout can also be cleared, turning the key back into a persistent key, using the PERSIST command.

If a key is renamed with RENAME, the associated time to live is transferred to the new key name.

If a key is overwritten by RENAME, like in the case of an existing key Key_A that is overwritten by a call like RENAME Key_B Key_A, it does not matter if the original Key_A had a timeout associated or not, the new key Key_A will inherit all the characteristics of Key_B.

Note that calling EXPIRE/PEXPIRE with a non-positive timeout or EXPIREAT/PEXPIREAT with a time in the past will result in the key being deleted rather than expired (accordingly, the emitted key event will be del, not expired).

Options

The EXPIRE command supports a set of options:

  • NX -- Set expiry only when the key has no expiry
  • XX -- Set expiry only when the key has an existing expiry
  • GT -- Set expiry only when the new expiry is greater than current one
  • LT -- Set expiry only when the new expiry is less than current one

A non-volatile key is treated as an infinite TTL for the purpose of GT and LT. The GT, LT and NX options are mutually exclusive.

Refreshing expires

It is possible to call EXPIRE using as argument a key that already has an existing expire set. In this case the time to live of a key is updated to the new value. There are many useful applications for this, an example is documented in the Navigation session pattern section below.

Differences in Redis prior to 2.1.3

In Redis versions prior to 2.1.3, altering a key with an expire set using a command that altered its value had the effect of removing the key entirely. These semantics were needed because of limitations in the replication layer that are now fixed.

EXPIRE would return 0 and not alter the timeout for a key with a timeout set.

Return

Integer reply, specifically:

  • 1 if the timeout was set.
  • 0 if the timeout was not set. e.g. key doesn't exist, or operation skipped due to the provided arguments.

Examples

SET mykey "Hello" EXPIRE mykey 10 TTL mykey SET mykey "Hello World" TTL mykey EXPIRE mykey 10 XX TTL mykey EXPIRE mykey 10 NX TTL mykey

Pattern: Navigation session

Imagine you have a web service and you are interested in the latest N pages recently visited by your users, such that each adjacent page view was performed no more than 60 seconds after the previous one. Conceptually, you may consider this set of page views a navigation session of your user, which may contain interesting information about what kind of products he or she is currently looking for, so that you can recommend related products.

You can easily model this pattern in Redis using the following strategy: every time the user does a page view you call the following commands:

MULTI
RPUSH pageviews.user:<userid> http://.....
EXPIRE pageviews.user:<userid> 60
EXEC

If the user is idle for more than 60 seconds, the key will be deleted, and only subsequent page views with less than 60 seconds of difference will be recorded.

This pattern is easily modified to use counters with INCR instead of lists with RPUSH.

Appendix: Redis expires

Keys with an expire

Normally Redis keys are created without an associated time to live. The key will simply live forever, unless it is removed by the user in an explicit way, for instance using the DEL command.

The EXPIRE family of commands is able to associate an expire to a given key, at the cost of some additional memory used by the key. When a key has an expire set, Redis will make sure to remove the key when the specified amount of time elapsed.

The key time to live can be updated or entirely removed using the EXPIRE and PERSIST command (or other strictly related commands).

Expire accuracy

In Redis 2.4 the expire might not be pinpoint accurate, and it could be between zero and one second out.

Since Redis 2.6 the expire error is from 0 to 1 milliseconds.

Expires and persistence

Key expiry information is stored as an absolute Unix timestamp (in milliseconds for Redis 2.6 or greater). This means that the time keeps flowing even when the Redis instance is not active.

For expires to work well, the computer time must be stable. If you move an RDB file between two computers with a big desync in their clocks, funny things may happen (like all the keys being expired at loading time).

Even running instances will always check the computer clock, so for instance if you set a key with a time to live of 1000 seconds, and then set your computer time 2000 seconds in the future, the key will be expired immediately, instead of lasting for 1000 seconds.

How Redis expires keys

Redis keys are expired in two ways: a passive way, and an active way.

A key is passively expired simply when some client tries to access it, and the key is found to be timed out.

Of course this is not enough as there are expired keys that will never be accessed again. These keys should be expired anyway, so periodically Redis tests a few keys at random among keys with an expire set. All the keys that are already expired are deleted from the keyspace.

Specifically this is what Redis does 10 times per second:

  1. Test 20 random keys from the set of keys with an associated expire.
  2. Delete all the keys found expired.
  3. If more than 25% of keys were expired, start again from step 1.

This is a trivial probabilistic algorithm: the basic assumption is that our sample is representative of the whole key space, and we continue to expire until the percentage of keys that are likely to be expired is under 25%.

This means that at any given moment the maximum number of already-expired keys still using memory is at most equal to the maximum number of write operations per second divided by four.

In order to obtain correct behavior without sacrificing consistency, when a key expires, a DEL operation is synthesized in both the AOF file and the replication stream towards all the attached replica nodes. This way the expiration process is centralized in the master instance, and there is no chance of consistency errors.

However while the replicas connected to a master will not expire keys independently (but will wait for the DEL coming from the master), they'll still take the full state of the expires existing in the dataset, so when a replica is elected to master it will be able to expire the keys independently, fully acting as a master.

135 - EXPIREAT

Set the expiration for a key as a UNIX timestamp

EXPIREAT has the same effect and semantic as EXPIRE, but instead of specifying the number of seconds representing the TTL (time to live), it takes an absolute Unix timestamp (seconds since January 1, 1970). A timestamp in the past will delete the key immediately.

For the specific semantics of the command, please refer to the documentation of EXPIRE.

Background

EXPIREAT was introduced in order to convert relative timeouts to absolute timeouts for the AOF persistence mode. Of course, it can be used directly to specify that a given key should expire at a given time in the future.

Options

The EXPIREAT command supports a set of options:

  • NX -- Set expiry only when the key has no expiry
  • XX -- Set expiry only when the key has an existing expiry
  • GT -- Set expiry only when the new expiry is greater than current one
  • LT -- Set expiry only when the new expiry is less than current one

A non-volatile key is treated as an infinite TTL for the purpose of GT and LT. The GT, LT and NX options are mutually exclusive.

Return

Integer reply, specifically:

  • 1 if the timeout was set.
  • 0 if the timeout was not set. e.g. key doesn't exist, or operation skipped due to the provided arguments.

Examples

SET mykey "Hello" EXISTS mykey EXPIREAT mykey 1293840000 EXISTS mykey

136 - EXPIRETIME

Get the expiration Unix timestamp for a key

Returns the absolute Unix timestamp (since January 1, 1970) in seconds at which the given key will expire.

See also the PEXPIRETIME command which returns the same information with milliseconds resolution.

Return

Integer reply: Expiration Unix timestamp in seconds, or a negative value in order to signal an error (see the description below).

  • The command returns -1 if the key exists but has no associated expiration time.
  • The command returns -2 if the key does not exist.

Examples

SET mykey "Hello" EXPIREAT mykey 33177117420 EXPIRETIME mykey

137 - FAILOVER

Start a coordinated failover between this server and one of its replicas.

This command will start a coordinated failover between the currently-connected-to master and one of its replicas. The failover is not synchronous, instead a background task will handle coordinating the failover. It is designed to limit data loss and unavailability of the cluster during the failover. This command is analogous to the CLUSTER FAILOVER command for non-clustered Redis and is similar to the failover support provided by sentinel.

The specific details of the default failover flow are as follows:

  1. The master will internally start a CLIENT PAUSE WRITE, which will pause incoming writes and prevent the accumulation of new data in the replication stream.
  2. The master will monitor its replicas, waiting for a replica to indicate that it has fully consumed the replication stream. If the master has multiple replicas, it will only wait for the first replica to catch up.
  3. The master will then demote itself to a replica. This is done to prevent any dual master scenarios. NOTE: The master will not discard its data, so it will be able to rollback if the replica rejects the failover request in the next step.
  4. The previous master will send a special PSYNC request to the target replica, PSYNC FAILOVER, instructing the target replica to become a master.
  5. Once the previous master receives acknowledgement that the PSYNC FAILOVER was accepted, it will unpause its clients. If the PSYNC request is rejected, the master will abort the failover and return to normal.

The field master_failover_state in INFO replication can be used to track the current state of the failover, which has the following values:

  • no-failover: There is no ongoing coordinated failover.
  • waiting-for-sync: The master is waiting for the replica to catch up to its replication offset.
  • failover-in-progress: The master has demoted itself, and is attempting to hand off ownership to a target replica.

If the previous master had additional replicas attached to it, they will continue replicating from it as chained replicas. You will need to manually execute a REPLICAOF on these replicas to start replicating directly from the new master.

Optional arguments

The following optional arguments exist to modify the behavior of the failover flow:

  • TIMEOUT milliseconds -- This option allows specifying a maximum time a master will wait in the waiting-for-sync state before aborting the failover attempt and rolling back. This is intended to set an upper bound on the write outage the Redis cluster can experience. Failovers typically happen in less than a second, but could take longer if there is a large amount of write traffic or the replica is already behind in consuming the replication stream. If this value is not specified, the timeout can be considered to be "infinite".

  • TO HOST PORT -- This option allows designating a specific replica, by its host and port, to failover to. The master will wait specifically for this replica to catch up to its replication offset, and then failover to it.

  • FORCE -- If both the TIMEOUT and TO options are set, the force flag can also be used to designate that once the timeout has elapsed, the master should failover to the target replica instead of rolling back. This can be used for a best-effort attempt at a failover without data loss, while limiting the write outage.

NOTE: The master will always rollback if the PSYNC FAILOVER request is rejected by the target replica.

Failover abort

The failover command is intended to be safe from data loss and corruption, but it can encounter scenarios it cannot automatically remediate and may get stuck. For this purpose, the FAILOVER ABORT command exists, which will abort an ongoing failover and return the master to its normal state. The command has no side effects if issued in the waiting-for-sync state, but can introduce multi-master scenarios in the failover-in-progress state. If a multi-master scenario is encountered, you will need to manually identify which master has the latest data, designate it as the master, and have the other nodes replicate from it.

NOTE: REPLICAOF is disabled while a failover is in progress, this is to prevent unintended interactions with the failover that might cause data loss.

Return

Simple string reply: OK if the command was accepted and a coordinated failover is in progress. An error if the operation cannot be executed.

138 - FCALL

Invoke a function

Invoke a function.

Functions are loaded to the server with the FUNCTION LOAD command. The first argument is the name of a loaded function.

The second argument is the number of input key name arguments, followed by all the keys accessed by the function. In Lua, these names of input keys are available to the function as a table that is the callback's first argument.

Important: To ensure the correct execution of functions, both in standalone and clustered deployments, all names of keys that a function accesses must be explicitly provided as input key arguments. The function should only access keys whose names are given as input arguments. Functions should never access keys with programmatically-generated names or based on the contents of data structures stored in the database.

Any additional input argument should not represent names of keys. These are regular arguments and are passed in a Lua table as the callback's second argument.

For more information please refer to the Redis Programmability and Introduction to Redis Functions pages.

Examples

The following example will create a library named mylib with a single function, myfunc, that returns the first argument it gets.

redis> FUNCTION LOAD "#!lua name=mylib \n redis.register_function('myfunc', function(keys, args) return args[1] end)"
"mylib"
redis> FCALL myfunc 0 hello
"hello"

139 - FCALL_RO

Invoke a read-only function

This is a read-only variant of the FCALL command that cannot execute commands that modify data.

For more information please refer to Introduction to Redis Functions.

140 - FLUSHALL

Remove all keys from all databases

Delete all the keys of all the existing databases, not just the currently selected one. This command never fails.

By default, FLUSHALL will synchronously flush all the databases. Starting with Redis 6.2, setting the lazyfree-lazy-user-flush configuration directive to "yes" changes the default flush mode to asynchronous.

It is possible to use one of the following modifiers to dictate the flushing mode explicitly:

  • ASYNC: flushes the databases asynchronously
  • SYNC: flushes the databases synchronously

Note: an asynchronous FLUSHALL command only deletes keys that were present at the time the command was invoked. Keys created during an asynchronous flush will be unaffected.
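A minimal usage sketch (either modifier simply replies OK):

redis> FLUSHALL ASYNC
OK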

Return

Simple string reply

Behavior change history

  • >= 6.2.0: Default flush behavior now configurable by the lazyfree-lazy-user-flush configuration directive.

141 - FLUSHDB

Remove all keys from the current database

Delete all the keys of the currently selected DB. This command never fails.

By default, FLUSHDB will synchronously flush all keys from the database. Starting with Redis 6.2, setting the lazyfree-lazy-user-flush configuration directive to "yes" changes the default flush mode to asynchronous.

It is possible to use one of the following modifiers to dictate the flushing mode explicitly:

  • ASYNC: flushes the database asynchronously
  • SYNC: flushes the database synchronously

Note: an asynchronous FLUSHDB command only deletes keys that were present at the time the command was invoked. Keys created during an asynchronous flush will be unaffected.

Return

Simple string reply

Behavior change history

  • >= 6.2.0: Default flush behavior now configurable by the lazyfree-lazy-user-flush configuration directive.

142 - FT._LIST

Returns a list of all existing indexes

Returns a list of all existing indexes.

!!! note "Temporary command" The prefix _ in the command indicates, this is a temporary command.

In the future, a [`SCAN`](/commands/scan) type of command will be added, for use when a database
contains a large number of indices.

Return

Array reply with index names.

Examples

FT._LIST
1) "idx"
2) "movies"
3) "imdb"

143 - FT.AGGREGATE

Runs a search query on an index and performs aggregate transformations on the results

Complexity

Non-deterministic. Depends on the query and aggregations performed, but it is usually linear to the number of results returned.


Runs a search query on an index, and performs aggregate transformations on the results, extracting statistics etc from them. See the full documentation on aggregations for further details.

Parameters

  • index_name: The index the query is executed against.

  • query: The base filtering query that retrieves the documents. It follows the exact same syntax as the search query, including filters, unions, not, optional, etc.

  • LOAD {nargs} {identifier} AS {property} …: Load document attributes from the source document. identifier is either an attribute name (for hashes and JSON) or a JSON Path expression (for JSON). property is the optional name used in the result; if it is not provided, the identifier is used. As a general rule of thumb, LOAD should be avoided. If * is used as nargs, all attributes in a document are loaded. Attributes needed for aggregations should be stored as SORTABLE, where they are available to the aggregation pipeline with very low latency. LOAD hurts the performance of aggregate queries considerably, since every processed record needs to execute the equivalent of HMGET against a Redis key, which, when executed over millions of keys, amounts to very high processing times.

  • GROUPBY {nargs} {property}: Group the results in the pipeline based on one or more properties. Each group should have at least one reducer (See below), a function that handles the group entries, either counting them, or performing multiple aggregate operations (see below).

    • REDUCE {func} {nargs} {arg} … [AS {name}]: Reduce the matching results in each group into a single record, using a reduction function. For example COUNT will count the number of records in the group. See the Reducers section below for more details on available reducers.

      The reducers can have their own property names using the `AS {name}` optional argument. If a name is not given, the resulting name will be the name of the reduce function and the group properties. For example, if a name is not given to COUNT_DISTINCT by property `@foo`, the resulting name will be `count_distinct(@foo)`.
      
  • SORTBY {nargs} {property} {ASC|DESC} [MAX {num}]: Sort the pipeline up until the point of SORTBY, using a list of properties. By default, sorting is ascending, but ASC or DESC can be added for each property. nargs is the number of sorting parameters, including ASC and DESC. For example: SORTBY 4 @foo ASC @bar DESC.

    Attributes needed for SORTBY should be stored as SORTABLE in order to be available with very low latency.

    MAX is used to optimize sorting, by sorting only for the n-largest elements. Although it is not connected to LIMIT, you usually need just SORTBY … MAX for common queries.

  • APPLY {expr} AS {name}: Apply a 1-to-1 transformation on one or more properties, and either store the result as a new property down the pipeline, or replace any property using this transformation. expr is an expression that can be used to perform arithmetic operations on numeric properties, or functions that can be applied on properties depending on their types (see below), or any combination thereof. For example: APPLY "sqrt(@foo)/log(@bar) + 5" AS baz will evaluate this expression dynamically for each record in the pipeline and store the result as a new property called baz, that can be referenced by further APPLY / SORTBY / GROUPBY / REDUCE operations down the pipeline.

  • LIMIT {offset} {num}. Limit the number of results to return just num results starting at index offset (zero-based). As mentioned above, it is much more efficient to use SORTBY … MAX if you are interested in just limiting the output of a sort operation.

    However, limit can be used to limit results without sorting, or for paging the n-largest results as determined by SORTBY MAX. For example, getting results 50-100 of the top 100 results is most efficiently expressed as SORTBY 1 @foo MAX 100 LIMIT 50 50. Removing the MAX from SORTBY will result in the pipeline sorting all the records and then paging over results 50-100.

  • FILTER {expr}. Filter the results using predicate expressions relating to values in each result. They are applied post-query and relate to the current state of the pipeline.

  • TIMEOUT {milliseconds}: If set, we will override the timeout parameter of the module.

  • PARAMS {nargs} {name} {value}. Define one or more value parameters. Each parameter has a name and a value. Parameters can be referenced in the query by a $, followed by the parameter name, e.g., $user, and each such reference in the search query to a parameter name is substituted by the corresponding parameter value. For example, with parameter definition PARAMS 4 lon 29.69465 lat 34.95126, the expression @loc:[$lon $lat 10 km] would be evaluated to @loc:[29.69465 34.95126 10 km]. Parameters cannot be referenced in the query string where concrete values are not allowed, such as in field names, e.g., @loc. To use PARAMS, DIALECT must be set to 2.

  • DIALECT {dialect_version}. Choose the dialect version to execute the query under. If not specified, the query will execute under the default dialect version set during module initial loading or via FT.CONFIG SET command.

Return

Array reply, where each row is an Array reply and represents a single aggregate result. The Integer reply at position 1 does not represent a valid value.

Examples

Finding visits to the page "about.html", grouping them by the day of the visit, counting the number of visits, and sorting them by day:

FT.AGGREGATE idx "@url:\"about.html\""
    APPLY "day(@timestamp)" AS day
    GROUPBY 2 @day @country
      REDUCE count 0 AS num_visits
    SORTBY 4 @day

Finding the most books ever published in a single year:

FT.AGGREGATE books-idx *
    GROUPBY 1 @published_year
      REDUCE COUNT 0 AS num_published
    GROUPBY 0
      REDUCE MAX 1 @num_published AS max_books_published_per_year

!!! tip "Reducing all results" The last example used GROUPBY 0. Use GROUPBY 0 to apply a REDUCE function over all results from the last step of an aggregation pipeline -- this works on both the initial query and subsequent GROUPBY operations.

Searching for libraries within 10 kilometers of the longitude -73.982254 and latitude 40.753181 then annotating them with the distance between their location and those coordinates:

 FT.AGGREGATE libraries-idx "@location:[-73.982254 40.753181 10 km]"
    LOAD 1 @location
    APPLY "geodistance(@location, -73.982254, 40.753181)"

Here, we needed to use LOAD to pre-load the @location attribute because it is a GEO attribute.

!!! tip "More examples" For more details on aggregations and detailed examples of aggregation queries, see aggregations.

Here we are counting GitHub events by user (actor), to produce the most active users:

127.0.0.1:6379> FT.AGGREGATE gh "*" GROUPBY 1 @actor REDUCE COUNT 0 AS num SORTBY 2 @num DESC MAX 10
 1) (integer) 284784
 2) 1) "actor"
    2) "lombiqbot"
    3) "num"
    4) "22197"
 3) 1) "actor"
    2) "codepipeline-test"
    3) "num"
    4) "17746"
 4) 1) "actor"
    2) "direwolf-github"
    3) "num"
    4) "10683"
 5) 1) "actor"
    2) "ogate"
    3) "num"
    4) "6449"
 6) 1) "actor"
    2) "openlocalizationtest"
    3) "num"
    4) "4759"
 7) 1) "actor"
    2) "digimatic"
    3) "num"
    4) "3809"
 8) 1) "actor"
    2) "gugod"
    3) "num"
    4) "3512"
 9) 1) "actor"
    2) "xdzou"
    3) "num"
    4) "3216"
10) 1) "actor"
    2) "opstest"
    3) "num"
    4) "2863"
11) 1) "actor"
    2) "jikker"
    3) "num"
    4) "2794"
(0.59s)

144 - FT.ALIASADD

Adds an alias to the index

Add an alias to an index. This allows an administrator to transparently redirect application queries to alternative indexes.

Indexes can have more than one alias, though an alias cannot refer to another alias.

Return

Simple string reply: OK if executed correctly, or an error reply otherwise.

Examples

redis> FT.ALIASADD alias idx
OK
redis> FT.ALIASADD alias idx
(error) Alias already exists

145 - FT.ALIASDEL

Deletes an alias from the index

Remove an alias from an index.

Return

Simple string reply: OK if executed correctly, or an error reply otherwise.

Examples

redis> FT.ALIASDEL alias
OK

146 - FT.ALIASUPDATE

Adds or updates an alias to the index

Add an alias to an index. If the alias is already associated with another index, FT.ALIASUPDATE will remove the alias association with the previous index.

Return

Simple string reply: OK if executed correctly, or an error reply otherwise.

Examples

redis> FT.ALIASUPDATE alias idx
OK

147 - FT.ALTER

Adds a new field to the index

FT.ALTER SCHEMA ADD

Format

FT.ALTER {index} SCHEMA ADD {attribute} {options} ...

Description

Adds a new attribute to the index.

Adding an attribute to the index will cause any future document updates to use the new attribute when indexing and reindexing existing documents.

!!! note Depending on how the index was created, you may be limited by the number of additional text attributes which can be added to an existing index. If the current index contains fewer than 32 text attributes, then SCHEMA ADD will only be able to add attributes up to 32 total attributes (meaning that the index will only ever be able to contain 32 total text attributes). If you wish for the index to contain more than 32 attributes, create it with the MAXTEXTFIELDS option.

Parameters

  • index: the index name.
  • attribute: the attribute name.
  • options: the attribute options - refer to FT.CREATE for more information.

Return

Simple string reply: OK if executed correctly, or an error reply otherwise.

Examples

redis> FT.ALTER idx SCHEMA ADD id2 NUMERIC SORTABLE
OK

148 - FT.CONFIG GET

Retrieves runtime configuration options

Retrieves configuration options.

Parameters

  • option: the name of the configuration option, or '*' for all.

Return

Array reply of the configuration name and value.

Examples

redis> FT.CONFIG GET TIMEOUT
1) 1) TIMEOUT
   2) 42
redis> FT.CONFIG GET *
 1) 1) EXTLOAD
    2) (nil)
 2) 1) SAFEMODE
    2) true
 3) 1) CONCURRENT_WRITE_MODE
    2) false
 4) 1) NOGC
    2) false
 5) 1) MINPREFIX
    2) 2
 6) 1) FORKGC_SLEEP_BEFORE_EXIT
    2) 0
 7) 1) MAXDOCTABLESIZE
    2) 1000000
 8) 1) MAXSEARCHRESULTS
    2) 1000000
 9) 1) MAXAGGREGATERESULTS
    2) unlimited
10) 1) MAXEXPANSIONS
    2) 200
11) 1) MAXPREFIXEXPANSIONS
    2) 200
12) 1) TIMEOUT
    2) 42
13) 1) INDEX_THREADS
    2) 8
14) 1) SEARCH_THREADS
    2) 20
15) 1) FRISOINI
    2) (nil)
16) 1) ON_TIMEOUT
    2) return
17) 1) GCSCANSIZE
    2) 100
18) 1) MIN_PHONETIC_TERM_LEN
    2) 3
19) 1) GC_POLICY
    2) fork
20) 1) FORK_GC_RUN_INTERVAL
    2) 30
21) 1) FORK_GC_CLEAN_THRESHOLD
    2) 100
22) 1) FORK_GC_RETRY_INTERVAL
    2) 5
23) 1) FORK_GC_CLEAN_NUMERIC_EMPTY_NODES
    2) true
24) 1) _FORK_GC_CLEAN_NUMERIC_EMPTY_NODES
    2) true
25) 1) _MAX_RESULTS_TO_UNSORTED_MODE
    2) 1000
26) 1) UNION_ITERATOR_HEAP
    2) 20
27) 1) CURSOR_MAX_IDLE
    2) 300000
28) 1) NO_MEM_POOLS
    2) false
29) 1) PARTIAL_INDEXED_DOCS
    2) false
30) 1) UPGRADE_INDEX
    2) Upgrade config for upgrading
31) 1) _NUMERIC_COMPRESS
    2) false
32) 1) _FREE_RESOURCE_ON_THREAD
    2) true
33) 1) _PRINT_PROFILE_CLOCK
    2) true
34) 1) RAW_DOCID_ENCODING
    2) false
35) 1) _NUMERIC_RANGES_PARENTS
    2) 0

149 - FT.CONFIG HELP

Help description of runtime configuration options

Describes configuration options.

Parameters

  • option: the name of the configuration option, or '*' for all.

Return

Array reply of the configuration name and description and value.

Examples

redis> FT.CONFIG HELP TIMEOUT
1) 1) TIMEOUT
   2) Description
   3) Query (search) timeout
   4) Value
   5) "42"

150 - FT.CONFIG SET

Sets runtime configuration options

Sets runtime configuration options.

Parameters

  • option: the name of the configuration option.
  • value: a value for the configuration option.

Return

Simple string reply: OK if executed correctly, or an error reply otherwise.

Examples

redis> FT.CONFIG SET TIMEOUT 42
OK

151 - FT.CREATE

Creates an index with the given spec

Creates an index with the given spec.

!!! warning "Note on attribute number limits" RediSearch supports up to 1024 attributes per schema, out of which at most 128 can be TEXT attributes. On 32 bit builds, at most 64 attributes can be TEXT attributes. Note that the more attributes you have, the larger your index will be, as each additional 8 attributes require one extra byte per index record to encode. You can always use the NOFIELDS option and not encode attribute information into the index, for saving space, if you do not need filtering by text attributes. This will still allow filtering by numeric and geo attributes.

!!! info "Note on running in clustered databases" When having several indices in a clustered database, you need to make sure the documents you want to index reside on the same shard as the index. You can achieve this by having your documents tagged by the index name.

HSET doc:1{idx} ...
FT.CREATE idx ... PREFIX 1 doc: ...

When running RediSearch in a clustered database, you can span the index across shards using RSCoordinator. In this case the above does not apply.

Parameters

  • index: the index name to create. If it exists, the old spec will be overwritten.

  • ON {data_type} currently supports HASH (default) and JSON.

!!! info "ON JSON" To index JSON, you must have the RedisJSON module installed.

  • PREFIX {count} {prefix} tells the index which keys it should index. You can add several prefixes to the index. Since the argument is optional, the default is * (all keys).

  • FILTER {filter} is a filter expression with the full RediSearch aggregation expression language. It is possible to use @__key to access the key that was just added/changed. A field can be used to set field name by passing 'FILTER @indexName=="myindexname"'

  • LANGUAGE {default_lang}: If set, indicates the default language for documents in the index. Defaults to English.

  • LANGUAGE_FIELD {lang_attribute}: If set indicates the document attribute that should be used as the document language.

!!! info "Supported languages" A stemmer is used for the supplied language during indexing. If an unsupported language is sent, the command returns an error. The supported languages are:

Arabic, Basque, Catalan, Danish, Dutch, English, Finnish, French, German, Greek, Hungarian, Indonesian, Irish, Italian, Lithuanian, Nepali, Norwegian, Portuguese, Romanian, Russian, Spanish, Swedish, Tamil, Turkish, Chinese

When adding Chinese-language documents, LANGUAGE chinese should be set in order for the indexer to properly tokenize the terms. If the default language is used, then search terms will be extracted based on punctuation characters and whitespace. The Chinese language tokenizer makes use of a segmentation algorithm (via Friso), which segments text and checks it against a predefined dictionary. See Stemming for more information.

  • SCORE {default_score}: If set indicates the default score for documents in the index. Default score is 1.0.

  • SCORE_FIELD {score_attribute}: If set indicates the document attribute that should be used as the document's rank based on the user's ranking. Ranking must be between 0.0 and 1.0. If not set the default score is 1.

  • PAYLOAD_FIELD {payload_attribute}: If set indicates the document attribute that should be used as a binary safe payload string to the document, that can be evaluated at query time by a custom scoring function, or retrieved to the client.

  • MAXTEXTFIELDS: For efficiency, RediSearch encodes indexes differently if they are created with less than 32 text attributes. This option forces RediSearch to encode indexes as if there were more than 32 text attributes, which allows you to add additional attributes (beyond 32) using FT.ALTER.

  • NOOFFSETS: If set, we do not store term offsets for documents (saves memory, does not allow exact searches or highlighting). Implies NOHL.

  • TEMPORARY: Create a lightweight temporary index which will expire after the specified period of inactivity. The internal idle timer is reset whenever the index is searched or added to. Because such indexes are lightweight, you can create thousands of such indexes without negative performance implications and therefore you should consider using SKIPINITIALSCAN to avoid costly scanning.

!!! warning "Note about deleting a temporary index" When dropped, a temporary index does not delete the hashes as they may have been indexed in several indexes. Adding the DD flag will delete the hashes as well.

  • NOHL: Conserves storage space and memory by disabling highlighting support. If set, we do not store corresponding byte offsets for term positions. NOHL is also implied by NOOFFSETS.

  • NOFIELDS: If set, we do not store attribute bits for each term. Saves memory, does not allow filtering by specific attributes.

  • NOFREQS: If set, we avoid saving the term frequencies in the index. This saves memory but does not allow sorting based on the frequencies of a given term within the document.

  • STOPWORDS: If set, we set the index with a custom stopword list, to be ignored during indexing and search time. {num} is the number of stopwords, followed by a list of stopword arguments exactly the length of {num}.

    If not set, we take the default list of stopwords.

    If {num} is set to 0, the index will not have stopwords.

  • SKIPINITIALSCAN: If set, we do not scan and index.

  • SCHEMA {identifier} AS {attribute} {attribute type} {options...}: After the SCHEMA keyword, we declare which fields to index:

    • {identifier}

      For hashes, the identifier is a field name within the hash. For JSON, the identifier is a JSON Path expression.

    • AS {attribute}

      This optional parameter defines the attribute associated to the identifier. For example, you can use this feature to alias a complex JSONPath expression with a more memorable (and easier to type) name.

    Field Types

    • TEXT

      Allows full-text search queries against the value in this attribute.

    • TAG

      Allows exact-match queries, such as categories or primary keys, against the value in this attribute. For more information, see Tag Fields.

    • NUMERIC

      Allows numeric range queries against the value in this attribute. See query syntax docs for details on how to use numeric ranges.

    • GEO

      Allows geographic range queries against the value in this attribute. The value of the attribute must be a string containing a longitude (first) and latitude separated by a comma.

    • VECTOR

      Allows vector similarity queries against the value in this attribute. For more information, see Vector Fields.

    Field Options

    • SORTABLE

      Numeric, tag (not supported with JSON) or text attributes can have the optional SORTABLE argument. When the user sorts the results by the value of such an attribute, the results are available with very low latency. (Note that this adds memory overhead, so consider not declaring it on large text attributes.)

    • UNF

      By default, SORTABLE applies a normalization to the indexed value (characters set to lowercase, removal of diacritics). When using UNF (un-normalized form) it is possible to disable the normalization and keep the original form of the value.

    • NOSTEM

      Text attributes can have the NOSTEM argument which will disable stemming when indexing its values. This may be ideal for things like proper names.

    • NOINDEX

      Attributes can have the NOINDEX option, which means they will not be indexed. This is useful in conjunction with SORTABLE, to create attributes whose update using PARTIAL will not cause full reindexing of the document. If an attribute has NOINDEX and doesn't have SORTABLE, it will just be ignored by the index.

    • PHONETIC {matcher}

      Declaring a text attribute as PHONETIC will perform phonetic matching on it in searches by default. The obligatory {matcher} argument specifies the phonetic algorithm and language used. The following matchers are supported:

      • dm:en - Double Metaphone for English
      • dm:fr - Double Metaphone for French
      • dm:pt - Double Metaphone for Portuguese
      • dm:es - Double Metaphone for Spanish

      For more details see Phonetic Matching.

    • WEIGHT {weight}

      For TEXT attributes, declares the importance of this attribute when calculating result accuracy. This is a multiplication factor, and defaults to 1 if not specified.

    • SEPARATOR {sep}

      For TAG attributes, indicates how the text contained in the attribute is to be split into individual tags. The default is ,. The value must be a single character.

    • CASESENSITIVE

      For TAG attributes, keeps the original letter cases of the tags. If not specified, the characters are converted to lowercase.

Return

Simple string reply - OK if executed correctly, or an error reply otherwise.

Examples

Creating an index that stores the title, publication date, and categories of blog post hashes whose keys start with blog:post: (e.g., blog:post:1):

FT.CREATE idx ON HASH PREFIX 1 blog:post: SCHEMA title TEXT SORTABLE published_at NUMERIC SORTABLE category TAG SORTABLE

Indexing the "sku" attribute from a hash as both a TAG and as TEXT:

FT.CREATE idx ON HASH PREFIX 1 blog:post: SCHEMA sku AS sku_text TEXT sku AS sku_tag TAG SORTABLE

Indexing two different hashes -- one containing author data and one containing books -- in the same index:

FT.CREATE author-books-idx ON HASH PREFIX 2 author:details: book:details: SCHEMA
author_id TAG SORTABLE author_ids TAG title TEXT name TEXT

!!! note In this example, keys for author data use the key pattern author:details:<id> while keys for book data use the pattern book:details:<id>.

Indexing only authors whose names start with "G":

FT.CREATE g-authors-idx ON HASH PREFIX 1 author:details FILTER 'startswith(@name, "G")' SCHEMA name TEXT

Indexing only books that have a subtitle:

FT.CREATE subtitled-books-idx ON HASH PREFIX 1 book:details FILTER '@subtitle != ""' SCHEMA title TEXT

Indexing books that have a "categories" attribute where each category is separated by a ; character:

FT.CREATE books-idx ON HASH PREFIX 1 book:details SCHEMA title TEXT categories TAG SEPARATOR ";"

Indexing a JSON document using a JSON Path expression:

FT.CREATE idx ON JSON SCHEMA $.title AS title TEXT $.categories AS categories TAG
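
Indexing a vector field is also possible; the following is a minimal sketch (the index name vss-idx, the prefix doc:, and the FLAT / FLOAT32 / DIM 128 / L2 parameters are illustrative assumptions, not part of the examples above):

FT.CREATE vss-idx ON HASH PREFIX 1 doc: SCHEMA vec VECTOR FLAT 6 TYPE FLOAT32 DIM 128 DISTANCE_METRIC L2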

152 - FT.CURSOR DEL

Deletes a cursor

Delete a cursor.

Parameters

  • index: the index name.
  • cursorId: the id of the cursor.

Return

Simple string reply - OK if executed correctly, or an error reply otherwise (for example, when the cursor does not exist).

Examples

redis> FT.CURSOR DEL idx 342459320
OK
redis> FT.CURSOR DEL idx 342459320
(error) Cursor does not exist

153 - FT.CURSOR READ

Reads from a cursor

Reads next results from an existing cursor.

Parameters

  • index: the index name.
  • cursorId: the id of the cursor.
  • readSize: number of results to read. This parameter overrides the COUNT specified in FT.AGGREGATE.

Return

Array reply, where each row is an array reply representing a single aggregate result.

Examples

redis> FT.CURSOR READ idx 342459320 COUNT 50

154 - FT.DICTADD

Adds terms to a dictionary

Adds terms to a dictionary.

Parameters

  • dict: the dictionary name.

  • term: the term to add to the dictionary.

Return

Integer reply - the number of new terms that were added.

Examples

redis> FT.DICTADD dict foo bar "hello world"
(integer) 3

155 - FT.DICTDEL

Deletes terms from a dictionary

Deletes terms from a dictionary.

Parameters

  • dict: the dictionary name.

  • term: the term to delete from the dictionary.

Return

Integer reply - the number of terms that were deleted.

Examples

redis> FT.DICTDEL dict foo bar "hello world"
(integer) 3

156 - FT.DICTDUMP

Dumps all terms in the given dictionary

Dumps all terms in the given dictionary.

Parameters

  • dict: the dictionary name.

Return

Returns an array, where each element is a term (string).

Examples

redis> FT.DICTDUMP dict
1) "foo"
2) "bar"
3) "hello world"

157 - FT.DROPINDEX

Deletes the index

Deletes the index.

By default, FT.DROPINDEX does not delete the document hashes associated with the index. Adding the DD option deletes the hashes as well.

Parameters

  • index: The Fulltext index name. The index must be first created with FT.CREATE
  • DD: If set, the drop operation will delete the actual document hashes.

Return

Simple string reply - OK if executed correctly, or an error reply otherwise.

!!! note When using FT.DROPINDEX with the parameter DD, if an index creation is still running (FT.CREATE is running asynchronously), only the document hashes that have already been indexed are deleted. The document hashes left to be indexed will remain in the database. You can use FT.INFO to check the completion of the indexing.

Examples

redis> FT.DROPINDEX idx DD
OK

158 - FT.EXPLAIN

Returns the execution plan for a complex query

Returns the execution plan for a complex query.

In the returned response, a + on a term is an indication of stemming.

Parameters

  • index: The index name. The index must be first created with FT.CREATE
  • query: The query string, as if sent to FT.SEARCH
  • DIALECT {dialect_version}. Choose the dialect version to execute the query under. If not specified, the query will execute under the default dialect version set during module initial loading or via FT.CONFIG SET command.

!!! tip You should use redis-cli --raw to properly read line-breaks in the returned response.

Return

String Response. A string representing the execution plan (see the example below).

Examples

$ redis-cli --raw

127.0.0.1:6379> FT.EXPLAIN rd "(foo bar)|(hello world) @date:[100 200]|@date:[500 +inf]"
INTERSECT {
  UNION {
    INTERSECT {
      foo
      bar
    }
    INTERSECT {
      hello
      world
    }
  }
  UNION {
    NUMERIC {100.000000 <= x <= 200.000000}
    NUMERIC {500.000000 <= x <= inf}
  }
}

159 - FT.EXPLAINCLI

Returns the execution plan for a complex query

Returns the execution plan for a complex query but formatted for easier reading without using redis-cli --raw.

In the returned response, a + on a term is an indication of stemming.

Parameters

  • index: The index name. The index must be first created with FT.CREATE
  • query: The query string, as if sent to FT.SEARCH
  • DIALECT {dialect_version}. Choose the dialect version to execute the query under. If not specified, the query will execute under the default dialect version set during module initial loading or via FT.CONFIG SET command.

Return

Array reply with a string representation of the execution plan.

Examples

$ redis-cli

127.0.0.1:6379> FT.EXPLAINCLI rd "(foo bar)|(hello world) @date:[100 200]|@date:[500 +inf]"
 1) INTERSECT {
 2)   UNION {
 3)     INTERSECT {
 4)       UNION {
 5)         foo
 6)         +foo(expanded)
 7)       }
 8)       UNION {
 9)         bar
10)         +bar(expanded)
11)       }
12)     }
13)     INTERSECT {
14)       UNION {
15)         hello
16)         +hello(expanded)
17)       }
18)       UNION {
19)         world
20)         +world(expanded)
21)       }
22)     }
23)   }
24)   UNION {
25)     NUMERIC {100.000000 <= @date <= 200.000000}
26)     NUMERIC {500.000000 <= @date <= inf}
27)   }
28) }
29)

160 - FT.INFO

Returns information and statistics on the index

Returns information and statistics on the index. Returned values include:

  • index_definition: reflection of FT.CREATE command parameters.
  • fields: index schema - field names, types, and attributes.
  • Number of documents.
  • Number of distinct terms.
  • Average bytes per record.
  • Size and capacity of the index buffers.
  • Indexing state and percentage as well as failures:
    • indexing: whether or not the index is being scanned in the background,
    • percent_indexed: progress of background indexing (1 if complete),
    • hash_indexing_failures: number of failures due to operations not compatible with index schema.

Optional

  • Statistics about the garbage collector for all options other than NOGC.
  • Statistics about cursors if a cursor exists for the index.
  • Statistics about stopword lists if a custom stopword list is used.

Parameters

  • index: The Fulltext index name. The index must be first created with FT.CREATE

Return

Array reply - pairs of keys and values.

Examples

127.0.0.1:6379> ft.info idx
 1) index_name
 2) wikipedia
 3) index_options
 4) (empty array)
 5) index_definition
 6)  1) key_type
     2) HASH
     3) prefixes
     4) 1) thing:
     5) filter
     6) startswith(@__key, "thing:")
     7) language_field
     8) __language
     9) default_score
    10) "1"
    11) score_field
    12) __score
    13) payload_field
    14) __payload
 7) fields
 8) 1) 1) title
       2) type
       3) TEXT
       4) WEIGHT
       5) "1"
       6) SORTABLE
    2) 1) body
       2) type
       3) TEXT
       4) WEIGHT
       5) "1"
    3) 1) id
       2) type
       3) NUMERIC
    4) 1) subject location
       2) type
       3) GEO
 9) num_docs
10) "0"
11) max_doc_id
12) "345678"
13) num_terms
14) "691356"
15) num_records
16) "0"
17) inverted_sz_mb
18) "0"
19) vector_index_sz_mb
20) "0"
21) total_inverted_index_blocks
22) "933290"
23) offset_vectors_sz_mb
24) "0.65932846069335938"
25) doc_table_size_mb
26) "29.893482208251953"
27) sortable_values_size_mb
28) "11.432285308837891"
29) key_table_size_mb
30) "1.239776611328125e-05"
31) records_per_doc_avg
32) "-nan"
33) bytes_per_record_avg
34) "-nan"
35) offsets_per_term_avg
36) "inf"
37) offset_bits_per_record_avg
38) "8"
39) hash_indexing_failures
40) "0"
41) indexing
42) "0"
43) percent_indexed
44) "1"
45) gc_stats
46)  1) bytes_collected
     2) "4148136"
     3) total_ms_run
     4) "14796"
     5) total_cycles
     6) "1"
     7) average_cycle_time_ms
     8) "14796"
     9) last_run_time_ms
    10) "14796"
    11) gc_numeric_trees_missed
    12) "0"
    13) gc_blocks_denied
    14) "0"
47) cursor_stats
48) 1) global_idle
    2) (integer) 0
    3) global_total
    4) (integer) 0
    5) index_capacity
    6) (integer) 128
    7) index_total
    8) (integer) 0
49) stopwords_list
50) 1) "tlv"
    2) "summer"
    3) "2020"

161 - FT.PROFILE

Performs a FT.SEARCH or FT.AGGREGATE command and collects performance information

Performs a FT.SEARCH or FT.AGGREGATE command and collects performance information. The return value is an array with two elements:

  • Results - The normal reply from RediSearch, similar to a cursor.
  • Profile - The details in the profile are:
    • Total profile time - The total runtime of the query.
    • Parsing time - Parsing time of the query and parameters into an execution plan.
    • Pipeline creation time - Creation time of execution plan including iterators, result processors and reducers creation.
    • Iterators profile - Index iterators information, including their type, term, count, and time data. Inverted-index iterators also include the number of elements they contain. Hybrid vector iterators, which return the top results from the vector index in batches, include the number of batches.
    • Result processors profile - Result processors chain with type, count and time data.

Parameters

  • index: The index name. The index must be first created with FT.CREATE
  • SEARCH|AGGREGATE: Selects whether the profiled command is FT.SEARCH or FT.AGGREGATE
  • LIMITED: Removes details of reader iterators
  • QUERY {query}: The query string, as if sent to FT.SEARCH

Return

Array reply with information about the time used to create the query, and the time and count of calls of iterators and result processors.

!!! tip To reduce the size of the output, use NOCONTENT or LIMIT 0 0 to reduce the results reply, or LIMITED to omit the details of reader iterators inside built-in unions such as fuzzy or prefix.

Examples

FT.PROFILE idx SEARCH QUERY "hello world"
1) 1) (integer) 1
   2) "doc1"
   3) 1) "t"
      2) "hello world"
2) 1) 1) Total profile time
      2) "0.47199999999999998"
   2) 1) Parsing time
      2) "0.218"
   3) 1) Pipeline creation time
      2) "0.032000000000000001"
   4) 1) Iterators profile
      2) 1) Type
         2) INTERSECT
         3) Time
         4) "0.025000000000000001"
         5) Counter
         6) (integer) 1
         7) Child iterators
         8)  1) Type
             2) TEXT
             3) Term
             4) hello
             5) Time
             6) "0.0070000000000000001"
             7) Counter
             8) (integer) 1
             9) Size
            10) (integer) 1
         9)  1) Type
             2) TEXT
             3) Term
             4) world
             5) Time
             6) "0.0030000000000000001"
             7) Counter
             8) (integer) 1
             9) Size
            10) (integer) 1
   5) 1) Result processors profile
      2) 1) Type
         2) Index
         3) Time
         4) "0.036999999999999998"
         5) Counter
         6) (integer) 1
      3) 1) Type
         2) Scorer
         3) Time
         4) "0.025000000000000001"
         5) Counter
         6) (integer) 1
      4) 1) Type
         2) Sorter
         3) Time
         4) "0.013999999999999999"
         5) Counter
         6) (integer) 1
      5) 1) Type
         2) Loader
         3) Time
         4) "0.10299999999999999"
         5) Counter
         6) (integer) 1

162 - FT.SEARCH

Searches the index with a textual query, returning either documents or just ids

Complexity

O(n) for single word queries. n is the number of the results in the result set. Finding all the documents that have a specific term is O(1), however, a scan on all those documents is needed to load the documents data from redis hashes and return them.

The time complexity for more complex queries varies, but in general it's proportional to the number of words, the number of intersection points between them and the number of results in the result set.


Searches the index with a textual query, returning either documents or just ids.

Parameters

  • index: The index name. The index must be first created with FT.CREATE.

  • query: the text query to search. If it's more than a single word, put it in quotes. Refer to query syntax for more details.

  • NOCONTENT: If it appears after the query, we only return the document ids and not the content. This is useful if RediSearch is only an index on an external document collection.

  • VERBATIM: if set, we do not try to use stemming for query expansion but search the query terms verbatim.

  • NOSTOPWORDS: If set, we do not filter stopwords from the query.

  • WITHSCORES: If set, we also return the relative internal score of each document. This can be used to merge results from multiple instances.

  • WITHPAYLOADS: If set, we retrieve optional document payloads (see FT.ADD). The payloads follow the document id and, if WITHSCORES was set, follow the scores.

  • WITHSORTKEYS: Only relevant in conjunction with SORTBY. Returns the value of the sorting key, right after the id and the score and/or payload, if requested. This is usually not needed by users and exists for distributed search coordination purposes.

  • FILTER numeric_attribute min max: If set, and numeric_attribute is defined as a numeric attribute in FT.CREATE, we will limit results to those having numeric values ranging between min and max. min and max follow ZRANGE syntax, and can be -inf, +inf and use ( for exclusive ranges. Multiple numeric filters for different attributes are supported in one query.

  • GEOFILTER {geo_attribute} {lon} {lat} {radius} m|km|mi|ft: If set, we filter the results to a given radius from lon and lat. Radius is given as a number and units. See GEORADIUS for more details.

  • INKEYS {num} {attribute} ...: If set, we limit the result to a given set of keys specified in the list. The first argument must be the length of the list and greater than zero. Non-existent keys are ignored, unless all the keys are non-existent.

  • INFIELDS {num} {attribute} ...: If set, filter the results to ones appearing only in specific attributes of the document, like title or URL. You must include num, which is the number of attributes you're filtering by. For example, if you request title and URL, then num is 2.

  • RETURN {num} {identifier} AS {property} ...: Use this keyword to limit which attributes from the document are returned. num is the number of attributes following the keyword. If num is 0, it acts like NOCONTENT. identifier is either an attribute name (for hashes and JSON) or a JSON Path expression (for JSON). property is an optional name used in the result. If not provided, the identifier is used in the result.

  • SUMMARIZE ...: Use this option to return only the sections of the attribute which contain the matched text. See Highlighting for more details

  • HIGHLIGHT ...: Use this option to format occurrences of matched text. See Highlighting for more details

  • SLOP {slop}: If set, we allow a maximum of N intervening unmatched offsets between phrase terms (i.e., the slop for exact phrases is 0).

  • INORDER: If set, and usually used in conjunction with SLOP, we make sure the query terms appear in the same order in the document as in the query, regardless of the offsets between them.

  • LANGUAGE {language}: If set, we use a stemmer for the supplied language during search for query expansion. If querying documents in Chinese, this should be set to chinese in order to properly tokenize the query terms. Defaults to English. If an unsupported language is sent, the command returns an error. See FT.ADD for the list of languages.

  • EXPANDER {expander}: If set, we will use a custom query expander instead of the stemmer. See Extensions.

  • SCORER {scorer}: If set, we will use a custom scoring function defined by the user. See Extensions.

  • EXPLAINSCORE: If set, will return a textual description of how the scores were calculated. Using this option requires the WITHSCORES option.

  • PAYLOAD {payload}: Add an arbitrary, binary safe payload that will be exposed to custom scoring functions. See Extensions.

  • SORTBY {attribute} [ASC|DESC]: If specified, the results are ordered by the value of this attribute. This applies to both text and numeric attributes. Attributes needed for SORTBY should be declared as SORTABLE in the index, in order to be available with very low latency (notice this adds memory overhead)

  • LIMIT first num: Limit the results to the offset and number of results given. Note that the offset is zero-indexed. The default is 0 10, which returns 10 items starting from the first result.

!!! tip LIMIT 0 0 can be used to count the number of documents in the result set without actually returning them.

  • TIMEOUT {milliseconds}: If set, we will override the timeout parameter of the module.
  • PARAMS {nargs} {name} {value}. Define one or more value parameters. Each parameter has a name and a value. Parameters can be referenced in the query by a $, followed by the parameter name, e.g., $user, and each such reference in the search query to a parameter name is substituted by the corresponding parameter value. For example, with parameter definition PARAMS 4 lon 29.69465 lat 34.95126, the expression @loc:[$lon $lat 10 km] would be evaluated to @loc:[29.69465 34.95126 10 km]. Parameters cannot be referenced in the query string where concrete values are not allowed, such as in field names, e.g., @loc. To use PARAMS, DIALECT must be set to 2.
  • DIALECT {dialect_version}. Choose the dialect version to execute the query under. If not specified, the query will execute under the default dialect version set during module initial loading or via FT.CONFIG SET command.

Return

Array reply, where the first element is the total number of results, followed by pairs of document IDs and array replies of attribute/value pairs.

If NOCONTENT was given, we return an array where the first element is the total number of results, and the rest of the members are document ids.

!!! note "Expiration of hashes during a search query" If a hash expiry time is reached after the start of the query process, the hash will be counted in the total number of results but name and content of the hash will not be returned.

Examples

Searching for the term "wizard" in every TEXT attribute of an index containing book data:

FT.SEARCH books-idx "wizard"

Searching for the term "dogs" in only the "title" attribute:

FT.SEARCH books-idx "@title:dogs"

Searching for books published in 2020 or 2021:

FT.SEARCH books-idx "@published_at:[2020 2021]"

Searching for Chinese restaurants within 5 kilometers of longitude -122.41, latitude 37.77 (San Francisco):

FT.SEARCH restaurants-idx "chinese @location:[-122.41 37.77 5 km]"

Searching for the term "dogs" or "cats" in the "title" attribute, but giving matches of "dogs" a higher relevance score (also known as boosting):

FT.SEARCH books-idx "(@title:dogs | @title:cats) | (@title:dogs) => { $weight: 5.0; }"

Searching for books with "dogs" in any TEXT attribute in the index and requesting an explanation of scoring for each result:

FT.SEARCH books-idx "dogs" WITHSCORES EXPLAINSCORE

Searching for books with "space" in the title that have "science" in the TAG attribute "categories":

FT.SEARCH books-idx "@title:space @categories:{science}"

Searching for books with "Python" in any TEXT attribute, returning ten results starting with the eleventh result in the entire result set (the offset parameter is zero-based), and returning only the "title" attribute for each result:

FT.SEARCH books-idx "python" LIMIT 10 10 RETURN 1 title

Searching for books with "Python" in any TEXT attribute, returning the price stored in the original JSON document.

FT.SEARCH books-idx "python" RETURN 3 $.book.price AS price

Searching for books whose "title" is semantically similar to "Planet Earth", returning the top 10 results sorted by distance:

FT.SEARCH books-idx "*=>[KNN 10 @title_embedding $query_vec AS title_score]" PARAMS 2 query_vec <"Planet Earth" embedding BLOB> SORTBY title_score DIALECT 2

!!! tip "More examples" For more details and query examples, see query syntax.

163 - FT.SPELLCHECK

Performs spelling correction on a query, returning suggestions for misspelled terms

Performs spelling correction on a query, returning suggestions for misspelled terms.

See Query Spelling Correction for more details.

Parameters

  • index: the index with the indexed terms.

  • query: the search query.

  • TERMS: specifies an inclusion (INCLUDE) or exclusion (EXCLUDE) custom dictionary named {dict}. Refer to FT.DICTADD, FT.DICTDEL and FT.DICTDUMP for managing custom dictionaries.

  • DISTANCE: the maximal Levenshtein distance for spelling suggestions (default: 1, max: 4).

  • DIALECT {dialect_version}. Choose the dialect version to execute the query under. If not specified, the query will execute under the default dialect version set during module initial loading or via FT.CONFIG SET command.

Return

Array reply, in which each element represents a misspelled term from the query. The misspelled terms are ordered by their order of appearance in the query.

Each misspelled term, in turn, is a 3-element array consisting of the constant string "TERM", the term itself and an array of suggestions for spelling corrections.

Each element in the spelling corrections array consists of the score of the suggestion and the suggestion itself. The suggestions array, per misspelled term, is ordered in descending order by score.

The score is calculated by dividing the number of documents in which the suggested term exists by the total number of documents in the index. For example, a suggestion that appears in two out of three documents scores 2/3, as in the example below. Results can be normalized by dividing scores by the highest score.

Examples

redis> FT.SPELLCHECK idx held DISTANCE 2
1) 1) "TERM"
   2) "held"
   3) 1) 1) "0.66666666666666663"
         2) "hello"
      2) 1) "0.33333333333333331"
         2) "help"

164 - FT.SUGADD

Adds a suggestion string to an auto-complete suggestion dictionary

Adds a suggestion string to an auto-complete suggestion dictionary. This is disconnected from the index definitions, and leaves creating and updating suggestion dictionaries to the user.

Parameters

  • key: the suggestion dictionary key.
  • string: the suggestion string to index
  • score: the suggestion string's weight, as a floating point number
  • INCR: if set, we increment the existing entry of the suggestion by the given score, instead of replacing the score. This is useful for updating the dictionary based on user queries in real time.
  • PAYLOAD {payload}: If set, we save an extra payload with the suggestion, which can be fetched by adding the WITHPAYLOADS argument to FT.SUGGET.

Return

Integer reply: the current size of the suggestion dictionary.

Examples

FT.SUGADD sug "hello world" 1
(integer) 3

165 - FT.SUGDEL

Deletes a string from a suggestion index

Deletes a string from a suggestion index.

Parameters

  • key: the suggestion dictionary key.
  • string: the string to delete

Returns

Integer reply: 1 if the string was found and deleted, 0 otherwise.

Examples

redis> FT.SUGDEL sug "hello"
(integer) 1
redis> FT.SUGDEL sug "hello"
(integer) 0

166 - FT.SUGGET

Gets completion suggestions for a prefix

Gets completion suggestions for a prefix.

Parameters

  • key: the suggestion dictionary key.
  • prefix: the prefix to complete on
  • FUZZY: if set, we do a fuzzy prefix search, including prefixes at a Levenshtein distance of 1 from the prefix sent
  • MAX num: If set, we limit the results to a maximum of num (default: 5).
  • WITHSCORES: If set, we also return the score of each suggestion. This can be used to merge results from multiple instances.
  • WITHPAYLOADS: If set, we return optional payloads saved along with the suggestions. If no payload is present for an entry, we return a Null Reply.

Returns

Array reply: a list of the top suggestions matching the prefix, optionally with score after each entry.

Examples

redis> FT.SUGGET sug hell FUZZY MAX 3 WITHSCORES
1) "hell"
2) "2147483648"
3) "hello"
4) "0.70710676908493042"

167 - FT.SUGLEN

Gets the size of an auto-complete suggestion dictionary

Gets the size of an auto-complete suggestion dictionary

Parameters

  • key: the suggestion dictionary key.

Return

Integer reply: the current size of the suggestion dictionary.

Examples

FT.SUGLEN sug
(integer) 2

168 - FT.SYNDUMP

Dumps the contents of a synonym group

Dumps the contents of a synonym group.

The command is used to dump the synonyms data structure. Returns a list of synonym terms and their synonym group ids.

Return

Array reply - pairs of a term and an array of its synonym group ids.

Examples

127.0.0.1:6379> FT.SYNDUMP idx
1) "shalom"
2) 1) "synonym1"
   2) "synonym2"
3) "hi"
4) 1) "synonym1"
5) "hello"
6) 1) "synonym1"

169 - FT.SYNUPDATE

Creates or updates a synonym group with additional terms

Updates a synonym group.

The command is used to create or update a synonym group with additional terms. The command triggers a scan of all documents.

Parameters

  • SKIPINITIALSCAN: If set, we do not scan and index; only documents indexed after the update will be affected.

Return

Simple string reply - OK if executed correctly, or an error reply otherwise.

Examples

redis> FT.SYNUPDATE idx synonym hello hi shalom
OK
redis> FT.SYNUPDATE idx synonym SKIPINITIALSCAN hello hi shalom
OK

170 - FT.TAGVALS

Returns the distinct tags indexed in a Tag field

Returns the distinct set of values indexed in a Tag field.

This is useful if your tag indexes things like cities, categories, etc.

!!! warning "Limitations" There is no paging or sorting, the tags are not alphabetically sorted. This command only operates on Tag fields. The strings return lower-cased and stripped of whitespaces, but otherwise unchanged.

Parameters

  • index: The Fulltext index name. The index must be first created with FT.CREATE
  • field_name: The name of a Tag field defined in the schema.

Return

Array reply of all the distinct tags in the tag index.

Examples

FT.TAGVALS idx myTag
1) "Hello"
2) "World"

171 - FUNCTION

A container for function commands

This is a container command for function commands.

To see the list of available commands you can call FUNCTION HELP.

172 - FUNCTION DELETE

Delete a function by name

Delete a library and all its functions.

This command deletes the library called library-name and all functions in it. If the library doesn't exist, the server returns an error.

For more information please refer to Introduction to Redis Functions.

Return

Simple string reply

Examples

redis> FUNCTION LOAD "#!lua name=mylib \n redis.register_function('myfunc', function(keys, args) return 'hello' end)"
mylib
redis> FCALL myfunc 0
"hello"
redis> FUNCTION DELETE mylib
OK
redis> FCALL myfunc 0
(error) ERR Function not found

173 - FUNCTION DUMP

Dump all functions into a serialized binary payload

Return the serialized payload of loaded libraries. You can restore the serialized payload later with the FUNCTION RESTORE command.

For more information please refer to Introduction to Redis Functions.

Return

Bulk string reply: the serialized payload

Examples

The following example shows how to dump loaded libraries using FUNCTION DUMP, then calls FUNCTION FLUSH to delete all the libraries, and finally restores the original libraries from the serialized payload with FUNCTION RESTORE.

redis> FUNCTION DUMP
"\xf6\x05mylib\x03LUA\x00\xc3@D@J\x1aredis.register_function('my@\x0b\x02', @\x06`\x12\x11keys, args) return`\x0c\a[1] end)\n\x00@\n)\x11\xc8|\x9b\xe4"
redis> FUNCTION FLUSH
OK
redis> FUNCTION RESTORE "\xf6\x05mylib\x03LUA\x00\xc3@D@J\x1aredis.register_function('my@\x0b\x02', @\x06`\x12\x11keys, args) return`\x0c\a[1] end)\n\x00@\n)\x11\xc8|\x9b\xe4"
OK
redis> FUNCTION LIST
1) 1) "library_name"
   2) "mylib"
   3) "engine"
   4) "LUA"
   5) "description"
   6) (nil)
   7) "functions"
   8) 1) 1) "name"
         2) "myfunc"
         3) "description"
         4) (nil)

174 - FUNCTION FLUSH

Deleting all functions

Deletes all the libraries.

Unless called with the optional mode argument, the lazyfree-lazy-user-flush configuration directive sets the effective behavior. Valid modes are:

  • ASYNC: Asynchronously flush the libraries.
  • SYNC: Synchronously flush the libraries.

For more information please refer to Introduction to Redis Functions.

Return

Simple string reply
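
Examples

A minimal sketch (assuming some libraries are loaded; ASYNC requests a non-blocking flush):

redis> FUNCTION FLUSH ASYNC
OK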

175 - FUNCTION HELP

Show helpful text about the different subcommands

The FUNCTION HELP command returns a helpful text describing the different subcommands.

Return

Array reply: a list of subcommands and their descriptions

176 - FUNCTION KILL

Kill the function currently in execution.

Kill a function that is currently executing.

The FUNCTION KILL command can be used only on functions that did not modify the dataset during their execution (since stopping a read-only function does not violate the scripting engine's guaranteed atomicity).

For more information please refer to Introduction to Redis Functions.

Return

Simple string reply
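
Examples

A sketch, assuming a long-running read-only function is currently executing on another connection:

redis> FUNCTION KILL
OK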

177 - FUNCTION LIST

List information about all the functions

Return information about the functions and libraries.

You can use the optional LIBRARYNAME argument to specify a pattern for matching library names. The optional WITHCODE modifier will cause the server to include the library's source implementation in the reply.

The following information is provided for each of the libraries in the response:

  • library_name: the name of the library.
  • engine: the engine of the library.
  • functions: the list of functions in the library. Each function has the following fields:
    • name: the name of the function.
    • description: the function's description.
    • flags: an array of function flags.
  • library_code: the library's source code (when given the WITHCODE modifier).

For more information please refer to Introduction to Redis Functions.

Return

Array reply
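
Examples

A sketch, assuming the mylib library from the FUNCTION LOAD example below is loaded (exact fields may vary by Redis version):

redis> FUNCTION LIST LIBRARYNAME mylib
1) 1) "library_name"
   2) "mylib"
   3) "engine"
   4) "LUA"
   5) "functions"
   6) 1) 1) "name"
         2) "myfunc"
         3) "description"
         4) (nil)
         5) "flags"
         6) (empty array)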

178 - FUNCTION LOAD

Create a function with the given arguments (name, code, description)

Load a library to Redis.

The command gets a single mandatory parameter, which is the source code that implements the library. The library payload must start with a shebang statement that provides metadata about the library (such as the engine to use and the library name). Shebang format: #!<engine name> name=<library name>. Currently, the engine name must be lua.

For the Lua engine, the implementation should declare one or more entry points to the library with the redis.register_function() API. Once loaded, you can call the functions in the library with the FCALL (or FCALL_RO when applicable) command.

When attempting to load a library with a name that already exists, the Redis server returns an error. The REPLACE modifier changes this behavior and overwrites the existing library with the new contents.

The command will return an error in the following circumstances:

  • An invalid engine-name was provided.
  • The library's name already exists without the REPLACE modifier.
  • A function in the library is created with a name that already exists in another library (even when REPLACE is specified).
  • The engine failed in creating the library's functions (due to a compilation error, for example).
  • No functions were declared by the library.

For more information please refer to Introduction to Redis Functions.

Return

Bulk string reply - the library name that was loaded.

Examples

The following example will create a library named mylib with a single function, myfunc, that returns the first argument it gets.

redis> FUNCTION LOAD "#!lua name=mylib \n redis.register_function('myfunc', function(keys, args) return args[1] end)"
mylib
redis> FCALL myfunc 0 hello
"hello"

179 - FUNCTION RESTORE

Restore all the functions on the given payload

Restore libraries from the serialized payload.

You can use the optional policy argument to provide a policy for handling existing libraries. The following policies are allowed:

  • APPEND: appends the restored libraries to the existing libraries and aborts on collision. This is the default policy.
  • FLUSH: deletes all existing libraries before restoring the payload.
  • REPLACE: appends the restored libraries to the existing libraries, replacing any existing ones in case of name collisions. Note that this policy doesn't prevent function name collisions, only library name collisions.

For more information please refer to Introduction to Redis Functions.

Return

Simple string reply

180 - FUNCTION STATS

Return information about the function currently running (name, description, duration)

Return information about the function that's currently running and information about the available execution engines.

The reply is a map with two keys:

  1. running_script: information about the running script. If there's no in-flight function, the server replies with a nil. Otherwise, this is a map with the following keys:
  • name: the name of the function.
  • command: the command and arguments used for invoking the function.
  • duration_ms: the function's runtime duration in milliseconds.
  2. engines: this is a map of maps. Each entry in the map represents a single engine. Each engine map contains statistics about the engine, such as its number of functions and number of libraries.

You can use this command to inspect the invocation of a long-running function and decide whether to kill it with the FUNCTION KILL command.

For more information please refer to Introduction to Redis Functions.

Return

Array reply
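
Examples

A sketch of the reply when no function is in flight (the engine counters assume a single loaded library with one function):

redis> FUNCTION STATS
1) "running_script"
2) (nil)
3) "engines"
4) 1) "LUA"
   2) 1) "libraries_count"
      2) (integer) 1
      3) "functions_count"
      4) (integer) 1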

181 - GEOADD

Add one or more geospatial items in the geospatial index represented using a sorted set

Adds the specified geospatial items (longitude, latitude, name) to the specified key. Data is stored into the key as a sorted set, in a way that makes it possible to query the items with the GEOSEARCH command.

The command takes arguments in the standard format x,y so the longitude must be specified before the latitude. There are limits to the coordinates that can be indexed: areas very near to the poles are not indexable.

The exact limits, as specified by EPSG:900913 / EPSG:3785 / OSGEO:41001 are the following:

  • Valid longitudes are from -180 to 180 degrees.
  • Valid latitudes are from -85.05112878 to 85.05112878 degrees.

The command will report an error when the user attempts to index coordinates outside the specified ranges.

Note: there is no GEODEL command because you can use ZREM to remove elements. The Geo index structure is just a sorted set.

GEOADD options

GEOADD also provides the following options:

  • XX: Only update elements that already exist. Never add elements.
  • NX: Don't update already existing elements. Always add new elements.
  • CH: Modify the return value from the number of new elements added, to the total number of elements changed (CH is an abbreviation of changed). Changed elements are new elements added and already existing elements for which the coordinates were updated. Elements specified in the command line that have the same score as they had in the past are not counted. Note: normally, the return value of GEOADD only counts the number of new elements added (see the sketch below).

Note: The XX and NX options are mutually exclusive.
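
A sketch showing how CH changes the return value when an existing element's coordinates are updated (the key and coordinates reuse the Sicily examples below):

redis> GEOADD Sicily 13.361389 38.115556 "Palermo"
(integer) 1
redis> GEOADD Sicily CH 13.5 38.0 "Palermo"
(integer) 1
redis> GEOADD Sicily 13.6 38.0 "Palermo"
(integer) 0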

How does it work?

The way the sorted set is populated is using a technique called Geohash. Latitude and Longitude bits are interleaved to form a unique 52-bit integer. We know that a sorted set double score can represent a 52-bit integer without losing precision.

This format allows for bounding box and radius querying by checking the 1+8 areas needed to cover the whole shape and discarding elements outside it. The areas are checked by calculating the range of the box covered, removing enough bits from the less significant part of the sorted set score, and computing the score range to query in the sorted set for each area.
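
Because the geo index is a plain sorted set, the encoded 52-bit score can be read back directly with sorted set commands. A minimal sketch (the integer shown is the geohash score for Palermo, the same value GEORADIUS reports with WITHHASH):

GEOADD Sicily 13.361389 38.115556 "Palermo"
ZSCORE Sicily "Palermo"
"3479099956230698"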

What Earth model does it use?

The model assumes that the Earth is a sphere since it uses the Haversine formula to calculate distance. This formula is only an approximation when applied to the Earth, which is not a perfect sphere. The introduced errors are not an issue when used, for example, by social networks and similar applications requiring this type of querying. However, in the worst case, the error may be up to 0.5%, so you may want to consider other systems for error-critical applications.

Return

Integer reply, specifically:

  • When used without optional arguments, the number of elements added to the sorted set (excluding score updates).
  • If the CH option is specified, the number of elements that were changed (added or updated).

Examples

GEOADD Sicily 13.361389 38.115556 "Palermo" 15.087269 37.502669 "Catania"
GEODIST Sicily Palermo Catania
GEORADIUS Sicily 15 37 100 km
GEORADIUS Sicily 15 37 200 km

182 - GEODIST

Returns the distance between two members of a geospatial index

Return the distance between two members in the geospatial index represented by the sorted set.

Given a sorted set representing a geospatial index, populated using the GEOADD command, the command returns the distance between the two specified members in the specified unit.

If one or both of the members are missing, the command returns NULL.

The unit must be one of the following, and defaults to meters:

  • m for meters.
  • km for kilometers.
  • mi for miles.
  • ft for feet.

The distance is computed assuming that the Earth is a perfect sphere, so errors up to 0.5% are possible in edge cases.

Return

Bulk string reply, specifically:

The command returns the distance as a double (represented as a string) in the specified unit, or NULL if one or both of the elements are missing.

Examples

GEOADD Sicily 13.361389 38.115556 "Palermo" 15.087269 37.502669 "Catania"
GEODIST Sicily Palermo Catania
GEODIST Sicily Palermo Catania km
GEODIST Sicily Palermo Catania mi
GEODIST Sicily Foo Bar

183 - GEOHASH

Returns members of a geospatial index as standard geohash strings

Return valid Geohash strings representing the position of one or more elements in a sorted set value representing a geospatial index (where elements were added using GEOADD).

Normally Redis represents positions of elements using a variation of the Geohash technique where positions are encoded using 52 bit integers. The encoding is also different compared to the standard because the initial min and max coordinates used during the encoding and decoding process are different. This command however returns a standard Geohash in the form of a string as described in the Wikipedia article and compatible with the geohash.org web site.

Geohash string properties

The command returns 11-character Geohash strings, so no precision is lost compared to the Redis internal 52 bit representation. The returned Geohashes have the following properties:

  1. They can be shortened by removing characters from the right. This loses precision but still points to the same area.
  2. It is possible to use them in geohash.org URLs such as http://geohash.org/<geohash-string>.
  3. Strings with a similar prefix are nearby, but the contrary is not true: it is possible that strings with different prefixes are nearby too.

Return

Array reply, specifically:

The command returns an array where each element is the Geohash corresponding to each member name passed as argument to the command.

Examples

GEOADD Sicily 13.361389 38.115556 "Palermo" 15.087269 37.502669 "Catania"
GEOHASH Sicily Palermo Catania

184 - GEOPOS

Returns longitude and latitude of members of a geospatial index

Return the positions (longitude,latitude) of all the specified members of the geospatial index represented by the sorted set at key.

Given a sorted set representing a geospatial index, populated using the GEOADD command, it is often useful to obtain back the coordinates of specified members. When the geospatial index is populated via GEOADD, the coordinates are converted into a 52 bit geohash, so the coordinates returned may not be exactly the ones used to add the elements; small errors may be introduced.

The command can accept a variable number of arguments so it always returns an array of positions even when a single element is specified.

Return

Array reply, specifically:

The command returns an array where each element is a two elements array representing longitude and latitude (x,y) of each member name passed as argument to the command.

Non existing elements are reported as NULL elements of the array.

Examples

GEOADD Sicily 13.361389 38.115556 "Palermo" 15.087269 37.502669 "Catania"
GEOPOS Sicily Palermo Catania NonExisting

185 - GEORADIUS

Query a sorted set representing a geospatial index to fetch members matching a given maximum distance from a point

Return the members of a sorted set populated with geospatial information using GEOADD, which are within the borders of the area specified with the center location and the maximum distance from the center (the radius).

This manual page also covers the GEORADIUS_RO and GEORADIUSBYMEMBER_RO variants (see the section below for more information).

The common use case for this command is to retrieve geospatial items near a specified point that are not farther than a given amount of meters (or other units). This allows, for example, suggesting nearby places to mobile users of an application.

The radius is specified in one of the following units:

  • m for meters.
  • km for kilometers.
  • mi for miles.
  • ft for feet.

The command optionally returns additional information using the following options:

  • WITHDIST: Also return the distance of the returned items from the specified center. The distance is returned in the same unit as the unit specified as the radius argument of the command.
  • WITHCOORD: Also return the longitude,latitude coordinates of the matching items.
  • WITHHASH: Also return the raw geohash-encoded sorted set score of the item, in the form of a 52 bit unsigned integer. This is only useful for low level hacks or debugging and is otherwise of little interest for the general user.

The command default is to return unsorted items. Two different sorting methods can be invoked using the following two options:

  • ASC: Sort returned items from the nearest to the farthest, relative to the center.
  • DESC: Sort returned items from the farthest to the nearest, relative to the center.

By default all the matching items are returned. It is possible to limit the results to the first N matching items by using the COUNT <count> option. When ANY is provided, the command returns as soon as enough matches are found, so the results may not be the ones closest to the specified point, but on the other hand, the effort invested by the server is significantly lower. When ANY is not provided, the command performs an effort that is proportional to the number of items matching the specified area and sorts them, so querying very large areas with a very small COUNT option may be slow even if just a few results are returned.

By default the command returns the items to the client. It is possible to store the results with one of these options:

  • STORE: Store the items in a sorted set populated with their geospatial information.
  • STOREDIST: Store the items in a sorted set populated with their distance from the center as a floating point number, in the same unit specified in the radius.

Return

Array reply, specifically:

  • Without any WITH option specified, the command just returns a linear array like ["New York","Milan","Paris"].
  • If WITHCOORD, WITHDIST or WITHHASH options are specified, the command returns an array of arrays, where each sub-array represents a single item.

When additional information is returned as an array of arrays for each item, the first item in the sub-array is always the name of the returned item. The other information is returned in the following order as successive elements of the sub-array.

  1. The distance from the center as a floating point number, in the same unit specified in the radius.
  2. The geohash integer.
  3. The coordinates as a two items x,y array (longitude,latitude).

So for example the command GEORADIUS Sicily 15 37 200 km WITHCOORD WITHDIST will return each item in the following way:

["Palermo","190.4424",["13.361389338970184","38.115556395496299"]]

Read-only variants

Since GEORADIUS and GEORADIUSBYMEMBER have a STORE and STOREDIST option they are technically flagged as writing commands in the Redis command table. For this reason read-only replicas will flag them, and Redis Cluster replicas will redirect them to the master instance even if the connection is in read-only mode (see the READONLY command of Redis Cluster).

Breaking the compatibility with the past was considered but rejected, at least for Redis 4.0, so instead two read-only variants of the commands were added. They are exactly like the original commands but refuse the STORE and STOREDIST options. The two variants are called GEORADIUS_RO and GEORADIUSBYMEMBER_RO, and can safely be used in replicas.

Examples

GEOADD Sicily 13.361389 38.115556 "Palermo" 15.087269 37.502669 "Catania"
GEORADIUS Sicily 15 37 200 km WITHDIST
GEORADIUS Sicily 15 37 200 km WITHCOORD
GEORADIUS Sicily 15 37 200 km WITHDIST WITHCOORD

186 - GEORADIUS_RO

A read-only variant for GEORADIUS

Read-only variant of the GEORADIUS command.

This command is identical to the GEORADIUS command, except that it doesn't support the optional STORE and STOREDIST parameters.

Return

Array reply: the command returns items in the same format as GEORADIUS.
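
Examples

A sketch, reusing the Sicily data from the GEORADIUS examples (ASC returns the nearest members first):

GEORADIUS_RO Sicily 15 37 200 km ASC COUNT 2
1) "Catania"
2) "Palermo"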

187 - GEORADIUSBYMEMBER

Query a sorted set representing a geospatial index to fetch members matching a given maximum distance from a member

This command is exactly like GEORADIUS with the sole difference that instead of taking, as the center of the area to query, a longitude and latitude value, it takes the name of a member already existing inside the geospatial index represented by the sorted set.

The position of the specified member is used as the center of the query.

Please check the example below and the GEORADIUS documentation for more information about the command and its options.

Note that GEORADIUSBYMEMBER_RO is also available since Redis 3.2.10 and Redis 4.0.0 in order to provide a read-only command that can be used in replicas. See the GEORADIUS page for more information.

Examples

GEOADD Sicily 13.583333 37.316667 "Agrigento"
GEOADD Sicily 13.361389 38.115556 "Palermo" 15.087269 37.502669 "Catania"
GEORADIUSBYMEMBER Sicily Agrigento 100 km

188 - GEORADIUSBYMEMBER_RO

A read-only variant for GEORADIUSBYMEMBER

Read-only variant of the GEORADIUSBYMEMBER command.

This command is identical to the GEORADIUSBYMEMBER command, except that it doesn't support the optional STORE and STOREDIST parameters.
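
Examples

A sketch, reusing the Sicily data from the GEORADIUSBYMEMBER example (the queried member is included in the result):

GEORADIUSBYMEMBER_RO Sicily Agrigento 100 km
1) "Agrigento"
2) "Palermo"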

189 - GEOSEARCH

Query a sorted set representing a geospatial index to fetch members inside an area of a box or a circle.

Return the members of a sorted set populated with geospatial information using GEOADD, which are within the borders of the area specified by a given shape. This command extends the GEORADIUS command, so in addition to searching within circular areas, it supports searching within rectangular areas.

This command should be used in place of the deprecated GEORADIUS and GEORADIUSBYMEMBER commands.

The query's center point is provided by one of these mandatory options:

  • FROMMEMBER: Use the position of the given existing <member> in the sorted set.
  • FROMLONLAT: Use the given <longitude> and <latitude> position.

The query's shape is provided by one of these mandatory options:

  • BYRADIUS: Similar to GEORADIUS, search inside circular area according to given <radius>.
  • BYBOX: Search inside an axis-aligned rectangle, determined by <height> and <width>.

The command optionally returns additional information using the following options:

  • WITHDIST: Also return the distance of the returned items from the specified center point. The distance is returned in the same unit as specified for the radius or height and width arguments.
  • WITHCOORD: Also return the longitude and latitude of the matching items.
  • WITHHASH: Also return the raw geohash-encoded sorted set score of the item, in the form of a 52 bit unsigned integer. This is only useful for low level hacks or debugging and is otherwise of little interest for the general user.

Matching items are returned unsorted by default. To sort them, use one of the following two options:

  • ASC: Sort returned items from the nearest to the farthest, relative to the center point.
  • DESC: Sort returned items from the farthest to the nearest, relative to the center point.

All matching items are returned by default. To limit the results to the first N matching items, use the COUNT <count> option. When the ANY option is used, the command returns as soon as enough matches are found. This means that the results returned may not be the ones closest to the specified point, but the effort invested by the server to generate them is significantly lower. When ANY is not provided, the command performs an effort that is proportional to the number of items matching the specified area and sorts them, so querying very large areas with a very small COUNT option may be slow even if just a few results are returned.

Return

Array reply, specifically:

  • Without any WITH option specified, the command just returns a linear array like ["New York","Milan","Paris"].
  • If WITHCOORD, WITHDIST or WITHHASH options are specified, the command returns an array of arrays, where each sub-array represents a single item.

When additional information is returned as an array of arrays for each item, the first item in the sub-array is always the name of the returned item. The other information is returned in the following order as successive elements of the sub-array.

  1. The distance from the center as a floating point number, in the same unit specified in the shape.
  2. The geohash integer.
  3. The coordinates as a two items x,y array (longitude,latitude).

Examples

GEOADD Sicily 13.361389 38.115556 "Palermo" 15.087269 37.502669 "Catania"
GEOADD Sicily 12.758489 38.788135 "edge1" 17.241510 38.788135 "edge2"
GEOSEARCH Sicily FROMLONLAT 15 37 BYRADIUS 200 km ASC
GEOSEARCH Sicily FROMLONLAT 15 37 BYBOX 400 400 km ASC WITHCOORD WITHDIST

190 - GEOSEARCHSTORE

Query a sorted set representing a geospatial index to fetch members inside an area of a box or a circle, and store the result in another key.

This command is like GEOSEARCH, but stores the result in destination key.

This command comes in place of the now deprecated GEORADIUS and GEORADIUSBYMEMBER.

By default, it stores the results in the destination sorted set with their geospatial information.

When using the STOREDIST option, the command stores the items in a sorted set populated with their distance from the center of the circle or box, as a floating-point number, in the same unit specified for that shape.

Return

Integer reply: the number of elements in the resulting set.

Examples

GEOADD Sicily 13.361389 38.115556 "Palermo" 15.087269 37.502669 "Catania"
GEOADD Sicily 12.758489 38.788135 "edge1" 17.241510 38.788135 "edge2"
GEOSEARCHSTORE key1 Sicily FROMLONLAT 15 37 BYBOX 400 400 km ASC COUNT 3
GEOSEARCH key1 FROMLONLAT 15 37 BYBOX 400 400 km ASC WITHCOORD WITHDIST WITHHASH
GEOSEARCHSTORE key2 Sicily FROMLONLAT 15 37 BYBOX 400 400 km ASC COUNT 3 STOREDIST
ZRANGE key2 0 -1 WITHSCORES

191 - GET

Get the value of a key

Get the value of key. If the key does not exist the special value nil is returned. An error is returned if the value stored at key is not a string, because GET only handles string values.

Return

Bulk string reply: the value of key, or nil when key does not exist.

Examples

GET nonexisting
SET mykey "Hello"
GET mykey

192 - GETBIT

Returns the bit value at offset in the string value stored at key

Returns the bit value at offset in the string value stored at key.

When offset is beyond the string length, the string is assumed to be a contiguous space with 0 bits. When key does not exist it is assumed to be an empty string, so offset is always out of range and the value is also assumed to be a contiguous space with 0 bits.

Return

Integer reply: the bit value stored at offset.

Examples

SETBIT mykey 7 1
GETBIT mykey 0
GETBIT mykey 7
GETBIT mykey 100

193 - GETDEL

Get the value of a key and delete the key

Get the value of key and delete the key. This command is similar to GET, except for the fact that it also deletes the key on success (if and only if the key's value type is a string).

Return

Bulk string reply: the value of key, nil when key does not exist, or an error if the key's value type isn't a string.

Examples

SET mykey "Hello" GETDEL mykey GET mykey

194 - GETEX

Get the value of a key and optionally set its expiration

Get the value of key and optionally set its expiration. GETEX is similar to GET, but is a write command with additional options.

Options

The GETEX command supports a set of options that modify its behavior:

  • EX seconds -- Set the specified expire time, in seconds.
  • PX milliseconds -- Set the specified expire time, in milliseconds.
  • EXAT timestamp-seconds -- Set the specified Unix time at which the key will expire, in seconds.
  • PXAT timestamp-milliseconds -- Set the specified Unix time at which the key will expire, in milliseconds.
  • PERSIST -- Remove the time to live associated with the key.

Return

Bulk string reply: the value of key, or nil when key does not exist.

Examples

SET mykey "Hello"
GETEX mykey
TTL mykey
GETEX mykey EX 60
TTL mykey
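
The PERSIST option can then remove the expiration just set; a brief sketch continuing the session above, after which TTL reports -1:

GETEX mykey PERSIST
TTL mykey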

195 - GETRANGE

Get a substring of the string stored at a key

Returns the substring of the string value stored at key, determined by the offsets start and end (both are inclusive). Negative offsets can be used in order to provide an offset starting from the end of the string. So -1 means the last character, -2 the penultimate and so forth.

The function handles out of range requests by limiting the resulting range to the actual length of the string.

Return

Bulk string reply

Examples

SET mykey "This is a string"
GETRANGE mykey 0 3
GETRANGE mykey -3 -1
GETRANGE mykey 0 -1
GETRANGE mykey 10 100

196 - GETSET

Set the string value of a key and return its old value

Atomically sets key to value and returns the old value stored at key. Returns an error when key exists but does not hold a string value. Any previous time to live associated with the key is discarded on successful SET operation.

Design pattern

GETSET can be used together with INCR for counting with atomic reset. For example: a process may call INCR against the key mycounter every time some event occurs, but from time to time we need to get the value of the counter and reset it to zero atomically. This can be done using GETSET mycounter "0":

INCR mycounter
GETSET mycounter "0"
GET mycounter

Return

Bulk string reply: the old value stored at key, or nil when key did not exist.

Examples

SET mykey "Hello"
GETSET mykey "World"
GET mykey

197 - GRAPH.CONFIG GET

Retrieves a RedisGraph configuration

Retrieves or updates a RedisGraph configuration. Arguments: GET/SET, <config name> [value]. A value should only be specified in SET contexts, while * may be substituted for an explicit config name when all configurations should be returned. Only run-time configurations may be SET, though all configurations may be retrieved.

127.0.0.1:6379> GRAPH.CONFIG SET RESULTSET_SIZE 1000
OK
127.0.0.1:6379> GRAPH.CONFIG GET RESULTSET_SIZE
1) "RESULTSET_SIZE"
2) (integer) 1000

198 - GRAPH.CONFIG SET

Updates a RedisGraph configuration

Retrieves or updates a RedisGraph configuration. Arguments: GET/SET, <config name> [value]. A value should only be specified in SET contexts, while * may be substituted for an explicit config name when all configurations should be returned. Only run-time configurations may be SET, though all configurations may be retrieved.

127.0.0.1:6379> GRAPH.CONFIG SET RESULTSET_SIZE 1000
OK
127.0.0.1:6379> GRAPH.CONFIG GET RESULTSET_SIZE
1) "RESULTSET_SIZE"
2) (integer) 1000

199 - GRAPH.DELETE

Completely removes the graph and all of its entities

Completely removes the graph and all of its entities.

Arguments: Graph name

Returns: String indicating if operation succeeded or failed.

GRAPH.DELETE us_government

Note: To delete a node from the graph (not the entire graph), execute a MATCH query and pass the alias to the DELETE clause:

GRAPH.QUERY DEMO_GRAPH "MATCH (x:Y {propname: propvalue}) DELETE x"

WARNING: When you delete a node, all of the node's incoming/outgoing relationships are also removed.

200 - GRAPH.EXPLAIN

Returns a query execution plan without running the query

Constructs a query execution plan but does not run it. Inspect this execution plan to better understand how your query will get executed.

Arguments: Graph name, Query

Returns: String representation of a query execution plan

GRAPH.EXPLAIN us_government "MATCH (p:President)-[:BORN]->(h:State {name:'Hawaii'}) RETURN p"

201 - GRAPH.LIST

Lists all graph keys in the keyspace

Lists all graph keys in the keyspace.

127.0.0.1:6379> GRAPH.LIST
1) "G"
2) "resources"
3) "players"

202 - GRAPH.PROFILE

Executes a query and returns an execution plan augmented with metrics for each operation's execution

Executes a query and produces an execution plan augmented with metrics for each operation's execution.

Arguments: Graph name, Query

Returns: String representation of a query execution plan, with details on results produced by and time spent in each operation.

GRAPH.PROFILE is a parallel entrypoint to GRAPH.QUERY. It accepts and executes the same queries, but it will not emit results, instead returning the operation tree structure alongside the number of records produced and total runtime of each operation.

It is important to note that this blends elements of GRAPH.QUERY and GRAPH.EXPLAIN. It is not a dry run and will perform all graph modifications expected of the query, but will not output results produced by a RETURN clause or query statistics.

GRAPH.PROFILE imdb
"MATCH (actor_a:Actor)-[:ACT]->(:Movie)<-[:ACT]-(actor_b:Actor)
WHERE actor_a <> actor_b
CREATE (actor_a)-[:COSTARRED_WITH]->(actor_b)"
1) "Create | Records produced: 11208, Execution time: 168.208661 ms"
2) "    Filter | Records produced: 11208, Execution time: 1.250565 ms"
3) "        Conditional Traverse | Records produced: 12506, Execution time: 7.705860 ms"
4) "            Node By Label Scan | (actor_a:Actor) | Records produced: 1317, Execution time: 0.104346 ms"

203 - GRAPH.QUERY

Executes the given query against a specified graph

Executes the given query against a specified graph.

Arguments: Graph name, Query, Timeout [optional]

Returns: Result set

GRAPH.QUERY us_government "MATCH (p:president)-[:born]->(:state {name:'Hawaii'}) RETURN p"

Query-level timeouts can be set as described in the configuration section.

Query language

The syntax is based on Cypher, and only a subset of the language is currently supported.

  1. Clauses
  2. Functions

Query structure

  • MATCH
  • OPTIONAL MATCH
  • WHERE
  • RETURN
  • ORDER BY
  • SKIP
  • LIMIT
  • CREATE
  • MERGE
  • DELETE
  • SET
  • WITH
  • UNION

MATCH

Match describes the relationship between queried entities, using ASCII art to represent the pattern(s) to match against.

Nodes are represented by parentheses (), and Relationships are represented by brackets [].

Each graph entity node/relationship can contain an alias and a label/relationship type, but both can be left empty if necessary.

Entity structure: alias:label {filters}.

Alias, label/relationship type, and filters are all optional.

Example:

(a:Actor)-[:ACT]->(m:Movie {title:"straight outta compton"})

a is an alias for the source node, which we'll be able to refer to at different places within our query.

Actor is the label under which this node is marked.

ACT is the relationship type.

m is an alias for the destination node.

Movie is the label under which the destination node is marked.

{title:"straight outta compton"} requires the node's title attribute to equal "straight outta compton".

In this example, we're interested in actor entities which have the relation "act" with the entity representing the "straight outta compton" movie.

It is possible to describe broader relationships by composing a multi-hop query such as:

(me {name:'swilly'})-[:FRIENDS_WITH]->()-[:FRIENDS_WITH]->(foaf)

Here we're interested in finding out who my friends' friends are.

Nodes can have more than one relationship coming in or out of them, for instance:

(me {name:'swilly'})-[:VISITED]->(c:Country)<-[:VISITED]-(friend)<-[:FRIENDS_WITH]-(me)

Here we're interested in knowing which of my friends have visited at least one country I've been to.

Variable length relationships

Nodes that are a variable number of relationship→node hops away can be found using the following syntax:

-[:TYPE*minHops..maxHops]->

TYPE, minHops and maxHops are all optional and default to type agnostic, 1 and infinity, respectively.

When no bounds are given, the dots may be omitted. The dots may also be omitted when setting only one bound, which implies a fixed-length pattern.

Example:

GRAPH.QUERY DEMO_GRAPH
"MATCH (charlie:Actor { name: 'Charlie Sheen' })-[:PLAYED_WITH*1..3]->(colleague:Actor)
RETURN colleague"

Returns all actors related to 'Charlie Sheen' by 1 to 3 hops.
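
When a single bound is given without dots, the pattern has a fixed length. For instance, this sketch (reusing the demo graph above) matches only colleagues exactly two hops away:

GRAPH.QUERY DEMO_GRAPH
"MATCH (charlie:Actor { name: 'Charlie Sheen' })-[:PLAYED_WITH*2]->(colleague:Actor)
RETURN colleague"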

Bidirectional path traversal

If a relationship pattern does not specify a direction, it will match regardless of which node is the source and which is the destination:

-[:TYPE]-

Example:

GRAPH.QUERY DEMO_GRAPH
"MATCH (person_a:Person)-[:KNOWS]-(person_b:Person)
RETURN person_a, person_b"

Returns all pairs of people connected by a KNOWS relationship. Note that each pair will be returned twice, once with each node in the person_a field and once in the person_b field.

The syntactic sugar (person_a)<-[:KNOWS]->(person_b) will return the same results.

The bracketed edge description can be omitted if all relations should be considered: (person_a)--(person_b).

Named paths

Named path variables are created by assigning a path in a MATCH clause to a single alias with the syntax: MATCH named_path = (path)-[to]->(capture)

The named path includes all entities in the path, regardless of whether they have been explicitly aliased. Named paths can be accessed using designated built-in functions or returned directly if using a language-specific client.

Example:

GRAPH.QUERY DEMO_GRAPH
"MATCH p=(charlie:Actor { name: 'Charlie Sheen' })-[:PLAYED_WITH*1..3]->(:Actor)
RETURN nodes(p) as actors"

This query will produce all the paths matching the pattern contained in the named path p. All of these paths will share the same starting point, the actor node representing Charlie Sheen, but will otherwise vary in length and contents. Though the variable-length traversal and (:Actor) endpoint are not explicitly aliased, all nodes and edges traversed along the path will be included in p. In this case, we are only interested in the nodes of each path, which we'll collect using the built-in function nodes(). The returned value will contain, in order, Charlie Sheen, between 0 and 2 intermediate nodes, and the unaliased endpoint.

allShortestPaths()

allShortestPaths() is a MATCH mode in which only the shortest paths matching all criteria are captured. Both endpoints must be bound in an earlier WITH-demarcated scope to invoke allShortestPaths().

Example:

GRAPH.QUERY DEMO_GRAPH
"MATCH (charlie:Actor {name: 'Charlie Sheen'}), (kevin:Actor {name: 'Kevin Bacon'})
WITH charlie, kevin
MATCH p=allShortestPaths((charlie)-[:PLAYED_WITH*]->(kevin))
RETURN nodes(p) as actors"

This query will produce all paths of the minimum length connecting the actor node representing Charlie Sheen to the one representing Kevin Bacon. There are several 2-hop paths between the two actors, and all of these will be returned. The computation of paths then terminates, as we are not interested in any paths of length greater than 2.

OPTIONAL MATCH

The OPTIONAL MATCH clause is a MATCH variant that produces null values for elements that do not match successfully, rather than the all-or-nothing logic for patterns in MATCH clauses.

It can be considered to fill the same role as LEFT/RIGHT JOIN does in SQL, as MATCH entities must be resolved but nodes and edges introduced in OPTIONAL MATCH will be returned as nulls if they cannot be found.

OPTIONAL MATCH clauses accept the same patterns as standard MATCH clauses, and may similarly be modified by WHERE clauses.

Multiple MATCH and OPTIONAL MATCH clauses can be chained together, though a mandatory MATCH cannot follow an optional one.

GRAPH.QUERY DEMO_GRAPH
"MATCH (p:Person) OPTIONAL MATCH (p)-[w:WORKS_AT]->(c:Company)
WHERE w.start_date > 2016
RETURN p, w, c"

All Person nodes are returned, as well as any WORKS_AT relations and Company nodes that can be resolved and satisfy the start_date constraint. For each Person that does not resolve the optional pattern, the person will be returned as normal and the non-matching elements will be returned as null.

Cypher is lenient in its handling of null values, so actions like property accesses and function calls on null values will return null values rather than emit errors.

GRAPH.QUERY DEMO_GRAPH
"MATCH (p:Person) OPTIONAL MATCH (p)-[w:WORKS_AT]->(c:Company)
RETURN p, w.department, ID(c) as ID"

In this case, w.department and ID will be returned if the OPTIONAL MATCH was successful, and will be null otherwise.

Clauses like SET, CREATE, MERGE, and DELETE will ignore null inputs and perform the expected updates on real inputs. One exception to this is that attempting to create a relation with a null endpoint will cause an error:

GRAPH.QUERY DEMO_GRAPH
"MATCH (p:Person) OPTIONAL MATCH (p)-[w:WORKS_AT]->(c:Company)
CREATE (c)-[:NEW_RELATION]->(:NEW_NODE)"

If c is null for any record, this query will emit an error. In this case, no changes to the graph are committed, even if some values for c were resolved.

WHERE

This clause is not mandatory, but if you want to filter results, you can specify your predicates here.

Supported operations:

  • =
  • <>
  • <
  • <=
  • >
  • >=
  • CONTAINS
  • ENDS WITH
  • IN
  • STARTS WITH

Predicates can be combined using AND / OR / NOT.

Be sure to wrap predicates within parentheses to control precedence.

Examples:

WHERE (actor.name = "john doe" OR movie.rating > 8.8) AND movie.votes <= 250
WHERE actor.age >= director.age AND actor.age > 32

It is also possible to specify equality predicates within nodes using the curly braces as such:

(:President {name:"Jed Bartlett"})-[:WON]->(:State)

Here we've required that the president node's name will have the value "Jed Bartlett".

There's no difference between inline predicates and predicates specified within the WHERE clause.

It is also possible to filter on graph patterns. The following queries, which return all presidents and the states they won in, produce the same results:

MATCH (p:President), (s:State) WHERE (p)-[:WON]->(s) RETURN p, s

and

MATCH (p:President)-[:WON]->(s:State) RETURN p, s

Pattern predicates can be also negated and combined with the logical operators AND, OR, and NOT. The following query returns all the presidents that did not win in the states where they were governors:

MATCH (p:President), (s:State) WHERE NOT (p)-[:WON]->(s) AND (p)-[:governor]->(s) RETURN p, s

Nodes can also be filtered by label:

MATCH (n)-[:R]->() WHERE n:L1 OR n:L2 RETURN n 

When possible, it is preferable to specify the label in the node pattern of the MATCH clause.

RETURN

In its simple form, Return defines which properties the returned result-set will contain.

Its structure is a list of alias.property separated by commas.

For convenience, it's possible to specify the alias only when you're interested in every attribute an entity possesses, and don't want to specify each attribute individually. For example:

RETURN movie.title, actor

Use the DISTINCT keyword to remove duplications within the result-set:

RETURN DISTINCT friend_of_friend.name

In the above example, suppose we have two friends, Joe and Miesha, and both know Dominick.

DISTINCT will make sure Dominick will only appear once in the final result set.

Return can also be used to aggregate data, similar to group by in SQL.

Once an aggregation function is added to the return list, all other non-aggregated values are treated as group keys, for example:

RETURN movie.title, MAX(actor.age), MIN(actor.age)

Here we group the data by movie title, and for each movie we find the ages of its youngest and oldest actors.

Aggregations

Supported aggregation functions include:

  • avg
  • collect
  • count
  • max
  • min
  • percentileCont
  • percentileDisc
  • stDev
  • sum

ORDER BY

ORDER BY specifies that the output should be sorted, and how.

You can order by multiple properties by stating each variable in the ORDER BY clause.

Each property may specify its sort order with ASC/ASCENDING or DESC/DESCENDING. If no order is specified, it defaults to ascending.

The result will be sorted by the first variable listed.

For equal values, it will go to the next property in the ORDER BY clause, and so on.

ORDER BY <alias.property [ASC/DESC] list>

Below we sort our friends by height. For equal heights, weight is used to break ties.

ORDER BY friend.height, friend.weight DESC

SKIP

The optional skip clause allows a specified number of records to be omitted from the result set.

SKIP <number of records to skip>

This can be useful when processing results in batches. A query that would examine the second 100-element batch of nodes with the label Person, for example, would be:

GRAPH.QUERY DEMO_GRAPH "MATCH (p:Person) RETURN p ORDER BY p.name SKIP 100 LIMIT 100"

LIMIT

Although not mandatory, you can use the limit clause to limit the number of records returned by a query:

LIMIT <max records to return>

If not specified, there's no limit to the number of records returned by a query.

CREATE

CREATE is used to introduce new nodes and relationships.

The simplest example of CREATE would be a single node creation:

CREATE (n)

It's possible to create multiple entities by separating them with a comma.

CREATE (n),(m)
CREATE (:Person {name: 'Kurt', age: 27})

To add relations between nodes, in the following example we first find an existing source node. After it's found, we create a new relationship and destination node.

GRAPH.QUERY DEMO_GRAPH
"MATCH (a:Person)
WHERE a.name = 'Kurt'
CREATE (a)-[:MEMBER]->(:Band {name:'Nirvana'})"

Here the source node is a bound node, while the destination node is unbound.

As a result, a new node is created representing the band Nirvana and a new relation connects Kurt to the band.

Lastly we create a complete pattern.

All entities within the pattern which are not bound will be created.

GRAPH.QUERY DEMO_GRAPH
"CREATE (jim:Person{name:'Jim', age:29})-[:FRIENDS]->(pam:Person {name:'Pam', age:27})-[:WORKS]->(:Employer {name:'Dunder Mifflin'})"

This query will create three nodes and two relationships.

DELETE

DELETE is used to remove both nodes and relationships.

Note that deleting a node also deletes all of its incoming and outgoing relationships.

To delete a node and all of its relationships:

GRAPH.QUERY DEMO_GRAPH "MATCH (p:Person {name:'Jim'}) DELETE p"

To delete a relationship:

GRAPH.QUERY DEMO_GRAPH "MATCH (:Person {name:'Jim'})-[r:FRIENDS]->() DELETE r"

This query will delete all friend outgoing relationships from the node with the name 'Jim'.

SET

SET is used to create or update properties on nodes and relationships.

To set a property on a node, use SET.

GRAPH.QUERY DEMO_GRAPH "MATCH (n { name: 'Jim' }) SET n.name = 'Bob'"

To set multiple properties in one go, simply separate them with commas within a single SET clause.

GRAPH.QUERY DEMO_GRAPH
"MATCH (n { name: 'Jim', age:32 })
SET n.age = 33, n.name = 'Bob'"

The same can be accomplished by setting the graph entity variable to a map:

GRAPH.QUERY DEMO_GRAPH
"MATCH (n { name: 'Jim', age:32 })
SET n = {age: 33, name: 'Bob'}"

Using = in this way replaces all of the entity's previous properties, while += will only set the properties it explicitly mentions.
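
For example, a sketch using += to update one property while leaving the others untouched (the match pattern mirrors the example above):

GRAPH.QUERY DEMO_GRAPH
"MATCH (n { name: 'Jim', age:32 })
SET n += {age: 33}"

After this query, n has age 33 and still retains its name property.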

In the same way, the full property set of a graph entity can be assigned or merged:

GRAPH.QUERY DEMO_GRAPH
"MATCH (jim {name: 'Jim'}), (pam {name: 'Pam'})
SET jim = pam"

After executing this query, the jim node will have the same property set as the pam node.

To remove a node's property, simply set the property value to NULL.

GRAPH.QUERY DEMO_GRAPH "MATCH (n { name: 'Jim' }) SET n.name = NULL"

MERGE

The MERGE clause ensures that a path exists in the graph (either the path already exists, or it needs to be created).

MERGE either matches existing nodes and binds them, or it creates new data and binds that.

It’s like a combination of MATCH and CREATE that also allows you to specify what happens if the data was matched or created.

For example, you can specify that the graph must contain a node for a user with a certain name.

If there isn’t a node with the correct name, a new node will be created and its name property set.

Any aliases in the MERGE path that were introduced by earlier clauses can only be matched; MERGE will not create them.

When the MERGE path doesn't rely on earlier clauses, the whole path will always either be matched or created.

If all path elements are introduced by MERGE, a match failure will cause all elements to be created, even if part of the match succeeded.

The MERGE path can be followed by ON MATCH SET and ON CREATE SET directives to conditionally set properties depending on whether or not the match succeeded.

Merging nodes

To merge a single node with a label:

GRAPH.QUERY DEMO_GRAPH "MERGE (robert:Critic)"

To merge a single node with properties:

GRAPH.QUERY DEMO_GRAPH "MERGE (charlie { name: 'Charlie Sheen', age: 10 })"

To merge a single node, specifying both label and property:

GRAPH.QUERY DEMO_GRAPH "MERGE (michael:Person { name: 'Michael Douglas' })"

Merging paths

Because MERGE either matches or creates a full path, it is easy to accidentally create duplicate nodes.

For example, if we run the following query on our sample graph:

GRAPH.QUERY DEMO_GRAPH
"MERGE (charlie { name: 'Charlie Sheen' })-[r:ACTED_IN]->(wallStreet:Movie { name: 'Wall Street' })"

Even though a node with the name 'Charlie Sheen' already exists, the full pattern does not match, so 1 relation and 2 nodes - including a duplicate 'Charlie Sheen' node - will be created.

We should use multiple MERGE clauses to merge a relation and only create non-existent endpoints:

GRAPH.QUERY DEMO_GRAPH
"MERGE (charlie { name: 'Charlie Sheen' })
 MERGE (wallStreet:Movie { name: 'Wall Street' })
 MERGE (charlie)-[r:ACTED_IN]->(wallStreet)"

If we don't want to create anything if pattern elements don't exist, we can combine MATCH and MERGE clauses. The following query merges a relation only if both of its endpoints already exist:

GRAPH.QUERY DEMO_GRAPH
"MATCH (charlie { name: 'Charlie Sheen' })
 MATCH (wallStreet:Movie { name: 'Wall Street' })
 MERGE (charlie)-[r:ACTED_IN]->(wallStreet)"

On Match and On Create directives

Using ON MATCH and ON CREATE, MERGE can set properties differently depending on whether a pattern is matched or created.

In this query, we'll merge paths based on a list of properties and conditionally set a property when creating new entities:

GRAPH.QUERY DEMO_GRAPH
"UNWIND ['Charlie Sheen', 'Michael Douglas', 'Tamara Tunie'] AS actor_name
 MATCH (movie:Movie { name: 'Wall Street' })
 MERGE (person {name: actor_name})-[:ACTED_IN]->(movie)
 ON CREATE SET person.first_role = movie.name"
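
ON MATCH SET works the same way for entities that already existed. A minimal sketch, assuming we want to stamp a lastSeen property (the property name is illustrative) using the built-in timestamp() function:

GRAPH.QUERY DEMO_GRAPH
"MERGE (charlie:Person { name: 'Charlie Sheen' })
 ON MATCH SET charlie.lastSeen = timestamp()"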

WITH

The WITH clause allows parts of queries to be independently executed and have their results handled uniquely.

This allows for more flexible query composition as well as data manipulations that would otherwise not be possible in a single query.

If, for example, we wanted to find all children in our graph who are above the average age of all people:

GRAPH.QUERY DEMO_GRAPH
"MATCH (p:Person) WITH AVG(p.age) AS average_age MATCH (:Person)-[:PARENT_OF]->(child:Person) WHERE child.age > average_age RETURN child"

This also allows us to use modifiers like DISTINCT, SKIP, LIMIT, and ORDER that otherwise require RETURN clauses.

GRAPH.QUERY DEMO_GRAPH
"MATCH (u:User)  WITH u AS nonrecent ORDER BY u.lastVisit LIMIT 3 SET nonrecent.should_contact = true"

UNWIND

The UNWIND clause breaks down a given list into a sequence of records, each containing a single element of the list.

The order of the records preserves the original list order.

GRAPH.QUERY DEMO_GRAPH
"CREATE (p {array:[1,2,3]})"
GRAPH.QUERY DEMO_GRAPH
"MATCH (p) UNWIND p.array AS y RETURN y"

UNION

The UNION clause is used to combine the result of multiple queries.

UNION combines the results of two or more queries into a single result set that includes all the rows that belong to all queries in the union.

The number and the names of the columns must be identical in all queries combined by using UNION.

To keep all the result rows, use UNION ALL.

Using just UNION will combine and remove duplicates from the result set.

GRAPH.QUERY DEMO_GRAPH
"MATCH (n:Actor) RETURN n.name AS name
UNION ALL
MATCH (n:Movie) RETURN n.title AS name"

Functions

This section contains information on all supported functions from the Cypher query language.

Predicate functions

Function | Description
exists() | Returns true if the specified property exists in the node or relationship.
any() | Returns true if the inner WHERE predicate holds true for any element in the input array.
all() | Returns true if the inner WHERE predicate holds true for all elements in the input array.
none() | Returns true if the inner WHERE predicate holds false for all elements in the input array.
single() | Returns true if the inner WHERE predicate holds true for exactly one element in the input array.
CASE...WHEN | Evaluates the CASE expression and returns the value indicated by the matching WHEN statement.

Scalar functions

Function | Description
endNode() | Returns the destination node of a relationship.
id() | Returns the internal ID of a relationship or node (which is not immutable).
hasLabels() | Returns true if the input node contains all specified labels, otherwise false.
keys() | Returns the array of keys contained in the given map, node, or edge.
labels() | Returns a string representation of the label of a node.
startNode() | Returns the source node of a relationship.
timestamp() | Returns the number of milliseconds since epoch.
type() | Returns a string representation of the type of a relation.
list comprehensions | See documentation
pattern comprehensions | See documentation

Aggregating functions

Function | Description
avg() | Returns the average of a set of numeric values
collect() | Returns a list containing all elements evaluated from a given expression
count() | Returns the number of values or rows
max() | Returns the maximum value in a set of values
min() | Returns the minimum value in a set of values
sum() | Returns the sum of a set of numeric values
percentileDisc() | Returns the percentile of the given value over a group, with a percentile from 0.0 to 1.0
percentileCont() | Returns the percentile of the given value over a group, with a percentile from 0.0 to 1.0
stDev() | Returns the standard deviation for the given value over a group

List functions

Function | Description
head() | Returns the first member of a list
range() | Creates a new list of integers in the range [start, end]. If an interval was given, the interval between two consecutive list members will be this interval.
size() | Returns the size of a list
tail() | Returns a sublist of a list, containing all values except the first
reduce() | Returns a scalar produced by evaluating an expression against each list member

Mathematical functions

Function | Description
+ | Add two values
- | Subtract second value from first
* | Multiply two values
/ | Divide first value by the second
^ | Raise the first value to the power of the second
% | Perform modulo division of the first value by the second
abs() | Returns the absolute value of a number
ceil() | Returns the smallest floating point number that is greater than or equal to a number and equal to a mathematical integer
floor() | Returns the largest floating point number that is less than or equal to a number and equal to a mathematical integer
rand() | Returns a random floating point number in the range from 0 to 1; i.e. [0,1]
round() | Returns the value of a number rounded to the nearest integer
sign() | Returns the signum of a number: 0 if the number is 0, -1 for any negative number, and 1 for any positive number
sqrt() | Returns the square root of a number
pow() | Returns base raised to the power of exponent, base^exponent
toInteger() | Converts a floating point or string value to an integer value

String functions

Function | Description
left() | Returns a string containing the specified number of leftmost characters of the original string
lTrim() | Returns the original string with leading whitespace removed
replace() | Returns a string in which all occurrences of a specified substring are replaced with the specified replacement string
reverse() | Returns a string in which the order of all characters in the original string is reversed
right() | Returns a string containing the specified number of rightmost characters of the original string
rTrim() | Returns the original string with trailing whitespace removed
substring() | Returns a substring of the original string, beginning at a 0-based start index and extending for the given length
toLower() | Returns the original string in lowercase
toString() | Returns a string representation of a value
toJSON() | Returns a JSON representation of a value
toUpper() | Returns the original string in uppercase
trim() | Returns the original string with leading and trailing whitespace removed
size() | Returns the length of a string

Point functions

Function | Description
point() | Returns a Point type representing the given lat/lon coordinates
distance() | Returns the distance in meters between the two given points

Node functions

Function | Description
indegree() | Returns the number of a node's incoming edges.
outdegree() | Returns the number of a node's outgoing edges.

Path functions

Function | Description
nodes() | Returns a list of the nodes in a given path.
relationships() | Returns a list of the edges in a given path.
length() | Returns the length (number of edges) of the path.
shortestPath() | Returns the shortest path that resolves the given pattern.

List comprehensions

List comprehensions are a syntactical construct that accepts an array and produces another based on the provided map and filter directives.

They are a common construct in functional languages and modern high-level languages. In Cypher, they use the syntax:

[element IN array WHERE condition | output elem]
  • array can be any expression that produces an array: a literal, a property reference, or a function call.
  • WHERE condition is an optional argument to project only the elements that pass certain criteria. If omitted, all elements in the array will be represented in the output.
  • | output elem is an optional argument that allows elements to be transformed in the output array. If omitted, the output elements will be the same as their corresponding inputs.

The following query collects all paths of any length, then for each produces an array containing the name property of every node with a rank property greater than 10:

MATCH p=()-[*]->() RETURN [node IN nodes(p) WHERE node.rank > 10 | node.name]

Existential comprehension functions

The functions any(), all(), single() and none() use a simplified form of the list comprehension syntax and return a boolean value.

any(element IN array WHERE condition)

They can operate on any form of input array, but are particularly useful for path filtering. The following query collects all paths of any length in which all traversed edges have a weight less than 3:

MATCH p=()-[*]->() WHERE all(edge IN relationships(p) WHERE edge.weight < 3) RETURN p

Pattern comprehensions

Pattern comprehensions are a method of producing a list composed of values found by performing the traversal of a given graph pattern.

The following query returns the name of a Person node and a list of all their friends' ages:

MATCH (n:Person)
RETURN
n.name,
[(n)-[:FRIEND_OF]->(f:Person) | f.age]

Optionally, a WHERE clause may be embedded in the pattern comprehension to filter results. In this query, all friends' ages will be gathered for friendships that started before 2010:

MATCH (n:Person)
RETURN
n.name,
[(n)-[e:FRIEND_OF]->(f:Person) WHERE e.since < 2010 | f.age]

CASE WHEN

The case statement comes in two variants. Both accept an input argument and evaluate it against one or more expressions. The first WHEN argument that specifies a value matching the result will be accepted, and the value specified by the corresponding THEN keyword will be returned.

Optionally, an ELSE argument may also be specified to indicate what to do if none of the WHEN arguments match successfully.

In its simple form, there is only one expression to evaluate and it immediately follows the CASE keyword:

MATCH (n)
RETURN
CASE n.title
WHEN 'Engineer' THEN 100
WHEN 'Scientist' THEN 80
ELSE n.privileges
END

In its generic form, no expression follows the CASE keyword. Instead, each WHEN statement specifies its own expression:

MATCH (n)
RETURN
CASE
WHEN n.age < 18 THEN '0-18'
WHEN n.age < 30 THEN '18-30'
ELSE '30+'
END

Reduce

The reduce() function accepts a starting value and updates it by evaluating an expression against each element of the list:

RETURN reduce(sum = 0, n IN [1,2,3] | sum + n)

sum will successively have the values 0, 1, 3, and 6, with 6 being the output of the function call.

Point

The point() function expects one map argument of the form:

RETURN point({latitude: lat_value, longitude: lon_val})

The key names latitude and longitude are case-sensitive.

The point constructed by this function can be saved as a node/relationship property or used within the query, such as in a distance function call.

shortestPath

The shortestPath() function is invoked with the form:

MATCH (a {v: 1}), (b {v: 4}) RETURN shortestPath((a)-[:L*]->(b))

The sole shortestPath argument is a traversal pattern. This pattern's endpoints must be resolved prior to the function call, and no property filters may be introduced in the pattern. The relationship pattern may specify any number of relationship types (including zero) to be considered. If a minimum number of hops is specified, it may only be 0 or 1, while any number may be used for the maximum number of hops. If no shortest path can be found, NULL is returned.

JSON format

toJSON() returns the input value in JSON formatting. For primitive data types and arrays, this conversion is conventional. Maps and map projections (toJSON(node { .prop })) are converted to JSON objects, as are nodes and relationships.
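
For instance, a sketch that serializes a matched node directly in a query (the graph, label, and property names are illustrative):

GRAPH.QUERY DEMO_GRAPH "MATCH (p:Person { name: 'Jim' }) RETURN toJSON(p)"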

The format for a node object in JSON is:

{
  "type": "node",
  "id": id(int),
  "labels": [label(string) X N],
  "properties": {
    property_key(string): property_value X N
  }
}

The format for a relationship object in JSON is:

{
  "type": "relationship",
  "id": id(int),
  "label": label(string),
  "properties": {
    property_key(string): property_value X N
  },
  "start": src_node(node),
  "end": dest_node(node)
}

Procedures

Procedures are invoked using the syntax:

GRAPH.QUERY social "CALL db.labels()"

Or the variant:

GRAPH.QUERY social "CALL db.labels() YIELD label"

YIELD modifiers are only required when explicitly specified; by default, the values in the 'Yields' column are emitted automatically.

Procedure | Arguments | Yields | Description
db.labels | none | label | Yields all node labels in the graph.
db.relationshipTypes | none | relationshipType | Yields all relationship types in the graph.
db.propertyKeys | none | propertyKey | Yields all property keys in the graph.
db.indexes | none | type, label, properties, language, stopwords, entityType, info | Yields all indexes in the graph, denoting whether they are exact-match or full-text, which label and properties each covers, and whether they index node or relationship attributes.
db.idx.fulltext.createNodeIndex | label, property [, property ...] | none | Builds a full-text searchable index on a label and the one or more specified properties.
db.idx.fulltext.drop | label | none | Deletes the full-text index associated with the given label.
db.idx.fulltext.queryNodes | label, string | node, score | Retrieves all nodes that contain the specified string in the full-text indexes on the given label.
algo.pageRank | label, relationship-type | node, score | Runs the PageRank algorithm over nodes of the given label, considering only edges of the given relationship type.
algo.BFS | source-node, max-level, relationship-type | nodes, edges | Performs BFS to find all nodes connected to the source. A max level of 0 indicates unlimited, and a non-NULL relationship-type restricts traversal to that relationship type.
dbms.procedures() | none | name, mode | Lists all procedures in the DBMS, yielding each procedure's name and mode (read/write).

Algorithms

BFS

The breadth-first-search algorithm accepts three arguments:

source-node (node) - The root of the search.

max-level (integer) - If greater than zero, this argument indicates how many levels should be traversed by BFS. 1 would retrieve only the source's neighbors, 2 would retrieve all nodes within 2 hops, and so on.

relationship-type (string) - If this argument is NULL, all relationship types will be traversed. Otherwise, it specifies a single relationship type to perform BFS over.

It can yield two outputs:

nodes - An array of all nodes connected to the source without violating the input constraints.

edges - An array of all edges traversed during the search. This does not necessarily contain all edges connecting nodes in the tree, as cycles or multiple edges connecting the same source and destination do not have a bearing on the reachability this algorithm tests for. These can be used to construct the directed acyclic graph that represents the BFS tree. Emitting edges incurs a small performance penalty.
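
As a sketch of one possible invocation, assuming a graph with Person nodes and KNOWS relationships (all names here are illustrative), the source node is resolved by a MATCH and then passed to the procedure:

GRAPH.QUERY DEMO_GRAPH
"MATCH (source:Person { name: 'Jim' })
CALL algo.BFS(source, 2, 'KNOWS') YIELD nodes
RETURN nodes"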

Indexing

RedisGraph supports single-property indexes for node labels.

String, numeric, and geospatial data types can be indexed.

The creation syntax is:

GRAPH.QUERY DEMO_GRAPH "CREATE INDEX ON :Person(age)"

On the master branch, a newer syntax is also supported. This will be the standard in future versions:

GRAPH.QUERY DEMO_GRAPH "CREATE INDEX FOR (p:Person) ON (p.age)"

After an index is explicitly created, it will automatically be used by queries that reference that label and any indexed property in a filter.

GRAPH.EXPLAIN DEMO_GRAPH "MATCH (p:Person) WHERE p.age > 80 RETURN p"
1) "Results"
2) "    Project"
3) "        Index Scan | (p:Person)"

This can significantly improve the runtime of queries with very specific filters. An index on :Employer(name), for example, will dramatically benefit the query:

GRAPH.QUERY DEMO_GRAPH
"MATCH (:Employer {name: 'Dunder Mifflin'})-[:EMPLOYS]->(p:Person) RETURN p"

An example of utilizing a geospatial index to find Employer nodes within 5 kilometers of Scranton is:

GRAPH.QUERY DEMO_GRAPH
"WITH point({latitude:41.4045886, longitude:-75.6969532}) AS scranton MATCH (e:Employer) WHERE distance(e.location, scranton) < 5000 RETURN e"

Geospatial indexes can currently only be leveraged with < and <= filters; matching nodes outside of the given radius is performed using conventional matching.

Indexing relationship properties

The creation syntax is:

GRAPH.QUERY DEMO_GRAPH "CREATE INDEX FOR ()-[f:FOLLOW]-() ON (f.created_at)"

Then the execution plan for using the index:

GRAPH.EXPLAIN DEMO_GRAPH "MATCH (p:Person {id: 0})-[f:FOLLOW]->(fp) WHERE 0 < f.created_at AND f.created_at < 1000 RETURN fp"
1) "Results"
2) "    Project"
3) "        Edge By Index Scan | [f:FOLLOW]"
4) "            Node By Index Scan | (p:Person)"

This can significantly improve the runtime of queries that traverse super nodes or that start their traversal from relationships.

Individual indexes can be deleted using the matching syntax:

GRAPH.QUERY DEMO_GRAPH "DROP INDEX ON :Person(age)"

Full-text indexes

RedisGraph leverages the indexing capabilities of RediSearch to provide full-text indices through procedure calls. To construct a full-text index on the title property of all nodes with label Movie, use the syntax:

GRAPH.QUERY DEMO_GRAPH "CALL db.idx.fulltext.createNodeIndex('Movie', 'title')"

(More properties can be added to this index by adding their names to the above set of arguments, or using this syntax again with the additional names.)

Now this index can be invoked to match any whole words contained within:

GRAPH.QUERY DEMO_GRAPH
"CALL db.idx.fulltext.queryNodes('Movie', 'Book') YIELD node RETURN node.title"
1) 1) "node.title"
2) 1) 1) "The Jungle Book"
   2) 1) "The Book of Life"
3) 1) "Query internal execution time: 0.927409 milliseconds"

This CALL clause can be interleaved with other Cypher clauses to perform more elaborate manipulations:

GRAPH.QUERY DEMO_GRAPH
"CALL db.idx.fulltext.queryNodes('Movie', 'Book') YIELD node AS m
WHERE m.genre = 'Adventure'
RETURN m ORDER BY m.rating"
1) 1) "m"
2) 1) 1) 1) 1) "id"
            2) (integer) 1168
         2) 1) "labels"
            2) 1) "Movie"
         3) 1) "properties"
            2) 1) 1) "genre"
                  2) "Adventure"
               2) 1) "rating"
                  2) "7.6"
               3) 1) "votes"
                  2) (integer) 151342
               4) 1) "year"
                  2) (integer) 2016
               5) 1) "title"
                  2) "The Jungle Book"
3) 1) "Query internal execution time: 0.226914 milliseconds"

In addition to yielding matching nodes, full-text index scans will return the score of each node. This is the TF-IDF score of the node, which is informed by how many times the search terms appear in the node and how closely grouped they are. This can be observed in the example:

GRAPH.QUERY DEMO_GRAPH
"CALL db.idx.fulltext.queryNodes('Node', 'hello world') YIELD node, score RETURN score, node.val"
1) 1) "score"
   2) "node.val"
2) 1) 1) "2"
      2) "hello world"
   2) 1) "1"
      2) "hello to a different world"
3) 1) "Cached execution: 1"
   2) "Query internal execution time: 0.335401 milliseconds"

RediSearch provides two additional index configuration options:

  1. Language - Defines which language to use for stemming text, that is, adding the base form of a word to the index. This allows a query for "going" to also return results for "go" and "gone", for example.
  2. Stopwords - These are words that are usually so common that they do not add much information to search, but take up a lot of space and CPU time in the index.

To construct a full-text index on the title property of all nodes with label Movie, using the German language and custom stopwords, use the syntax:

GRAPH.QUERY DEMO_GRAPH "CALL db.idx.fulltext.createNodeIndex({ label: 'Movie', language: 'German', stopwords: ['a', 'ab'] }, 'title')"

RediSearch provides three additional field configuration options:

  1. Weight - The importance of the text in the field
  2. Nostem - Skip stemming when indexing text
  3. Phonetic - Enable phonetic search on the text

To construct a full-text index with phonetic search on the title property of all nodes with label Movie, use the syntax:

GRAPH.QUERY DEMO_GRAPH "CALL db.idx.fulltext.createNodeIndex('Movie', {field: 'title', phonetic: 'dm:en'})"

204 - GRAPH.RO_QUERY

Executes a given read only query against a specified graph

Executes a given read only query against a specified graph.

Arguments: Graph name, Query, Timeout [optional]

Returns: Result set for a read only query or an error if a write query was given.

GRAPH.RO_QUERY us_government "MATCH (p:president)-[:born]->(:state {name:'Hawaii'}) RETURN p"

Query-level timeouts can be set as described in the configuration section.

205 - GRAPH.SLOWLOG

Returns a list containing up to 10 of the slowest queries issued against the given graph

Returns a list containing up to 10 of the slowest queries issued against the given graph ID.

Each item in the list has the following structure:

  1. A unix timestamp at which the log entry was processed.
  2. The issued command.
  3. The issued query.
  4. The amount of time needed for its execution, in milliseconds.
GRAPH.SLOWLOG graph_id
 1) 1) "1581932396"
    2) "GRAPH.QUERY"
    3) "MATCH (a:Person)-[:FRIEND]->(e) RETURN e.name"
    4) "0.831"
 2) 1) "1581932396"
    2) "GRAPH.QUERY"
    3) "MATCH (me:Person)-[:FRIEND]->(:Person)-[:FRIEND]->(fof:Person) RETURN fof.name"
    4) "0.288"

206 - HDEL

Delete one or more hash fields

Removes the specified fields from the hash stored at key. Specified fields that do not exist within this hash are ignored. If key does not exist, it is treated as an empty hash and this command returns 0.

Return

Integer reply: the number of fields that were removed from the hash, not including specified but non existing fields.

Examples

HSET myhash field1 "foo"
HDEL myhash field1
HDEL myhash field2

207 - HELLO

Handshake with Redis

Switch to a different protocol, optionally authenticating and setting the connection's name, or provide a contextual client report.

Redis version 6 and above supports two protocols: the old protocol, RESP2, and a new one introduced with Redis 6, RESP3. RESP3 has certain advantages: when the connection is in this mode, Redis is able to reply with more semantic replies. For instance, HGETALL will return a map type, so a client library implementation no longer needs to know in advance that it must translate the array into a hash before returning it to the caller. For a full coverage of RESP3, please check this repository.

In Redis 6 connections start in RESP2 mode, so clients implementing RESP2 do not need to be updated or changed. There are no short term plans to drop support for RESP2, although future versions may default to RESP3.

HELLO always replies with a list of current server and connection properties, such as: versions, modules loaded, client ID, replication role and so forth. When called without any arguments in Redis 6.2, using the default RESP2 protocol, the reply looks like this:

> HELLO
 1) "server"
 2) "redis"
 3) "version"
 4) "255.255.255"
 5) "proto"
 6) (integer) 2
 7) "id"
 8) (integer) 5
 9) "mode"
10) "standalone"
11) "role"
12) "master"
13) "modules"
14) (empty array)

Clients that want to handshake using the RESP3 mode need to call the HELLO command and specify the value "3" as the protover argument, like so:

> HELLO 3
1# "server" => "redis"
2# "version" => "6.0.0"
3# "proto" => (integer) 3
4# "id" => (integer) 10
5# "mode" => "standalone"
6# "role" => "master"
7# "modules" => (empty array)

Because HELLO replies with useful information, and given that protover is optional or can be set to "2", client library authors may consider using this command instead of the canonical PING when setting up the connection.

When called with the optional protover argument, this command switches the protocol to the specified version and also accepts the following options:

  • AUTH <username> <password>: directly authenticate the connection in addition to switching to the specified protocol version. This makes calling AUTH before HELLO unnecessary when setting up a new connection. Note that the username can be set to "default" to authenticate against a server that does not use ACLs, but rather the simpler requirepass mechanism of Redis prior to version 6.
  • SETNAME <clientname>: this is the equivalent of calling CLIENT SETNAME.
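
For example, a new connection might switch to RESP3, authenticate, and set its name in a single round trip (the credentials and client name below are placeholders):

> HELLO 3 AUTH default mypassword SETNAME myclient

The reply is the same property map shown above, emitted in RESP3 format.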

Return

Array reply: a list of server properties. The reply is a map instead of an array when RESP3 is selected. The command returns an error if the protover requested does not exist.

208 - HEXISTS

Determine if a hash field exists

Returns whether field is an existing field in the hash stored at key.

Return

Integer reply, specifically:

  • 1 if the hash contains field.
  • 0 if the hash does not contain field, or key does not exist.

Examples

HSET myhash field1 "foo"
HEXISTS myhash field1
HEXISTS myhash field2

209 - HGET

Get the value of a hash field

Returns the value associated with field in the hash stored at key.

Return

Bulk string reply: the value associated with field, or nil when field is not present in the hash or key does not exist.

Examples

HSET myhash field1 "foo"
HGET myhash field1
HGET myhash field2

210 - HGETALL

Get all the fields and values in a hash

Returns all fields and values of the hash stored at key. In the returned value, every field name is followed by its value, so the length of the reply is twice the size of the hash.

Return

Array reply: list of fields and their values stored in the hash, or an empty list when key does not exist.

Examples

HSET myhash field1 "Hello"
HSET myhash field2 "World"
HGETALL myhash

211 - HINCRBY

Increment the integer value of a hash field by the given number

Increments the number stored at field in the hash stored at key by increment. If key does not exist, a new key holding a hash is created. If field does not exist the value is set to 0 before the operation is performed.

The range of values supported by HINCRBY is limited to 64 bit signed integers.

Return

Integer reply: the value at field after the increment operation.

Examples

Since the increment argument is signed, both increment and decrement operations can be performed:

HSET myhash field 5
HINCRBY myhash field 1
HINCRBY myhash field -1
HINCRBY myhash field -10

212 - HINCRBYFLOAT

Increment the float value of a hash field by the given amount

Increment the specified field of a hash stored at key, and representing a floating point number, by the specified increment. If the increment value is negative, the result is to have the hash field value decremented instead of incremented. If the field does not exist, it is set to 0 before performing the operation. An error is returned if one of the following conditions occur:

  • The field contains a value of the wrong type (not a string).
  • The current field content or the specified increment are not parsable as a double precision floating point number.

The exact behavior of this command is identical to the one of the INCRBYFLOAT command, please refer to the documentation of INCRBYFLOAT for further information.

Return

Bulk string reply: the value of field after the increment.

Examples

HSET mykey field 10.50
HINCRBYFLOAT mykey field 0.1
HINCRBYFLOAT mykey field -5
HSET mykey field 5.0e3
HINCRBYFLOAT mykey field 2.0e2

Implementation details

The command is always propagated in the replication link and the Append Only File as a HSET operation, so that differences in the underlying floating point math implementation will not be sources of inconsistency.

213 - HKEYS

Get all the fields in a hash

Returns all field names in the hash stored at key.

Return

Array reply: list of fields in the hash, or an empty list when key does not exist.

Examples

HSET myhash field1 "Hello"
HSET myhash field2 "World"
HKEYS myhash

214 - HLEN

Get the number of fields in a hash

Returns the number of fields contained in the hash stored at key.

Return

Integer reply: number of fields in the hash, or 0 when key does not exist.

Examples

HSET myhash field1 "Hello"
HSET myhash field2 "World"
HLEN myhash

215 - HMGET

Get the values of all the given hash fields

Returns the values associated with the specified fields in the hash stored at key.

For every field that does not exist in the hash, a nil value is returned. Because non-existing keys are treated as empty hashes, running HMGET against a non-existing key will return a list of nil values.

Return

Array reply: list of values associated with the given fields, in the same order as they are requested.

Examples

HSET myhash field1 "Hello"
HSET myhash field2 "World"
HMGET myhash field1 field2 nofield

216 - HMSET

Set multiple hash fields to multiple values

Sets the specified fields to their respective values in the hash stored at key. This command overwrites any specified fields already existing in the hash. If key does not exist, a new key holding a hash is created.

Return

Simple string reply

Examples

HMSET myhash field1 "Hello" field2 "World"
HGET myhash field1
HGET myhash field2

217 - HRANDFIELD

Get one or multiple random fields from a hash

When called with just the key argument, return a random field from the hash value stored at key.

If the provided count argument is positive, return an array of distinct fields. The array's length is either count or the hash's number of fields (HLEN), whichever is lower.

If called with a negative count, the behavior changes and the command is allowed to return the same field multiple times. In this case, the number of returned fields is the absolute value of the specified count.

The optional WITHVALUES modifier changes the reply so it includes the respective values of the randomly selected hash fields.

Return

Bulk string reply: without the additional count argument, the command returns a Bulk Reply with the randomly selected field, or nil when key does not exist.

Array reply: when the additional count argument is passed, the command returns an array of fields, or an empty array when key does not exist. If the WITHVALUES modifier is used, the reply is a list of fields and their values from the hash.

Examples

HMSET coin heads obverse tails reverse edge null
HRANDFIELD coin
HRANDFIELD coin
HRANDFIELD coin -5 WITHVALUES

Specification of the behavior when count is passed

When the count argument is a positive value this command behaves as follows:

  • No repeated fields are returned.
  • If count is bigger than the number of fields in the hash, the command will only return the whole hash without additional fields.
  • The order of fields in the reply is not truly random, so it is up to the client to shuffle them if needed.

When the count is a negative value, the behavior changes as follows:

  • Repeating fields are possible.
  • Exactly count fields, or an empty array if the hash is empty (non-existing key), are always returned.
  • The order of fields in the reply is truly random.

218 - HSCAN

Incrementally iterate hash fields and associated values

See SCAN for HSCAN documentation.
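
As a quick illustrative sketch (cursor semantics are covered under SCAN; the key and pattern here are hypothetical):

HSCAN myhash 0 MATCH field* COUNT 10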

219 - HSET

Set the string value of a hash field

Sets field in the hash stored at key to value. If key does not exist, a new key holding a hash is created. If field already exists in the hash, it is overwritten.

Return

Integer reply: The number of fields that were added.

Examples

HSET myhash field1 "Hello"
HGET myhash field1

220 - HSETNX

Set the value of a hash field, only if the field does not exist

Sets field in the hash stored at key to value, only if field does not yet exist. If key does not exist, a new key holding a hash is created. If field already exists, this operation has no effect.

Return

Integer reply, specifically:

  • 1 if field is a new field in the hash and value was set.
  • 0 if field already exists in the hash and no operation was performed.

Examples

HSETNX myhash field "Hello"
HSETNX myhash field "World"
HGET myhash field

221 - HSTRLEN

Get the length of the value of a hash field

Returns the string length of the value associated with field in the hash stored at key. If the key or the field do not exist, 0 is returned.

Return

Integer reply: the string length of the value associated with field, or zero when field is not present in the hash or key does not exist at all.

Examples

HMSET myhash f1 HelloWorld f2 99 f3 -256
HSTRLEN myhash f1
HSTRLEN myhash f2
HSTRLEN myhash f3

222 - HVALS

Get all the values in a hash

Returns all values in the hash stored at key.

Return

Array reply: list of values in the hash, or an empty list when key does not exist.

Examples

HSET myhash field1 "Hello"
HSET myhash field2 "World"
HVALS myhash

223 - INCR

Increment the integer value of a key by one

Increments the number stored at key by one. If the key does not exist, it is set to 0 before performing the operation. An error is returned if the key contains a value of the wrong type or contains a string that can not be represented as integer. This operation is limited to 64 bit signed integers.

Note: this is a string operation because Redis does not have a dedicated integer type. The string stored at the key is interpreted as a base-10 64 bit signed integer to execute the operation.

Redis stores integers in their integer representation, so for string values that actually hold an integer, there is no overhead for storing the string representation of the integer.

Return

Integer reply: the value of key after the increment

Examples

SET mykey "10"
INCR mykey
GET mykey

Pattern: Counter

The counter pattern is the most obvious thing you can do with Redis atomic increment operations. The idea is simply to send an INCR command to Redis every time an operation occurs. For instance in a web application we may want to know how many page views this user did every day of the year.

To do so the web application may simply increment a key every time the user performs a page view, creating the key name concatenating the User ID and a string representing the current date.
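
For instance, with a hypothetical user ID of 123 and the current date concatenated into the key name:

INCR views:user:123:2023-01-01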

This simple pattern can be extended in many ways:

  • It is possible to use INCR and EXPIRE together at every page view to have a counter counting only the latest N page views separated by less than the specified amount of seconds.
  • A client may use GETSET in order to atomically get the current counter value and reset it to zero.
  • Using other atomic increment/decrement commands like DECR or INCRBY it is possible to handle values that may get bigger or smaller depending on the operations performed by the user. Imagine for instance the score of different users in an online game.

Pattern: Rate limiter

The rate limiter pattern is a special counter that is used to limit the rate at which an operation can be performed. The classical materialization of this pattern involves limiting the number of requests that can be performed against a public API.

We provide two implementations of this pattern using INCR, where we assume that the problem to solve is limiting the number of API calls to a maximum of ten requests per second per IP address.

Pattern: Rate limiter 1

The simpler and more direct implementation of this pattern is the following:

FUNCTION LIMIT_API_CALL(ip)
ts = CURRENT_UNIX_TIME()
keyname = ip+":"+ts
MULTI
    INCR(keyname)
    EXPIRE(keyname,10)
EXEC
current = RESPONSE_OF_INCR_WITHIN_MULTI
IF current > 10 THEN
    ERROR "too many requests per second"
ELSE
    PERFORM_API_CALL()
END

Basically we have a counter for every IP, for every different second. But these counters are always incremented with a 10-second expire set, so that they'll be removed by Redis automatically when the current second is a different one.

Note the use of MULTI and EXEC in order to make sure that we'll both increment the counter and set the expire at every API call.

Pattern: Rate limiter 2

An alternative implementation uses a single counter, but is a bit more complex to get right without race conditions. We'll examine different variants.

FUNCTION LIMIT_API_CALL(ip):
current = GET(ip)
IF current != NULL AND current > 10 THEN
    ERROR "too many requests per second"
ELSE
    value = INCR(ip)
    IF value == 1 THEN
        EXPIRE(ip,1)
    END
    PERFORM_API_CALL()
END

The counter is created in a way that it only will survive one second, starting from the first request performed in the current second. If there are more than 10 requests in the same second the counter will reach a value greater than 10, otherwise it will expire and start again from 0.

In the above code there is a race condition. If for some reason the client performs the INCR command but does not perform the EXPIRE, the key will be leaked until we see the same IP address again.

This can be fixed easily by turning the INCR with optional EXPIRE into a Lua script that is sent using the EVAL command (only available since Redis version 2.6).

local current
current = redis.call("incr",KEYS[1])
if current == 1 then
    redis.call("expire",KEYS[1],1)
end
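
Such a script might be invoked with EVAL along these lines (a sketch; the trailing return current and the key name are illustrative additions, not part of the script above):

> EVAL "local current = redis.call('incr',KEYS[1]) if current == 1 then redis.call('expire',KEYS[1],1) end return current" 1 127.0.0.1
(integer) 1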

There is a different way to fix this issue without using scripting, by using Redis lists instead of counters. The implementation is more complex and uses more advanced features but has the advantage of remembering the IP addresses of the clients currently performing an API call, that may be useful or not depending on the application.

FUNCTION LIMIT_API_CALL(ip)
current = LLEN(ip)
IF current > 10 THEN
    ERROR "too many requests per second"
ELSE
    IF EXISTS(ip) == FALSE
        MULTI
            RPUSH(ip,ip)
            EXPIRE(ip,1)
        EXEC
    ELSE
        RPUSHX(ip,ip)
    END
    PERFORM_API_CALL()
END

The RPUSHX command only pushes the element if the key already exists.

Note that we have a race here, but it is not a problem: EXISTS may return false but the key may be created by another client before we create it inside the MULTI / EXEC block. However this race will just miss an API call under rare conditions, so the rate limiting will still work correctly.

224 - INCRBY

Increment the integer value of a key by the given amount

Increments the number stored at key by increment. If the key does not exist, it is set to 0 before performing the operation. An error is returned if the key contains a value of the wrong type or contains a string that can not be represented as integer. This operation is limited to 64 bit signed integers.

See INCR for extra information on increment/decrement operations.

Return

Integer reply: the value of key after the increment

Examples

SET mykey "10"
INCRBY mykey 5

225 - INCRBYFLOAT

Increment the float value of a key by the given amount

Increment the string representing a floating point number stored at key by the specified increment. By using a negative increment value, the result is that the value stored at the key is decremented (by the obvious properties of addition). If the key does not exist, it is set to 0 before performing the operation. An error is returned if one of the following conditions occur:

  • The key contains a value of the wrong type (not a string).
  • The current key content or the specified increment are not parsable as a double precision floating point number.

If the command is successful the new incremented value is stored as the new value of the key (replacing the old one), and returned to the caller as a string.

Both the value already contained in the string key and the increment argument can be optionally provided in exponential notation, however the value computed after the increment is stored consistently in the same format, that is, an integer number followed (if needed) by a dot, and a variable number of digits representing the decimal part of the number. Trailing zeroes are always removed.

The precision of the output is fixed at 17 digits after the decimal point regardless of the actual internal precision of the computation.

Return

Bulk string reply: the value of key after the increment.

Examples

SET mykey 10.50
INCRBYFLOAT mykey 0.1
INCRBYFLOAT mykey -5
SET mykey 5.0e3
INCRBYFLOAT mykey 2.0e2

Implementation details

The command is always propagated in the replication link and the Append Only File as a SET operation, so that differences in the underlying floating point math implementation will not be sources of inconsistency.

226 - INFO

Get information and statistics about the server

The INFO command returns information and statistics about the server in a format that is simple to parse by computers and easy to read by humans.

The optional parameter can be used to select a specific section of information:

  • server: General information about the Redis server
  • clients: Client connections section
  • memory: Memory consumption related information
  • persistence: RDB and AOF related information
  • stats: General statistics
  • replication: Master/replica replication information
  • cpu: CPU consumption statistics
  • commandstats: Redis command statistics
  • latencystats: Redis command latency percentile distribution statistics
  • cluster: Redis Cluster section
  • modules: Modules section
  • keyspace: Database related statistics
  • errorstats: Redis error statistics

It can also take the following values:

  • all: Return all sections (excluding module generated ones)
  • default: Return only the default set of sections
  • everything: Includes all and modules

When no parameter is provided, the default option is assumed.

Return

Bulk string reply: as a collection of text lines.

Lines can contain a section name (starting with a # character) or a property. All the properties are in the form of field:value terminated by \r\n.

INFO
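
The reply is a series of sections and properties, for instance (an illustrative excerpt; actual fields and values vary by server and version):

# Server
redis_version:6.2.6
redis_mode:standalone

# Clients
connected_clients:1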

Notes

Please note that depending on the version of Redis, some of the fields have been added or removed. A robust client application should therefore parse the result of this command by skipping unknown properties, and gracefully handle missing fields.

Here is the description of fields for Redis >= 2.4.

Here is the meaning of all fields in the server section:

  • redis_version: Version of the Redis server
  • redis_git_sha1: Git SHA1
  • redis_git_dirty: Git dirty flag
  • redis_build_id: The build id
  • redis_mode: The server's mode ("standalone", "sentinel" or "cluster")
  • os: Operating system hosting the Redis server
  • arch_bits: Architecture (32 or 64 bits)
  • multiplexing_api: Event loop mechanism used by Redis
  • atomicvar_api: Atomicvar API used by Redis
  • gcc_version: Version of the GCC compiler used to compile the Redis server
  • process_id: PID of the server process
  • process_supervised: Supervised system ("upstart", "systemd", "unknown" or "no")
  • run_id: Random value identifying the Redis server (to be used by Sentinel and Cluster)
  • tcp_port: TCP/IP listen port
  • server_time_usec: Epoch-based system time with microsecond precision
  • uptime_in_seconds: Number of seconds since Redis server start
  • uptime_in_days: Same value expressed in days
  • hz: The server's current frequency setting
  • configured_hz: The server's configured frequency setting
  • lru_clock: Clock incrementing every minute, for LRU management
  • executable: The path to the server's executable
  • config_file: The path to the config file
  • io_threads_active: Flag indicating if I/O threads are active
  • shutdown_in_milliseconds: The maximum time remaining for replicas to catch up the replication before completing the shutdown sequence. This field is only present during shutdown.

Here is the meaning of all fields in the clients section:

  • connected_clients: Number of client connections (excluding connections from replicas)
  • cluster_connections: An approximation of the number of sockets used by the cluster's bus
  • maxclients: The value of the maxclients configuration directive. This is the upper limit for the sum of connected_clients, connected_slaves and cluster_connections.
  • client_recent_max_input_buffer: Biggest input buffer among current client connections
  • client_recent_max_output_buffer: Biggest output buffer among current client connections
  • blocked_clients: Number of clients pending on a blocking call (BLPOP, BRPOP, BRPOPLPUSH, BLMOVE, BZPOPMIN, BZPOPMAX)
  • tracking_clients: Number of clients being tracked (CLIENT TRACKING)
  • clients_in_timeout_table: Number of clients in the clients timeout table

Here is the meaning of all fields in the memory section:

  • used_memory: Total number of bytes allocated by Redis using its allocator (either standard libc, jemalloc, or an alternative allocator such as tcmalloc)
  • used_memory_human: Human readable representation of previous value
  • used_memory_rss: Number of bytes that Redis allocated as seen by the operating system (a.k.a resident set size). This is the number reported by tools such as top(1) and ps(1)
  • used_memory_rss_human: Human readable representation of previous value
  • used_memory_peak: Peak memory consumed by Redis (in bytes)
  • used_memory_peak_human: Human readable representation of previous value
  • used_memory_peak_perc: The percentage of used_memory_peak out of used_memory
  • used_memory_overhead: The sum in bytes of all overheads that the server allocated for managing its internal data structures
  • used_memory_startup: Initial amount of memory consumed by Redis at startup in bytes
  • used_memory_dataset: The size in bytes of the dataset (used_memory_overhead subtracted from used_memory)
  • used_memory_dataset_perc: The percentage of used_memory_dataset out of the net memory usage (used_memory minus used_memory_startup)
  • total_system_memory: The total amount of memory that the Redis host has
  • total_system_memory_human: Human readable representation of previous value
  • used_memory_lua: Number of bytes used by the Lua engine
  • used_memory_lua_human: Human readable representation of previous value
  • used_memory_scripts: Number of bytes used by cached Lua scripts
  • used_memory_scripts_human: Human readable representation of previous value
  • maxmemory: The value of the maxmemory configuration directive
  • maxmemory_human: Human readable representation of previous value
  • maxmemory_policy: The value of the maxmemory-policy configuration directive
  • mem_fragmentation_ratio: Ratio between used_memory_rss and used_memory. Note that this doesn't only include fragmentation, but also other process overheads (see the allocator_* metrics), and also overheads like code, shared libraries, stack, etc.
  • mem_fragmentation_bytes: Delta between used_memory_rss and used_memory. Note that when the total fragmentation bytes is low (few megabytes), a high ratio (e.g. 1.5 and above) is not an indication of an issue.
  • allocator_frag_ratio: Ratio between allocator_active and allocator_allocated. This is the true (external) fragmentation metric (not mem_fragmentation_ratio).
  • allocator_frag_bytes: Delta between allocator_active and allocator_allocated. See note about mem_fragmentation_bytes.
  • allocator_rss_ratio: Ratio between allocator_resident and allocator_active. This usually indicates pages that the allocator can and probably will soon release back to the OS.
  • allocator_rss_bytes: Delta between allocator_resident and allocator_active
  • rss_overhead_ratio: Ratio between used_memory_rss (the process RSS) and allocator_resident. This includes RSS overheads that are not allocator or heap related.
  • rss_overhead_bytes: Delta between used_memory_rss (the process RSS) and allocator_resident
  • allocator_allocated: Total bytes allocated from the allocator, including internal fragmentation. Normally the same as used_memory.
  • allocator_active: Total bytes in the allocator active pages, this includes external-fragmentation.
  • allocator_resident: Total bytes resident (RSS) in the allocator, this includes pages that can be released to the OS (by MEMORY PURGE, or just waiting).
  • mem_not_counted_for_evict: Used memory that's not counted for key eviction. This is basically transient replica and AOF buffers.
  • mem_clients_slaves: Memory used by replica clients - Starting with Redis 7.0, replica buffers share memory with the replication backlog, so this field can show 0 when replicas don't trigger an increase of memory usage.
  • mem_clients_normal: Memory used by normal clients
  • mem_cluster_links: Memory used by links to peers on the cluster bus when cluster mode is enabled.
  • mem_aof_buffer: Transient memory used for AOF and AOF rewrite buffers
  • mem_replication_backlog: Memory used by replication backlog
  • mem_total_replication_buffers: Total memory consumed for replication buffers - Added in Redis 7.0.
  • mem_allocator: Memory allocator, chosen at compile time.
  • active_defrag_running: When activedefrag is enabled, this indicates whether defragmentation is currently active, and the CPU percentage it intends to utilize.
  • lazyfree_pending_objects: The number of objects waiting to be freed (as a result of calling UNLINK, or FLUSHDB and FLUSHALL with the ASYNC option)
  • lazyfreed_objects: The number of objects that have been lazy freed.

Ideally, the used_memory_rss value should be only slightly higher than used_memory. When rss >> used, a large difference may mean there is (external) memory fragmentation, which can be evaluated by checking allocator_frag_ratio, allocator_frag_bytes. When used >> rss, it means part of Redis memory has been swapped off by the operating system: expect some significant latencies.

Because Redis does not have control over how its allocations are mapped to memory pages, high used_memory_rss is often the result of a spike in memory usage.

When Redis frees memory, the memory is given back to the allocator, and the allocator may or may not give the memory back to the system. There may be a discrepancy between the used_memory value and memory consumption as reported by the operating system. It may be due to the fact memory has been used and released by Redis, but not given back to the system. The used_memory_peak value is generally useful to check this point.

Additional introspective information about the server's memory can be obtained by referring to the MEMORY STATS command and the MEMORY DOCTOR.

Here is the meaning of all fields in the persistence section:

  • loading: Flag indicating if the load of a dump file is on-going
  • async_loading: Currently loading replication data-set asynchronously while serving old data. This means repl-diskless-load is enabled and set to swapdb. Added in Redis 7.0.
  • current_cow_peak: The peak size in bytes of copy-on-write memory while a child fork is running
  • current_cow_size: The size in bytes of copy-on-write memory while a child fork is running
  • current_cow_size_age: The age, in seconds, of the current_cow_size value.
  • current_fork_perc: The percentage of progress of the current fork process. For AOF and RDB forks it is the percentage of current_save_keys_processed out of current_save_keys_total.
  • current_save_keys_processed: Number of keys processed by the current save operation
  • current_save_keys_total: Number of keys at the beginning of the current save operation
  • rdb_changes_since_last_save: Number of changes since the last dump
  • rdb_bgsave_in_progress: Flag indicating a RDB save is on-going
  • rdb_last_save_time: Epoch-based timestamp of last successful RDB save
  • rdb_last_bgsave_status: Status of the last RDB save operation
  • rdb_last_bgsave_time_sec: Duration of the last RDB save operation in seconds
  • rdb_current_bgsave_time_sec: Duration of the on-going RDB save operation if any
  • rdb_last_cow_size: The size in bytes of copy-on-write memory during the last RDB save operation
  • rdb_last_load_keys_expired: Number of volatile keys deleted during the last RDB loading. Added in Redis 7.0.
  • rdb_last_load_keys_loaded: Number of keys loaded during the last RDB loading. Added in Redis 7.0.
  • aof_enabled: Flag indicating AOF logging is activated
  • aof_rewrite_in_progress: Flag indicating an AOF rewrite operation is on-going
  • aof_rewrite_scheduled: Flag indicating an AOF rewrite operation will be scheduled once the on-going RDB save is complete.
  • aof_last_rewrite_time_sec: Duration of the last AOF rewrite operation in seconds
  • aof_current_rewrite_time_sec: Duration of the on-going AOF rewrite operation if any
  • aof_last_bgrewrite_status: Status of the last AOF rewrite operation
  • aof_last_write_status: Status of the last write operation to the AOF
  • aof_last_cow_size: The size in bytes of copy-on-write memory during the last AOF rewrite operation
  • module_fork_in_progress: Flag indicating a module fork is on-going
  • module_fork_last_cow_size: The size in bytes of copy-on-write memory during the last module fork operation
  • aof_rewrites: Number of AOF rewrites performed since startup
  • rdb_saves: Number of RDB snapshots performed since startup

rdb_changes_since_last_save refers to the number of operations that produced some kind of changes in the dataset since the last time either SAVE or BGSAVE was called.

If AOF is activated, these additional fields will be added:

  • aof_current_size: AOF current file size
  • aof_base_size: AOF file size on latest startup or rewrite
  • aof_pending_rewrite: Flag indicating an AOF rewrite operation will be scheduled once the on-going RDB save is complete.
  • aof_buffer_length: Size of the AOF buffer
  • aof_rewrite_buffer_length: Size of the AOF rewrite buffer. Note this field was removed in Redis 7.0
  • aof_pending_bio_fsync: Number of fsync pending jobs in background I/O queue
  • aof_delayed_fsync: Delayed fsync counter

If a load operation is on-going, these additional fields will be added:

  • loading_start_time: Epoch-based timestamp of the start of the load operation
  • loading_total_bytes: Total file size
  • loading_rdb_used_mem: The memory usage of the server that had generated the RDB file at the time of the file's creation
  • loading_loaded_bytes: Number of bytes already loaded
  • loading_loaded_perc: Same value expressed as a percentage
  • loading_eta_seconds: ETA in seconds for the load to be complete

Here is the meaning of all fields in the stats section:

  • total_connections_received: Total number of connections accepted by the server
  • total_commands_processed: Total number of commands processed by the server
  • instantaneous_ops_per_sec: Number of commands processed per second
  • total_net_input_bytes: The total number of bytes read from the network
  • total_net_output_bytes: The total number of bytes written to the network
  • instantaneous_input_kbps: The network's read rate per second in KB/sec
  • instantaneous_output_kbps: The network's write rate per second in KB/sec
  • rejected_connections: Number of connections rejected because of maxclients limit
  • sync_full: The number of full resyncs with replicas
  • sync_partial_ok: The number of accepted partial resync requests
  • sync_partial_err: The number of denied partial resync requests
  • expired_keys: Total number of key expiration events
  • expired_stale_perc: The percentage of keys probably expired
  • expired_time_cap_reached_count: The count of times that active expiry cycles have stopped early
  • expire_cycle_cpu_milliseconds: The cumulative amount of time spent on active expiry cycles
  • evicted_keys: Number of evicted keys due to maxmemory limit
  • evicted_clients: Number of evicted clients due to maxmemory-clients limit. Added in Redis 7.0.
  • total_eviction_exceeded_time: Total time used_memory was greater than maxmemory since server startup, in milliseconds
  • current_eviction_exceeded_time: The time passed since used_memory last rose above maxmemory, in milliseconds
  • keyspace_hits: Number of successful lookups of keys in the main dictionary
  • keyspace_misses: Number of failed lookups of keys in the main dictionary
  • pubsub_channels: Global number of pub/sub channels with client subscriptions
  • pubsub_patterns: Global number of pub/sub patterns with client subscriptions
  • latest_fork_usec: Duration of the latest fork operation in microseconds
  • total_forks: Total number of fork operations since the server start
  • migrate_cached_sockets: The number of sockets open for MIGRATE purposes
  • slave_expires_tracked_keys: The number of keys tracked for expiry purposes (applicable only to writable replicas)
  • active_defrag_hits: Number of value reallocations performed by the active defragmentation process
  • active_defrag_misses: Number of aborted value reallocations started by the active defragmentation process
  • active_defrag_key_hits: Number of keys that were actively defragmented
  • active_defrag_key_misses: Number of keys that were skipped by the active defragmentation process
  • total_active_defrag_time: Total time memory fragmentation was over the limit, in milliseconds
  • current_active_defrag_time: The time passed since memory fragmentation last was over the limit, in milliseconds
  • tracking_total_keys: Number of keys being tracked by the server
  • tracking_total_items: Number of items, that is the sum of the number of clients for each key, that are being tracked
  • tracking_total_prefixes: Number of tracked prefixes in server's prefix table (only applicable for broadcast mode)
  • unexpected_error_replies: Number of unexpected error replies, that are types of errors from an AOF load or replication
  • total_error_replies: Total number of issued error replies, that is the sum of rejected commands (errors prior command execution) and failed commands (errors within the command execution)
  • dump_payload_sanitizations: Total number of dump payload deep integrity validations (see sanitize-dump-payload config).
  • total_reads_processed: Total number of read events processed
  • total_writes_processed: Total number of write events processed
  • io_threaded_reads_processed: Number of read events processed by the main and I/O threads
  • io_threaded_writes_processed: Number of write events processed by the main and I/O threads

Here is the meaning of all fields in the replication section:

  • role: Value is "master" if the instance is a replica of no one, or "slave" if the instance is a replica of some master instance. Note that a replica can be the master of another replica (chained replication).
  • master_failover_state: The state of an ongoing failover, if any.
  • master_replid: The replication ID of the Redis server.
  • master_replid2: The secondary replication ID, used for PSYNC after a failover.
  • master_repl_offset: The server's current replication offset
  • second_repl_offset: The offset up to which replication IDs are accepted
  • repl_backlog_active: Flag indicating replication backlog is active
  • repl_backlog_size: Total size in bytes of the replication backlog buffer
  • repl_backlog_first_byte_offset: The master offset of the replication backlog buffer
  • repl_backlog_histlen: Size in bytes of the data in the replication backlog buffer

If the instance is a replica, these additional fields are provided:

  • master_host: Host or IP address of the master
  • master_port: Master listening TCP port
  • master_link_status: Status of the link (up/down)
  • master_last_io_seconds_ago: Number of seconds since the last interaction with master
  • master_sync_in_progress: Indicates that the master is syncing to the replica
  • slave_read_repl_offset: The read replication offset of the replica instance.
  • slave_repl_offset: The replication offset of the replica instance
  • slave_priority: The priority of the instance as a candidate for failover
  • slave_read_only: Flag indicating if the replica is read-only
  • replica_announced: Flag indicating if the replica is announced by Sentinel.

If a SYNC operation is on-going, these additional fields are provided:

  • master_sync_total_bytes: Total number of bytes that need to be transferred. This may be 0 when the size is unknown (for example, when the repl-diskless-sync configuration directive is used)
  • master_sync_read_bytes: Number of bytes already transferred
  • master_sync_left_bytes: Number of bytes left before syncing is complete (may be negative when master_sync_total_bytes is 0)
  • master_sync_perc: The percentage of master_sync_read_bytes out of master_sync_total_bytes, or an approximation that uses loading_rdb_used_mem when master_sync_total_bytes is 0
  • master_sync_last_io_seconds_ago: Number of seconds since last transfer I/O during a SYNC operation

If the link between master and replica is down, an additional field is provided:

  • master_link_down_since_seconds: Number of seconds since the link went down

The following field is always provided:

  • connected_slaves: Number of connected replicas

If the server is configured with the min-slaves-to-write (or, starting with Redis 5, the min-replicas-to-write) directive, an additional field is provided:

  • min_slaves_good_slaves: Number of replicas currently considered good

For each replica, the following line is added:

  • slaveXXX: id, IP address, port, state, offset, lag

Here is the meaning of all fields in the cpu section:

  • used_cpu_sys: System CPU consumed by the Redis server, which is the sum of system CPU consumed by all threads of the server process (main thread and background threads)
  • used_cpu_user: User CPU consumed by the Redis server, which is the sum of user CPU consumed by all threads of the server process (main thread and background threads)
  • used_cpu_sys_children: System CPU consumed by the background processes
  • used_cpu_user_children: User CPU consumed by the background processes
  • used_cpu_sys_main_thread: System CPU consumed by the Redis server main thread
  • used_cpu_user_main_thread: User CPU consumed by the Redis server main thread

The commandstats section provides statistics based on the command type, including the number of calls that reached command execution (not rejected), the total CPU time consumed by these commands, the average CPU consumed per command execution, the number of rejected calls (errors prior command execution), and the number of failed calls (errors within the command execution).

For each command type, the following line is added:

  • cmdstat_XXX: calls=XXX,usec=XXX,usec_per_call=XXX,rejected_calls=XXX,failed_calls=XXX
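
For example, such a line might look as follows (illustrative values):

cmdstat_get:calls=21,usec=175,usec_per_call=8.33,rejected_calls=0,failed_calls=0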

The latencystats section provides latency percentile distribution statistics based on the command type.

By default, the exported latency percentiles are the p50, p99, and p999. If you need to change the exported percentiles, use CONFIG SET latency-tracking-info-percentiles "50.0 99.0 99.9".

This section requires the extended latency monitoring feature to be enabled (by default it's enabled). If you need to enable it, use CONFIG SET latency-tracking yes.

For each command type, the following line is added:

  • latency_percentiles_usec_XXX: p<percentile 1>=<percentile 1 value>,p<percentile 2>=<percentile 2 value>,...

The errorstats section enables keeping track of the different errors that occurred within Redis, based upon the reply error prefix (the first word after the "-", up to the first space; for example: ERR).

For each error type, the following line is added:

  • errorstat_XXX: count=XXX
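
For example (illustrative count):

errorstat_ERR:count=3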

The cluster section currently only contains a unique field:

  • cluster_enabled: Indicates that Redis cluster is enabled

The modules section contains additional information about loaded modules if the modules provide it. The field part of properties lines in this section is always prefixed with the module's name.

The keyspace section provides statistics on the main dictionary of each database. The statistics are the number of keys, and the number of keys with an expiration.

For each database, the following line is added:

  • dbXXX: keys=XXX,expires=XXX
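
For example (illustrative values):

db0:keys=7,expires=1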

A note about the word slave used in this man page: starting with Redis 5, if not for backward compatibility, the Redis project no longer uses the word slave. Unfortunately in this command the word slave is part of the protocol, so we'll be able to remove such occurrences only when this API is naturally deprecated.

Modules generated sections: Starting with Redis 6, modules can inject their info into the INFO command. These are excluded by default even when the all argument is provided (it will include a list of loaded modules but not their generated info fields). To get these you must use either the modules argument or everything.

227 - JSON.ARRAPPEND

Append one or more json values into the array at path after the last element in it.

Append the json values into the array at path after the last element in it.

Return

Array reply of integer replies for each path: the array's new size, or nil if the matching JSON value is not an array.

Examples

redis> JSON.SET doc $ '{"a":[1], "nested": {"a": [1,2]}, "nested2": {"a": 42}}'
OK
redis> JSON.ARRAPPEND doc $..a 3 4
1) (integer) 3
2) (integer) 4
3) (nil)
redis> JSON.GET doc $
"[{\"a\":[1,3,4],\"nested\":{\"a\":[1,2,3,4]},\"nested2\":{\"a\":42}}]"

228 - JSON.ARRINDEX

Returns the index of the first occurrence of a JSON scalar value in the array at path

Searches for the first occurrence of a scalar JSON value in an array.

The optional inclusive start (default 0) and exclusive stop (default 0, meaning that the last element is included) specify a slice of the array to search. Negative values are interpreted as starting from the end.

Note: out-of-range indexes round to the array's start and end. An inverse index range (such as the range from 1 to 0) will return unfound.

Return

Array reply of integer replies for each path: the first position of the scalar value in the array, -1 if it is unfound, or nil if the matching JSON value is not an array.

Examples

redis> JSON.SET doc $ '{"a":[1,2,3,2], "nested": {"a": [3,4]}}'
OK
redis> JSON.ARRINDEX doc $..a 2
1) (integer) 1
2) (integer) -1
redis> JSON.SET doc $ '{"a":[1,2,3,2], "nested": {"a": false}}'
OK
redis> JSON.ARRINDEX doc $..a 2
1) (integer) 1
2) (nil)

229 - JSON.ARRINSERT

Inserts the JSON scalar(s) value at the specified index in the array at path

Inserts the json values into the array at path before the index (shifts to the right).

The index must be in the array's range. Inserting at index 0 prepends to the array. Negative index values start from the end of the array.

Return

Array reply of integer replies for each path: the array's new size, or nil if the matching JSON value is not an array.

Examples

redis> JSON.SET doc $ '{"a":[3], "nested": {"a": [3,4]}}'
OK
redis> JSON.ARRINSERT doc $..a 0 1 2
1) (integer) 3
2) (integer) 4
redis> JSON.GET doc $
"[{\"a\":[1,2,3],\"nested\":{\"a\":[1,2,3,4]}}]"
redis> JSON.SET doc $ '{"a":[1,2,3,2], "nested": {"a": false}}'
OK
redis> JSON.ARRINSERT doc $..a 0 1 2
1) (integer) 6
2) (nil)

230 - JSON.ARRLEN

Returns the length of the array at path

Reports the length of the JSON Array at path in key.

path defaults to root if not provided. Returns null if the key or path do not exist.

Return

Array reply of integer replies for each path: the array's length, or nil if the matching JSON value is not an array.

Examples

redis> JSON.SET doc $ '{"a":[3], "nested": {"a": [3,4]}}'
OK
redis> JSON.ARRLEN doc $..a
1) (integer) 1
2) (integer) 2
redis> JSON.SET doc $ '{"a":[1,2,3,2], "nested": {"a": false}}'
OK
redis> JSON.ARRLEN doc $..a
1) (integer) 4
2) (nil)

231 - JSON.ARRPOP

Removes and returns the element at the specified index in the array at path

Removes and returns an element from the index in the array.

path defaults to root if not provided. index is the position in the array to start popping from (defaults to -1, meaning the last element). Out-of-range indexes round to their respective array ends. Popping an empty array returns null.

Return

Array reply of bulk string replies for each path: the popped element serialized as JSON, or nil if the matching JSON value is not an array.

Examples

redis> JSON.SET doc $ '{"a":[3], "nested": {"a": [3,4]}}'
OK
redis> JSON.ARRPOP doc $..a
1) "3"
2) "4"
redis> JSON.GET doc $
"[{\"a\":[],\"nested\":{\"a\":[3]}}]"
redis> JSON.SET doc $ '{"a":["foo", "bar"], "nested": {"a": false}, "nested2": {"a":[]}}'
OK
redis> JSON.ARRPOP doc $..a
1) "\"bar\""
2) (nil)
3) (nil)

232 - JSON.ARRTRIM

Trims the array at path to contain only the specified inclusive range of indices from start to stop

Trims an array so that it contains only the specified inclusive range of elements.

This command is extremely forgiving and using it with out-of-range indexes will not produce an error. There are a few differences between how RedisJSON v2.0 and legacy versions handle out-of-range indexes.

Behavior as of RedisJSON v2.0:

  • If start is larger than the array's size or start > stop, returns 0 and an empty array.
  • If start is < 0, then start from the end of the array.
  • If stop is larger than the end of the array, it will be treated like the last element.

Return

Array reply of integer replies for each path: the array's new size, or nil if the matching JSON value is not an array.

Examples

redis> JSON.SET doc $ '{"a":[], "nested": {"a": [1,4]}}'
OK
redis> JSON.ARRTRIM doc $..a 1 1
1) (integer) 0
2) (integer) 1
redis> JSON.GET doc $
"[{\"a\":[],\"nested\":{\"a\":[4]}}]"
redis> JSON.SET doc $ '{"a":[1,2,3,2], "nested": {"a": false}}'
OK
redis> JSON.ARRTRIM doc $..a 1 1
1) (integer) 1
2) (nil)
redis> JSON.GET doc $
"[{\"a\":[2],\"nested\":{\"a\":false}}]"

233 - JSON.CLEAR

Clears all values from an array or an object and sets numeric values to 0

Clears container values (Arrays/Objects), and sets numeric values to 0.

Already cleared values are ignored: empty containers, and zero numbers.

path defaults to root if not provided. Non-existing paths are ignored.

Return

Integer reply: specifically the number of values cleared.

Examples

redis> JSON.SET doc $ '{"obj":{"a":1, "b":2}, "arr":[1,2,3], "str": "foo", "bool": true, "int": 42, "float": 3.14}'
OK
redis> JSON.CLEAR doc $.*
(integer) 4
redis> JSON.GET doc $
"[{\"obj\":{},\"arr\":[],\"str\":\"foo\",\"bool\":true,\"int\":0,\"float\":0}]"

234 - JSON.DEBUG

Debugging container command

This is a container command for debugging related tasks.

235 - JSON.DEBUG HELP

Shows helpful information

Returns helpful information about the JSON.DEBUG command.

Return

Array reply with helpful messages

236 - JSON.DEBUG MEMORY

Reports the size in bytes of a key

Report a value's memory usage in bytes. path defaults to root if not provided.

Return

Integer reply: the value's size in bytes.
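
Examples

A minimal sketch (the reported byte count is illustrative; it varies by version and allocator):

redis> JSON.SET doc $ '{"a":1}'
OK
redis> JSON.DEBUG MEMORY doc
(integer) 89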

237 - JSON.DEL

Deletes a value

Deletes a value.

path defaults to root if not provided. Ignores nonexistent keys and paths. Deleting an object's root is equivalent to deleting the key from Redis.

Return

Integer reply - the number of paths deleted (0 or more).

Examples

redis> JSON.SET doc $ '{"a": 1, "nested": {"a": 2, "b": 3}}'
OK
redis> JSON.DEL doc $..a
(integer) 2

238 - JSON.FORGET

Deletes a value

See JSON.DEL.

239 - JSON.GET

Gets the value at one or more paths in JSON serialized form

Returns the value at path in JSON serialized form.

This command accepts multiple path arguments. If no path is given, it defaults to the value's root.

The following subcommands change the reply's format (all are empty string by default):

  • INDENT sets the indentation string for nested levels
  • NEWLINE sets the string that's printed at the end of each line
  • SPACE sets the string that's put between a key and a value

Produce pretty-formatted JSON with redis-cli by following this example:

~/$ redis-cli --raw
127.0.0.1:6379> JSON.GET myjsonkey INDENT "\t" NEWLINE "\n" SPACE " " path.to.value[1]

Return

Bulk string reply - each string is the JSON serialization of each JSON value that matches a path.

When using a JSONPath, the root of the matching values is always an array. In contrast, the legacy path returns a single value.

If there are multiple paths that include both legacy path and JSONPath, the returned value conforms to the JSONPath version (an array of values).

Examples

redis> JSON.SET doc $ '{"a":2, "b": 3, "nested": {"a": 4, "b": null}}'
OK

With a single JSONPath (JSON array bulk string):

redis> JSON.GET doc $..b
"[3,null]"

Using multiple paths with at least one JSONPath (map with array of JSON values per path):

redis> JSON.GET doc ..a $..b
"{\"$..b\":[3,null],\"..a\":[2,4]}"

240 - JSON.MGET

Returns the values at a path from one or more keys

Returns the values at path from multiple key arguments. Returns null for nonexistent keys and nonexistent paths.

Return

Array reply of bulk string replies - the JSON serialization of the value at each key's path.

Examples

Given the following documents:

redis> JSON.SET doc1 $ '{"a":1, "b": 2, "nested": {"a": 3}, "c": null}'
OK
redis> JSON.SET doc2 $ '{"a":4, "b": 5, "nested": {"a": 6}, "c": null}'
OK
redis> JSON.MGET doc1 doc2 $..a
1) "[1,3]"
2) "[4,6]"

241 - JSON.NUMINCRBY

Increments the numeric value at path by a value

Increments the number value stored at path by number.

Return

Bulk string reply: a JSON array of the stringified new values for each path, or a nil element if the matching JSON value is not a number.

Examples

redis> JSON.SET doc . '{"a":"b","b":[{"a":2}, {"a":5}, {"a":"c"}]}'
OK
redis> JSON.NUMINCRBY doc $.a 2
"[null]"
redis> JSON.NUMINCRBY doc $..a 2
"[null,4,7,null]"

242 - JSON.NUMMULTBY

Multiplies the numeric value at path by a value

Multiplies the number value stored at path by number.

Return

Bulk string reply: a JSON array of the stringified new values for each path, or a nil element if the matching JSON value is not a number.

Examples

redis> JSON.SET doc . '{"a":"b","b":[{"a":2}, {"a":5}, {"a":"c"}]}'
OK
redis> JSON.NUMMULTBY doc $.a 2
"[null]"
redis> JSON.NUMMULTBY doc $..a 2
"[null,4,10,null]"

243 - JSON.OBJKEYS

Returns the JSON keys of the object at path

Returns the keys in the object that's referenced by path.

path defaults to root if not provided. Returns null if the object is empty or either key or path do not exist.

Return

Array reply of array replies for each path: an array of the object's keys, or nil if the matching JSON value is not an object.

Examples

redis> JSON.SET doc $ '{"a":[3], "nested": {"a": {"b":2, "c": 1}}}'
OK
redis> JSON.OBJKEYS doc $..a
1) (nil)
2) 1) "b"
   2) "c"

244 - JSON.OBJLEN

Returns the number of keys of the object at path

Reports the number of keys in the JSON Object at path in key.

path defaults to root if not provided. Returns null if the key or path do not exist.

Return

Array reply of integer replies for each path: the number of the object's keys, or nil if the matching JSON value is not an object.

Examples

redis> JSON.SET doc $ '{"a":[3], "nested": {"a": {"b":2, "c": 1}}}'
OK
redis> JSON.OBJLEN doc $..a
1) (nil)
2) (integer) 2

245 - JSON.RESP

Returns the JSON value at path in Redis Serialization Protocol (RESP)

Returns the JSON in key in Redis Serialization Protocol (RESP) form.

path defaults to root if not provided. This command uses the following mapping from JSON to RESP:

  • JSON Null maps to the Bulk string reply
  • JSON false and true values map to Simple string reply
  • JSON Numbers map to Integer reply or Bulk string reply, depending on the type
  • JSON Strings map to Bulk string reply
  • JSON Arrays are represented as an Array reply in which the first element is the simple string [, followed by the array's elements
  • JSON Objects are represented as an Array reply in which the first element is the simple string {, followed by the object's key-value pairs

Return

Array reply - the JSON's RESP form as detailed.

246 - JSON.SET

Sets or updates the JSON value at a path

Sets the JSON value at path in key.

For new Redis keys the path must be the root. For existing keys, when the entire path exists, the value that it contains is replaced with the json value. For existing keys, when the path exists except for the last element, a new child is added with the json value.

Adds a key (with its respective value) to a JSON Object (in a RedisJSON data type key) only if it is the last child in the path, or it is the parent of a new child being added in the path. The optional subcommands modify this behavior for both new RedisJSON data type keys as well as the JSON Object keys in them:

  • NX - only set the key if it does not already exist
  • XX - only set the key if it already exists

Return

Simple string reply: OK if executed correctly, or nil if the specified NX or XX conditions were not met.

Examples

Replacing an existing value

redis> JSON.SET doc $ '{"a":2}'
OK
redis> JSON.SET doc $.a '3'
OK
redis> JSON.GET doc $
"[{\"a\":3}]"

Adding a new value

redis> JSON.SET doc $ '{"a":2}'
OK
redis> JSON.SET doc $.b '8'
OK
redis> JSON.GET doc $
"[{\"a\":2,\"b\":8}]"

Updating multi paths

redis> JSON.SET doc $ '{"f1": {"a":1}, "f2":{"a":2}}'
OK
redis> JSON.SET doc $..a 3
OK
redis> json.get doc
"{\"f1\":{\"a\":3},\"f2\":{\"a\":3}}"

247 - JSON.STRAPPEND

Appends a string to a JSON string value at path

Appends the json-string values to the string at path.

path defaults to root if not provided.

Return

Array reply of integer replies for each path: the string's new length, or nil if the matching JSON value is not a string.

Examples

redis> JSON.SET doc $ '{"a":"foo", "nested": {"a": "hello"}, "nested2": {"a": 31}}'
OK
redis> JSON.STRAPPEND doc $..a '"baz"'
1) (integer) 6
2) (integer) 8
3) (nil)
redis> JSON.GET doc $
"[{\"a\":\"foobaz\",\"nested\":{\"a\":\"hellobaz\"},\"nested2\":{\"a\":31}}]"

248 - JSON.STRLEN

Returns the length of the JSON String at path in key

Reports the length of the JSON String at path in key.

path defaults to root if not provided. Returns null if the key or path do not exist.

Return

Array reply of integer replies for each path: the string's length, or nil if the matching JSON value is not a string.

Examples

redis> JSON.SET doc $ '{"a":"foo", "nested": {"a": "hello"}, "nested2": {"a": 31}}'
OK
redis> JSON.STRLEN doc $..a
1) (integer) 3
2) (integer) 5
3) (nil)

249 - JSON.TOGGLE

Toggles a boolean value

Toggle a boolean value stored at path.

Return

Array reply of integer replies for each path: the new value (0 if false or 1 if true), or nil for JSON values matching the path that are not boolean.

Examples

redis> JSON.SET doc $ '{"bool": true}'
OK
redis> JSON.TOGGLE doc $.bool
1) (integer) 0
redis> JSON.GET doc $
"[{\"bool\":false}]"
redis> JSON.TOGGLE doc $.bool
1) (integer) 1
redis> JSON.GET doc $
"[{\"bool\":true}]"

250 - JSON.TYPE

Returns the type of the JSON value at path

Reports the type of JSON value at path.

path defaults to root if not provided. Returns null if the key or path do not exist.

Return

Array reply of string replies - for each path, the value's type.

Examples

redis> JSON.SET doc $ '{"a":2, "nested": {"a": true}, "foo": "bar"}'
OK
redis> JSON.TYPE doc $..foo
1) "string"
redis> JSON.TYPE doc $..a
1) "integer"
2) "boolean"
redis> JSON.TYPE doc $..dummy
(empty array)

251 - KEYS

Find all keys matching the given pattern

Returns all keys matching pattern.

While the time complexity for this operation is O(N), the constant times are fairly low. For example, Redis running on an entry level laptop can scan a 1 million key database in 40 milliseconds.

Warning: consider KEYS as a command that should only be used in production environments with extreme care. It may ruin performance when it is executed against large databases. This command is intended for debugging and special operations, such as changing your keyspace layout. Don't use KEYS in your regular application code. If you're looking for a way to find keys in a subset of your keyspace, consider using SCAN or sets.

Supported glob-style patterns:

  • h?llo matches hello, hallo and hxllo
  • h*llo matches hllo and heeeello
  • h[ae]llo matches hello and hallo, but not hillo
  • h[^e]llo matches hallo, hbllo, ... but not hello
  • h[a-b]llo matches hallo and hbllo

Use \ to escape special characters if you want to match them verbatim.

Return

Array reply: list of keys matching pattern.

Examples

MSET firstname Jack lastname Stuntman age 35
KEYS *name*
KEYS a??
KEYS *

252 - LASTSAVE

Get the UNIX time stamp of the last successful save to disk

Return the UNIX TIME of the last DB save executed with success. A client may check if a BGSAVE command succeeded by reading the LASTSAVE value, then issuing a BGSAVE command, and checking at regular intervals every N seconds if LASTSAVE changed.
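
For example (a sketch; the timestamps are illustrative):

> LASTSAVE
(integer) 1651234567
> BGSAVE
Background saving started
> LASTSAVE
(integer) 1651234601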

Return

Integer reply: a UNIX time stamp.

253 - LATENCY

A container for latency diagnostics commands

This is a container command for latency diagnostics commands.

To see the list of available commands you can call LATENCY HELP.

254 - LATENCY DOCTOR

Return a human readable latency analysis report.

The LATENCY DOCTOR command reports about different latency-related issues and advises about possible remedies.

This command is the most powerful analysis tool in the latency monitoring framework, and is able to provide additional statistical data like the average period between latency spikes, the median deviation, and a human-readable analysis of the event. For certain events, like fork, additional information is provided, like the rate at which the system forks processes.

This is the output you should post in the Redis mailing list if you are looking for help about latency-related issues.

Examples

127.0.0.1:6379> latency doctor

Dave, I have observed latency spikes in this Redis instance.
You don't mind talking about it, do you Dave?

1. command: 5 latency spikes (average 300ms, mean deviation 120ms,
    period 73.40 sec). Worst all time event 500ms.

I have a few advices for you:

- Your current Slow Log configuration only logs events that are
    slower than your configured latency monitor threshold. Please
    use 'CONFIG SET slowlog-log-slower-than 1000'.
- Check your Slow Log to understand what are the commands you are
    running which are too slow to execute. Please check
    http://redis.io/commands/slowlog for more information.
- Deleting, expiring or evicting (because of maxmemory policy)
    large objects is a blocking operation. If you have very large
    objects that are often deleted, expired, or evicted, try to
    fragment those objects into multiple smaller objects.

Note: the doctor has erratic psychological behaviors, so we recommend interacting with it carefully.

For more information refer to the Latency Monitoring Framework page.

Return

Bulk string reply

255 - LATENCY GRAPH

Return a latency graph for the event.

Produces an ASCII-art style graph for the specified event.

LATENCY GRAPH lets you intuitively understand the latency trend of an event via state-of-the-art visualization. It can be used for quickly grasping the situation before resorting to means such as parsing the raw data from LATENCY HISTORY or external tooling.

Valid values for event are:

  • active-defrag-cycle
  • aof-fsync-always
  • aof-stat
  • aof-rewrite-diff-write
  • aof-rename
  • aof-write
  • aof-write-active-child
  • aof-write-alone
  • aof-write-pending-fsync
  • command
  • expire-cycle
  • eviction-cycle
  • eviction-del
  • fast-command
  • fork
  • rdb-unlink-temp-file

Examples

127.0.0.1:6379> latency reset command
(integer) 0
127.0.0.1:6379> debug sleep .1
OK
127.0.0.1:6379> debug sleep .2
OK
127.0.0.1:6379> debug sleep .3
OK
127.0.0.1:6379> debug sleep .5
OK
127.0.0.1:6379> debug sleep .4
OK
127.0.0.1:6379> latency graph command
command - high 500 ms, low 101 ms (all time high 500 ms)
--------------------------------------------------------------------------------
   #_
  _||
 _|||
_||||

11186
542ss
sss

The vertical labels under each graph column represent the amount of seconds, minutes, hours or days ago the event happened. For example "15s" means that the first graphed event happened 15 seconds ago.

The graph is normalized in the min-max scale so that the zero (the underscore in the lower row) is the minimum, and a # in the higher row is the maximum.

For more information refer to the Latency Monitoring Framework page.

Return

Bulk string reply

256 - LATENCY HELP

Show helpful text about the different subcommands.

The LATENCY HELP command returns a helpful text describing the different subcommands.

For more information refer to the Latency Monitoring Framework page.

Return

Array reply: a list of subcommands and their descriptions

257 - LATENCY HISTOGRAM

Return the cumulative distribution of latencies of a subset of commands or all.

The LATENCY HISTOGRAM command reports a cumulative distribution of latencies in the format of a histogram for each of the specified command names. If no command names are specified, then histograms are replied for all commands that contain latency information.

Each reported histogram has the following fields:

  • Command name.
  • The total calls for that command.
  • A map of time buckets:
    • Each bucket represents a latency range.
    • Each bucket covers twice the previous bucket's range.
    • Empty buckets are not printed.
    • The tracked latencies are between 1 microsecond and roughly 1 second.
    • Everything above 1 sec is considered +Inf.
    • At max there will be log2(1000000000)=30 buckets.

This command requires the extended latency monitoring feature to be enabled (by default it's enabled). If you need to enable it, use CONFIG SET latency-tracking yes.

Examples

127.0.0.1:6379> LATENCY HISTOGRAM set
1# "set" =>
   1# "calls" => (integer) 100000
   2# "histogram_usec" =>
      1# (integer) 1 => (integer) 99583
      2# (integer) 2 => (integer) 99852
      3# (integer) 4 => (integer) 99914
      4# (integer) 8 => (integer) 99940
      5# (integer) 16 => (integer) 99968
      6# (integer) 33 => (integer) 100000

Return

Array reply: specifically:

The command returns a map where each key is a command name, and each value is a map with the total calls, and an inner map of the histogram time buckets.

258 - LATENCY HISTORY

Return timestamp-latency samples for the event.

The LATENCY HISTORY command returns the raw data of the event's latency spikes time series.

This is useful to an application that wants to fetch raw data in order to perform monitoring, display graphs, and so forth.

The command will return up to 160 timestamp-latency pairs for the event.

Valid values for event are:

  • active-defrag-cycle
  • aof-fsync-always
  • aof-stat
  • aof-rewrite-diff-write
  • aof-rename
  • aof-write
  • aof-write-active-child
  • aof-write-alone
  • aof-write-pending-fsync
  • command
  • expire-cycle
  • eviction-cycle
  • eviction-del
  • fast-command
  • fork
  • rdb-unlink-temp-file

Examples

127.0.0.1:6379> latency history command
1) 1) (integer) 1405067822
   2) (integer) 251
2) 1) (integer) 1405067941
   2) (integer) 1001

For more information refer to the Latency Monitoring Framework page.

Return

Array reply: specifically:

The command returns an array where each element is a two-element array representing the timestamp and the latency of the event.

259 - LATENCY LATEST

Return the latest latency samples for all events.

The LATENCY LATEST command reports the latest latency events logged.

Each reported event has the following fields:

  • Event name.
  • Unix timestamp of the latest latency spike for the event.
  • Latest event latency in milliseconds.
  • All-time maximum latency for this event.

"All-time" means the maximum latency since the Redis instance was started, or the time that events were reset LATENCY RESET.

Examples

127.0.0.1:6379> debug sleep 1
OK
(1.00s)
127.0.0.1:6379> debug sleep .25
OK
127.0.0.1:6379> latency latest
1) 1) "command"
   2) (integer) 1405067976
   3) (integer) 251
   4) (integer) 1001

For more information refer to the Latency Monitoring Framework page.

Return

Array reply: specifically:

The command returns an array where each element is a four-element array representing the event's name, timestamp, latest and all-time latency measurements.

260 - LATENCY RESET

Reset latency data for one or more events.

The LATENCY RESET command resets the latency spikes time series of all, or only some, events.

When the command is called without arguments, it resets all the events, discarding the currently logged latency spike events, and resetting the maximum event time register.

It is possible to reset only specific events by providing the event names as arguments.

Valid values for event are:

  • active-defrag-cycle
  • aof-fsync-always
  • aof-stat
  • aof-rewrite-diff-write
  • aof-rename
  • aof-write
  • aof-write-active-child
  • aof-write-alone
  • aof-write-pending-fsync
  • command
  • expire-cycle
  • eviction-cycle
  • eviction-del
  • fast-command
  • fork
  • rdb-unlink-temp-file

For more information refer to the Latency Monitoring Framework page.

Return

Integer reply: the number of event time series that were reset.

261 - LCS

Find longest common substring

The LCS command implements the longest common subsequence algorithm. Note that this is different from the longest common string algorithm, since matching characters in the strings do not need to be contiguous.

For instance the LCS between "foo" and "fao" is "fo", since scanning the two strings from left to right, the longest common set of characters is composed of the first "f" and then the "o".

LCS is very useful in order to evaluate how similar two strings are. Strings can represent many things. For instance if two strings are DNA sequences, the LCS will provide a measure of similarity between the two DNA sequences. If the strings represent some text edited by some user, the LCS could represent how different the new text is compared to the old one, and so forth.

Note that this algorithm runs in O(N*M) time, where N is the length of the first string and M is the length of the second string. So either spin up a different Redis instance in order to run this algorithm, or make sure to run it against very small strings.

> MSET key1 ohmytext key2 mynewtext
OK
> LCS key1 key2
"mytext"

Sometimes we need just the length of the match:

> LCS key1 key2 LEN
6

However what is often very useful is to know the match position in each string:

> LCS key1 key2 IDX
1) "matches"
2) 1) 1) 1) (integer) 4
         2) (integer) 7
      2) 1) (integer) 5
         2) (integer) 8
   2) 1) 1) (integer) 2
         2) (integer) 3
      2) 1) (integer) 0
         2) (integer) 1
3) "len"
4) (integer) 6

Matches are produced from the last one to the first one, since this is how the algorithm works, and it is more efficient to emit things in the same order. The above array means that the first match (second element of the array) is between positions 2-3 of the first string and 0-1 of the second. Then there is another match between 4-7 and 5-8.

To restrict the list of matches to the ones of a given minimal length:

> LCS key1 key2 IDX MINMATCHLEN 4
1) "matches"
2) 1) 1) 1) (integer) 4
         2) (integer) 7
      2) 1) (integer) 5
         2) (integer) 8
3) "len"
4) (integer) 6

Finally, to also have the match length:

> LCS key1 key2 IDX MINMATCHLEN 4 WITHMATCHLEN
1) "matches"
2) 1) 1) 1) (integer) 4
         2) (integer) 7
      2) 1) (integer) 5
         2) (integer) 8
      3) (integer) 4
3) "len"
4) (integer) 6

Return

  • Without modifiers the string representing the longest common substring is returned.
  • When LEN is given the command returns the length of the longest common substring.
  • When IDX is given the command returns an array with the LCS length and all the ranges in both the strings, start and end offset for each string, where there are matches. When WITHMATCHLEN is given each array representing a match will also have the length of the match (see examples).

262 - LINDEX

Get an element from a list by its index

Returns the element at index index in the list stored at key. The index is zero-based, so 0 means the first element, 1 the second element and so on. Negative indices can be used to designate elements starting at the tail of the list. Here, -1 means the last element, -2 means the penultimate and so forth.

When the value at key is not a list, an error is returned.

Return

Bulk string reply: the requested element, or nil when index is out of range.

Examples

LPUSH mylist "World"
LPUSH mylist "Hello"
LINDEX mylist 0
LINDEX mylist -1
LINDEX mylist 3

263 - LINSERT

Insert an element before or after another element in a list

Inserts element in the list stored at key either before or after the reference value pivot.

When key does not exist, it is considered an empty list and no operation is performed.

An error is returned when key exists but does not hold a list value.

Return

Integer reply: the length of the list after the insert operation, or -1 when the value pivot was not found.

Examples

RPUSH mylist "Hello"
RPUSH mylist "World"
LINSERT mylist BEFORE "World" "There"
LRANGE mylist 0 -1

264 - LLEN

Get the length of a list

Returns the length of the list stored at key. If key does not exist, it is interpreted as an empty list and 0 is returned. An error is returned when the value stored at key is not a list.

Return

Integer reply: the length of the list at key.

Examples

LPUSH mylist "World"
LPUSH mylist "Hello"
LLEN mylist

265 - LMOVE

Pop an element from a list, push it to another list and return it

Atomically returns and removes the first/last element (head/tail depending on the wherefrom argument) of the list stored at source, and pushes the element as the first/last element (head/tail depending on the whereto argument) of the list stored at destination.

For example: consider source holding the list a,b,c, and destination holding the list x,y,z. Executing LMOVE source destination RIGHT LEFT results in source holding a,b and destination holding c,x,y,z.
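
A minimal transcript of that example (assuming both keys start out empty):

> RPUSH source a b c
(integer) 3
> RPUSH destination x y z
(integer) 3
> LMOVE source destination RIGHT LEFT
"c"
> LRANGE source 0 -1
1) "a"
2) "b"
> LRANGE destination 0 -1
1) "c"
2) "x"
3) "y"
4) "z"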

If source does not exist, the value nil is returned and no operation is performed. If source and destination are the same, the operation is equivalent to removing the first/last element from the list and pushing it as first/last element of the list, so it can be considered as a list rotation command (or a no-op if wherefrom is the same as whereto).

This command comes in place of the now deprecated RPOPLPUSH: doing LMOVE source destination RIGHT LEFT is equivalent to RPOPLPUSH source destination.

Return

Bulk string reply: the element being popped and pushed.

Examples

RPUSH mylist "one" RPUSH mylist "two" RPUSH mylist "three" LMOVE mylist myotherlist RIGHT LEFT LMOVE mylist myotherlist LEFT RIGHT LRANGE mylist 0 -1 LRANGE myotherlist 0 -1

Pattern: Reliable queue

Redis is often used as a messaging server to implement processing of background jobs or other kinds of messaging tasks. A simple form of queue is often obtained by pushing values into a list on the producer side, and waiting for these values on the consumer side using RPOP (with polling), or BRPOP if the client is better served by a blocking operation.

However in this context the obtained queue is not reliable, as messages can be lost, for example when there is a network problem or the consumer crashes just after the message is received but before it is processed.

LMOVE (or BLMOVE for the blocking variant) offers a way to avoid this problem: the consumer fetches the message and at the same time pushes it into a processing list. It will use the LREM command in order to remove the message from the processing list once the message has been processed.

An additional client may monitor the processing list for items that remain there for too long, pushing timed-out items back into the queue to be processed again if needed.
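
A minimal sketch of the pattern in redis-cli form (queue, processing and the payload job1 are assumed names):

> LPUSH queue "job1"
(integer) 1
> LMOVE queue processing RIGHT LEFT
"job1"
> LREM processing 1 "job1"
(integer) 1

The consumer does its work between the LMOVE and the LREM; if it crashes in between, the job survives in the processing list and can be re-queued by the monitoring client.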

Pattern: Circular list

Using LMOVE with the same source and destination key, a client can visit all the elements of an N-elements list, one after the other, in O(N) without transferring the full list from the server to the client using a single LRANGE operation.

The above pattern works even under the following two conditions:

  • There may be multiple clients rotating the list: they'll fetch different elements, until all the elements of the list are visited, and the process restarts.
  • Other clients may be actively pushing new items at the end of the list.

The above makes it very simple to implement a system where a set of items must be processed by N workers continuously as fast as possible. An example is a monitoring system that must check that a set of web sites are reachable, with the smallest delay possible, using a number of parallel workers.

Note that this implementation of workers is trivially scalable and reliable, because even if a message is lost the item is still in the queue and will be processed at the next iteration.
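
A minimal rotation sketch (mylist is an assumed key): each LMOVE with the same source and destination returns the current head and moves it to the tail.

> RPUSH mylist a b c
(integer) 3
> LMOVE mylist mylist LEFT RIGHT
"a"
> LMOVE mylist mylist LEFT RIGHT
"b"
> LRANGE mylist 0 -1
1) "c"
2) "a"
3) "b"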

266 - LMPOP

Pop elements from a list

Pops one or more elements from the first non-empty list key from the list of provided key names.

LMPOP and BLMPOP are similar to the following, more limited, commands:

  • LPOP or RPOP which take only one key, and can return multiple elements.
  • BLPOP or BRPOP which take multiple keys, but return only one element from just one key.

See BLMPOP for the blocking variant of this command.

Elements are popped from either the left or right of the first non-empty list based on the passed argument. The number of returned elements is limited to the lower of the non-empty list's length and the count argument (which defaults to 1).

Return

Array reply: specifically:

  • A nil when no element could be popped.
  • A two-element array with the first element being the name of the key from which elements were popped, and the second being an array of those elements (see the sketch below).
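
A minimal sketch of both reply shapes (mylist is an assumed key; nosuchlist and othermissing do not exist):

> LMPOP 2 nosuchlist othermissing LEFT
(nil)
> LPUSH mylist "three" "two" "one"
(integer) 3
> LMPOP 2 nosuchlist mylist LEFT COUNT 2
1) "mylist"
2) 1) "one"
   2) "two"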

Examples

LMPOP 2 non1 non2 LEFT COUNT 10
LPUSH mylist "one" "two" "three" "four" "five"
LMPOP 1 mylist LEFT
LRANGE mylist 0 -1
LMPOP 1 mylist RIGHT COUNT 10
LPUSH mylist "one" "two" "three" "four" "five"
LPUSH mylist2 "a" "b" "c" "d" "e"
LMPOP 2 mylist mylist2 right count 3
LRANGE mylist 0 -1
LMPOP 2 mylist mylist2 right count 5
LMPOP 2 mylist mylist2 right count 10
EXISTS mylist mylist2

267 - LOLWUT

Display some computer art and the Redis version

The LOLWUT command displays the Redis version: however as a side effect of doing so, it also creates a piece of generative computer art that is different with each version of Redis. The command was introduced in Redis 5 and announced with this blog post.

By default the LOLWUT command will display the piece corresponding to the current Redis version, however it is possible to display a specific version using the following form:

LOLWUT VERSION 5 ... other optional arguments ...

Of course the "5" above is an example. Each LOLWUT version takes a different set of arguments in order to change the output. The user is encouraged to play with it to discover how the output changes adding more numerical arguments.

LOLWUT wants to be a reminder that there is more in programming than just putting some code together in order to create something useful. Every LOLWUT version should have the following properties:

  1. It should display some computer art. There are no limits as long as the output works well in a normal terminal display. However the output does not have to be graphical (as LOLWUT 5 and 6 actually are): it can also be generative poetry or other non-graphical art.
  2. LOLWUT output should be completely useless. Displaying some useful Redis internal metrics does not count as a valid LOLWUT.
  3. LOLWUT output should be fast to generate so that the command can be called in production instances without issues. It should remain fast even when the user experiments with odd parameters.
  4. LOLWUT implementations should be safe and carefully checked for security, and resist untrusted inputs if they take arguments.
  5. LOLWUT must always display the Redis version at the end.

Return

Bulk string reply (or verbatim reply when using the RESP3 protocol): the string containing the generative computer art, and a text with the Redis version.

268 - LPOP

Remove and get the first elements in a list

Removes and returns the first elements of the list stored at key.

By default, the command pops a single element from the beginning of the list. When provided with the optional count argument, the reply will consist of up to count elements, depending on the list's length.

Return

When called without the count argument:

Bulk string reply: the value of the first element, or nil when key does not exist.

When called with the count argument:

Array reply: list of popped elements, or nil when key does not exist.

Examples

RPUSH mylist "one" "two" "three" "four" "five" LPOP mylist LPOP mylist 2 LRANGE mylist 0 -1

269 - LPOS

Return the index of matching elements on a list

The command returns the index of matching elements inside a Redis list. By default, when no options are given, it will scan the list from head to tail, looking for the first match of "element". If the element is found, its index (the zero-based position in the list) is returned. Otherwise, if no match is found, nil is returned.

> RPUSH mylist a b c 1 2 3 c c
> LPOS mylist c
2

The optional arguments and options can modify the command's behavior. The RANK option specifies the "rank" of the first element to return, in case there are multiple matches. A rank of 1 means to return the first match, 2 to return the second match, and so forth.

For instance, in the above example the element "c" is present multiple times. If we want the index of the second match, we write:

> LPOS mylist c RANK 2
6

That is, the second occurrence of "c" is at position 6. A negative "rank" as the RANK argument tells LPOS to invert the search direction, starting from the tail towards the head.

So, to ask for the first match starting from the tail of the list:

> LPOS mylist c RANK -1
7

Note that the indexes are still reported in the "natural" way, that is, considering the first element starting from the head of the list at index 0, the next element at index 1, and so forth. This basically means that the returned indexes are stable whether the rank is positive or negative.

Sometimes we want to return not just the Nth matching element, but the position of all the first N matching elements. This can be achieved using the COUNT option.

> LPOS mylist c COUNT 2
[2,6]

We can combine COUNT and RANK, so that COUNT will try to return up to the specified number of matches, but starting from the Nth match, as specified by the RANK option.

> LPOS mylist c RANK -1 COUNT 2
[7,6]

When COUNT is used, it is possible to specify 0 as the number of matches, as a way to tell the command we want all the matches found returned as an array of indexes. This is better than giving a very large COUNT option because it is more general.

> LPOS mylist c COUNT 0
[2,6,7]

When COUNT is used and no match is found, an empty array is returned. However when COUNT is not used and there are no matches, the command returns nil.

Finally, the MAXLEN option tells the command to compare the provided element only with a given maximum number of list items. So for instance specifying MAXLEN 1000 will make sure that the command performs only 1000 comparisons, effectively running the algorithm on a subset of the list (the first part or the last part depending on whether we use a positive or negative rank). This is useful to limit the maximum complexity of the command. It is also useful when we expect the match to be found very early, but want to be sure that in case this is not true, the command does not take too much time to run.

When MAXLEN is used, it is possible to specify 0 as the maximum number of comparisons, as a way to tell the command we want unlimited comparisons. This is better than giving a very large MAXLEN option because it is more general.
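
Continuing the example above, a sketch (the list is only eight elements long, so both limits leave the results unchanged):

> LPOS mylist c MAXLEN 1000
2
> LPOS mylist c COUNT 0 MAXLEN 0
[2,6,7]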

Return

The command returns the integer representing the position of the matching element, or nil if there is no match. However, if the COUNT option is given the command returns an array of positions (empty if there are no matches).

Examples

RPUSH mylist a b c d 1 2 3 4 3 3 3
LPOS mylist 3
LPOS mylist 3 COUNT 0 RANK 2

270 - LPUSH

Prepend one or multiple elements to a list

Insert all the specified values at the head of the list stored at key. If key does not exist, it is created as an empty list before performing the push operations. When key holds a value that is not a list, an error is returned.

It is possible to push multiple elements using a single command call by specifying multiple arguments at the end of the command. Elements are inserted one after the other to the head of the list, from the leftmost element to the rightmost element. So for instance the command LPUSH mylist a b c will result in a list containing c as first element, b as second element and a as third element.
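
A minimal transcript of that ordering (assuming mylist does not exist yet):

> LPUSH mylist a b c
(integer) 3
> LRANGE mylist 0 -1
1) "c"
2) "b"
3) "a"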

Return

Integer reply: the length of the list after the push operations.

Examples

LPUSH mylist "world" LPUSH mylist "hello" LRANGE mylist 0 -1

271 - LPUSHX

Prepend an element to a list, only if the list exists

Inserts specified values at the head of the list stored at key, only if key already exists and holds a list. In contrast to LPUSH, no operation will be performed when key does not yet exist.

Return

Integer reply: the length of the list after the push operation.

Examples

LPUSH mylist "World" LPUSHX mylist "Hello" LPUSHX myotherlist "Hello" LRANGE mylist 0 -1 LRANGE myotherlist 0 -1

272 - LRANGE

Get a range of elements from a list

Returns the specified elements of the list stored at key. The offsets start and stop are zero-based indexes, with 0 being the first element of the list (the head of the list), 1 being the next element and so on.

These offsets can also be negative numbers indicating offsets starting at the end of the list. For example, -1 is the last element of the list, -2 the penultimate, and so on.

Consistency with range functions in various programming languages

Note that if you have a list of numbers from 0 to 100, LRANGE list 0 10 will return 11 elements, that is, the rightmost item is included. This may or may not be consistent with behavior of range-related functions in your programming language of choice (think Ruby's Range.new, Array#slice or Python's range() function).

Out-of-range indexes

Out of range indexes will not produce an error. If start is larger than the end of the list, an empty list is returned. If stop is larger than the actual end of the list, Redis will treat it like the last element of the list.

Return

Array reply: list of elements in the specified range.

Examples

RPUSH mylist "one" RPUSH mylist "two" RPUSH mylist "three" LRANGE mylist 0 0 LRANGE mylist -3 2 LRANGE mylist -100 100 LRANGE mylist 5 10

273 - LREM

Remove elements from a list

Removes the first count occurrences of elements equal to element from the list stored at key. The count argument influences the operation in the following ways:

  • count > 0: Remove elements equal to element moving from head to tail.
  • count < 0: Remove elements equal to element moving from tail to head.
  • count = 0: Remove all elements equal to element.

For example, LREM list -2 "hello" will remove the last two occurrences of "hello" in the list stored at list.

Note that non-existing keys are treated like empty lists, so when key does not exist, the command will always return 0.

Return

Integer reply: the number of removed elements.

Examples

RPUSH mylist "hello" RPUSH mylist "hello" RPUSH mylist "foo" RPUSH mylist "hello" LREM mylist -2 "hello" LRANGE mylist 0 -1

274 - LSET

Set the value of an element in a list by its index

Sets the list element at index to element. For more information on the index argument, see LINDEX.

An error is returned for out of range indexes.

Return

Simple string reply

Examples

RPUSH mylist "one" RPUSH mylist "two" RPUSH mylist "three" LSET mylist 0 "four" LSET mylist -2 "five" LRANGE mylist 0 -1

275 - LTRIM

Trim a list to the specified range

Trim an existing list so that it will contain only the specified range of elements. Both start and stop are zero-based indexes, where 0 is the first element of the list (the head), 1 the next element and so on.

For example: LTRIM foobar 0 2 will modify the list stored at foobar so that only the first three elements of the list will remain.

start and stop can also be negative numbers indicating offsets from the end of the list, where -1 is the last element of the list, -2 the penultimate element and so on.

Out of range indexes will not produce an error: if start is larger than the end of the list, or start > stop, the result will be an empty list (which causes key to be removed). If stop is larger than the end of the list, Redis will treat it like the last element of the list.

A common use of LTRIM is together with LPUSH / RPUSH. For example:

LPUSH mylist someelement
LTRIM mylist 0 99

This pair of commands will push a new element on the list, while making sure that the list will not grow larger than 100 elements. This is very useful when using Redis to store logs for example. It is important to note that when used in this way LTRIM is an O(1) operation because in the average case just one element is removed from the tail of the list.

Return

Simple string reply

Examples

RPUSH mylist "one" RPUSH mylist "two" RPUSH mylist "three" LTRIM mylist 1 -1 LRANGE mylist 0 -1

276 - MEMORY

A container for memory diagnostics commands

This is a container command for memory introspection and management commands.

To see the list of available commands you can call MEMORY HELP.

277 - MEMORY DOCTOR

Outputs memory problems report

The MEMORY DOCTOR command reports about different memory-related issues that the Redis server experiences, and advises about possible remedies.

Return

Bulk string reply

278 - MEMORY HELP

Show helpful text about the different subcommands

The MEMORY HELP command returns a helpful text describing the different subcommands.

Return

Array reply: a list of subcommands and their descriptions

279 - MEMORY MALLOC-STATS

Show allocator internal stats

The MEMORY MALLOC-STATS command provides an internal statistics report from the memory allocator.

This command is currently implemented only when using jemalloc as an allocator, and evaluates to a benign NOOP for all others.

Return

Bulk string reply: the memory allocator's internal statistics report

280 - MEMORY PURGE

Ask the allocator to release memory

The MEMORY PURGE command attempts to purge dirty pages so these can be reclaimed by the allocator.

This command is currently implemented only when using jemalloc as an allocator, and evaluates to a benign NOOP for all others.

Return

Simple string reply

281 - MEMORY STATS

Show memory usage details

The MEMORY STATS command returns an Array reply about the memory usage of the server.

The information about memory usage is provided as metrics and their respective values. The following metrics are reported:

  • peak.allocated: Peak memory consumed by Redis in bytes (see INFO's used_memory_peak)
  • total.allocated: Total number of bytes allocated by Redis using its allocator (see INFO's used_memory)
  • startup.allocated: Initial amount of memory consumed by Redis at startup in bytes (see INFO's used_memory_startup)
  • replication.backlog: Size in bytes of the replication backlog (see INFO's repl_backlog_active)
  • clients.slaves: The total size in bytes of all replica overheads (output and query buffers, connection contexts)
  • clients.normal: The total size in bytes of all client overheads (output and query buffers, connection contexts)
  • cluster.links: Memory usage by cluster links (Added in Redis 7.0, see INFO's mem_cluster_links).
  • aof.buffer: The summed size in bytes of AOF related buffers.
  • lua.caches: the summed size in bytes of the overheads of the Lua scripts' caches
  • dbXXX: For each of the server's databases, the overheads of the main and expiry dictionaries (overhead.hashtable.main and overhead.hashtable.expires, respectively) are reported in bytes
  • overhead.total: The sum of all overheads, i.e. startup.allocated, replication.backlog, clients.slaves, clients.normal, aof.buffer and those of the internal data structures that are used in managing the Redis keyspace (see INFO's used_memory_overhead)
  • keys.count: The total number of keys stored across all databases in the server
  • keys.bytes-per-key: The ratio between net memory usage (total.allocated minus startup.allocated) and keys.count
  • dataset.bytes: The size in bytes of the dataset, i.e. overhead.total subtracted from total.allocated (see INFO's used_memory_dataset)
  • dataset.percentage: The percentage of dataset.bytes out of the net memory usage
  • peak.percentage: The percentage of peak.allocated out of total.allocated
  • fragmentation: See INFO's mem_fragmentation_ratio

Return

Array reply: nested list of memory usage metrics and their values

A note about the word slave used in this man page: Starting with Redis 5, if not for backward compatibility, the Redis project no longer uses the word slave. Unfortunately in this command the word slave is part of the protocol, so we'll be able to remove such occurrences only when this API will be naturally deprecated.

282 - MEMORY USAGE

Estimate the memory usage of a key

The MEMORY USAGE command reports the number of bytes that a key and its value require to be stored in RAM.

The reported usage is the total of memory allocations for data and administrative overheads that a key and its value require.

For nested data types, the optional SAMPLES option can be provided, where count is the number of sampled nested values. By default, this option is set to 5. To sample all of the nested values, use SAMPLES 0.

Examples

With Redis v4.0.1 64-bit and jemalloc, the empty string measures as follows:

> SET "" ""
OK
> MEMORY USAGE ""
(integer) 51

These bytes are pure overhead at the moment as no actual data is stored, and are used for maintaining the internal data structures of the server. Longer keys and values show asymptotically linear usage.

> SET foo bar
OK
> MEMORY USAGE foo
(integer) 54
> SET cento 0123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789
OK
> MEMORY USAGE cento
(integer) 153

Return

Integer reply: the memory usage in bytes, or nil when the key does not exist.

283 - MGET

Get the values of all the given keys

Returns the values of all specified keys. For every key that does not hold a string value or does not exist, the special value nil is returned. Because of this, the operation never fails.

Return

Array reply: list of values at the specified keys.

Examples

SET key1 "Hello" SET key2 "World" MGET key1 key2 nonexisting

284 - MIGRATE

Atomically transfer a key from a Redis instance to another one.

Atomically transfer a key from a source Redis instance to a destination Redis instance. On success the key is deleted from the original instance and is guaranteed to exist in the target instance.

The command is atomic and blocks the two instances for the time required to transfer the key; at any given time the key will appear to exist in either the source or the destination instance, unless a timeout error occurs. In 3.2 and above, multiple keys can be pipelined in a single call to MIGRATE by passing the empty string ("") as key and adding the KEYS clause.

The command internally uses DUMP to generate the serialized version of the key value, and RESTORE in order to synthesize the key in the target instance. The source instance acts as a client for the target instance. If the target instance returns OK to the RESTORE command, the source instance deletes the key using DEL.

The timeout specifies the maximum idle time in any moment of the communication with the destination instance in milliseconds. This means that the operation does not need to be completed within the specified amount of milliseconds, but that the transfer should make progress without blocking for more than the specified amount of milliseconds.

MIGRATE needs to perform I/O operations and to honor the specified timeout. When there is an I/O error during the transfer or if the timeout is reached, the operation is aborted and the special error -IOERR is returned. When this happens the following two cases are possible:

  • The key may be present in both instances.
  • The key may be present only in the source instance.

It is not possible for the key to get lost in the event of a timeout, but the client calling MIGRATE, in the event of a timeout error, should check if the key is also present in the target instance and act accordingly.

When any other error is returned (starting with ERR) MIGRATE guarantees that the key is still only present in the originating instance (unless a key with the same name was also already present on the target instance).

If there are no keys to migrate in the source instance NOKEY is returned. Because missing keys are possible in normal conditions, from expiry for example, NOKEY isn't an error.

Migrating multiple keys with a single command call

Starting with Redis 3.0.6 MIGRATE supports a new bulk-migration mode that uses pipelining in order to migrate multiple keys between instances without incurring the round trip time latency and other overheads involved in moving each key with a single MIGRATE call.

In order to enable this form, the KEYS option is used, and the normal key argument is set to an empty string. The actual key names will be provided after the KEYS argument itself, like in the following example:

MIGRATE 192.168.1.34 6379 "" 0 5000 KEYS key1 key2 key3

When this form is used the NOKEY status code is only returned when none of the keys is present in the instance, otherwise the command is executed, even if just a single key exists.

Options

  • COPY -- Do not remove the key from the local instance.
  • REPLACE -- Replace existing key on the remote instance.
  • KEYS -- If the key argument is an empty string, the command will instead migrate all the keys that follow the KEYS option (see the above section for more info).
  • AUTH -- Authenticate with the given password to the remote instance.
  • AUTH2 -- Authenticate with the given username and password pair (Redis 6 or greater ACL auth style).
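
For instance, a hypothetical bulk migration combining several options (host, port, credentials and key names are placeholders):

MIGRATE 192.168.1.34 6379 "" 0 5000 COPY REPLACE AUTH2 myuser mypassword KEYS key1 key2 key3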

Return

Simple string reply: The command returns OK on success, or NOKEY if no keys were found in the source instance.

285 - MODULE

A container for module commands

This is a container command for module management commands.

To see the list of available commands you can call MODULE HELP.

286 - MODULE HELP

Show helpful text about the different subcommands

The MODULE HELP command returns a helpful text describing the different subcommands.

Return

Array reply: a list of subcommands and their descriptions

287 - MODULE LIST

List all modules loaded by the server

Returns information about the modules loaded to the server.

Return

Array reply: list of loaded modules. Each element in the list represents a module, and is in itself a list of property names and their values. The following properties are reported for each loaded module:

  • name: Name of the module
  • ver: Version of the module

288 - MODULE LOAD

Load a module

Loads a module from a dynamic library at runtime.

This command loads and initializes the Redis module from the dynamic library specified by the path argument. The path should be the absolute path of the library, including the full filename. Any additional arguments are passed unmodified to the module.

Note: modules can also be loaded at server startup with the loadmodule configuration directive in redis.conf.

Return

Simple string reply: OK if module was loaded.

289 - MODULE LOADEX

Load a module with extended parameters

Loads a module from a dynamic library at runtime with configuration directives.

This is an extended version of the MODULE LOAD command.

It loads and initializes the Redis module from the dynamic library specified by the path argument. The path should be the absolute path of the library, including the full filename.

You can use the optional CONFIG argument to provide the module with configuration directives. Any additional arguments that follow the ARGS keyword are passed unmodified to the module.
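
A hypothetical invocation (the path, configuration name and arguments are placeholders, not a real module):

MODULE LOADEX /path/to/mymodule.so CONFIG mymodule.setting value ARGS arg1 arg2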

Note: modules can also be loaded at server startup with the loadmodule configuration directive in redis.conf.

Return

Simple string reply: OK if module was loaded.

290 - MODULE UNLOAD

Unload a module

Unloads a module.

This command unloads the module specified by name. Note that the module's name is reported by the MODULE LIST command, and may differ from the dynamic library's filename.

Known limitations:

  • Modules that register custom data types can not be unloaded.

Return

Simple string reply: OK if module was unloaded.

291 - MONITOR

Listen for all requests received by the server in real time

MONITOR is a debugging command that streams back every command processed by the Redis server. It can help in understanding what is happening to the database. This command can both be used via redis-cli and via telnet.

The ability to see all the requests processed by the server is useful in order to spot bugs in an application both when using Redis as a database and as a distributed caching system.

$ redis-cli monitor
1339518083.107412 [0 127.0.0.1:60866] "keys" "*"
1339518087.877697 [0 127.0.0.1:60866] "dbsize"
1339518090.420270 [0 127.0.0.1:60866] "set" "x" "6"
1339518096.506257 [0 127.0.0.1:60866] "get" "x"
1339518099.363765 [0 127.0.0.1:60866] "eval" "return redis.call('set','x','7')" "0"
1339518100.363799 [0 lua] "set" "x" "7"
1339518100.544926 [0 127.0.0.1:60866] "del" "x"

Use SIGINT (Ctrl-C) to stop a MONITOR stream running via redis-cli.

$ telnet localhost 6379
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
MONITOR
+OK
+1339518083.107412 [0 127.0.0.1:60866] "keys" "*"
+1339518087.877697 [0 127.0.0.1:60866] "dbsize"
+1339518090.420270 [0 127.0.0.1:60866] "set" "x" "6"
+1339518096.506257 [0 127.0.0.1:60866] "get" "x"
+1339518099.363765 [0 127.0.0.1:60866] "del" "x"
+1339518100.544926 [0 127.0.0.1:60866] "get" "x"
QUIT
+OK
Connection closed by foreign host.

Manually issue the QUIT or RESET commands to stop a MONITOR stream running via telnet.

Commands not logged by MONITOR

Because of security concerns, administrative commands are not logged in MONITOR's output, and sensitive data is redacted in the AUTH command.

Furthermore, the command QUIT is also not logged.

Cost of running MONITOR

Because MONITOR streams back all commands, its use comes at a cost. The following (totally unscientific) benchmark numbers illustrate what the cost of running MONITOR can be.

Benchmark result without MONITOR running:

$ src/redis-benchmark -c 10 -n 100000 -q
PING_INLINE: 101936.80 requests per second
PING_BULK: 102880.66 requests per second
SET: 95419.85 requests per second
GET: 104275.29 requests per second
INCR: 93283.58 requests per second

Benchmark result with MONITOR running (redis-cli monitor > /dev/null):

$ src/redis-benchmark -c 10 -n 100000 -q
PING_INLINE: 58479.53 requests per second
PING_BULK: 59136.61 requests per second
SET: 41823.50 requests per second
GET: 45330.91 requests per second
INCR: 41771.09 requests per second

In this particular case, running a single MONITOR client can reduce the throughput by more than 50%. Running more MONITOR clients will reduce throughput even more.

Return

Non standard return value, just dumps the received commands in an infinite flow.

292 - MOVE

Move a key to another database

Move key from the currently selected database (see SELECT) to the specified destination database. When key already exists in the destination database, or it does not exist in the source database, it does nothing. It is possible to use MOVE as a locking primitive because of this.
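
For instance, when two clients race to MOVE the same key, only the first succeeds (lock is an assumed key name and database 1 the assumed destination):

> SET lock "1"
OK
> MOVE lock 1
(integer) 1
> MOVE lock 1
(integer) 0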

Return

Integer reply, specifically:

  • 1 if key was moved.
  • 0 if key was not moved.

293 - MSET

Set multiple keys to multiple values

Sets the given keys to their respective values. MSET replaces existing values with new values, just as regular SET. See MSETNX if you don't want to overwrite existing values.

MSET is atomic, so all given keys are set at once. It is not possible for clients to see that some of the keys were updated while others are unchanged.

Return

Simple string reply: always OK since MSET can't fail.

Examples

MSET key1 "Hello" key2 "World" GET key1 GET key2

294 - MSETNX

Set multiple keys to multiple values, only if none of the keys exist

Sets the given keys to their respective values. MSETNX will not perform any operation at all even if just a single key already exists.

Because of this semantic, MSETNX can be used in order to set different keys representing different fields of a unique logical object, in a way that ensures that either all the fields or none at all are set.

MSETNX is atomic, so all given keys are set at once. It is not possible for clients to see that some of the keys were updated while others are unchanged.

Return

Integer reply, specifically:

  • 1 if all the keys were set.
  • 0 if no key was set (at least one key already existed).

Examples

MSETNX key1 "Hello" key2 "there" MSETNX key2 "new" key3 "world" MGET key1 key2 key3

295 - MULTI

Mark the start of a transaction block

Marks the start of a transaction block. Subsequent commands will be queued for atomic execution using EXEC.
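
A minimal transcript (assuming counter does not exist yet):

> MULTI
OK
> INCR counter
QUEUED
> EXEC
1) (integer) 1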

Return

Simple string reply: always OK.

296 - OBJECT

A container for object introspection commands

This is a container command for object introspection commands.

To see the list of available commands you can call OBJECT HELP.

297 - OBJECT ENCODING

Inspect the internal encoding of a Redis object

Returns the internal encoding for the Redis object stored at <key>.

Redis objects can be encoded in different ways:

  • Strings can be encoded as raw (normal string encoding) or int (strings representing integers in a 64 bit signed interval are encoded in this way in order to save space).
  • Lists can be encoded as ziplist or linkedlist. The ziplist is the special representation that is used to save space for small lists.
  • Sets can be encoded as intset or hashtable. The intset is a special encoding used for small sets composed solely of integers.
  • Hashes can be encoded as ziplist or hashtable. The ziplist is a special encoding used for small hashes.
  • Sorted Sets can be encoded in the ziplist or skiplist format. As with the List type, small sorted sets can be specially encoded using ziplist, while the skiplist encoding works with sorted sets of any size.

All the specially encoded types are automatically converted to the general type once you perform an operation that makes it impossible for Redis to retain the space-saving encoding.

Return

Bulk string reply: the encoding of the object, or nil if the key doesn't exist

298 - OBJECT FREQ

Get the logarithmic access frequency counter of a Redis object

This command returns the logarithmic access frequency counter of a Redis object stored at <key>.

The command is only available when the maxmemory-policy configuration directive is set to one of the LFU policies.

Return

Integer reply

The counter's value.

299 - OBJECT HELP

Show helpful text about the different subcommands

The OBJECT HELP command returns a helpful text describing the different subcommands.

Return

Array reply: a list of subcommands and their descriptions

300 - OBJECT IDLETIME

Get the time since a Redis object was last accessed

This command returns the time in seconds since the last access to the value stored at <key>.

The command is only available when the maxmemory-policy configuration directive is not set to one of the LFU policies.

Return

Integer reply

The idle time in seconds.

301 - OBJECT REFCOUNT

Get the number of references to the value of the key

This command returns the reference count of the value stored at <key>.

Return

Integer reply

The number of references.

302 - PERSIST

Remove the expiration from a key

Remove the existing timeout on key, turning the key from volatile (a key with an expire set) to persistent (a key that will never expire as no timeout is associated).

Return

Integer reply, specifically:

  • 1 if the timeout was removed.
  • 0 if key does not exist or does not have an associated timeout.

Examples

SET mykey "Hello" EXPIRE mykey 10 TTL mykey PERSIST mykey TTL mykey

303 - PEXPIRE

Set a key's time to live in milliseconds

This command works exactly like EXPIRE but the time to live of the key is specified in milliseconds instead of seconds.

Options

The PEXPIRE command supports a set of options since Redis 7.0:

  • NX -- Set expiry only when the key has no expiry
  • XX -- Set expiry only when the key has an existing expiry
  • GT -- Set expiry only when the new expiry is greater than current one
  • LT -- Set expiry only when the new expiry is less than current one

A non-volatile key is treated as an infinite TTL for the purpose of GT and LT. The GT, LT and NX options are mutually exclusive.
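
For instance, since a key without a TTL counts as infinite, GT can never apply to it while LT always does (a sketch, with mykey as an assumed key):

> SET mykey "Hello"
OK
> PEXPIRE mykey 100000 GT
(integer) 0
> PEXPIRE mykey 100000 LT
(integer) 1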

Return

Integer reply, specifically:

  • 1 if the timeout was set.
  • 0 if the timeout was not set; e.g. the key doesn't exist, or the operation was skipped due to the provided arguments.

Examples

SET mykey "Hello" PEXPIRE mykey 1500 TTL mykey PTTL mykey PEXPIRE mykey 1000 XX TTL mykey PEXPIRE mykey 1000 NX TTL mykey

304 - PEXPIREAT

Set the expiration for a key as a UNIX timestamp specified in milliseconds

PEXPIREAT has the same effect and semantic as EXPIREAT, but the Unix time at which the key will expire is specified in milliseconds instead of seconds.

Options

The PEXPIREAT command supports a set of options since Redis 7.0:

  • NX -- Set expiry only when the key has no expiry
  • XX -- Set expiry only when the key has an existing expiry
  • GT -- Set expiry only when the new expiry is greater than current one
  • LT -- Set expiry only when the new expiry is less than current one

A non-volatile key is treated as an infinite TTL for the purpose of GT and LT. The GT, LT and NX options are mutually exclusive.

Return

Integer reply, specifically:

  • 1 if the timeout was set.
  • 0 if the timeout was not set; e.g. the key doesn't exist, or the operation was skipped due to the provided arguments.

Examples

SET mykey "Hello" PEXPIREAT mykey 1555555555005 TTL mykey PTTL mykey

305 - PEXPIRETIME

Get the expiration Unix timestamp for a key in milliseconds

PEXPIRETIME has the same semantic as EXPIRETIME, but returns the absolute Unix expiration timestamp in milliseconds instead of seconds.

Return

Integer reply: Expiration Unix timestamp in milliseconds, or a negative value in order to signal an error (see the description below).

  • The command returns -1 if the key exists but has no associated expiration time.
  • The command returns -2 if the key does not exist.

Examples

SET mykey "Hello" PEXPIREAT mykey 33177117420000 PEXPIRETIME mykey

306 - PFADD

Adds the specified elements to the specified HyperLogLog.

Adds all the element arguments to the HyperLogLog data structure stored at the variable name specified as first argument.

As a side effect of this command the HyperLogLog internals may be updated to reflect a different estimation of the number of unique items added so far (the cardinality of the set).

If the approximated cardinality estimated by the HyperLogLog changed after executing the command, PFADD returns 1, otherwise 0 is returned. The command automatically creates an empty HyperLogLog structure (that is, a Redis String of a specified length and with a given encoding) if the specified key does not exist.

It is valid to call the command without elements, with just the variable name: this will result in no operation being performed if the variable already exists, or in just the creation of the data structure if the key does not exist (in the latter case 1 is returned).

For an introduction to HyperLogLog data structure check the PFCOUNT command page.

Return

Integer reply, specifically:

  • 1 if at least 1 HyperLogLog internal register was altered. 0 otherwise.

Examples

PFADD hll a b c d e f g
PFCOUNT hll

307 - PFCOUNT

Return the approximated cardinality of the set(s) observed by the HyperLogLog at key(s).

When called with a single key, returns the approximated cardinality computed by the HyperLogLog data structure stored at the specified variable, which is 0 if the variable does not exist.

When called with multiple keys, returns the approximated cardinality of the union of the HyperLogLogs passed, by internally merging the HyperLogLogs stored at the provided keys into a temporary HyperLogLog.

The HyperLogLog data structure can be used in order to count unique elements in a set using just a small constant amount of memory, specifically 12k bytes for every HyperLogLog (plus a few bytes for the key itself).

The returned cardinality of the observed set is not exact, but approximated with a standard error of 0.81%.

For example in order to take the count of all the unique search queries performed in a day, a program needs to call PFADD every time a query is processed. The estimated number of unique queries can be retrieved with PFCOUNT at any time.

Note: as a side effect of calling this function, it is possible that the HyperLogLog is modified, since the last 8 bytes encode the latest computed cardinality for caching purposes. So PFCOUNT is technically a write command.

Return

Integer reply, specifically:

  • The approximated number of unique elements observed via PFADD.

Examples

PFADD hll foo bar zap
PFADD hll zap zap zap
PFADD hll foo bar
PFCOUNT hll
PFADD some-other-hll 1 2 3
PFCOUNT hll some-other-hll

Performances

When PFCOUNT is called with a single key, performance is excellent even if in theory the constant times to process a dense HyperLogLog are high. This is possible because PFCOUNT uses caching in order to remember the cardinality previously computed, which rarely changes because most PFADD operations will not update any register. Hundreds of operations per second are possible.

When PFCOUNT is called with multiple keys, an on-the-fly merge of the HyperLogLogs is performed, which is slow; moreover the cardinality of the union can't be cached, so when used with multiple keys PFCOUNT may take a time on the order of magnitude of a millisecond, and should not be abused.

The user should keep in mind that single-key and multiple-keys executions of this command are semantically different and have different performance characteristics.

HyperLogLog representation

Redis HyperLogLogs are represented using a double representation: the sparse representation suitable for HLLs counting a small number of elements (resulting in a small number of registers set to non-zero value), and a dense representation suitable for higher cardinalities. Redis automatically switches from the sparse to the dense representation when needed.

The sparse representation uses a run-length encoding optimized to store efficiently a big number of registers set to zero. The dense representation is a Redis string of 12288 bytes in order to store 16384 6-bit counters. The need for the double representation comes from the fact that using 12k (which is the dense representation memory requirement) to encode just a few registers for smaller cardinalities is extremely suboptimal.

Both representations are prefixed with a 16-byte header, which includes a magic, an encoding / version field, and the cached cardinality estimation, stored in little endian format (the most significant bit is 1 if the estimation is invalid because the HyperLogLog was updated since the cardinality was last computed).

The HyperLogLog, being a Redis string, can be retrieved with GET and restored with SET. Calling PFADD, PFCOUNT or PFMERGE commands with a corrupted HyperLogLog is never a problem: it may return random values but does not affect the stability of the server. Most of the time when a sparse representation is corrupted, the server recognizes the corruption and returns an error.

The representation is neutral from the point of view of the processor word size and endianness, so the same representation is used by 32-bit and 64-bit processors, big endian or little endian.

More details about the Redis HyperLogLog implementation can be found in this blog post. The source code of the implementation in the hyperloglog.c file is also easy to read and understand, and includes a full specification for the exact encoding used for the sparse and dense representations.

308 - PFDEBUG

Internal commands for debugging HyperLogLog values

The PFDEBUG command is an internal command. It is meant to be used for developing and testing Redis.

309 - PFMERGE

Merge N different HyperLogLogs into a single one.

Merge multiple HyperLogLog values into a unique value that will approximate the cardinality of the union of the observed Sets of the source HyperLogLog structures.

The computed merged HyperLogLog is set to the destination variable, which is created if it does not exist (defaulting to an empty HyperLogLog).

If the destination variable exists, it is treated as one of the source sets and its cardinality will be included in the cardinality of the computed HyperLogLog.

Return

Simple string reply: The command just returns OK.

Examples

PFADD hll1 foo bar zap a
PFADD hll2 a b c foo
PFMERGE hll3 hll1 hll2
PFCOUNT hll3

310 - PFSELFTEST

An internal command for testing HyperLogLog values

The PFSELFTEST command is an internal command. It is meant to be used for developing and testing Redis.

311 - PING

Ping the server

Returns PONG if no argument is provided, otherwise returns a copy of the argument as a bulk. This command is often used to test if a connection is still alive, or to measure latency.

If the client is subscribed to a channel or a pattern, it will instead return a multi-bulk with a "pong" in the first position and an empty bulk in the second position, unless an argument is provided in which case it returns a copy of the argument.

Return

Simple string reply, and specifically PONG, when no argument is provided.

Bulk string reply the argument provided, when applicable.

Examples

PING PING "hello world"

312 - PSETEX

Set the value and expiration in milliseconds of a key

PSETEX works exactly like SETEX with the sole difference that the expire time is specified in milliseconds instead of seconds.

Examples

PSETEX mykey 1000 "Hello"
PTTL mykey
GET mykey

313 - PSUBSCRIBE

Listen for messages published to channels matching the given patterns

Subscribes the client to the given patterns.

Supported glob-style patterns:

  • h?llo subscribes to hello, hallo and hxllo
  • h*llo subscribes to hllo and heeeello
  • h[ae]llo subscribes to hello and hallo, but not hillo

Use \ to escape special characters if you want to match them verbatim.
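
For example (news.* is an assumed pattern; the reply is the subscription confirmation):

> PSUBSCRIBE news.*
1) "psubscribe"
2) "news.*"
3) (integer) 1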

314 - PSYNC

Internal command used for replication

Initiates a replication stream from the master.

The PSYNC command is called by Redis replicas for initiating a replication stream from the master.

For more information about replication in Redis please check the replication page.

Return

Non standard return value, a bulk transfer of the data followed by PING and write requests from the master.

315 - PTTL

Get the time to live for a key in milliseconds

Like TTL this command returns the remaining time to live of a key that has an expire set, with the sole difference that TTL returns the amount of remaining time in seconds while PTTL returns it in milliseconds.

In Redis 2.6 or older the command returns -1 if the key does not exist or if the key exists but has no associated expire.

Starting with Redis 2.8 the return value in case of error changed:

  • The command returns -2 if the key does not exist.
  • The command returns -1 if the key exists but has no associated expire.

Return

Integer reply: TTL in milliseconds, or a negative value in order to signal an error (see the description above).

Examples

SET mykey "Hello" EXPIRE mykey 1 PTTL mykey

316 - PUBLISH

Post a message to a channel

Posts a message to the given channel.

In a Redis Cluster clients can publish to every node. The cluster makes sure that published messages are forwarded as needed, so clients can subscribe to any channel by connecting to any one of the nodes.

Return

Integer reply: the number of clients that received the message. Note that in a Redis Cluster, only clients that are connected to the same node as the publishing client are included in the count.

317 - PUBSUB

A container for Pub/Sub commands

This is a container command for Pub/Sub introspection commands.

To see the list of available commands you can call PUBSUB HELP.

318 - PUBSUB CHANNELS

List active channels

Lists the currently active channels.

An active channel is a Pub/Sub channel with one or more subscribers (excluding clients subscribed to patterns).

If no pattern is specified, all the channels are listed, otherwise if pattern is specified only channels matching the specified glob-style pattern are listed.
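
A sketch, assuming two channels matching news.* currently have subscribers:

> PUBSUB CHANNELS news.*
1) "news.sports"
2) "news.tech"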

Cluster note: in a Redis Cluster clients can subscribe to every node, and can also publish to every other node. The cluster will make sure that published messages are forwarded as needed. That said, PUBSUB's replies in a cluster only report information from the node's Pub/Sub context, rather than the entire cluster.

Return

Array reply: a list of active channels, optionally matching the specified pattern.

319 - PUBSUB HELP

Show helpful text about the different subcommands

The PUBSUB HELP command returns a helpful text describing the different subcommands.

Return

Array reply: a list of subcommands and their descriptions

320 - PUBSUB NUMPAT

Get the count of unique pattern subscriptions

Returns the number of unique patterns that are subscribed to by clients (subscriptions performed using the PSUBSCRIBE command).

Note that this isn't the count of clients subscribed to patterns, but the total number of unique patterns all the clients are subscribed to.

Cluster note: in a Redis Cluster clients can subscribe to every node, and can also publish to every other node. The cluster will make sure that published messages are forwarded as needed. That said, PUBSUB's replies in a cluster only report information from the node's Pub/Sub context, rather than the entire cluster.

Return

Integer reply: the number of patterns all the clients are subscribed to.

321 - PUBSUB NUMSUB

Get the count of subscribers for channels

Returns the number of subscribers (exclusive of clients subscribed to patterns) for the specified channels.

Note that it is valid to call this command without channels. In this case it will just return an empty list.

Cluster note: in a Redis Cluster clients can subscribe to every node, and can also publish to every other node. The cluster will make sure that published messages are forwarded as needed. That said, PUBSUB's replies in a cluster only report information from the node's Pub/Sub context, rather than the entire cluster.

Return

Array reply: a list of channels and number of subscribers for every channel.

The format is channel, count, channel, count, ..., so the list is flat. The order in which the channels are listed is the same as the order of the channels specified in the command call.
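
A sketch of the flat reply, assuming chan1 has two subscribers and chan2 has none:

> PUBSUB NUMSUB chan1 chan2
1) "chan1"
2) (integer) 2
3) "chan2"
4) (integer) 0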

322 - PUBSUB SHARDCHANNELS

List active shard channels

Lists the currently active shard channels.

An active shard channel is a Pub/Sub shard channel with one or more subscribers.

If no pattern is specified, all the channels are listed, otherwise if pattern is specified only channels matching the specified glob-style pattern are listed.

The information returned about the active shard channels is at the shard level and not at the cluster level.

Return

Array reply: a list of active channels, optionally matching the specified pattern.

Examples

> PUBSUB SHARDCHANNELS
1) "orders"
> PUBSUB SHARDCHANNELS o*
1) "orders"

323 - PUBSUB SHARDNUMSUB

Get the count of subscribers for shard channels

Returns the number of subscribers for the specified shard channels.

Note that it is valid to call this command without channels, in this case it will just return an empty list.

Cluster note: in a Redis Cluster, PUBSUB's replies in a cluster only report information from the node's Pub/Sub context, rather than the entire cluster.

Return

Array reply: a list of channels and number of subscribers for every channel.

The format is channel, count, channel, count, ..., so the list is flat. The order in which the channels are listed is the same as the order of the shard channels specified in the command call.

Examples

> PUBSUB SHARDNUMSUB orders
1) "orders"
2) (integer) 1

324 - PUNSUBSCRIBE

Stop listening for messages posted to channels matching the given patterns

Unsubscribes the client from the given patterns, or from all of them if none is given.

When no patterns are specified, the client is unsubscribed from all the previously subscribed patterns. In this case, a message for every unsubscribed pattern will be sent to the client.

325 - QUIT

Close the connection

Ask the server to close the connection. The connection is closed as soon as all pending replies have been written to the client.

Return

Simple string reply: always OK.

326 - RANDOMKEY

Return a random key from the keyspace

Return a random key from the currently selected database.

Return

Bulk string reply: the random key, or nil when the database is empty.

327 - READONLY

Enables read queries for a connection to a cluster replica node

Enables read queries for a connection to a Redis Cluster replica node.

Normally replica nodes will redirect clients to the authoritative master for the hash slot involved in a given command, however clients can use replicas in order to scale reads using the READONLY command.

READONLY tells a Redis Cluster replica node that the client is willing to read possibly stale data and is not interested in running write queries.

When the connection is in readonly mode, the cluster will send a redirection to the client only if the operation involves keys not served by the replica's master node. This may happen because:

  1. The client sent a command about hash slots never served by the master of this replica.
  2. The cluster was reconfigured (for example resharded) and the replica is no longer able to serve commands for a given hash slot.

Return

Simple string reply

328 - READWRITE

Disables read queries for a connection to a cluster replica node

Disables read queries for a connection to a Redis Cluster replica node.

Read queries against a Redis Cluster replica node are disabled by default, but you can use the READONLY command to change this behavior on a per-connection basis. The READWRITE command resets the readonly mode flag of a connection back to readwrite.

Return

Simple string reply

329 - RENAME

Rename a key

Renames key to newkey. It returns an error when key does not exist. If newkey already exists it is overwritten; when this happens RENAME executes an implicit DEL operation, so if the deleted key contains a very big value it may cause high latency even though RENAME itself is usually a constant-time operation.

In Cluster mode, both key and newkey must be in the same hash slot, meaning that in practice only keys that have the same hash tag can be reliably renamed in cluster.

Return

Simple string reply

Examples

SET mykey "Hello" RENAME mykey myotherkey GET myotherkey

Behavior change history

  • >= 3.2.0: The command no longer returns an error when source and destination names are the same.

330 - RENAMENX

Rename a key, only if the new key does not exist

Renames key to newkey if newkey does not yet exist. It returns an error when key does not exist.

In Cluster mode, both key and newkey must be in the same hash slot, meaning that in practice only keys that have the same hash tag can be reliably renamed in cluster.

Return

Integer reply, specifically:

  • 1 if key was renamed to newkey.
  • 0 if newkey already exists.

Examples

SET mykey "Hello" SET myotherkey "World" RENAMENX mykey myotherkey GET myotherkey

331 - REPLCONF

An internal command for configuring the replication stream

The REPLCONF command is an internal command. It is used by a Redis master to configure a connected replica.

332 - REPLICAOF

Make the server a replica of another instance, or promote it as master.

The REPLICAOF command can change the replication settings of a replica on the fly.

If a Redis server is already acting as replica, the command REPLICAOF NO ONE will turn off the replication, turning the Redis server into a MASTER. In the proper form REPLICAOF hostname port will make the server a replica of another server listening at the specified hostname and port.

If a server is already a replica of some master, REPLICAOF hostname port will stop the replication against the old server and start the synchronization against the new one, discarding the old dataset.

The form REPLICAOF NO ONE will stop replication, turning the server into a MASTER, but will not discard the already replicated dataset. So, if the old master stops working, it is possible to turn the replica into a master and set the application to use this new master for reads and writes. Later, when the other Redis server is fixed, it can be reconfigured to work as a replica.

Return

Simple string reply

Examples

> REPLICAOF NO ONE
"OK"

> REPLICAOF 127.0.0.1 6799
"OK"

333 - RESET

Reset the connection

This command performs a full reset of the connection's server-side context, mimicking the effect of disconnecting and reconnecting again.

When the command is called from a regular client connection, it does the following:

  • Discards the current MULTI transaction block, if one exists.
  • Unwatches all keys WATCHed by the connection.
  • Disables CLIENT TRACKING, if in use.
  • Sets the connection to READWRITE mode.
  • Cancels the connection's ASKING mode, if previously set.
  • Sets CLIENT REPLY to ON.
  • Sets the protocol version to RESP2.
  • SELECTs database 0.
  • Exits MONITOR mode, when applicable.
  • Aborts Pub/Sub's subscription state (SUBSCRIBE and PSUBSCRIBE), when appropriate.
  • Deauthenticates the connection, requiring a call to AUTH to reauthenticate when authentication is enabled.

Return

Simple string reply: always 'RESET'.

334 - RESTORE

Create a key using the provided serialized value, previously obtained using DUMP.

Create a key associated with a value that is obtained by deserializing the provided serialized value (obtained via DUMP).

If ttl is 0 the key is created without any expire, otherwise the specified expire time (in milliseconds) is set.

If the ABSTTL modifier was used, ttl should represent an absolute Unix timestamp (in milliseconds) at which the key will expire.

For eviction purposes, you may use the IDLETIME or FREQ modifiers. See OBJECT for more information.

RESTORE will return a "Target key name is busy" error when key already exists unless you use the REPLACE modifier.

RESTORE checks the RDB version and data checksum. If they don't match an error is returned.

Return

Simple string reply: The command returns OK on success.

Examples

redis> DEL mykey
0
redis> RESTORE mykey 0 "\n\x17\x17\x00\x00\x00\x12\x00\x00\x00\x03\x00\
                        x00\xc0\x01\x00\x04\xc0\x02\x00\x04\xc0\x03\x00\
                        xff\x04\x00u#<\xc0;.\xe9\xdd"
OK
redis> TYPE mykey
list
redis> LRANGE mykey 0 -1
1) "1"
2) "2"
3) "3"

335 - RESTORE-ASKING

An internal command for migrating keys in a cluster

The RESTORE-ASKING command is an internal command. It is used by a Redis cluster master during slot migration.

336 - ROLE

Return the role of the instance in the context of replication

Provide information on the role of a Redis instance in the context of replication, by returning whether the instance is currently a master, slave, or sentinel. The command also returns additional information about the state of the replication (if the role is master or slave) or the list of monitored master names (if the role is sentinel).

Output format

The command returns an array of elements. The first element is the role of the instance, as one of the following three strings:

  • "master"
  • "slave"
  • "sentinel"

The additional elements of the array depend on the role.

Master output

An example of output when ROLE is called in a master instance:

1) "master"
2) (integer) 3129659
3) 1) 1) "127.0.0.1"
      2) "9001"
      3) "3129242"
   2) 1) "127.0.0.1"
      2) "9002"
      3) "3129543"

The master output is composed of the following parts:

  1. The string master.
  2. The current master replication offset, which is an offset that masters and replicas share to understand, in partial resynchronizations, the part of the replication stream the replica needs to fetch to continue.
  3. An array of the connected replicas, where every sub-array contains three elements: the replica IP, port, and the last acknowledged replication offset.

Output of the command on replicas

An example of output when ROLE is called in a replica instance:

1) "slave"
2) "127.0.0.1"
3) (integer) 9000
4) "connected"
5) (integer) 3167038

The replica output is composed of the following parts:

  1. The string slave, because of backward compatibility (see note at the end of this page).
  2. The IP of the master.
  3. The port number of the master.
  4. The state of the replication from the point of view of the replica, that can be connect (the instance needs to connect to its master), connecting (the master-replica connection is in progress), sync (the master and replica are trying to perform the synchronization), connected (the replica is online).
  5. The amount of data received by the replica so far, in terms of master replication offset.

Sentinel output

An example of Sentinel output:

1) "sentinel"
2) 1) "resque-master"
   2) "html-fragments-master"
   3) "stats-master"
   4) "metadata-master"

The sentinel output is composed of the following parts:

  1. The string sentinel.
  2. An array of master names monitored by this Sentinel instance.

Return

Array reply: where the first element is one of master, slave, sentinel and the additional elements are role-specific as illustrated above.

Examples

ROLE

A note about the word slave used in this man page: Starting with Redis 5, if not for backward compatibility, the Redis project no longer uses the word slave. Unfortunately in this command the word slave is part of the protocol, so we'll be able to remove such occurrences only when this API will be naturally deprecated.

337 - RPOP

Remove and get the last elements in a list

Removes and returns the last elements of the list stored at key.

By default, the command pops a single element from the end of the list. When provided with the optional count argument, the reply will consist of up to count elements, depending on the list's length.

Return

When called without the count argument:

Bulk string reply: the value of the last element, or nil when key does not exist.

When called with the count argument:

Array reply: list of popped elements, or nil when key does not exist.

Examples

RPUSH mylist "one" "two" "three" "four" "five"
RPOP mylist
RPOP mylist 2
LRANGE mylist 0 -1

338 - RPOPLPUSH

Remove the last element in a list, prepend it to another list and return it

Atomically returns and removes the last element (tail) of the list stored at source, and pushes the element as the first element (head) of the list stored at destination.

For example: consider source holding the list a,b,c, and destination holding the list x,y,z. Executing RPOPLPUSH results in source holding a,b and destination holding c,x,y,z.

If source does not exist, the value nil is returned and no operation is performed. If source and destination are the same, the operation is equivalent to removing the last element from the list and pushing it as first element of the list, so it can be considered as a list rotation command.

Return

Bulk string reply: the element being popped and pushed.

Examples

RPUSH mylist "one"
RPUSH mylist "two"
RPUSH mylist "three"
RPOPLPUSH mylist myotherlist
LRANGE mylist 0 -1
LRANGE myotherlist 0 -1

Pattern: Reliable queue

Redis is often used as a messaging server to implement processing of background jobs or other kinds of messaging tasks. A simple form of queue is often obtained by pushing values into a list on the producer side, and waiting for these values on the consumer side using RPOP (with polling), or BRPOP if the client is better served by a blocking operation.

However, in this context the obtained queue is not reliable, as messages can be lost, for example when there is a network problem or when the consumer crashes just after the message is received but before it can be processed.

RPOPLPUSH (or BRPOPLPUSH for the blocking variant) offers a way to avoid this problem: the consumer fetches the message and at the same time pushes it into a processing list. It will use the LREM command in order to remove the message from the processing list once the message has been processed.

An additional client may monitor the processing list for items that remain there for too long, pushing timed-out items into the queue again if needed. A minimal sketch of the pattern is shown below.
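
The following sketch uses the illustrative key names myqueue and myqueue:processing:

> RPUSH myqueue "job:1"
(integer) 1
> RPOPLPUSH myqueue myqueue:processing
"job:1"

At this point the consumer processes "job:1"; once done, it acknowledges the message by removing it from the processing list:

> LREM myqueue:processing 1 "job:1"
(integer) 1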

Pattern: Circular list

Using RPOPLPUSH with the same source and destination key, a client can visit all the elements of an N-element list, one after the other, in O(N), without having to transfer the full list from the server to the client in a single LRANGE operation.

The above pattern works even if one or both of the following conditions occur:

  • There are multiple clients rotating the list: they'll fetch different elements, until all the elements of the list are visited, and the process restarts.
  • Other clients are actively pushing new items at the end of the list.

The above makes it very simple to implement a system where a set of items must be processed by N workers continuously as fast as possible. An example is a monitoring system that must check that a set of web sites are reachable, with the smallest delay possible, using a number of parallel workers.

Note that this implementation of workers is trivially scalable and reliable, because even if a message is lost the item is still in the queue and will be processed at the next iteration.
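For example, rotating a three-element list by calling RPOPLPUSH with the same key as source and destination:

> RPUSH mylist "a" "b" "c"
(integer) 3
> RPOPLPUSH mylist mylist
"c"
> LRANGE mylist 0 -1
1) "c"
2) "a"
3) "b"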

339 - RPUSH

Append one or multiple elements to a list

Insert all the specified values at the tail of the list stored at key. If key does not exist, it is created as empty list before performing the push operation. When key holds a value that is not a list, an error is returned.

It is possible to push multiple elements using a single command call just by specifying multiple arguments at the end of the command. Elements are inserted one after the other to the tail of the list, from the leftmost element to the rightmost element. So for instance the command RPUSH mylist a b c will result in a list containing a as first element, b as second element and c as third element.

Return

Integer reply: the length of the list after the push operation.

Examples

RPUSH mylist "hello"
RPUSH mylist "world"
LRANGE mylist 0 -1

340 - RPUSHX

Append an element to a list, only if the list exists

Inserts the specified values at the tail of the list stored at key, only if key already exists and holds a list. Contrary to RPUSH, no operation will be performed when key does not yet exist.

Return

Integer reply: the length of the list after the push operation.

Examples

RPUSH mylist "Hello"
RPUSHX mylist "World"
RPUSHX myotherlist "World"
LRANGE mylist 0 -1
LRANGE myotherlist 0 -1

341 - SADD

Add one or more members to a set

Add the specified members to the set stored at key. Specified members that are already a member of this set are ignored. If key does not exist, a new set is created before adding the specified members.

An error is returned when the value stored at key is not a set.

Return

Integer reply: the number of elements that were added to the set, not including all the elements already present in the set.

Examples

SADD myset "Hello"
SADD myset "World"
SADD myset "World"
SMEMBERS myset

342 - SAVE

Synchronously save the dataset to disk

The SAVE command performs a synchronous save of the dataset, producing a point-in-time snapshot of all the data inside the Redis instance in the form of an RDB file.

You almost never want to call SAVE in production environments, where it will block all the other clients. Instead, BGSAVE is usually used. However, in case of issues preventing Redis from creating the background saving child (for instance errors in the fork(2) system call), the SAVE command can be a good last resort to perform the dump of the latest dataset.

Please refer to the persistence documentation for detailed information.

Return

Simple string reply: The command returns OK on success.

343 - SCAN

Incrementally iterate the keys space

The SCAN command and the closely related commands SSCAN, HSCAN and ZSCAN are used in order to incrementally iterate over a collection of elements.

  • SCAN iterates the set of keys in the currently selected Redis database.
  • SSCAN iterates elements of Sets types.
  • HSCAN iterates fields of Hash types and their associated values.
  • ZSCAN iterates elements of Sorted Set types and their associated scores.

Since these commands allow for incremental iteration, returning only a small number of elements per call, they can be used in production without the downside of commands like KEYS or SMEMBERS that may block the server for a long time (even several seconds) when called against big collections of keys or elements.

However, while blocking commands like SMEMBERS are able to provide all the elements that are part of a Set at a given moment, the SCAN family of commands only offers limited guarantees about the returned elements, since the collection that we incrementally iterate can change during the iteration process.

Note that SCAN, SSCAN, HSCAN and ZSCAN all work very similarly, so this documentation covers all four commands. However an obvious difference is that in the case of SSCAN, HSCAN and ZSCAN the first argument is the name of the key holding the Set, Hash or Sorted Set value. The SCAN command does not need any key name argument as it iterates keys in the current database, so the iterated object is the database itself.

SCAN basic usage

SCAN is a cursor based iterator. This means that at every call of the command, the server returns an updated cursor that the user needs to use as the cursor argument in the next call.

An iteration starts when the cursor is set to 0, and terminates when the cursor returned by the server is 0. The following is an example of SCAN iteration:

redis 127.0.0.1:6379> scan 0
1) "17"
2)  1) "key:12"
    2) "key:8"
    3) "key:4"
    4) "key:14"
    5) "key:16"
    6) "key:17"
    7) "key:15"
    8) "key:10"
    9) "key:3"
   10) "key:7"
   11) "key:1"
redis 127.0.0.1:6379> scan 17
1) "0"
2) 1) "key:5"
   2) "key:18"
   3) "key:0"
   4) "key:2"
   5) "key:19"
   6) "key:13"
   7) "key:6"
   8) "key:9"
   9) "key:11"

In the example above, the first call uses zero as a cursor, to start the iteration. The second call uses the cursor returned by the previous call, that is, the first element of the reply: 17.

As you can see the SCAN return value is an array of two values: the first value is the new cursor to use in the next call, the second value is an array of elements.

Since in the second call the returned cursor is 0, the server signaled to the caller that the iteration finished, and the collection was completely explored. Starting an iteration with a cursor value of 0, and calling SCAN until the returned cursor is 0 again is called a full iteration.

Scan guarantees

The SCAN command, and the other commands in the SCAN family, are able to provide to the user a set of guarantees associated with full iterations.

  • A full iteration always retrieves all the elements that were present in the collection from the start to the end of a full iteration. This means that if a given element is inside the collection when an iteration is started, and is still there when an iteration terminates, then at some point SCAN returned it to the user.
  • A full iteration never returns any element that was NOT present in the collection from the start to the end of a full iteration. So if an element was removed before the start of an iteration, and is never added back to the collection for all the time an iteration lasts, SCAN ensures that this element will never be returned.

However because SCAN has very little state associated (just the cursor) it has the following drawbacks:

  • A given element may be returned multiple times. It is up to the application to handle the case of duplicated elements, for example only using the returned elements in order to perform operations that are safe when re-applied multiple times.
  • Elements that were not constantly present in the collection during a full iteration, may be returned or not: it is undefined.

Number of elements returned at every SCAN call

SCAN family functions do not guarantee that the number of elements returned per call is in a given range. The commands are also allowed to return zero elements, and the client should not consider the iteration complete as long as the returned cursor is not zero.

However the number of returned elements is reasonable, that is, in practical terms SCAN may return a maximum number of elements in the order of a few tens of elements when iterating a large collection, or may return all the elements of the collection in a single call when the iterated collection is small enough to be internally represented as an encoded data structure (this happens for small sets, hashes and sorted sets).

However there is a way for the user to tune the order of magnitude of the number of returned elements per call using the COUNT option.

The COUNT option

While SCAN does not provide guarantees about the number of elements returned at every iteration, it is possible to empirically adjust the behavior of SCAN using the COUNT option. Basically, with COUNT the user specifies the amount of work that should be done at every call in order to retrieve elements from the collection. This is just a hint for the implementation; however, generally speaking, this is what you can expect most of the time from the implementation.

  • The default COUNT value is 10.
  • When iterating the key space, or a Set, Hash or Sorted Set that is big enough to be represented by a hash table, assuming no MATCH option is used, the server will usually return count or a bit more than count elements per call. Please check the "Why SCAN may return all the items of an aggregate data type in a single call?" section later in this document.
  • When iterating Sets encoded as intsets (small sets composed of just integers), or Hashes and Sorted Sets encoded as ziplists (small hashes and sets composed of small individual values), usually all the elements are returned in the first SCAN call regardless of the COUNT value.

Important: there is no need to use the same COUNT value for every iteration. The caller is free to change the count from one iteration to the other as required, as long as the cursor passed in the next call is the one obtained in the previous call to the command.

The MATCH option

It is possible to only iterate elements matching a given glob-style pattern, similarly to the behavior of the KEYS command that takes a pattern as only argument.

To do so, just append the MATCH <pattern> arguments at the end of the SCAN command (it works with all the SCAN family commands).

This is an example of iteration using MATCH:

redis 127.0.0.1:6379> sadd myset 1 2 3 foo foobar feelsgood
(integer) 6
redis 127.0.0.1:6379> sscan myset 0 match f*
1) "0"
2) 1) "foo"
   2) "feelsgood"
   3) "foobar"
redis 127.0.0.1:6379>

It is important to note that the MATCH filter is applied after elements are retrieved from the collection, just before returning data to the client. This means that if the pattern matches very few elements inside the collection, SCAN will likely return no elements in most iterations. An example is shown below:

redis 127.0.0.1:6379> scan 0 MATCH *11*
1) "288"
2) 1) "key:911"
redis 127.0.0.1:6379> scan 288 MATCH *11*
1) "224"
2) (empty list or set)
redis 127.0.0.1:6379> scan 224 MATCH *11*
1) "80"
2) (empty list or set)
redis 127.0.0.1:6379> scan 80 MATCH *11*
1) "176"
2) (empty list or set)
redis 127.0.0.1:6379> scan 176 MATCH *11* COUNT 1000
1) "0"
2)  1) "key:611"
    2) "key:711"
    3) "key:118"
    4) "key:117"
    5) "key:311"
    6) "key:112"
    7) "key:111"
    8) "key:110"
    9) "key:113"
   10) "key:211"
   11) "key:411"
   12) "key:115"
   13) "key:116"
   14) "key:114"
   15) "key:119"
   16) "key:811"
   17) "key:511"
   18) "key:11"
redis 127.0.0.1:6379>

As you can see, most of the calls returned zero elements, but the last call used a COUNT of 1000 in order to force the command to do more scanning for that iteration.
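
As a convenience, redis-cli can drive the full cursor loop described above on your behalf via its --scan option (output shown is illustrative):

$ redis-cli --scan --pattern '*11*'
key:911
key:611
key:711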

The TYPE option

You can use the TYPE option to ask SCAN to only return objects that match a given type, allowing you to iterate through the database looking for keys of a specific type. The TYPE option is only available on the whole-database SCAN, not HSCAN or ZSCAN etc.

The type argument is the same string name that the TYPE command returns. Note a quirk: some Redis types, such as GeoHashes, HyperLogLogs, Bitmaps, and Bitfields, may internally be implemented using other Redis types, such as a string or zset, and so can't be distinguished from other keys of the same type by SCAN. For example, a ZSET and a GEOHASH:

redis 127.0.0.1:6379> GEOADD geokey 0 0 value
(integer) 1
redis 127.0.0.1:6379> ZADD zkey 1000 value
(integer) 1
redis 127.0.0.1:6379> TYPE geokey
zset
redis 127.0.0.1:6379> TYPE zkey
zset
redis 127.0.0.1:6379> SCAN 0 TYPE zset
1) "0"
2) 1) "geokey"
   2) "zkey"

It is important to note that the TYPE filter is also applied after elements are retrieved from the database, so the option does not reduce the amount of work the server has to do to complete a full iteration, and for rare types you may receive no elements in many iterations.

Multiple parallel iterations

It is possible for an infinite number of clients to iterate the same collection at the same time, as the full state of the iterator is in the cursor, which is obtained and returned to the client at every call. No server-side state is kept at all.

Terminating iterations in the middle

Since there is no server-side state and the full state is captured by the cursor, the caller is free to terminate an iteration half-way without signaling this to the server in any way. An infinite number of iterations can be started and never terminated without any issue.

Calling SCAN with a corrupted cursor

Calling SCAN with a broken, negative, out of range, or otherwise invalid cursor will result in undefined behavior, but never in a crash. Undefined here means that the guarantees about the returned elements can no longer be ensured by the SCAN implementation.

The only valid cursors to use are:

  • The cursor value of 0 when starting an iteration.
  • The cursor returned by the previous call to SCAN in order to continue the iteration.

Guarantee of termination

The SCAN algorithm is guaranteed to terminate only if the size of the iterated collection remains bounded to a given maximum size; otherwise, iterating a collection that always grows may result in SCAN never terminating a full iteration.

This is easy to see intuitively: if the collection grows there is more and more work to do in order to visit all the possible elements, and the ability to terminate the iteration depends on the number of calls to SCAN and its COUNT option value compared with the rate at which the collection grows.

Why SCAN may return all the items of an aggregate data type in a single call?

In the COUNT option documentation, we state that sometimes this family of commands may return all the elements of a Set, Hash or Sorted Set at once in a single call, regardless of the COUNT option value. The reason why this happens is that the cursor-based iterator can be implemented, and is useful, only when the aggregate data type that we are scanning is represented as a hash table. However Redis uses a memory optimization where small aggregate data types, until they reach a given number of items or a given maximum size of single elements, are represented using a compact single-allocation packed encoding. When this is the case, SCAN has no meaningful cursor to return, and must iterate the whole data structure at once, so the only sane behavior it has is to return everything in a single call.

However, once the data structures are bigger and are promoted to use real hash tables, the SCAN family of commands will resort to the normal behavior. Note that since this special behavior of returning all the elements is true only for small aggregates, it has no effect on the command complexity or latency. However, the exact limits at which an aggregate is converted into a real hash table are user configurable, so the maximum number of elements you can see returned in a single call depends on how big an aggregate data type can be and still use the packed representation.

Also note that this behavior is specific to SSCAN, HSCAN and ZSCAN. SCAN itself never shows this behavior because the key space is always represented by hash tables.

Return value

SCAN, SSCAN, HSCAN and ZSCAN return a two-element multi-bulk reply, where the first element is a string representing an unsigned 64 bit number (the cursor), and the second element is a multi-bulk with an array of elements.

  • SCAN array of elements is a list of keys.
  • SSCAN array of elements is a list of Set members.
  • HSCAN array of elements contains two elements, a field and a value, for every returned element of the Hash.
  • ZSCAN array of elements contains two elements, a member and its associated score, for every returned element of the sorted set.

Additional examples

Iteration of a Hash value.

redis 127.0.0.1:6379> hmset hash name Jack age 33
OK
redis 127.0.0.1:6379> hscan hash 0
1) "0"
2) 1) "name"
   2) "Jack"
   3) "age"
   4) "33"

344 - SCARD

Get the number of members in a set

Returns the set cardinality (number of elements) of the set stored at key.

Return

Integer reply: the cardinality (number of elements) of the set, or 0 if key does not exist.

Examples

SADD myset "Hello"
SADD myset "World"
SCARD myset

345 - SCRIPT

A container for Lua scripts management commands

This is a container command for script management commands.

To see the list of available commands you can call SCRIPT HELP.

346 - SCRIPT DEBUG

Set the debug mode for executed scripts.

Set the debug mode for subsequent scripts executed with EVAL. Redis includes a complete Lua debugger, codename LDB, that can be used to make the task of writing complex scripts much simpler. In debug mode Redis acts as a remote debugging server and a client, such as redis-cli, can execute scripts step by step, set breakpoints, inspect variables and more - for additional information about LDB refer to the Redis Lua debugger page.

Important note: avoid debugging Lua scripts using your Redis production server. Use a development server instead.

LDB can be enabled in one of two modes: asynchronous or synchronous. In asynchronous mode the server creates a forked debugging session that does not block and all changes to the data are rolled back after the session finishes, so debugging can be restarted using the same initial state. The alternative synchronous debug mode blocks the server while the debugging session is active and retains all changes to the data set once it ends.

  • YES. Enable non-blocking asynchronous debugging of Lua scripts (changes are discarded).
  • SYNC. Enable blocking synchronous debugging of Lua scripts (saves changes to data).
  • NO. Disables scripts debug mode.

For more information about EVAL scripts please refer to Introduction to Eval Scripts.
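
In practice, a debugging session is usually started through redis-cli, which enables the debug mode and runs the script in one go (a sketch, assuming a local script file named script.lua; note the comma separating key names from ordinary arguments):

$ redis-cli --ldb --eval script.lua mykey , arg1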

Return

Simple string reply: OK.

347 - SCRIPT EXISTS

Check existence of scripts in the script cache.

Returns information about the existence of the scripts in the script cache.

This command accepts one or more SHA1 digests and returns a list of ones or zeros to signal if the scripts are already defined or not inside the script cache. This can be useful before a pipelining operation to ensure that scripts are loaded (and if not, to load them using SCRIPT LOAD) so that the pipelining operation can be performed solely using EVALSHA instead of EVAL to save bandwidth.

For more information about EVAL scripts please refer to Introduction to Eval Scripts.

Return

Array reply: The command returns an array of integers that correspond to the specified SHA1 digest arguments. For every corresponding SHA1 digest of a script that actually exists in the script cache, a 1 is returned, otherwise a 0 is returned.
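
A typical flow, loading a script and then checking the cache by digest (the SHA1 values shown are illustrative):

> SCRIPT LOAD "return 1"
"e0e1f9fabfc9d4800c877a703b823ac0578ff831"
> SCRIPT EXISTS e0e1f9fabfc9d4800c877a703b823ac0578ff831 0000000000000000000000000000000000000000
1) (integer) 1
2) (integer) 0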

348 - SCRIPT FLUSH

Remove all the scripts from the script cache.

Flush the Lua scripts cache.

By default, SCRIPT FLUSH will synchronously flush the cache. Starting with Redis 6.2, setting the lazyfree-lazy-user-flush configuration directive to "yes" changes the default flush mode to asynchronous.

It is possible to use one of the following modifiers to dictate the flushing mode explicitly:

  • ASYNC: flushes the cache asynchronously
  • SYNC: flushes the cache synchronously

For more information about EVAL scripts please refer to Introduction to Eval Scripts.

Return

Simple string reply

Behavior change history

  • >= 6.2.0: Default flush behavior now configurable by the lazyfree-lazy-user-flush configuration directive.

349 - SCRIPT HELP

Show helpful text about the different subcommands

The SCRIPT HELP command returns a helpful text describing the different subcommands.

Return

Array reply: a list of subcommands and their descriptions

350 - SCRIPT KILL

Kill the script currently in execution.

Kills the currently executing EVAL script, assuming no write operation was yet performed by the script.

This command is mainly useful to kill a script that is running for too long (for instance, because it entered an infinite loop due to a bug). The script will be killed, and the client currently blocked in EVAL will see the command returning with an error.

If the script has already performed write operations, it can not be killed in this way because it would violate Lua's script atomicity contract. In such a case, only SHUTDOWN NOSAVE can kill the script, killing the Redis process in a hard way and preventing it from persisting with half-written information.

For more information about EVAL scripts please refer to Introduction to Eval Scripts.

Return

Simple string reply

351 - SCRIPT LOAD

Load the specified Lua script into the script cache.

Load a script into the scripts cache, without executing it. After the specified script is loaded into the script cache it will be callable using EVALSHA with the correct SHA1 digest of the script, exactly like after the first successful invocation of EVAL.

The script is guaranteed to stay in the script cache forever (unless SCRIPT FLUSH is called).

The command works in the same way even if the script was already present in the script cache.

For more information about EVAL scripts please refer to Introduction to Eval Scripts.

Return

Bulk string reply: This command returns the SHA1 digest of the script added into the script cache.

352 - SDIFF

Subtract multiple sets

Returns the members of the set resulting from the difference between the first set and all the successive sets.

For example:

key1 = {a,b,c,d}
key2 = {c}
key3 = {a,c,e}
SDIFF key1 key2 key3 = {b,d}

Keys that do not exist are considered to be empty sets.

Return

Array reply: list with members of the resulting set.

Examples

SADD key1 "a"
SADD key1 "b"
SADD key1 "c"
SADD key2 "c"
SADD key2 "d"
SADD key2 "e"
SDIFF key1 key2

353 - SDIFFSTORE

Subtract multiple sets and store the resulting set in a key

This command is equal to SDIFF, but instead of returning the resulting set, it is stored in destination.

If destination already exists, it is overwritten.

Return

Integer reply: the number of elements in the resulting set.

Examples

SADD key1 "a"
SADD key1 "b"
SADD key1 "c"
SADD key2 "c"
SADD key2 "d"
SADD key2 "e"
SDIFFSTORE key key1 key2
SMEMBERS key

354 - SELECT

Change the selected database for the current connection

Select the Redis logical database having the specified zero-based numeric index. New connections always use database 0.

Selectable Redis databases are a form of namespacing: all databases are still persisted in the same RDB / AOF file. However different databases can have keys with the same name, and commands like FLUSHDB, SWAPDB or RANDOMKEY work on specific databases.

In practical terms, Redis databases should be used to separate different keys belonging to the same application (if needed), not to host multiple unrelated applications on a single Redis instance.

When using Redis Cluster, the SELECT command cannot be used, since Redis Cluster only supports database zero. In the case of a Redis Cluster, having multiple databases would be useless and an unnecessary source of complexity. Commands operating atomically on a single database would not be possible with the Redis Cluster design and goals.

Since the currently selected database is a property of the connection, clients should track the currently selected database and re-select it on reconnection. While there is no command in order to query the selected database in the current connection, the CLIENT LIST output shows, for each client, the currently selected database.
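
For example, keys with the same name are independent across databases (assuming database 1 does not already hold mykey):

> SET mykey "db0 value"
"OK"
> SELECT 1
"OK"
> GET mykey
(nil)
> SELECT 0
"OK"
> GET mykey
"db0 value"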

Return

Simple string reply

355 - SET

Set the string value of a key

Set key to hold the string value. If key already holds a value, it is overwritten, regardless of its type. Any previous time to live associated with the key is discarded on successful SET operation.

Options

The SET command supports a set of options that modify its behavior:

  • EX seconds -- Set the specified expire time, in seconds.
  • PX milliseconds -- Set the specified expire time, in milliseconds.
  • EXAT timestamp-seconds -- Set the specified Unix time at which the key will expire, in seconds.
  • PXAT timestamp-milliseconds -- Set the specified Unix time at which the key will expire, in milliseconds.
  • NX -- Only set the key if it does not already exist.
  • XX -- Only set the key if it already exists.
  • KEEPTTL -- Retain the time to live associated with the key.
  • GET -- Return the old string stored at key, or nil if key did not exist. An error is returned and SET aborted if the value stored at key is not a string.

Note: Since the SET command options can replace SETNX, SETEX, PSETEX, GETSET, it is possible that in future versions of Redis these commands will be deprecated and finally removed.

Return

Simple string reply: OK if SET was executed correctly.

Null reply: (nil) if the SET operation was not performed because the user specified the NX or XX option but the condition was not met.

If the command is issued with the GET option, the above does not apply. It will instead reply as follows, regardless of whether the SET was actually performed:

Bulk string reply: the old string value stored at key.

Null reply: (nil) if the key did not exist.

Examples

SET mykey "Hello"
GET mykey
SET anotherkey "will expire in a minute" EX 60

Patterns

Note: The following pattern is discouraged in favor of the Redlock algorithm which is only a bit more complex to implement, but offers better guarantees and is fault tolerant.

The command SET resource-name anystring NX EX max-lock-time is a simple way to implement a locking system with Redis.

A client can acquire the lock if the above command returns OK (or retry after some time if the command returns Nil), and remove the lock just using DEL.

The lock will be auto-released after the expire time is reached.
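
For example, trying to acquire a lock that auto-expires after 100 seconds (key and value names are illustrative):

> SET resource-name mytoken NX EX 100
"OK"
> SET resource-name othertoken NX EX 100
(nil)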

It is possible to make this system more robust by modifying the unlock schema as follows:

  • Instead of setting a fixed string, set a non-guessable large random string, called token.
  • Instead of releasing the lock with DEL, send a script that only removes the key if the value matches.

This avoids the case where a client tries to release the lock after the expire time and ends up deleting the key created by another client that acquired the lock later.

An example unlock script would be similar to the following:

if redis.call("get",KEYS[1]) == ARGV[1]
then
    return redis.call("del",KEYS[1])
else
    return 0
end

The script should be called with EVAL ...script... 1 resource-name token-value
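
For example, an inline invocation of the unlock script (resource-name and token-value are placeholders; the reply assumes the lock is currently held with that token):

> EVAL "if redis.call('get',KEYS[1]) == ARGV[1] then return redis.call('del',KEYS[1]) else return 0 end" 1 resource-name token-value
(integer) 1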

356 - SETBIT

Sets or clears the bit at offset in the string value stored at key

Sets or clears the bit at offset in the string value stored at key.

The bit is either set or cleared depending on value, which can be either 0 or 1.

When key does not exist, a new string value is created. The string is grown to make sure it can hold a bit at offset. The offset argument is required to be greater than or equal to 0, and smaller than 2^32 (this limits bitmaps to 512MB). When the string at key is grown, added bits are set to 0.

Warning: When setting the last possible bit (offset equal to 2^32 -1) and the string value stored at key does not yet hold a string value, or holds a small string value, Redis needs to allocate all intermediate memory which can block the server for some time. On a 2010 MacBook Pro, setting bit number 2^32 -1 (512MB allocation) takes ~300ms, setting bit number 2^30 -1 (128MB allocation) takes ~80ms, setting bit number 2^28 -1 (32MB allocation) takes ~30ms and setting bit number 2^26 -1 (8MB allocation) takes ~8ms. Note that once this first allocation is done, subsequent calls to SETBIT for the same key will not have the allocation overhead.

Return

Integer reply: the original bit value stored at offset.

Examples

SETBIT mykey 7 1
SETBIT mykey 7 0
GET mykey

Pattern: accessing the entire bitmap

There are cases when you need to set all the bits of a single bitmap at once, for example when initializing it to a default non-zero value. It is possible to do this with multiple calls to the SETBIT command, one for each bit that needs to be set. However, as an optimization, you can use a single SET command to set the entire bitmap.

Bitmaps are not an actual data type, but a set of bit-oriented operations defined on the String type (for more information refer to the Bitmaps section of the Data Types Introduction page). This means that bitmaps can be used with string commands, and most importantly with SET and GET.

Because Redis' strings are binary-safe, a bitmap is trivially encoded as a byte stream. The first byte of the string corresponds to offsets 0..7 of the bitmap, the second byte to the 8..15 range, and so forth.

For example, after setting a few bits, getting the string value of the bitmap would look like this:

> SETBIT bitmapsarestrings 2 1
> SETBIT bitmapsarestrings 3 1
> SETBIT bitmapsarestrings 5 1
> SETBIT bitmapsarestrings 10 1
> SETBIT bitmapsarestrings 11 1
> SETBIT bitmapsarestrings 14 1
> GET bitmapsarestrings
"42"

By getting the string representation of a bitmap, the client can then parse the response's bytes by extracting the bit values using bit operations in its native programming language. Symmetrically, it is also possible to set an entire bitmap by performing the bits-to-bytes encoding in the client and calling SET with the resulting string.

Pattern: setting multiple bits

SETBIT excels at setting single bits, and can be called several times when multiple bits need to be set. To optimize this operation you can replace multiple SETBIT calls with a single call to the variadic BITFIELD command and the use of fields of type u1.

For instance, the example above could be replaced by:

> BITFIELD bitsinabitmap SET u1 2 1 SET u1 3 1 SET u1 5 1 SET u1 10 1 SET u1 11 1 SET u1 14 1

Advanced Pattern: accessing bitmap ranges

It is also possible to use the GETRANGE and SETRANGE string commands to efficiently access a range of bit offsets in a bitmap. Below is a sample implementation in idiomatic Redis Lua scripting that can be run with the EVAL command:

--[[
Sets a bitmap range

Bitmaps are stored as Strings in Redis. A range spans one or more bytes,
so we can call [`SETRANGE`](/commands/setrange) when entire bytes need to be set instead of flipping
individual bits. Also, to avoid multiple internal memory allocations in
Redis, we traverse in reverse.
Expected input:
  KEYS[1] - bitfield key
  ARGV[1] - start offset (0-based, inclusive)
  ARGV[2] - end offset (same, should be bigger than start, no error checking)
  ARGV[3] - value (should be 0 or 1, no error checking)
]]--

-- A helper function to stringify a binary string to semi-binary format
local function tobits(str)
  local r = ''
  for i = 1, string.len(str) do
    local c = string.byte(str, i)
    local b = ' '
    for j = 0, 7 do
      b = tostring(bit.band(c, 1)) .. b
      c = bit.rshift(c, 1)
    end
    r = r .. b
  end
  return r
end

-- Main
local k = KEYS[1]
local s, e, v = tonumber(ARGV[1]), tonumber(ARGV[2]), tonumber(ARGV[3])

-- First treat the dangling bits in the last byte
local ms, me = s % 8, (e + 1) % 8
if me > 0 then
  local t = math.max(e - me + 1, s)
  for i = e, t, -1 do
    redis.call('SETBIT', k, i, v)
  end
  e = t
end

-- Then the danglings in the first byte
if ms > 0 then
  local t = math.min(s - ms + 7, e)
  for i = s, t, 1 do
    redis.call('SETBIT', k, i, v)
  end
  s = t + 1
end

-- Set a range accordingly, if at all
local rs, re = s / 8, (e + 1) / 8
local rl = re - rs
if rl > 0 then
  local b = '\255'
  if 0 == v then
    b = '\0'
  end
  redis.call('SETRANGE', k, rs, string.rep(b, rl))
end

Note: the implementation for getting a range of bit offsets from a bitmap is left as an exercise to the reader.

357 - SETEX

Set the value and expiration of a key

Set key to hold the string value and set key to timeout after a given number of seconds. This command is equivalent to executing the following commands:

SET mykey value
EXPIRE mykey seconds

SETEX is atomic, and can be reproduced by using the previous two commands inside a MULTI / EXEC block. It is provided as a faster alternative to the given sequence of operations, because this operation is very common when Redis is used as a cache.

An error is returned when seconds is invalid.
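
For reference, the equivalent atomic MULTI / EXEC block mentioned above would look like this sketch:

> MULTI
"OK"
> SET mykey "Hello"
"QUEUED"
> EXPIRE mykey 10
"QUEUED"
> EXEC
1) OK
2) (integer) 1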

Return

Simple string reply

Examples

SETEX mykey 10 "Hello"
TTL mykey
GET mykey

358 - SETNX

Set the value of a key, only if the key does not exist

Set key to hold string value if key does not exist. In that case, it is equal to SET. When key already holds a value, no operation is performed. SETNX is short for "SET if Not eXists".

Return

Integer reply, specifically:

  • 1 if the key was set
  • 0 if the key was not set

Examples

SETNX mykey "Hello"
SETNX mykey "World"
GET mykey

Design pattern: Locking with SETNX

Please note that:

  1. The following pattern is discouraged in favor of the Redlock algorithm which is only a bit more complex to implement, but offers better guarantees and is fault tolerant.
  2. We document the old pattern anyway because certain existing implementations link to this page as a reference. Moreover, it is an interesting example of how Redis commands can be used in order to build programming primitives.
  3. In any case, even assuming a single-instance locking primitive, starting with Redis 2.6.12 it is possible to create a much simpler locking primitive, equivalent to the one discussed here, using the SET command to acquire the lock, and a simple Lua script to release the lock. The pattern is documented in the SET command page.

That said, SETNX can be used, and was historically used, as a locking primitive. For example, to acquire the lock of the key foo, the client could try the following:

SETNX lock.foo <current Unix time + lock timeout + 1>

If SETNX returns 1 the client acquired the lock, setting the lock.foo key to the Unix time at which the lock should no longer be considered valid. The client will later use DEL lock.foo in order to release the lock.

If SETNX returns 0 the key is already locked by some other client. We can either return to the caller if it's a non-blocking lock, or enter a loop retrying to acquire the lock until we succeed or some kind of timeout expires.

Handling deadlocks

In the above locking algorithm there is a problem: what happens if a client fails, crashes, or is otherwise not able to release the lock? It's possible to detect this condition because the lock key contains a UNIX timestamp. If such a timestamp is equal to the current Unix time the lock is no longer valid.

When this happens we can't just call DEL against the key to remove the lock and then try to issue a SETNX, as there is a race condition here when multiple clients detect an expired lock and try to release it:

  • C1 and C2 read lock.foo to check the timestamp, because they both received 0 after executing SETNX, as the lock is still held by C3 that crashed after holding the lock.
  • C1 sends DEL lock.foo
  • C1 sends SETNX lock.foo and it succeeds
  • C2 sends DEL lock.foo
  • C2 sends SETNX lock.foo and it succeeds
  • ERROR: both C1 and C2 acquired the lock because of the race condition.

Fortunately, it's possible to avoid this issue using the following algorithm. Let's see how C4, our sane client, uses the good algorithm:

  • C4 sends SETNX lock.foo in order to acquire the lock

  • The crashed client C3 still holds it, so Redis will reply with 0 to C4.

  • C4 sends GET lock.foo to check if the lock expired. If it is not, it will sleep for some time and retry from the start.

  • Instead, if the lock is expired because the Unix time at lock.foo is older than the current Unix time, C4 tries to perform:

    GETSET lock.foo <current Unix timestamp + lock timeout + 1>
    
  • Because of the GETSET semantic, C4 can check if the old value stored at key is still an expired timestamp. If it is, the lock was acquired.

  • If another client, for instance C5, was faster than C4 and acquired the lock with the GETSET operation, the C4 GETSET operation will return a non-expired timestamp. C4 will simply restart from the first step. Note that even if C4 sets the key a few seconds in the future this is not a problem.

In order to make this locking algorithm more robust, a client holding a lock should always check that the timeout didn't expire before unlocking the key with DEL, because client failures can be complex: a client may not just crash, but also block for a long time on some operation, and try to issue DEL long after the lock has expired and is already held by another client.

359 - SETRANGE

Overwrite part of a string at key starting at the specified offset

Overwrites part of the string stored at key, starting at the specified offset, for the entire length of value. If the offset is larger than the current length of the string at key, the string is padded with zero-bytes to make offset fit. Non-existing keys are considered as empty strings, so this command will make sure it holds a string large enough to be able to set value at offset.

Note that the maximum offset that you can set is 2^29 -1 (536870911), as Redis Strings are limited to 512 megabytes. If you need to grow beyond this size, you can use multiple keys.

Warning: When setting the last possible byte and the string value stored at key does not yet hold a string value, or holds a small string value, Redis needs to allocate all intermediate memory which can block the server for some time. On a 2010 MacBook Pro, setting byte number 536870911 (512MB allocation) takes ~300ms, setting byte number 134217728 (128MB allocation) takes ~80ms, setting byte number 33554432 (32MB allocation) takes ~30ms and setting byte number 8388608 (8MB allocation) takes ~8ms. Note that once this first allocation is done, subsequent calls to SETRANGE for the same key will not have the allocation overhead.

Patterns

Thanks to SETRANGE and the analogous GETRANGE commands, you can use Redis strings as a linear array with O(1) random access. This is a very fast and efficient form of storage in many real-world use cases.

Return

Integer reply: the length of the string after it was modified by the command.

Examples

Basic usage:

SET key1 "Hello World"
SETRANGE key1 6 "Redis"
GET key1

Example of zero padding:

SETRANGE key2 6 "Redis"
GET key2

360 - SHUTDOWN

Synchronously save the dataset to disk and then shut down the server

The command behavior is the following:

  • If there are any replicas lagging behind in replication:
    • Pause clients attempting to write by performing a CLIENT PAUSE with the WRITE option.
    • Wait up to the configured shutdown-timeout (default 10 seconds) for replicas to catch up the replication offset.
  • Stop all the clients.
  • Perform a blocking SAVE if at least one save point is configured.
  • Flush the Append Only File if AOF is enabled.
  • Quit the server.

If persistence is enabled, this command makes sure that Redis is switched off without any data loss.

Note: A Redis instance that is configured for not persisting on disk (no AOF configured, nor "save" directive) will not dump the RDB file on SHUTDOWN, as usually you don't want Redis instances used only for caching to block when shutting down.

Also note: If Redis receives one of the signals SIGTERM and SIGINT, the same shutdown sequence is performed. See also Signal Handling.

Modifiers

It is possible to specify optional modifiers to alter the behavior of the command. Specifically:

  • SAVE will force a DB saving operation even if no save points are configured.
  • NOSAVE will prevent a DB saving operation even if one or more save points are configured.
  • NOW skips waiting for lagging replicas, i.e. it bypasses the first step in the shutdown sequence.
  • FORCE ignores any errors that would normally prevent the server from exiting. For details, see the following section.
  • ABORT cancels an ongoing shutdown and cannot be combined with other flags.

Conditions where a SHUTDOWN fails

When a save point is configured or the SAVE modifier is specified, the shutdown may fail if the RDB file can't be saved. Then, the server continues to run in order to ensure no data loss. This may be bypassed using the FORCE modifier, causing the server to exit anyway.

When the Append Only File is enabled the shutdown may fail because the system is in a state that does not allow it to safely persist on disk immediately.

Normally if there is an AOF child process performing an AOF rewrite, Redis will simply kill it and exit. However, there are situations where it is unsafe to do so and, unless the FORCE modifier is specified, the SHUTDOWN command will be refused with an error instead. This happens in the following situations:

  • The user just turned on AOF, and the server triggered the first AOF rewrite in order to create the initial AOF file. In this context, stopping will result in losing the dataset entirely: once restarted, the server will potentially have AOF enabled without having any AOF file at all.
  • A replica with AOF enabled, reconnected with its master, performed a full resynchronization, and restarted the AOF file, triggering the initial AOF creation process. In this case not completing the AOF rewrite is dangerous because the latest dataset received from the master would be lost. The new master can actually be even a different instance (if the REPLICAOF or SLAVEOF command was used in order to reconfigure the replica), so it is important to finish the AOF rewrite in order to preserve, on disk, the data set that was in memory when the server was terminated.

There are situations when we just want to terminate a Redis instance ASAP, regardless of what its content is. In such a case, the command SHUTDOWN NOW NOSAVE FORCE can be used. In versions before 7.0, where the NOW and FORCE flags are not available, the right combination of commands is to send a CONFIG SET appendonly no followed by a SHUTDOWN NOSAVE. The first command will turn off the AOF if needed, and will terminate the AOF rewriting child if there is one active. The second command will then execute without any problem since the AOF is no longer enabled.

Minimize the risk of data loss

Since Redis 7.0, the server waits for lagging replicas up to a configurable shutdown-timeout, by default 10 seconds, before shutting down. This provides a best-effort attempt to minimize the risk of data loss in a situation where no save points are configured and AOF is disabled. Before version 7.0, shutting down a heavily loaded master node in a diskless setup was more likely to result in data loss. To minimize the risk of data loss in such setups, it's advised to trigger a manual FAILOVER (or CLUSTER FAILOVER) to demote the master to a replica and promote one of the replicas to be the new master, before shutting down a master node.

Return

Simple string reply: OK if ABORT was specified and shutdown was aborted. On successful shutdown, nothing is returned since the server quits and the connection is closed. On failure, an error is returned.

Behavior change history

  • >= 7.0.0: Introduced waiting for lagging replicas before exiting.

361 - SINTER

Intersect multiple sets

Returns the members of the set resulting from the intersection of all the given sets.

For example:

key1 = {a,b,c,d}
key2 = {c}
key3 = {a,c,e}
SINTER key1 key2 key3 = {c}

Keys that do not exist are considered to be empty sets. With one of the keys being an empty set, the resulting set is also empty (since set intersection with an empty set always results in an empty set).

Return

Array reply: list with members of the resulting set.

Examples

SADD key1 "a"
SADD key1 "b"
SADD key1 "c"
SADD key2 "c"
SADD key2 "d"
SADD key2 "e"
SINTER key1 key2

362 - SINTERCARD

Intersect multiple sets and return the cardinality of the result

This command is similar to SINTER, but instead of returning the result set, it returns just the cardinality of the result. Returns the cardinality of the set which would result from the intersection of all the given sets.

Keys that do not exist are considered to be empty sets. With one of the keys being an empty set, the resulting set is also empty (since set intersection with an empty set always results in an empty set).

By default, the command calculates the cardinality of the intersection of all given sets. When provided with the optional LIMIT argument (which defaults to 0 and means unlimited), if the intersection cardinality reaches limit partway through the computation, the algorithm will exit and yield limit as the cardinality. This implementation ensures a significant speedup for queries where the limit is lower than the actual intersection cardinality.

Return

Integer reply: the number of elements in the resulting intersection.

Examples

SADD key1 "a"
SADD key1 "b"
SADD key1 "c"
SADD key1 "d"
SADD key2 "c"
SADD key2 "d"
SADD key2 "e"
SINTER key1 key2
SINTERCARD 2 key1 key2
SINTERCARD 2 key1 key2 LIMIT 1

363 - SINTERSTORE

Intersect multiple sets and store the resulting set in a key

This command is equal to SINTER, but instead of returning the resulting set, it is stored in destination.

If destination already exists, it is overwritten.

Return

Integer reply: the number of elements in the resulting set.

Examples

SADD key1 "a"
SADD key1 "b"
SADD key1 "c"
SADD key2 "c"
SADD key2 "d"
SADD key2 "e"
SINTERSTORE key key1 key2
SMEMBERS key

364 - SISMEMBER

Determine if a given value is a member of a set

Returns whether member is a member of the set stored at key.

Return

Integer reply, specifically:

  • 1 if the element is a member of the set.
  • 0 if the element is not a member of the set, or if key does not exist.

Examples

SADD myset "one"
SISMEMBER myset "one"
SISMEMBER myset "two"

365 - SLAVEOF

Make the server a replica of another instance, or promote it as master.

A note about the word slave used in this man page and command name: starting with Redis version 5, if not for backward compatibility, the Redis project no longer uses the word slave. Please use the new command REPLICAOF. The command SLAVEOF will continue to work for backward compatibility.

The SLAVEOF command can change the replication settings of a replica on the fly. If a Redis server is already acting as replica, the command SLAVEOF NO ONE will turn off the replication, turning the Redis server into a MASTER. In the proper form SLAVEOF hostname port will make the server a replica of another server listening at the specified hostname and port.

If a server is already a replica of some master, SLAVEOF hostname port will stop the replication against the old server and start the synchronization against the new one, discarding the old dataset.

The form SLAVEOF NO ONE will stop replication, turning the server into a MASTER, but will not discard the already replicated dataset. So, if the old master stops working, it is possible to turn the replica into a master and set the application to use this new master in read/write. Later when the other Redis server is fixed, it can be reconfigured to work as a replica.

Return

Simple string reply

366 - SLOWLOG

A container for slow log commands

This is a container command for slow log management commands.

To see the list of available commands you can call SLOWLOG HELP.

367 - SLOWLOG GET

Get the slow log's entries

The SLOWLOG GET command returns entries from the slow log in chronological order.

The Redis Slow Log is a system to log queries that exceeded a specified execution time. The execution time does not include I/O operations like talking with the client, sending the reply and so forth, but just the time needed to actually execute the command (this is the only stage of command execution where the thread is blocked and can not serve other requests in the meantime).

A new entry is added to the slow log whenever a command exceeds the execution time threshold defined by the slowlog-log-slower-than configuration directive. The maximum number of entries in the slow log is governed by the slowlog-max-len configuration directive.

By default the command returns all of the entries in the log. The optional count argument limits the number of returned entries, so the command returns at most count entries.

Each entry from the slow log is comprised of the following six values:

  1. A unique progressive identifier for every slow log entry.
  2. The Unix timestamp at which the logged command was processed.
  3. The amount of time needed for its execution, in microseconds.
  4. The array composing the arguments of the command.
  5. Client IP address and port.
  6. Client name if set via the CLIENT SETNAME command.

The entry's unique ID can be used in order to avoid processing slow log entries multiple times (for instance you may have a script sending you an email alert for every new slow log entry). The ID is never reset in the course of the Redis server execution, only a server restart will reset it.

Return

Array reply: a list of slow log entries.
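
For example, fetching the most recent entry (all values shown are illustrative):

> SLOWLOG GET 1
1) 1) (integer) 14
   2) (integer) 1309448221
   3) (integer) 15
   4) 1) "ping"
   5) "127.0.0.1:58217"
   6) "worker-1"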

368 - SLOWLOG HELP

Show helpful text about the different subcommands

The SLOWLOG HELP command returns a helpful text describing the different subcommands.

Return

Array reply: a list of subcommands and their descriptions

369 - SLOWLOG LEN

Get the slow log's length

This command returns the current number of entries in the slow log.

A new entry is added to the slow log whenever a command exceeds the execution time threshold defined by the slowlog-log-slower-than configuration directive. The maximum number of entries in the slow log is governed by the slowlog-max-len configuration directive. Once the slow log reaches its maximum size, the oldest entry is removed whenever a new entry is created. The slow log can be cleared with the SLOWLOG RESET command.

Return

Integer reply

The number of entries in the slow log.

370 - SLOWLOG RESET

Clear all entries from the slow log

This command resets the slow log, clearing all entries in it.

Once deleted the information is lost forever.

Return

Simple string reply: OK

371 - SMEMBERS

Get all the members in a set

Returns all the members of the set value stored at key.

This has the same effect as running SINTER with one argument key.

Return

Array reply: all elements of the set.

Examples

SADD myset "Hello" SADD myset "World" SMEMBERS myset

372 - SMISMEMBER

Returns the membership associated with the given elements for a set

Returns whether each member is a member of the set stored at key.

For every member, 1 is returned if the value is a member of the set, or 0 if the element is not a member of the set or if key does not exist.

Return

Array reply: list representing the membership of the given elements, in the same order as they are requested.

Examples

SADD myset "one" SADD myset "one" SMISMEMBER myset "one" "notamember"

373 - SMOVE

Move a member from one set to another

Move member from the set at source to the set at destination. This operation is atomic. At any given moment, the element will appear to be a member of either source or destination for other clients.

If the source set does not exist or does not contain the specified element, no operation is performed and 0 is returned. Otherwise, the element is removed from the source set and added to the destination set. When the specified element already exists in the destination set, it is only removed from the source set.

An error is returned if source or destination does not hold a set value.

Return

Integer reply, specifically:

  • 1 if the element is moved.
  • 0 if the element is not a member of source and no operation was performed.

Examples

SADD myset "one" SADD myset "two" SADD myotherset "three" SMOVE myset myotherset "two" SMEMBERS myset SMEMBERS myotherset

374 - SORT

Sort the elements in a list, set or sorted set

Returns or stores the elements contained in the list, set or sorted set at key.

There is also the SORT_RO read-only variant of this command.

By default, sorting is numeric and elements are compared by their value interpreted as double precision floating point number. This is SORT in its simplest form:

SORT mylist

Assuming mylist is a list of numbers, this command will return the same list with the elements sorted from small to large. In order to sort the numbers from large to small, use the DESC modifier:

SORT mylist DESC

When mylist contains string values and you want to sort them lexicographically, use the ALPHA modifier:

SORT mylist ALPHA

Redis is UTF-8 aware, assuming you correctly set the LC_COLLATE environment variable.

The number of returned elements can be limited using the LIMIT modifier. This modifier takes the offset argument, specifying the number of elements to skip, and the count argument, specifying the number of elements to return starting from offset. The following example will return 10 elements of the sorted version of mylist, starting at element 0 (offset is zero-based):

SORT mylist LIMIT 0 10

Almost all modifiers can be used together. The following example will return the first 5 elements, lexicographically sorted in descending order:

SORT mylist LIMIT 0 5 ALPHA DESC

Sorting by external keys

Sometimes you want to sort elements using external keys as weights to compare instead of comparing the actual elements in the list, set or sorted set. Let's say the list mylist contains the elements 1, 2 and 3 representing unique IDs of objects stored in object_1, object_2 and object_3. When these objects have associated weights stored in weight_1, weight_2 and weight_3, SORT can be instructed to use these weights to sort mylist with the following statement:

SORT mylist BY weight_*

The BY option takes a pattern (equal to weight_* in this example) that is used to generate the keys that are used for sorting. These key names are obtained substituting the first occurrence of * with the actual value of the element in the list (1, 2 and 3 in this example).
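
A minimal sketch of this substitution (the weight values are hypothetical):

> RPUSH mylist 1 2 3
(integer) 3
> MSET weight_1 30 weight_2 10 weight_3 20
OK
> SORT mylist BY weight_*
1) "2"
2) "3"
3) "1"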

Skip sorting the elements

The BY option can also take a non-existent key, which causes SORT to skip the sorting operation. This is useful if you want to retrieve external keys (see the GET option below) without the overhead of sorting.

SORT mylist BY nosort

Retrieving external keys

Our previous example returns just the sorted IDs. In some cases, it is more useful to get the actual objects instead of their IDs (object_1, object_2 and object_3). Retrieving external keys based on the elements in a list, set or sorted set can be done with the following command:

SORT mylist BY weight_* GET object_*

The GET option can be used multiple times in order to get more keys for every element of the original list, set or sorted set.

It is also possible to GET the element itself using the special pattern #:

SORT mylist BY weight_* GET object_* GET #

Restrictions for using external keys

When Redis cluster mode is enabled, there is no way to guarantee the existence of the external keys on the node on which the command is processed. In this case, any use of GET or BY that references an external key pattern will cause the command to fail with an error.

Starting from Redis 7.0, any use of GET or BY that references an external key pattern is only allowed if the current user running the command has full key read permissions. Full key read permissions can be set for the user by, for example, specifying '%R~*' or '~*' with the relevant command access rules. You can check the ACL SETUSER command manual for more information on setting ACL access rules. If full key read permissions aren't set, the command will fail with an error.
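
For example, full key read permissions could be granted with a rule like the following (the username is hypothetical):

> ACL SETUSER sorter on +sort %R~*
"OK"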

Storing the result of a SORT operation

By default, SORT returns the sorted elements to the client. With the STORE option, the result will be stored as a list at the specified key instead of being returned to the client.

SORT mylist BY weight_* STORE resultkey

An interesting pattern using SORT ... STORE consists of associating an EXPIRE timeout with the resulting key, so that the result of a SORT operation can be cached for some time. Other clients will use the cached list instead of calling SORT for every request. When the key times out, an updated version of the cache can be created by calling SORT ... STORE again.

Note that for correctly implementing this pattern it is important to avoid multiple clients rebuilding the cache at the same time. Some kind of locking is needed here (for instance using SETNX).
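
A sketch of the pattern under these assumptions (the key names and the 10 second expiry are hypothetical): the client that wins the SETNX lock rebuilds the cache, sets the expiry, and releases the lock.

> SETNX sort:cache:lock 1
(integer) 1
> SORT mylist BY weight_* STORE sort:cache
(integer) 3
> EXPIRE sort:cache 10
(integer) 1
> DEL sort:cache:lock
(integer) 1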

Using hashes in BY and GET

It is possible to use BY and GET options against hash fields with the following syntax:

SORT mylist BY weight_*->fieldname GET object_*->fieldname

The string -> is used to separate the key name from the hash field name. The key is substituted as documented above, and the hash stored at the resulting key is accessed to retrieve the specified hash field.

Return

Array reply: without passing the store option the command returns a list of sorted elements. Integer reply: when the store option is specified the command returns the number of sorted elements in the destination list.

375 - SORT_RO

Sort the elements in a list, set or sorted set. Read-only variant of SORT.

Read-only variant of the SORT command. It is exactly like the original SORT but refuses the STORE option and can safely be used in read-only replicas.

Since the original SORT has a STORE option it is technically flagged as a writing command in the Redis command table. For this reason read-only replicas in a Redis Cluster will redirect it to the master instance even if the connection is in read-only mode (see the READONLY command of Redis Cluster).

The SORT_RO variant was introduced in order to allow SORT behavior in read-only replicas without breaking compatibility on command flags.

See original SORT for more details.

Examples

SORT_RO mylist BY weight_*->fieldname GET object_*->fieldname

Return

Array reply: a list of sorted elements.

376 - SPOP

Remove and return one or multiple random members from a set

Removes and returns one or more random members from the set value stored at key.

This operation is similar to SRANDMEMBER, which returns one or more random elements from a set without removing them.

By default, the command pops a single member from the set. When provided with the optional count argument, the reply will consist of up to count members, depending on the set's cardinality.

Return

When called without the count argument:

Bulk string reply: the removed member, or nil when key does not exist.

When called with the count argument:

Array reply: the removed members, or an empty array when key does not exist.

Examples

SADD myset "one" SADD myset "two" SADD myset "three" SPOP myset SMEMBERS myset SADD myset "four" SADD myset "five" SPOP myset 3 SMEMBERS myset

Distribution of returned elements

Note that this command is not suitable when you need a guaranteed uniform distribution of the returned elements. For more information about the algorithms used for SPOP, look up both the Knuth sampling and Floyd sampling algorithms.

377 - SPUBLISH

Post a message to a shard channel

Posts a message to the given shard channel.

In Redis Cluster, shard channels are assigned to slots by the same algorithm used to assign keys to slots. A shard message must be sent to a node that owns the slot the shard channel is hashed to. The cluster makes sure that published shard messages are forwarded to all the nodes in the shard, so clients can subscribe to a shard channel by connecting to any one of the nodes in the shard.

For more information about sharded pubsub, see Sharded Pubsub.

Return

Integer reply: the number of clients that received the message.

Examples

For example, the following command publishes to channel orders, with a subscriber already waiting for messages.

> spublish orders hello
(integer) 1

378 - SRANDMEMBER

Get one or multiple random members from a set

When called with just the key argument, return a random element from the set value stored at key.

If the provided count argument is positive, return an array of distinct elements. The array's length is either count or the set's cardinality (SCARD), whichever is lower.

If called with a negative count, the behavior changes and the command is allowed to return the same element multiple times. In this case, the number of returned elements is the absolute value of the specified count.

Return

Bulk string reply: without the additional count argument, the command returns a Bulk Reply with the randomly selected element, or nil when key does not exist.

Array reply: when the additional count argument is passed, the command returns an array of elements, or an empty array when key does not exist.

Examples

SADD myset one two three
SRANDMEMBER myset
SRANDMEMBER myset 2
SRANDMEMBER myset -5

Specification of the behavior when count is passed

When the count argument is a positive value this command behaves as follows:

  • No repeated elements are returned.
  • If count is bigger than the set's cardinality, the command will only return the whole set without additional elements.
  • The order of elements in the reply is not truly random, so it is up to the client to shuffle them if needed.

When the count is a negative value, the behavior changes as follows:

  • Repeating elements are possible.
  • Exactly count elements, or an empty array if the set is empty (non-existing key), are always returned.
  • The order of elements in the reply is truly random.

Distribution of returned elements

Note: this section is relevant only for Redis 5 or below, as Redis 6 implements a fairer algorithm.

The distribution of the returned elements is far from perfect when the number of elements in the set is small; this is due to the fact that we use an approximated random element function that does not really guarantee good distribution.

The algorithm used, which is implemented inside dict.c, samples the hash table buckets to find a non-empty one. Once a non-empty bucket is found, since we use chaining in our hash table implementation, the number of elements inside the bucket is checked and a random element is selected.

This means that if you have two non-empty buckets in the entire hash table, and one has three elements while one has just one, the element that is alone in its bucket will be returned with much higher probability.

379 - SREM

Remove one or more members from a set

Remove the specified members from the set stored at key. Specified members that are not a member of this set are ignored. If key does not exist, it is treated as an empty set and this command returns 0.

An error is returned when the value stored at key is not a set.

Return

Integer reply: the number of members that were removed from the set, not including non existing members.

Examples

SADD myset "one" SADD myset "two" SADD myset "three" SREM myset "one" SREM myset "four" SMEMBERS myset

380 - SSCAN

Incrementally iterate Set elements

See SCAN for SSCAN documentation.

381 - SSUBSCRIBE

Listen for messages published to the given shard channels

Subscribes the client to the specified shard channels.

In a Redis cluster, shard channels are assigned to slots by the same algorithm used to assign keys to slots. Clients can subscribe to a node covering a slot (primary/replica) to receive the messages published. All the specified shard channels need to belong to a single slot in a given SSUBSCRIBE call; a client can subscribe to channels across different slots using separate SSUBSCRIBE calls.

For more information about sharded Pub/Sub, see Sharded Pub/Sub.

Examples

> ssubscribe orders
Reading messages... (press Ctrl-C to quit)
1) "ssubscribe"
2) "orders"
3) (integer) 1
1) "message"
2) "orders"
3) "hello"

382 - STRLEN

Get the length of the value stored in a key

Returns the length of the string value stored at key. An error is returned when key holds a non-string value.

Return

Integer reply: the length of the string at key, or 0 when key does not exist.

Examples

SET mykey "Hello world" STRLEN mykey STRLEN nonexisting

383 - SUBSCRIBE

Listen for messages published to the given channels

Subscribes the client to the specified channels.

Once the client enters the subscribed state it is not supposed to issue any other commands, except for additional SUBSCRIBE, SSUBSCRIBE, PSUBSCRIBE, UNSUBSCRIBE, SUNSUBSCRIBE, PUNSUBSCRIBE, PING, RESET and QUIT commands.
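
For illustration, subscribing to a hypothetical channel news produces a confirmation message of this form:

> SUBSCRIBE news
Reading messages... (press Ctrl-C to quit)
1) "subscribe"
2) "news"
3) (integer) 1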

Behavior change history

  • >= 6.2.0: RESET can be called to exit subscribed state.

384 - SUBSTR

Get a substring of the string stored at a key

Returns the substring of the string value stored at key, determined by the offsets start and end (both are inclusive). Negative offsets can be used in order to provide an offset starting from the end of the string. So -1 means the last character, -2 the penultimate and so forth.

The function handles out of range requests by limiting the resulting range to the actual length of the string.

Return

Bulk string reply

Examples

SET mykey "This is a string" GETRANGE mykey 0 3 GETRANGE mykey -3 -1 GETRANGE mykey 0 -1 GETRANGE mykey 10 100

385 - SUNION

Add multiple sets

Returns the members of the set resulting from the union of all the given sets.

For example:

key1 = {a,b,c,d}
key2 = {c}
key3 = {a,c,e}
SUNION key1 key2 key3 = {a,b,c,d,e}

Keys that do not exist are considered to be empty sets.

Return

Array reply: list with members of the resulting set.

Examples

SADD key1 "a" SADD key1 "b" SADD key1 "c" SADD key2 "c" SADD key2 "d" SADD key2 "e" SUNION key1 key2

386 - SUNIONSTORE

Add multiple sets and store the resulting set in a key

This command is equal to SUNION, but instead of returning the resulting set, it is stored in destination.

If destination already exists, it is overwritten.

Return

Integer reply: the number of elements in the resulting set.

Examples

SADD key1 "a" SADD key1 "b" SADD key1 "c" SADD key2 "c" SADD key2 "d" SADD key2 "e" SUNIONSTORE key key1 key2 SMEMBERS key

387 - SUNSUBSCRIBE

Stop listening for messages posted to the given shard channels

Unsubscribes the client from the given shard channels, or from all of them if none is given.

When no shard channels are specified, the client is unsubscribed from all the previously subscribed shard channels. In this case a message for every unsubscribed shard channel will be sent to the client.

Note: The global channels and shard channels need to be unsubscribed from separately.

For more information about sharded Pub/Sub, see Sharded Pub/Sub.
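
Examples

For illustration, unsubscribing from the shard channel of the SSUBSCRIBE example produces a confirmation message of this form:

> sunsubscribe orders
1) "sunsubscribe"
2) "orders"
3) (integer) 0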

388 - SWAPDB

Swaps two Redis databases

This command swaps two Redis databases, so that immediately all the clients connected to a given database will see the data of the other database, and the other way around. Example:

SWAPDB 0 1

This will swap database 0 with database 1. All the clients connected with database 0 will immediately see the new data, exactly like all the clients connected with database 1 will see the data that was formerly of database 0.

Return

Simple string reply: OK if SWAPDB was executed correctly.

Examples

SWAPDB 0 1

389 - SYNC

Internal command used for replication

Initiates a replication stream from the master.

The SYNC command is called by Redis replicas for initiating a replication stream from the master. It has been replaced in newer versions of Redis by PSYNC.

For more information about replication in Redis please check the replication page.

Return

Non standard return value, a bulk transfer of the data followed by PING and write requests from the master.

390 - TDIGEST.ADD

Adds one or more samples to a sketch

Adds one or more samples to a sketch.

Parameters:

  • key: The name of the sketch.
  • val: The value to add.
  • weight: The weight of this point.

Return

OK on success, error otherwise.

Examples

redis> TDIGEST.ADD t-digest 42 1 194 0.3
OK
redis> TDIGEST.ADD t-digest string 1.0
(error) ERR T-Digest: error parsing val parameter
redis> TDIGEST.ADD t-digest 42 string
(error) ERR T-Digest: error parsing weight parameter

391 - TDIGEST.CDF

Returns the fraction of all points added which are <= value

Returns the fraction of all points added which are <= value.

Parameters:

  • key: The name of the sketch.
  • quantile: the upper limit; the fraction of all points added which are <= this value is returned.

Return

Bulk string reply - the fraction of all points added which are <= value.

Examples

redis> TDIGEST.CDF t-digest 10
"0.041666666666666664"
redis> TDIGEST.QUANTILE nonexist 42
"nan"

392 - TDIGEST.CREATE

Allocate the memory and initialize the t-digest

Allocate the memory and initialize the t-digest.

Parameters:

  • key: The name of the sketch.
  • compression: The compression parameter. 100 is a common value for normal uses. 1000 is extremely large. See the further notes below.

Further notes on compression vs accuracy: Constructing a T-Digest requires a compression parameter which determines the size of the digest and accuracy of quantile estimation. The scaling of accuracy versus the compression parameter is illustrated in the following figure retrieved from "Ted Dunning, The t-digest: Efficient estimates of distributions, Software Impacts, Volume 7, 2021".

[Figure: the scaling of accuracy versus the compression parameter]

Return

OK on success, error otherwise.

Examples

redis> TDIGEST.CREATE t-digest 100
OK

393 - TDIGEST.INFO

Returns information about a sketch

Returns compression, capacity, total merged and unmerged nodes, the total compressions made up to date on that key, and merged and unmerged weight.

Parameters:

  • key: The name of the sketch.

Return

Array reply with information of the sketch.

Examples

redis> tdigest.info t-digest
 1) Compression
 2) (integer) 100
 3) Capacity
 4) (integer) 610
 5) Merged nodes
 6) (integer) 3
 7) Unmerged nodes
 8) (integer) 2
 9) Merged weight
10) "120"
11) Unmerged weight
12) "1000"
13) Total compressions
14) (integer) 1

394 - TDIGEST.MAX

Get maximum value from the sketch

Get maximum value from the sketch. Will return DBL_MIN if the sketch is empty.

Parameters:

  • key: The name of the sketch.

Return

Simple string reply of the maximum value from the sketch. Will return DBL_MIN if the sketch is empty.

Examples

redis> TDIGEST.MAX t-digest
"10"

395 - TDIGEST.MERGE

Merges all of the values from 'from' to 'this' sketch

Merges all of the values from 'from' to 'this' sketch.

Parameters:

  • to-key: Sketch to copy values to.
  • from-key: Sketch to copy values from.

Return

OK on success, error otherwise

Examples

redis> TDIGEST.MERGE to-sketch from-sketch
OK

396 - TDIGEST.MIN

Get minimum value from the sketch

Get minimum value from the sketch. Will return DBL_MAX if the sketch is empty.

Parameters:

  • key: The name of the sketch.

Return

Simple string reply of the minimum value from the sketch. Will return DBL_MAX if the sketch is empty.

Examples

redis> TDIGEST.MIN t-digest
"10"

397 - TDIGEST.QUANTILE

Returns an estimate of the cutoff such that a specified fraction of the data added to this TDigest would be less than or equal to the specified cutoffs. Multiple quantiles can be returned with one call.

Returns an estimate of the cutoff such that a specified fraction of the data added to this TDigest would be less than or equal to the specified cutoffs.

Multiple quantiles can be returned with one call.

Parameters:

  • key: The name of the sketch.
  • quantile: The desired fraction (between 0 and 1 inclusive).

Return

Array reply - the command returns an array of results populated with quantile_1, cutoff_1, quantile_2, cutoff_2, ..., quantile_N, cutoff_N.

Examples

redis> TDIGEST.QUANTILE t-digest 0.5
1) "0.5"
2) "100.42"
redis> TDIGEST.QUANTILE t-digest 0.5 0.999
1) "0.5"
2) "100.42"
3) "0.999"
4) "190.01"

398 - TDIGEST.RESET

Reset the sketch to zero - empty out the sketch and re-initialize it

Reset the sketch to zero - empty out the sketch and re-initialize it.

Parameters:

  • key: The name of the sketch.

Return

OK on success, error otherwise.

Examples

redis> TDIGEST.RESET t-digest
OK

399 - TDIGEST.TRIMMED_MEAN

Returns the trimmed mean ignoring values outside given cutoff upper and lower limits

Get the mean value from the sketch, excluding values outside the low and high cutoff percentiles.

Parameters:

  • key: The name of the sketch.
  • low_cut_percentile: Exclude values lower than this percentile.
  • high_cut_percentile: Exclude values higher than this percentile.

Return

Simple string reply of the mean value from the sketch. Will return DBL_MAX if the sketch is empty.

Examples

redis> TDIGEST.TRIMMED_MEAN t-digest 0.1 0.9
"9.500001"

400 - TIME

Return the current server time

The TIME command returns the current server time as a two-item array: a Unix timestamp and the amount of microseconds already elapsed in the current second. The interface is very similar to the one of the gettimeofday system call.

Return

Array reply, specifically:

A multi bulk reply containing two elements:

  • unix time in seconds.
  • microseconds.

Examples

TIME
TIME

401 - TOPK.ADD

Increases the count of one or more items by increment

Adds an item to the data structure. Multiple items can be added at once. If an item enters the Top-K list, the item which is expelled is returned. This allows dynamic heavy-hitter detection of items being entered or expelled from Top-K list.

Parameters

  • key: Name of sketch where item is added.
  • item: Item/s to be added.

Return

Array reply: for each added item, the item that was expelled from the Top-K list, or Nil if no item was expelled.

Example

redis> TOPK.ADD topk foo bar 42
1) (nil)
2) baz
3) (nil)

402 - TOPK.COUNT

Return the count for one or more items in a sketch

Returns count for an item. Multiple items can be requested at once. Please note this number will never be higher than the real count and is likely to be lower.

Parameters

  • key: Name of sketch where item is counted.
  • item: Item/s to be counted.

Return

Array reply of Integer reply: the count for each corresponding item.

Examples

redis> TOPK.COUNT topk foo 42 nonexist
1) (integer) 3
2) (integer) 1
3) (integer) 0

403 - TOPK.INCRBY

Increases the count of one or more items by increment

Increase the score of an item in the data structure by increment. Multiple items' score can be increased at once. If an item enters the Top-K list, the item which is expelled is returned.

Parameters

  • key: Name of sketch where item is added.
  • item: Item/s to be added.
  • increment: increment to current item score.

Return

Array reply: for each item, the item that was expelled from the Top-K list, or Nil if no item was expelled.

Example

redis> TOPK.INCRBY topk foo 3 bar 2 42 30
1) (nil)
2) (nil)
3) foo

404 - TOPK.INFO

Returns information about a sketch

Returns number of required items (k), width, depth and decay values.

Parameters

  • key: Name of sketch.

Return

Array reply with information of the filter.

Examples

TOPK.INFO topk
1) k
2) (integer) 50
3) width
4) (integer) 2000
5) depth
6) (integer) 7
7) decay
8) "0.92500000000000004"

405 - TOPK.LIST

Return full list of items in Top K list

Return full list of items in Top K list.

Parameters

  • key: Name of sketch where item is counted.
  • WITHCOUNT: Count of each element is returned.

Return

k (or fewer) items in the Top-K list.

Array reply of Bulk string reply: the names of items in the Top-K list. If WITHCOUNT is requested, the reply consists of Bulk string reply and Integer reply pairs: the names of items in the Top-K list and their counts.

Examples

TOPK.LIST topk
1) foo
2) 42
3) bar
TOPK.LIST topk WITHCOUNT
1) foo
2) (integer) 12
3) 42
4) (integer) 7
5) bar
6) (integer) 2

406 - TOPK.QUERY

Checks whether one or more items are in a sketch

Checks whether an item is one of Top-K items. Multiple items can be checked at once.

Parameters

  • key: Name of sketch where item is queried.
  • item: Item/s to be queried.

Return

[] - "1" if item is in Top-K, otherwise "0".

Examples

redis> TOPK.QUERY topk 42 nonexist
1) (integer) 1
2) (integer) 0

407 - TOPK.RESERVE

Initializes a TopK with specified parameters

Initializes a TopK with specified parameters.

Parameters

  • key: Key under which the sketch is to be found.
  • topk: Number of top occurring items to keep.

Optional parameters

  • width: Number of counters kept in each array. (Default 8)
  • depth: Number of arrays. (Default 7)
  • decay: The probability of reducing a counter in an occupied bucket. It is raised to the power of its counter (decay ^ bucket[i].counter). Therefore, as the counter gets higher, the chance of a reduction decreases, as the worked example below illustrates. (Default 0.9)
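
For example, with the default decay of 0.9, a bucket whose counter is 1 is decayed with probability 0.9, while a bucket whose counter has reached 10 is decayed with probability 0.9^10 ≈ 0.35, so well-established heavy hitters become increasingly resistant to decay.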

Return

OK on success, error otherwise.

Examples

redis> TOPK.RESERVE topk 50 2000 7 0.925
OK

408 - TOUCH

Alters the last access time of one or more keys. Returns the number of existing keys specified.

Alters the last access time of one or more keys. A key is ignored if it does not exist.

Return

Integer reply: The number of keys that were touched.

Examples

SET key1 "Hello" SET key2 "World" TOUCH key1 key2

409 - TS.ADD

Append a sample to a time series

TS.ADD

Append a sample to a time series.

TS.ADD key timestamp value [RETENTION retentionPeriod] [ENCODING [COMPRESSED|UNCOMPRESSED]] [CHUNK_SIZE size] [ON_DUPLICATE policy] [LABELS {label value}...]

If the time series does not exist - it will be automatically created.

  • key - Key name for time series
  • timestamp - (integer) UNIX sample timestamp in milliseconds. * can be used for an automatic timestamp from the server's clock.
  • value - (double) numeric data value of the sample. We expect the double number to follow RFC 7159 (JSON standard). In particular, the parser will reject overly large values that would not fit in binary64. It will not accept NaN or infinite values.

The following arguments are optional because they can be set by TS.CREATE:

  • RETENTION retentionPeriod - Maximum retention period, compared to maximal existing timestamp (in milliseconds).

    Used only if a new time series is created. Ignored when adding samples to an existing time series.

    When set to 0, the series is not trimmed. If not specified: set to the global RETENTION_POLICY configuration of the database (which, by default, is 0).

  • ENCODING enc - Specify the series samples encoding format. One of the following values:

    • COMPRESSED: apply the DoubleDelta compression to the series samples, meaning compression of Delta of Deltas between timestamps and compression of values via XOR encoding.
    • UNCOMPRESSED: keep the raw samples in memory.

    Used only if a new time series is created. Ignored when adding samples to an existing time series.

  • CHUNK_SIZE size - Memory size, in bytes, allocated for each data chunk. Must be a multiple of 8 in the range [128 .. 1048576].

    Used only if a new time series is created. Ignored when adding samples to an existing time series.

    If not specified: set to 4096.

  • ON_DUPLICATE policy - Overwrite key and database configuration for DUPLICATE_POLICY (policy for handling samples with identical timestamps). One of the following values:

    • BLOCK - an error will occur for any out of order sample
    • FIRST - ignore any newly reported value
    • LAST - override with the newly reported value
    • MIN - only override if the value is lower than the existing value
    • MAX - only override if the value is higher than the existing value
    • SUM - If a previous sample exists, add the new sample to it so that the updated value is equal to (previous + new). If no previous sample exists, set the updated value equal to the new value.
  • LABELS {label value}... - Set of label-value pairs that represent metadata labels of the time series.

    Used only if a new time series is created. Ignored when adding samples to an existing time series.

Examples

127.0.0.1:6379> TS.ADD temperature:2:32 1548149180000 26 LABELS sensor_id 2 area_id 32
(integer) 1548149180000
127.0.0.1:6379> TS.ADD temperature:3:11 1548149183000 27 RETENTION 3600
(integer) 1548149183000
127.0.0.1:6379> TS.ADD temperature:3:11 * 30
(integer) 1559718352000

Complexity

If a compaction rule exists on a time series, TS.ADD performance might be reduced. The complexity of TS.ADD is always O(M), where M is the number of compaction rules, or O(1) with no compaction.

Notes

  • You can use this command to add data to a nonexisting time series in a single command. This is why RETENTION, ENCODING, CHUNK_SIZE, ON_DUPLICATE, and LABELS are optional arguments.
  • When specified and the key doesn't exist, a new time series will be created. Setting RETENTION and LABELS introduces additional time complexity.
  • If timestamp is older than the retention period (compared to maximal existing timestamp) - the sample will not be appended.
  • When adding a sample to a time series for which compaction rules are defined:
    • If all the original samples for an affected aggregated time bucket are available - the compacted value will be recalculated based on the reported sample and the original samples.
    • If only part of the original samples for an affected aggregated time bucket are available (due to trimming caused in accordance with the time series RETENTION policy) - the compacted value will be recalculated based on the reported sample and the available original samples.
    • If the original samples for an affected aggregated time bucket are not available (due to trimming caused in accordance with the time series RETENTION policy) - the compacted value bucket will not be updated.

410 - TS.ALTER

Update the retention, chunk size, duplicate policy, and labels of an existing time series

Update

TS.ALTER

Update the retention, chunk size, duplicate policy, and labels of an existing time series.

TS.ALTER key [RETENTION retentionPeriod] [CHUNK_SIZE size] [DUPLICATE_POLICY policy] [LABELS [{label value}...]]
  • key - Key name for time series

  • RETENTION retentionPeriod - Maximum retention period, compared to maximal existing timestamp (in milliseconds).

    • When set to 0, the series is not trimmed at all
  • CHUNK_SIZE size - memory size, in bytes, allocated for each data chunk. Must be a multiple of 8 in the range [128 .. 1048576].

  • DUPLICATE_POLICY policy - Policy for handling samples with identical timestamps. One of the following values:

    • BLOCK - an error will occur for any out of order sample
    • FIRST - ignore any newly reported value
    • LAST - override with the newly reported value
    • MIN - only override if the value is lower than the existing value
    • MAX - only override if the value is higher than the existing value
    • SUM - If a previous sample exists, add the new sample to it so that the updated value is equal to (previous + new). If no previous sample exists, set the updated value equal to the new value.

    When not specified, the server-wide default will be used.

  • LABELS [{label value}...] - Set of label-value pairs that represent metadata labels of the key

    If LABELS is specified, the given label-list is applied. Labels that are not present in the given list are removed implicitly.

    Specifying LABELS with no label-value pairs will remove all existing labels.

Alter Example

TS.ALTER temperature:2:32 LABELS sensor_id 2 area_id 32 sub_area_id 15
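
As noted above, specifying LABELS with no label-value pairs removes all existing labels:

TS.ALTER temperature:2:32 LABELS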

Notes

  • This command alters only the given element. E.g., if LABELS is specified, but RETENTION isn't, only the labels are altered.

411 - TS.CREATE

Create a new time series

Create

TS.CREATE

Create a new time series.

TS.CREATE key [RETENTION retentionPeriod] [ENCODING [UNCOMPRESSED|COMPRESSED]] [CHUNK_SIZE size] [DUPLICATE_POLICY policy] [LABELS {label value}...]
  • key - Key name for time series

Optional args:

  • RETENTION retentionPeriod - Maximum age for samples compared to last event time (in milliseconds)

    When set to 0, the series is not trimmed.

    When not specified: set to the global RETENTION_POLICY configuration of the database (which, by default, is 0).

  • ENCODING enc - Specify the series samples encoding format. One of the following values:

    • COMPRESSED: apply the DoubleDelta compression to the series samples, meaning compression of Delta of Deltas between timestamps and compression of values via XOR encoding.
    • UNCOMPRESSED: keep the raw samples in memory. Adding this flag will keep data in an uncompressed form. Compression not only saves memory but usually improve performance due to lower number of memory accesses.

    When not specified: set to COMPRESSED.

  • CHUNK_SIZE size - memory size, in bytes, allocated for each data chunk. Must be a multiple of 8 in the range [128 .. 1048576].

    When not specified: set to 4096.

  • DUPLICATE_POLICY policy - Policy for handling multiple samples with identical timestamps. One of the following values:

    • BLOCK - an error will occur for any out of order sample
    • FIRST - ignore any newly reported value
    • LAST - override with the newly reported value
    • MIN - only override if the value is lower than the existing value
    • MAX - only override if the value is higher than the existing value
    • SUM - If a previous sample exists, add the new sample to it so that the updated value is equal to (previous + new). If no previous sample exists, set the updated value equal to the new value.

    When not specified: set to the global DUPLICATE_POLICY configuration of the database (which, by default, is BLOCK).

  • LABELS {label value}... - Set of label-value pairs that represent metadata labels of the key

Complexity

TS.CREATE complexity is O(1).

Create Example

TS.CREATE temperature:2:32 RETENTION 60000 DUPLICATE_POLICY MAX LABELS sensor_id 2 area_id 32

Errors

  • If a key already exists, you get a normal Redis error reply TSDB: key already exists. You can check for the existence of a key with Redis EXISTS command.

Notes

TS.ADD can also create a new time-series if called with a key that does not exist.

412 - TS.CREATERULE

Create a compaction rule

TS.CREATERULE

Create a compaction rule.

TS.CREATERULE sourceKey destKey AGGREGATION aggregator bucketDuration [alignTimestamp]
  • sourceKey - Key name for source time series

  • destKey - Key name for destination (compacted) time series

  • AGGREGATION aggregator bucketDuration

    Aggregate results into time buckets.

    • aggregator - Aggregation type: One of the following:
      aggregator   description
      avg          arithmetic mean of all values
      sum          sum of all values
      min          minimum value
      max          maximum value
      range        difference between the highest and the lowest value
      count        number of values
      first        the value with the lowest timestamp in the bucket
      last         the value with the highest timestamp in the bucket
      std.p        population standard deviation of the values
      std.s        sample standard deviation of the values
      var.p        population variance of the values
      var.s        sample variance of the values
      twa          time-weighted average of all values
    • bucketDuration - duration of each bucket, in milliseconds
    • alignTimestamp - alignment of the compacted buckets' start times

    When alignTimestamp is not provided, the alignment of time buckets is 0.

destKey should be of a timeseries type, and should be created before TS.CREATERULE is called.

Notes:

  • Calling TS.CREATERULE with a nonempty destKey can result in undefined behavior
  • Samples should not be explicitly added to destKey
  • Only new samples that are added into the source series after the creation of the rule will be aggregated
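
Example

A minimal sketch (the key names are hypothetical): the destination series is created first, then a rule compacts the source into hourly averages.

TS.CREATE temperature:2:32
TS.CREATE temperature:2:32:avg
TS.CREATERULE temperature:2:32 temperature:2:32:avg AGGREGATION avg 3600000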

413 - TS.DECRBY

Decrease the value of the sample with the maximal existing timestamp, or create a new sample with a value equal to the value of the sample with the maximal existing timestamp with a given decrement

TS.DECRBY

Decrease the value of the sample with the maximal existing timestamp, or create a new sample with a value equal to the value of the sample with the maximal existing timestamp with a given decrement.

TS.DECRBY key value [TIMESTAMP timestamp] [RETENTION retentionPeriod] [UNCOMPRESSED] [CHUNK_SIZE size] [LABELS {label value}...]

If the time series does not exist - it will be automatically created.

This command can be used as a counter or gauge that automatically gets history as a time series.

  • key - Key name for time series
  • value - numeric data value of the sample (double)

Optional args:

  • TIMESTAMP timestamp - (integer) UNIX sample timestamp in milliseconds. * can be used for an automatic timestamp from the server's clock.

    timestamp must be equal to or higher than the maximal existing timestamp. When equal, the value of the sample with the maximal existing timestamp is decreased. When higher, a new sample with a timestamp set to timestamp will be created, and its value will be set to the value of the sample with the maximal existing timestamp minus value. If the time series is empty - the value would be set to value.

  • RETENTION retentionPeriod - Maximum retention period, compared to maximal existing timestamp (in milliseconds).

    Used only if a new time series is created. Ignored when adding samples to an existing time series.

    When set to 0, the series is not trimmed. If not specified: set to the global RETENTION_POLICY configuration of the database (which, by default, is 0).

  • UNCOMPRESSED - Changes data storage from compressed (by default) to uncompressed

    Used only if a new time series is created. Ignored when adding samples to an existing time series.

  • CHUNK_SIZE size - Memory size, in bytes, allocated for each data chunk. Must be a multiple of 8 in the range [128 .. 1048576].

    Used only if a new time series is created. Ignored when adding samples to an existing time series.

    If not specified: set to 4096.

  • LABELS {label value}... - Set of label-value pairs that represent metadata labels of the time series.

    Used only if a new time series is created. Ignored when adding samples to an existing time series.

Notes

  • You can use this command to add data to a nonexisting time series in a single command. This is why RETENTION, UNCOMPRESSED, CHUNK_SIZE, and LABELS are optional arguments.
  • When specified and the key doesn't exist, a new time series is created. Setting the RETENTION and LABELS introduces additional time complexity.
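
Example

A minimal sketch using a hypothetical gauge key (the returned integers are the sample timestamps):

127.0.0.1:6379> TS.ADD available:slots 1548149180000 100
(integer) 1548149180000
127.0.0.1:6379> TS.DECRBY available:slots 3 TIMESTAMP 1548149185000
(integer) 1548149185000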

414 - TS.DEL

Delete all samples between two timestamps for a given time series

TS.DEL

Delete all samples between two timestamps for a given time series.

The given timestamp interval is closed (inclusive), meaning samples whose timestamp equals fromTimestamp or toTimestamp will also be deleted.

TS.DEL key fromTimestamp toTimestamp
  • key - Key name for time series
  • fromTimestamp - Start timestamp for the range deletion.
  • toTimestamp - End timestamp for the range deletion.

Return value

Integer reply: The number of samples that were removed.

Complexity

TS.DEL complexity is O(N) where N is the number of data points that will be removed.

Delete range of data points example

127.0.0.1:6379> TS.DEL temperature:2:32 1548149180000 1548149183000
(integer) 150

415 - TS.DELETERULE

Delete a compaction rule

TS.DELETERULE

Delete a compaction rule.

TS.DELETERULE sourceKey destKey
  • sourceKey - Key name for source time series
  • destKey - Key name for compacted time series

Note that this command does not delete the compacted series.

416 - TS.GET

Get the last sample

TS.GET

Get the last sample.

TS.GET key
  • key - Key name for time series

Return Value

Array-reply, specifically:

The returned array will contain:

  • The last sample timestamp followed by the last sample value, when the time series contains data.
  • An empty array, when the time series is empty.

Complexity

TS.GET complexity is O(1).

Examples

Get Example on time series containing data
127.0.0.1:6379> TS.GET temperature:2:32
1) (integer) 1548149279
2) "23"
Get Example on empty time series
127.0.0.1:6379> TS.GET empty_ts
(empty array)

417 - TS.INCRBY

Increase the value of the sample with the maximal existing timestamp, or create a new sample with a value equal to the value of the sample with the maximal existing timestamp with a given increment

TS.INCRBY

Increase the value of the sample with the maximal existing timestamp, or create a new sample with a value equal to the value of the sample with the maximal existing timestamp with a given increment.

TS.INCRBY key value [TIMESTAMP timestamp] [RETENTION retentionPeriod] [UNCOMPRESSED] [CHUNK_SIZE size] [LABELS {label value}...]

If the time series does not exist - it will be automatically created.

This command can be used as a counter or gauge that automatically gets history as a time series.

  • key - Key name for time series
  • value - numeric data value of the sample (double)

Optional args:

  • TIMESTAMP timestamp - (integer) UNIX sample timestamp in milliseconds. * can be used for an automatic timestamp from the server's clock.

    timestamp must be equal to or higher than the maximal existing timestamp. When equal, the value of the sample with the maximal existing timestamp is increased. When higher, a new sample with a timestamp set to timestamp will be created, and its value will be set to the value of the sample with the maximal existing timestamp plus value. If the time series is empty - the value would be set to value.

  • RETENTION retentionPeriod - Maximum retention period, compared to maximal existing timestamp (in milliseconds).

    Used only if a new time series is created. Ignored when adding samples to an existing time series.

    When set to 0, the series is not trimmed. If not specified: set to the global RETENTION_POLICY configuration of the database (which, by default, is 0).

  • UNCOMPRESSED - Changes data storage from compressed (by default) to uncompressed

    Used only if a new time series is created. Ignored when adding samples to an existing time series.

  • CHUNK_SIZE size - Memory size, in bytes, allocated for each data chunk. Must be a multiple of 8 in the range [128 .. 1048576].

    Used only if a new time series is created. Ignored when adding samples to an existing time series.

    If not specified: set to 4096.

  • LABELS {label value}... - Set of label-value pairs that represent metadata labels of the time series.

    Used only if a new time series is created. Ignored when adding samples to an existing time series.

Notes

  • You can use this command to add data to a nonexisting time series in a single command. This is why RETENTION, UNCOMPRESSED, CHUNK_SIZE, and LABELS are optional arguments.
  • When specified and the key doesn't exist, a new time series is created. Setting the RETENTION and LABELS introduces additional time complexity.
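
Example

A minimal counter sketch (the key name is hypothetical; the returned timestamps are illustrative), letting the server clock supply the timestamps:

127.0.0.1:6379> TS.INCRBY requests:total 1 TIMESTAMP *
(integer) 1655000000000
127.0.0.1:6379> TS.INCRBY requests:total 1 TIMESTAMP *
(integer) 1655000001000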

418 - TS.INFO

Returns information and statistics for a time series

TS.INFO

Format

TS.INFO key [DEBUG]

Description

Returns information and statistics for a time series.

Parameters

  • key - Key name of the time series
  • DEBUG - An optional flag to get more detailed information about the chunks.

Complexity

O(1)

Return Value

Array-reply, specifically:

  • totalSamples - Total number of samples in this time series
  • memoryUsage - Total number of bytes allocated for this time series
  • firstTimestamp - First timestamp present in this time series
  • lastTimestamp - Last timestamp present in this time series
  • retentionTime - The retention period, in milliseconds, for this time series
  • chunkCount - Number of Memory Chunks used for this time series
  • chunkSize - Memory size, in bytes, allocated for data
  • chunkType - The chunk type: compressed or uncompressed
  • duplicatePolicy - The duplicate policy of this time series
  • labels - A nested array of label-value pairs that represent the metadata labels of this time series
  • sourceKey - Key name for source time series in case the current series is a target of a compaction rule
  • rules - A nested array of the compaction rules defined in this time series

When DEBUG is specified, the response will contain an additional array field called Chunks. Each item (per chunk) will contain:

  • startTimestamp - First timestamp present in the chunk
  • endTimestamp - Last timestamp present in the chunk
  • samples - Total number of samples in the chunk
  • size - The chunk data size in bytes (the exact size used for data inside the chunk; it doesn't include other overheads)
  • bytesPerSample - Ratio of size and samples

TS.INFO Example

TS.INFO temperature:2:32
 1) totalSamples
 2) (integer) 100
 3) memoryUsage
 4) (integer) 4184
 5) firstTimestamp
 6) (integer) 1548149180
 7) lastTimestamp
 8) (integer) 1548149279
 9) retentionTime
10) (integer) 0
11) chunkCount
12) (integer) 1
13) chunkSize
14) (integer) 256
15) chunkType
16) compressed
17) duplicatePolicy
18) (nil)
19) labels
20) 1) 1) "sensor_id"
       2) "2"
    2) 1) "area_id"
       2) "32"
21) sourceKey
22) (nil)
23) rules
24) (empty list or set)

With DEBUG:

...
23) rules
24) (empty list or set)
25) keySelfName
26) "temperature:2:32"
27) Chunks
28) 1)  1) startTimestamp
        2) (integer) 1548149180
        3) endTimestamp
        4) (integer) 1548149279
        5) samples
        6) (integer) 100
        7) size
        8) (integer) 256
        9) bytesPerSample
       10) "1.2799999713897705"

419 - TS.MADD

Append new samples to one or more time series

TS.MADD

Append new samples to one or more time series.

TS.MADD {key timestamp value}...
  • key - Key name for time series
  • timestamp - (integer) UNIX sample timestamp in milliseconds. * can be used for an automatic timestamp from the server's clock.
  • value - numeric data value of the sample (double). We expect the double number to follow RFC 7159 (JSON standard). In particular, the parser will reject overly large values that would not fit in binary64. It will not accept NaN or infinite values.

Examples

127.0.0.1:6379> TS.MADD temperature:2:32 1548149180000 26 cpu:2:32 1548149183000 54
1) (integer) 1548149180000
2) (integer) 1548149183000
127.0.0.1:6379> TS.MADD temperature:2:32 1548149181000 45 cpu:2:32 1548149180000 30
1) (integer) 1548149181000
2) (integer) 1548149180000

Complexity

If a compaction rule exists on a time series, TS.MADD performance might be reduced. The complexity of TS.MADD is always O(N*M), where N is the number of series updated and M is the number of compaction rules, or O(N) with no compaction.

420 - TS.MGET

Get the last samples matching a specific filter

TS.MGET

Get the last samples matching a specific filter.

TS.MGET [WITHLABELS | SELECTED_LABELS label...] FILTER filter...
  • FILTER filter...

    This is the list of possible filters:

    • label=value - label equals value
    • label!=value - label doesn't equal value
    • label= - key does not have the label label
    • label!= - key has label label
    • label=(value1,value2,...) - key with label label that equals one of the values in the list
    • label!=(value1,value2,...) - key with label label that doesn't equal any of the values in the list

    Note: Whenever filters need to be provided, a minimum of one label=value filter must be applied.

Optional args:

  • WITHLABELS - Include in the reply all label-value pairs representing metadata labels of the time series.
  • SELECTED_LABELS label... - Include in the reply a subset of the label-value pairs that represent metadata labels of the time series. This is useful when there is a large number of labels per series, but only the values of some of the labels are required.

If WITHLABELS or SELECTED_LABELS are not specified, by default, an empty list is reported as the label-value pairs.

Return Value

For each time series matching the specified filters, the following is reported:

  • The key name
  • A list of label-value pairs
    • By default, an empty list is reported
    • If WITHLABELS is specified, all labels associated with this time series are reported
    • If SELECTED_LABELS label... is specified, the selected labels are reported
  • The last sample's timetag-value pair

Note: the MGET command can't be part of a transaction when running on a Redis cluster.

Complexity

TS.MGET complexity is O(n).

n = Number of time series that match the filters

Examples

MGET Example with default behaviour
127.0.0.1:6379> TS.MGET FILTER area_id=32
1) 1) "temperature:2:32"
   2) (empty list or set)
   3) 1) (integer) 1548149181000
      2) "30"
2) 1) "temperature:3:32"
   2) (empty list or set)
   3) 1) (integer) 1548149181000
      2) "29"
MGET Example with WITHLABELS option
127.0.0.1:6379> TS.MGET WITHLABELS FILTER area_id=32
1) 1) "temperature:2:32"
   2) 1) 1) "sensor_id"
         2) "2"
      2) 1) "area_id"
         2) "32"
   3) 1) (integer) 1548149181000
      2) "30"
2) 1) "temperature:3:32"
   2) 1) 1) "sensor_id"
         2) "2"
      2) 1) "area_id"
         2) "32"
   3) 1) (integer) 1548149181000
      2) "29"

421 - TS.MRANGE

Query a range across multiple time series by filters in forward direction

TS.MRANGE

Query a range across multiple time series by filters in forward direction.

TS.MRANGE fromTimestamp toTimestamp
          [FILTER_BY_TS TS...]
          [FILTER_BY_VALUE min max]
          [WITHLABELS | SELECTED_LABELS label...]
          [COUNT count]
          [[ALIGN value] AGGREGATION aggregator bucketDuration [BUCKETTIMESTAMP bt] [EMPTY]]
          FILTER filter...
          [GROUPBY label REDUCE reducer]
  • fromTimestamp - Start timestamp for the range query. - can be used to express the minimum possible timestamp (0).

  • toTimestamp - End timestamp for range query, + can be used to express the maximum possible timestamp.

  • FILTER filter...

    This is the list of possible filters:

    • label=value - label equals value
    • label!=value - label doesn't equal value
    • label= - key does not have the label label
    • label!= - key has label label
    • label=(value1,value2,...) - key with label label that equals one of the values in the list
    • label!=(value1,value2,...) - key with label label that doesn't equal any of the values in the list

    Note: Whenever filters need to be provided, a minimum of one label=value filter must be applied.

Optional parameters:

  • FILTER_BY_TS ts... - Followed by a list of timestamps to filter the result by specific timestamps

  • FILTER_BY_VALUE min max - Filter result by value using minimum and maximum.

  • WITHLABELS - Include in the reply all label-value pairs representing metadata labels of the time series.

    If WITHLABELS or SELECTED_LABELS are not specified, by default, an empty list is reported as the label-value pairs.

  • SELECTED_LABELS label... - Include in the reply a subset of the label-value pairs that represent metadata labels of the time series. This is useful when there is a large number of labels per series, but only the values of some of the labels are required.

    If WITHLABELS or SELECTED_LABELS are not specified, by default, an empty list is reported as the label-value pairs.

  • COUNT count - Maximum number of returned samples per time series.

  • ALIGN value - Time bucket alignment control for AGGREGATION. This will control the time bucket timestamps by changing the reference timestamp on which a bucket is defined. Possible values:

    • start or -: The reference timestamp will be the query start interval time (fromTimestamp) which can't be -
    • end or +: The reference timestamp will be the query end interval time (toTimestamp) which can't be +
    • A specific timestamp: align the reference timestamp to a specific time
    • Note: when not provided, alignment is set to 0
  • AGGREGATION aggregator bucketDuration

    Aggregate results into time buckets.

    • aggregator - Aggregation type: One of the following:
      aggregator   description
      avg          arithmetic mean of all values
      sum          sum of all values
      min          minimum value
      max          maximum value
      range        difference between the highest and the lowest value
      count        number of values
      first        the value with the lowest timestamp in the bucket
      last         the value with the highest timestamp in the bucket
      std.p        population standard deviation of the values
      std.s        sample standard deviation of the values
      var.p        population variance of the values
      var.s        sample variance of the values
      twa          time-weighted average of all values
    • bucketDuration - duration of each bucket, in milliseconds

    The alignment of time buckets is 0.

  • GROUPBY label REDUCE reducer

    Aggregate results across different time series, grouped by the provided label name.

    When combined with AGGREGATION the groupby/reduce is applied post aggregation stage.

    • label - label name to group series by. A new series for each value will be produced.
    • reducer - Reducer type used to aggregate series that share the same label value. One of the following:
      reducer   description
      avg       per label value: arithmetic mean of all values
      sum       per label value: sum of all values
      min       per label value: minimum value
      max       per label value: maximum value
      range     per label value: difference between the highest and the lowest value
      count     per label value: number of values
      std.p     per label value: population standard deviation of the values
      std.s     per label value: sample standard deviation of the values
      var.p     per label value: population variance of the values
      var.s     per label value: sample variance of the values
    • Note: The produced time series will be named <label>=<groupbyvalue>
    • Note: The produced time series will contain 2 labels with the following label array structure:
      • __reducer__ : the reducer used
      • __source__ : the time series keys used to compute the grouped series ("key1,key2,key3,...")

Return Value

For each time series matching the specified filters, the following is reported:

  • The key name
  • A list of label-value pairs
    • By default, an empty list is reported
    • If WITHLABELS is specified, all labels associated with this time series are reported
    • If SELECTED_LABELS label... is specified, the selected labels are reported
  • timestamp-value pairs for all samples/aggregations matching the range

Note: the MRANGE command can't be part of a transaction when running on a Redis cluster.

Examples

Query by Filters Example
127.0.0.1:6379> TS.MRANGE 1548149180000 1548149210000 AGGREGATION avg 5000 FILTER area_id=32 sensor_id!=1
1) 1) "temperature:2:32"
   2) (empty list or set)
   3) 1) 1) (integer) 1548149180000
         2) "27.600000000000001"
      2) 1) (integer) 1548149185000
         2) "23.800000000000001"
      3) 1) (integer) 1548149190000
         2) "24.399999999999999"
      4) 1) (integer) 1548149195000
         2) "24"
      5) 1) (integer) 1548149200000
         2) "25.600000000000001"
      6) 1) (integer) 1548149205000
         2) "25.800000000000001"
      7) 1) (integer) 1548149210000
         2) "21"
2) 1) "temperature:3:32"
   2) (empty list or set)
   3) 1) 1) (integer) 1548149180000
         2) "26.199999999999999"
      2) 1) (integer) 1548149185000
         2) "27.399999999999999"
      3) 1) (integer) 1548149190000
         2) "24.800000000000001"
      4) 1) (integer) 1548149195000
         2) "23.199999999999999"
      5) 1) (integer) 1548149200000
         2) "25.199999999999999"
      6) 1) (integer) 1548149205000
         2) "28"
      7) 1) (integer) 1548149210000
         2) "20"
Query by Filters Example with WITHLABELS option
127.0.0.1:6379> TS.MRANGE 1548149180000 1548149210000 AGGREGATION avg 5000 WITHLABELS FILTER area_id=32 sensor_id!=1
1) 1) "temperature:2:32"
   2) 1) 1) "sensor_id"
         2) "2"
      2) 1) "area_id"
         2) "32"
   3) 1) 1) (integer) 1548149180000
         2) "27.600000000000001"
      2) 1) (integer) 1548149185000
         2) "23.800000000000001"
      3) 1) (integer) 1548149190000
         2) "24.399999999999999"
      4) 1) (integer) 1548149195000
         2) "24"
      5) 1) (integer) 1548149200000
         2) "25.600000000000001"
      6) 1) (integer) 1548149205000
         2) "25.800000000000001"
      7) 1) (integer) 1548149210000
         2) "21"
2) 1) "temperature:3:32"
   2) 1) 1) "sensor_id"
         2) "3"
      2) 1) "area_id"
         2) "32"
   3) 1) 1) (integer) 1548149180000
         2) "26.199999999999999"
      2) 1) (integer) 1548149185000
         2) "27.399999999999999"
      3) 1) (integer) 1548149190000
         2) "24.800000000000001"
      4) 1) (integer) 1548149195000
         2) "23.199999999999999"
      5) 1) (integer) 1548149200000
         2) "25.199999999999999"
      6) 1) (integer) 1548149205000
         2) "28"
      7) 1) (integer) 1548149210000
         2) "20"
Query time series with metric=cpu, grouped by metric_name, reduced by max
127.0.0.1:6379> TS.ADD ts1 1548149180000 90 labels metric cpu metric_name system
(integer) 1548149180000
127.0.0.1:6379> TS.ADD ts1 1548149185000 45
(integer) 1548149185000
127.0.0.1:6379> TS.ADD ts2 1548149180000 99 labels metric cpu metric_name user
(integer) 1548149180000
127.0.0.1:6379> TS.MRANGE - + WITHLABELS FILTER metric=cpu GROUPBY metric_name REDUCE max
1) 1) "metric_name=system"
   2) 1) 1) "metric_name"
         2) "system"
      2) 1) "__reducer__"
         2) "max"
      3) 1) "__source__"
         2) "ts1"
   3) 1) 1) (integer) 1548149180000
         2) 90
      2) 1) (integer) 1548149185000
         2) 45
2) 1) "metric_name=user"
   2) 1) 1) "metric_name"
         2) "user"
      2) 1) "__reducer__"
         2) "max"
      3) 1) "__source__"
         2) "ts2"
   3) 1) 1) (integer) 1548149180000
         2) 99
Query time series with metric=cpu, filtering values greater than or equal to 90.0 and less than or equal to 100.0
127.0.0.1:6379> TS.ADD ts1 1548149180000 90 labels metric cpu metric_name system
(integer) 1548149180000
127.0.0.1:6379> TS.ADD ts1 1548149185000 45
(integer) 1548149185000
127.0.0.1:6379> TS.ADD ts2 1548149180000 99 labels metric cpu metric_name user
(integer) 1548149180000
127.0.0.1:6379> TS.MRANGE - + FILTER_BY_VALUE 90 100 WITHLABELS FILTER metric=cpu
1) 1) "ts1"
   2) 1) 1) "metric"
         2) "cpu"
      2) 1) "metric_name"
         2) "system"
   3) 1) 1) (integer) 1548149180000
         2) 90
2) 1) "ts2"
   2) 1) 1) "metric"
         2) "cpu"
      2) 1) "metric_name"
         2) "user"
   3) 1) 1) (integer) 1548149180000
         2) 99
Query time series with metric=cpu, returning only the team label
127.0.0.1:6379> TS.ADD ts1 1548149180000 90 labels metric cpu metric_name system team NY
(integer) 1548149180000
127.0.0.1:6379> TS.ADD ts1 1548149185000 45
(integer) 1548149185000
127.0.0.1:6379> TS.ADD ts2 1548149180000 99 labels metric cpu metric_name user team SF
(integer) 1548149180000
127.0.0.1:6379> TS.MRANGE - + SELECTED_LABELS team FILTER metric=cpu
1) 1) "ts1"
   2) 1) 1) "team"
         2) "NY"
   3) 1) 1) (integer) 1548149180000
         2) 90
      2) 1) (integer) 1548149185000
         2) 45
2) 1) "ts2"
   2) 1) 1) "team"
         2) "SF"
   3) 1) 1) (integer) 1548149180000
         2) 99

422 - TS.MREVRANGE

Query a range across multiple time series by filters in reverse direction

TS.MREVRANGE

Query a range across multiple time series by filters in reverse direction.

TS.MREVRANGE fromTimestamp toTimestamp
          [FILTER_BY_TS ts...]
          [FILTER_BY_VALUE min max]
          [WITHLABELS | SELECTED_LABELS label...]
          [COUNT count]
          [[ALIGN value] AGGREGATION aggregator bucketDuration [BUCKETTIMESTAMP bt] [EMPTY]]
          FILTER filter...
          [GROUPBY label REDUCE reducer]
  • fromTimestamp - Start timestamp for the range query. - can be used to express the minimum possible timestamp (0).

  • toTimestamp - End timestamp for range query, + can be used to express the maximum possible timestamp.

  • FILTER filter...

    This is the list of possible filters:

    • label=value - label equals value
    • label!=value - label doesn't equal value
    • label= - key does not have the label label
    • label!= - key has label label
    • label=(value1,value2,...) - key with label label that equals one of the values in the list
    • label!=(value1,value2,...) - key with label label that doesn't equal any of the values in the list

    Note: When providing filters, at least one label=value filter must be applied.

Optional parameters:

  • FILTER_BY_TS ts... - Followed by a list of timestamps to filter the result by specific timestamps

  • FILTER_BY_VALUE min max - Filter result by value using minimum and maximum.

  • WITHLABELS - Include in the reply all label-value pairs representing metadata labels of the time series.

    If WITHLABELS or SELECTED_LABELS are not specified, by default, an empty list is reported as the label-value pairs.

  • SELECTED_LABELS label... - Include in the reply a subset of the label-value pairs that represent metadata labels of the time series. This is useful when there is a large number of labels per series, but only the values of some of them are required.

    If WITHLABELS or SELECTED_LABELS are not specified, by default, an empty list is reported as the label-value pairs.

  • COUNT count - Maximum number of returned samples per time series.

  • ALIGN value - Time bucket alignment control for AGGREGATION. This will control the time bucket timestamps by changing the reference timestamp on which a bucket is defined. Possible values:

    • start or -: The reference timestamp will be the query start interval time (fromTimestamp) which can't be -
    • end or +: The reference timestamp will be the query end interval time (toTimestamp) which can't be +
    • A specific timestamp: align the reference timestamp to a specific time
    • Note: when not provided, alignment is set to 0
  • AGGREGATION aggregator bucketDuration

    Aggregate results into time buckets.

    • aggregator - Aggregation type: One of the following:
      • avg - arithmetic mean of all values
      • sum - sum of all values
      • min - minimum value
      • max - maximum value
      • range - difference between the highest and the lowest value
      • count - number of values
      • first - the value with the lowest timestamp in the bucket
      • last - the value with the highest timestamp in the bucket
      • std.p - population standard deviation of the values
      • std.s - sample standard deviation of the values
      • var.p - population variance of the values
      • var.s - sample variance of the values
      • twa - time-weighted average of all values
    • bucketDuration - duration of each bucket, in milliseconds

    The alignment of time buckets is 0.

  • GROUPBY label REDUCE reducer

    Aggregate results across different time series, grouped by the provided label name.

    When combined with AGGREGATION, the GROUPBY/REDUCE stage is applied after the aggregation stage.

    • label - label name to group series by. A new series for each value will be produced.
    • reducer - Reducer type used to aggregate series that share the same label value. One of the following:
      • avg - per label value: arithmetic mean of all values
      • sum - per label value: sum of all values
      • min - per label value: minimum value
      • max - per label value: maximum value
      • range - per label value: difference between the highest and the lowest value
      • count - per label value: number of values
      • std.p - per label value: population standard deviation of the values
      • std.s - per label value: sample standard deviation of the values
      • var.p - per label value: population variance of the values
      • var.s - per label value: sample variance of the values
    • Note: The produced time series will be named <label>=<groupbyvalue>
    • Note: The produced time series will contain 2 labels with the following label array structure:
      • __reducer__ : the reducer used
      • __source__ : the time series keys used to compute the grouped series ("key1,key2,key3,...")

Return Value

For each time series matching the specified filters, the following is reported:

  • The key name
  • A list of label-value pairs
    • By default, an empty list is reported
    • If WITHLABELS is specified, all labels associated with this time series are reported
    • If SELECTED_LABELS label... is specified, the selected labels are reported
  • timestamp-value pairs for all samples/aggregations matching the range

Note: MREVRANGE command can't be part of transaction when running on Redis cluster.

Examples

Query by Filters Example
127.0.0.1:6379> TS.MREVRANGE 1548149180000 1548149210000 AGGREGATION avg 5000 FILTER area_id=32 sensor_id!=1
1) 1) "temperature:2:32"
   2) (empty list or set)
   3) 1) 1) (integer) 1548149210000
         2) "21"
      2) 1) (integer) 1548149205000
         2) "25.800000000000001"
      3) 1) (integer) 1548149200000
         2) "25.600000000000001"
      4) 1) (integer) 1548149195000
         2) "24"
      5) 1) (integer) 1548149190000
         2) "24.399999999999999"
      6) 1) (integer) 1548149185000
         2) "23.800000000000001"
      7) 1) (integer) 1548149180000
         2) "27.600000000000001"
2) 1) "temperature:3:32"
   2) (empty list or set)
   3) 1) 1) (integer) 1548149210000
         2) "20"
      2) 1) (integer) 1548149205000
         2) "28"
      3) 1) (integer) 1548149200000
         2) "25.199999999999999"
      4) 1) (integer) 1548149195000
         2) "23.199999999999999"
      5) 1) (integer) 1548149190000
         2) "24.800000000000001"
      6) 1) (integer) 1548149185000
         2) "27.399999999999999"
      7) 1) (integer) 1548149180000
         2) "26.199999999999999"
Query by Filters Example with WITHLABELS option
127.0.0.1:6379> TS.MREVRANGE 1548149180000 1548149210000 AGGREGATION avg 5000 WITHLABELS FILTER area_id=32 sensor_id!=1
1) 1) "temperature:2:32"
   2) 1) 1) "sensor_id"
         2) "2"
      2) 1) "area_id"
         2) "32"
   3) 1) 1) (integer) 1548149210000
         2) "21"
      2) 1) (integer) 1548149205000
         2) "25.800000000000001"
      3) 1) (integer) 1548149200000
         2) "25.600000000000001"
      4) 1) (integer) 1548149195000
         2) "24"
      5) 1) (integer) 1548149190000
         2) "24.399999999999999"
      6) 1) (integer) 1548149185000
         2) "23.800000000000001"
      7) 1) (integer) 1548149180000
         2) "27.600000000000001"
2) 1) "temperature:3:32"
   2) 1) 1) "sensor_id"
         2) "3"
      2) 1) "area_id"
         2) "32"
   3) 1) 1) (integer) 1548149210000
         2) "20"
      2) 1) (integer) 1548149205000
         2) "28"
      3) 1) (integer) 1548149200000
         2) "25.199999999999999"
      4) 1) (integer) 1548149195000
         2) "23.199999999999999"
      5) 1) (integer) 1548149190000
         2) "24.800000000000001"
      6) 1) (integer) 1548149185000
         2) "27.399999999999999"
      7) 1) (integer) 1548149180000
         2) "26.199999999999999"
Query time series with metric=cpu, grouped by metric_name, reduced by max
127.0.0.1:6379> TS.ADD ts1 1548149180000 90 labels metric cpu metric_name system
(integer) 1548149180000
127.0.0.1:6379> TS.ADD ts1 1548149185000 45
(integer) 1548149185000
127.0.0.1:6379> TS.ADD ts2 1548149180000 99 labels metric cpu metric_name user
(integer) 1548149180000
127.0.0.1:6379> TS.MREVRANGE - + WITHLABELS FILTER metric=cpu GROUPBY metric_name REDUCE max
1) 1) "metric_name=system"
   2) 1) 1) "metric_name"
         2) "system"
      2) 1) "__reducer__"
         2) "max"
      3) 1) "__source__"
         2) "ts1"
   3) 1) 1) (integer) 1548149185000
         2) 45
      2) 1) (integer) 1548149180000
         2) 90
2) 1) "metric_name=user"
   2) 1) 1) "metric_name"
         2) "user"
      2) 1) "__reducer__"
         2) "max"
      3) 1) "__source__"
         2) "ts2"
   3) 1) 1) (integer) 1548149180000
         2) 99
Query time series with metric=cpu, filtering values greater than or equal to 90.0 and less than or equal to 100.0
127.0.0.1:6379> TS.ADD ts1 1548149180000 90 labels metric cpu metric_name system
(integer) 1548149180000
127.0.0.1:6379> TS.ADD ts1 1548149185000 45
(integer) 1548149185000
127.0.0.1:6379> TS.ADD ts2 1548149180000 99 labels metric cpu metric_name user
(integer) 1548149180000
127.0.0.1:6379> TS.MREVRANGE - + FILTER_BY_VALUE 90 100 WITHLABELS FILTER metric=cpu
1) 1) "ts1"
   2) 1) 1) "metric"
         2) "cpu"
      2) 1) "metric_name"
         2) "system"
   3) 1) 1) (integer) 1548149180000
         2) 90
2) 1) "ts2"
   2) 1) 1) "metric"
         2) "cpu"
      2) 1) "metric_name"
         2) "user"
   3) 1) 1) (integer) 1548149180000
         2) 99
Query time series with metric=cpu, returning only the team label
127.0.0.1:6379> TS.ADD ts1 1548149180000 90 labels metric cpu metric_name system team NY
(integer) 1548149180000
127.0.0.1:6379> TS.ADD ts1 1548149185000 45
(integer) 1548149185000
127.0.0.1:6379> TS.ADD ts2 1548149180000 99 labels metric cpu metric_name user team SF
(integer) 1548149180000
127.0.0.1:6379> TS.MREVRANGE - + SELECTED_LABELS team FILTER metric=cpu
1) 1) "ts1"
   2) 1) 1) "team"
         2) "NY"
   3) 1) 1) (integer) 1548149185000
         2) 45
      2) 1) (integer) 1548149180000
         2) 90
2) 1) "ts2"
   2) 1) 1) "team"
         2) "SF"
   3) 1) 1) (integer) 1548149180000
         2) 99

423 - TS.QUERYINDEX

Get all time series keys matching a filter list

TS.QUERYINDEX

Get all time series keys matching a filter list.

TS.QUERYINDEX filter...
  • filter...

    This is the list of possible filters:

    • label=value - label equals value
    • label!=value - label doesn't equal value
    • label= - key does not have the label label
    • label!= - key has label label
    • label=(value1,value2,...) - key with label label that equals one of the values in the list
    • label!=(value1,value2,...) - key with label label that doesn't equal any of the values in the list

    Note: When providing filters, at least one label=value filter must be applied.

Note: QUERYINDEX command can't be part of transaction when running on Redis cluster.

Query index example

127.0.0.1:6379> TS.QUERYINDEX sensor_id=2
1) "temperature:2:32"
2) "temperature:2:33"

424 - TS.RANGE

Query a range in forward direction

TS.RANGE

Query a range in forward direction.

TS.RANGE key fromTimestamp toTimestamp
         [FILTER_BY_TS ts...]
         [FILTER_BY_VALUE min max]
         [COUNT count] 
         [[ALIGN value] AGGREGATION aggregator bucketDuration [BUCKETTIMESTAMP bt] [EMPTY]]
  • key - Key name for time series
  • fromTimestamp - Start timestamp for the range query. - can be used to express the minimum possible timestamp (0).
  • toTimestamp - End timestamp for range query, + can be used to express the maximum possible timestamp.

Optional parameters:

  • FILTER_BY_TS ts... - a list of timestamps to filter the result by specific timestamps

  • FILTER_BY_VALUE min max - Filter result by value using minimum and maximum.

  • COUNT count - Maximum number of returned samples.

  • ALIGN value - Time bucket alignment control for AGGREGATION. This will control the time bucket timestamps by changing the reference timestamp on which a bucket is defined. Possible values:

    • start or -: The reference timestamp will be the query start interval time (fromTimestamp) which can't be -
    • end or +: The reference timestamp will be the query end interval time (toTimestamp) which can't be +
    • A specific timestamp: align the reference timestamp to a specific time
    • Note: when not provided, alignment is set to 0 (see the ALIGN sketch after this list)
  • AGGREGATION aggregator bucketDuration

    Aggregate results into time buckets.

    • aggregator - Aggregation type: One of the following:
      • avg - arithmetic mean of all values
      • sum - sum of all values
      • min - minimum value
      • max - maximum value
      • range - difference between the highest and the lowest value
      • count - number of values
      • first - the value with the lowest timestamp in the bucket
      • last - the value with the highest timestamp in the bucket
      • std.p - population standard deviation of the values
      • std.s - sample standard deviation of the values
      • var.p - population variance of the values
      • var.s - sample variance of the values
      • twa - time-weighted average of all values
    • bucketDuration - duration of each bucket, in milliseconds

    The alignment of time buckets is 0.
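
To illustrate ALIGN, a minimal sketch reusing the temperature:3:32 key from the example below. With ALIGN start, bucket timestamps are anchored at fromTimestamp (here 1548149181000) rather than at multiples of the bucket duration:

TS.RANGE temperature:3:32 1548149181000 1548149210000 ALIGN start AGGREGATION avg 5000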

Complexity

TS.RANGE complexity is O(n/m+k).

n = number of data points, m = chunk size (data points per chunk), k = number of data points in the requested range

This can be improved in the future by using binary search to find the start of the range, which would make this O(log(n/m) + k*m). Since m is fairly small, it can be neglected, so the operation can be regarded as O(log(n) + k).

Aggregated Query Example

127.0.0.1:6379> TS.RANGE temperature:3:32 1548149180000 1548149210000 AGGREGATION avg 5000
1) 1) (integer) 1548149180000
   2) "26.199999999999999"
2) 1) (integer) 1548149185000
   2) "27.399999999999999"
3) 1) (integer) 1548149190000
   2) "24.800000000000001"
4) 1) (integer) 1548149195000
   2) "23.199999999999999"
5) 1) (integer) 1548149200000
   2) "25.199999999999999"
6) 1) (integer) 1548149205000
   2) "28"
7) 1) (integer) 1548149210000
   2) "20"

425 - TS.REVRANGE

Query a range in reverse direction

TS.REVRANGE

Query a range in reverse direction.

TS.REVRANGE key fromTimestamp toTimestamp
         [FILTER_BY_TS ts...]
         [FILTER_BY_VALUE min max]
         [COUNT count]
         [[ALIGN value] AGGREGATION aggregator bucketDuration [BUCKETTIMESTAMP bt] [EMPTY]]
  • key - Key name for the time series
  • fromTimestamp - Start timestamp for the range query. - can be used to express the minimum possible timestamp (0).
  • toTimestamp - End timestamp for range query, + can be used to express the maximum possible timestamp.

Optional parameters:

  • FILTER_BY_TS ts... - a list of timestamps to filter the result by specific timestamps

  • FILTER_BY_VALUE min max - Filter result by value using minimum and maximum.

  • COUNT count - Maximum number of returned samples.

  • ALIGN value - Time bucket alignment control for AGGREGATION. This will control the time bucket timestamps by changing the reference timestamp on which a bucket is defined. Possible values:
    • start or -: The reference timestamp will be the query start interval time (fromTimestamp) which can't be -
    • end or +: The reference timestamp will be the query end interval time (toTimestamp) which can't be +
    • A specific timestamp: align the reference timestamp to a specific time
    • Note: when not provided, alignment is set to 0
  • AGGREGATION aggregator bucketDuration

    Aggregate results into time buckets.

    • aggregator - Aggregation type: One of the following:
      • avg - arithmetic mean of all values
      • sum - sum of all values
      • min - minimum value
      • max - maximum value
      • range - difference between the highest and the lowest value
      • count - number of values
      • first - the value with the lowest timestamp in the bucket
      • last - the value with the highest timestamp in the bucket
      • std.p - population standard deviation of the values
      • std.s - sample standard deviation of the values
      • var.p - population variance of the values
      • var.s - sample variance of the values
      • twa - time-weighted average of all values
    • bucketDuration - duration of each bucket, in milliseconds

    The alignment of time buckets is 0.

Complexity

TS.REVRANGE complexity is O(n/m+k).

n = number of data points, m = chunk size (data points per chunk), k = number of data points in the requested range

This can be improved in the future by using binary search to find the start of the range, which would make this O(log(n/m) + k*m). Since m is fairly small, it can be neglected, so the operation can be regarded as O(log(n) + k).

Aggregated Query Example

127.0.0.1:6379> TS.REVRANGE temperature:3:32 1548149180000 1548149210000 AGGREGATION avg 5000
1) 1) (integer) 1548149210000
   2) "20"
2) 1) (integer) 1548149205000
   2) "28"
3) 1) (integer) 1548149200000
   2) "25.199999999999999"
4) 1) (integer) 1548149195000
   2) "23.199999999999999"
5) 1) (integer) 1548149190000
   2) "24.800000000000001"
6) 1) (integer) 1548149185000
   2) "27.399999999999999"
7) 1) (integer) 1548149180000
   2) "26.199999999999999"

426 - TTL

Get the time to live for a key in seconds

Returns the remaining time to live of a key that has a timeout. This introspection capability allows a Redis client to check how many seconds a given key will continue to be part of the dataset.

In Redis 2.6 or older the command returns -1 if the key does not exist or if the key exists but has no associated expire.

Starting with Redis 2.8 the return value in case of error changed:

  • The command returns -2 if the key does not exist.
  • The command returns -1 if the key exists but has no associated expire.
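
For example (the key name is illustrative):

> SET mykey "Hello"
OK
> TTL mykey
(integer) -1
> DEL mykey
(integer) 1
> TTL mykey
(integer) -2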

See also the PTTL command that returns the same information with milliseconds resolution (Only available in Redis 2.6 or greater).

Return

Integer reply: TTL in seconds, or a negative value in order to signal an error (see the description above).

Examples

SET mykey "Hello" EXPIRE mykey 10 TTL mykey

427 - TYPE

Determine the type stored at key

Returns the string representation of the type of the value stored at key. The different types that can be returned are: string, list, set, zset, hash and stream.

Return

Simple string reply: type of key, or none when key does not exist.

Examples

SET key1 "value" LPUSH key2 "value" SADD key3 "value" TYPE key1 TYPE key2 TYPE key3

428 - UNLINK

Delete a key asynchronously in another thread; otherwise it behaves just like DEL, but is non-blocking.

This command is very similar to DEL: it removes the specified keys. Just like DEL a key is ignored if it does not exist. However the command performs the actual memory reclaiming in a different thread, so it is not blocking, while DEL is. This is where the command name comes from: the command just unlinks the keys from the keyspace. The actual removal will happen later asynchronously.

Return

Integer reply: The number of keys that were unlinked.

Examples

SET key1 "Hello" SET key2 "World" UNLINK key1 key2 key3

429 - UNSUBSCRIBE

Stop listening for messages posted to the given channels

Unsubscribes the client from the given channels, or from all of them if none is given.

When no channels are specified, the client is unsubscribed from all the previously subscribed channels. In this case, a message for every unsubscribed channel will be sent to the client.

430 - UNWATCH

Forget about all watched keys

Flushes all the previously watched keys for a transaction.

If you call EXEC or DISCARD, there's no need to manually call UNWATCH.

Return

Simple string reply: always OK.

431 - WAIT

Wait for the synchronous replication of all the write commands sent in the context of the current connection

This command blocks the current client until all the previous write commands are successfully transferred and acknowledged by at least the specified number of replicas. If the timeout, specified in milliseconds, is reached, the command returns even if the specified number of replicas were not yet reached.

The command will always return the number of replicas that acknowledged the write commands sent before the WAIT command, both when the specified number of replicas is reached and when the timeout is reached.

A few remarks:

  1. When WAIT returns, all the previous write commands sent in the context of the current connection are guaranteed to be received by the number of replicas returned by WAIT.
  2. If the command is sent as part of a MULTI transaction, it does not block but instead returns immediately with the number of replicas that acknowledged the previous write commands.
  3. A timeout of 0 means to block forever.
  4. Since WAIT returns the number of replicas reached both in case of failure and success, the client should check that the returned value is equal to or greater than the replication level it demanded.

Consistency and WAIT

Note that WAIT does not make Redis a strongly consistent store: while synchronous replication is part of a replicated state machine, it is not the only thing needed. However in the context of Sentinel or Redis Cluster failover, WAIT improves the real world data safety.

Specifically if a given write is transferred to one or more replicas, it is more likely (but not guaranteed) that if the master fails, we'll be able to promote, during a failover, a replica that received the write: both Sentinel and Redis Cluster will do a best-effort attempt to promote the best replica among the set of available replicas.

However this is just a best-effort attempt so it is possible to still lose a write synchronously replicated to multiple replicas.

Implementation details

Since the introduction of partial resynchronization with replicas (the PSYNC feature), Redis replicas asynchronously ping their master with the offset they have already processed in the replication stream. This is used in multiple ways:

  1. Detect timed out replicas.
  2. Perform a partial resynchronization after a disconnection.
  3. Implement WAIT.

In the specific case of the implementation of WAIT, Redis remembers, for each client, the replication offset of the produced replication stream when a given write command was executed in the context of a given client. When WAIT is called Redis checks if the specified number of replicas already acknowledged this offset or a greater one.

Return

Integer reply: The command returns the number of replicas reached by all the writes performed in the context of the current connection.

Examples

> SET foo bar
OK
> WAIT 1 0
(integer) 1
> WAIT 2 1000
(integer) 1

In the above example, the first call to WAIT does not use a timeout and asks for the write to reach 1 replica; it returns with success. In the second attempt we instead specify a timeout and ask for the write to be replicated to two replicas. Since there is a single replica available, after one second WAIT unblocks and returns 1, the number of replicas reached.

432 - WATCH

Watch the given keys to determine execution of the MULTI/EXEC block

Marks the given keys to be watched for conditional execution of a transaction.
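
A minimal sketch of the resulting check-and-set pattern (the key name is illustrative). EXEC succeeds only if mykey was not modified by another client after WATCH; otherwise EXEC returns a nil reply and the transaction is aborted:

> WATCH mykey
OK
> MULTI
OK
> INCR mykey
QUEUED
> EXEC
1) (integer) 1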

Return

Simple string reply: always OK.

433 - XACK

Marks a pending message as correctly processed, effectively removing it from the pending entries list of the consumer group. Return value of the command is the number of messages successfully acknowledged, that is, the IDs we were actually able to resolve in the PEL.

The XACK command removes one or multiple messages from the Pending Entries List (PEL) of a stream consumer group. A message is pending, and as such stored inside the PEL, when it was delivered to some consumer, normally as a side effect of calling XREADGROUP, or when a consumer took ownership of a message calling XCLAIM. The pending message was delivered to some consumer but the server is not yet sure it was processed at least once. So new calls to XREADGROUP to grab the message history for a consumer (for instance using an ID of 0) will return such messages. Similarly, the pending message will be listed by the XPENDING command, which inspects the PEL.

Once a consumer successfully processes a message, it should call XACK so that such message does not get processed again, and as a side effect, the PEL entry about this message is also purged, releasing memory from the Redis server.
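
A minimal sketch of this cycle, assuming mystream and the consumer group mygroup already exist and hold the entry shown:

> XREADGROUP GROUP mygroup consumer-1 COUNT 1 STREAMS mystream >
1) 1) "mystream"
   2) 1) 1) "1526569495631-0"
         2) 1) "message"
            2) "apple"
> XACK mystream mygroup 1526569495631-0
(integer) 1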

Return

Integer reply, specifically:

The command returns the number of messages successfully acknowledged. Certain message IDs may no longer be part of the PEL (for example because they have already been acknowledged), and XACK will not count them as successfully acknowledged.

Examples

redis> XACK mystream mygroup 1526569495631-0
(integer) 1

434 - XADD

Appends a new entry to a stream

Appends the specified stream entry to the stream at the specified key. If the key does not exist, as a side effect of running this command the key is created with a stream value. Creation of the stream's key can be disabled with the NOMKSTREAM option.

An entry is composed of a set of field-value pairs; it is basically a small dictionary. The field-value pairs are stored in the same order they are given by the user, and commands that read the stream, such as XRANGE or XREAD, are guaranteed to return the fields and values exactly in the same order they were added by XADD.

XADD is the only Redis command that can add data to a stream, but there are other commands, such as XDEL and XTRIM, that are able to remove data from a stream.

Specifying a Stream ID as an argument

A stream entry ID identifies a given entry inside a stream.

The XADD command will auto-generate a unique ID for you if the ID argument specified is the * character (asterisk ASCII character). However, it is also possible, though useful only in very rare cases, to specify a well-formed ID, so that the new entry will be added with exactly the specified ID.

IDs are specified by two numbers separated by a - character:

1526919030474-55

Both quantities are 64-bit numbers. When an ID is auto-generated, the first part is the Unix time in milliseconds of the Redis instance generating the ID. The second part is just a sequence number and is used in order to distinguish IDs generated in the same millisecond.

You can also specify an incomplete ID that consists only of the milliseconds part, which is interpreted as a zero value for the sequence part. To have only the sequence part automatically generated, specify the milliseconds part followed by the - separator and the * character:

> XADD mystream 1526919030474-55 message "Hello,"
"1526919030474-55"
> XADD mystream 1526919030474-* message " World!"
"1526919030474-56"

IDs are guaranteed to be always incremental: If you compare the ID of the entry just inserted it will be greater than any other past ID, so entries are totally ordered inside a stream. In order to guarantee this property, if the current top ID in the stream has a time greater than the current local time of the instance, the top entry time will be used instead, and the sequence part of the ID incremented. This may happen when, for instance, the local clock jumps backward, or if after a failover the new master has a different absolute time.

When a user specifies an explicit ID to XADD, the minimum valid ID is 0-1, and the user must specify an ID which is greater than any other ID currently inside the stream, otherwise the command will fail and return an error. Usually resorting to specific IDs is useful only if you have another system generating unique IDs (for instance an SQL table) and you really want the Redis stream IDs to match those of the other system.
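
For example, a hedged sketch assuming the stream's top ID is already greater than 0-1 (the exact error text may vary between versions):

> XADD mystream 0-1 message "too old"
(error) ERR The ID specified in XADD is equal or smaller than the target stream top item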

Capped streams

XADD incorporates the same semantics as the XTRIM command - refer to its documentation page for more information. This allows adding new entries and keeping the stream's size in check with a single call to XADD, effectively capping the stream with an arbitrary threshold. Although exact trimming is possible and is the default, due to the internal representation of streams it is more efficient to add an entry and trim the stream with XADD using almost exact trimming (the ~ argument).

For example, calling XADD in the following form:

XADD mystream MAXLEN ~ 1000 * ... entry fields here ...

Will add a new entry but will also evict old entries so that the stream will contain only 1000 entries, or at most a few tens more.
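
For comparison, omitting the ~ argument requests exact trimming, which guarantees the bound at a higher cost per call:

XADD mystream MAXLEN 1000 * ... entry fields here ...

This will add a new entry and evict old entries so that the stream contains exactly 1000 entries, assuming it already holds at least that many.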

Additional information about streams

For further information about Redis streams please check our introduction to Redis Streams document.

Return

Bulk string reply, specifically:

The command returns the ID of the added entry. The ID is the one auto-generated if * is passed as ID argument, otherwise the command just returns the same ID specified by the user during insertion.

The command returns a Null reply when used with the NOMKSTREAM option and the key doesn't exist.

Examples

XADD mystream * name Sara surname OConnor
XADD mystream * field1 value1 field2 value2 field3 value3
XLEN mystream
XRANGE mystream - +

435 - XAUTOCLAIM

Changes (or acquires) ownership of messages in a consumer group, as if the messages were delivered to the specified consumer.

This command transfers ownership of pending stream entries that match the specified criteria. Conceptually, XAUTOCLAIM is equivalent to calling XPENDING and then XCLAIM, but provides a more straightforward way to deal with message delivery failures via SCAN-like semantics.

Like XCLAIM, the command operates on the stream entries at <key> and in the context of the provided <group>. It transfers ownership to <consumer> of messages pending for more than <min-idle-time> milliseconds and having an ID equal to or greater than <start>.

The optional <count> argument, which defaults to 100, is the upper limit of the number of entries that the command attempts to claim. Internally, the command begins scanning the consumer group's Pending Entries List (PEL) from <start> and filters out entries having an idle time less than or equal to <min-idle-time>. The maximum number of pending entries that the command scans is 10 times the <count> value (hard-coded). It is possible, therefore, that the number of entries claimed will be less than the specified value.

The optional JUSTID argument changes the reply to return just an array of IDs of messages successfully claimed, without returning the actual message. Using this option means the retry counter is not incremented.

The command returns the claimed entries as an array. It also returns a stream ID intended for cursor-like use, as the <start> argument for its subsequent call. When there are no remaining PEL entries, the command returns the special 0-0 ID to signal completion. However, note that you may want to continue calling XAUTOCLAIM even after the scan is complete with 0-0 as the <start> ID, because enough time may have passed for older pending entries to become eligible for claiming.
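
Schematically, the cursor-style iteration looks as follows (names reused from the example below; the cursor value and the elided reply parts are illustrative):

> XAUTOCLAIM mystream mygroup Alice 3600000 0-0 COUNT 25
1) "1609338855116-0"
2) ... claimed entries ...
3) ... deleted message IDs ...
> XAUTOCLAIM mystream mygroup Alice 3600000 1609338855116-0 COUNT 25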

Note that only messages that are idle longer than <min-idle-time> are claimed, and claiming a message resets its idle time. This ensures that only a single consumer can successfully claim a given pending message at a specific instant of time and trivially reduces the probability of processing the same message multiple times.

While iterating the PEL, if XAUTOCLAIM stumbles upon a message which doesn't exist in the stream anymore (either trimmed or deleted by XDEL), it does not claim it, and deletes it from the PEL in which it was found. This feature was introduced in Redis 7.0. These message IDs are returned to the caller as a part of XAUTOCLAIM's reply.

Lastly, claiming a message with XAUTOCLAIM also increments the attempted deliveries count for that message, unless the JUSTID option has been specified (which only delivers the message ID, not the message itself). Messages that cannot be processed for some reason - for example, because consumers systematically crash when processing them - will exhibit high attempted delivery counts that can be detected by monitoring.

Return

Array reply, specifically:

An array with three elements:

  1. A stream ID to be used as the <start> argument for the next call to XAUTOCLAIM.
  2. An array containing all the successfully claimed messages in the same format as XRANGE.
  3. An array containing message IDs that no longer exist in the stream, and were deleted from the PEL in which they were found.

Examples

> XAUTOCLAIM mystream mygroup Alice 3600000 0-0 COUNT 25
1) "0-0"
2) 1) 1) "1609338752495-0"
      2) 1) "field"
         2) "value"
3) (empty array)

In the above example, we attempt to claim up to 25 entries that are pending and idle (not having been acknowledged or claimed) for at least an hour, starting at the stream's beginning. The consumer "Alice" from the "mygroup" group acquires ownership of these messages. Note that the stream ID returned in the example is 0-0, indicating that the entire stream was scanned. We can also see that XAUTOCLAIM did not stumble upon any deleted messages (the third reply element is an empty array).

436 - XCLAIM

Changes (or acquires) ownership of a message in a consumer group, as if the message was delivered to the specified consumer.

In the context of a stream consumer group, this command changes the ownership of a pending message, so that the new owner is the consumer specified as the command argument. Normally this is what happens:

  1. There is a stream with an associated consumer group.
  2. Some consumer A reads a message via XREADGROUP from a stream, in the context of that consumer group.
  3. As a side effect a pending message entry is created in the Pending Entries List (PEL) of the consumer group: it means the message was delivered to a given consumer, but it was not yet acknowledged via XACK.
  4. Then suddenly that consumer fails forever.
  5. Other consumers may inspect the list of pending messages, that are stale for quite some time, using the XPENDING command. In order to continue processing such messages, they use XCLAIM to acquire the ownership of the message and continue. Consumers can also use the XAUTOCLAIM command to automatically scan and claim stale pending messages.

This dynamic is clearly explained in the Stream intro documentation.

Note that the message is claimed only if its idle time is greater than the minimum idle time we specify when calling XCLAIM. Because as a side effect XCLAIM will also reset the idle time (since this is a new attempt at processing the message), two consumers trying to claim a message at the same time will never both succeed: only one will successfully claim the message. This trivially avoids processing a given message multiple times (yet multiple processing is possible and unavoidable in the general case).

Moreover, as a side effect, XCLAIM will increment the count of attempted deliveries of the message unless the JUSTID option has been specified (which only delivers the message ID, not the message itself). In this way messages that cannot be processed for some reason, for instance because the consumers crash attempting to process them, will start to have a larger counter and can be detected inside the system.

XCLAIM will not claim a message in the following cases:

  1. The message doesn't exist in the group PEL (i.e. it was never read by any consumer)
  2. The message exists in the group PEL but not in the stream itself (i.e. the message was read but never acknowledged, and then was deleted from the stream, either by trimming or by XDEL)

In both cases the reply will not contain a corresponding entry to that message (i.e. the length of the reply array may be smaller than the number of IDs provided to XCLAIM). In the latter case, the message will also be deleted from the PEL in which it was found. This feature was introduced in Redis 7.0.

Command options

The command has multiple options, however most are mainly for internal use in order to transfer the effects of XCLAIM or other commands to the AOF file and to propagate the same effects to the replicas, and are unlikely to be useful to normal users:

  1. IDLE <ms>: Set the idle time (last time it was delivered) of the message. If IDLE is not specified, an IDLE of 0 is assumed, that is, the time count is reset because the message has now a new owner trying to process it.
  2. TIME <ms-unix-time>: This is the same as IDLE but instead of a relative amount of milliseconds, it sets the idle time to a specific Unix time (in milliseconds). This is useful in order to rewrite the AOF file generating XCLAIM commands.
  3. RETRYCOUNT <count>: Set the retry counter to the specified value. This counter is incremented every time a message is delivered again. Normally XCLAIM does not alter this counter, which is just served to clients when the XPENDING command is called: this way clients can detect anomalies, like messages that are never processed for some reason after a big number of delivery attempts.
  4. FORCE: Creates the pending message entry in the PEL even if certain specified IDs are not already in the PEL assigned to a different client. However the message must exist in the stream, otherwise the IDs of non-existent messages are ignored.
  5. JUSTID: Return just an array of IDs of messages successfully claimed, without returning the actual message. Using this option means the retry counter is not incremented (see the sketch after this list).
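
A minimal JUSTID sketch, reusing the message ID from the example below:

> XCLAIM mystream mygroup Alice 3600000 1526569498055-0 JUSTID
1) "1526569498055-0"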

Return

Array reply, specifically:

The command returns all the messages successfully claimed, in the same format as XRANGE. However if the JUSTID option was specified, only the message IDs are reported, without including the actual message.

Examples

> XCLAIM mystream mygroup Alice 3600000 1526569498055-0
1) 1) 1526569498055-0
   2) 1) "message"
      2) "orange"

In the above example we claim the message with ID 1526569498055-0, only if the message is idle for at least one hour without the original consumer or some other consumer making progress (acknowledging or claiming it), and assign ownership to the consumer Alice.

437 - XDEL

Removes the specified entries from the stream. Returns the number of items actually deleted, which may differ from the number of IDs passed when certain IDs do not exist.

Removes the specified entries from a stream, and returns the number of entries deleted. This number may be less than the number of IDs passed to the command in the case where some of the specified IDs do not exist in the stream.

Normally you may think of a Redis stream as an append-only data structure; however, Redis streams are represented in memory, so we are also able to delete entries. This may be useful, for instance, in order to comply with certain privacy policies.

Understanding the low level details of entries deletion

Redis streams are represented in a way that makes them memory efficient: a radix tree is used in order to index macro-nodes that linearly pack tens of stream entries. Normally what happens when you delete an entry from a stream is that the entry is not really evicted; it just gets marked as deleted.

Eventually, if all the entries in a macro-node are marked as deleted, the whole node is destroyed and the memory reclaimed. This means that if you delete a large amount of entries from a stream, for instance more than 50% of the entries appended to the stream, the memory usage per entry may increase, since the stream becomes fragmented. However, the stream's performance will remain the same.

In future versions of Redis it is possible that we'll trigger a node garbage collection when a given macro-node reaches a given number of deleted entries. Currently, with the usage we anticipate for this data structure, adding such complexity is not considered worthwhile.

Return

Integer reply: the number of entries actually deleted.

Examples

> XADD mystream * a 1
1538561698944-0
> XADD mystream * b 2
1538561700640-0
> XADD mystream * c 3
1538561701744-0
> XDEL mystream 1538561700640-0
(integer) 1
> XRANGE mystream - +
1) 1) 1538561698944-0
   2) 1) "a"
      2) "1"
2) 1) 1538561701744-0
   2) 1) "c"
      2) "3"

438 - XGROUP

A container for consumer groups commands

This is a container command for stream consumer group management commands.

To see the list of available commands you can call XGROUP HELP.

439 - XGROUP CREATE

Create a consumer group.

This command creates a new consumer group uniquely identified by <groupname> for the stream stored at <key>.

Every group has a unique name in a given stream. When a consumer group with the same name already exists, the command returns a -BUSYGROUP error.

The command's <id> argument specifies the last delivered entry in the stream from the new group's perspective. The special ID $ means the ID of the last entry in the stream, but you can provide any valid ID instead. For example, if you want the group's consumers to fetch the entire stream from the beginning, use zero as the starting ID for the consumer group:

XGROUP CREATE mystream mygroup 0

By default, the XGROUP CREATE command insists that the target stream exists and returns an error when it doesn't. However, you can use the optional MKSTREAM subcommand as the last argument after the <id> to automatically create the stream (with length of 0) if it doesn't exist:

XGROUP CREATE mystream mygroup $ MKSTREAM

The optional entries_read named argument can be specified to enable consumer group lag tracking for an arbitrary ID. An arbitrary ID is any ID that isn't the ID of the stream's first entry, its last entry, or the zero ("0-0") ID. This can be useful when you know exactly how many entries are between the arbitrary ID (excluding it) and the stream's last entry. In such cases, entries_read can be set to the stream's entries_added minus that number of entries.
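
A hedged sketch with illustrative values, where 2 is computed as the stream's entries_added minus the number of entries between the given ID (excluded) and the stream's last entry:

XGROUP CREATE mystream mygroup 1638126030001-0 ENTRIESREAD 2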

Return

Simple string reply: OK on success.

440 - XGROUP CREATECONSUMER

Create a consumer in a consumer group.

Create a consumer named <consumername> in the consumer group <groupname> of the stream that's stored at <key>.

Consumers are also created automatically whenever an operation, such as XREADGROUP, references a consumer that doesn't exist.

Return

Integer reply: the number of created consumers (0 or 1)

441 - XGROUP DELCONSUMER

Delete a consumer from a consumer group.

The XGROUP DELCONSUMER command deletes a consumer from the consumer group.

Sometimes it may be useful to remove old consumers since they are no longer used.

Note, however, that any pending messages that the consumer had will become unclaimable after it is deleted. It is strongly recommended, therefore, that any pending messages are claimed or acknowledged prior to deleting the consumer from the group (see the sketch below).
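
A minimal sketch of that precaution (all names are illustrative): verify that the consumer has no pending messages before deleting it:

> XPENDING mystream mygroup - + 10 myconsumer
(empty array)
> XGROUP DELCONSUMER mystream mygroup myconsumer
(integer) 0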

Return

Integer reply: the number of pending messages that the consumer had before it was deleted

442 - XGROUP DESTROY

Destroy a consumer group.

The XGROUP DESTROY command completely destroys a consumer group.

The consumer group will be destroyed even if there are active consumers and pending messages, so make sure to call this command only when really needed.

Return

Integer reply: the number of destroyed consumer groups (0 or 1)

443 - XGROUP HELP

Show helpful text about the different subcommands

The XGROUP HELP command returns a helpful text describing the different subcommands.

Return

Array reply: a list of subcommands and their descriptions

444 - XGROUP SETID

Set a consumer group to an arbitrary last delivered ID value.

Set the last delivered ID for a consumer group.

Normally, a consumer group's last delivered ID is set when the group is created with XGROUP CREATE. The XGROUP SETID command allows modifying the group's last delivered ID, without having to delete and recreate the group. For instance if you want the consumers in a consumer group to re-process all the messages in a stream, you may want to set its next ID to 0:

XGROUP SETID mystream mygroup 0

The optional entries_read argument can be specified to enable consumer group lag tracking for an arbitrary ID. An arbitrary ID is any ID that isn't the ID of the stream's first entry, its last entry, or the zero ("0-0") ID. This can be useful when you know exactly how many entries are between the arbitrary ID (excluding it) and the stream's last entry. In such cases, entries_read can be set to the stream's entries_added minus that number of entries.
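
As with XGROUP CREATE, a hedged sketch with illustrative values:

XGROUP SETID mystream mygroup 1638126030001-0 ENTRIESREAD 2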

Return

Simple string reply: OK on success.

445 - XINFO

A container for stream introspection commands

This is a container command for stream introspection commands.

To see the list of available commands you can call XINFO HELP.

446 - XINFO CONSUMERS

List the consumers in a consumer group

This command returns the list of consumers that belong to the <groupname> consumer group of the stream stored at <key>.

The following information is provided for each consumer in the group:

  • name: the consumer's name
  • pending: the number of pending messages for the client, which are messages that were delivered but are yet to be acknowledged
  • idle: the number of milliseconds that have passed since the consumer last interacted with the server

Return

Array reply: a list of consumers.

Examples

> XINFO CONSUMERS mystream mygroup
1) 1) name
   2) "Alice"
   3) pending
   4) (integer) 1
   5) idle
   6) (integer) 9104628
2) 1) name
   2) "Bob"
   3) pending
   4) (integer) 1
   5) idle
   6) (integer) 83841983

447 - XINFO GROUPS

List the consumer groups of a stream

This command returns the list of all consumer groups of the stream stored at <key>.

By default, only the following information is provided for each of the groups:

  • name: the consumer group's name
  • consumers: the number of consumers in the group
  • pending: the length of the group's pending entries list (PEL), which are messages that were delivered but are yet to be acknowledged
  • last-delivered-id: the ID of the last entry delivered to the group's consumers
  • entries-read: the logical "read counter" of the last entry delivered to the group's consumers
  • lag: the number of entries in the stream that are still waiting to be delivered to the group's consumers, or a NULL when that number can't be determined.

Consumer group lag

The lag of a given consumer group is the number of entries in the range between the group's entries_read and the stream's entries_added. Put differently, it is the number of entries that are yet to be delivered to the group's consumers.

The values and trends of this metric are helpful in making scaling decisions about the consumer group. You can address high lag values by adding more consumers to the group, whereas low values may indicate that you can remove consumers from the group to scale it down.

Redis reports the lag of a consumer group by keeping two counters: the number of all entries added to the stream and the number of logical reads made by the consumer group. The lag is the difference between these two.

The stream's counter (the entries_added field of the XINFO STREAM command) is incremented by one with every XADD and counts all of the entries added to the stream during its lifetime.

The consumer group's counter, entries_read, is the logical counter of entries that the group had read. It is important to note that this counter is only a heuristic rather than an accurate counter, and therefore the use of the term "logical". The counter attempts to reflect the number of entries that the group should have read to get to its current last-delivered-id. The entries_read counter is accurate only in a perfect world, where a consumer group starts at the stream's first entry and processes all of its entries (i.e., no entries deleted before processing).

There are two special cases in which this mechanism is unable to report the lag:

  1. A consumer group is created or set with an arbitrary last delivered ID (the XGROUP CREATE and XGROUP SETID commands, respectively). An arbitrary ID is any ID that isn't the ID of the stream's first entry, its last entry or the zero ("0-0") ID.
  2. One or more entries between the group's last-delivered-id and the stream's last-generated-id were deleted (with XDEL or a trimming operation).

In both cases, the group's read counter is considered invalid, and the returned value is set to NULL to signal that the lag isn't currently available.

However, the lag is only temporarily unavailable. It is restored automatically during regular operation as consumers keep processing messages. Once the consumer group delivers the last message in the stream to its members, it will be set with the correct logical read counter, and tracking its lag can be resumed.

Return

Array reply: a list of consumer groups.

Examples

> XINFO GROUPS mystream
1)  1) "name"
    2) "mygroup"
    3) "consumers"
    4) (integer) 2
    5) "pending"
    6) (integer) 2
    7) "last-delivered-id"
    8) "1638126030001-0"
    9) "entries-read"
   10) (integer) 2
   11) "lag"
   12) (integer) 0
2)  1) "name"
    2) "some-other-group"
    3) "consumers"
    4) (integer) 1
    5) "pending"
    6) (integer) 0
    7) "last-delivered-id"
    8) "1638126028070-0"
    9) "entries-read"
   10) (integer) 1
   11) "lag"
   12) (integer) 1

448 - XINFO HELP

Show helpful text about the different subcommands

The XINFO HELP command returns a helpful text describing the different subcommands.

Return

Array reply: a list of subcommands and their descriptions

449 - XINFO STREAM

Get information about a stream

This command returns information about the stream stored at <key>.

The informative details provided by this command are:

  • length: the number of entries in the stream (see XLEN)
  • radix-tree-keys: the number of keys in the underlying radix data structure
  • radix-tree-nodes: the number of nodes in the underlying radix data structure
  • groups: the number of consumer groups defined for the stream
  • last-generated-id: the ID of the most recent entry that was added to the stream
  • max-deleted-entry-id: the maximal entry ID that was deleted from the stream
  • entries-added: the count of all entries added to the stream during its lifetime
  • first-entry: the ID and field-value tuples of the first entry in the stream
  • last-entry: the ID and field-value tuples of the last entry in the stream

The optional FULL modifier provides a more verbose reply. When provided, the FULL reply includes an entries array that consists of the stream entries (ID and field-value tuples) in ascending order. Furthermore, groups is also an array, and for each of the consumer groups it consists of the information reported by XINFO GROUPS and XINFO CONSUMERS.

The COUNT option can be used to limit the number of stream and PEL entries that are returned (The first <count> entries are returned). The default COUNT is 10 and a COUNT of 0 means that all entries will be returned (execution time may be long if the stream has a lot of entries).
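
For example, to request the full reply with all entries and PEL entries (a potentially slow call on large streams):

> XINFO STREAM mystream FULL COUNT 0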

Return

Array reply: a list of informational bits

Examples

Default reply:

> XINFO STREAM mystream
 1) "length"
 2) (integer) 2
 3) "radix-tree-keys"
 4) (integer) 1
 5) "radix-tree-nodes"
 6) (integer) 2
 7) "last-generated-id"
 8) "1638125141232-0"
 9) "max-deleted-entry-id"
10) "0-0"
11) "entries-added"
12) (integer) 2
13) "groups"
14) (integer) 1
15) "first-entry"
16) 1) "1638125133432-0"
    2) 1) "message"
       2) "apple"
17) "last-entry"
18) 1) "1638125141232-0"
    2) 1) "message"
       2) "banana"

Full reply:

> XADD mystream * foo bar
"1638125133432-0"
> XADD mystream * foo bar2
"1638125141232-0"
> XGROUP CREATE mystream mygroup 0-0
OK
> XREADGROUP GROUP mygroup Alice COUNT 1 STREAMS mystream >
1) 1) "mystream"
   2) 1) 1) "1638125133432-0"
         2) 1) "foo"
            2) "bar"
> XINFO STREAM mystream FULL
 1) "length"
 2) (integer) 2
 3) "radix-tree-keys"
 4) (integer) 1
 5) "radix-tree-nodes"
 6) (integer) 2
 7) "last-generated-id"
 8) "1638125141232-0"
 9) "max-deleted-entry-id"
10) "0-0"
11) "entries-added"
12) (integer) 2
13) "entries"
14) 1) 1) "1638125133432-0"
       2) 1) "foo"
          2) "bar"
    2) 1) "1638125141232-0"
       2) 1) "foo"
          2) "bar2"
15) "groups"
16) 1)  1) "name"
        2) "mygroup"
        3) "last-delivered-id"
        4) "1638125133432-0"
        5) "entries-read"
        6) (integer) 1
        7) "lag"
        8) (integer) 1
        9) "pel-count"
       10) (integer) 1
       11) "pending"
       12) 1) 1) "1638125133432-0"
              2) "Alice"
              3) (integer) 1638125153423
              4) (integer) 1
       13) "consumers"
       14) 1) 1) "name"
              2) "Alice"
              3) "seen-time"
              4) (integer) 1638125153423
              5) "pel-count"
              6) (integer) 1
              7) "pending"
              8) 1) 1) "1638125133432-0"
                    2) (integer) 1638125153423
                    3) (integer) 1

450 - XLEN

Return the number of entries in a stream

Returns the number of entries inside a stream. If the specified key does not exist, the command returns zero, as if the stream was empty. However, note that unlike other Redis types, zero-length streams are possible, so you should call TYPE or EXISTS in order to check whether a key exists (see the sketch below).

Streams are not auto-deleted once they have no entries inside (for instance after an XDEL call), because the stream may have consumer groups associated with it.
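
A minimal sketch (the generated ID is illustrative): deleting the only entry yields a zero-length stream whose key still exists:

> XADD mystream * item 1
"1538561698944-0"
> XDEL mystream 1538561698944-0
(integer) 1
> XLEN mystream
(integer) 0
> EXISTS mystream
(integer) 1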

Return

Integer reply: the number of entries of the stream at key.

Examples

XADD mystream * item 1
XADD mystream * item 2
XADD mystream * item 3
XLEN mystream

451 - XPENDING

Return information and entries from a stream consumer group's pending entries list: messages that were fetched but never acknowledged.

Fetching data from a stream via a consumer group, and not acknowledging such data, has the effect of creating pending entries. This is well explained in the XREADGROUP command, and even better in our introduction to Redis Streams. The XACK command will immediately remove the pending entry from the Pending Entries List (PEL) since once a message is successfully processed, there is no longer need for the consumer group to track it and to remember the current owner of the message.

The XPENDING command is the interface to inspect the list of pending messages, and is thus a very important command in order to observe and understand what is happening with a stream's consumer groups: what clients are active, what messages are pending to be consumed, or to see if there are idle messages. Moreover this command, together with XCLAIM, is used in order to implement recovery of consumers that have been failing for a long time, and as a result certain messages are not processed: a different consumer can claim the message and continue. This is better explained in the streams intro and in the XCLAIM command page, and is not covered here.

Summary form of XPENDING

When XPENDING is called with just a key name and a consumer group name, it just outputs a summary about the pending messages in a given consumer group. In the following example, we create a consumer group and immediately create a pending message by reading from the group with XREADGROUP.

> XGROUP CREATE mystream group55 0-0
OK

> XREADGROUP GROUP group55 consumer-123 COUNT 1 STREAMS mystream >
1) 1) "mystream"
   2) 1) 1) 1526984818136-0
         2) 1) "duration"
            2) "1532"
            3) "event-id"
            4) "5"
            5) "user-id"
            6) "7782813"

We expect the pending entries list for the consumer group group55 to have a message right now: consumer named consumer-123 fetched the message without acknowledging its processing. The simple XPENDING form will give us this information:

> XPENDING mystream group55
1) (integer) 1
2) 1526984818136-0
3) 1526984818136-0
4) 1) 1) "consumer-123"
      2) "1"

In this form, the command outputs the total number of pending messages for this consumer group, which is one, followed by the smallest and greatest ID among the pending messages, and then lists every consumer in the consumer group with at least one pending message, along with the number of pending messages it has.

Extended form of XPENDING

The summary provides a good overview, but sometimes we are interested in the details. In order to see all the pending messages with more associated information, we need to also pass a range of IDs, in a similar way as we do with XRANGE, plus a mandatory count argument, to limit the number of messages returned per call:

> XPENDING mystream group55 - + 10
1) 1) 1526984818136-0
   2) "consumer-123"
   3) (integer) 196415
   4) (integer) 1

In the extended form we no longer see the summary information, instead there is detailed information for each message in the pending entries list. For each message four attributes are returned:

  1. The ID of the message.
  2. The name of the consumer that fetched the message and has still to acknowledge it. We call it the current owner of the message.
  3. The number of milliseconds that elapsed since the last time this message was delivered to this consumer.
  4. The number of times this message was delivered.

The deliveries counter, that is the fourth element in the array, is incremented when some other consumer claims the message with XCLAIM, or when the message is delivered again via XREADGROUP, when accessing the history of a consumer in a consumer group (see the XREADGROUP page for more info).

It is possible to pass an additional argument to the command, in order to see the messages having a specific owner:

> XPENDING mystream group55 - + 10 consumer-123

But in the above case the output would be the same, since we have pending messages only for a single consumer. However, what is important to keep in mind is that this operation of filtering by a specific consumer is efficient even when there are many pending messages from many consumers: a pending entries list data structure is kept both globally and per consumer, so we can very efficiently show just the messages pending for a single consumer.

Idle time filter

It is also possible to filter pending stream entries by their idle-time, given in milliseconds (useful for XCLAIMing entries that have not been processed for some time):

> XPENDING mystream group55 IDLE 9000 - + 10
> XPENDING mystream group55 IDLE 9000 - + 10 consumer-123

The first case will return the first 10 (or less) PEL entries of the entire group that are idle for over 9 seconds, whereas in the second case only those of consumer-123.

Exclusive ranges and iterating the PEL

The XPENDING command allows iterating over the pending entries just like XRANGE and XREVRANGE allow for the stream's entries. You can do this by prefixing the ID of the last-read pending entry with the ( character that denotes an open (exclusive) range, and providing it to the subsequent call to the command.
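
For example, continuing the session above, where the only pending entry has ID 1526984818136-0, the next page of the PEL could be requested as follows (the empty reply tells us the whole PEL has been iterated):

> XPENDING mystream group55 (1526984818136-0 + 10
(empty array)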

Return

Array reply, specifically:

The command returns data in a different format depending on the way it is called, as previously explained on this page. However, the reply is always an array of items.

452 - XRANGE

Return a range of elements in a stream, with IDs matching the specified IDs interval

The command returns the stream entries matching a given range of IDs. The range is specified by a minimum and maximum ID. All the entries having an ID between the two specified IDs, or exactly equal to one of them (it is a closed interval), are returned.

The XRANGE command has a number of applications:

  • Returning items in a specific time range. This is possible because Stream IDs are related to time.
  • Iterating a stream incrementally, returning just a few items at every iteration. However it is semantically much more robust than the SCAN family of functions.
  • Fetching a single entry from a stream, providing the ID of the entry to fetch two times: as start and end of the query interval.

The command also has a reciprocal command returning items in the reverse order, called XREVRANGE, which is otherwise identical.

- and + special IDs

The - and + special IDs mean respectively the minimum ID possible and the maximum ID possible inside a stream, so the following command will just return every entry in the stream:

> XRANGE somestream - +
1) 1) 1526985054069-0
   2) 1) "duration"
      2) "72"
      3) "event-id"
      4) "9"
      5) "user-id"
      6) "839248"
2) 1) 1526985069902-0
   2) 1) "duration"
      2) "415"
      3) "event-id"
      4) "2"
      5) "user-id"
      6) "772213"
... other entries here ...

The - ID is effectively equivalent to specifying 0-0, while + is equivalent to 18446744073709551615-18446744073709551615; however, they are nicer to type.

Incomplete IDs

Stream IDs are composed of two parts, a Unix millisecond time stamp and a sequence number for entries inserted in the same millisecond. It is possible to use XRANGE specifying just the first part of the ID, the millisecond time, like in the following example:

> XRANGE somestream 1526985054069 1526985055069

In this case, XRANGE will auto-complete the start interval with -0 and end interval with -18446744073709551615, in order to return all the entries that were generated between a given millisecond and the end of the other specified millisecond. This also means that repeating the same millisecond two times, we get all the entries within such millisecond, because the sequence number range will be from zero to the maximum.

Used in this way, XRANGE works as a range query command to obtain entries in a specified time range. This is very handy in order to access the history of past events in a stream.
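
For instance, with the stream from the earlier example, repeating the millisecond part of the first entry's ID on both sides of the range returns every entry generated within that millisecond:

> XRANGE somestream 1526985054069 1526985054069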

Exclusive ranges

The range is closed (inclusive) by default, meaning that the reply can include entries with IDs matching the query's start and end intervals. It is possible to specify an open interval (exclusive) by prefixing the ID with the character (. This is useful for iterating the stream, as explained below.

Returning a maximum number of entries

Using the COUNT option it is possible to reduce the number of entries reported. This is a very important feature, even if it may look marginal, because it allows, for instance, modeling operations such as "give me the entry whose ID is greater than or equal to the following":

> XRANGE somestream 1526985054069-0 + COUNT 1
1) 1) 1526985054069-0
   2) 1) "duration"
      2) "72"
      3) "event-id"
      4) "9"
      5) "user-id"
      6) "839248"

In the above case the entry 1526985054069-0 exists, otherwise the server would have sent us the next one. Using COUNT is also the basis for using XRANGE as an iterator.

Iterating a stream

In order to iterate a stream, we can proceed as follows. Let's assume that we want two elements per iteration. We start fetching the first two elements, which is trivial:

> XRANGE writers - + COUNT 2
1) 1) 1526985676425-0
   2) 1) "name"
      2) "Virginia"
      3) "surname"
      4) "Woolf"
2) 1) 1526985685298-0
   2) 1) "name"
      2) "Jane"
      3) "surname"
      4) "Austen"

Then instead of starting the iteration again from -, as the start of the range we use the entry ID of the last entry returned by the previous XRANGE call as an exclusive interval.

The ID of the last entry is 1526985685298-0, so we just prefix it with a '(', and continue our iteration:

> XRANGE writers (1526985685298-0 + COUNT 2
1) 1) 1526985691746-0
   2) 1) "name"
      2) "Toni"
      3) "surname"
      4) "Morrison"
2) 1) 1526985712947-0
   2) 1) "name"
      2) "Agatha"
      3) "surname"
      4) "Christie"

And so forth. Eventually this will allow us to visit all the entries in the stream. Obviously, we can start the iteration from any ID, or even from a specific time, by providing a given incomplete start ID. Moreover, we can limit the iteration to a given ID or time, by providing an end ID or incomplete ID instead of +.

The command XREAD is also able to iterate the stream. The command XREVRANGE can iterate the stream in reverse, from higher IDs (or times) to lower IDs (or times).

Iterating with earlier versions of Redis

While exclusive range intervals are only available from Redis 6.2, it is still possible to use a similar stream iteration pattern with earlier versions. You start fetching from the stream the same way as described above to obtain the first entries.

For the subsequent calls, you'll need to programmatically advance the ID of the last entry returned. Most Redis clients should abstract this detail, but the implementation can also be in the application if needed. In the example above, this means incrementing the sequence of 1526985685298-0 by one, from 0 to 1. The second call would, therefore, be:

> XRANGE writers 1526985685298-1 + COUNT 2
1) 1) 1526985691746-0
   2) 1) "name"
      2) "Toni"
...

Also, note that once the sequence part of the last ID equals 18446744073709551615, you'll need to increment the timestamp and reset the sequence part to 0. For example, incrementing the ID 1526985685298-18446744073709551615 should result in 1526985685299-0.

A symmetrical pattern applies to iterating the stream with XREVRANGE. The only difference is that the client needs to decrement the ID for the subsequent calls. When decrementing an ID with a sequence part of 0, the timestamp needs to be decremented by 1 and the sequence set to 18446744073709551615.

Fetching single items

If you look for an XGET command you'll be disappointed because XRANGE is effectively the way to go in order to fetch a single entry from a stream. All you have to do is to specify the ID two times in the arguments of XRANGE:

> XRANGE mystream 1526984818136-0 1526984818136-0
1) 1) 1526984818136-0
   2) 1) "duration"
      2) "1532"
      3) "event-id"
      4) "5"
      5) "user-id"
      6) "7782813"

Additional information about streams

For further information about Redis streams please check our introduction to Redis Streams document.

Return

Array reply, specifically:

The command returns the entries with IDs matching the specified range. The returned entries are complete, which means that the ID and all the fields they are composed of are returned. Moreover, the entries are returned with their fields and values in the exact same order as XADD added them.

Examples

XADD writers * name Virginia surname Woolf
XADD writers * name Jane surname Austen
XADD writers * name Toni surname Morrison
XADD writers * name Agatha surname Christie
XADD writers * name Ngozi surname Adichie
XLEN writers
XRANGE writers - + COUNT 2

453 - XREAD

Return never seen elements in multiple streams, with IDs greater than the ones reported by the caller for each stream. Can block.

Read data from one or multiple streams, only returning entries with an ID greater than the last received ID reported by the caller. This command has an option to block if items are not available, in a similar fashion to BRPOP or BZPOPMIN and others.

Please note: if you are new to streams, we recommend reading our introduction to Redis Streams before reading this page.

Non-blocking usage

If the BLOCK option is not used, the command is synchronous, and can be considered somewhat related to XRANGE: it will return a range of items inside streams, however it has two fundamental differences compared to XRANGE even if we just consider the synchronous usage:

  • This command can be called with multiple streams if we want to read at the same time from a number of keys. This is a key feature of XREAD because, especially when blocking with BLOCK, being able to listen to multiple keys with a single connection is vital.
  • While XRANGE returns items in a range of IDs, XREAD is more suited in order to consume the stream starting from the first entry which is greater than any other entry we saw so far. So what we pass to XREAD is, for each stream, the ID of the last element that we received from that stream.

For example, if I have two streams mystream and writers, and I want to read data from both the streams starting from the first element they contain, I could call XREAD like in the following example.

Note: we use the COUNT option in the example, so that for each stream the call will return at maximum two elements per stream.

> XREAD COUNT 2 STREAMS mystream writers 0-0 0-0
1) 1) "mystream"
   2) 1) 1) 1526984818136-0
         2) 1) "duration"
            2) "1532"
            3) "event-id"
            4) "5"
            5) "user-id"
            6) "7782813"
      2) 1) 1526999352406-0
         2) 1) "duration"
            2) "812"
            3) "event-id"
            4) "9"
            5) "user-id"
            6) "388234"
2) 1) "writers"
   2) 1) 1) 1526985676425-0
         2) 1) "name"
            2) "Virginia"
            3) "surname"
            4) "Woolf"
      2) 1) 1526985685298-0
         2) 1) "name"
            2) "Jane"
            3) "surname"
            4) "Austen"

The STREAMS option is mandatory and MUST be the final option, because it accepts a variable number of arguments, in the following format:

STREAMS key_1 key_2 key_3 ... key_N ID_1 ID_2 ID_3 ... ID_N

So we start with a list of keys, and later continue with all the associated IDs, representing the last ID we received for that stream, so that the call will serve us only greater IDs from the same stream.

For instance, in the above example, the last item that we received for the stream mystream has ID 1526999352406-0, while for the stream writers it has ID 1526985685298-0.

To continue iterating the two streams I'll call:

> XREAD COUNT 2 STREAMS mystream writers 1526999352406-0 1526985685298-0
1) 1) "mystream"
   2) 1) 1) 1526999626221-0
         2) 1) "duration"
            2) "911"
            3) "event-id"
            4) "7"
            5) "user-id"
            6) "9488232"
2) 1) "writers"
   2) 1) 1) 1526985691746-0
         2) 1) "name"
            2) "Toni"
            3) "surname"
            4) "Morrison"
      2) 1) 1526985712947-0
         2) 1) "name"
            2) "Agatha"
            3) "surname"
            4) "Christie"

And so forth. Eventually, the call will not return any item, but just an empty array; then we know that there is nothing more to fetch from our stream (and we would have to retry the operation, hence this command also supports a blocking mode).

Incomplete IDs

Using incomplete IDs is valid, as it is for XRANGE. However, here the sequence part of the ID, if missing, is always interpreted as zero, so the command:

> XREAD COUNT 2 STREAMS mystream writers 0 0

is exactly equivalent to

> XREAD COUNT 2 STREAMS mystream writers 0-0 0-0

Blocking for data

In its synchronous form, the command can get new data as long as there are more items available. However, at some point, we'll have to wait for producers of data to use XADD to push new entries inside the streams we are consuming. In order to avoid polling at a fixed or adaptive interval, the command is able to block if it could not return any data, according to the specified streams and IDs, and automatically unblock once one of the requested keys accepts data.

It is important to understand that this command fans out to all the clients that are waiting for the same range of IDs, so every consumer will get a copy of the data, unlike what happens when blocking list pop operations are used.

In order to block, the BLOCK option is used, together with the number of milliseconds we want to block before timing out. Normally Redis blocking commands take timeouts in seconds, however this command takes a millisecond timeout, even if normally the server will have a timeout resolution near to 0.1 seconds. This way it is possible to block for a shorter time in certain use cases, and if the server internals improve over time, it is possible that the resolution of timeouts will improve too.

When the BLOCK option is passed, but there is data to return in at least one of the streams passed, the command is executed synchronously, exactly as if the BLOCK option were missing.

This is an example of blocking invocation, where the command later returns a null reply because the timeout has elapsed without new data arriving:

> XREAD BLOCK 1000 STREAMS mystream 1526999626221-0
(nil)

The special $ ID

When blocking, sometimes we want to receive just the entries that are added to the stream via XADD starting from the moment we block. In such a case we are not interested in the history of already added entries. For this use case, we would have to check the stream's top element ID, and use such ID in the XREAD command line. This is not clean and requires calling other commands, so instead it is possible to use the special $ ID to signal to the stream that we want only the new things.

It is very important to understand that you should use the $ ID only for the first call to XREAD. Afterwards, the ID should be the one of the last reported item in the stream, otherwise you could miss all the entries that were added in between.

This is what a typical XREAD call looks like in the first iteration of a consumer willing to consume only new entries:

> XREAD BLOCK 5000 COUNT 100 STREAMS mystream $

Once we get some replies, the next call will be something like:

> XREAD BLOCK 5000 COUNT 100 STREAMS mystream 1526999644174-3

And so forth.

How multiple clients blocked on a single stream are served

Blocking pop operations on lists or sorted sets have a pop behavior: the element is removed from the list or sorted set in order to be returned to the client. In this scenario you want the items to be consumed in a fair way, depending on the moment clients blocked on a given key arrived. Normally Redis uses FIFO semantics in these use cases.

However note that with streams this is not a problem: stream entries are not removed from the stream when clients are served, so every client waiting will be served as soon as an XADD command provides data to the stream.

Return

Array reply, specifically:

The command returns an array of results: each element of the returned array is a two-element array containing the key name and the entries reported for that key. The entries reported are full stream entries, having IDs and the list of all the fields and values. Fields and values are guaranteed to be reported in the same order they were added by XADD.

When BLOCK is used, on timeout a null reply is returned.

Reading the Redis Streams introduction is highly suggested in order to understand more about the streams overall behavior and semantics.

454 - XREADGROUP

Return new entries from a stream using a consumer group, or access the history of the pending entries for a given consumer. Can block.

The XREADGROUP command is a special version of the XREAD command with support for consumer groups. You will probably need to understand the XREAD command before reading this page makes sense.

Moreover, if you are new to streams, we recommend reading our introduction to Redis Streams. Make sure to understand the concept of a consumer group in the introduction, so that following how this command works will be simpler.

Consumer groups in 30 seconds

The difference between this command and the vanilla XREAD is that this one supports consumer groups.

Without consumer groups, just using XREAD, all the clients are served with all the entries arriving in a stream. With consumer groups and XREADGROUP instead, it is possible to create groups of clients that consume different parts of the messages arriving in a given stream. If, for instance, the stream gets the new entries A, B, and C, and there are two consumers reading via a consumer group, one client will get, for instance, the messages A and C, and the other the message B, and so forth.

Within a consumer group, a given consumer (that is, just a client consuming messages from the stream) has to identify itself with a unique consumer name, which is just a string.

One of the guarantees of consumer groups is that a given consumer can only see the history of messages that were delivered to it, so a message has just a single owner. However there is a special feature called message claiming that allows other consumers to claim messages in case there is a non recoverable failure of some consumer. In order to implement such semantics, consumer groups require explicit acknowledgment of the messages successfully processed by the consumer, via the XACK command. This is needed because the stream will track, for each consumer group, who is processing what message.

This is how to understand if you want to use a consumer group or not:

  1. If you have a stream and multiple clients, and you want all the clients to get all the messages, you do not need a consumer group.
  2. If you have a stream and multiple clients, and you want the stream to be partitioned or sharded across your clients, so that each client will get a subset of the messages arriving in the stream, you need a consumer group.

Differences between XREAD and XREADGROUP

From the point of view of the syntax, the commands are almost the same, however XREADGROUP requires a special and mandatory option:

GROUP <group-name> <consumer-name>

The group name is just the name of a consumer group associated with the stream. The group is created using the XGROUP command. The consumer name is the string that is used by the client to identify itself inside the group. The consumer is auto-created inside the consumer group the first time it is seen. Different clients should select a different consumer name.

When you read with XREADGROUP, the server will remember that a given message was delivered to you: the message will be stored inside the consumer group in what is called a Pending Entries List (PEL), that is a list of message IDs delivered but not yet acknowledged.

The client will have to acknowledge the message processing using XACK in order for the pending entry to be removed from the PEL. The PEL can be inspected using the XPENDING command.

The NOACK subcommand can be used to avoid adding the message to the PEL in cases where reliability is not a requirement and the occasional message loss is acceptable. This is equivalent to acknowledging the message when it is read.

The ID to specify in the STREAMS option when using XREADGROUP can be one of the following two:

  • The special > ID, which means that the consumer wants to receive only messages that were never delivered to any other consumer. It just means: give me new messages.
  • Any other ID, that is, 0 or any other valid ID or incomplete ID (just the millisecond time part), will have the effect of returning entries that are pending for the consumer sending the command, with IDs greater than the one provided. So basically, if the ID is not >, the command will just let the client access its pending entries: messages delivered to it, but not yet acknowledged. Note that in this case, both BLOCK and NOACK are ignored. See the example right after this list.
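
For example, assuming the mygroup group and the consumer Alice from the XINFO STREAM example above, the two forms would look like this:

> XREADGROUP GROUP mygroup Alice COUNT 10 STREAMS mystream 0
> XREADGROUP GROUP mygroup Alice COUNT 10 STREAMS mystream >

The first call returns only Alice's pending entries (delivered to her but not yet acknowledged), while the second returns only entries that were never delivered to any consumer in the group.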

Like XREAD the XREADGROUP command can be used in a blocking way. There are no differences in this regard.

What happens when a message is delivered to a consumer?

Two things:

  1. If the message was never delivered to anyone, that is, if we are talking about a new message, then an entry for it is created in the PEL (Pending Entries List).
  2. If instead the message was already delivered to this consumer, and it is just re-fetching the same message again, then the last delivery time is updated to the current time, and the number of deliveries is incremented by one. You can access those message properties using the XPENDING command.

Usage example

Normally you use the command like that in order to get new messages and process them. In pseudo-code:

WHILE true
    entries = XREADGROUP GROUP $GroupName $ConsumerName BLOCK 2000 COUNT 10 STREAMS mystream >
    if entries == nil
        puts "Timeout... try again"
        CONTINUE
    end

    FOREACH entries AS stream_entries
        FOREACH stream_entries as message
            process_message(message.id,message.fields)

            # ACK the message as processed
            XACK mystream $GroupName message.id
        END
    END
END

In this way the example consumer code will fetch only new messages, process them, and acknowledge them via XACK. However, the example code above is not complete, because it does not handle recovering after a crash. If we crash in the middle of processing messages, our messages will remain in the pending entries list, so we can access our history by giving XREADGROUP initially an ID of 0 and performing the same loop. Once the reply to an ID of 0 is an empty set of messages, we know that we have processed and acknowledged all the pending messages: we can start to use > as the ID, in order to get the new messages and rejoin the consumers that are processing new things.
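
As a minimal sketch of that recovery step, in the same pseudo-code style as above (the names are the same placeholders):

# After a restart, first drain our own pending history:
WHILE true
    entries = XREADGROUP GROUP $GroupName $ConsumerName COUNT 10 STREAMS mystream 0
    if entries == empty
        BREAK    # history fully processed: switch to reading new messages with >
    end

    FOREACH entries AS stream_entries
        FOREACH stream_entries as message
            process_message(message.id,message.fields)
            XACK mystream $GroupName message.id
        END
    END
END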

To see how the command actually replies, please check the XREAD command page.

Return

Array reply, specifically:

The command returns an array of results: each element of the returned array is a two-element array containing the key name and the entries reported for that key. The entries reported are full stream entries, having IDs and the list of all the fields and values. Fields and values are guaranteed to be reported in the same order they were added by XADD.

When BLOCK is used, on timeout a null reply is returned.

Reading the Redis Streams introduction is highly suggested in order to understand more about the streams overall behavior and semantics.

455 - XREVRANGE

Return a range of elements in a stream, with IDs matching the specified IDs interval, in reverse order (from greater to smaller IDs) compared to XRANGE

This command is exactly like XRANGE, but with the notable difference of returning the entries in reverse order, and also taking the start-end range in reverse order: in XREVRANGE you need to state the end ID first and the start ID later, and the command will produce all the elements between (or exactly matching) the two IDs, starting from the end side.

So for instance, to get all the elements from the higher ID to the lower ID one could use:

XREVRANGE somestream + -

Similarly, to get just the last element added to the stream, it is enough to send:

XREVRANGE somestream + - COUNT 1

Return

Array reply, specifically:

The command returns the entries with IDs matching the specified range, from the higher ID to the lower ID matching. The returned entries are complete, which means that the ID and all the fields they are composed of are returned. Moreover, the entries are returned with their fields and values in the exact same order as XADD added them.

Examples

XADD writers * name Virginia surname Woolf
XADD writers * name Jane surname Austen
XADD writers * name Toni surname Morrison
XADD writers * name Agatha surname Christie
XADD writers * name Ngozi surname Adichie
XLEN writers
XREVRANGE writers + - COUNT 1

456 - XSETID

An internal command for replicating stream values

The XSETID command is an internal command. It is used by a Redis master to replicate the last delivered ID of streams.

457 - XTRIM

Trims the stream to (approximately if '~' is passed) a certain size

XTRIM trims the stream by evicting older entries (entries with lower IDs) if needed.

Trimming the stream can be done using one of these strategies:

  • MAXLEN: Evicts entries as long as the stream's length exceeds the specified threshold, where threshold is a positive integer.
  • MINID: Evicts entries with IDs lower than threshold, where threshold is a stream ID.

For example, this will trim the stream to exactly the latest 1000 items:

XTRIM mystream MAXLEN 1000

Whereas in this example, all entries that have an ID lower than 649085820-0 will be evicted:

XTRIM mystream MINID 649085820

By default, or when provided with the optional = argument, the command performs exact trimming.

Depending on the strategy, exact trimming means:

  • MAXLEN: the trimmed stream's length will be exactly the minimum between its original length and the specified threshold.
  • MINID: the oldest ID in the stream will be exactly the maximum between its original oldest ID and the specified threshold.
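
For example, the following two invocations are equivalent, the second one simply making the exact trimming explicit:

XTRIM mystream MAXLEN 1000
XTRIM mystream MAXLEN = 1000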

Nearly exact trimming

Because exact trimming may require additional effort from the Redis server, the optional ~ argument can be provided to make it more efficient.

For example:

XTRIM mystream MAXLEN ~ 1000

The ~ argument between the MAXLEN strategy and the threshold means that the user is requesting to trim the stream so its length is at least the threshold, but possibly slightly more. In this case, Redis will stop trimming early when performance can be gained (for example, when a whole macro node in the data structure can't be removed). This makes trimming much more efficient, and it is usually what you want, although after trimming, the stream may have a few tens of additional entries over the threshold.

Another way to control the amount of work done by the command when using ~ is the LIMIT clause. When used, it specifies the maximal count of entries that will be evicted. When LIMIT and count aren't specified, a default value of 100 times the number of entries in a macro node will be implicitly used as the count. Specifying the value 0 as count disables the limiting mechanism entirely.
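
For example, to trim approximately to 1000 entries while evicting at most 500 entries in this single call (the numbers here are illustrative):

XTRIM mystream MAXLEN ~ 1000 LIMIT 500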

Return

Integer reply: The number of entries deleted from the stream.

Examples

XADD mystream * field1 A field2 B field3 C field4 D
XTRIM mystream MAXLEN 2
XRANGE mystream - +

458 - ZADD

Add one or more members to a sorted set, or update its score if it already exists

Adds all the specified members with the specified scores to the sorted set stored at key. It is possible to specify multiple score / member pairs. If a specified member is already a member of the sorted set, the score is updated and the element reinserted at the right position to ensure the correct ordering.

If key does not exist, a new sorted set with the specified members as sole members is created, as if the sorted set was empty. If the key exists but does not hold a sorted set, an error is returned.

The score values should be the string representation of a double precision floating point number. +inf and -inf values are valid values as well.

ZADD options

ZADD supports a list of options, specified after the name of the key and before the first score argument. Options are:

  • XX: Only update elements that already exist. Don't add new elements.
  • NX: Only add new elements. Don't update already existing elements.
  • LT: Only update existing elements if the new score is less than the current score. This flag doesn't prevent adding new elements.
  • GT: Only update existing elements if the new score is greater than the current score. This flag doesn't prevent adding new elements.
  • CH: Modify the return value from the number of new elements added, to the total number of elements changed (CH is an abbreviation of changed). Changed elements are new elements added and elements already existing for which the score was updated. So elements specified in the command line having the same score as they had in the past are not counted. Note: normally the return value of ZADD only counts the number of new elements added.
  • INCR: When this option is specified ZADD acts like ZINCRBY. Only one score-element pair can be specified in this mode.

Note: The GT, LT and NX options are mutually exclusive.
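
As a brief illustration of combining these options, assuming myzset does not exist yet:

> ZADD myzset 5 "one"
(integer) 1
> ZADD myzset GT CH 3 "one"
(integer) 0
> ZADD myzset GT CH 10 "one"
(integer) 1

The second call performs no update because 3 is not greater than the current score 5; the third call updates the score to 10 and, thanks to CH, the changed element is counted in the reply.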

Range of integer scores that can be expressed precisely

Redis sorted sets use a double 64-bit floating point number to represent the score. In all the architectures we support, this is represented as an IEEE 754 floating point number, that is able to represent precisely integer numbers between -(2^53) and +(2^53) included. In more practical terms, all the integers between -9007199254740992 and 9007199254740992 are perfectly representable. Larger integers, or fractions, are internally represented in exponential form, so it is possible that you get only an approximation of the decimal number, or of the very big integer, that you set as score.

Sorted sets 101

Sorted sets are sorted by their score in ascending order. The same element only exists a single time; no repeated elements are permitted. The score can be modified both by ZADD, which will update the element's score (and, as a side effect, its position in the sorted set), and by ZINCRBY, which can be used in order to update the score relative to its previous value.

The current score of an element can be retrieved using the ZSCORE command, that can also be used to verify if an element already exists or not.

For an introduction to sorted sets, see the data types page on sorted sets.

Elements with the same score

While the same element can't be repeated in a sorted set since every element is unique, it is possible to add multiple different elements having the same score. When multiple elements have the same score, they are ordered lexicographically (they are still ordered by score as a first key, however, locally, all the elements with the same score are relatively ordered lexicographically).

The lexicographic ordering used is binary: it compares strings as arrays of bytes.

If the user inserts all the elements in a sorted set with the same score (for example 0), all the elements of the sorted set are sorted lexicographically, and range queries on elements are possible using the command ZRANGEBYLEX (Note: it is also possible to query sorted sets by range of scores using ZRANGEBYSCORE).
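
A quick illustration of this ordering, using a hypothetical key where every member has score 0:

> ZADD colors 0 "red" 0 "blue" 0 "green"
(integer) 3
> ZRANGE colors 0 -1
1) "blue"
2) "green"
3) "red"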

Return

Integer reply, specifically:

  • When used without optional arguments, the number of elements added to the sorted set (excluding score updates).
  • If the CH option is specified, the number of elements that were changed (added or updated).

If the INCR option is specified, the return value will be Bulk string reply:

  • The new score of member (a double precision floating point number) represented as string, or nil if the operation was aborted (when called with either the XX or the NX option).

Examples

ZADD myzset 1 "one"
ZADD myzset 1 "uno"
ZADD myzset 2 "two" 3 "three"
ZRANGE myzset 0 -1 WITHSCORES

459 - ZCARD

Get the number of members in a sorted set

Returns the sorted set cardinality (number of elements) of the sorted set stored at key.

Return

Integer reply: the cardinality (number of elements) of the sorted set, or 0 if key does not exist.

Examples

ZADD myzset 1 "one"
ZADD myzset 2 "two"
ZCARD myzset

460 - ZCOUNT

Count the members in a sorted set with scores within the given values

Returns the number of elements in the sorted set at key with a score between min and max.

The min and max arguments have the same semantic as described for ZRANGEBYSCORE.

Note: the command has a complexity of just O(log(N)) because it uses element ranks (see ZRANK) to get an idea of the range. Because of this, there is no need to do work proportional to the size of the range.

Return

Integer reply: the number of elements in the specified score range.

Examples

ZADD myzset 1 "one"
ZADD myzset 2 "two"
ZADD myzset 3 "three"
ZCOUNT myzset -inf +inf
ZCOUNT myzset (1 3

461 - ZDIFF

Subtract multiple sorted sets

This command is similar to ZDIFFSTORE, but instead of storing the resulting sorted set, it is returned to the client.

Return

Array reply: the result of the difference (optionally with their scores, in case the WITHSCORES option is given).

Examples

ZADD zset1 1 "one"
ZADD zset1 2 "two"
ZADD zset1 3 "three"
ZADD zset2 1 "one"
ZADD zset2 2 "two"
ZDIFF 2 zset1 zset2
ZDIFF 2 zset1 zset2 WITHSCORES

462 - ZDIFFSTORE

Subtract multiple sorted sets and store the resulting sorted set in a new key

Computes the difference between the first and all successive input sorted sets and stores the result in destination. The total number of input keys is specified by numkeys.

Keys that do not exist are considered to be empty sets.

If destination already exists, it is overwritten.

Return

Integer reply: the number of elements in the resulting sorted set at destination.

Examples

ZADD zset1 1 "one"
ZADD zset1 2 "two"
ZADD zset1 3 "three"
ZADD zset2 1 "one"
ZADD zset2 2 "two"
ZDIFFSTORE out 2 zset1 zset2
ZRANGE out 0 -1 WITHSCORES

463 - ZINCRBY

Increment the score of a member in a sorted set

Increments the score of member in the sorted set stored at key by increment. If member does not exist in the sorted set, it is added with increment as its score (as if its previous score was 0.0). If key does not exist, a new sorted set with the specified member as its sole member is created.

An error is returned when key exists but does not hold a sorted set.

The score value should be the string representation of a numeric value, and accepts double precision floating point numbers. It is possible to provide a negative value to decrement the score.

Return

Bulk string reply: the new score of member (a double precision floating point number), represented as string.

Examples

ZADD myzset 1 "one"
ZADD myzset 2 "two"
ZINCRBY myzset 2 "one"
ZRANGE myzset 0 -1 WITHSCORES

464 - ZINTER

Intersect multiple sorted sets

This command is similar to ZINTERSTORE, but instead of storing the resulting sorted set, it is returned to the client.

For a description of the WEIGHTS and AGGREGATE options, see ZUNIONSTORE.

Return

Array reply: the result of intersection (optionally with their scores, in case the WITHSCORES option is given).

Examples

ZADD zset1 1 "one"
ZADD zset1 2 "two"
ZADD zset2 1 "one"
ZADD zset2 2 "two"
ZADD zset2 3 "three"
ZINTER 2 zset1 zset2
ZINTER 2 zset1 zset2 WITHSCORES

465 - ZINTERCARD

Intersect multiple sorted sets and return the cardinality of the result

This command is similar to ZINTER, but instead of returning the result set, it returns just the cardinality of the result.

Keys that do not exist are considered to be empty sets. With one of the keys being an empty set, the resulting set is also empty (since set intersection with an empty set always results in an empty set).

By default, the command calculates the cardinality of the intersection of all given sets. When provided with the optional LIMIT argument (which defaults to 0 and means unlimited), if the intersection cardinality reaches limit partway through the computation, the algorithm will exit and yield limit as the cardinality. This implementation ensures a significant speedup for queries where the limit is lower than the actual intersection cardinality.

Return

Integer reply: the number of elements in the resulting intersection.

Examples

ZADD zset1 1 "one"
ZADD zset1 2 "two"
ZADD zset2 1 "one"
ZADD zset2 2 "two"
ZADD zset2 3 "three"
ZINTER 2 zset1 zset2
ZINTERCARD 2 zset1 zset2
ZINTERCARD 2 zset1 zset2 LIMIT 1

466 - ZINTERSTORE

Intersect multiple sorted sets and store the resulting sorted set in a new key

Computes the intersection of numkeys sorted sets given by the specified keys, and stores the result in destination. It is mandatory to provide the number of input keys (numkeys) before passing the input keys and the other (optional) arguments.

By default, the resulting score of an element is the sum of its scores in the sorted sets where it exists. Because intersection requires an element to be a member of every given sorted set, this results in the score of every element in the resulting sorted set being equal to the number of input sorted sets.

For a description of the WEIGHTS and AGGREGATE options, see ZUNIONSTORE.

If destination already exists, it is overwritten.

Return

Integer reply: the number of elements in the resulting sorted set at destination.

Examples

ZADD zset1 1 "one"
ZADD zset1 2 "two"
ZADD zset2 1 "one"
ZADD zset2 2 "two"
ZADD zset2 3 "three"
ZINTERSTORE out 2 zset1 zset2 WEIGHTS 2 3
ZRANGE out 0 -1 WITHSCORES

467 - ZLEXCOUNT

Count the number of members in a sorted set between a given lexicographical range

When all the elements in a sorted set are inserted with the same score, in order to force lexicographical ordering, this command returns the number of elements in the sorted set at key with a value between min and max.

The min and max arguments have the same meaning as described for ZRANGEBYLEX.

Note: the command has a complexity of just O(log(N)) because it uses element ranks (see ZRANK) to get an idea of the range. Because of this, there is no need to do work proportional to the size of the range.

Return

Integer reply: the number of elements in the specified lexicographical range.

Examples

ZADD myzset 0 a 0 b 0 c 0 d 0 e
ZADD myzset 0 f 0 g
ZLEXCOUNT myzset - +
ZLEXCOUNT myzset [b [f

468 - ZMPOP

Remove and return members with scores in a sorted set

Pops one or more elements (member-score pairs) from the first non-empty sorted set in the provided list of key names.

ZMPOP and BZMPOP are similar to the following, more limited, commands:

  • ZPOPMIN or ZPOPMAX which take only one key, and can return multiple elements.
  • BZPOPMIN or BZPOPMAX which take multiple keys, but return only one element from just one key.

See BZMPOP for the blocking variant of this command.

When the MIN modifier is used, the elements popped are those with the lowest scores from the first non-empty sorted set. The MAX modifier causes elements with the highest scores to be popped. The optional COUNT can be used to specify the number of elements to pop, and is set to 1 by default.

The number of popped elements is the minimum of the sorted set's cardinality and COUNT's value.

Return

Array reply: specifically:

  • A nil when no element could be popped.
  • A two-element array with the first element being the name of the key from which elements were popped, and the second element is an array of the popped elements. Every entry in the elements array is also an array that contains the member and its score.

Examples

ZMPOP 1 notsuchkey MIN
ZADD myzset 1 "one" 2 "two" 3 "three"
ZMPOP 1 myzset MIN
ZRANGE myzset 0 -1 WITHSCORES
ZMPOP 1 myzset MAX COUNT 10
ZADD myzset2 4 "four" 5 "five" 6 "six"
ZMPOP 2 myzset myzset2 MIN COUNT 10
ZRANGE myzset 0 -1 WITHSCORES
ZMPOP 2 myzset myzset2 MAX COUNT 10
ZRANGE myzset2 0 -1 WITHSCORES
EXISTS myzset myzset2

469 - ZMSCORE

Get the score associated with the given members in a sorted set

Returns the scores associated with the specified members in the sorted set stored at key.

For every member that does not exist in the sorted set, a nil value is returned.

Return

Array reply: list of scores (double precision floating point numbers, represented as strings) or nil, associated with the specified member values.

Examples

ZADD myzset 1 "one"
ZADD myzset 2 "two"
ZMSCORE myzset "one" "two" "nofield"

470 - ZPOPMAX

Remove and return members with the highest scores in a sorted set

Removes and returns up to count members with the highest scores in the sorted set stored at key.

When left unspecified, the default value for count is 1. Specifying a count value that is higher than the sorted set's cardinality will not produce an error. When returning multiple elements, the one with the highest score will be the first, followed by the elements with lower scores.

Return

Array reply: list of popped elements and scores.

Examples

ZADD myzset 1 "one"
ZADD myzset 2 "two"
ZADD myzset 3 "three"
ZPOPMAX myzset

471 - ZPOPMIN

Remove and return members with the lowest scores in a sorted set

Removes and returns up to count members with the lowest scores in the sorted set stored at key.

When left unspecified, the default value for count is 1. Specifying a count value that is higher than the sorted set's cardinality will not produce an error. When returning multiple elements, the one with the lowest score will be the first, followed by the elements with greater scores.

Return

Array reply: list of popped elements and scores.

Examples

ZADD myzset 1 "one"
ZADD myzset 2 "two"
ZADD myzset 3 "three"
ZPOPMIN myzset

472 - ZRANDMEMBER

Get one or multiple random elements from a sorted set

When called with just the key argument, return a random element from the sorted set value stored at key.

If the provided count argument is positive, return an array of distinct elements. The array's length is either count or the sorted set's cardinality (ZCARD), whichever is lower.

If called with a negative count, the behavior changes and the command is allowed to return the same element multiple times. In this case, the number of returned elements is the absolute value of the specified count.

The optional WITHSCORES modifier changes the reply so it includes the respective scores of the randomly selected elements from the sorted set.

Return

Bulk string reply: without the additional count argument, the command returns a Bulk Reply with the randomly selected element, or nil when key does not exist.

Array reply: when the additional count argument is passed, the command returns an array of elements, or an empty array when key does not exist. If the WITHSCORES modifier is used, the reply is a list of elements and their scores from the sorted set.

Examples

ZADD dadi 1 uno 2 due 3 tre 4 quattro 5 cinque 6 sei
ZRANDMEMBER dadi
ZRANDMEMBER dadi
ZRANDMEMBER dadi -5 WITHSCORES

Specification of the behavior when count is passed

When the count argument is a positive value this command behaves as follows:

  • No repeated elements are returned.
  • If count is bigger than the cardinality of the sorted set, the command will only return the whole sorted set without additional elements.
  • The order of elements in the reply is not truly random, so it is up to the client to shuffle them if needed.

When the count is a negative value, the behavior changes as follows:

  • Repeating elements are possible.
  • Exactly count elements, or an empty array if the sorted set is empty (non-existing key), are always returned.
  • The order of elements in the reply is truly random.

473 - ZRANGE

Return a range of members in a sorted set

Returns the specified range of elements in the sorted set stored at <key>.

ZRANGE can perform different types of range queries: by index (rank), by the score, or by lexicographical order.

Starting with Redis 6.2.0, this command can replace the following commands: ZREVRANGE, ZRANGEBYSCORE, ZREVRANGEBYSCORE, ZRANGEBYLEX and ZREVRANGEBYLEX.

Common behavior and options

The order of elements is from the lowest to the highest score. Elements with the same score are ordered lexicographically.

The optional REV argument reverses the ordering, so elements are ordered from highest to lowest score, and score ties are resolved by reverse lexicographical ordering.

The optional LIMIT argument can be used to obtain a sub-range from the matching elements (similar to SELECT LIMIT offset, count in SQL). A negative <count> returns all elements from the <offset>. Keep in mind that if <offset> is large, the sorted set needs to be traversed for <offset> elements before getting to the elements to return, which can add up to O(N) time complexity.

The optional WITHSCORES argument supplements the command's reply with the scores of elements returned. The returned list contains value1,score1,...,valueN,scoreN instead of value1,...,valueN. Client libraries are free to return a more appropriate data type (suggestion: an array with (value, score) arrays/tuples).

Index ranges

By default, the command performs an index range query. The <start> and <stop> arguments represent zero-based indexes, where 0 is the first element, 1 is the next element, and so on. These arguments specify an inclusive range, so for example, ZRANGE myzset 0 1 will return both the first and the second element of the sorted set.

The indexes can also be negative numbers indicating offsets from the end of the sorted set, with -1 being the last element of the sorted set, -2 the penultimate element, and so on.

Out of range indexes do not produce an error.

If <start> is greater than either the end index of the sorted set or <stop>, an empty list is returned.

If <stop> is greater than the end index of the sorted set, Redis will use the last element of the sorted set.

Score ranges

When the BYSCORE option is provided, the command behaves like ZRANGEBYSCORE and returns the range of elements from the sorted set having a score equal to or between <start> and <stop>.

<start> and <stop> can be -inf and +inf, denoting the negative and positive infinities, respectively. This means that you are not required to know the highest or lowest score in the sorted set to get all elements from or up to a certain score.

By default, the score intervals specified by <start> and <stop> are closed (inclusive). It is possible to specify an open interval (exclusive) by prefixing the score with the character (.

For example:

ZRANGE zset (1 5 BYSCORE

Will return all elements with 1 < score <= 5 while:

ZRANGE zset (5 (10 BYSCORE

Will return all the elements with 5 < score < 10 (5 and 10 excluded).

Reverse ranges

Using the REV option reverses the sorted set, with index 0 as the element with the highest score.

By default, <start> must be less than or equal to <stop> to return anything. However, if the BYSCORE or BYLEX option is selected, <start> is the highest score to consider and <stop> is the lowest score to consider, therefore <start> must be greater than or equal to <stop> in order to return anything.

For example:

ZRANGE zset 5 10 REV

Will return the elements between index 5 and 10 in the reversed index.

ZRANGE zset 10 5 REV BYSCORE

Will return all elements with scores less than 10 and greater than 5.

Lexicographical ranges

When the BYLEX option is used, the command behaves like ZRANGEBYLEX and returns the range of elements from the sorted set between the <start> and <stop> lexicographical closed range intervals.

Note that lexicographical ordering relies on all elements having the same score. The reply is unspecified when the elements have different scores.

Valid <start> and <stop> must start with ( or [, in order to specify whether the range interval is exclusive or inclusive, respectively.

The special values of + or - for <start> and <stop> mean positive and negative infinite strings, respectively, so for instance the command ZRANGE myzset - + BYLEX is guaranteed to return all the elements in the sorted set, providing that all the elements have the same score.

The REV option reverses the order of the <start> and <stop> elements, where <start> must be lexicographically greater than <stop> to produce a non-empty result.
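
For instance, with all members inserted at score 0, a lexicographical range and its reverse would look like this:

> ZADD myzset 0 a 0 b 0 c 0 d
(integer) 4
> ZRANGE myzset [b [d BYLEX
1) "b"
2) "c"
3) "d"
> ZRANGE myzset [d [b BYLEX REV
1) "d"
2) "c"
3) "b"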

Lexicographical comparison of strings

Strings are compared as a binary array of bytes. Because of how the ASCII character set is specified, this means that usually this also has the effect of comparing normal ASCII characters in an obvious dictionary way. However, this is not true if non-plain ASCII strings are used (for example, UTF-8 strings).

However, the user can apply a transformation to the encoded string so that the first part of the element inserted in the sorted set will compare as the user requires for the specific application. For example, if I want to add strings that will be compared in a case-insensitive way, but I still want to retrieve the real case when querying, I can add strings in the following way:

ZADD autocomplete 0 foo:Foo 0 bar:BAR 0 zap:zap

Because of the first normalized part in every element (before the colon character), we are forcing a given comparison. However, after the range is queried using ZRANGE ... BYLEX, the application can display to the user the second part of the string, after the colon.

The binary nature of the comparison allows using sorted sets as a general-purpose index: for example, the first part of the element can be a 64-bit big-endian number. Since big-endian numbers have the most significant bytes in the initial positions, the binary comparison will match the numerical comparison of the numbers. This can be used in order to implement range queries on 64-bit values. As in the example below, after the first 8 bytes, we can store the value of the element we are indexing.
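
A minimal sketch of this pattern, assuming the 64-bit numbers are stored as fixed-width, zero-padded hexadecimal strings (which sort in the same order as the raw big-endian bytes; the key and member names are hypothetical):

> ZADD myindex 0 "0000000000000064:item-100"
(integer) 1
> ZADD myindex 0 "00000000000000c8:item-200"
(integer) 1
> ZRANGE myindex - + BYLEX
1) "0000000000000064:item-100"
2) "00000000000000c8:item-200"

Here 0x64 is 100 and 0xc8 is 200, so the lexicographical order of the prefixes matches the numerical order of the indexed values.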

Return

Array reply: list of elements in the specified range (optionally with their scores, in case the WITHSCORES option is given).

Examples

ZADD myzset 1 "one"
ZADD myzset 2 "two"
ZADD myzset 3 "three"
ZRANGE myzset 0 -1
ZRANGE myzset 2 3
ZRANGE myzset -2 -1

The following example using WITHSCORES shows how the command always returns an array, but this time populated with element_1, score_1, element_2, score_2, ..., element_N, score_N.

ZRANGE myzset 0 1 WITHSCORES

This example shows how to query the sorted set by score, excluding the value 1 and up to infinity, returning only the second element of the result:

ZRANGE myzset (1 +inf BYSCORE LIMIT 1 1

474 - ZRANGEBYLEX

Return a range of members in a sorted set, by lexicographical range

When all the elements in a sorted set are inserted with the same score, in order to force lexicographical ordering, this command returns all the elements in the sorted set at key with a value between min and max.

If the elements in the sorted set have different scores, the returned elements are unspecified.

The elements are considered to be ordered from lower to higher strings as compared byte-by-byte using the memcmp() C function. Longer strings are considered greater than shorter strings if the common part is identical.

The optional LIMIT argument can be used to only get a range of the matching elements (similar to SELECT LIMIT offset, count in SQL). A negative count returns all elements from the offset. Keep in mind that if offset is large, the sorted set needs to be traversed for offset elements before getting to the elements to return, which can add up to O(N) time complexity.

How to specify intervals

Valid start and stop must start with ( or [, in order to specify whether the range endpoint is respectively exclusive or inclusive. The special values + and - for start and stop have the special meaning of positively infinite and negatively infinite strings, so for instance the command ZRANGEBYLEX myzset - + is guaranteed to return all the elements in the sorted set, if all the elements have the same score.

Details on strings comparison

Strings are compared as a binary array of bytes. Because of how the ASCII character set is specified, this means that usually this also has the effect of comparing normal ASCII characters in an obvious dictionary way. However, this is not true if non-plain ASCII strings are used (for example, UTF-8 strings).

However the user can apply a transformation to the encoded string so that the first part of the element inserted in the sorted set will compare as the user requires for the specific application. For example if I want to add strings that will be compared in a case-insensitive way, but I still want to retrieve the real case when querying, I can add strings in the following way:

ZADD autocomplete 0 foo:Foo 0 bar:BAR 0 zap:zap

Because of the first normalized part in every element (before the colon character), we are forcing a given comparison; however, after the range is queried using ZRANGEBYLEX, the application can display to the user the second part of the string, after the colon.
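A minimal redis-py sketch of this pattern; the autocomplete key and helper names are illustrative:

import redis

r = redis.Redis(decode_responses=True)

def add_entry(text):
    # normalized form first so the comparison is case-insensitive;
    # the original case is kept after the colon for display
    r.zadd("autocomplete", {text.lower() + ":" + text: 0})

def complete(prefix, limit=10):
    p = prefix.lower()
    results = []
    for m in r.zrangebylex("autocomplete", "[" + p, "+", start=0, num=limit):
        normalized, _, original = m.partition(":")
        if not normalized.startswith(p):
            break  # past the block of entries sharing the prefix
        results.append(original)
    return results

for word in ("Foo", "BAR", "zap"):
    add_entry(word)
print(complete("f"))  # ['Foo']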

The binary nature of the comparison allows sorted sets to be used as a general-purpose index, for example with the first part of the element being a 64-bit big-endian number: since big-endian numbers have the most significant bytes in the initial positions, the binary comparison matches the numerical comparison of the numbers. This can be used in order to implement range queries on 64-bit values. With this scheme, after the first 8 bytes we can store the value of the element we are actually indexing.

Return

Array reply: list of elements in the specified range.

Examples

ZADD myzset 0 a 0 b 0 c 0 d 0 e 0 f 0 g
ZRANGEBYLEX myzset - [c
ZRANGEBYLEX myzset - (c
ZRANGEBYLEX myzset [aaa (g

475 - ZRANGEBYSCORE

Return a range of members in a sorted set, by score

Returns all the elements in the sorted set at key with a score between min and max (including elements with score equal to min or max). The elements are considered to be ordered from low to high scores.

The elements having the same score are returned in lexicographical order (this follows from a property of the sorted set implementation in Redis and does not involve further computation).

The optional LIMIT argument can be used to only get a range of the matching elements (similar to SELECT LIMIT offset, count in SQL). A negative count returns all elements from the offset. Keep in mind that if offset is large, the sorted set needs to be traversed for offset elements before getting to the elements to return, which can add up to O(N) time complexity.

The optional WITHSCORES argument makes the command return both the element and its score, instead of the element alone. This option is available since Redis 2.0.

Exclusive intervals and infinity

min and max can be -inf and +inf, so that you are not required to know the highest or lowest score in the sorted set to get all elements from or up to a certain score.

By default, the interval specified by min and max is closed (inclusive). It is possible to specify an open interval (exclusive) by prefixing the score with the character (. For example:

ZRANGEBYSCORE zset (1 5

Will return all elements with 1 < score <= 5 while:

ZRANGEBYSCORE zset (5 (10

Will return all the elements with 5 < score < 10 (5 and 10 excluded).
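In redis-py the ( prefix is passed through verbatim, so the same intervals can be expressed as below; the key and members are illustrative:

import redis

r = redis.Redis(decode_responses=True)
r.zadd("zset", {"a": 1, "b": 3, "c": 5, "d": 10})

print(r.zrangebyscore("zset", "(1", 5))      # 1 < score <= 5 -> ['b', 'c']
print(r.zrangebyscore("zset", "(5", "(10"))  # 5 < score < 10 -> []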

Return

Array reply: list of elements in the specified score range (optionally with their scores).

Examples

ZADD myzset 1 "one"
ZADD myzset 2 "two"
ZADD myzset 3 "three"
ZRANGEBYSCORE myzset -inf +inf
ZRANGEBYSCORE myzset 1 2
ZRANGEBYSCORE myzset (1 2
ZRANGEBYSCORE myzset (1 (2

Pattern: weighted random selection of an element

Normally ZRANGEBYSCORE is simply used to get a range of items where the score is the indexed integer key; however, it is possible to do less obvious things with the command.

For example a common problem when implementing Markov chains and other algorithms is to select an element at random from a set, but different elements may have different weights that change how likely it is they are picked.

This is how we use this command in order to implement such an algorithm:

Imagine you have elements A, B and C with weights 1, 2 and 3. You compute the sum of the weights, which is 1+2+3 = 6

At this point you add all the elements into a sorted set using this algorithm:

# elements: list of (member, weight) pairs; r: a Redis client; key: the target sorted set
total = sum(weight for _, weight in elements)  # 6 in this case
score = 0.0
for member, weight in elements:
    score += weight / total   # cumulative fraction of the total, ending at 1
    r.zadd(key, {member: score})

This means that you set:

A to score 1/6 (about 0.17)
B to score 0.5
C to score 1

Since this involves floating point approximations, the last element might end up with a score like 0.998 instead of exactly 1, in which case a random number above 0.998 would match no element. To avoid this, modify the above algorithm to force the last score to be exactly 1 (left as an exercise for the reader...).

At this point, each time you want to get a weighted random element, just compute a random number between 0 and 1 (which is what rand() returns in most languages) and use it as the minimum score:

RANDOM_ELE = ZRANGEBYSCORE key RAND() +inf LIMIT 0 1
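A short redis-py sketch of the selection step, with the last score pinned to exactly 1 as suggested above; the key and members are illustrative:

import random
import redis

r = redis.Redis(decode_responses=True)
r.zadd("choices", {"A": 1 / 6, "B": 0.5, "C": 1.0})  # C pinned to exactly 1

def weighted_pick(key):
    # random() is in [0, 1), so an element with score >= it always exists
    return r.zrangebyscore(key, random.random(), "+inf", start=0, num=1)[0]

print(weighted_pick("choices"))  # 'C' about half of the time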

476 - ZRANGESTORE

Store a range of members from sorted set into another key

This command is like ZRANGE, but stores the result in the <dst> destination key.
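A possible usage sketch, assuming a redis-py release recent enough to expose this command as zrangestore; keys and members are illustrative:

import redis

r = redis.Redis(decode_responses=True)
r.zadd("srczset", {"one": 1, "two": 2, "three": 3, "four": 4})

n = r.zrangestore("dstzset", "srczset", 2, -1)  # last two elements by rank
print(n)                                        # 2
print(r.zrange("dstzset", 0, -1))               # ['three', 'four']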

Return

Integer reply: the number of elements in the resulting sorted set.

Examples

ZADD srczset 1 "one" 2 "two" 3 "three" 4 "four"
ZRANGESTORE dstzset srczset 2 -1
ZRANGE dstzset 0 -1

477 - ZRANK

Determine the index of a member in a sorted set

Returns the rank of member in the sorted set stored at key, with the scores ordered from low to high. The rank (or index) is 0-based, which means that the member with the lowest score has rank 0.

Use ZREVRANK to get the rank of an element with the scores ordered from high to low.
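A small redis-py sketch of the 0-based rank and the nil case; the key and member names are illustrative:

import redis

r = redis.Redis(decode_responses=True)
r.zadd("scores", {"ann": 10, "bob": 25, "cal": 40})

rank = r.zrank("scores", "bob")
if rank is None:                 # missing member or key -> nil reply
    print("bob is unranked")
else:
    print("bob is #%d from the bottom" % (rank + 1))  # rank 0 is the lowest score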

Return

  • If member exists in the sorted set, Integer reply: the rank of member.
  • If member does not exist in the sorted set or key does not exist, Bulk string reply: nil.

Examples

ZADD myzset 1 "one"
ZADD myzset 2 "two"
ZADD myzset 3 "three"
ZRANK myzset "three"
ZRANK myzset "four"

478 - ZREM

Remove one or more members from a sorted set

Removes the specified members from the sorted set stored at key. Non-existing members are ignored.

An error is returned when key exists and does not hold a sorted set.

Return

Integer reply, specifically:

  • The number of members removed from the sorted set, not including non-existing members.

Examples

ZADD myzset 1 "one"
ZADD myzset 2 "two"
ZADD myzset 3 "three"
ZREM myzset "two"
ZRANGE myzset 0 -1 WITHSCORES

479 - ZREMRANGEBYLEX

Remove all members in a sorted set between the given lexicographical range

When all the elements in a sorted set are inserted with the same score, in order to force lexicographical ordering, this command removes all elements in the sorted set stored at key between the lexicographical range specified by min and max.

The meaning of min and max is the same as in the ZRANGEBYLEX command. Similarly, this command actually removes the same elements that ZRANGEBYLEX would return if called with the same min and max arguments.

Return

Integer reply: the number of elements removed.

Examples

ZADD myzset 0 aaaa 0 b 0 c 0 d 0 e
ZADD myzset 0 foo 0 zap 0 zip 0 ALPHA 0 alpha
ZRANGE myzset 0 -1
ZREMRANGEBYLEX myzset [alpha [omega
ZRANGE myzset 0 -1

480 - ZREMRANGEBYRANK

Remove all members in a sorted set within the given indexes

Removes all elements in the sorted set stored at key with rank between start and stop. Both start and stop are 0-based indexes with 0 being the element with the lowest score. These indexes can be negative numbers, in which case they indicate offsets starting at the element with the highest score. For example: -1 is the element with the highest score, -2 the element with the second highest score, and so forth.
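Negative indexes make this command convenient for capping a set to its top-N entries. A hedged redis-py sketch, where the key name and CAP are illustrative:

import redis

r = redis.Redis(decode_responses=True)
CAP = 100  # keep only the 100 highest-scored members

def record(member, score):
    r.zadd("leaderboard", {member: score})
    # remove ranks 0 .. -(CAP+1), i.e. everything below the top CAP;
    # this is a no-op while the set holds CAP elements or fewer
    r.zremrangebyrank("leaderboard", 0, -(CAP + 1))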

Return

Integer reply: the number of elements removed.

Examples

ZADD myzset 1 "one"
ZADD myzset 2 "two"
ZADD myzset 3 "three"
ZREMRANGEBYRANK myzset 0 1
ZRANGE myzset 0 -1 WITHSCORES

481 - ZREMRANGEBYSCORE

Remove all members in a sorted set within the given scores

Removes all elements in the sorted set stored at key with a score between min and max (inclusive).

Return

Integer reply: the number of elements removed.

Examples

ZADD myzset 1 "one"
ZADD myzset 2 "two"
ZADD myzset 3 "three"
ZREMRANGEBYSCORE myzset -inf (2
ZRANGE myzset 0 -1 WITHSCORES

482 - ZREVRANGE

Return a range of members in a sorted set, by index, with scores ordered from high to low

Returns the specified range of elements in the sorted set stored at key. The elements are considered to be ordered from the highest to the lowest score. Descending lexicographical order is used for elements with equal score.

Apart from the reversed ordering, ZREVRANGE is similar to ZRANGE.

Return

Array reply: list of elements in the specified range (optionally with their scores).

Examples

ZADD myzset 1 "one"
ZADD myzset 2 "two"
ZADD myzset 3 "three"
ZREVRANGE myzset 0 -1
ZREVRANGE myzset 2 3
ZREVRANGE myzset -2 -1

483 - ZREVRANGEBYLEX

Return a range of members in a sorted set, by lexicographical range, ordered from higher to lower strings.

When all the elements in a sorted set are inserted with the same score, in order to force lexicographical ordering, this command returns all the elements in the sorted set at key with a value between max and min.

Apart from the reversed ordering, ZREVRANGEBYLEX is similar to ZRANGEBYLEX.

Return

Array reply: list of elements in the specified range.

Examples

ZADD myzset 0 a 0 b 0 c 0 d 0 e 0 f 0 g
ZREVRANGEBYLEX myzset [c -
ZREVRANGEBYLEX myzset (c -
ZREVRANGEBYLEX myzset (g [aaa

484 - ZREVRANGEBYSCORE

Return a range of members in a sorted set, by score, with scores ordered from high to low

Returns all the elements in the sorted set at key with a score between max and min (including elements with score equal to max or min). Contrary to the default ordering of sorted sets, for this command the elements are considered to be ordered from high to low scores.

The elements having the same score are returned in reverse lexicographical order.

Apart from the reversed ordering, ZREVRANGEBYSCORE is similar to ZRANGEBYSCORE.

Return

Array reply: list of elements in the specified score range (optionally with their scores).

Examples

ZADD myzset 1 "one"
ZADD myzset 2 "two"
ZADD myzset 3 "three"
ZREVRANGEBYSCORE myzset +inf -inf
ZREVRANGEBYSCORE myzset 2 1
ZREVRANGEBYSCORE myzset 2 (1
ZREVRANGEBYSCORE myzset (2 (1

485 - ZREVRANK

Determine the index of a member in a sorted set, with scores ordered from high to low

Returns the rank of member in the sorted set stored at key, with the scores ordered from high to low. The rank (or index) is 0-based, which means that the member with the highest score has rank 0.

Use ZRANK to get the rank of an element with the scores ordered from low to high.

Return

  • If member exists in the sorted set, Integer reply: the rank of member.
  • If member does not exist in the sorted set or key does not exist, Bulk string reply: nil.

Examples

ZADD myzset 1 "one"
ZADD myzset 2 "two"
ZADD myzset 3 "three"
ZREVRANK myzset "one"
ZREVRANK myzset "four"

486 - ZSCAN

Incrementally iterate sorted sets elements and associated scores

See SCAN for ZSCAN documentation.
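For convenience, redis-py wraps the cursor loop; a minimal sketch, where the key and MATCH pattern are illustrative:

import redis

r = redis.Redis(decode_responses=True)

# zscan_iter drives the ZSCAN cursor internally and
# yields (member, score) pairs
for member, score in r.zscan_iter("myzset", match="user:*"):
    print(member, score)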

487 - ZSCORE

Get the score associated with the given member in a sorted set

Returns the score of member in the sorted set at key.

If member does not exist in the sorted set, or key does not exist, nil is returned.

Return

Bulk string reply: the score of member (a double precision floating point number), represented as a string.

Examples

ZADD myzset 1 "one"
ZSCORE myzset "one"

488 - ZUNION

Add multiple sorted sets

This command is similar to ZUNIONSTORE, but instead of storing the resulting sorted set, it is returned to the client.

For a description of the WEIGHTS and AGGREGATE options, see ZUNIONSTORE.

Return

Array reply: the result of the union (optionally with their scores, in case the WITHSCORES option is given).

Examples

ZADD zset1 1 "one"
ZADD zset1 2 "two"
ZADD zset2 1 "one"
ZADD zset2 2 "two"
ZADD zset2 3 "three"
ZUNION 2 zset1 zset2
ZUNION 2 zset1 zset2 WITHSCORES

489 - ZUNIONSTORE

Add multiple sorted sets and store the resulting sorted set in a new key

Computes the union of numkeys sorted sets given by the specified keys, and stores the result in destination. It is mandatory to provide the number of input keys (numkeys) before passing the input keys and the other (optional) arguments.

By default, the resulting score of an element is the sum of its scores in the sorted sets where it exists.

Using the WEIGHTS option, it is possible to specify a multiplication factor for each input sorted set. This means that the score of every element in every input sorted set is multiplied by this factor before being passed to the aggregation function. When WEIGHTS is not given, the multiplication factors default to 1.

With the AGGREGATE option, it is possible to specify how the results of the union are aggregated. This option defaults to SUM, where the score of an element is summed across the inputs where it exists. When this option is set to either MIN or MAX, the resulting set will contain the minimum or maximum score of an element across the inputs where it exists.

If destination already exists, it is overwritten.
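A hedged redis-py sketch mirroring the example below; the client accepts a dict to express WEIGHTS, and aggregate selects SUM, MIN, or MAX:

import redis

r = redis.Redis(decode_responses=True)
r.zadd("zset1", {"one": 1, "two": 2})
r.zadd("zset2", {"one": 1, "two": 2, "three": 3})

n = r.zunionstore("out", {"zset1": 2, "zset2": 3})  # dict values are WEIGHTS
print(n)  # 3
print(r.zrange("out", 0, -1, withscores=True))
# one:   1*2 + 1*3 = 5
# three:       3*3 = 9
# two:   2*2 + 2*3 = 10

r.zunionstore("out", {"zset1": 2, "zset2": 3}, aggregate="MAX")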

Return

Integer reply: the number of elements in the resulting sorted set at destination.

Examples

ZADD zset1 1 "one"
ZADD zset1 2 "two"
ZADD zset2 1 "one"
ZADD zset2 2 "two"
ZADD zset2 3 "three"
ZUNIONSTORE out 2 zset1 zset2 WEIGHTS 2 3
ZRANGE out 0 -1 WITHSCORES