Redis Cluster Deployment
Starting the Deployment
This article uses Docker to deploy six Redis instances (3 masters and 3 replicas) on a single machine.
Create six directories
Create six folders: mkdir redis1 redis2 redis3 redis4 redis5 redis6
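The docker-compose file below mounts a conf, data, and logs subdirectory for each node, so it is convenient to create the whole layout in one step. A minimal sketch (the brace-expansion one-liner is not part of the original steps, but it matches the volume mappings used later):

# create redis1..redis6, each with conf/, data/ and logs/ subdirectories
mkdir -p redis{1..6}/{conf,data,logs}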
Modify the configuration file
# Change the port here: 6371 6372 6373 6374 6375 6376 for the six nodes respectively
port 6371
# Normal Redis instances can't be part of a Redis Cluster; only nodes that are started as cluster nodes can.
cluster-enabled yes
# Every cluster node has a cluster configuration file. This file is not intended to be edited by hand;
# it is created and updated by Redis nodes.
# Every Redis Cluster node requires a different cluster configuration file.
# Make sure that instances running on the same system do not have overlapping cluster configuration file names.
cluster-config-file nodes-6379.conf
# Cluster node timeout is the number of milliseconds a node must be unreachable for it to be considered
# in failure state. Most other internal time limits are multiples of the node timeout.
cluster-node-timeout 15000
# In certain deployments, Redis Cluster nodes' address discovery fails because addresses are NAT-ted
# or because ports are forwarded (the typical case is Docker and other containers).
# In order to make Redis Cluster work in such environments, a static configuration where each node knows
# its public address is needed. The following four options are used for this purpose:
# * cluster-announce-ip
# * cluster-announce-port
# * cluster-announce-tls-port
# * cluster-announce-bus-port
# Each instructs the node about its address, client ports (for connections without and with TLS) and the
# cluster message bus port. The information is then published in the header of the bus packets so that
# other nodes will be able to correctly map the address of the node publishing the information.
# If cluster-tls is set to yes and cluster-announce-tls-port is omitted or set to zero, then
# cluster-announce-port refers to the TLS port. Note that cluster-announce-tls-port has no effect
# if cluster-tls is set to no.
# If the above options are not used, the normal Redis Cluster auto-detection will be used instead.
# Note that when remapped, the bus port may not be at the fixed offset of client port + 10000, so you can
# specify any port and bus port depending on how they get remapped. If the bus port is not set, a fixed
# offset of 10000 will be used as usual.
cluster-announce-ip 192.168.64.2
cluster-announce-port 6371
cluster-announce-bus-port 16371
Copy the configuration file into the conf subdirectory of each of the folders created above.
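Since the six config files differ only in the port-related lines, they can also be generated from a single template. A minimal sketch, assuming the configuration shown above has been saved as redis.conf in the current directory (the template name and the loop are assumptions, not part of the original setup); replacing 6371 with 637N also turns 16371 into 1637N:

# generate the redis1..redis6 configs from the template
for i in 1 2 3 4 5 6; do
  sed "s/6371/637${i}/g" redis.conf > "redis${i}/conf/redis.conf"
done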
docker-compose.yml configuration
Following the guidance on the Redis website, Docker must use the host network mode to be compatible with Redis Cluster. The host network mode is specified when creating a container with --net host or --network host (the docker-compose equivalent is shown below). In host mode the container shares the host machine's network stack: it does not get its own virtual NIC or IP, but uses the host's IP and ports directly.
The docker-compose.yml file sits in the same directory as the folders created above.
version: "3.8"
services:
redis-1:
image: 'redis:latest'
container_name: redis-1
network_mode: host
volumes:
- ./redis1/conf:/usr/local/etc/redis
- ./redis1/data:/data
- ./redis1/logs:/logs
command: redis-server /usr/local/etc/redis/redis.conf
redis-2:
image: 'redis:latest'
container_name: redis-2
network_mode: host
volumes:
- ./redis2/conf:/usr/local/etc/redis
- ./redis2/data:/data
- ./redis2/logs:/logs
command: redis-server /usr/local/etc/redis/redis.conf
redis-3:
image: 'redis:latest'
container_name: redis-3
network_mode: host
volumes:
- ./redis3/conf:/usr/local/etc/redis
- ./redis3/data:/data
- ./redis3/logs:/logs
command: redis-server /usr/local/etc/redis/redis.conf
redis-4:
image: 'redis:latest'
container_name: redis-4
network_mode: host
volumes:
- ./redis4/conf:/usr/local/etc/redis
- ./redis4/data:/data
- ./redis4/logs:/logs
command: redis-server /usr/local/etc/redis/redis.conf
redis-5:
image: 'redis:latest'
container_name: redis-5
network_mode: host
volumes:
- ./redis5/conf:/usr/local/etc/redis
- ./redis5/data:/data
- ./redis5/logs:/logs
command: redis-server /usr/local/etc/redis/redis.conf
redis-6:
image: 'redis:latest'
container_name: redis-6
network_mode: host
volumes:
- ./redis6/conf:/usr/local/etc/redis
- ./redis6/data:/data
- ./redis6/logs:/logs
command: redis-server /usr/local/etc/redis/redis.conf
Start the containers
Bring the containers up with: docker-compose up -d
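Before creating the cluster it is worth confirming that all six containers are actually running. A quick check (not part of the original steps):

docker-compose ps
# or list only the redis containers
docker ps --filter name=redis-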
- After the containers have started, enter any one of them (docker exec -it redis-1 /bin/bash) and run the command below to link the nodes into a cluster. It is followed by the IP and port of every node (use the host machine's IP here); --cluster-replicas 1 means each master gets one replica:
redis-cli --cluster create --cluster-replicas 1 ip:6371 ip:6372 ip:6373 ip:6374 ip:6375 ip:6376
Seeing [OK] All 16384 slots covered. means the cluster was created successfully (a quick CLUSTER INFO health check is sketched after this list).
- Use CLUSTER NODES to view information about all of the nodes:
127.0.0.1:6371> CLUSTER NODES
c3d20a24230ce205f706312cb8f0d92dc77f6fed 192.168.64.2:6372@16372 master - 0 1678545602412 2 connected 5461-10922
c07d1212db745db47bb878ecc3d538e69038cccc 192.168.64.2:6373@16373 master - 0 1678545603418 3 connected 10923-16383
9640477d66b8baee727be0359059865b39905921 192.168.64.2:6375@16375 slave c07d1212db745db47bb878ecc3d538e69038cccc 0 1678545601000 3 connected
95657526f57fa2ce388620f60bfe540d37b254b0 192.168.80.2:6371@16371 myself,master - 0 1678545600000 1 connected 0-5460
dabf237cf10a36b6bca4bd266b05ff95f5c87861 192.168.64.2:6374@16374 slave c3d20a24230ce205f706312cb8f0d92dc77f6fed 0 1678545604421 2 connected
3142650be2929b7a05eb3a062cd0afce4dbc580f 192.168.64.2:6376@16376 slave 95657526f57fa2ce388620f60bfe540d37b254b0 0 1678545603000 1 connected
You can see the corresponding hash slot range listed after each master node.
- Commonly used cluster commands:
# check which slot a key belongs to
CLUSTER KEYSLOT <key>
# check how many keys are in a slot (a node can only check the slots it owns)
CLUSTER COUNTKEYSINSLOT <slot>
Note: in a real production deployment it is not recommended to place a master and its replica on the same machine; if that machine goes down, both the master and the replica are lost at once.
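Before moving on to the failover test, CLUSTER INFO gives a quick health check of the newly created cluster. The field names below are from the standard CLUSTER INFO output; check that they show the values given in the comments:

redis-cli -p 6371 cluster info
# key lines to look for in the output:
# cluster_state:ok
# cluster_known_nodes:6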
Testing Failover
First, use CLUSTER NODES to view the node information:
127.0.0.1:6371> CLUSTER NODES
1d7863e33a5e644029392f3d08cc11d0171c6829 192.168.64.2:6371@16371 myself,master - 0 1678551685000 1 connected 0-5460
5040b65363bd41a5f81e8aa7532ea82240aaaa70 192.168.64.2:6372@16372 master - 0 1678551685000 2 connected 5461-10922
c2b0fa980019d04539c77bd451732bae123cd515 192.168.64.2:6373@16373 master - 0 1678551687776 3 connected 10923-16383
120719ecdfaff15defc5220a7d4938178e18da10 192.168.64.2:6374@16374 slave c2b0fa980019d04539c77bd451732bae123cd515 0 1678551686774 3 connected
d0dc5bc69bcca8b9d507df7dc9e7331af963b2ae 192.168.64.2:6375@16375 slave 1d7863e33a5e644029392f3d08cc11d0171c6829 0 1678551686000 1 connected
545661a747b4d493eb78f4a56f0ef6a85bbd6e9c 192.168.64.2:6376@16376 slave 5040b65363bd41a5f81e8aa7532ea82240aaaa70 0 1678551686000 2 connected
From the output above we can see three master/replica pairs: 6371 (master) with 6375 (replica), 6372 with 6376, and 6373 with 6374.
Use docker stop redis-1 to shut down 6371 and simulate a crash, then run CLUSTER NODES again to view the node information:
127.0.0.1:6372> CLUSTER NODES
1d7863e33a5e644029392f3d08cc11d0171c6829 192.168.64.2:6371@16371 master - 1678552094714 1678552089690 1 disconnected 0-5460
5040b65363bd41a5f81e8aa7532ea82240aaaa70 192.168.64.2:6372@16372 myself,master - 0 1678552101000 2 connected 5461-10922
c2b0fa980019d04539c77bd451732bae123cd515 192.168.64.2:6373@16373 master - 0 1678552102757 3 connected 10923-16383
120719ecdfaff15defc5220a7d4938178e18da10 192.168.64.2:6374@16374 slave c2b0fa980019d04539c77bd451732bae123cd515 0 1678552103762 3 connected
d0dc5bc69bcca8b9d507df7dc9e7331af963b2ae 192.168.64.2:6375@16375 slave 1d7863e33a5e644029392f3d08cc11d0171c6829 0 1678552103000 1 connected
545661a747b4d493eb78f4a56f0ef6a85bbd6e9c 192.168.64.2:6376@16376 slave 5040b65363bd41a5f81e8aa7532ea82240aaaa70 0 1678552102000 2 connected
Here we find that the cluster did not perform a failover. Checking the documentation turns up the cluster-replica-validity-factor setting, which determines how a replica is chosen to complete an automatic failover (FAILOVER) after a master in the cluster goes down. If it is set to 0, a replica considers itself eligible to become master no matter how long it has been disconnected from its master.
Two mechanisms are used to judge whether a replica's data is too old:
- If several replicas are able to fail over, they exchange messages to elect the replica with the largest replication offset.
- Each replica tracks the time of its last interaction with its master, which covers the last ping, write commands received from the master, and the time since the replication link went down. If the last interaction was too long ago, the replica will not initiate a failover.
For the second point, the allowed interaction time is configurable: if the time since the replica last interacted with its master exceeds
(node-timeout * cluster-replica-validity-factor) + repl-ping-replica-period
the replica will not fail over. For example, with node-timeout = 30 seconds, cluster-replica-validity-factor = 10, and repl-ping-replica-period = 10 seconds, the threshold is 30 * 10 + 10 = 310 seconds; a replica whose last interaction with its master was more than 310 seconds ago will not fail over.
The larger cluster-replica-validity-factor is, the staler the data a replica may hold while still being promoted to master; the smaller it is, the more likely it is that no replica will be eligible for promotion at all.
For high availability, it is recommended to set cluster-replica-validity-factor 0.
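One way to apply the setting, sketched under the assumption that the configs live in the redisN/conf directories created earlier: append the directive to every node's config file and restart the containers.

# append the setting to every node's config, then restart all containers (a sketch)
for i in 1 2 3 4 5 6; do
  echo "cluster-replica-validity-factor 0" >> "redis${i}/conf/redis.conf"
done
docker-compose restart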
After changing cluster-replica-validity-factor to 0, try again:
127.0.0.1:6372> CLUSTER NODES
1d7863e33a5e644029392f3d08cc11d0171c6829 192.168.64.2:6371@16371 master,fail - 1678554277505 1678554272479 1 disconnected
5040b65363bd41a5f81e8aa7532ea82240aaaa70 192.168.64.2:6372@16372 myself,master - 0 1678554293000 2 connected 5461-10922
c2b0fa980019d04539c77bd451732bae123cd515 192.168.64.2:6373@16373 master - 0 1678554291581 3 connected 10923-16383
120719ecdfaff15defc5220a7d4938178e18da10 192.168.64.2:6374@16374 slave c2b0fa980019d04539c77bd451732bae123cd515 0 1678554295000 3 connected
d0dc5bc69bcca8b9d507df7dc9e7331af963b2ae 192.168.64.2:6375@16375 master - 0 1678554294000 7 connected 0-5460
545661a747b4d493eb78f4a56f0ef6a85bbd6e9c 192.168.64.2:6376@16376 slave 5040b65363bd41a5f81e8aa7532ea82240aaaa70 0 1678554295601 2 connected
We can see that a failover has taken place and 6375 has become the master.
Will the crashed master become a master again after it recovers?
Restart the crashed master and check the node information again:
127.0.0.1:6372> CLUSTER NODES
1d7863e33a5e644029392f3d08cc11d0171c6829 192.168.64.2:6371@16371 slave d0dc5bc69bcca8b9d507df7dc9e7331af963b2ae 0 1678554598222 7 connected
120719ecdfaff15defc5220a7d4938178e18da10 192.168.64.2:6374@16374 slave c2b0fa980019d04539c77bd451732bae123cd515 0 1678554598000 3 connected
545661a747b4d493eb78f4a56f0ef6a85bbd6e9c 192.168.64.2:6376@16376 slave 5040b65363bd41a5f81e8aa7532ea82240aaaa70 0 1678554598000 2 connected
d0dc5bc69bcca8b9d507df7dc9e7331af963b2ae 192.168.64.2:6375@16375 master - 0 1678554596000 7 connected 0-5460
5040b65363bd41a5f81e8aa7532ea82240aaaa70 192.168.64.2:6372@16372 myself,master - 0 1678554597000 2 connected 5461-10922
c2b0fa980019d04539c77bd451732bae123cd515 192.168.64.2:6373@16373 master - 0 1678554599227 3 connected 10923-16383
We can see that after the crashed master restarts, it does not become a master again; it rejoins as a replica.
Manual Failover
The master above crashed and, after restarting, came back as a replica. If you still want that node to be a master, you can perform a manual failover: simply run the CLUSTER FAILOVER command on that Redis node.
127.0.0.1:6371> CLUSTER FAILOVER
OK
127.0.0.1:6371> CLUSTER NODES
1d7863e33a5e644029392f3d08cc11d0171c6829 192.168.64.2:6371@16371 myself,master - 0 1678555118000 8 connected 0-5460
5040b65363bd41a5f81e8aa7532ea82240aaaa70 192.168.64.2:6372@16372 master - 0 1678555121082 2 connected 5461-10922
c2b0fa980019d04539c77bd451732bae123cd515 192.168.64.2:6373@16373 master - 0 1678555119000 3 connected 10923-16383
120719ecdfaff15defc5220a7d4938178e18da10 192.168.64.2:6374@16374 slave c2b0fa980019d04539c77bd451732bae123cd515 0 1678555120078 3 connected
d0dc5bc69bcca8b9d507df7dc9e7331af963b2ae 192.168.64.2:6375@16375 slave 1d7863e33a5e644029392f3d08cc11d0171c6829 0 1678555119074 8 connected
545661a747b4d493eb78f4a56f0ef6a85bbd6e9c 192.168.64.2:6376@16376 slave 5040b65363bd41a5f81e8aa7532ea82240aaaa70 0 1678555122085 2 connected
We can see that 6371 has become the master again.
Scaling Out and Scaling In
Scaling out
Here we start two new Redis instances, 6377 and 6378.
- In the same directory, create two new folders, redis7 and redis8, and a docker-compose-new.yml file:
mkdir redis7 redis8
touch docker-compose-new.yml
- Edit docker-compose-new.yml:
version: "3.8"
services:
  redis-7:
    image: 'redis:latest'
    container_name: redis-7
    network_mode: host
    volumes:
      - ./redis7/conf:/usr/local/etc/redis
      - ./redis7/data:/data
      - ./redis7/logs:/logs
    command: redis-server /usr/local/etc/redis/redis.conf
  redis-8:
    image: 'redis:latest'
    container_name: redis-8
    network_mode: host
    volumes:
      - ./redis8/conf:/usr/local/etc/redis
      - ./redis8/data:/data
      - ./redis8/logs:/logs
    command: redis-server /usr/local/etc/redis/redis.conf
- Copy the configuration file and adjust it for each new node; the fields that need to change are the ones below (see the per-node sketch after this list):
port 6371
cluster-announce-ip 192.168.64.2
cluster-announce-port 6371
cluster-announce-bus-port 16371
- Start the new containers:
docker-compose -f docker-compose-new.yml up -d
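For reference, the modified config for the first new node might look like the following, keeping the same pattern as the existing nodes. A sketch (redis8 uses 6378 and 16378 in the same places):

# redis7/conf/redis.conf
port 6377
cluster-enabled yes
cluster-config-file nodes-6379.conf
cluster-node-timeout 15000
cluster-announce-ip 192.168.64.2
cluster-announce-port 6377
cluster-announce-bus-port 16377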
Adding the new host to the cluster as a master
When the cluster hits a capacity limit, or needs to grow for other reasons, Redis Cluster offers a fairly graceful expansion procedure:
- First, add the new node to the cluster:
# add node 192.168.64.2:6377 to the cluster that node 192.168.64.2:6371 belongs to
docker exec -it redis-1 redis-cli --cluster add-node 192.168.64.2:6377 192.168.64.2:6371
- Check the node information:
root@zygzyg:/# docker exec -it redis-1 redis-cli --cluster check 192.168.64.2 6371
Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
192.168.64.2:6371 (8eadb7b0...) -> 0 keys | 5461 slots | 1 slaves.
192.168.64.2:6373 (ec4d1857...) -> 0 keys | 5461 slots | 1 slaves.
192.168.64.2:6372 (7148804d...) -> 0 keys | 5462 slots | 1 slaves.
192.168.64.2:6377 (14ea8ca8...) -> 0 keys | 0 slots | 0 slaves.
We can see that 6377 has joined the cluster as a master, but it has not been assigned any slots yet.
- Reshard the slots to the new node (a non-interactive form of this command is sketched after this list):
docker exec -it redis-1 redis-cli --cluster reshard 192.168.64.2:6371
>>> Performing Cluster Check (using node 192.168.64.2:6371)
M: 8eadb7b0abc8b078e1976dd55b5d527dfaa5a2c1 192.168.64.2:6371
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
S: 24e077c89581b67cbb1da1cc1f3a54fb7bb81f66 192.168.64.2:6374
   slots: (0 slots) slave
   replicates 7148804d3d834ef09fd85116dd47e8c510817ed3
M: ec4d185716e3aaac362e50e8b0de29fbc8043c58 192.168.64.2:6373
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
M: 7148804d3d834ef09fd85116dd47e8c510817ed3 192.168.64.2:6372
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
M: 14ea8ca8ca5177f708e4e8cd35060825984b1f70 192.168.64.2:6377
   slots: (0 slots) master
S: a5c3552ee377921ff29c26ae2ad6559eef670938 192.168.64.2:6375
   slots: (0 slots) slave
   replicates ec4d185716e3aaac362e50e8b0de29fbc8043c58
S: 9f5269c210004819088745d56b3f9d447c559c9a 192.168.64.2:6376
   slots: (0 slots) slave
   replicates 8eadb7b0abc8b078e1976dd55b5d527dfaa5a2c1
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
Here we choose to move 4096 slots and give the ID of the receiving node:
How many slots do you want to move (from 1 to 16384)? 4096 # the number of slots to move
What is the receiving node ID? 14ea8ca8ca5177f708e4e8cd35060825984b1f70 # the ID of the node that receives the slots
# choose where the slots come from; 'all' takes them evenly from all other masters
Please enter all the source node IDs.
Type 'all' to use all the nodes as source nodes for the hash slots.
Type 'done' once you entered all the source nodes IDs.
Source node #1: all
- Checking the node information again shows that the slots have been assigned. The new node's slots are not contiguous; they are [0-1364], [5461-6826] and [10923-12287], which shows they were taken evenly from the other three masters:
root@zygzyg:/r# docker exec -it redis-1 redis-cli --cluster check 192.168.64.2 6371
192.168.64.2:6371 (8eadb7b0...) -> 0 keys | 4096 slots | 1 slaves.
192.168.64.2:6373 (ec4d1857...) -> 0 keys | 4096 slots | 1 slaves.
192.168.64.2:6372 (7148804d...) -> 0 keys | 4096 slots | 1 slaves.
192.168.64.2:6377 (14ea8ca8...) -> 0 keys | 4096 slots | 0 slaves.
[OK] 0 keys in 4 masters.
0.00 keys per slot on average.
>>> Performing Cluster Check (using node 192.168.64.2:6371)
M: 8eadb7b0abc8b078e1976dd55b5d527dfaa5a2c1 192.168.64.2:6371
   slots:[1365-5460] (4096 slots) master
   1 additional replica(s)
S: 24e077c89581b67cbb1da1cc1f3a54fb7bb81f66 192.168.64.2:6374
   slots: (0 slots) slave
   replicates 7148804d3d834ef09fd85116dd47e8c510817ed3
M: ec4d185716e3aaac362e50e8b0de29fbc8043c58 192.168.64.2:6373
   slots:[12288-16383] (4096 slots) master
   1 additional replica(s)
M: 7148804d3d834ef09fd85116dd47e8c510817ed3 192.168.64.2:6372
   slots:[6827-10922] (4096 slots) master
   1 additional replica(s)
M: 14ea8ca8ca5177f708e4e8cd35060825984b1f70 192.168.64.2:6377
   slots:[0-1364],[5461-6826],[10923-12287] (4096 slots) master
S: a5c3552ee377921ff29c26ae2ad6559eef670938 192.168.64.2:6375
   slots: (0 slots) slave
   replicates ec4d185716e3aaac362e50e8b0de29fbc8043c58
S: 9f5269c210004819088745d56b3f9d447c559c9a 192.168.64.2:6376
   slots: (0 slots) slave
   replicates 8eadb7b0abc8b078e1976dd55b5d527dfaa5a2c1
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
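The interactive prompts above can also be skipped: redis-cli's cluster manager accepts the same answers as command-line options, which is more convenient for scripting. A sketch using the node ID from this example:

# move 4096 slots from all other masters to 6377 without interactive prompts
docker exec -it redis-1 redis-cli --cluster reshard 192.168.64.2:6371 \
  --cluster-from all \
  --cluster-to 14ea8ca8ca5177f708e4e8cd35060825984b1f70 \
  --cluster-slots 4096 \
  --cluster-yes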
Adding a replica for the new master
Two Redis instances (6377 and 6378) were started earlier; 6377 has already joined the cluster as a master. Next, add 6378 as its replica:
docker exec -it redis-1 redis-cli --cluster add-node 192.168.64.2:6378 192.168.64.2:6377 --cluster-slave --cluster-master-id 14ea8ca8ca5177f708e4e8cd35060825984b1f70
--cluster-master-id specifies the master for the new replica; its argument is the master's node ID.
Checking the node information again shows that the newly added 6377 and 6378 have joined the cluster, with 6378 as the replica of 6377. Scaling out is now complete.
127.0.0.1:6371> CLUSTER NODES
8eadb7b0abc8b078e1976dd55b5d527dfaa5a2c1 192.168.64.2:6371@16371 myself,master - 0 1678608644000 1 connected 1365-5460
7148804d3d834ef09fd85116dd47e8c510817ed3 192.168.64.2:6372@16372 master - 0 1678608647390 2 connected 6827-10922
ec4d185716e3aaac362e50e8b0de29fbc8043c58 192.168.64.2:6373@16373 master - 0 1678608644375 3 connected 12288-16383
24e077c89581b67cbb1da1cc1f3a54fb7bb81f66 192.168.64.2:6374@16374 slave 7148804d3d834ef09fd85116dd47e8c510817ed3 0 1678608643371 2 connected
a5c3552ee377921ff29c26ae2ad6559eef670938 192.168.64.2:6375@16375 slave ec4d185716e3aaac362e50e8b0de29fbc8043c58 0 1678608645378 3 connected
9f5269c210004819088745d56b3f9d447c559c9a 192.168.64.2:6376@16376 slave 8eadb7b0abc8b078e1976dd55b5d527dfaa5a2c1 0 1678608644000 1 connected
14ea8ca8ca5177f708e4e8cd35060825984b1f70 192.168.64.2:6377@16377 master - 0 1678608646384 7 connected 0-1364 5461-6826 10923-12287
d3491a0ac541242adc623bb6789d9ff90c62a3ac 192.168.64.2:6378@16378 slave 14ea8ca8ca5177f708e4e8cd35060825984b1f70 0 1678608644000 7 connected
Scaling in
Here we remove the newly added 6377 and 6378 from the cluster again.
First, remove the replica that was just added (redis-cli --cluster del-node <cluster IP>:<cluster port> <ID of the node to remove>):
root@zygzyg:/# docker exec -it redis-1 redis-cli --cluster del-node 192.168.64.2:6378 d3491a0ac541242adc623bb6789d9ff90c62a3ac
>>> Removing node d3491a0ac541242adc623bb6789d9ff90c62a3ac from cluster 192.168.64.2:6378
>>> Sending CLUSTER FORGET messages to the cluster...
>>> Sending CLUSTER RESET SOFT to the deleted node.
Checking the node information again shows that the replica has been removed:
127.0.0.1:6371> CLUSTER NODES
8eadb7b0abc8b078e1976dd55b5d527dfaa5a2c1 192.168.64.2:6371@16371 myself,master - 0 1678617759000 1 connected 1365-5460
7148804d3d834ef09fd85116dd47e8c510817ed3 192.168.64.2:6372@16372 master - 0 1678617757000 2 connected 6827-10922
ec4d185716e3aaac362e50e8b0de29fbc8043c58 192.168.64.2:6373@16373 master - 0 1678617760573 3 connected 12288-16383
24e077c89581b67cbb1da1cc1f3a54fb7bb81f66 192.168.64.2:6374@16374 slave 7148804d3d834ef09fd85116dd47e8c510817ed3 0 1678617758000 2 connected
a5c3552ee377921ff29c26ae2ad6559eef670938 192.168.64.2:6375@16375 slave ec4d185716e3aaac362e50e8b0de29fbc8043c58 0 1678617759568 3 connected
9f5269c210004819088745d56b3f9d447c559c9a 192.168.64.2:6376@16376 slave 8eadb7b0abc8b078e1976dd55b5d527dfaa5a2c1 0 1678617759000 1 connected
14ea8ca8ca5177f708e4e8cd35060825984b1f70 192.168.64.2:6377@16377 master - 0 1678617757554 7 connected 0-1364 5461-6826 10923-12287
Next, empty the slots of the master that was just added (6377); here all of the freed slots are handed to 6371.
docker exec -it redis-1 redis-cli --cluster reshard 192.168.64.2:6371
Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
>>> Performing Cluster Check (using node 192.168.64.2:6371)
M: 8eadb7b0abc8b078e1976dd55b5d527dfaa5a2c1 192.168.64.2:6371
slots:[1365-5460] (4096 slots) master
1 additional replica(s)
S: 24e077c89581b67cbb1da1cc1f3a54fb7bb81f66 192.168.64.2:6374
slots: (0 slots) slave
replicates 7148804d3d834ef09fd85116dd47e8c510817ed3
M: ec4d185716e3aaac362e50e8b0de29fbc8043c58 192.168.64.2:6373
slots:[12288-16383] (4096 slots) master
1 additional replica(s)
M: 7148804d3d834ef09fd85116dd47e8c510817ed3 192.168.64.2:6372
slots:[6827-10922] (4096 slots) master
1 additional replica(s)
M: 14ea8ca8ca5177f708e4e8cd35060825984b1f70 192.168.64.2:6377
slots:[0-1364],[5461-6826],[10923-12287] (4096 slots) master
S: a5c3552ee377921ff29c26ae2ad6559eef670938 192.168.64.2:6375
slots: (0 slots) slave
replicates ec4d185716e3aaac362e50e8b0de29fbc8043c58
S: 9f5269c210004819088745d56b3f9d447c559c9a 192.168.64.2:6376
slots: (0 slots) slave
replicates 8eadb7b0abc8b078e1976dd55b5d527dfaa5a2c1
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
How many slots do you want to move (from 1 to 16384)? 4096 # the number of slots to move
What is the receiving node ID? 8eadb7b0abc8b078e1976dd55b5d527dfaa5a2c1 # the ID of the node that receives the slots
# choose where the slots come from
Please enter all the source node IDs.
Type 'all' to use all the nodes as source nodes for the hash slots.
Type 'done' once you entered all the source nodes IDs.
Source node #1: 14ea8ca8ca5177f708e4e8cd35060825984b1f70 # the ID of 6377: the 4096 slots moved to 6371 all come from 6377
Source node #2: done # finish entering source nodes
Check the node information again:
docker exec -it redis-1 redis-cli -a 970512zyg --cluster check 192.168.64.2 6371
192.168.64.2:6371 (8eadb7b0...) -> 0 keys | 8192 slots | 2 slaves.
192.168.64.2:6373 (ec4d1857...) -> 0 keys | 4096 slots | 1 slaves.
192.168.64.2:6372 (7148804d...) -> 0 keys | 4096 slots | 1 slaves.
Here we can see that 6371 now owns 8192 slots, so the transfer succeeded. But where did 6377 go? In the output below we can see that 6377 has become a replica of 6371.
[OK] 0 keys in 3 masters.
0.00 keys per slot on average.
>>> Performing Cluster Check (using node 192.168.64.2:6371)
M: 8eadb7b0abc8b078e1976dd55b5d527dfaa5a2c1 192.168.64.2:6371
slots:[0-6826],[10923-12287] (8192 slots) master
2 additional replica(s)
S: 14ea8ca8ca5177f708e4e8cd35060825984b1f70 192.168.64.2:6377
slots: (0 slots) slave
replicates 8eadb7b0abc8b078e1976dd55b5d527dfaa5a2c1
Repeat the operation above (docker exec -it redis-1 redis-cli --cluster del-node 192.168.64.2:6377 14ea8ca8ca5177f708e4e8cd35060825984b1f70) to remove 6377 from the cluster as well. Scaling in is now complete.
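Once 6377 and 6378 have been removed from the cluster, their containers are no longer needed. An optional cleanup step (not part of the original walkthrough):

docker-compose -f docker-compose-new.yml down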
Commonly Used Cluster Commands
Multi-key operations across slots are poorly supported
Keys that do not live in the same slot cannot be used with multi-key commands such as mset and mget:
127.0.0.1:6371> mget k1 k2 k3
(error) CROSSSLOT Keys in request don't hash to the same slot
Here {} can be used to define a group: keys whose {} sections contain the same content are placed into the same slot. For example:
127.0.0.1:6371> mset k1{k} v1 k2{k} v2 k3{k} v3
-> Redirected to slot [7629] located at 141.147.157.234:6372
OK
192.168.64.2:6372> mget k1{k} k2{k} k3{k}
1) "v1"
2) "v2"
3) "v3"