Redis clusters used to be built with redis-trib.rb, a cluster management tool written in Ruby. Internally it wraps the CLUSTER family of commands to simplify common operations such as cluster creation, health checks, slot migration, and rebalancing, but it requires a Ruby runtime to be installed first. In the Redis version used here, the functionality of redis-trib.rb has already been folded into the redis-cli command itself.
[root@redis src]# redis-cli --version
redis-cli 6.2.6
[root@redis src]# redis-server --version
Redis server v=6.2.6 sha=00000000:0 malloc=jemalloc-5.1.0 bits=64 build=dbb063e6f0357f98
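To see which cluster operations redis-cli now provides, ask it for the list of --cluster subcommands; it covers create, check, info, fix, reshard, rebalance, add-node, del-node, and more:

redis-cli --cluster help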
So below we will use the redis-cli command to quickly set up a Redis cluster. The directory layout for the six nodes is as follows:
├── 6379
│   ├── conf
│   │   └── redis-6379.conf
│   ├── data
│   └── log
├── 6380
│   ├── conf
│   │   └── redis-6380.conf
│   ├── data
│   └── log
├── 6381
│   ├── conf
│   │   └── redis-6381.conf
│   ├── data
│   └── log
├── 6382
│   ├── conf
│   │   └── redis-6382.conf
│   ├── data
│   └── log
├── 6383
│   ├── conf
│   │   └── redis-6383.conf
│   ├── data
│   └── log
└── 6384
    ├── conf
    │   └── redis-6384.conf
    ├── data
    │   └── dump.rdb
    └── log
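A layout like this can be created in one step; here is a minimal sketch, assuming the tree is rooted at /data as the startup commands below suggest:

mkdir -p /data/{6379,6380,6381,6382,6383,6384}/{conf,data,log}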
Compared with the configuration from the previous section, each redis-{port}.conf here additionally enables persistence with appendonly yes.
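For reference, a minimal cluster-node configuration along these lines might look as follows; the previous section's exact settings are not repeated here, so the paths and timeout are illustrative, not the author's verbatim file:

# /data/6379/conf/redis-6379.conf (illustrative sketch)
port 6379
daemonize yes
dir /data/6379/data
logfile /data/6379/log/redis-6379.log
# Cluster mode: each node maintains its own nodes-*.conf state file
cluster-enabled yes
cluster-config-file nodes-6379.conf
cluster-node-timeout 15000
# Added in this section: AOF persistence
appendonly yes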
First, start each Redis data node:
[root@redis data]# redis-server /data/6379/conf/redis-6379.conf
[root@redis data]# redis-server /data/6380/conf/redis-6380.conf
[root@redis data]# redis-server /data/6381/conf/redis-6381.conf
[root@redis data]# redis-server /data/6382/conf/redis-6382.conf
[root@redis data]# redis-server /data/6383/conf/redis-6383.conf
[root@redis data]# redis-server /data/6384/conf/redis-6384.conf
[root@redis data]# ps -ef|grep redis
root 10906 6097 0 08:05 pts/1 00:00:00 redis-cli -p 6379
root 10924 5723 0 08:23 pts/0 00:00:00 redis-cli -p 6384
root 11309 1 0 09:28 ? 00:00:00 redis-server *:6379 [cluster]
root 11315 1 0 09:28 ? 00:00:00 redis-server *:6380 [cluster]
root 11321 1 0 09:29 ? 00:00:00 redis-server *:6381 [cluster]
root 11327 1 0 09:29 ? 00:00:00 redis-server *:6382 [cluster]
root 11333 1 0 09:29 ? 00:00:00 redis-server *:6383 [cluster]
root 11339 1 0 09:29 ? 00:00:00 redis-server *:6384 [cluster]
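The [cluster] tag in the ps output confirms that each process came up in cluster mode. The six start commands can also be condensed into a loop; a small sketch, assuming the /data layout above:

for port in 6379 6380 6381 6382 6383 6384; do
    redis-server /data/${port}/conf/redis-${port}.conf
done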
Now join these standalone nodes into a single cluster. With --cluster-replicas 1, redis-cli assigns one replica to each master, so the six nodes become three masters and three replicas:
[root@redis 6379]# redis-cli --cluster create 127.0.0.1:6379 127.0.0.1:6380 127.0.0.1:6381 127.0.0.1:6382 127.0.0.1:6383 127.0.0.1:6384 --cluster-replicas 1
>>> Performing hash slots allocation on 6 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica 127.0.0.1:6383 to 127.0.0.1:6379
Adding replica 127.0.0.1:6384 to 127.0.0.1:6380
Adding replica 127.0.0.1:6382 to 127.0.0.1:6381
>>> Trying to optimize slaves allocation for anti-affinity
[WARNING] Some slaves are in the same host as their master
M: f87d0bac225f13d1d16e6f5cffecff3b83998484 127.0.0.1:6379
   slots:[0-5460] (5461 slots) master
M: 78f13905a498d20450033f04014f905193b251c0 127.0.0.1:6380
   slots:[5461-10922] (5462 slots) master
M: 39276f17bb64869a1b4c03433bde1f688a2065e9 127.0.0.1:6381
   slots:[10923-16383] (5461 slots) master
S: 9b9c5a435fb8fa59d978a98be0e0aeaedb5027d5 127.0.0.1:6382
   replicates 78f13905a498d20450033f04014f905193b251c0
S: da69fbf33f9395f4d9ed3016e69855b7fbb28f62 127.0.0.1:6383
   replicates 39276f17bb64869a1b4c03433bde1f688a2065e9
S: 889af758c59d392240d2d2d38e37c9df7be053ab 127.0.0.1:6384
   replicates f87d0bac225f13d1d16e6f5cffecff3b83998484
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
..
>>> Performing Cluster Check (using node 127.0.0.1:6379)
M: f87d0bac225f13d1d16e6f5cffecff3b83998484 127.0.0.1:6379
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
M: 39276f17bb64869a1b4c03433bde1f688a2065e9 127.0.0.1:6381
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
S: 889af758c59d392240d2d2d38e37c9df7be053ab 127.0.0.1:6384
   slots: (0 slots) slave
   replicates f87d0bac225f13d1d16e6f5cffecff3b83998484
S: 9b9c5a435fb8fa59d978a98be0e0aeaedb5027d5 127.0.0.1:6382
   slots: (0 slots) slave
   replicates 78f13905a498d20450033f04014f905193b251c0
S: da69fbf33f9395f4d9ed3016e69855b7fbb28f62 127.0.0.1:6383
   slots: (0 slots) slave
   replicates 39276f17bb64869a1b4c03433bde1f688a2065e9
M: 78f13905a498d20450033f04014f905193b251c0 127.0.0.1:6380
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
Connect to any single node and inspect the cluster topology:
127.0.0.1:6379> cluster nodes
39276f17bb64869a1b4c03433bde1f688a2065e9 127.0.0.1:6381@16381 master - 0 1648345208000 3 connected 10923-16383
889af758c59d392240d2d2d38e37c9df7be053ab 127.0.0.1:6384@16384 slave f87d0bac225f13d1d16e6f5cffecff3b83998484 0 1648345207916 1 connected
9b9c5a435fb8fa59d978a98be0e0aeaedb5027d5 127.0.0.1:6382@16382 slave 78f13905a498d20450033f04014f905193b251c0 0 1648345209947 2 connected
da69fbf33f9395f4d9ed3016e69855b7fbb28f62 127.0.0.1:6383@16383 slave 39276f17bb64869a1b4c03433bde1f688a2065e9 0 1648345209000 3 connected
78f13905a498d20450033f04014f905193b251c0 127.0.0.1:6380@16380 master - 0 1648345209000 2 connected 5461-10922
f87d0bac225f13d1d16e6f5cffecff3b83998484 127.0.0.1:6379@16379 myself,master - 0 1648345207000 1 connected 0-5460
Each line shows a node's ID, its address (client port @ cluster bus port), its role, the master it replicates (for slaves), and its assigned slot range. The output above confirms the Redis cluster has been set up successfully.
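As a quick smoke test (the key name here is illustrative, not from the original setup), run a command in cluster mode with -c so that redis-cli automatically follows MOVED redirections to the master that owns the key's slot:

# -c enables cluster mode; without it a key owned by another master returns a MOVED error
[root@redis ~]# redis-cli -c -p 6379 set hello world
OK
[root@redis ~]# redis-cli -c -p 6379 get hello
"world"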
Cluster completeness means that every slot is assigned to a live master node; if even one of the 16384 slots is unassigned, the cluster is incomplete. Use the check subcommand to verify:
[root@redis 6379]# redis-cli --cluster check -h 127.0.0.1 -p 6379
[ERR] Wrong number of arguments for specified --cluster sub command
[root@redis 6379]#
[root@redis 6379]# redis-cli --cluster check 127.0.0.1:6379
127.0.0.1:6379 (f87d0bac...) -> 0 keys | 5461 slots | 1 slaves.
127.0.0.1:6381 (39276f17...) -> 0 keys | 5461 slots | 1 slaves.
127.0.0.1:6380 (78f13905...) -> 0 keys | 5462 slots | 1 slaves.
[OK] 0 keys in 3 masters.
0.00 keys per slot on average.
>>> Performing Cluster Check (using node 127.0.0.1:6379)
M: f87d0bac225f13d1d16e6f5cffecff3b83998484 127.0.0.1:6379
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
M: 39276f17bb64869a1b4c03433bde1f688a2065e9 127.0.0.1:6381
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
S: 889af758c59d392240d2d2d38e37c9df7be053ab 127.0.0.1:6384
   slots: (0 slots) slave
   replicates f87d0bac225f13d1d16e6f5cffecff3b83998484
S: 9b9c5a435fb8fa59d978a98be0e0aeaedb5027d5 127.0.0.1:6382
   slots: (0 slots) slave
   replicates 78f13905a498d20450033f04014f905193b251c0
S: da69fbf33f9395f4d9ed3016e69855b7fbb28f62 127.0.0.1:6383
   slots: (0 slots) slave
   replicates 39276f17bb64869a1b4c03433bde1f688a2065e9
M: 78f13905a498d20450033f04014f905193b251c0 127.0.0.1:6380
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
As the output shows, every slot has been assigned to a node. (Note that the check subcommand takes a single host:port argument, which is why the first attempt with -h/-p failed.)
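For a shorter summary without the full node-by-node check, the info subcommand prints just the key, slot, and replica counts per master:

redis-cli --cluster info 127.0.0.1:6379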
And with that, the cluster is up. Pretty simple, isn't it?