1. Introduction
2. AFTER write consistency
3. AFTER read consistency
4. AFTER execution flow
5. BEFORE execution flow
6. Some thoughts
7. References
During a test run we found that, at the AFTER consistency level, a node failure prevents the whole group from committing transactions; moreover, once a transaction has entered the commit stage, the other nodes cannot even start read-only transactions. The entire cluster is unable to serve requests until the failed node is expelled from the group.

The steps to reproduce this failure scenario are as follows:

1. Initialize a 3-node group. The group information is as follows:
mysql> select * from performance_schema.replication_group_members;
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE | MEMBER_ROLE | MEMBER_VERSION |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
| group_replication_applier | c223cde5-0719-11ec-8295-ec5c6826bca3 | 127.0.0.1   |       13000 | ONLINE       | PRIMARY     | 8.0.25         |
| group_replication_applier | c22847e0-0719-11ec-b30b-ec5c6826bca3 | 127.0.0.1   |       13004 | ONLINE       | PRIMARY     | 8.0.25         |
| group_replication_applier | c22c3752-0719-11ec-98ef-ec5c6826bca3 | 127.0.0.1   |       13002 | ONLINE       | PRIMARY     | 8.0.25         |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
3 rows in set (0.00 sec)

mysql> select @@group_replication_member_expel_timeout;
+------------------------------------------+
| @@group_replication_member_expel_timeout |
+------------------------------------------+
|                                     1000 |
+------------------------------------------+
1 row in set (0.00 sec)
2. At the AFTER level, create a table and insert one row:
13000-conn1
mysql> set session group_replication_consistency='AFTER';
Query OK, 0 rows affected (0.00 sec)
mysql> create table t1 (c1 int primary key, c2 int);
Query OK, 0 rows affected (0.12 sec)
mysql> insert into t1 values (1,1);
Query OK, 1 row affected (0.03 sec)
3. Force-kill one node. Since group_replication_member_expel_timeout is set to 1000 seconds, the failed node stays in the UNREACHABLE state:
mysql> select * from performance_schema.replication_group_members;
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE | MEMBER_ROLE | MEMBER_VERSION |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
| group_replication_applier | c223cde5-0719-11ec-8295-ec5c6826bca3 | 127.0.0.1   |       13000 | ONLINE       | PRIMARY     | 8.0.25         |
| group_replication_applier | c22847e0-0719-11ec-b30b-ec5c6826bca3 | 127.0.0.1   |       13004 | UNREACHABLE  | PRIMARY     | 8.0.25         |
| group_replication_applier | c22c3752-0719-11ec-98ef-ec5c6826bca3 | 127.0.0.1   |       13002 | ONLINE       | PRIMARY     | 8.0.25         |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
3 rows in set (0.00 sec)
4. Now insert another row. The statement does not return; it hangs in the commit-wait stage:
13000-conn1
mysql> insert into t1 values (2,2);

13000-conn2
mysql> select time, state, info from information_schema.processlist;
+------+--------------------------------------------------------+--------------------------------------------------------------+
| time | state                                                  | info                                                         |
+------+--------------------------------------------------------+--------------------------------------------------------------+
|    0 | executing                                              | select time, state, info from information_schema.processlist |
|  193 | waiting for handler commit                             | Group replication applier module                             |
|  228 | Waiting on empty queue                                 | NULL                                                         |
|  193 | Slave has read all relay log; waiting for more updates | NULL                                                         |
|   50 | waiting for handler commit                             | insert into t1 values (2,2)                                  |
+------+--------------------------------------------------------+--------------------------------------------------------------+
5 rows in set (0.01 sec)
5. Log in to another surviving node; even queries cannot be executed:
13002-conn1
mysql> set @@group_replication_consistency='eventual';
Query OK, 0 rows affected (0.00 sec)
mysql> select * from t1;

13002-conn2
mysql> select time, state, info from information_schema.processlist;
+------+--------------------------------------+--------------------------------------------------------------+
| time | state                                | info                                                         |
+------+--------------------------------------+--------------------------------------------------------------+
|    0 | executing                            | select time, state, info from information_schema.processlist |
|  354 | waiting for handler commit           | Group replication applier module                             |
|  403 | Waiting on empty queue               | NULL                                                         |
|  225 | waiting for handler commit           | NULL                                                         |
|   13 | Executing hook on transaction begin. | select * from t1                                             |
+------+--------------------------------------+--------------------------------------------------------------+
5 rows in set (0.01 sec)
The first problem revealed by this failure is that in an MGR group, once the read/write consistency level is set to AFTER, any single-node failure makes the whole group unavailable. With default settings it usually takes 5 to 10 seconds for a failed node to be expelled from the group, and during that window no write transaction can commit; and, as in the test above, if expel_timeout is set larger, the group stays unusable even longer. For an online service, this is a disaster.

Checking the original worklog for MGR read/write consistency shows that this behavior matches the design intent and is not a bug. The following requirement in the worklog explains it:
FR-08: When group_replication_consistency=AFTER or BEFORE_AND_AFTER, if there are unreachable members but the group still has a reachable majority, the transaction will wait until that members are reachable or leave the group.
The benefit of this design is that it satisfies workloads with strict data-consistency requirements, but it is extremely unfriendly to ordinary workloads, so for general use the AFTER mechanism is not recommended. Note, however, that at non-AFTER levels a transaction message only needs to reach a majority in memory through the paxos protocol, with no requirement that the data be flushed to disk; if a majority of nodes fail simultaneously, there is therefore a risk of data loss.

The second problem in the failure above is that while node 1 is in the commit stage of a transaction, node 2 cannot even open a read-only transaction at the eventual level. Until the failed node is expelled from the group, node 2 can provide no read or write service at all.

Again, the worklog explains this behavior:
FR-06: When group_replication_consistency=AFTER or BEFORE_AND_AFTER, since the remote ONLINE members do acknowledge the transaction on prepare, the new transactions on those members shall be held until the preceding prepared are committed.
In other words, once a transaction has entered the prepared stage on a remote node, new transactions there must wait for it to finish committing, regardless of their own consistency level. The drawback is that under AFTER, throughput drops not only on the executing node; the other members also degrade in their role as read-only nodes.

In a multi-primary deployment, the performance impact can be even larger. Another issue is that a malicious user could run a large transaction at the AFTER level and keep the other nodes from starting new transactions for a long time. The official worklog mentions this as well:
SECURITY CONTEXT
================
From a point of view of malicious attack to the group, since when
group_replication_consistency=AFTER or BEFORE_AND_AFTER a transaction will
wait for a acknowledge from all ONLINE members, a UNREACHABLE member will
block a transaction execution until that member is reachable or leaves the
group.
A malicious user can set group_replication_consistency=AFTER or
BEFORE_AND_AFTER on long lived transactions, which may block new
transactions while those long lived transactions are being applied.
First, the flow on the node executing the transaction is as follows:

1. When the transaction enters the commit stage, a before_commit hook is executed; in MGR the corresponding implementation is group_replication_trans_before_commit. The AFTER consistency guarantee is implemented through this interface.
2. Suppose transaction T1 executes on node M1. At the AFTER level, a Transaction_with_guarantee_message carrying the full transaction data is sent through paxos, with message type CT_TRANSACTION_WITH_GUARANTEE_MESSAGE.
3. When a node receives and processes this message, it first obtains the group's current online_members. Note that even a member whose state has become UNREACHABLE still counts as an online member as long as it has not been expelled from the group.
4. Node M1 then waits for feedback from the other members.
5. Only after M1 has received prepared messages from every member in online_members can it proceed to complete the commit.
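The wait condition in steps 3-5 can be summarized in a minimal Python sketch. This is an illustration of the logic described above, not MySQL source code; the member states and function names are assumptions made for the example.

```python
# Sketch of the AFTER-level commit gate on the executing node:
# the transaction may commit only once every member considered
# "online" at broadcast time has acknowledged "prepared".

EXPELLED = "EXPELLED"

def online_members(members):
    # An UNREACHABLE member still counts until it is expelled from
    # the group -- this is exactly why a hung node blocks AFTER commits.
    return {name for name, state in members.items() if state != EXPELLED}

def can_commit(members, prepared_acks):
    """Commit is allowed only when acks cover all non-expelled members."""
    return online_members(members) <= set(prepared_acks)

members = {"M1": "ONLINE", "M2": "ONLINE", "M3": "UNREACHABLE"}
# M3 is unreachable and cannot ack, so the commit stalls:
assert not can_commit(members, {"M1", "M2"})
# Once M3 is expelled, the same set of acks becomes sufficient:
members["M3"] = EXPELLED
assert can_commit(members, {"M1", "M2"})
```

The sketch makes the earlier failure scenario concrete: expelling the dead node shrinks online_members, which is the only way the stalled commit can make progress.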
Next, the flow for handling an AFTER transaction on the other nodes (taking M2 as an example):

1. paxos delivers the transaction, and the node enters the transaction-apply stage.
2. When T1 enters the commit stage on M2, the before hook is invoked; unlike the user thread on M1, on M2 this runs in the replication applier thread on the GR_APPLIER_CHANNEL.
3. The transaction is added to the prepared-transaction list.
4. A transaction_prepared message is sent to all members, and M2 waits for it to be processed.
5. After acknowledgements of the transaction_prepared message arrive from the other members, the transaction is removed from the prepared list and the commit continues.
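The prepared-list behavior in steps 3 and 5, together with FR-06, can be sketched as follows. This is an illustrative model, not the actual MySQL classes; the names are assumptions.

```python
# Sketch of FR-06 on a remote member: while any AFTER transaction
# sits in the prepared list, new local transactions (even EVENTUAL
# reads) must be held until the list drains.
from collections import deque

class PreparedGate:
    def __init__(self):
        self.prepared = deque()  # transactions prepared but not yet committed

    def on_prepare(self, trx_id):
        self.prepared.append(trx_id)      # step 3: add to prepared list

    def on_all_acks(self, trx_id):
        self.prepared.remove(trx_id)      # step 5: acks received, commit

    def can_begin_new_trx(self):
        # A new local transaction may start only when nothing is pending.
        return len(self.prepared) == 0

gate = PreparedGate()
assert gate.can_begin_new_trx()
gate.on_prepare("T1")
assert not gate.can_begin_new_trx()   # reads on this node now block
gate.on_all_acks("T1")
assert gate.can_begin_new_trx()
```

This is why, in the failure test earlier, node 13002 could not even run an eventual-level SELECT: T1 was stuck in its prepared list because the dead node could never acknowledge.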
Under AFTER, every node processing the transaction must send a transaction_prepared message and wait for acknowledgements from all members before the user thread's transaction can commit. Even setting aside the time the user thread spends waiting for all members to commit, the network overhead of these extra messages has a measurable performance cost.

Another point worth noting: if transaction T1 has not yet entered the prepared stage on M2, starting a new transaction there is not blocked. On a DEBUG build, this can be verified with the following steps.
connect 13000:
mysql> select * from performance_schema.replication_group_members;
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE | MEMBER_ROLE | MEMBER_VERSION |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
| group_replication_applier | 1ca5023b-0a1d-11ec-82f9-c8f7507e5048 | 127.0.0.1   |       13000 | ONLINE       | PRIMARY     | 8.0.25         |
| group_replication_applier | 1cab1e2b-0a1d-11ec-9eb9-c8f7507e5048 | 127.0.0.1   |       13004 | ONLINE       | PRIMARY     | 8.0.25         |
| group_replication_applier | 1caf096c-0a1d-11ec-a241-c8f7507e5048 | 127.0.0.1   |       13002 | ONLINE       | PRIMARY     | 8.0.25         |
| group_replication_applier | 1cbc3cf7-0a1d-11ec-955d-c8f7507e5048 | 127.0.0.1   |       13006 | ONLINE       | PRIMARY     | 8.0.25         |
| group_replication_applier | 1cc6f5eb-0a1d-11ec-8e81-c8f7507e5048 | 127.0.0.1   |       13008 | ONLINE       | PRIMARY     | 8.0.25         |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
5 rows in set (0.03 sec)

mysql> set session group_replication_consistency='AFTER';
Query OK, 0 rows affected (0.00 sec)
mysql> create table t1 (c1 int primary key, c2 int);
Query OK, 0 rows affected (0.17 sec)
mysql> insert into t1 values (1,1);
Query OK, 1 row affected (0.07 sec)

kill -9 13008

mysql> select * from performance_schema.replication_group_members;
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE | MEMBER_ROLE | MEMBER_VERSION |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
| group_replication_applier | 1ca5023b-0a1d-11ec-82f9-c8f7507e5048 | 127.0.0.1   |       13000 | ONLINE       | PRIMARY     | 8.0.25         |
| group_replication_applier | 1cab1e2b-0a1d-11ec-9eb9-c8f7507e5048 | 127.0.0.1   |       13004 | ONLINE       | PRIMARY     | 8.0.25         |
| group_replication_applier | 1caf096c-0a1d-11ec-a241-c8f7507e5048 | 127.0.0.1   |       13002 | ONLINE       | PRIMARY     | 8.0.25         |
| group_replication_applier | 1cbc3cf7-0a1d-11ec-955d-c8f7507e5048 | 127.0.0.1   |       13006 | ONLINE       | PRIMARY     | 8.0.25         |
| group_replication_applier | 1cc6f5eb-0a1d-11ec-8e81-c8f7507e5048 | 127.0.0.1   |       13008 | UNREACHABLE  | PRIMARY     | 8.0.25         |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
5 rows in set (0.00 sec)

connect 13002, and use DEBUG_SYNC to keep the node from entering the prepared stage:
mysql> set global debug='+d,group_replication_before_commit_hook_wait';
Query OK, 0 rows affected (0.00 sec)

connect 13000, and insert a new row; the statement does not return:
mysql> insert into t1 values (2,2);

connect 13002; the transaction is blocked by DEBUG_SYNC:
mysql> select command, time, state, info from information_schema.processlist;
+---------+------+----------------------------+-----------------------------------------------------------------------+
| command | time | state                      | info                                                                  |
+---------+------+----------------------------+-----------------------------------------------------------------------+
| Connect |  189 | waiting for handler commit | Group replication applier module                                      |
| Query   |    0 | executing                  | select command, time, state, info from information_schema.processlist |
| Sleep   |    7 |                            | NULL                                                                  |
| Daemon  |  240 | Waiting on empty queue     | NULL                                                                  |
| Query   |   64 | debug sync point: now      | NULL                                                                  |
+---------+------+----------------------------+-----------------------------------------------------------------------+
5 rows in set (0.01 sec)

At this point, the data can still be queried:
mysql> set session group_replication_consistency='eventual';
Query OK, 0 rows affected (0.00 sec)
mysql> select * from t1;
+----+------+
| c1 | c2   |
+----+------+
|  1 |    1 |
+----+------+
1 row in set (0.00 sec)
The counterpart of AFTER is BEFORE. If a session enables the BEFORE level, then when a transaction starts it must wait until all transactions already committed in the group have also been committed locally; this is implemented via WAIT_FOR_GTID_SET. At transaction start, the node captures the set of GTIDs of all transactions it has received through replication, and waits until every GTID in that set has been committed locally, which guarantees that the transaction reads the latest data.

Before capturing the GTID set, however, the node must send a message of type SYNC_BEFORE_EXECUTION through paxos. Because paxos processes messages in order, by the time SYNC_BEFORE_EXECUTION has been processed, all transaction messages sent before it are guaranteed to have completed their paxos processing. And since this message is generated at the moment the transaction starts, the GTID set the node holds at that point satisfies the BEFORE level.
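The ordering argument above can be reduced to a small sketch. GTID sets are simplified to plain sets of integers here purely for illustration; the function names are assumptions, not MySQL internals.

```python
# Sketch of the BEFORE-level wait: after the SYNC_BEFORE_EXECUTION
# marker is delivered, paxos total order guarantees that every earlier
# transaction has been received, so a snapshot of the received GTIDs
# taken at that point is exactly what the session must wait for.

def gtid_wait_satisfied(snapshot, executed):
    """The BEFORE read may proceed once the locally executed GTID set
    covers the snapshot taken after the sync marker."""
    return snapshot <= executed

received = {1, 2, 3, 4}           # GTIDs delivered to this node so far
snapshot = set(received)          # taken right after the sync marker lands
executed = {1, 2}                 # GTIDs the local applier has committed
assert not gtid_wait_satisfied(snapshot, executed)  # must keep waiting
executed |= {3, 4}                # applier catches up
assert gtid_wait_satisfied(snapshot, executed)      # read may proceed
```

The key design point is that the snapshot is taken only after the marker round-trips through paxos; taking it earlier could miss transactions already ordered ahead of the read.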
Without the SYNC_BEFORE_EXECUTION message, the BEFORE level could not guarantee reading the latest data. Suppose a network partition splits three nodes A, B, and C so that A and B form the majority and C is in the minority; new write transactions on A and B are no longer replicated to C. Before C is expelled from the group, if C started a BEFORE transaction without sending SYNC_BEFORE_EXECUTION, it would be unable to read the new data, violating BEFORE's design intent. With the message, since consensus cannot be reached, the new transaction either fails or keeps waiting for the message to return, instead of handing the user stale data.
The following example shows how the BEFORE level behaves when a majority of nodes fail.
Initial state:
mysql> select * from performance_schema.replication_group_members;
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE | MEMBER_ROLE | MEMBER_VERSION |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
| group_replication_applier | dd6398ec-09fe-11ec-95de-c8f7507e5048 | 127.0.0.1   |       13002 | ONLINE       | PRIMARY     | 8.0.25         |
| group_replication_applier | dd64424b-09fe-11ec-aeeb-c8f7507e5048 | 127.0.0.1   |       13000 | ONLINE       | PRIMARY     | 8.0.25         |
| group_replication_applier | dd65b9de-09fe-11ec-9d06-c8f7507e5048 | 127.0.0.1   |       13004 | ONLINE       | PRIMARY     | 8.0.25         |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
3 rows in set (0.06 sec)

mysql> select @@group_replication_member_expel_timeout;
+------------------------------------------+
| @@group_replication_member_expel_timeout |
+------------------------------------------+
|                                       60 |
+------------------------------------------+
1 row in set (0.00 sec)

Query the data:
mysql> set session group_replication_consistency='BEFORE';
Query OK, 0 rows affected (0.00 sec)
mysql> select * from t1;
+----+------+
| c1 | c2   |
+----+------+
|  2 |    2 |
+----+------+
1 row in set (0.01 sec)

Use kill -9 to simulate failure of a majority of nodes:
mysql> select * from performance_schema.replication_group_members;
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE | MEMBER_ROLE | MEMBER_VERSION |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
| group_replication_applier | dd6398ec-09fe-11ec-95de-c8f7507e5048 | 127.0.0.1   |       13002 | UNREACHABLE  | PRIMARY     | 8.0.25         |
| group_replication_applier | dd64424b-09fe-11ec-aeeb-c8f7507e5048 | 127.0.0.1   |       13000 | UNREACHABLE  | PRIMARY     | 8.0.25         |
| group_replication_applier | dd65b9de-09fe-11ec-9d06-c8f7507e5048 | 127.0.0.1   |       13004 | ONLINE       | PRIMARY     | 8.0.25         |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
3 rows in set (0.00 sec)

Now query the data again; the statement never returns:
mysql> select * from t1;

Check the status from another client; the session stays in the before hook stage:
mysql> select command, time, state, info from information_schema.processlist;
+---------+------+--------------------------------------------------------+-----------------------------------------------------------------------+
| command | time | state                                                  | info                                                                  |
+---------+------+--------------------------------------------------------+-----------------------------------------------------------------------+
| Connect |  253 | waiting for handler commit                             | Group replication applier module                                      |
| Daemon  |  318 | Waiting on empty queue                                 | NULL                                                                  |
| Query   |  238 | Slave has read all relay log; waiting for more updates | NULL                                                                  |
| Query   |  170 | Executing hook on transaction begin.                   | select * from t1                                                      |
| Query   |    0 | executing                                              | select command, time, state, info from information_schema.processlist |
+---------+------+--------------------------------------------------------+-----------------------------------------------------------------------+
5 rows in set (0.00 sec)
Of course, the BEFORE level does send one extra SYNC_BEFORE_EXECUTION message, adding a network round trip and thus some performance overhead.

Under AFTER, when a transaction on another node has entered the prepared stage but the user thread has not yet finished committing, is it really necessary to make new transactions wait for that commit? Under the current design, this requirement causes a degree of node unavailability.
AFTER requires waiting until all nodes have committed. This is a strongly consistent data-synchronization scheme, but it can also make the group unavailable. Moreover, since group_replication_consistency is a session-level variable, a single user connection enabling AFTER can render the whole cluster unavailable. One alternative is the majority-after mode of the open-source GreatSQL branch (group_replication_majority_after_mode = ON), which avoids the problems described above.