To start a seata-server, you simply run the seata-server.sh script, which in turn invokes the main method of io.seata.server.Server. Let's walk through what this main method does:
```java
public class Server {
    public static void main(String[] args) throws IOException {
        // get port first, use to logback.xml
        // resolve the listen port
        int port = PortHelper.getPort(args);
        System.setProperty(ConfigurationKeys.SERVER_PORT, Integer.toString(port));

        // create logger
        final Logger logger = LoggerFactory.getLogger(Server.class);
        if (ContainerHelper.isRunningInContainer()) {
            logger.info("The server is running in container.");
        }

        //initialize the parameter parser
        //Note that the parameter parser should always be the first line to execute.
        //Because, here we need to parse the parameters needed for startup.
        // parse parameters from the command line and the configuration file
        ParameterParser parameterParser = new ParameterParser(args);

        //initialize the metrics
        MetricsManager.get().init();

        System.setProperty(ConfigurationKeys.STORE_MODE, parameterParser.getStoreMode());

        ThreadPoolExecutor workingThreads = new ThreadPoolExecutor(
            NettyServerConfig.getMinServerPoolSize(),
            NettyServerConfig.getMaxServerPoolSize(),
            NettyServerConfig.getKeepAliveTime(),
            TimeUnit.SECONDS,
            new LinkedBlockingQueue<>(NettyServerConfig.getMaxTaskQueueSize()),
            new NamedThreadFactory("ServerHandlerThread", NettyServerConfig.getMaxServerPoolSize()),
            new ThreadPoolExecutor.CallerRunsPolicy());

        NettyRemotingServer nettyRemotingServer = new NettyRemotingServer(workingThreads);
        //server port
        nettyRemotingServer.setListenPort(parameterParser.getPort());

        // use the server node as the workerId of the snowflake algorithm
        UUIDGenerator.init(parameterParser.getServerNode());

        //log store mode : file, db, redis
        // SessionHolder persists the transaction logs;
        // three store modes are available: file, db, redis
        SessionHolder.init(parameterParser.getStoreMode());

        // create the transaction coordinator
        DefaultCoordinator coordinator = new DefaultCoordinator(nettyRemotingServer);
        coordinator.init();
        nettyRemotingServer.setHandler(coordinator);

        // register ShutdownHook
        ShutdownHook.getInstance().addDisposable(coordinator);
        ShutdownHook.getInstance().addDisposable(nettyRemotingServer);

        //127.0.0.1 and 0.0.0.0 are not valid here.
        if (NetUtil.isValidIp(parameterParser.getHost(), false)) {
            XID.setIpAddress(parameterParser.getHost());
        } else {
            XID.setIpAddress(NetUtil.getLocalIp());
        }
        XID.setPort(nettyRemotingServer.getListenPort());

        try {
            // start the netty server; this call blocks
            nettyRemotingServer.init();
        } catch (Throwable e) {
            logger.error("nettyServer init error:{}", e.getMessage(), e);
            System.exit(-1);
        }

        System.exit(0);
    }
}
```
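The UUIDGenerator.init(serverNode) call above feeds the server-node ID into a snowflake-style generator so that transaction IDs stay unique across a cluster. As a rough illustration of the idea (a minimal sketch of the snowflake layout, not Seata's actual UUIDGenerator, whose bit layout and overflow handling differ):

```java
// Minimal snowflake-style ID sketch: 41-bit timestamp | 10-bit workerId | 12-bit sequence.
// Hypothetical simplification for illustration only.
public class SnowflakeSketch {
    private final long workerId;     // the server-node ID passed in at startup
    private long lastTimestamp = -1L;
    private long sequence = 0L;

    public SnowflakeSketch(long workerId) {
        if (workerId < 0 || workerId > 1023) {
            throw new IllegalArgumentException("workerId must fit in 10 bits");
        }
        this.workerId = workerId;
    }

    public synchronized long nextId() {
        long ts = System.currentTimeMillis();
        if (ts == lastTimestamp) {
            sequence = (sequence + 1) & 0xFFF;   // 12-bit sequence within one millisecond
            if (sequence == 0) {                 // sequence exhausted: wait for the next millisecond
                while ((ts = System.currentTimeMillis()) <= lastTimestamp) { }
            }
        } else {
            sequence = 0;
        }
        lastTimestamp = ts;
        return (ts << 22) | (workerId << 12) | sequence;  // timestamp | workerId | sequence
    }
}
```

Because the workerId occupies its own bit range, two server nodes with different node IDs can never generate the same ID, even in the same millisecond.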
So what does this transaction coordinator do? DefaultCoordinator implements TCInboundHandler, the TC-side entry point for requests arriving from transaction managers and resource managers:
```java
public interface TCInboundHandler {

    /**
     * Handle global begin response.
     */
    GlobalBeginResponse handle(GlobalBeginRequest globalBegin, RpcContext rpcContext);

    /**
     * Handle global commit response.
     */
    GlobalCommitResponse handle(GlobalCommitRequest globalCommit, RpcContext rpcContext);

    /**
     * Handle global rollback response.
     */
    GlobalRollbackResponse handle(GlobalRollbackRequest globalRollback, RpcContext rpcContext);

    /**
     * Handle branch register response.
     */
    BranchRegisterResponse handle(BranchRegisterRequest branchRegister, RpcContext rpcContext);

    // ...
}
```
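Taking global begin as an example: the TC creates a global session, persists it through the configured store, and hands the resulting XID back to the TM. Below is a hypothetical, heavily simplified sketch of that flow; the request/response types and session bookkeeping are stand-ins, not Seata's DefaultCoordinator code (the XID format ip:port:transactionId, however, is the real one):

```java
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical sketch of TC-side handling of a global-begin request.
class GlobalBeginSketch {
    static final AtomicLong TX_ID = new AtomicLong();

    static class BeginRequest { String applicationId; String txName; int timeoutMillis; }
    static class BeginResponse { String xid; boolean success; }

    BeginResponse handle(BeginRequest req, String tcIp, int tcPort) {
        BeginResponse resp = new BeginResponse();
        // 1. create a global session and persist it (file/db/redis via SessionHolder in real Seata)
        long transactionId = TX_ID.incrementAndGet();
        // 2. Seata encodes the XID as ip:port:transactionId
        resp.xid = tcIp + ":" + tcPort + ":" + transactionId;
        // 3. the TM propagates this XID to every RM branch that joins the transaction
        resp.success = true;
        return resp;
    }
}
```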
The handlers installed on the Netty pipeline at startup (a sketch of how they are wired together follows this list):

IdleStateHandler: heartbeat / idle-connection detection
ProtocolV1Decoder: message decoder
ProtocolV1Encoder: message encoder
AbstractNettyRemotingServer.ServerHandler: dispatches all inbound messages
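The following is a simplified sketch of how such a pipeline is typically assembled, modeled on (not copied from) Seata's server bootstrap; ProtocolV1Decoder, ProtocolV1Encoder, and the serverHandler parameter are the Seata classes listed above and are assumed to be on the classpath, and the 15-second read-idle value is illustrative:

```java
import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.ChannelHandler;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.socket.SocketChannel;
import io.netty.handler.timeout.IdleStateHandler;

class PipelineSketch {
    void configure(ServerBootstrap bootstrap, ChannelHandler serverHandler) {
        bootstrap.childHandler(new ChannelInitializer<SocketChannel>() {
            @Override
            protected void initChannel(SocketChannel ch) {
                ch.pipeline()
                  .addLast(new IdleStateHandler(15, 0, 0)) // fires an IdleStateEvent after 15s without reads
                  .addLast(new ProtocolV1Decoder())        // bytes      -> RpcMessage
                  .addLast(new ProtocolV1Encoder())        // RpcMessage -> bytes
                  .addLast(serverHandler);                 // the shared @Sharable ServerHandler
            }
        });
    }
}
```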
```java
@ChannelHandler.Sharable
class ServerHandler extends ChannelDuplexHandler {

    /**
     * Handle an inbound message.
     */
    @Override
    public void channelRead(final ChannelHandlerContext ctx, Object msg) throws Exception {
        if (!(msg instanceof RpcMessage)) {
            return;
        }
        processMessage(ctx, (RpcMessage) msg);
    }
}
```
Note the @ChannelHandler.Sharable annotation on ServerHandler: a single handler instance is shared by every connection. If a message is processed slowly right on the Netty I/O thread, it stalls that event loop and drags down concurrency.
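To see what @Sharable buys and costs: Netty refuses to add a non-sharable handler instance to more than one pipeline, while a sharable one may be invoked concurrently from multiple I/O threads and must therefore keep any shared state thread-safe. A minimal illustration (a sketch, not Seata code):

```java
import io.netty.channel.ChannelHandler;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import java.util.concurrent.atomic.AtomicLong;

// One instance of this handler serves every connection, so the shared
// counter must be thread-safe (hence the AtomicLong).
@ChannelHandler.Sharable
class CountingHandler extends ChannelInboundHandlerAdapter {
    private final AtomicLong inboundMessages = new AtomicLong();

    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) {
        inboundMessages.incrementAndGet();
        ctx.fireChannelRead(msg); // pass the message down the pipeline
    }
}
```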
processMessage is a textbook strategy pattern: it looks up the RemotingProcessor registered for the message type and delegates to it.
```java
protected void processMessage(ChannelHandlerContext ctx, RpcMessage rpcMessage) throws Exception {
    if (LOGGER.isDebugEnabled()) {
        LOGGER.debug(String.format("%s msgId:%s, body:%s", this, rpcMessage.getId(), rpcMessage.getBody()));
    }
    Object body = rpcMessage.getBody();
    if (body instanceof MessageTypeAware) {
        MessageTypeAware messageTypeAware = (MessageTypeAware) body;
        // look up the processor registered for this message type
        final Pair<RemotingProcessor, ExecutorService> pair =
            this.processorTable.get((int) messageTypeAware.getTypeCode());
        if (pair != null) {
            // if the processor was registered with a thread pool, run it there
            if (pair.getSecond() != null) {
                try {
                    pair.getSecond().execute(() -> {
                        try {
                            pair.getFirst().process(ctx, rpcMessage);
                        } catch (Throwable th) {
                            LOGGER.error(FrameworkErrorCode.NetDispatch.getErrCode(), th.getMessage(), th);
                        } finally {
                            MDC.clear();
                        }
                    });
                } catch (RejectedExecutionException e) {
                    // one of the thread-pool rejection policies: throw RejectedExecutionException
                    LOGGER.error(FrameworkErrorCode.ThreadPoolFull.getErrCode(),
                        "thread pool is full, current max pool size is " + messageExecutor.getActiveCount());
                    if (allowDumpStack) {
                        String name = ManagementFactory.getRuntimeMXBean().getName();
                        String pid = name.split("@")[0];
                        int idx = new Random().nextInt(100);
                        try {
                            Runtime.getRuntime().exec("jstack " + pid + " >d:/" + idx + ".log");
                        } catch (IOException exx) {
                            LOGGER.error(exx.getMessage());
                        }
                        allowDumpStack = false;
                    }
                }
            } else {
                // no thread pool configured: run the processor inline on the I/O thread
                try {
                    pair.getFirst().process(ctx, rpcMessage);
                } catch (Throwable th) {
                    LOGGER.error(FrameworkErrorCode.NetDispatch.getErrCode(), th.getMessage(), th);
                }
            }
        } else {
            LOGGER.error("This message type [{}] has no processor.", messageTypeAware.getTypeCode());
        }
    } else {
        LOGGER.error("This rpcMessage body[{}] is not MessageTypeAware type.", body);
    }
}
```
As noted above, ServerHandler is shared across all connections, so slow message handling would hurt concurrency; such messages are therefore dispatched to a thread pool instead of being processed inline. The processor lookup table is keyed by message type:
```java
protected final HashMap<Integer/*MessageType*/, Pair<RemotingProcessor, ExecutorService>> processorTable =
    new HashMap<>(32);
```
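Entries land in this table through a registration step at startup. Here is a hedged sketch of the mechanism (Seata exposes a similar registerProcessor method, though exact signatures vary across versions; Pair and RemotingProcessor are the types used in the snippets above):

```java
import java.util.HashMap;
import java.util.concurrent.ExecutorService;

// Sketch of processor registration. A null executor means the processor
// runs inline on the Netty I/O thread (cheap messages such as heartbeats),
// while heavyweight transaction messages are registered together with the
// working thread pool created in main.
class ProcessorRegistry {
    private final HashMap<Integer, Pair<RemotingProcessor, ExecutorService>> processorTable = new HashMap<>(32);

    void registerProcessor(int messageType, RemotingProcessor processor, ExecutorService executor) {
        processorTable.put(messageType, new Pair<>(processor, executor));
    }
}
```

The design choice is straightforward: cheap, latency-sensitive messages stay on the I/O thread, while anything that may block gets offloaded so the shared ServerHandler never stalls the event loop.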