本文将简述Flink SQL / Table API的内部实现,为大家把 "从SQL语句到具体执行" 这个流程串起来。并且尽量多提供调用栈,这样大家在遇到问题时就知道应该从什么地方设置断点,对整体架构理解也能更加深入。
目录
本文将简述Flink SQL / Table API的内部实现,为大家把 "从SQL语句到具体执行" 这个流程串起来。并且尽量多提供调用栈,这样大家在遇到问题时就知道应该从什么地方设置断点,对整体架构理解也能更加深入。
SQL流程中涉及到几个重要的节点举例如下:
// NOTE : 执行顺序是从上至下, " -----> " 表示生成的实例类型 * * +-----> "left outer JOIN" (SQL statement) * | * | * SqlParser.parseQuery // SQL 解析阶段,生成AST(抽象语法树),作用是SQL–>SqlNode * | * | * +-----> SqlJoin (SqlNode) * | * | * SqlToRelConverter.convertQuery // 语义分析,生成逻辑计划,作用是SqlNode–>RelNode * | * | * +-----> LogicalProject (RelNode) // Abstract Syntax Tree,未优化的RelNode * | * | * FlinkLogicalJoinConverter (RelOptRule) // Flink定制的优化rules * VolcanoRuleCall.onMatch // 基于Flink定制的一些优化rules去优化 Logical Plan * | * | * +-----> FlinkLogicalJoin (RelNode) // Optimized Logical Plan,逻辑执行计划 * | * | * StreamExecJoinRule (RelOptRule) // Rule that converts FlinkLogicalJoin without window bounds in join condition to StreamExecJoin * VolcanoRuleCall.onMatch // 基于Flink rules将optimized LogicalPlan转成Flink物理执行计划 * | * | * +-----> StreamExecJoin (FlinkRelNode) // Stream physical RelNode,物理执行计划 * | * | * StreamExecJoin.translateToPlanInternal // 作用是生成 StreamOperator, 即Flink算子 * | * | * +-----> StreamingJoinOperator (StreamOperator) // Streaming unbounded Join operator in StreamTask * | * | * StreamTwoInputProcessor.proce***ecord1// 在TwoInputStreamTask调用StreamingJoinOperator,真实的执行 * | * |
后续我们会以这个图为脉络进行讲解。
Flink Table API&SQL 为流式数据和静态数据的关系查询保留统一的接口,而且利用了Apache Calcite的查询优化框架和SQL parser。
为什么Flink要使用Table API呢?总结来说,关系型API的好处如下:
Calcite是这里面的核心成员。Apache Calcite是面向Hadoop新的sql引擎,它提供了标准的SQL语言、多种查询优化和连接各种数据源的能力。
下面是 Calcite 概念梳理:
Sql 的执行过程一般可以分为四个阶段,Calcite 与这个很类似,但Calcite是分成五个阶段 :
SQL 解析阶段,生成AST(抽象语法树)(SQL–>SqlNode)
SqlNode 验证(SqlNode–>SqlNode)
语义分析,生成逻辑计划(Logical Plan)(SqlNode–>RelNode/RexNode)
优化阶段,按照相应的规则(Rule)进行优化(RelNode–>RelNode)
生成ExecutionPlan,生成物理执行计划(DataStream Plan)
Flink承载了 Table API 和 SQL API 两套表达方式。它以Apache Calcite这个SQL解析器做SQL语义解析,统一生成为 Calcite Logical Plan(SqlNode 树);随后验证;再利用 Calcite的优化器优化转换规则和logical plan,根据数据源的性质(流和批)使用不同的规则进行优化,优化为 RelNode 逻辑执行计划树;最终优化后的plan转成常规的Flink DataSet 或 DataStream 程序。任何对于DataStream API和DataSet API的性能调优提升都能够自动地提升Table API或者SQL查询的效率。
一条stream sql从提交到calcite解析、优化最后到Flink引擎执行,一般分为以下几个阶段:
而如果是通过table api来提交任务的话,也会经过calcite优化等阶段,基本流程和直接运行sql类似:
可以看出来,Table API 与 SQL 在获取 RelNode 之后是一样的流程,只是获取 RelNode 的方式有所区别:
TableEnvironment对象是Table API和SQL集成的一个核心,支持以下场景:
一个查询中只能绑定一个指定的TableEnvironment,TableEnvironment可以通过来配置TableConfig来配置,通过TableConfig可以自定义查询优化以及translation的进程。
TableEnvironment执行过程如下:
TableEnvironment.sql()为调用入口;
Flink实现了FlinkPlannerImpl,执行parse(sql),validate(sqlNode),rel(sqlNode)操作;
生成Table;
具体代码摘要如下
package org.apache.Flink.table.api.internal; @Internal public class TableEnvironmentImpl implements TableEnvironment { private final CatalogManager catalogManager; private final ModuleManager moduleManager; private final OperationTreeBuilder operationTreeBuilder; private final ListbufferedModifyOperations = new ArrayList<>(); protected final TableConfig tableConfig; protected final Executor execEnv; protected final FunctionCatalog functionCatalog; protected final Planner planner; protected final Parser parser; } // 在程序中打印类内容如下 this = {StreamTableEnvironmentImpl@4701} functionCatalog = {FunctionCatalog@4702} scalaExecutionEnvironment = {StreamExecutionEnvironment@4703} planner = {StreamPlanner@4704} config = {TableConfig@4708} executor = {StreamExecutor@4709} PlannerBase.config = {TableConfig@4708} functionCatalog = {FunctionCatalog@4702} catalogManager = {CatalogManager@1250} isStreamingMode = true plannerContext = {PlannerContext@4711} parser = {ParserImpl@4696} catalogManager = {CatalogManager@1250} moduleManager = {ModuleManager@4705} operationTreeBuilder = {OperationTreeBuilder@4706} bufferedModifyOperations = {ArrayList@4707} size = 0 tableConfig = {TableConfig@4708} execEnv = {StreamExecutor@4709} TableEnvironmentImpl.functionCatalog = {FunctionCatalog@4702} TableEnvironmentImpl.planner = {StreamPlanner@4704} parser = {ParserImpl@4696} registration = {TableEnvironmentImpl$1@4710}
Catalog – 定义元数据和命名空间,包含 Schema(库),Table(表),RelDataType(类型信息)。
所有对数据库和表的元数据信息都存放在Flink CataLog内部目录结构中,其存放了Flink内部所有与Table相关的元数据信息,包括表结构信息/数据源信息等。
// TableEnvironment里面包含一个CatalogManager public final class CatalogManager { // A map between names and catalogs. private Mapcatalogs; } // Catalog接口 public interface Catalog { ...... default OptionalgetTableFactory() { return Optional.empty(); } ...... } // 当数据来源是在程序里面自定义的时候,对应是GenericInMemoryCatalog public class GenericInMemoryCatalog extends AbstractCatalog { public static final String DEFAULT_DB = "default"; private final Mapdatabases; private final Maptables; private final Mapfunctions; private final Map<ObjectPath, Map> partitions; private final MaptableStats; private final MaptableColumnStats; private final Map<ObjectPath, Map> partitionStats; private final Map<ObjectPath, Map> partitionColumnStats; } // 程序中调试的内容 catalogManager = {CatalogManager@4646} catalogs = {LinkedHashMap@4652} size = 1 "default_catalog" -> {GenericInMemoryCatalog@4659} key = "default_catalog" value = {char[15]@4668} hash = 552406043 value = {GenericInMemoryCatalog@4659} databases = {LinkedHashMap@4660} size = 1 tables = {LinkedHashMap@4661} size = 0 functions = {LinkedHashMap@4662} size = 0 partitions = {LinkedHashMap@4663} size = 0 tableStats = {LinkedHashMap@4664} size = 0 tableColumnStats = {LinkedHashMap@4665} size = 0 partitionStats = {LinkedHashMap@4666} size = 0 partitionColumnStats = {LinkedHashMap@4667} size = 0 catalogName = "default_catalog" defaultDatabase = "default_database" temporaryTables = {HashMap@4653} size = 2 currentCatalogName = "default_catalog" currentDatabaseName = "default_database" builtInCatalogName = "default_catalog"
StreamPlanner是新的Blink Planner一种。
Flink Table 的新架构实现了查询处理器的插件化,社区完整保留原有 Flink Planner (Old Planner),同时又引入了新的 Blink Planner,用户可以自行选择使用 Old Planner 还是 Blink Planner。
在模型上,Old Planner 没有考虑流计算作业和批处理作业的统一,针对流计算作业和批处理作业的实现不尽相同,在底层会分别翻译到 DataStream API 和 DataSet API 上。而 Blink Planner 将批数据集看作 bounded DataStream (有界流式数据) ,流计算作业和批处理作业最终都会翻译到 Transformation API 上。 在架构上,Blink Planner 针对批处理和流计算,分别实现了BatchPlanner 和 StreamPlanner ,两者共用了大部分代码,共享了很多优化逻辑。 Old Planner 针对批处理和流计算的代码实现的是完全独立的两套体系,基本没有实现代码和优化逻辑复用。
除了模型和架构上的优点外,Blink Planner 沉淀了许多实用功能,集中在三个方面:
具体对应代码来看,StreamPlanner体现在translateToPlan会调用到不同的 StreamOperator 生成系统上。
class StreamPlanner( executor: Executor, config: TableConfig, functionCatalog: FunctionCatalog, catalogManager: CatalogManager) extends PlannerBase(executor, config, functionCatalog, catalogManager, isStreamingMode = true) { override protected def translateToPlan( execNodes: util.List[ExecNode[_, _]]): util.List[Transformation[_]] = { execNodes.map { case node: StreamExecNode[_] => node.translateToPlan(this) case _ => throw new TableException("Cannot generate DataStream due to an invalid logical plan. " + "This is a bug and should not happen. Please file an issue.") } } } @Internal public final class StreamTableEnvironmentImpl extends TableEnvironmentImpl implements StreamTableEnvironment { privateDataStreamtoDataStream(Table table, OutputConversionModifyOperation modifyOperation) { // 在转换回DataStream时候进行调用 planner 生成plan的操作。 List<Transformation> transformations = planner.translate(Collections.singletonList(modifyOperation)); Transformationtransformation = getTransformation(table, transformations); executionEnvironment.addOperator(transformation); return new DataStream<>(executionEnvironment, transformation); } } // 程序中调试打印的运行栈 translateToPlanInternal:85, StreamExecUnion (org.apache.Flink.table.planner.plan.nodes.physical.stream) translateToPlanInternal:39, StreamExecUnion (org.apache.Flink.table.planner.plan.nodes.physical.stream) translateToPlan:58, ExecNode$class (org.apache.Flink.table.planner.plan.nodes.exec) translateToPlan:39, StreamExecUnion (org.apache.Flink.table.planner.plan.nodes.physical.stream) translateToTransformation:184, StreamExecSink (org.apache.Flink.table.planner.plan.nodes.physical.stream) translateToPlanInternal:153, StreamExecSink (org.apache.Flink.table.planner.plan.nodes.physical.stream) translateToPlanInternal:48, StreamExecSink (org.apache.Flink.table.planner.plan.nodes.physical.stream) translateToPlan:58, ExecNode$class (org.apache.Flink.table.planner.plan.nodes.exec) translateToPlan:48, StreamExecSink (org.apache.Flink.table.planner.plan.nodes.physical.stream) apply:60, StreamPlanner$$anonfun$translateToPlan$1 (org.apache.Flink.table.planner.delegation) apply:59, StreamPlanner$$anonfun$translateToPlan$1 (org.apache.Flink.table.planner.delegation) apply:234, TraversableLike$$anonfun$map$1 (scala.collection) apply:234, TraversableLike$$anonfun$map$1 (scala.collection) foreach:891, Iterator$class (scala.collection) foreach:1334, AbstractIterator (scala.collection) foreach:72, IterableLike$class (scala.collection) foreach:54, AbstractIterable (scala.collection) map:234, TraversableLike$class (scala.collection) map:104, AbstractTraversable (scala.collection) translateToPlan:59, StreamPlanner (org.apache.Flink.table.planner.delegation) translate:153, PlannerBase (org.apache.Flink.table.planner.delegation) toDataStream:210, StreamTableEnvironmentImpl (org.apache.Flink.table.api.scala.internal) toAppendStream:107, StreamTableEnvironmentImpl (org.apache.Flink.table.api.scala.internal) toAppendStream:101, TableConversions (org.apache.Flink.table.api.scala) main:89, StreamSQLExample$ (spendreport) main:-1, StreamSQLExample (spendreport)
Flink实现了FlinkPlannerImpl,做为和Calcite 联系的桥梁,执行parse(sql),validate(sqlNode),rel(sqlNode)操作。
class FlinkPlannerImpl( config: FrameworkConfig, catalogReaderSupplier: JFunction[JBoolean, CalciteCatalogReader], typeFactory: FlinkTypeFactory, cluster: RelOptCluster) { val operatorTable: SqlOperatorTable = config.getOperatorTable val parser: CalciteParser = new CalciteParser(config.getParserConfig) val convertletTable: SqlRexConvertletTable = config.getConvertletTable val sqlToRelConverterConfig: SqlToRelConverter.Config = config.getSqlToRelConverterConfig } // 这里会有使用 FlinkPlannerImpl public class ParserImpl implements Parser { private final CatalogManager catalogManager; private final SuppliervalidatorSupplier; private final SuppliercalciteParserSupplier; @Override public Listparse(String statement) { CalciteParser parser = calciteParserSupplier.get(); // 这里会有使用 FlinkPlannerImpl FlinkPlannerImpl planner = validatorSupplier.get(); // parse the sql query SqlNode parsed = parser.parse(statement); Operation operation = SqlToOperationConverter.convert(planner, catalogManager, parsed) .orElseThrow(() -> new TableException("Unsupported query: " + statement)); return Collections.singletonList(operation); } } // 程序中调试的内容 planner = {FlinkPlannerImpl@4659} config = {Frameworks$StdFrameworkConfig@4685} catalogReaderSupplier = {PlannerContext$lambda@4686} typeFactory = {FlinkTypeFactory@4687} cluster = {FlinkRelOptCluster@4688} operatorTable = {ChainedSqlOperatorTable@4689} parser = {CalciteParser@4690} convertletTable = {StandardConvertletTable@4691} sqlToRelConverterConfig = {SqlToRelConverter$ConfigImpl@4692} validator = null // 程序调用栈之一 validate:104, FlinkPlannerImpl (org.apache.Flink.table.planner.calcite) convert:127, SqlToOperationConverter (org.apache.Flink.table.planner.operations) parse:66, ParserImpl (org.apache.Flink.table.planner.delegation) sqlQuery:464, TableEnvironmentImpl (org.apache.Flink.table.api.internal) main:82, StreamSQLExample$ (spendreport) main:-1, StreamSQLExample (spendreport) // 程序调用栈之二 rel:135, FlinkPlannerImpl (org.apache.Flink.table.planner.calcite) toQueryOperation:522, SqlToOperationConverter (org.apache.Flink.table.planner.operations) convertSqlQuery:436, SqlToOperationConverter (org.apache.Flink.table.planner.operations) convert:154, SqlToOperationConverter (org.apache.Flink.table.planner.operations) parse:66, ParserImpl (org.apache.Flink.table.planner.delegation) sqlQuery:464, TableEnvironmentImpl (org.apache.Flink.table.api.internal) main:82, StreamSQLExample$ (spendreport) main:-1, StreamSQLExample (spendreport)
从代码中能看出,这就是个把各种相关操作和信息封装起来类而已,并不涉及太多实际逻辑。
@Internal public class TableImpl implements Table { private static final AtomicInteger uniqueId = new AtomicInteger(0); private final TableEnvironment tableEnvironment; private final QueryOperation operationTree; private final OperationTreeBuilder operationTreeBuilder; private final LookupCallResolver lookupResolver; private TableImpl joinInternal( Table right, OptionaljoinPredicate, JoinType joinType) { verifyTableCompatible(right); return createTable(operationTreeBuilder.join( this.operationTree, right.getQueryOperation(), joinType, joinPredicate, false)); } } // 程序中调试的内容 view = {TableImpl@4583} "UnnamedTable$0" tableEnvironment = {StreamTableEnvironmentImpl@4580} functionCatalog = {FunctionCatalog@4646} scalaExecutionEnvironment = {StreamExecutionEnvironment@4579} planner = {StreamPlanner@4647} catalogManager = {CatalogManager@4644} moduleManager = {ModuleManager@4648} operationTreeBuilder = {OperationTreeBuilder@4649} bufferedModifyOperations = {ArrayList@4650} size = 0 tableConfig = {TableConfig@4651} execEnv = {StreamExecutor@4652} TableEnvironmentImpl.functionCatalog = {FunctionCatalog@4646} TableEnvironmentImpl.planner = {StreamPlanner@4647} parser = {ParserImpl@4653} registration = {TableEnvironmentImpl$1@4654} operationTree = {ScalaDataStreamQueryOperation@4665} identifier = null dataStream = {DataStreamSource@4676} fieldIndices = {int[2]@4677} tableSchema = {TableSchema@4678} "root\n |-- orderId: STRING\n |-- productName: STRING\n" operationTreeBuilder = {OperationTreeBuilder@4649} config = {TableConfig@4651} functionCatalog = {FunctionCatalog@4646} tableReferenceLookup = {TableEnvironmentImpl$lambda@4668} lookupResolver = {LookupCallResolver@4669} projectionOperationFactory = {ProjectionOperationFactory@4670} sortOperationFactory = {SortOperationFactory@4671} calculatedTableFactory = {CalculatedTableFactory@4672} setOperationFactory = {SetOperationFactory@4673} aggregateOperationFactory = {AggregateOperationFactory@4674} joinOperationFactory = {JoinOperationFactory@4675} lookupResolver = {LookupCallResolver@4666} functionLookup = {FunctionCatalog@4646} tableName = "UnnamedTable$0" value = {char[14]@4667} hash = 1355882650
这里对应前面脉络图,作用是生成了 SqlJoin 这样的 SqlNode
// NOTE : 执行顺序是从上至下," -----> " 表示生成的实例类型 * * +-----> "left outer JOIN" (SQL statement) * | * | * SqlParser.parseQuery // SQL 解析阶段,生成AST(抽象语法树),作用是SQL–>SqlNode * | * | * +-----> SqlJoin (SqlNode) * | * |
Calcite 使用 JavaCC 做 SQL 解析,JavaCC 根据 Calcite 中定义的 Parser.jj 文件,生成一系列的 java 代码,生成的 Java 代码会把 SQL 转换成 AST 的数据结构(这里是 SqlNode 类型)。
即:把 SQL 转换成为 AST (抽象语法树),在 Calcite 中用 SqlNode 来表示;
package org.apache.Flink.table.planner.delegation; public class ParserImpl implements Parser { @Override public Listparse(String statement) { CalciteParser parser = calciteParserSupplier.get(); FlinkPlannerImpl planner = validatorSupplier.get(); // parse the sql query SqlNode parsed = parser.parse(statement); Operation operation = SqlToOperationConverter.convert(planner, catalogManager, parsed) .orElseThrow(() -> new TableException("Unsupported query: " + statement)); return Collections.singletonList(operation); } } // 打印出来解析之后 parsed 的内容,我们能看到 SqlNode 的基本格式。 parsed = {SqlBasicCall@4690} "SELECT *\nFROM `UnnamedTable$0`\nWHERE `amount` > 2\nUNION ALL\nSELECT *\nFROM `OrderB`\nWHERE `amount` < 2" operator = {SqlSetOperator@4716} "UNION ALL" all = true name = "UNION ALL" kind = {SqlKind@4742} "UNION" leftPrec = 14 rightPrec = 15 returnTypeInference = {ReturnTypes$lambda@4743} operandTypeInference = null operandTypeChecker = {SetopOperandTypeChecker@4744} operands = {SqlNode[2]@4717} 0 = {SqlSelect@4746} "SELECT *\nFROM `UnnamedTable$0`\nWHERE `amount` > 2" 1 = {SqlSelect@4747} "SELECT *\nFROM `OrderB`\nWHERE `amount` < 2" functionQuantifier = null expanded = false pos = {SqlParserPos@4719} "line 2, column 1" // 下面是调试相关Stack,可以帮助大家深入理解 SqlStmt:3208, FlinkSqlParserImpl (org.apache.Flink.sql.parser.impl) SqlStmtEof:3732, FlinkSqlParserImpl (org.apache.Flink.sql.parser.impl) parseSqlStmtEof:234, FlinkSqlParserImpl (org.apache.Flink.sql.parser.impl) parseQuery:160, SqlParser (org.apache.calcite.sql.parser) parseStmt:187, SqlParser (org.apache.calcite.sql.parser) parse:48, CalciteParser (org.apache.Flink.table.planner.calcite) parse:64, ParserImpl (org.apache.Flink.table.planner.delegation) sqlQuery:464, TableEnvironmentImpl (org.apache.Flink.table.api.internal) main:82, StreamSQLExample$ (spendreport) main:-1, StreamSQLExample (spendreport) // 另一个参考 in FlinkSqlParserImpl.FromClause e = {SqlJoin@4709} "`Orders` AS `o`\nLEFT JOIN `Payment` AS `p` ON `o`.`orderId` = `p`.`orderId`" left = {SqlBasicCall@4676} "`Orders` AS `o`" operator = {SqlAsOperator@4752} "AS" operands = {SqlNode[2]@4753} functionQuantifier = null expanded = false pos = {SqlParserPos@4755} "line 7, column 3" natural = {SqlLiteral@4677} "FALSE" typeName = {SqlTypeName@4775} "BOOLEAN" value = {Boolean@4776} false pos = {SqlParserPos@4777} "line 7, column 13" joinType = {SqlLiteral@4678} "LEFT" typeName = {SqlTypeName@4758} "SYMBOL" value = {JoinType@4759} "LEFT" pos = {SqlParserPos@4724} "line 7, column 26" right = {SqlBasicCall@4679} "`Payment` AS `p`" operator = {SqlAsOperator@4752} "AS" operands = {SqlNode[2]@4763} functionQuantifier = null expanded = false pos = {SqlParserPos@4764} "line 7, column 31" conditionType = {SqlLiteral@4680} "ON" typeName = {SqlTypeName@4758} "SYMBOL" value = {JoinConditionType@4771} "ON" pos = {SqlParserPos@4772} "line 7, column 44" condition = {SqlBasicCall@4681} "`o`.`orderId` = `p`.`orderId`" operator = {SqlBinaryOperator@4766} "=" operands = {SqlNode[2]@4767} functionQuantifier = null expanded = false pos = {SqlParserPos@4768} "line 7, column 47" pos = {SqlParserPos@4724} "line 7, column 26" // 下面是调试相关Stack,可以帮助大家深入理解 FromClause:10192, FlinkSqlParserImpl (org.apache.flink.sql.parser.impl) SqlSelect:5918, FlinkSqlParserImpl (org.apache.flink.sql.parser.impl) LeafQuery:630, FlinkSqlParserImpl (org.apache.flink.sql.parser.impl) LeafQueryOrExpr:15651, FlinkSqlParserImpl (org.apache.flink.sql.parser.impl) QueryOrExpr:15118, FlinkSqlParserImpl (org.apache.flink.sql.parser.impl) OrderedQueryOrExpr:504, FlinkSqlParserImpl (org.apache.flink.sql.parser.impl) SqlStmt:3693, FlinkSqlParserImpl (org.apache.flink.sql.parser.impl) SqlStmtEof:3732, FlinkSqlParserImpl (org.apache.flink.sql.parser.impl) parseSqlStmtEof:234, FlinkSqlParserImpl (org.apache.flink.sql.parser.impl) parseQuery:160, SqlParser (org.apache.calcite.sql.parser) parseStmt:187, SqlParser (org.apache.calcite.sql.parser) parse:48, CalciteParser (org.apache.flink.table.planner.calcite) parse:64, ParserImpl (org.apache.flink.table.planner.delegation) sqlQuery:464, TableEnvironmentImpl (org.apache.flink.table.api.internal) main:73, SimpleOuterJoin$ (spendreport) main:-1, SimpleOuterJoin (spendreport)
经过上面的第一步,会生成一个 SqlNode 对象,它是一个未经验证的抽象语法树,下面就进入了一个语法检查阶段,语法检查前需要知道元数据信息,这个检查会包括表名、字段名、函数名、数据类型的检查。
即:语法检查,根据元数据信息进行语法验证,验证之后还是用 SqlNode 表示 AST 语法树;
package org.apache.Flink.table.planner.operations; public class SqlToOperationConverter { public static Optionalconvert( // 这里进行validate的调用 final SqlNode validated = FlinkPlanner.validate(sqlNode); SqlToOperationConverter converter = new SqlToOperationConverter(FlinkPlanner, catalogManager); } } // 打印出来解析之后 validated 的内容。 validated = {SqlBasicCall@4675} "SELECT `UnnamedTable$0`.`user`, `UnnamedTable$0`.`product`, `UnnamedTable$0`.`amount`\nFROM `default_catalog`.`default_database`.`UnnamedTable$0` AS `UnnamedTable$0`\nWHERE `UnnamedTable$0`.`amount` > 2\nUNION ALL\nSELECT `OrderB`.`user`, `OrderB`.`product`, `OrderB`.`amount`\nFROM `default_catalog`.`default_database`.`OrderB` AS `OrderB`\nWHERE `OrderB`.`amount` < 2" operator = {SqlSetOperator@5000} "UNION ALL" all = true name = "UNION ALL" kind = {SqlKind@5029} "UNION" leftPrec = 14 rightPrec = 15 returnTypeInference = {ReturnTypes$lambda@5030} operandTypeInference = null operandTypeChecker = {SetopOperandTypeChecker@5031} operands = {SqlNode[2]@5001} 0 = {SqlSelect@4840} "SELECT `UnnamedTable$0`.`user`, `UnnamedTable$0`.`product`, `UnnamedTable$0`.`amount`\nFROM `default_catalog`.`default_database`.`UnnamedTable$0` AS `UnnamedTable$0`\nWHERE `UnnamedTable$0`.`amount` > 2" 1 = {SqlSelect@5026} "SELECT `OrderB`.`user`, `OrderB`.`product`, `OrderB`.`amount`\nFROM `default_catalog`.`default_database`.`OrderB` AS `OrderB`\nWHERE `OrderB`.`amount` < 2" functionQuantifier = null expanded = false pos = {SqlParserPos@5003} "line 2, column 1" // 下面是调试相关Stack,可以帮助大家深入理解 validate:81, AbstractNamespace (org.apache.calcite.sql.validate) validateNamespace:1008, SqlValidatorImpl (org.apache.calcite.sql.validate) validateQuery:968, SqlValidatorImpl (org.apache.calcite.sql.validate) validateCall:90, SqlSetOperator (org.apache.calcite.sql) validateCall:5304, SqlValidatorImpl (org.apache.calcite.sql.validate) validate:116, SqlCall (org.apache.calcite.sql) validateScopedExpression:943, SqlValidatorImpl (org.apache.calcite.sql.validate) validate:650, SqlValidatorImpl (org.apache.calcite.sql.validate) org$apache$Flink$table$planner$calcite$FlinkPlannerImpl$$validate:126, FlinkPlannerImpl (org.apache.Flink.table.planner.calcite) validate:105, FlinkPlannerImpl (org.apache.Flink.table.planner.calcite) convert:127, SqlToOperationConverter (org.apache.Flink.table.planner.operations) parse:66, ParserImpl (org.apache.Flink.table.planner.delegation) sqlQuery:464, TableEnvironmentImpl (org.apache.Flink.table.api.internal) main:82, StreamSQLExample$ (spendreport) main:-1, StreamSQLExample (spendreport)
脉络图中,这时候来到了
// NOTE : 执行顺序是从上至下," -----> " 表示生成的实例类型 * * +-----> "left outer JOIN" (SQL statement) * | * | * SqlParser.parseQuery // SQL 解析阶段,生成AST(抽象语法树),作用是SQL–>SqlNode * | * | * +-----> SqlJoin (SqlNode) * | * | * SqlToRelConverter.convertQuery // 语义分析,生成逻辑计划,作用是SqlNode–>RelNode * | * | * +-----> LogicalProject (RelNode) // Abstract Syntax Tree,未优化的RelNode * | * |
经过第二步之后,这里的 SqlNode 就是经过语法校验的 SqlNode 树,接下来这一步就是将 SqlNode 转换成 RelNode/RexNode,也就是生成相应的逻辑计划(Logical Plan)
即:语义分析,根据 SqlNode及元信息构建 RelNode 树,也就是最初版本的逻辑计划(Logical Plan);
根据这个已经生成的Flink的logical Plan,将它转换成calcite的logicalPlan,这样我们才能用到calcite强大的优化规则。
Flink由上往下依次调用各个节点的construct方法,将Flink节点转换成calcite的RelNode节点。真正的实现是在 convertQueryRecursive()
方法中完成的。
比如生成 LogicalProject 调用关系大概如下:
createJoin:378, RelFactories$JoinFactoryImpl (org.apache.calcite.rel.core) createJoin:2520, SqlToRelConverter (org.apache.calcite.sql2rel) convertFrom:2111, SqlToRelConverter (org.apache.calcite.sql2rel) convertSelectImpl:646, SqlToRelConverter (org.apache.calcite.sql2rel) convertSelect:627, SqlToRelConverter (org.apache.calcite.sql2rel) convertQueryRecursive:3181, SqlToRelConverter (org.apache.calcite.sql2rel) convertQuery:563, SqlToRelConverter (org.apache.calcite.sql2rel) org$apache$flink$table$planner$calcite$FlinkPlannerImpl$$rel:148, FlinkPlannerImpl (org.apache.flink.table.planner.calcite) rel:135, FlinkPlannerImpl (org.apache.flink.table.planner.calcite) toQueryOperation:522, SqlToOperationConverter (org.apache.flink.table.planner.operations) convertSqlQuery:436, SqlToOperationConverter (org.apache.flink.table.planner.operations) convert:154, SqlToOperationConverter (org.apache.flink.table.planner.operations) parse:66, ParserImpl (org.apache.flink.table.planner.delegation) sqlQuery:464, TableEnvironmentImpl (org.apache.flink.table.api.internal) main:73, SimpleOuterJoin$ (spendreport) main:-1, SimpleOuterJoin (spendreport)
具体详细源码如下:
SqlToRelConverter 中的 convertQuery() 将 SqlNode 转换为 RelRoot public class SqlToRelConverter { public RelRoot convertQuery(SqlNode query, boolean needsValidation, boolean top) { if (needsValidation) { query = this.validator.validate(query); } RelMetadataQuery.THREAD_PROVIDERS.set(JaninoRelMetadataProvider.of(this.cluster.getMetadataProvider())); RelNode result = this.convertQueryRecursive(query, top, (RelDataType)null).rel; if (top && isStream(query)) { result = new LogicalDelta(this.cluster, ((RelNode)result).getTraitSet(), (RelNode)result); } RelCollation collation = RelCollations.EMPTY; if (!query.isA(SqlKind.DML) && isOrdered(query)) { collation = this.requiredCollation((RelNode)result); } this.checkConvertedType(query, (RelNode)result); RelDataType validatedRowType = this.validator.getValidatedNodeType(query); // 这里设定了Root return RelRoot.of((RelNode)result, validatedRowType, query.getKind()).withCollation(collation); } } // 在这里打印 toQueryOperation:523, SqlToOperationConverter (org.apache.Flink.table.planner.operations) // 得到如下内容,可以看到一个RelRoot的真实结构 relational = {RelRoot@5248} "Root {kind: UNION, rel: LogicalUnion#6, rowType: RecordType(BIGINT user, VARCHAR(2147483647) product, INTEGER amount), fields: [,,], collation: []}" rel = {LogicalUnion@5227} "LogicalUnion#6" inputs = {RegularImmutableList@5272} size = 2 kind = {SqlKind@5029} "UNION" all = true desc = "LogicalUnion#6" rowType = {RelRecordType@5238} "RecordType(BIGINT user, VARCHAR(2147483647) product, INTEGER amount)" digest = "LogicalUnion#6" cluster = {FlinkRelOptCluster@4800} id = 6 traitSet = {RelTraitSet@5273} size = 5 validatedRowType = {RelRecordType@5238} "RecordType(BIGINT user, VARCHAR(2147483647) product, INTEGER amount)" kind = {StructKind@5268} "FULLY_QUALIFIED" nullable = false fieldList = {RegularImmutableList@5269} size = 3 digest = "RecordType(BIGINT user, VARCHAR(2147483647) CHARACTER SET "UTF-16LE" product, INTEGER amount) NOT NULL" kind = {SqlKind@5029} "UNION" lowerName = "union" sql = "UNION" name = "UNION" ordinal = 18 fields = {RegularImmutableList@5254} size = 3 {Integer@5261} 0 -> "user" {Integer@5263} 1 -> "product" {Integer@5265} 2 -> "amount" collation = {RelCollationImpl@5237} "[]" fieldCollations = {RegularImmutableList@5256} size = 0 // 调用栈内容 convertQuery:561, SqlToRelConverter (org.apache.calcite.sql2rel) org$apache$Flink$table$planner$calcite$FlinkPlannerImpl$$rel:148, FlinkPlannerImpl (org.apache.Flink.table.planner.calcite) rel:135, FlinkPlannerImpl (org.apache.Flink.table.planner.calcite) toQueryOperation:522, SqlToOperationConverter (org.apache.Flink.table.planner.operations) convertSqlQuery:436, SqlToOperationConverter (org.apache.Flink.table.planner.operations) convert:154, SqlToOperationConverter (org.apache.Flink.table.planner.operations) parse:66, ParserImpl (org.apache.Flink.table.planner.delegation) sqlQuery:464, TableEnvironmentImpl (org.apache.Flink.table.api.internal) main:82, StreamSQLExample$ (spendreport) main:-1, StreamSQLExample (spendreport) // 再次举例,生成了LogicalProject bb = {SqlToRelConverter$Blackboard@4978} scope = {SelectScope@4977} nameToNodeMap = null root = {LogicalProject@5100} "LogicalProject#4" exps = {RegularImmutableList@5105} size = 3 input = {LogicalJoin@5106} "LogicalJoin#3" desc = "LogicalProject#4" rowType = {RelRecordType@5107} "RecordType(VARCHAR(2147483647) orderId, VARCHAR(2147483647) productName, VARCHAR(2147483647) payType)" digest = "LogicalProject#4" cluster = {FlinkRelOptCluster@4949} id = 4 traitSet = {RelTraitSet@5108} size = 5 inputs = {Collections$SingletonList@5111} size = 1 mapCorrelateToRex = {HashMap@5112} size = 0 isPatternVarRef = false cursors = {ArrayList@5113} size = 0 subQueryList = {LinkedHashSet@5114} size = 0 agg = null window = null mapRootRelToFieldProjection = {HashMap@5115} size = 0 columnMonotonicities = {ArrayList@5116} size = 3 systemFieldList = {ArrayList@5117} size = 0 top = true initializerExpressionFactory = {NullInitializerExpressionFactory@5118} this$0 = {SqlToRelConverter@4926} // 举例,LogicalProject是在这里生成的。 protected void convertFrom(SqlToRelConverter.Blackboard bb, SqlNode from) { case JOIN: RelNode joinRel = this.createJoin(fromBlackboard, leftRel, rightRel, conditionExp, convertedJoinType); bb.setRoot(joinRel, false); } // 相关调用栈 createJoin:378, RelFactories$JoinFactoryImpl (org.apache.calcite.rel.core) createJoin:2520, SqlToRelConverter (org.apache.calcite.sql2rel) convertFrom:2111, SqlToRelConverter (org.apache.calcite.sql2rel) convertSelectImpl:646, SqlToRelConverter (org.apache.calcite.sql2rel) convertSelect:627, SqlToRelConverter (org.apache.calcite.sql2rel) convertQueryRecursive:3181, SqlToRelConverter (org.apache.calcite.sql2rel) convertQuery:563, SqlToRelConverter (org.apache.calcite.sql2rel) org$apache$flink$table$planner$calcite$FlinkPlannerImpl$$rel:148, FlinkPlannerImpl (org.apache.flink.table.planner.calcite) rel:135, FlinkPlannerImpl (org.apache.flink.table.planner.calcite) toQueryOperation:522, SqlToOperationConverter (org.apache.flink.table.planner.operations) convertSqlQuery:436, SqlToOperationConverter (org.apache.flink.table.planner.operations) convert:154, SqlToOperationConverter (org.apache.flink.table.planner.operations) parse:66, ParserImpl (org.apache.flink.table.planner.delegation) sqlQuery:464, TableEnvironmentImpl (org.apache.flink.table.api.internal) main:73, SimpleOuterJoin$ (spendreport) main:-1, SimpleOuterJoin (spendreport)
这时候,脉络图到了这里
// NOTE : 执行顺序是从上至下," -----> " 表示生成的实例类型 * * +-----> "left outer JOIN" (SQL statement) * | * | * SqlParser.parseQuery // SQL 解析阶段,生成AST(抽象语法树),作用是SQL–>SqlNode * | * | * +-----> SqlJoin (SqlNode) * | * | * SqlToRelConverter.convertQuery // 语义分析,生成逻辑计划,作用是SqlNode–>RelNode * | * | * +-----> LogicalProject (RelNode) // Abstract Syntax Tree,未优化的RelNode * | * | * FlinkLogicalJoinConverter (RelOptRule) // Flink定制的优化rules * VolcanoRuleCall.onMatch // 基于Flink定制的一些优化rules去优化 Logical Plan * | * | * +-----> FlinkLogicalJoin (RelNode) // Optimized Logical Plan,逻辑执行计划 * | * | * StreamExecJoinRule (RelOptRule) // Rule that converts FlinkLogicalJoin without window bounds in join condition to StreamExecJoin * VolcanoRuleCall.onMatch // 基于Flink rules将optimized LogicalPlan转成Flink物理执行计划 * | * | * +-----> StreamExecJoin (FlinkRelNode) // Stream physical RelNode,物理执行计划 * | * |
第四阶段,也就是 Calcite 的核心所在。
即:逻辑计划优化,优化器的核心,根据前面生成的逻辑计划按照相应的规则(Rule)进行优化;
Flink的这部分实现统一封装在optimize方法里头。这部分涉及到多个阶段,每个阶段都是用Rule对逻辑计划进行优化和改进。
在 Calcite 架构中,最核心地方就是 Optimizer,也就是优化器,一个 Optimization Engine 包含三个组成部分:
优化器的作用是将解析器生成的关系代数表达式转换成执行计划,供执行引擎执行,在这个过程中,会应用一些规则优化,以帮助生成更高效的执行计划。优化器进行优化的地方如过滤条件的下压(push down),在进行 join 操作前,先进行 filter 操作,这样的话就不需要在 join 时进行全量 join,减少参与 join 的数据量等。
Calcite 中 RelOptPlanner 是 Calcite 中优化器的基类。Calcite 中关于优化器提供了两种实现:
基于代价的优化器(Cost-Based Optimizer,CBO) 是根据优化规则对关系表达式进行转换。这里的转换是说一个关系表达式经过优化规则后会生成另外一个关系表达式,同时原有表达式也会保留,经过一系列转换后会生成多个执行计划,然后 CBO 会根据统计信息和代价模型 (Cost Model) 计算每个执行计划的 Cost,从中挑选 Cost 最小的执行计划。
由上可知,CBO 中有两个依赖:统计信息和代价模型。统计信息的准确与否、代价模型的合理与否都会影响 CBO 选择最优计划。 从上述描述可知,CBO 是优于 RBO 的,原因是 RBO 是一种只认规则,对数据不敏感的呆板的优化器,而在实际过程中,数据往往是有变化的,通过 RBO 生成的执行计划很有可能不是最优的。事实上目前各大数据库和大数据计算引擎都倾向于使用 CBO,但是对于流式计算引擎来说,使用 CBO 还是有很大难度的,因为并不能提前预知数据量等信息,这会极大地影响优化效果,CBO 主要还是应用在离线的场景。
VolcanoPlanner就是 CBO 的实现,它会一直迭代 rules,直到找到 cost 最小的 paln。其部分相关概念如下:
rels
中;best
)和最佳 plan 的 cost(bestCost
)信息。在应用 VolcanoPlanner 时,整体分为以下四步:
Convention
);setRoot()
方法注册相应的 RelNode,并进行相应的初始化操作;下面通过这个 示例 来详细看下 VolcanoPlanner 内部的实现逻辑。
//1. 初始化 VolcanoPlanner 对象,并添加相应的 Rule VolcanoPlanner planner = new VolcanoPlanner(); planner.addRelTraitDef(ConventionTraitDef.INSTANCE); planner.addRelTraitDef(RelDistributionTraitDef.INSTANCE); // 添加相应的 rule planner.addRule(FilterJoinRule.FilterIntoJoinRule.FILTER_ON_JOIN); planner.addRule(ReduceExpressionsRule.PROJECT_INSTANCE); planner.addRule(PruneEmptyRules.PROJECT_INSTANCE); // 添加相应的 ConverterRule planner.addRule(EnumerableRules.ENUMERABLE_MERGE_JOIN_RULE); planner.addRule(EnumerableRules.ENUMERABLE_SORT_RULE); planner.addRule(EnumerableRules.ENUMERABLE_VALUES_RULE); planner.addRule(EnumerableRules.ENUMERABLE_PROJECT_RULE); planner.addRule(EnumerableRules.ENUMERABLE_FILTER_RULE); //2. Changes a relational expression to an equivalent one with a different set of traits. RelTraitSet desiredTraits = relNode.getCluster().traitSet().replace(EnumerableConvention.INSTANCE); relNode = planner.changeTraits(relNode, desiredTraits); //3. 通过 VolcanoPlanner 的 setRoot 方法注册相应的 RelNode,并进行相应的初始化操作 planner.setRoot(relNode); //4. 通过动态规划算法找到 cost 最小的 plan relNode = planner.findBestExp();
Flink 中相关代码如下:
public PlannerContext( TableConfig tableConfig, FunctionCatalog functionCatalog, CatalogManager catalogManager, CalciteSchema rootSchema, ListtraitDefs) { this.tableConfig = tableConfig; this.context = new FlinkContextImpl( tableConfig, functionCatalog, catalogManager, this::createSqlExprToRexConverter); this.rootSchema = rootSchema; this.traitDefs = traitDefs; // Make a framework config to initialize the RelOptCluster instance, // caution that we can only use the attributes that can not be overwrite/configured // by user. this.frameworkConfig = createFrameworkConfig(); // 这里使用了VolcanoPlanner RelOptPlanner planner = new VolcanoPlanner(frameworkConfig.getCostFactory(), frameworkConfig.getContext()); planner.setExecutor(frameworkConfig.getExecutor()); for (RelTraitDef traitDef : frameworkConfig.getTraitDefs()) { planner.addRelTraitDef(traitDef); } this.cluster = FlinkRelOptClusterFactory.create(planner, new RexBuilder(typeFactory)); } //初始化:119, PlannerContext (org.apache.Flink.table.planner.delegation):86, PlannerBase (org.apache.Flink.table.planner.delegation):44, StreamPlanner (org.apache.Flink.table.planner.delegation) create:50, BlinkPlannerFactory (org.apache.Flink.table.planner.delegation) create:325, StreamTableEnvironmentImpl$ (org.apache.Flink.table.api.scala.internal) create:425, StreamTableEnvironment$ (org.apache.Flink.table.api.scala) main:56, StreamSQLExample$ (spendreport) main:-1, StreamSQLExample (spendreport) class FlinkVolcanoProgram[OC <: FlinkOptimizeContext] extends FlinkRuleSetProgram[OC] { override def optimize(root: RelNode, context: OC): RelNode = { val targetTraits = root.getTraitSet.plusAll(requiredOutputTraits.get).simplify() // VolcanoPlanner limits that the planer a RelNode tree belongs to and // the VolcanoPlanner used to optimize the RelNode tree should be same instance. // see: VolcanoPlanner#registerImpl // here, use the planner in cluster directly // 这里也使用了VolcanoPlanner val planner = root.getCluster.getPlanner.asInstanceOf[VolcanoPlanner] val optProgram = Programs.ofRules(rules) } } // 其调用栈 optimize:60, FlinkVolcanoProgram (org.apache.Flink.table.planner.plan.optimize.program) apply:62, FlinkChainedProgram$$anonfun$optimize$1 (org.apache.Flink.table.planner.plan.optimize.program) apply:58, FlinkChainedProgram$$anonfun$optimize$1 (org.apache.Flink.table.planner.plan.optimize.program) apply:157, TraversableOnce$$anonfun$foldLeft$1 (scala.collection) apply:157, TraversableOnce$$anonfun$foldLeft$1 (scala.collection) foreach:891, Iterator$class (scala.collection) foreach:1334, AbstractIterator (scala.collection) foreach:72, IterableLike$class (scala.collection) foreach:54, AbstractIterable (scala.collection) foldLeft:157, TraversableOnce$class (scala.collection) foldLeft:104, AbstractTraversable (scala.collection) optimize:57, FlinkChainedProgram (org.apache.Flink.table.planner.plan.optimize.program) optimizeTree:170, StreamCommonSubGraphBasedOptimizer (org.apache.Flink.table.planner.plan.optimize) doOptimize:90, StreamCommonSubGraphBasedOptimizer (org.apache.Flink.table.planner.plan.optimize) optimize:77, CommonSubGraphBasedOptimizer (org.apache.Flink.table.planner.plan.optimize) optimize:248, PlannerBase (org.apache.Flink.table.planner.delegation) translate:151, PlannerBase (org.apache.Flink.table.planner.delegation) toDataStream:210, StreamTableEnvironmentImpl (org.apache.Flink.table.api.scala.internal) toAppendStream:107, StreamTableEnvironmentImpl (org.apache.Flink.table.api.scala.internal) toAppendStream:101, TableConversions (org.apache.Flink.table.api.scala) main:89, StreamSQLExample$ (spendreport) main:-1, StreamSQLExample (spendreport) // 下面全部是 VolcanoPlanner 相关代码和调用栈 // VolcanoPlanner添加Rule,筛选出来的优化规则会封装成VolcanoRuleMatch,然后扔到RuleQueue里,而这个RuleQueue正是接下来执行动态规划算法要用到的核心类。 public class VolcanoPlanner extends AbstractRelOptPlanner { public boolean addRule(RelOptRule rule) { ...... } } addRule:438, VolcanoPlanner (org.apache.calcite.plan.volcano) run:315, Programs$RuleSetProgram (org.apache.calcite.tools) optimize:64, FlinkVolcanoProgram (org.apache.Flink.table.planner.plan.optimize.program) apply:62, FlinkChainedProgram$$anonfun$optimize$1 (org.apache.Flink.table.planner.plan.optimize.program) apply:58, FlinkChainedProgram$$anonfun$optimize$1 (org.apache.Flink.table.planner.plan.optimize.program) apply:157, TraversableOnce$$anonfun$foldLeft$1 (scala.collection) apply:157, TraversableOnce$$anonfun$foldLeft$1 (scala.collection) foreach:891, Iterator$class (scala.collection) foreach:1334, AbstractIterator (scala.collection) foreach:72, IterableLike$class (scala.collection) foreach:54, AbstractIterable (scala.collection) foldLeft:157, TraversableOnce$class (scala.collection) foldLeft:104, AbstractTraversable (scala.collection) optimize:57, FlinkChainedProgram (org.apache.Flink.table.planner.plan.optimize.program) optimizeTree:170, StreamCommonSubGraphBasedOptimizer (org.apache.Flink.table.planner.plan.optimize) doOptimize:90, StreamCommonSubGraphBasedOptimizer (org.apache.Flink.table.planner.plan.optimize) optimize:77, CommonSubGraphBasedOptimizer (org.apache.Flink.table.planner.plan.optimize) optimize:248, PlannerBase (org.apache.Flink.table.planner.delegation) translate:151, PlannerBase (org.apache.Flink.table.planner.delegation) toDataStream:210, StreamTableEnvironmentImpl (org.apache.Flink.table.api.scala.internal) toAppendStream:107, StreamTableEnvironmentImpl (org.apache.Flink.table.api.scala.internal) toAppendStream:101, TableConversions (org.apache.Flink.table.api.scala) main:89, StreamSQLExample$ (spendreport) main:-1, StreamSQLExample (spendreport) // VolcanoPlanner修改Traits public class VolcanoPlanner extends AbstractRelOptPlanner { public RelNode changeTraits(RelNode rel, RelTraitSet toTraits) { assert !rel.getTraitSet().equals(toTraits); assert toTraits.allSimple(); RelSubset rel2 = this.ensureRegistered(rel, (RelNode)null); return rel2.getTraitSet().equals(toTraits) ? rel2 : rel2.set.getOrCreateSubset(rel.getCluster(), toTraits.simplify()); } } changeTraits:529, VolcanoPlanner (org.apache.calcite.plan.volcano) run:324, Programs$RuleSetProgram (org.apache.calcite.tools) optimize:64, FlinkVolcanoProgram (org.apache.Flink.table.planner.plan.optimize.program) apply:62, FlinkChainedProgram$$anonfun$optimize$1 (org.apache.Flink.table.planner.plan.optimize.program) apply:58, FlinkChainedProgram$$anonfun$optimize$1 (org.apache.Flink.table.planner.plan.optimize.program) apply:157, TraversableOnce$$anonfun$foldLeft$1 (scala.collection) apply:157, TraversableOnce$$anonfun$foldLeft$1 (scala.collection) foreach:891, Iterator$class (scala.collection) foreach:1334, AbstractIterator (scala.collection) foreach:72, IterableLike$class (scala.collection) foreach:54, AbstractIterable (scala.collection) foldLeft:157, TraversableOnce$class (scala.collection) foldLeft:104, AbstractTraversable (scala.collection) optimize:57, FlinkChainedProgram (org.apache.Flink.table.planner.plan.optimize.program) optimizeTree:170, StreamCommonSubGraphBasedOptimizer (org.apache.Flink.table.planner.plan.optimize) doOptimize:90, StreamCommonSubGraphBasedOptimizer (org.apache.Flink.table.planner.plan.optimize) optimize:77, CommonSubGraphBasedOptimizer (org.apache.Flink.table.planner.plan.optimize) optimize:248, PlannerBase (org.apache.Flink.table.planner.delegation) translate:151, PlannerBase (org.apache.Flink.table.planner.delegation) toDataStream:210, StreamTableEnvironmentImpl (org.apache.Flink.table.api.scala.internal) toAppendStream:107, StreamTableEnvironmentImpl (org.apache.Flink.table.api.scala.internal) toAppendStream:101, TableConversions (org.apache.Flink.table.api.scala) main:89, StreamSQLExample$ (spendreport) main:-1, StreamSQLExample (spendreport) // VolcanoPlanner设定Root public class VolcanoPlanner extends AbstractRelOptPlanner { public void setRoot(RelNode rel) { this.registerMetadataRels(); this.root = this.registerImpl(rel, (RelSet)null); if (this.originalRoot == null) { this.originalRoot = rel; } this.ruleQueue.recompute(this.root); this.ensureRootConverters(); } } setRoot:294, VolcanoPlanner (org.apache.calcite.plan.volcano) run:326, Programs$RuleSetProgram (org.apache.calcite.tools) optimize:64, FlinkVolcanoProgram (org.apache.Flink.table.planner.plan.optimize.program) apply:62, FlinkChainedProgram$$anonfun$optimize$1 (org.apache.Flink.table.planner.plan.optimize.program) apply:58, FlinkChainedProgram$$anonfun$optimize$1 (org.apache.Flink.table.planner.plan.optimize.program) apply:157, TraversableOnce$$anonfun$foldLeft$1 (scala.collection) apply:157, TraversableOnce$$anonfun$foldLeft$1 (scala.collection) foreach:891, Iterator$class (scala.collection) foreach:1334, AbstractIterator (scala.collection) foreach:72, IterableLike$class (scala.collection) foreach:54, AbstractIterable (scala.collection) foldLeft:157, TraversableOnce$class (scala.collection) foldLeft:104, AbstractTraversable (scala.collection) optimize:57, FlinkChainedProgram (org.apache.Flink.table.planner.plan.optimize.program) optimizeTree:170, StreamCommonSubGraphBasedOptimizer (org.apache.Flink.table.planner.plan.optimize) doOptimize:90, StreamCommonSubGraphBasedOptimizer (org.apache.Flink.table.planner.plan.optimize) optimize:77, CommonSubGraphBasedOptimizer (org.apache.Flink.table.planner.plan.optimize) optimize:248, PlannerBase (org.apache.Flink.table.planner.delegation) translate:151, PlannerBase (org.apache.Flink.table.planner.delegation) toDataStream:210, StreamTableEnvironmentImpl (org.apache.Flink.table.api.scala.internal) toAppendStream:107, StreamTableEnvironmentImpl (org.apache.Flink.table.api.scala.internal) toAppendStream:101, TableConversions (org.apache.Flink.table.api.scala) main:89, StreamSQLExample$ (spendreport) main:-1, StreamSQLExample (spendreport) // VolcanoPlanner找到最小cost,本质上就是一个动态规划算法的实现。 public class VolcanoPlanner extends AbstractRelOptPlanner { public RelNode findBestExp() { this.ensureRootConverters(); this.registerMaterializations(); int cumulativeTicks = 0; VolcanoPlannerPhase[] var2 = VolcanoPlannerPhase.values(); int var3 = var2.length; for(int var4 = 0; var4 < var3; ++var4) { VolcanoPlannerPhase phase = var2[var4]; this.setInitialImportance(); RelOptCost targetCost = this.costFactory.makeHugeCost(); int tick = 0; int firstFiniteTick = -1; int splitCount = 0; int giveUpTick = 2147483647; while(true) { ++tick; ++cumulativeTicks; if (this.root.bestCost.isLe(targetCost)) { if (firstFiniteTick < 0) { firstFiniteTick = cumulativeTicks; this.clearImportanceBoost(); } if (!this.ambitious) { break; } targetCost = this.root.bestCost.multiplyBy(0.9D); ++splitCount; if (this.impatient) { if (firstFiniteTick < 10) { giveUpTick = cumulativeTicks + 25; } else { giveUpTick = cumulativeTicks + Math.max(firstFiniteTick / 10, 25); } } } else { if (cumulativeTicks > giveUpTick) { break; } if (this.root.bestCost.isInfinite() && tick % 10 == 0) { this.injectImportanceBoost(); } } VolcanoRuleMatch match = this.ruleQueue.popMatch(phase); if (match == null) { break; } assert match.getRule().matches(match); match.onMatch(); this.root = this.canonize(this.root); } this.ruleQueue.phaseCompleted(phase); } RelNode cheapest = this.root.buildCheapestPlan(this); return cheapest; } } // VolcanoPlanner得到的Flink逻辑节点 cheapest,就是最终选择的结点 cheapest = {FlinkLogicalUnion@6487} "FlinkLogicalUnion#443" cluster = {FlinkRelOptCluster@6224} inputs = {RegularImmutableList@6493} size = 2 0 = {FlinkLogicalCalc@6498} "FlinkLogicalCalc#441" cluster = {FlinkRelOptCluster@6224} calcProgram = {RexProgram@6509} "(expr#0..2=[{inputs}], expr#3=[2], expr#4=[>($t2, $t3)], proj#0..2=[{exprs}], $condition=[$t4])" program = {RexProgram@6509} "(expr#0..2=[{inputs}], expr#3=[2], expr#4=[>($t2, $t3)], proj#0..2=[{exprs}], $condition=[$t4])" input = {FlinkLogicalDataStreamTableScan@6510} "rel#437:FlinkLogicalDataStreamTableScan.LOGICAL.any.None: 0.false.UNKNOWN(table=[default_catalog, default_database, UnnamedTable$0])" desc = "FlinkLogicalCalc#441" rowType = {RelRecordType@6504} "RecordType(BIGINT user, VARCHAR(2147483647) product, INTEGER amount)" digest = "FlinkLogicalCalc#441" AbstractRelNode.cluster = {FlinkRelOptCluster@6224} id = 441 traitSet = {RelTraitSet@5942} size = 5 1 = {FlinkLogicalCalc@6499} "FlinkLogicalCalc#442" cluster = {FlinkRelOptCluster@6224} calcProgram = {RexProgram@6502} "(expr#0..2=[{inputs}], expr#3=[2], expr#4=[<($t2, $t3)], proj#0..2=[{exprs}], $condition=[$t4])" program = {RexProgram@6502} "(expr#0..2=[{inputs}], expr#3=[2], expr#4=[ " 表示生成的实例类型 * * +-----> "left outer JOIN" (SQL statement) * | * | * SqlParser.parseQuery // SQL 解析阶段,生成AST(抽象语法树),作用是SQL–>SqlNode * | * | * +-----> SqlJoin (SqlNode) * | * | * SqlToRelConverter.convertQuery // 语义分析,生成逻辑计划,作用是SqlNode–>RelNode * | * | * +-----> LogicalProject (RelNode) // Abstract Syntax Tree,未优化的RelNode * | * | * FlinkLogicalJoinConverter (RelOptRule) // Flink定制的优化rules * VolcanoRuleCall.onMatch // 基于Flink定制的一些优化rules去优化 Logical Plan * | * | * +-----> FlinkLogicalJoin (RelNode) // Optimized Logical Plan,逻辑执行计划 * | * | * StreamExecJoinRule (RelOptRule) // Rule that converts FlinkLogicalJoin without window bounds in join condition to StreamExecJoin * VolcanoRuleCall.onMatch // 基于Flink rules将optimized LogicalPlan转成Flink物理执行计划 * | * | * +-----> StreamExecJoin (FlinkRelNode) // Stream physical RelNode,物理执行计划 * | * | * StreamExecJoin.translateToPlanInternal // 作用是生成 StreamOperator, 即Flink算子 * | * | * +-----> StreamingJoinOperator (StreamOperator) // Streaming unbounded Join operator in StreamTask * | * |
Calcite 针对不同的大数据组件,将优化后的plan映射到最终的大数据引擎,如折射成Flink图。
这一块只要是递归调用各个节点DataStreamRel的translateToPlan方法,这个方法利用CodeGen元编程成Flink的各种算子。现在就相当于我们直接利用Flink的DataSet或DataStream API开发的程序。
class StreamPlanner( executor: Executor, config: TableConfig, functionCatalog: FunctionCatalog, catalogManager: CatalogManager) extends PlannerBase(executor, config, functionCatalog, catalogManager, isStreamingMode = true) { override protected def translateToPlan( execNodes: util.List[ExecNode[_, _]]): util.List[Transformation[_]] = { execNodes.map { case node: StreamExecNode[_] => node.translateToPlan(this) case _ => throw new TableException("Cannot generate DataStream due to an invalid logical plan. " + "This is a bug and should not happen. Please file an issue.") } } } package org.apache.Flink.table.planner.plan.nodes.physical.stream class StreamExecUnion( cluster: RelOptCluster, traitSet: RelTraitSet, inputRels: util.List[RelNode], all: Boolean, outputRowType: RelDataType) extends Union(cluster, traitSet, inputRels, all) with StreamPhysicalRel with StreamExecNode[BaseRow] { // 这里就生成了Flink算子 override protected def translateToPlanInternal( planner: StreamPlanner): Transformation[BaseRow] = { val transformations = getInputNodes.map { input => input.translateToPlan(planner).asInstanceOf[Transformation[BaseRow]] } new UnionTransformation(transformations) } } // 调用栈 translateToPlanInternal:85, StreamExecUnion (org.apache.Flink.table.planner.plan.nodes.physical.stream) translateToPlanInternal:39, StreamExecUnion (org.apache.Flink.table.planner.plan.nodes.physical.stream) translateToPlan:58, ExecNode$class (org.apache.Flink.table.planner.plan.nodes.exec) translateToPlan:39, StreamExecUnion (org.apache.Flink.table.planner.plan.nodes.physical.stream) translateToTransformation:184, StreamExecSink (org.apache.Flink.table.planner.plan.nodes.physical.stream) translateToPlanInternal:153, StreamExecSink (org.apache.Flink.table.planner.plan.nodes.physical.stream) translateToPlanInternal:48, StreamExecSink (org.apache.Flink.table.planner.plan.nodes.physical.stream) translateToPlan:58, ExecNode$class (org.apache.Flink.table.planner.plan.nodes.exec) translateToPlan:48, StreamExecSink (org.apache.Flink.table.planner.plan.nodes.physical.stream) apply:60, StreamPlanner$$anonfun$translateToPlan$1 (org.apache.Flink.table.planner.delegation) apply:59, StreamPlanner$$anonfun$translateToPlan$1 (org.apache.Flink.table.planner.delegation) apply:234, TraversableLike$$anonfun$map$1 (scala.collection) apply:234, TraversableLike$$anonfun$map$1 (scala.collection) foreach:891, Iterator$class (scala.collection) foreach:1334, AbstractIterator (scala.collection) foreach:72, IterableLike$class (scala.collection) foreach:54, AbstractIterable (scala.collection) map:234, TraversableLike$class (scala.collection) map:104, AbstractTraversable (scala.collection) translateToPlan:59, StreamPlanner (org.apache.Flink.table.planner.delegation) translate:153, PlannerBase (org.apache.Flink.table.planner.delegation) toDataStream:210, StreamTableEnvironmentImpl (org.apache.Flink.table.api.scala.internal) toAppendStream:107, StreamTableEnvironmentImpl (org.apache.Flink.table.api.scala.internal) toAppendStream:101, TableConversions (org.apache.Flink.table.api.scala) main:89, StreamSQLExample$ (spendreport) main:-1, StreamSQLExample (spendreport)
此时脉络图补充完全。
// NOTE : 执行顺序是从上至下," -----> " 表示生成的实例类型 * * +-----> "left outer JOIN" (SQL statement) * | * | * SqlParser.parseQuery // SQL 解析阶段,生成AST(抽象语法树),作用是SQL–>SqlNode * | * | * +-----> SqlJoin (SqlNode) * | * | * SqlToRelConverter.convertQuery // 语义分析,生成逻辑计划,作用是SqlNode–>RelNode * | * | * +-----> LogicalProject (RelNode) // Abstract Syntax Tree,未优化的RelNode * | * | * FlinkLogicalJoinConverter (RelOptRule) // Flink定制的优化rules * VolcanoRuleCall.onMatch // 基于Flink定制的一些优化rules去优化 Logical Plan * | * | * +-----> FlinkLogicalJoin (RelNode) // Optimized Logical Plan,逻辑执行计划 * | * | * StreamExecJoinRule (RelOptRule) // Rule that converts FlinkLogicalJoin without window bounds in join condition to StreamExecJoin * VolcanoRuleCall.onMatch // 基于Flink rules将optimized LogicalPlan转成Flink物理执行计划 * | * | * +-----> StreamExecJoin (FlinkRelNode) // Stream physical RelNode,物理执行计划 * | * | * StreamExecJoin.translateToPlanInternal // 作用是生成 StreamOperator, 即Flink算子 * | * | * +-----> StreamingJoinOperator (StreamOperator) // Streaming unbounded Join operator in StreamTask * | * | * StreamTwoInputProcessor.proce***ecord1// 在TwoInputStreamTask调用StreamingJoinOperator,真实的执行 * | * |
运行时候,则会在StreamTask中进行业务操作,这就是我们熟悉的操作了。调用栈举例如下
processElement:150, StreamTaskNetworkInput (org.apache.Flink.streaming.runtime.io) emitNext:128, StreamTaskNetworkInput (org.apache.Flink.streaming.runtime.io) processInput:69, StreamOneInputProcessor (org.apache.Flink.streaming.runtime.io) processInput:311, StreamTask (org.apache.Flink.streaming.runtime.tasks) runDefaultAction:-1, 354713989 (org.apache.Flink.streaming.runtime.tasks.StreamTask$$Lambda$710) runMailboxLoop:187, MailboxProcessor (org.apache.Flink.streaming.runtime.tasks.mailbox) runMailboxLoop:487, StreamTask (org.apache.Flink.streaming.runtime.tasks) invoke:470, StreamTask (org.apache.Flink.streaming.runtime.tasks) doRun:707, Task (org.apache.Flink.runtime.taskmanager) run:532, Task (org.apache.Flink.runtime.taskmanager) run:748, Thread (java.lang)
下面是如何具体生成各种执行计划的代码
import org.apache.Flink.api.java.utils.ParameterTool import org.apache.Flink.api.scala._ import org.apache.Flink.streaming.api.scala.{DataStream, StreamExecutionEnvironment} import org.apache.Flink.table.api.EnvironmentSettings import org.apache.Flink.table.api.scala._ object StreamSQLExample { // ************************************************************************* // PROGRAM // ************************************************************************* def main(args: Array[String]): Unit = { val params = ParameterTool.fromArgs(args) val planner = if (params.has("planner")) params.get("planner") else "Flink" // set up execution environment val env = StreamExecutionEnvironment.getExecutionEnvironment val tEnv = if (planner == "blink") { // use blink planner in streaming mode val settings = EnvironmentSettings.newInstance() .useBlinkPlanner() .inStreamingMode() .build() StreamTableEnvironment.create(env, settings) } else if (planner == "Flink") { // use Flink planner in streaming mode StreamTableEnvironment.create(env) } else { System.err.println("The planner is incorrect. Please run 'StreamSQLExample --planner', " + "where planner (it is either Flink or blink, and the default is Flink) indicates whether the " + "example uses Flink planner or blink planner.") return } val orderA: DataStream[Order] = env.fromCollection(Seq( Order(1L, "beer", 3), Order(1L, "diaper", 4), Order(3L, "rubber", 2))) val orderB: DataStream[Order] = env.fromCollection(Seq( Order(2L, "pen", 3), Order(2L, "rubber", 3), Order(4L, "beer", 1))) // convert DataStream to Table val tableA = tEnv.fromDataStream(orderA, 'user, 'product, 'amount) // register DataStream as Table tEnv.registerDataStream("OrderB", orderB, 'user, 'product, 'amount) // union the two tables val result = tEnv.sqlQuery( s""" |SELECT * FROM $tableA WHERE amount > 2 |UNION ALL |SELECT * FROM OrderB WHERE amount < 2 """.stripMargin) result.toAppendStream[Order].print() print(tEnv.explain(result)) env.execute() } // ************************************************************************* // USER DATA TYPES // ************************************************************************* case class Order(user: Long, product: String, amount: Int) }
整个流程的转换大体就像这样:
== Abstract Syntax Tree == LogicalUnion(all=[true]) :- LogicalProject(user=[$0], product=[$1], amount=[$2]) : +- LogicalFilter(condition=[>($2, 2)]) : +- LogicalTableScan(table=[[default_catalog, default_database, UnnamedTable$0]]) +- LogicalProject(user=[$0], product=[$1], amount=[$2]) +- LogicalFilter(condition=[(amount, 2)]) : +- DataStreamScan(table=[[default_catalog, default_database, UnnamedTable$0]], fields=[user, product, amount]) +- Calc(select=[user, product, amount], where=[ 2)]) ship_strategy : FORWARD Stage 12 : Operator content : SourceConversion(table=[default_catalog.default_database.OrderB], fields=[user, product, amount]) ship_strategy : FORWARD Stage 13 : Operator content : Calc(select=[user, product, amount], where=[(amount < 2)]) ship_strategy : FORWARD
import java.sql.Timestamp import org.apache.Flink.api.java.utils.ParameterTool import org.apache.Flink.api.scala._ import org.apache.Flink.streaming.api.TimeCharacteristic import org.apache.Flink.streaming.api.scala.StreamExecutionEnvironment import org.apache.Flink.table.api.{EnvironmentSettings, TableEnvironment} import org.apache.Flink.table.api.scala._ import org.apache.Flink.types.Row import scala.collection.mutable object SimpleOuterJoin { def main(args: Array[String]): Unit = { val params = ParameterTool.fromArgs(args) val planner = if (params.has("planner")) params.get("planner") else "Flink" val env = StreamExecutionEnvironment.getExecutionEnvironment val tEnv = if (planner == "blink") { // use blink planner in streaming mode val settings = EnvironmentSettings.newInstance() .useBlinkPlanner() .inStreamingMode() .build() StreamTableEnvironment.create(env, settings) } else if (planner == "Flink") { // use Flink planner in streaming mode StreamTableEnvironment.create(env) } else { System.err.println("The planner is incorrect. Please run 'StreamSQLExample --planner', " + "where planner (it is either Flink or blink, and the default is Flink) indicates whether the " + "example uses Flink planner or blink planner.") return } env.setParallelism(1) env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime) // 构造订单数据 val ordersData = new mutable.MutableList[(String, String)] ordersData.+=(("001", "iphone")) ordersData.+=(("002", "mac")) ordersData.+=(("003", "book")) ordersData.+=(("004", "cup")) // 构造付款表 val paymentData = new mutable.MutableList[(String, String)] paymentData.+=(("001", "alipay")) paymentData.+=(("002", "card")) paymentData.+=(("003", "card")) paymentData.+=(("004", "alipay")) val orders = env .fromCollection(ordersData) .toTable(tEnv, 'orderId, 'productName) val ratesHistory = env .fromCollection(paymentData) .toTable(tEnv, 'orderId, 'payType) tEnv.registerTable("Orders", orders) tEnv.registerTable("Payment", ratesHistory) var sqlQuery = """ |SELECT | o.orderId, | o.productName, | p.payType |FROM | Orders AS o left outer JOIN Payment AS p ON o.orderId = p.orderId |""".stripMargin tEnv.registerTable("TemporalJoinResult", tEnv.sqlQuery(sqlQuery)) val result = tEnv.scan("TemporalJoinResult").toRetractStream[Row] result.print() print(tEnv.explain(tEnv.sqlQuery(sqlQuery))) env.execute() } }
整个流程的转换如下:
== Abstract Syntax Tree == LogicalProject(orderId=[$0], productName=[$1], payType=[$3]) +- LogicalJoin(condition=[=($0, $2)], joinType=[left]) :- LogicalTableScan(table=[[default_catalog, default_database, Orders]]) +- LogicalTableScan(table=[[default_catalog, default_database, Payment]]) == Optimized Logical Plan == Calc(select=[orderId, productName, payType]) +- Join(joinType=[LeftOuterJoin], where=[=(orderId, orderId0)], select=[orderId, productName, orderId0, payType], leftInputSpec=[NoUniqueKey], rightInputSpec=[NoUniqueKey]) :- Exchange(distribution=[hash[orderId]]) : +- DataStreamScan(table=[[default_catalog, default_database, Orders]], fields=[orderId, productName]) +- Exchange(distribution=[hash[orderId]]) +- DataStreamScan(table=[[default_catalog, default_database, Payment]], fields=[orderId, payType]) == Physical Execution Plan == Stage 1 : Data Source content : Source: Collection Source Stage 2 : Data Source content : Source: Collection Source Stage 11 : Operator content : SourceConversion(table=[default_catalog.default_database.Orders], fields=[orderId, productName]) ship_strategy : FORWARD Stage 13 : Operator content : SourceConversion(table=[default_catalog.default_database.Payment], fields=[orderId, payType]) ship_strategy : FORWARD Stage 15 : Operator content : Join(joinType=[LeftOuterJoin], where=[(orderId = orderId0)], select=[orderId, productName, orderId0, payType], leftInputSpec=[NoUniqueKey], rightInputSpec=[NoUniqueKey]) ship_strategy : HASH Stage 16 : Operator content : Calc(select=[orderId, productName, payType]) ship_strategy : FORWARD 输出结果是 (true,001,iphone,null) (false,001,iphone,null) (true,001,iphone,alipay) (true,002,mac,null) (false,002,mac,null) (true,002,mac,card) (true,003,book,null) (false,003,book,null) (true,003,book,card) (true,004,cup,null) (false,004,cup,null) (true,004,cup,alipay)
下面是调试时候的调用栈,这个可以给大家参考
// 调用Rule进行优化 matches:49, StreamExecJoinRule (org.apache.Flink.table.planner.plan.rules.physical.stream) matchRecurse:263, VolcanoRuleCall (org.apache.calcite.plan.volcano) matchRecurse:370, VolcanoRuleCall (org.apache.calcite.plan.volcano) matchRecurse:370, VolcanoRuleCall (org.apache.calcite.plan.volcano) match:247, VolcanoRuleCall (org.apache.calcite.plan.volcano) fireRules:1534, VolcanoPlanner (org.apache.calcite.plan.volcano) registerImpl:1807, VolcanoPlanner (org.apache.calcite.plan.volcano) register:846, VolcanoPlanner (org.apache.calcite.plan.volcano) ensureRegistered:868, VolcanoPlanner (org.apache.calcite.plan.volcano) ensureRegistered:90, VolcanoPlanner (org.apache.calcite.plan.volcano) onRegister:329, AbstractRelNode (org.apache.calcite.rel) registerImpl:1668, VolcanoPlanner (org.apache.calcite.plan.volcano) register:846, VolcanoPlanner (org.apache.calcite.plan.volcano) ensureRegistered:868, VolcanoPlanner (org.apache.calcite.plan.volcano) ensureRegistered:90, VolcanoPlanner (org.apache.calcite.plan.volcano) onRegister:329, AbstractRelNode (org.apache.calcite.rel) registerImpl:1668, VolcanoPlanner (org.apache.calcite.plan.volcano) register:846, VolcanoPlanner (org.apache.calcite.plan.volcano) ensureRegistered:868, VolcanoPlanner (org.apache.calcite.plan.volcano) changeTraits:529, VolcanoPlanner (org.apache.calcite.plan.volcano) run:324, Programs$RuleSetProgram (org.apache.calcite.tools) optimize:64, FlinkVolcanoProgram (org.apache.Flink.table.planner.plan.optimize.program) apply:62, FlinkChainedProgram$$anonfun$optimize$1 (org.apache.Flink.table.planner.plan.optimize.program) apply:58, FlinkChainedProgram$$anonfun$optimize$1 (org.apache.Flink.table.planner.plan.optimize.program) apply:157, TraversableOnce$$anonfun$foldLeft$1 (scala.collection) apply:157, TraversableOnce$$anonfun$foldLeft$1 (scala.collection) foreach:891, Iterator$class (scala.collection) foreach:1334, AbstractIterator (scala.collection) foreach:72, IterableLike$class (scala.collection) foreach:54, AbstractIterable (scala.collection) foldLeft:157, TraversableOnce$class (scala.collection) foldLeft:104, AbstractTraversable (scala.collection) optimize:57, FlinkChainedProgram (org.apache.Flink.table.planner.plan.optimize.program) optimizeTree:170, StreamCommonSubGraphBasedOptimizer (org.apache.Flink.table.planner.plan.optimize) doOptimize:90, StreamCommonSubGraphBasedOptimizer (org.apache.Flink.table.planner.plan.optimize) optimize:77, CommonSubGraphBasedOptimizer (org.apache.Flink.table.planner.plan.optimize) optimize:248, PlannerBase (org.apache.Flink.table.planner.delegation) translate:151, PlannerBase (org.apache.Flink.table.planner.delegation) toDataStream:210, StreamTableEnvironmentImpl (org.apache.Flink.table.api.scala.internal) toRetractStream:127, StreamTableEnvironmentImpl (org.apache.Flink.table.api.scala.internal) toRetractStream:146, TableConversions (org.apache.Flink.table.api.scala) main:75, SimpleOuterJoin$ (spendreport) main:-1, SimpleOuterJoin (spendreport) // 调用Rule进行转换到Flink逻辑算子 translateToPlanInternal:140, StreamExecJoin (org.apache.flink.table.planner.plan.nodes.physical.stream) translateToPlanInternal:51, StreamExecJoin (org.apache.flink.table.planner.plan.nodes.physical.stream) translateToPlan:58, ExecNode$class (org.apache.flink.table.planner.plan.nodes.exec) translateToPlan:51, StreamExecJoin (org.apache.flink.table.planner.plan.nodes.physical.stream) translateToPlanInternal:54, StreamExecCalc (org.apache.flink.table.planner.plan.nodes.physical.stream) translateToPlanInternal:39, StreamExecCalc (org.apache.flink.table.planner.plan.nodes.physical.stream) translateToPlan:58, ExecNode$class (org.apache.flink.table.planner.plan.nodes.exec) translateToPlan:38, StreamExecCalcBase (org.apache.flink.table.planner.plan.nodes.physical.stream) translateToTransformation:184, StreamExecSink (org.apache.flink.table.planner.plan.nodes.physical.stream) translateToPlanInternal:153, StreamExecSink (org.apache.flink.table.planner.plan.nodes.physical.stream) translateToPlanInternal:48, StreamExecSink (org.apache.flink.table.planner.plan.nodes.physical.stream) translateToPlan:58, ExecNode$class (org.apache.flink.table.planner.plan.nodes.exec) translateToPlan:48, StreamExecSink (org.apache.flink.table.planner.plan.nodes.physical.stream) apply:60, StreamPlanner$$anonfun$translateToPlan$1 (org.apache.flink.table.planner.delegation) apply:59, StreamPlanner$$anonfun$translateToPlan$1 (org.apache.flink.table.planner.delegation) apply:234, TraversableLike$$anonfun$map$1 (scala.collection) apply:234, TraversableLike$$anonfun$map$1 (scala.collection) foreach:891, Iterator$class (scala.collection) foreach:1334, AbstractIterator (scala.collection) foreach:72, IterableLike$class (scala.collection) foreach:54, AbstractIterable (scala.collection) map:234, TraversableLike$class (scala.collection) map:104, AbstractTraversable (scala.collection) translateToPlan:59, StreamPlanner (org.apache.flink.table.planner.delegation) translate:153, PlannerBase (org.apache.flink.table.planner.delegation) toDataStream:210, StreamTableEnvironmentImpl (org.apache.flink.table.api.scala.internal) toRetractStream:127, StreamTableEnvironmentImpl (org.apache.flink.table.api.scala.internal) toRetractStream:146, TableConversions (org.apache.flink.table.api.scala) main:75, SimpleOuterJoin$ (spendreport) main:-1, SimpleOuterJoin (spendreport) // 运行时候 @Internal public final class StreamTwoInputProcessorimplements StreamInputProcessor { private void proce***ecord2( StreamRecordrecord, TwoInputStreamOperatorstreamOperator, Counter numRecordsIn) throws Exception { streamOperator.setKeyContextElement2(record); streamOperator.processElement2(record); postProce***ecord(numRecordsIn); } } // 能看出来,streamOperator就是StreamingJoinOperator streamOperator = {StreamingJoinOperator@10943} leftIsOuter = true rightIsOuter = false outRow = {JoinedRow@10948} "JoinedRow{row1=org.apache.flink.table.dataformat.BinaryRow@dc6a1b67, row2=(+|null,null)}" leftNullRow = {GenericRow@10949} "(+|null,null)" rightNullRow = {GenericRow@10950} "(+|null,null)" leftRecordStateView = {OuterJoinRecordStateViews$InputSideHasNoUniqueKey@10945} rightRecordStateView = {JoinRecordStateViews$InputSideHasNoUniqueKey@10946} generatedJoinCondition = {GeneratedJoinCondition@10951} leftType = {BaseRowTypeInfo@10952} "BaseRow(orderId: STRING, productName: STRING)" rightType = {BaseRowTypeInfo@10953} "BaseRow(orderId: STRING, payType: STRING)" leftInputSideSpec = {JoinInputSideSpec@10954} "NoUniqueKey" rightInputSideSpec = {JoinInputSideSpec@10955} "NoUniqueKey" nullFilterKeys = {int[1]@10956} nullSafe = false filterAllNulls = true minRetentionTime = 0 stateCleaningEnabled = false joinCondition = {AbstractStreamingJoinOperator$JoinConditionWithNullFilters@10947} collector = {TimestampedCollector@10957} chainingStrategy = {ChainingStrategy@10958} "HEAD" container = {TwoInputStreamTask@10959} "Join(joinType=[LeftOuterJoin], where=[(orderId = orderId0)], select=[orderId, productName, orderId0, payType], leftInputSpec=[NoUniqueKey], rightInputSpec=[NoUniqueKey]) -> Calc(select=[orderId, productName, payType]) -> SinkConversionToTuple2 -> Sink: Print to Std. Out (1/1)" config = {StreamConfig@10960} "\n=======================Stream Config=======================\nNumber of non-chained inputs: 2\nNumber of non-chained outputs: 0\nOutput names: []\nPartitioning:\nChained subtasks: [(Join(joinType=[LeftOuterJoin], where=[(orderId = orderId0)], select=[orderId, productName, orderId0, payType], leftInputSpec=[NoUniqueKey], rightInputSpec=[NoUniqueKey])-7 -> Calc(select=[orderId, productName, payType])-8, typeNumber=0, selectedNames=[], outputPartitioner=FORWARD, outputTag=null)]\nOperator: SimpleOperatorFactory\nBuffer timeout: 100\nState Monitoring: false\n\n\n---------------------\nChained task configs\n---------------------\n{8=\n=======================Stream Config=======================\nNumber of non-chained inputs: 0\nNumber of non-chained outputs: 0\nOutput names: []\nPartitioning:\nChained subtasks: [(Calc(select=[orderId, productName, payType])-8 -> SinkConversionToTuple2-9, typeNumber=0, selectedNames=[], outputPartitioner=FORWARD, outputTag=null)]\nOperator: CodeGenOperatorFactory\nBuffer timeout: " output = {AbstractStreamOperator$CountingOutput@10961} runtimeContext = {StreamingRuntimeContext@10962} stateKeySelector1 = {BinaryRowKeySelector@10963} stateKeySelector2 = {BinaryRowKeySelector@10964} keyedStateBackend = {HeapKeyedStateBackend@10965} "HeapKeyedStateBackend" keyedStateStore = {DefaultKeyedStateStore@10966} operatorStateBackend = {DefaultOperatorStateBackend@10967} metrics = {OperatorMetricGroup@10968} latencyStats = {LatencyStats@10969} processingTimeService = {ProcessingTimeServiceImpl@10970} timeServiceManager = {InternalTimeServiceManager@10971} combinedWatermark = -9223372036854775808 input1Watermark = -9223372036854775808 input2Watermark = -9223372036854775808 // 处理table 1 processElement1:118, StreamingJoinOperator (org.apache.Flink.table.runtime.operators.join.stream) proce***ecord1:135, StreamTwoInputProcessor (org.apache.Flink.streaming.runtime.io) lambda$new$0:100, StreamTwoInputProcessor (org.apache.Flink.streaming.runtime.io) accept:-1, 169462196 (org.apache.Flink.streaming.runtime.io.StreamTwoInputProcessor$$Lambda$733) emitRecord:362, StreamTwoInputProcessor$StreamTaskNetworkOutput (org.apache.Flink.streaming.runtime.io) processElement:151, StreamTaskNetworkInput (org.apache.Flink.streaming.runtime.io) emitNext:128, StreamTaskNetworkInput (org.apache.Flink.streaming.runtime.io) processInput:182, StreamTwoInputProcessor (org.apache.Flink.streaming.runtime.io) processInput:311, StreamTask (org.apache.Flink.streaming.runtime.tasks) runDefaultAction:-1, 1284793893 (org.apache.Flink.streaming.runtime.tasks.StreamTask$$Lambda$713) runMailboxLoop:187, MailboxProcessor (org.apache.Flink.streaming.runtime.tasks.mailbox) runMailboxLoop:487, StreamTask (org.apache.Flink.streaming.runtime.tasks) invoke:470, StreamTask (org.apache.Flink.streaming.runtime.tasks) doRun:707, Task (org.apache.Flink.runtime.taskmanager) run:532, Task (org.apache.Flink.runtime.taskmanager) run:748, Thread (java.lang) // 处理table 2 processElement2:123, StreamingJoinOperator (org.apache.Flink.table.runtime.operators.join.stream) proce***ecord2:145, StreamTwoInputProcessor (org.apache.Flink.streaming.runtime.io) lambda$new$1:107, StreamTwoInputProcessor (org.apache.Flink.streaming.runtime.io) accept:-1, 76811487 (org.apache.Flink.streaming.runtime.io.StreamTwoInputProcessor$$Lambda$734) emitRecord:362, StreamTwoInputProcessor$StreamTaskNetworkOutput (org.apache.Flink.streaming.runtime.io) processElement:151, StreamTaskNetworkInput (org.apache.Flink.streaming.runtime.io) emitNext:128, StreamTaskNetworkInput (org.apache.Flink.streaming.runtime.io) processInput:185, StreamTwoInputProcessor (org.apache.Flink.streaming.runtime.io) processInput:311, StreamTask (org.apache.Flink.streaming.runtime.tasks) runDefaultAction:-1, 1284793893 (org.apache.Flink.streaming.runtime.tasks.StreamTask$$Lambda$713) runMailboxLoop:187, MailboxProcessor (org.apache.Flink.streaming.runtime.tasks.mailbox) runMailboxLoop:487, StreamTask (org.apache.Flink.streaming.runtime.tasks) invoke:470, StreamTask (org.apache.Flink.streaming.runtime.tasks) doRun:707, Task (org.apache.Flink.runtime.taskmanager) run:532, Task (org.apache.Flink.runtime.taskmanager) run:748, Thread (java.lang) // 处理table 1 proce***ecord1:134, StreamTwoInputProcessor (org.apache.flink.streaming.runtime.io) lambda$new$0:100, StreamTwoInputProcessor (org.apache.flink.streaming.runtime.io) accept:-1, 230607815 (org.apache.flink.streaming.runtime.io.StreamTwoInputProcessor$$Lambda$735) emitRecord:362, StreamTwoInputProcessor$StreamTaskNetworkOutput (org.apache.flink.streaming.runtime.io) processElement:151, StreamTaskNetworkInput (org.apache.flink.streaming.runtime.io) emitNext:128, StreamTaskNetworkInput (org.apache.flink.streaming.runtime.io) processInput:182, StreamTwoInputProcessor (org.apache.flink.streaming.runtime.io) processInput:311, StreamTask (org.apache.flink.streaming.runtime.tasks) runDefaultAction:-1, 33038573 (org.apache.flink.streaming.runtime.tasks.StreamTask$$Lambda$718) runMailboxLoop:187, MailboxProcessor (org.apache.flink.streaming.runtime.tasks.mailbox) runMailboxLoop:487, StreamTask (org.apache.flink.streaming.runtime.tasks) invoke:470, StreamTask (org.apache.flink.streaming.runtime.tasks) doRun:707, Task (org.apache.flink.runtime.taskmanager) run:532, Task (org.apache.flink.runtime.taskmanager) run:748, Thread (java.lang) // 处理table 2 proce***ecord2:144, StreamTwoInputProcessor (org.apache.flink.streaming.runtime.io) lambda$new$1:107, StreamTwoInputProcessor (org.apache.flink.streaming.runtime.io) accept:-1, 212261435 (org.apache.flink.streaming.runtime.io.StreamTwoInputProcessor$$Lambda$736) emitRecord:362, StreamTwoInputProcessor$StreamTaskNetworkOutput (org.apache.flink.streaming.runtime.io) processElement:151, StreamTaskNetworkInput (org.apache.flink.streaming.runtime.io) emitNext:128, StreamTaskNetworkInput (org.apache.flink.streaming.runtime.io) processInput:185, StreamTwoInputProcessor (org.apache.flink.streaming.runtime.io) processInput:311, StreamTask (org.apache.flink.streaming.runtime.tasks) runDefaultAction:-1, 33038573 (org.apache.flink.streaming.runtime.tasks.StreamTask$$Lambda$718) runMailboxLoop:187, MailboxProcessor (org.apache.flink.streaming.runtime.tasks.mailbox) runMailboxLoop:487, StreamTask (org.apache.flink.streaming.runtime.tasks) invoke:470, StreamTask (org.apache.flink.streaming.runtime.tasks) doRun:707, Task (org.apache.flink.runtime.taskmanager) run:532, Task (org.apache.flink.runtime.taskmanager) run:748, Thread (java.lang)
import java.sql.Timestamp import org.apache.flink.api.java.utils.ParameterTool import org.apache.flink.api.scala._ import org.apache.flink.streaming.api.TimeCharacteristic import org.apache.flink.streaming.api.functions.timestamps.BoundedOutOfOrdernessTimestampExtractor import org.apache.flink.streaming.api.scala.StreamExecutionEnvironment import org.apache.flink.streaming.api.windowing.time.Time import org.apache.flink.table.api.{EnvironmentSettings, TableEnvironment} import org.apache.flink.table.api.scala._ import org.apache.flink.types.Row import scala.collection.mutable import java.sql.Timestamp import org.apache.flink.api.scala._ import org.apache.flink.streaming.api.TimeCharacteristic import org.apache.flink.streaming.api.functions.timestamps.BoundedOutOfOrdernessTimestampExtractor import org.apache.flink.streaming.api.scala.StreamExecutionEnvironment import org.apache.flink.streaming.api.windowing.time.Time import org.apache.flink.table.api.TableEnvironment import org.apache.flink.table.api.scala._ import org.apache.flink.types.Row import scala.collection.mutable object SimpleTimeIntervalJoinA { def main(args: Array[String]): Unit = { val params = ParameterTool.fromArgs(args) val planner = if (params.has("planner")) params.get("planner") else "flink" val env = StreamExecutionEnvironment.getExecutionEnvironment val tEnv = if (planner == "blink") { // use blink planner in streaming mode val settings = EnvironmentSettings.newInstance() .useBlinkPlanner() .inStreamingMode() .build() StreamTableEnvironment.create(env, settings) } else if (planner == "flink") { // use flink planner in streaming mode StreamTableEnvironment.create(env) } else { System.err.println("The planner is incorrect. Please run 'StreamSQLExample --planner', " + "where planner (it is either flink or blink, and the default is flink) indicates whether the " + "example uses flink planner or blink planner.") return } env.setParallelism(1) env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime) // 构造订单数据 val ordersData = new mutable.MutableList[(String, String, Timestamp)] ordersData.+=(("001", "iphone", new Timestamp(1545800002000L))) ordersData.+=(("002", "mac", new Timestamp(1545800003000L))) ordersData.+=(("003", "book", new Timestamp(1545800004000L))) ordersData.+=(("004", "cup", new Timestamp(1545800018000L))) // 构造付款表 val paymentData = new mutable.MutableList[(String, String, Timestamp)] paymentData.+=(("001", "alipay", new Timestamp(1545803501000L))) paymentData.+=(("002", "card", new Timestamp(1545803602000L))) paymentData.+=(("003", "card", new Timestamp(1545803610000L))) paymentData.+=(("004", "alipay", new Timestamp(1545803611000L))) val orders = env .fromCollection(ordersData) .assignTimestampsAndWatermarks(new TimestampExtractor[String, String]()) .toTable(tEnv, 'orderId, 'productName, 'orderTime.rowtime) val ratesHistory = env .fromCollection(paymentData) .assignTimestampsAndWatermarks(new TimestampExtractor[String, String]()) .toTable(tEnv, 'orderId, 'payType, 'payTime.rowtime) tEnv.registerTable("Orders", orders) tEnv.registerTable("Payment", ratesHistory) var sqlQuery = """ |SELECT | o.orderId, | o.productName, | p.payType, | o.orderTime, | cast(payTime as timestamp) as payTime |FROM | Orders AS o left outer JOIN Payment AS p ON o.orderId = p.orderId AND | p.payTime BETWEEN orderTime AND orderTime + INTERVAL '1' HOUR |""".stripMargin tEnv.registerTable("TemporalJoinResult", tEnv.sqlQuery(sqlQuery)) val result = tEnv.scan("TemporalJoinResult").toAppendStream[Row] result.print() print(tEnv.explain(tEnv.sqlQuery(sqlQuery))) env.execute() } } class TimestampExtractor[T1, T2] extends BoundedOutOfOrdernessTimestampExtractor[(T1, T2, Timestamp)](Time.seconds(10)) { override def extractTimestamp(element: (T1, T2, Timestamp)): Long = { element._3.getTime } }
输出如下
== Abstract Syntax Tree == LogicalProject(orderId=[$0], productName=[$1], payType=[$4], orderTime=[$2], payTime=[CAST($5):TIMESTAMP(6)]) +- LogicalJoin(condition=[AND(=($0, $3), >=($5, $2), =(payTime, orderTime), = orderTime) AND (payTime <= (orderTime + 3600000:INTERVAL HOUR)))], select=[orderId, productName, orderTime, orderId0, payType, payTime]) ship_strategy : HASH Stage 18 : Operator content : Calc(select=[orderId, productName, payType, orderTime, CAST(CAST(payTime)) AS payTime]) ship_strategy : FORWARD 001,iphone,alipay,2018-12-26T04:53:22,2018-12-26T05:51:41 002,mac,card,2018-12-26T04:53:23,2018-12-26T05:53:22 004,cup,alipay,2018-12-26T04:53:38,2018-12-26T05:53:31 003,book,null,2018-12-26T04:53:24,null
相关类以及调用栈
class StreamExecWindowJoin { } class StreamExecWindowJoinRule extends ConverterRule( classOf[FlinkLogicalJoin], FlinkConventions.LOGICAL, FlinkConventions.STREAM_PHYSICAL, "StreamExecWindowJoinRule") { } matches:54, StreamExecWindowJoinRule (org.apache.flink.table.planner.plan.rules.physical.stream) matchRecurse:263, VolcanoRuleCall (org.apache.calcite.plan.volcano) match:247, VolcanoRuleCall (org.apache.calcite.plan.volcano) fireRules:1534, VolcanoPlanner (org.apache.calcite.plan.volcano) registerImpl:1807, VolcanoPlanner (org.apache.calcite.plan.volcano) register:846, VolcanoPlanner (org.apache.calcite.plan.volcano) ensureRegistered:868, VolcanoPlanner (org.apache.calcite.plan.volcano) ensureRegistered:90, VolcanoPlanner (org.apache.calcite.plan.volcano) onRegister:329, AbstractRelNode (org.apache.calcite.rel) registerImpl:1668, VolcanoPlanner (org.apache.calcite.plan.volcano) register:846, VolcanoPlanner (org.apache.calcite.plan.volcano) ensureRegistered:868, VolcanoPlanner (org.apache.calcite.plan.volcano) ensureRegistered:90, VolcanoPlanner (org.apache.calcite.plan.volcano) onRegister:329, AbstractRelNode (org.apache.calcite.rel) registerImpl:1668, VolcanoPlanner (org.apache.calcite.plan.volcano) register:846, VolcanoPlanner (org.apache.calcite.plan.volcano) ensureRegistered:868, VolcanoPlanner (org.apache.calcite.plan.volcano) changeTraits:529, VolcanoPlanner (org.apache.calcite.plan.volcano) run:324, Programs$RuleSetProgram (org.apache.calcite.tools) optimize:64, FlinkVolcanoProgram (org.apache.flink.table.planner.plan.optimize.program) apply:62, FlinkChainedProgram$$anonfun$optimize$1 (org.apache.flink.table.planner.plan.optimize.program) apply:58, FlinkChainedProgram$$anonfun$optimize$1 (org.apache.flink.table.planner.plan.optimize.program) apply:157, TraversableOnce$$anonfun$foldLeft$1 (scala.collection) apply:157, TraversableOnce$$anonfun$foldLeft$1 (scala.collection) foreach:891, Iterator$class (scala.collection) foreach:1334, AbstractIterator (scala.collection) foreach:72, IterableLike$class (scala.collection) foreach:54, AbstractIterable (scala.collection) foldLeft:157, TraversableOnce$class (scala.collection) foldLeft:104, AbstractTraversable (scala.collection) optimize:57, FlinkChainedProgram (org.apache.flink.table.planner.plan.optimize.program) optimizeTree:170, StreamCommonSubGraphBasedOptimizer (org.apache.flink.table.planner.plan.optimize) doOptimize:90, StreamCommonSubGraphBasedOptimizer (org.apache.flink.table.planner.plan.optimize) optimize:77, CommonSubGraphBasedOptimizer (org.apache.flink.table.planner.plan.optimize) optimize:248, PlannerBase (org.apache.flink.table.planner.delegation) translate:151, PlannerBase (org.apache.flink.table.planner.delegation) toDataStream:210, StreamTableEnvironmentImpl (org.apache.flink.table.api.scala.internal) toAppendStream:107, StreamTableEnvironmentImpl (org.apache.flink.table.api.scala.internal) toAppendStream:101, TableConversions (org.apache.flink.table.api.scala) main:93, SimpleTimeIntervalJoinA$ (spendreport) main:-1, SimpleTimeIntervalJoinA (spendreport) translateToPlanInternal:136, StreamExecWindowJoin (org.apache.flink.table.planner.plan.nodes.physical.stream) translateToPlanInternal:53, StreamExecWindowJoin (org.apache.flink.table.planner.plan.nodes.physical.stream) translateToPlan:58, ExecNode$class (org.apache.flink.table.planner.plan.nodes.exec) translateToPlan:53, StreamExecWindowJoin (org.apache.flink.table.planner.plan.nodes.physical.stream) translateToPlanInternal:54, StreamExecCalc (org.apache.flink.table.planner.plan.nodes.physical.stream) translateToPlanInternal:39, StreamExecCalc (org.apache.flink.table.planner.plan.nodes.physical.stream) translateToPlan:58, ExecNode$class (org.apache.flink.table.planner.plan.nodes.exec) translateToPlan:38, StreamExecCalcBase (org.apache.flink.table.planner.plan.nodes.physical.stream) translateToTransformation:184, StreamExecSink (org.apache.flink.table.planner.plan.nodes.physical.stream) translateToPlanInternal:153, StreamExecSink (org.apache.flink.table.planner.plan.nodes.physical.stream) translateToPlanInternal:48, StreamExecSink (org.apache.flink.table.planner.plan.nodes.physical.stream) translateToPlan:58, ExecNode$class (org.apache.flink.table.planner.plan.nodes.exec) translateToPlan:48, StreamExecSink (org.apache.flink.table.planner.plan.nodes.physical.stream) apply:60, StreamPlanner$$anonfun$translateToPlan$1 (org.apache.flink.table.planner.delegation) apply:59, StreamPlanner$$anonfun$translateToPlan$1 (org.apache.flink.table.planner.delegation) apply:234, TraversableLike$$anonfun$map$1 (scala.collection) apply:234, TraversableLike$$anonfun$map$1 (scala.collection) foreach:891, Iterator$class (scala.collection) foreach:1334, AbstractIterator (scala.collection) foreach:72, IterableLike$class (scala.collection) foreach:54, AbstractIterable (scala.collection) map:234, TraversableLike$class (scala.collection) map:104, AbstractTraversable (scala.collection) translateToPlan:59, StreamPlanner (org.apache.flink.table.planner.delegation) translate:153, PlannerBase (org.apache.flink.table.planner.delegation) toDataStream:210, StreamTableEnvironmentImpl (org.apache.flink.table.api.scala.internal) toAppendStream:107, StreamTableEnvironmentImpl (org.apache.flink.table.api.scala.internal) toAppendStream:101, TableConversions (org.apache.flink.table.api.scala) main:93, SimpleTimeIntervalJoinA$ (spendreport) main:-1, SimpleTimeIntervalJoinA (spendreport)
Flink table&Sql中使用Calcite
Flink sql的实现
Calcite 功能简析及在 Flink 的应用
基于Flink1.8 深入理解Flink Sql执行流程 + Flink Sql语法扩展
使用Flink Table &Sql api来构建批量和流式应用(3)Flink Sql 使用
Flink关系型API: Table API 与SQL
Flink sql的实现
Flink如何实现动态表与静态表的Join操作
一文解析Flink SQL工作流程
Flink1.9-table/SQLAPI
【Flink SQL引擎】:Calcite 功能简析及在 Flink 的应用
Apache Calcite 处理流程详解(一)
Apache Calcite 优化器详解(二)
揭秘 Flink 1.9 新架构,Blink Planner 你会用了吗?
Flink 原理与实现:Table & SQL API | Jark's Blog