Build the schema (table header)
from pyspark.sql.types import *
from pyspark.sql import Row
schemaString = "name course score"
# score is numeric, so give it IntegerType so it matches the int values parsed below;
# a StringType score would make createDataFrame fail schema verification.
fields = [StructField(fn, IntegerType() if fn == "score" else StringType(), True)
          for fn in schemaString.split(" ")]
bt = StructType(fields)
bt
Build the data
# Use a positional Row: keyword arguments are sorted alphabetically in Spark < 3.0,
# which would scramble the column order against the explicit schema.
xssj = spark.sparkContext.textFile("file:///usr/local/spark/mycode/rdd/xs.txt") \
    .map(lambda line: line.split(',')) \
    .map(lambda x: Row(x[0], x[1], int(x[2])))  # (name, course, score) in schema order
xssj.take(3)
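The code above assumes xs.txt holds one comma-separated record per line in the order name, course, score. A hypothetical example of its contents (these rows are illustrative, not taken from the source):

```
Tom,Spark,90
Tom,Hadoop,80
Jim,Spark,70
```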
Assemble the DataFrame (attach the schema to the data)
xsb=spark.createDataFrame(xssj,bt)
xsb.show()
Use DataFrame operations or an SQL statement to complete the following data-analysis task, and compare the result with an RDD-based implementation:
xsb.filter(xsb.name == 'Tom').groupBy('course').agg({'score': 'mean'}).show()