
Calling spark-sql from a shell script

This article shows how to invoke spark-sql from a shell script, which may be a useful reference when wiring Spark SQL jobs into batch pipelines.

#!/bin/sh
#set -x

#########################
#author : robin
#version : v3.0

#########################
#$1 : start time for business circle
#$2 : end time for business circle
#$3 : start time of slowly changing dimension for SF organization
#$4 : spark parameter list, separated by ','
#spark parameter comment:
#1. value of queue name
#2. value of spark job name
#3. value of driver-memory
#4. value of num-executors
#5. value of executor-cores
#6. value of executor-memory
#7. value of spark.yarn.executor.memoryOverhead
#8. value of spark.sql.shuffle.partitions

#deprecated parameters:
#spark.storage.memoryFraction
#spark.shuffle.memoryFraction
#[As of Spark 1.6, execution and storage memory management are unified. All memory fractions used in the old model are now deprecated and no longer read. If you wish to use the old memory management, you may explicitly enable spark.memory.useLegacyMode (not recommended).]
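The script body is cut off after the header comments, so here is a hedged sketch of how the invocation might be completed: split the comma-separated list in $4 into the eight documented fields and assemble the spark-sql command line. The function name `build_spark_cmd`, the sample queue/job names, and the memory values are illustrative assumptions, not the original author's code; the sketch echoes the command instead of executing it so it can be inspected safely.

```shell
#!/bin/sh
# Hypothetical completion of the truncated script above.
# $1..$3: time arguments; $4: comma-separated spark parameter list
# (queue, job name, driver-memory, num-executors, executor-cores,
#  executor-memory, memoryOverhead, shuffle.partitions).

build_spark_cmd() {
    params="$4"
    # Split $4 on commas into positional fields (POSIX word splitting).
    old_ifs="$IFS"; IFS=','
    set -- $params
    IFS="$old_ifs"
    queue="$1"; job_name="$2"; driver_mem="$3"; num_exec="$4"
    exec_cores="$5"; exec_mem="$6"; mem_overhead="$7"; shuffle_parts="$8"

    # Echo the assembled command instead of running it; in the real
    # script this line would invoke spark-sql with an -e/-f SQL payload.
    echo "spark-sql --queue $queue --name $job_name" \
         "--driver-memory $driver_mem --num-executors $num_exec" \
         "--executor-cores $exec_cores --executor-memory $exec_mem" \
         "--conf spark.yarn.executor.memoryOverhead=$mem_overhead" \
         "--conf spark.sql.shuffle.partitions=$shuffle_parts"
}

# Example with made-up values matching the eight documented parameters:
build_spark_cmd 20200101 20200131 20200101 \
    "root.etl,dim_sf_org,4g,10,4,8g,2048,400"
```

Splitting on a temporary IFS keeps the script POSIX-sh compatible (no bash arrays), which matches the `#!/bin/sh` shebang in the original header.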
