# Integrating Spark with Hadoop 3.x
## Download
https://archive.apache.org/dist/spark/spark-3.0.2/spark-3.0.2-bin-hadoop3.2.tgz
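For example, the tarball can be fetched straight into /data/software on the node where Spark will be installed (wget is just one way to download it):

```bash
cd /data/software
wget https://archive.apache.org/dist/spark/spark-3.0.2/spark-3.0.2-bin-hadoop3.2.tgz
```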
## Extract and rename
In the /data/software directory:
```bash
tar -zxf spark-3.0.2-bin-hadoop3.2.tgz
mv spark-3.0.2-bin-hadoop3.2 spark
```

## Modify the configuration
### Edit spark-env.sh
```bash
cd /data/software/spark/conf
cp spark-env.sh.template spark-env.sh
# Edit spark-env.sh and add the following environment variables
export JAVA_HOME=/data/software/java
export SPARK_MASTER_HOST=hadoop01
export SPARK_MASTER_PORT=7077
# Avoid port clashes with other Hadoop web UIs (the Spark master web UI defaults to 8080)
export SPARK_MASTER_WEBUI_PORT=8081
```

### Edit the slaves file
```bash
cd /data/software/spark/conf
cp slaves.template slaves
# Add the Spark worker nodes to this file; the following three hosts act as workers
hadoop01
hadoop02
hadoop03
```
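The same Spark installation has to exist on every worker host. A minimal sketch of distributing the configured directory, assuming passwordless SSH and the same /data/software layout on all three nodes:

```bash
# Copy the configured Spark directory to the other two workers
rsync -a /data/software/spark/ hadoop02:/data/software/spark/
rsync -a /data/software/spark/ hadoop03:/data/software/spark/
```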
### Change the log level

```bash
cd /data/software/spark/conf
cp log4j.properties.template log4j.properties
# In log4j.properties, change the log4j.rootCategory line
# from: log4j.rootCategory=INFO, console
# to:   log4j.rootCategory=WARN, console
```
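If you prefer to script the change instead of editing the file by hand, the same edit can be done with sed (a sketch; it assumes the line still has its default form):

```bash
cd /data/software/spark/conf
# Lower the root logger from INFO to WARN
sed -i 's/^log4j.rootCategory=INFO, console/log4j.rootCategory=WARN, console/' log4j.properties
```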
### Add system environment variables

```bash
$ cat > /etc/profile.d/hadoop-env.sh <<"EOF"
export SPARK_HOME=/data/software/spark
EOF
$ source /etc/profile
```

### Edit spark-config.sh
When starting Spark you may see an error like: `failed to launch: nice -n 0 /soft/spark/bin/spark-class org.apache.spark.deploy.worker`
Edit $SPARK_HOME/sbin/spark-config.sh and add the JAVA_HOME environment variable:
```bash
export JAVA_HOME=/data/software/java
```

## Integrate with Hadoop
### Link the configuration files
Link Hadoop's configuration files into Spark's conf directory:
```bash
ln -s $HADOOP_HOME/etc/hadoop/core-site.xml $SPARK_HOME/conf/core-site.xml
ln -s $HADOOP_HOME/etc/hadoop/hdfs-site.xml $SPARK_HOME/conf/hdfs-site.xml
```

### Configure Spark on YARN
Edit /data/software/hadoop/etc/hadoop/yarn-site.xml and add the following settings:
```xml
<!-- YARN resource settings; check whether these were already configured earlier -->
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>4096</value>
</property>
<property>
  <name>yarn.nodemanager.vmem-check-enabled</name>
  <value>true</value>
  <description>Whether virtual memory limits will be enforced for containers</description>
</property>
<property>
  <name>yarn.nodemanager.vmem-pmem-ratio</name>
  <value>2</value>
  <description>Ratio between virtual memory and physical memory when setting memory limits for containers</description>
</property>
<property>
  <name>yarn.nodemanager.pmem-check-enabled</name>
  <value>true</value>
</property>
<property>
  <name>yarn.scheduler.maximum-allocation-mb</name>
  <value>4096</value>
</property>
```
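These settings only take effect after YARN is restarted. A minimal sketch, assuming the standard Hadoop sbin scripts are used to manage YARN on this cluster:

```bash
# Restart YARN so the new yarn-site.xml is picked up
$HADOOP_HOME/sbin/stop-yarn.sh
$HADOOP_HOME/sbin/start-yarn.sh
```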
Edit /data/software/spark/conf/spark-env.sh and add HADOOP_CONF_DIR:

```bash
# Configure Spark on YARN
export HADOOP_CONF_DIR=/data/software/hadoop/etc/hadoop
```

Link Hadoop's yarn-site.xml into Spark's conf directory:
```bash
ln -s $HADOOP_HOME/etc/hadoop/yarn-site.xml $SPARK_HOME/conf/yarn-site.xml
```

Upload Spark's jars to HDFS:
```bash
hadoop fs -mkdir /spark-jars
hadoop fs -put $SPARK_HOME/jars/* /spark-jars
```
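To confirm the upload, list the directory in HDFS:

```bash
hadoop fs -ls /spark-jars | head
```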
Edit spark-defaults.conf and add the following settings:

```bash
# hacluster is the name of the Hadoop cluster (the HDFS nameservice)
spark.master yarn
spark.eventLog.enabled true
spark.eventLog.dir hdfs://hacluster/spark-logs
spark.yarn.jars hdfs://hacluster/spark-jars/*
# Avoid Spark SQL warnings about overly long plan string representations
spark.sql.debug.maxToStringFields 100
# Web UI port for monitoring Spark jobs (default 4040, optional)
spark.ui.port 4040
```
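With the YARN settings and spark-defaults.conf in place, a quick smoke test is to submit the bundled SparkPi example to YARN (the exact examples jar name may differ slightly in your build):

```bash
# Run SparkPi in cluster mode; the application should appear in the YARN ResourceManager UI
$SPARK_HOME/bin/spark-submit \
  --class org.apache.spark.examples.SparkPi \
  --master yarn \
  --deploy-mode cluster \
  $SPARK_HOME/examples/jars/spark-examples_2.12-3.0.2.jar 10
```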
### Configure the history server

```bash
cd /data/software/spark/conf
cp spark-defaults.conf.template spark-defaults.conf
# Set the following two options in this file
spark.eventLog.enabled true
spark.eventLog.dir hdfs://hacluster/spark-logs
# Make sure this directory exists in HDFS; if it does not, create it manually:
hadoop fs -mkdir /spark-logs
```
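The settings above only write event logs; to browse finished applications, the history server must read the same directory. A minimal sketch, assuming the paths configured above (the history UI listens on port 18080 by default):

```bash
# Point the history server at the event log directory
echo "spark.history.fs.logDirectory hdfs://hacluster/spark-logs" >> /data/software/spark/conf/spark-defaults.conf
# Start it and open http://hadoop01:18080/
$SPARK_HOME/sbin/start-history-server.sh
```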
### Configure Spark HA

Edit $SPARK_HOME/conf/spark-env.sh and add the ZooKeeper settings:
```bash
# Comment out the fixed master settings
#export SPARK_MASTER_HOST=hadoop01
#export SPARK_MASTER_PORT=7077
export SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=ZOOKEEPER -Dspark.deploy.zookeeper.url=hadoop01:2181,hadoop02:2181,hadoop03:2181 -Dspark.deploy.zookeeper.dir=/spark"
# Number of cores each worker can use
export SPARK_WORKER_CORES=2
# Memory available to each worker
export SPARK_WORKER_MEMORY=2G
```

## Start the Spark cluster
```bash
# Run on the node intended as master. If Spark's sbin is added to PATH, start-all.sh clashes with Hadoop's
# script of the same name, so call it by full path or consider renaming it.
$SPARK_HOME/sbin/start-all.sh
# Run on the standby master node
$SPARK_HOME/sbin/start-master.sh
# Inspect the ZooKeeper tree to verify HA; run the following in order
$ZOOKEEPER_HOME/bin/zkCli.sh -server hadoop01
ls /
ls /spark
ls /spark/master_status
ls /spark/leader_election
```
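To watch a failover actually happen, one hedged check (assuming hadoop01 is currently the ALIVE master and the standby was started on hadoop02 as above) is to stop the active master and wait for the standby to take over:

```bash
# On the active master (hadoop01)
$SPARK_HOME/sbin/stop-master.sh
# After a short recovery period, the standby's web UI (http://hadoop02:8081/) should report status ALIVE
```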
## Verify Spark

1. Open the web UI at http://hadoop01:8081/ to check the cluster status.
2. Run a word count with Spark:
```bash
$SPARK_HOME/bin/spark-shell
sc.textFile("hdfs://hacluster/sanguo/shuguo.txt").flatMap(_.split(" ")).map((_,1)).reduceByKey(_+_).collect
```

## Configure Spark on Hive
Link /data/software/hive/conf/hive-site.xml into Spark's conf directory:
```bash
# Either form works; the second expresses the same link with environment variables
ln -s /data/software/hive/conf/hive-site.xml $SPARK_HOME/conf/hive-site.xml
# ln -s $HIVE_HOME/conf/hive-site.xml $SPARK_HOME/conf/hive-site.xml
```

Copy the Hive metastore's JDBC driver into Spark's jars directory:
```bash
# Either form works; the second uses the environment variables
cp /data/software/hive/lib/mysql-connector-java-8.0.22.jar /data/software/spark/jars/
# cp $HIVE_HOME/lib/mysql-connector-java-8.0.22.jar $SPARK_HOME/jars/
```

### Verify the Hive integration
```bash
$SPARK_HOME/bin/spark-shell
# Run in the interactive shell
scala> spark.sql("select count(*) from test3").show
```
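As a non-interactive alternative, the spark-sql CLI that ships with Spark can run the same statement directly (test3 is just the example table used above):

```bash
$SPARK_HOME/bin/spark-sql -e "select count(*) from test3"
```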