Hadoop/Spark HA (High Availability) Cluster Setup

Cluster layout

192.168.211.129   elastic    (zookeeper, kafka, hadoop namenode, yarn resourcemanager, hbase hmaster, spark master, es master)
192.168.211.130   hbase      (zookeeper, kafka, hadoop namenode, hadoop datanode, yarn resourcemanager, yarn nodemanager, spark worker, es data)
192.168.211.131   mongodb    (zookeeper, kafka, hadoop datanode, yarn nodemanager, spark worker, es data)

Install the JDK (every node)

rpm -ivh jdk-7u80-linux-x64.rpm

Configure SSH (every node)

vi /etc/hosts and add:
    192.168.211.129   elastic
    192.168.211.130   hbase
    192.168.211.131   mongodb

useradd spark
passwd spark

Switch to the spark user:
ssh-keygen -t rsa
ssh-copy-id -i /home/spark/.ssh/id_rsa.pub elastic
ssh-copy-id -i /home/spark/.ssh/id_rsa.pub hbase
ssh-copy-id -i /home/spark/.ssh/id_rsa.pub mongodb
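The three ssh-copy-id invocations can be collapsed into one loop. A small sketch, shown as a dry run so it is safe to paste (drop the leading `echo` to actually push the key); the node list matches the /etc/hosts entries added above:

```shell
# Dry run: print one ssh-copy-id command per cluster node.
# Remove the leading `echo` to execute the commands for real.
nodes="elastic hbase mongodb"
for h in $nodes; do
    echo ssh-copy-id -i /home/spark/.ssh/id_rsa.pub "$h"
done
```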

On the elastic machine:
cd
mkdir nosql
Copy the tarballs to be installed into the nosql directory
tar -zxf hadoop-2.6.2.tar.gz
tar -zxf zookeeper-3.4.6.tar.gz
tar -zxf spark-2.0.2-bin-hadoop2.6.tgz
tar -zxf hbase-1.2.4-bin.tar.gz
tar -zxf kafka_2.10-0.10.1.0.tgz
tar -zxf elasticsearch-5.0.1.tar.gz
tar -zxf mongodb-linux-x86_64-rhel62-3.2.11.tgz
vi .bashrc
    JAVA_HOME=/usr/java/default
    HADOOP_HOME=/home/spark/nosql/hadoop-2.6.2
    SPARK_HOME=/home/spark/nosql/spark-2.0.2-bin-hadoop2.6
    ZOOKEEPER_HOME=/home/spark/nosql/zookeeper-3.4.6
    HBASE_HOME=/home/spark/nosql/hbase-1.2.4
    ELASTICSEARCH_HOME=/home/spark/nosql/elasticsearch-5.0.1
    MONGODB_HOME=/home/spark/nosql/mongodb-linux-x86_64-rhel62-3.2.11
    export JAVA_HOME HADOOP_HOME SPARK_HOME ZOOKEEPER_HOME HBASE_HOME ELASTICSEARCH_HOME MONGODB_HOME
    export PATH=$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$SPARK_HOME/bin:$SPARK_HOME/sbin:$ZOOKEEPER_HOME/bin:$HBASE_HOME/bin:$ELASTICSEARCH_HOME/bin:$MONGODB_HOME/bin:$PATH
source .bashrc
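After sourcing .bashrc, a quick sanity check helps catch extraction or path typos before going any further. The helper below is a sketch (it is not part of any of the installed stacks): pass it NAME=path pairs and it reports any path that is not an existing directory.

```shell
# Report any NAME=path pair whose path is not an existing directory.
# Returns non-zero if at least one directory is missing.
check_homes() {
    missing=0
    for pair in "$@"; do
        dir=${pair#*=}
        [ -d "$dir" ] || { echo "missing: $pair"; missing=1; }
    done
    return $missing
}
```

For example: `check_homes "HADOOP_HOME=$HADOOP_HOME" "SPARK_HOME=$SPARK_HOME" "ZOOKEEPER_HOME=$ZOOKEEPER_HOME"`.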

Hadoop configuration (copy to the other nodes once configured)

  • vi /home/spark/nosql/hadoop-2.6.2/etc/hadoop/slaves
    hbase
    mongodb
  • vi /home/spark/nosql/hadoop-2.6.2/etc/hadoop/core-site.xml
    <configuration>
    <property>
           <name>fs.defaultFS</name>
           <value>hdfs://mycluster</value>
       <description>mycluster is the logical name of the HA cluster; it must match the dfs.nameservices setting in hdfs-site.xml</description>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/home/spark/nosql/data</value>
        <description>Default parent directory where the NameNode, DataNode, JournalNode, etc. store their data. Each kind of data can also be given its own directory. This directory tree must be created in advance.</description>
    </property>
    <property>
          <name>ha.zookeeper.quorum</name>
          <value>elastic:2181,hbase:2181,mongodb:2181</value>
          <description>Address and port of each node in the ZooKeeper ensemble. Note: the number of nodes must be odd and must match the zoo.cfg configuration.</description>
    </property>
    <property>
           <name>io.file.buffer.size</name>
           <value>131072</value>
           <description>Size of read/write buffer used in SequenceFiles.</description>
    </property>
    </configuration>
  • vi /home/spark/nosql/hadoop-2.6.2/etc/hadoop/hdfs-site.xml
<configuration>
<property>
    <name>dfs.replication</name>
    <value>2</value>
    <description>Number of block replicas</description>
</property>
<property>
    <name>dfs.namenode.name.dir</name>
    <value>/home/spark/nosql/dfs/name</value>
    <description>NameNode metadata storage directory</description>
</property>
<property>
    <name>dfs.datanode.data.dir</name>
    <value>/home/spark/nosql/dfs/data</value>
    <description>DataNode data storage directory</description>
</property>

<property>
    <name>dfs.nameservices</name>
    <value>mycluster</value>
    <description>Logical name of the HA nameservice; fs.defaultFS in core-site.xml must reference it</description>
 </property>

<property>
    <name>dfs.namenode.rpc-address.mycluster.nn1</name>
    <value>elastic:9000</value>
</property>
<property>
    <name>dfs.namenode.rpc-address.mycluster.nn2</name>
    <value>hbase:9000</value>
</property>

<property>
    <name>dfs.namenode.http-address.mycluster.nn1</name>
    <value>elastic:50070</value>
</property>
<property>
    <name>dfs.namenode.http-address.mycluster.nn2</name>
    <value>hbase:50070</value>
</property>

<property>
    <name>dfs.namenode.servicerpc-address.mycluster.nn1</name>
    <value>elastic:53310</value>
</property>

<property>
    <name>dfs.namenode.servicerpc-address.mycluster.nn2</name>
    <value>hbase:53310</value>
</property>
<property>
    <name>dfs.ha.automatic-failover.enabled.mycluster</name>  
    <value>true</value>
    <description>Whether to fail over automatically on failure</description>
</property>

<property>
    <name>dfs.namenode.shared.edits.dir</name>
    <value>qjournal://elastic:8485;hbase:8485;mongodb:8485/hadoop-journal</value>
    <description>JournalNode configuration, in three parts:
        1. the qjournal prefix indicates the protocol;
        2. then host:port for each of the three machines running a JournalNode, separated by semicolons;
        3. the trailing hadoop-journal is the journal namespace and can be any name.
    </description>
</property>

<property>
    <name>dfs.journalnode.edits.dir</name>
    <value>/home/spark/nosql/dfs/HAjournal</value>
    <description>Local directory where the JournalNode stores its data</description>
</property>

<property>
    <name>dfs.client.failover.proxy.provider.mycluster</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
    <description>Class that performs failover for mycluster when a failure occurs</description>
</property>
<property>
    <name>dfs.ha.fencing.methods</name>
    <value>sshfence</value>
    <description>Perform fencing during failover via SSH</description>
</property>

<property>
    <name>dfs.ha.fencing.ssh.private-key-files</name>
    <value>/home/spark/.ssh/id_rsa</value>
    <description>Location of the private key used for SSH communication when fencing via ssh</description>
</property>

<property>
    <name>dfs.ha.fencing.ssh.connect-timeout</name>
    <value>1000</value>
    <description>SSH connect timeout for fencing, in milliseconds</description>
</property>
<property>
    <name>dfs.namenode.handler.count</name>
    <value>10</value>
</property>
</configuration>
  • vi /home/spark/nosql/hadoop-2.6.2/etc/hadoop/yarn-site.xml
<configuration>

<!-- Site specific YARN configuration properties -->
<property>
    <name>yarn.resourcemanager.ha.enabled</name>
    <value>true</value>
</property>

<property>
    <name>yarn.resourcemanager.cluster-id</name>
    <value>clusterrm</value>
</property>

<property>
    <name>yarn.resourcemanager.ha.rm-ids</name>
    <value>rm1,rm2</value>
</property>

<property>
    <name>yarn.resourcemanager.hostname.rm1</name>
    <value>elastic</value>
</property>

<property>
    <name>yarn.resourcemanager.hostname.rm2</name>
    <value>hbase</value>
</property>

<property>
    <name>yarn.resourcemanager.recovery.enabled</name>
    <value>true</value>
</property>

<property>
    <name>yarn.resourcemanager.store.class</name>
    <value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>
</property>

<property>
    <name>yarn.resourcemanager.zk-address</name>
    <value>elastic:2181,hbase:2181,mongodb:2181</value>
</property>

<property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
</property>
<property>
    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>

<!-- set the proxy server -->

<!-- set history server -->
<property>
    <name>yarn.log-aggregation-enable</name>
    <value>true</value>
</property>

<!-- set the timeline server -->
<property>
    <description>The hostname of the Timeline service web application.</description>
    <name>yarn.timeline-service.hostname</name>
    <value>elastic</value>
</property>

<property>
    <description>Address for the Timeline server to start the RPC server.</description>
    <name>yarn.timeline-service.address</name>
    <value>elastic:10200</value>
</property>

<property>
    <description>The http address of the Timeline service web application.</description>
    <name>yarn.timeline-service.webapp.address</name>
    <value>elastic:8188</value>
</property>

<property>
    <description>The https address of the Timeline service web application.</description>
    <name>yarn.timeline-service.webapp.https.address</name>
    <value>elastic:8190</value>
</property>

<property>
    <description>Handler thread count to serve the client RPC requests.</description>
    <name>yarn.timeline-service.handler-thread-count</name>
    <value>10</value>
</property>
<property>
    <name>yarn.timeline-service.http-cross-origin.enabled</name>
    <value>false</value>
</property>

<property>
    <description>Comma separated list of origins that are allowed for web services needing cross-origin (CORS) support. Wildcards (*) and patterns allowed</description>
    <name>yarn.timeline-service.http-cross-origin.allowed-origins</name>
    <value>*</value>
</property>

<property>
    <description>Comma separated list of methods that are allowed for web services needing cross-origin (CORS) support.</description>
    <name>yarn.timeline-service.http-cross-origin.allowed-methods</name>
    <value>GET,POST,HEAD</value>
</property>

<property>
    <description>Comma separated list of headers that are allowed for web services needing cross-origin (CORS) support.</description>
    <name>yarn.timeline-service.http-cross-origin.allowed-headers</name>
    <value>X-Requested-With,Content-Type,Accept,Origin</value>
</property>

<property>
    <description>The number of seconds a pre-flighted request can be cached for web services needing cross-origin (CORS) support.</description>
    <name>yarn.timeline-service.http-cross-origin.max-age</name>
    <value>1800</value>
</property>

<property>
    <description>Indicate to clients whether Timeline service is enabled or not.
            If enabled, the TimelineClient library used by end-users will post entities and events to the Timeline server.</description>
    <name>yarn.timeline-service.enabled</name>
    <value>true</value>
</property>

<property>
    <description>Store class name for timeline store.</description>
    <name>yarn.timeline-service.store-class</name>
    <value>org.apache.hadoop.yarn.server.timeline.LeveldbTimelineStore</value>
</property>

<property>
    <description>Enable age off of timeline store data.</description>
    <name>yarn.timeline-service.ttl-enable</name>
    <value>true</value>
</property>
<property>
    <description>Time to live for timeline store data in milliseconds.</description>
    <name>yarn.timeline-service.ttl-ms</name>
    <value>604800000</value>
</property>
  • vi /home/spark/nosql/hadoop-2.6.2/etc/hadoop/mapred-site.xml
<configuration>
<property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
</property>

<!-- set the history -->

<property>
    <name>mapreduce.jobhistory.address</name>
    <value>elastic:10020</value>
</property>

<property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>elastic:19888</value>
</property>

<property>
    <name>mapreduce.jobhistory.intermediate-done-dir</name>
    <value>/home/spark/nosql/dfs/mr_history/HAmap</value>
    <description>Directory where history files are written by MapReduce jobs.</description>
</property>

<property>
    <name>mapreduce.jobhistory.done-dir</name>
    <value>/home/spark/nosql/dfs/mr_history/HAdone</value>
    <description>Directory where history files are managed by the MR JobHistory Server.</description>
</property>
</configuration>
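Before copying the tree to the other nodes, it is worth checking the one invariant that spans two files: fs.defaultFS in core-site.xml must name the dfs.nameservices value from hdfs-site.xml. The extractor below is a sketch that relies on each &lt;value&gt; line directly following its &lt;name&gt; line, as in the files above; it is not a real XML parser.

```shell
# Print the <value> of a named property from a Hadoop *-site.xml file
# ($1 = file, $2 = property name), assuming the <value> line
# immediately follows the <name> line.
xml_prop() {
    grep -A1 "<name>$2</name>" "$1" | sed -n 's:.*<value>\(.*\)</value>.*:\1:p'
}
```

Run from /home/spark/nosql/hadoop-2.6.2/etc/hadoop, for example:
`[ "$(xml_prop core-site.xml fs.defaultFS)" = "hdfs://$(xml_prop hdfs-site.xml dfs.nameservices)" ] && echo consistent`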
scp -r nosql/hadoop-2.6.2 spark@mongodb:/home/spark/nosql/
scp -r nosql/hadoop-2.6.2 spark@hbase:/home/spark/nosql/
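The configuration above references several local directories (hadoop.tmp.dir, dfs.namenode.name.dir, dfs.datanode.data.dir, dfs.journalnode.edits.dir) that, as core-site.xml notes, must be created in advance. A sketch that creates them in one pass; run it on every node after copying the configuration:

```shell
# Create the local directories referenced by core-site.xml and
# hdfs-site.xml under a base path ($1), e.g. /home/spark/nosql.
make_hadoop_dirs() {
    for d in data dfs/name dfs/data dfs/HAjournal; do
        mkdir -p "$1/$d" || return 1
    done
}
```

For example: `make_hadoop_dirs /home/spark/nosql` on each node.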

ZooKeeper configuration

cd /home/spark/nosql/zookeeper-3.4.6/conf
cp zoo_sample.cfg zoo.cfg
vi zoo.cfg
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/home/spark/nosql/zookeeper-3.4.6/data
dataLogDir=/home/spark/nosql/zookeeper-3.4.6/logs
clientPort=2181
server.1=elastic:2888:3888
server.2=hbase:2888:3888
server.3=mongodb:2888:3888
cd /home/spark/nosql/zookeeper-3.4.6
mkdir data
scp -r nosql/zookeeper-3.4.6 spark@mongodb:/home/spark/nosql/
scp -r nosql/zookeeper-3.4.6 spark@hbase:/home/spark/nosql/
On the elastic node:
    cd /home/spark/nosql/zookeeper-3.4.6
    echo 1 > data/myid
On the hbase node:
    cd /home/spark/nosql/zookeeper-3.4.6
    echo 2 > data/myid
On the mongodb node:
    cd /home/spark/nosql/zookeeper-3.4.6
    echo 3 > data/myid
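The three per-node echo steps can be folded into one hostname-driven helper — a sketch; the mapping must stay in sync with the server.N lines in zoo.cfg:

```shell
# Map a hostname to its ZooKeeper myid; fails on an unknown host.
zk_myid() {
    case "$1" in
        elastic) echo 1 ;;
        hbase)   echo 2 ;;
        mongodb) echo 3 ;;
        *) return 1 ;;
    esac
}
```

On each node: `zk_myid "$(hostname)" > /home/spark/nosql/zookeeper-3.4.6/data/myid`.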

Spark configuration

cd ~/nosql/spark-2.0.2-bin-hadoop2.6/conf
cp spark-env.sh.template spark-env.sh
vi spark-env.sh
    export JAVA_HOME=/usr/java/default
    export HADOOP_CONF_DIR=/home/spark/nosql/hadoop-2.6.2/etc/hadoop
    export SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=ZOOKEEPER -Dspark.deploy.zookeeper.url=elastic:2181,hbase:2181,mongodb:2181 -Dspark.deploy.zookeeper.dir=/home/spark/nosql/spark-2.0.2-bin-hadoop2.6/meta"
cp slaves.template slaves
vi slaves
    hbase
    mongodb

scp -r nosql/spark-2.0.2-bin-hadoop2.6 spark@mongodb:/home/spark/nosql/
scp -r nosql/spark-2.0.2-bin-hadoop2.6 spark@hbase:/home/spark/nosql/

Start ZooKeeper, Hadoop, and Spark

cd /home/spark/nosql/zookeeper-3.4.6 (every node)
zkServer.sh start
zkServer.sh status
Format the ZK failover state (any one node):    hdfs zkfc -formatZK
Start zkfc (active/standby nodes elastic/hbase):    hadoop-daemon.sh start zkfc
Start the JournalNodes (every node):    hadoop-daemon.sh start journalnode
Format HDFS (any one node; do not repeat):  hdfs namenode -format
Active node (elastic):   hadoop-daemon.sh start namenode
Standby node (hbase):
    hdfs namenode -bootstrapStandby
    hadoop-daemon.sh start namenode
Check the NameNode states:
    hdfs haadmin -getServiceState nn1
    hdfs haadmin -getServiceState nn2
Start the DataNodes: hadoop-daemons.sh start datanode
Start the ResourceManagers (active and standby):  yarn-daemon.sh start resourcemanager
Start the NodeManagers: yarn-daemons.sh start nodemanager
Check the YARN ResourceManager states:
    yarn rmadmin -getServiceState rm1
    yarn rmadmin -getServiceState rm2
Start the MR JobHistory server: mr-jobhistory-daemon.sh start historyserver
Start the timeline server: yarn-daemon.sh start timelineserver
Start the Spark masters (active and standby): sbin/start-master.sh

Final result (jps output per node)

elastic:
    11910 Jps
    11385 JobHistoryServer
    11715 Master
    10518 NameNode
    11521 ApplicationHistoryServer
    10281 JournalNode
    10098 QuorumPeerMain
    10945 ResourceManager
    10216 DFSZKFailoverController
hbase:
    5813 NodeManager
    5250 NameNode
    5606 ResourceManager
    5486 DataNode
    5071 DFSZKFailoverController
    4984 QuorumPeerMain
    6153 Worker
    5136 JournalNode
    5987 Master
    6252 Jps
mongodb:
    3748 JournalNode
    4179 Jps
    4092 Worker
    3701 QuorumPeerMain
    3836 DataNode
    3958 NodeManager