11. Building a Hadoop Distributed Cluster with Docker

The configuration files used below can be downloaded from https://github.com/yinbodotcc/-Docker-Hadoop-.git

1. Environment

1. Four virtual hosts are used. Their IP addresses and hostnames are as follows (written here the way they appear in the Dockerfile):
RUN echo "172.18.12.4 hadoop-slave3" >> /etc/hosts
RUN echo "172.18.12.3 hadoop-slave2" >> /etc/hosts
RUN echo "172.18.12.2 hadoop-slave1" >> /etc/hosts
RUN echo "172.18.12.1 hadoop-master" >> /etc/hosts
2. No separate ZooKeeper deployment is required.


2. Building the Ubuntu Base Image

2.1 ubuntubase.dockerfile

############################################
# version : yayubuntubase/withssh:v1
# desc    : Ubuntu 14.04 with SSH installed
############################################
# Inherit from the official ubuntu:14.04 image
FROM ubuntu:14.04 

# Maintainer information
MAINTAINER yayubuntubase


RUN echo "root:root" | chpasswd


# Install ssh-server and a few basic tools
RUN rm -rvf /var/lib/apt/lists/*
RUN apt-get update 
RUN apt-get install -y openssh-server openssh-client vim wget curl sudo



# Configure ssh
RUN mkdir /var/run/sshd
RUN sed -i 's/PermitRootLogin prohibit-password/PermitRootLogin yes/' /etc/ssh/sshd_config
RUN sed 's@session\s*required\s*pam_loginuid.so@session optional pam_loginuid.so@g' -i /etc/pam.d/sshd
ENV NOTVISIBLE "in users profile"
RUN echo "export VISIBLE=now" >> /etc/profile

EXPOSE 22

2.2 Building the image

sudo docker image build --file ubuntubase.dockerfile --tag ubuntuforhadoop/withssh:v1 .
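
Before building the Hadoop layer on top of this, it can be worth a quick smoke test of the SSH base image. The following is only a minimal sketch (the container name ssh-test is arbitrary; the image has no CMD, so sshd is passed explicitly):

# Start a throwaway container from the freshly built image, running sshd in the foreground
docker run -d --name ssh-test ubuntuforhadoop/withssh:v1 /usr/sbin/sshd -D
# sshd should show up in the container's process list
docker exec ssh-test ps -ef | grep sshd
# Clean up the test container
docker rm -f ssh-test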

3. Building the Hadoop Image on Top of the Ubuntu Base

3.1 hadoop.dockerfile

############################################ 
# desc : install Java and Hadoop
############################################
 
FROM ubuntuforhadoop/withssh:v1
 
 
MAINTAINER yay 
  
# Provide DNS service for the Hadoop cluster
USER root
RUN sudo apt-get -y install dnsmasq

# Install and configure Java
ADD jdk-8u191-linux-x64.tar.gz /usr/local/
ENV JAVA_HOME /usr/local/jdk1.8.0_191
ENV CLASSPATH ${JAVA_HOME}/lib/dt.jar:$JAVA_HOME/lib/tools.jar
ENV PATH $PATH:${JAVA_HOME}/bin


# Install Hadoop
ADD hadoop-3.2.1.tar.gz /usr/local/
RUN cd /usr/local && ln -s ./hadoop-3.2.1 hadoop

ENV HADOOP_PREFIX /usr/local/hadoop
ENV HADOOP_HOME /usr/local/hadoop
ENV HADOOP_COMMON_HOME /usr/local/hadoop
ENV HADOOP_HDFS_HOME /usr/local/hadoop
ENV HADOOP_MAPRED_HOME /usr/local/hadoop
ENV HADOOP_YARN_HOME /usr/local/hadoop
ENV HADOOP_CONF_DIR /usr/local/hadoop/etc/hadoop
ENV PATH ${HADOOP_HOME}/bin:${HADOOP_HOME}/sbin:$PATH

RUN mkdir -p /home/hadooptmp


RUN echo "hadoop ALL= NOPASSWD: ALL" >> /etc/sudoers



# Create a passwordless key pair: -t is the key type, -P the passphrase ('' means no passphrase), -f the file the key is written to. This lets the nodes ssh to each other without a password.
RUN ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
RUN cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
CMD ["/usr/sbin/sshd","-D"] 



ADD hadoop-conf/.       /usr/local/hadoop/etc/hadoop/
#ADD start/.             /usr/local/hadoop/sbin/

# Insert the required user variables at the top of the start/stop scripts
RUN sed -i "1i HADOOP_SECURE_DN_USER=hdfs"  /usr/local/hadoop/sbin/start-dfs.sh
RUN sed -i "1i HDFS_NAMENODE_USER=root"  /usr/local/hadoop/sbin/start-dfs.sh
RUN sed -i "1i HDFS_SECONDARYNAMENODE_USER=root"  /usr/local/hadoop/sbin/start-dfs.sh
RUN sed -i "1i HDFS_DATANODE_USER=root"  /usr/local/hadoop/sbin/start-dfs.sh


RUN sed -i "1i HADOOP_SECURE_DN_USER=hdfs"  /usr/local/hadoop/sbin/stop-dfs.sh
RUN sed -i "1i HDFS_NAMENODE_USER=root"  /usr/local/hadoop/sbin/stop-dfs.sh
RUN sed -i "1i HDFS_SECONDARYNAMENODE_USER=root"  /usr/local/hadoop/sbin/stop-dfs.sh
RUN sed -i "1i HDFS_DATANODE_USER=root"  /usr/local/hadoop/sbin/stop-dfs.sh

RUN sed -i "1i HADOOP_SECURE_DN_USER=yarn"  /usr/local/hadoop/sbin/start-yarn.sh
RUN sed -i "1i YARN_NODEMANAGER_USER=root"  /usr/local/hadoop/sbin/start-yarn.sh
RUN sed -i "1i YARN_RESOURCEMANAGER_USER=root"  /usr/local/hadoop/sbin/start-yarn.sh

RUN sed -i "1i HADOOP_SECURE_DN_USER=yarn"  /usr/local/hadoop/sbin/stop-yarn.sh
RUN sed -i "1i YARN_NODEMANAGER_USER=root"  /usr/local/hadoop/sbin/stop-yarn.sh
RUN sed -i "1i YARN_RESOURCEMANAGER_USER=root"  /usr/local/hadoop/sbin/stop-yarn.sh

3.2 Configuration files

hadoop-conf/core-site.xml

<configuration>
    <property>
        <name>fs.default.name</name>
        <value>hdfs://hadoop-master:9000</value>
    </property>
    <property>
        <name>io.file.buffer.size</name>
        <value>131072</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/home/hadooptmp</value>
        <description>A base for other temporary directories.</description>
    </property>
</configuration>

Note: if you are configuring a non-container environment, you may need the following instead:
<property>
<name>hadoop.tmp.dir</name>
<value>/home/${user.name}/hadoop_tmp</value>
<description>A base for other temporary directories.</description>
</property>
That is, use ${user.name} in the path; otherwise you may hit the error: java.io.IOException: Cannot create directory /home/hadoop/dfs/name/current

hadoop-conf/hadoop-env.sh

JAVA_HOME=/usr/local/jdk1.8.0_191

hadoop-conf/hdfs-site.xml

<configuration>
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>hadoop-master:9001</value>
        <description>Web UI address for checking HDFS status</description>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>/home/hadoop/dfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>/home/hadoop/dfs/data</value>
    </property>
    <property>
        <name>dfs.replication</name>
        <value>3</value>
        <description>Each block is replicated 3 times</description>
    </property>
    <property>
        <name>dfs.webhdfs.enabled</name>
        <value>true</value>
    </property>
</configuration>

Note: if you are configuring a non-container environment, you may need the following instead:
<property>
<name>dfs.namenode.name.dir</name>
<value>/home/${user.name}/dfs/name</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>/home/${user.name}/dfs/data</value>
</property>
That is, use ${user.name} in the paths; otherwise you may hit the error: java.io.IOException: Cannot create directory /home/hadoop/dfs/name/current
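
Incidentally, once the image is built you can verify the value Hadoop actually resolves for any of these keys from inside a container with hdfs getconf; a small sanity check:

hdfs getconf -confKey dfs.replication        # should print 3
hdfs getconf -confKey dfs.namenode.name.dir  # should print /home/hadoop/dfs/name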

hadoop-conf/mapred-site.xml

<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>hadoop-master:10020</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>hadoop-master:19888</value>
    </property>
</configuration>

hadoop-conf/yarn-env.sh

JAVA_HOME=/usr/local/jdk1.8.0_191

hadoop-conf/yarn-site.xml

<configuration>
    <!-- Site specific YARN configuration properties -->
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>
    <property>
        <name>yarn.resourcemanager.address</name>
        <value>hadoop-master:8032</value>
    </property>
    <property>
        <name>yarn.resourcemanager.scheduler.address</name>
        <value>hadoop-master:8030</value>
    </property>
    <property>
        <name>yarn.resourcemanager.resource-tracker.address</name>
        <value>hadoop-master:8031</value>
    </property>
    <property>
        <name>yarn.resourcemanager.admin.address</name>
        <value>hadoop-master:8033</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.address</name>
        <value>hadoop-master:8088</value>
    </property>
    <property>
        <name>yarn.nodemanager.resource.memory-mb</name>
        <value>1024</value>
    </property>
    <property>
        <name>yarn.nodemanager.resource.cpu-vcores</name>
        <value>1</value>
    </property>
</configuration>

hadoop-conf/workers

hadoop-master
hadoop-slave1
hadoop-slave2
hadoop-slave3

Note: before Hadoop 3 this file was called slaves; it was renamed to workers in Hadoop 3.
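
Once the cluster is running (section 4.4), an easy way to confirm that all four nodes listed in workers have actually registered as DataNodes is the following, run inside the master container:

hdfs dfsadmin -report | grep "Live datanodes"   # should report 4 live datanodes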

start/start-dfs.sh


#!/usr/bin/env bash

# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

HDFS_DATANODE_USER=root
HADOOP_SECURE_DN_USER=hdfs
HDFS_NAMENODE_USER=root
HDFS_SECONDARYNAMENODE_USER=root



# Start hadoop dfs daemons.
# Optionally upgrade or rollback dfs state.
# Run this on master node.

## startup matrix:
#
# if $EUID != 0, then exec
# if $EUID =0 then
#    if hdfs_subcmd_user is defined, su to that user, exec
#    if hdfs_subcmd_user is not defined, error
#
# For secure daemons, this means both the secure and insecure env vars need to be
# defined.  e.g., HDFS_DATANODE_USER=root HDFS_DATANODE_SECURE_USER=hdfs
#

## @description  usage info
## @audience     private
## @stability    evolving
## @replaceable  no
function hadoop_usage
{
  echo "Usage: start-dfs.sh [-upgrade|-rollback] [-clusterId]"
}

this="${BASH_SOURCE-$0}"
bin=$(cd -P -- "$(dirname -- "${this}")" >/dev/null && pwd -P)

# let's locate libexec...
if [[ -n "${HADOOP_HOME}" ]]; then
  HADOOP_DEFAULT_LIBEXEC_DIR="${HADOOP_HOME}/libexec"
else
  HADOOP_DEFAULT_LIBEXEC_DIR="${bin}/../libexec"
fi

HADOOP_LIBEXEC_DIR="${HADOOP_LIBEXEC_DIR:-$HADOOP_DEFAULT_LIBEXEC_DIR}"
# shellcheck disable=SC2034
HADOOP_NEW_CONFIG=true
if [[ -f "${HADOOP_LIBEXEC_DIR}/hdfs-config.sh" ]]; then
  . "${HADOOP_LIBEXEC_DIR}/hdfs-config.sh"
else
  echo "ERROR: Cannot execute ${HADOOP_LIBEXEC_DIR}/hdfs-config.sh." 2>&1
  exit 1
fi

# get arguments
if [[ $# -ge 1 ]]; then
  startOpt="$1"
  shift
  case "$startOpt" in
    -upgrade)
      nameStartOpt="$startOpt"
    ;;
    -rollback)
      dataStartOpt="$startOpt"
    ;;
    *)
      hadoop_exit_with_usage 1
    ;;
  esac
fi


#Add other possible options
nameStartOpt="$nameStartOpt $*"

#---------------------------------------------------------
# namenodes

NAMENODES=$("${HADOOP_HDFS_HOME}/bin/hdfs" getconf -namenodes 2>/dev/null)

if [[ -z "${NAMENODES}" ]]; then
  NAMENODES=$(hostname)
fi

echo "Starting namenodes on [${NAMENODES}]"
hadoop_uservar_su hdfs namenode "${HADOOP_HDFS_HOME}/bin/hdfs" \
    --workers \
    --config "${HADOOP_CONF_DIR}" \
    --hostnames "${NAMENODES}" \
    --daemon start \
    namenode ${nameStartOpt}

HADOOP_JUMBO_RETCOUNTER=$?

#---------------------------------------------------------
# datanodes (using default workers file)

echo "Starting datanodes"
hadoop_uservar_su hdfs datanode "${HADOOP_HDFS_HOME}/bin/hdfs" \
    --workers \
    --config "${HADOOP_CONF_DIR}" \
    --daemon start \
    datanode ${dataStartOpt}
(( HADOOP_JUMBO_RETCOUNTER=HADOOP_JUMBO_RETCOUNTER + $? ))

#---------------------------------------------------------
# secondary namenodes (if any)

SECONDARY_NAMENODES=$("${HADOOP_HDFS_HOME}/bin/hdfs" getconf -secondarynamenodes 2>/dev/null)

if [[ -n "${SECONDARY_NAMENODES}" ]]; then

  if [[ "${NAMENODES}" =~ , ]]; then

    hadoop_error "WARNING: Highly available NameNode is configured."
    hadoop_error "WARNING: Skipping SecondaryNameNode."

  else

    if [[ "${SECONDARY_NAMENODES}" == "0.0.0.0" ]]; then
      SECONDARY_NAMENODES=$(hostname)
    fi

    echo "Starting secondary namenodes [${SECONDARY_NAMENODES}]"

    hadoop_uservar_su hdfs secondarynamenode "${HADOOP_HDFS_HOME}/bin/hdfs" \
      --workers \
      --config "${HADOOP_CONF_DIR}" \
      --hostnames "${SECONDARY_NAMENODES}" \
      --daemon start \
      secondarynamenode
    (( HADOOP_JUMBO_RETCOUNTER=HADOOP_JUMBO_RETCOUNTER + $? ))
  fi
fi

#---------------------------------------------------------
# quorumjournal nodes (if any)

JOURNAL_NODES=$("${HADOOP_HDFS_HOME}/bin/hdfs" getconf -journalNodes 2>&-)

if [[ "${#JOURNAL_NODES}" != 0 ]]; then
  echo "Starting journal nodes [${JOURNAL_NODES}]"

  hadoop_uservar_su hdfs journalnode "${HADOOP_HDFS_HOME}/bin/hdfs" \
    --workers \
    --config "${HADOOP_CONF_DIR}" \
    --hostnames "${JOURNAL_NODES}" \
    --daemon start \
    journalnode
   (( HADOOP_JUMBO_RETCOUNTER=HADOOP_JUMBO_RETCOUNTER + $? ))
fi

#---------------------------------------------------------
# ZK Failover controllers, if auto-HA is enabled
AUTOHA_ENABLED=$("${HADOOP_HDFS_HOME}/bin/hdfs" getconf -confKey dfs.ha.automatic-failover.enabled | tr '[:upper:]' '[:lower:]')
if [[ "${AUTOHA_ENABLED}" = "true" ]]; then
  echo "Starting ZK Failover Controllers on NN hosts [${NAMENODES}]"

  hadoop_uservar_su hdfs zkfc "${HADOOP_HDFS_HOME}/bin/hdfs" \
    --workers \
    --config "${HADOOP_CONF_DIR}" \
    --hostnames "${NAMENODES}" \
    --daemon start \
    zkfc
  (( HADOOP_JUMBO_RETCOUNTER=HADOOP_JUMBO_RETCOUNTER + $? ))
fi

exit ${HADOOP_JUMBO_RETCOUNTER}

# eof

start/start-yarn.sh


#!/usr/bin/env bash

# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

YARN_RESOURCEMANAGER_USER=root
HADOOP_SECURE_DN_USER=yarn
YARN_NODEMANAGER_USER=root

## @description  usage info
## @audience     private
## @stability    evolving
## @replaceable  no
function hadoop_usage
{
  hadoop_generate_usage "${MYNAME}" false
}

MYNAME="${BASH_SOURCE-$0}"

bin=$(cd -P -- "$(dirname -- "${MYNAME}")" >/dev/null && pwd -P)

# let's locate libexec...
if [[ -n "${HADOOP_HOME}" ]]; then
  HADOOP_DEFAULT_LIBEXEC_DIR="${HADOOP_HOME}/libexec"
else
  HADOOP_DEFAULT_LIBEXEC_DIR="${bin}/../libexec"
fi

HADOOP_LIBEXEC_DIR="${HADOOP_LIBEXEC_DIR:-$HADOOP_DEFAULT_LIBEXEC_DIR}"
# shellcheck disable=SC2034
HADOOP_NEW_CONFIG=true
if [[ -f "${HADOOP_LIBEXEC_DIR}/yarn-config.sh" ]]; then
  . "${HADOOP_LIBEXEC_DIR}/yarn-config.sh"
else
  echo "ERROR: Cannot execute ${HADOOP_LIBEXEC_DIR}/yarn-config.sh." 2>&1
  exit 1
fi

HADOOP_JUMBO_RETCOUNTER=0

# start resourceManager
HARM=$("${HADOOP_HDFS_HOME}/bin/hdfs" getconf -confKey yarn.resourcemanager.ha.enabled 2>&-)
if [[ ${HARM} = "false" ]]; then
  echo "Starting resourcemanager"
  hadoop_uservar_su yarn resourcemanager "${HADOOP_YARN_HOME}/bin/yarn" \
      --config "${HADOOP_CONF_DIR}" \
      --daemon start \
      resourcemanager
  (( HADOOP_JUMBO_RETCOUNTER=HADOOP_JUMBO_RETCOUNTER + $? ))
else
  logicals=$("${HADOOP_HDFS_HOME}/bin/hdfs" getconf -confKey yarn.resourcemanager.ha.rm-ids 2>&-)
  logicals=${logicals//,/ }
  for id in ${logicals}
  do
      rmhost=$("${HADOOP_HDFS_HOME}/bin/hdfs" getconf -confKey "yarn.resourcemanager.hostname.${id}" 2>&-)
      RMHOSTS="${RMHOSTS} ${rmhost}"
  done
  echo "Starting resourcemanagers on [${RMHOSTS}]"
  hadoop_uservar_su yarn resourcemanager "${HADOOP_YARN_HOME}/bin/yarn" \
      --config "${HADOOP_CONF_DIR}" \
      --daemon start \
      --workers \
      --hostnames "${RMHOSTS}" \
      resourcemanager
  (( HADOOP_JUMBO_RETCOUNTER=HADOOP_JUMBO_RETCOUNTER + $? ))
fi

# start nodemanager
echo "Starting nodemanagers"
hadoop_uservar_su yarn nodemanager "${HADOOP_YARN_HOME}/bin/yarn" \
    --config "${HADOOP_CONF_DIR}" \
    --workers \
    --daemon start \
    nodemanager
(( HADOOP_JUMBO_RETCOUNTER=HADOOP_JUMBO_RETCOUNTER + $? ))


# start proxyserver
PROXYSERVER=$("${HADOOP_HDFS_HOME}/bin/hdfs" getconf -confKey  yarn.web-proxy.address 2>&- | cut -f1 -d:)
if [[ -n ${PROXYSERVER} ]]; then
 hadoop_uservar_su yarn proxyserver "${HADOOP_YARN_HOME}/bin/yarn" \
      --config "${HADOOP_CONF_DIR}" \
      --workers \
      --hostnames "${PROXYSERVER}" \
      --daemon start \
      proxyserver
 (( HADOOP_JUMBO_RETCOUNTER=HADOOP_JUMBO_RETCOUNTER + $? ))
fi

exit ${HADOOP_JUMBO_RETCOUNTER}

start/stop-dfs.sh


#!/usr/bin/env bash

# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.


HDFS_DATANODE_USER=root
HADOOP_SECURE_DN_USER=hdfs
HDFS_NAMENODE_USER=root
HDFS_SECONDARYNAMENODE_USER=root


# Stop hadoop dfs daemons.
# Run this on master node.

## @description  usage info
## @audience     private
## @stability    evolving
## @replaceable  no
function hadoop_usage
{
  echo "Usage: stop-dfs.sh"
}

this="${BASH_SOURCE-$0}"
bin=$(cd -P -- "$(dirname -- "${this}")" >/dev/null && pwd -P)

# let's locate libexec...
if [[ -n "${HADOOP_HOME}" ]]; then
  HADOOP_DEFAULT_LIBEXEC_DIR="${HADOOP_HOME}/libexec"
else
  HADOOP_DEFAULT_LIBEXEC_DIR="${bin}/../libexec"
fi

HADOOP_LIBEXEC_DIR="${HADOOP_LIBEXEC_DIR:-$HADOOP_DEFAULT_LIBEXEC_DIR}"
# shellcheck disable=SC2034
HADOOP_NEW_CONFIG=true
if [[ -f "${HADOOP_LIBEXEC_DIR}/hdfs-config.sh" ]]; then
  . "${HADOOP_LIBEXEC_DIR}/hdfs-config.sh"
else
  echo "ERROR: Cannot execute ${HADOOP_LIBEXEC_DIR}/hdfs-config.sh." 2>&1
  exit 1
fi

#---------------------------------------------------------
# namenodes

NAMENODES=$("${HADOOP_HDFS_HOME}/bin/hdfs" getconf -namenodes 2>/dev/null)

if [[ -z "${NAMENODES}" ]]; then
  NAMENODES=$(hostname)
fi

echo "Stopping namenodes on [${NAMENODES}]"

  hadoop_uservar_su hdfs namenode "${HADOOP_HDFS_HOME}/bin/hdfs" \
    --workers \
    --config "${HADOOP_CONF_DIR}" \
    --hostnames "${NAMENODES}" \
    --daemon stop \
    namenode

#---------------------------------------------------------
# datanodes (using default workers file)

echo "Stopping datanodes"

hadoop_uservar_su hdfs datanode "${HADOOP_HDFS_HOME}/bin/hdfs" \
  --workers \
  --config "${HADOOP_CONF_DIR}" \
  --daemon stop \
  datanode

#---------------------------------------------------------
# secondary namenodes (if any)

SECONDARY_NAMENODES=$("${HADOOP_HDFS_HOME}/bin/hdfs" getconf -secondarynamenodes 2>/dev/null)

if [[ "${SECONDARY_NAMENODES}" == "0.0.0.0" ]]; then
  SECONDARY_NAMENODES=$(hostname)
fi

if [[ -n "${SECONDARY_NAMENODES}" ]]; then
  echo "Stopping secondary namenodes [${SECONDARY_NAMENODES}]"

  hadoop_uservar_su hdfs secondarynamenode "${HADOOP_HDFS_HOME}/bin/hdfs" \
    --workers \
    --config "${HADOOP_CONF_DIR}" \
    --hostnames "${SECONDARY_NAMENODES}" \
    --daemon stop \
    secondarynamenode
fi

#---------------------------------------------------------
# quorumjournal nodes (if any)

JOURNAL_NODES=$("${HADOOP_HDFS_HOME}/bin/hdfs" getconf -journalNodes 2>&-)

if [[ "${#JOURNAL_NODES}" != 0 ]]; then
  echo "Stopping journal nodes [${JOURNAL_NODES}]"

  hadoop_uservar_su hdfs journalnode "${HADOOP_HDFS_HOME}/bin/hdfs" \
    --workers \
    --config "${HADOOP_CONF_DIR}" \
    --hostnames "${JOURNAL_NODES}" \
    --daemon stop \
    journalnode
fi

#---------------------------------------------------------
# ZK Failover controllers, if auto-HA is enabled
AUTOHA_ENABLED=$("${HADOOP_HDFS_HOME}/bin/hdfs" getconf -confKey dfs.ha.automatic-failover.enabled | tr '[:upper:]' '[:lower:]')
if [[ "${AUTOHA_ENABLED}" = "true" ]]; then
  echo "Stopping ZK Failover Controllers on NN hosts [${NAMENODES}]"

  hadoop_uservar_su hdfs zkfc "${HADOOP_HDFS_HOME}/bin/hdfs" \
    --workers \
    --config "${HADOOP_CONF_DIR}" \
    --hostnames "${NAMENODES}" \
    --daemon stop \
    zkfc
fi



# eof

start/stop-yarn.sh


#!/usr/bin/env bash

# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

YARN_RESOURCEMANAGER_USER=root
HADOOP_SECURE_DN_USER=yarn
YARN_NODEMANAGER_USER=root

## @description  usage info
## @audience     private
## @stability    evolving
## @replaceable  no
function hadoop_usage
{
  hadoop_generate_usage "${MYNAME}" false
}

MYNAME="${BASH_SOURCE-$0}"

bin=$(cd -P -- "$(dirname -- "${MYNAME}")" >/dev/null && pwd -P)

# let's locate libexec...
if [[ -n "${HADOOP_HOME}" ]]; then
  HADOOP_DEFAULT_LIBEXEC_DIR="${HADOOP_HOME}/libexec"
else
  HADOOP_DEFAULT_LIBEXEC_DIR="${bin}/../libexec"
fi

HADOOP_LIBEXEC_DIR="${HADOOP_LIBEXEC_DIR:-$HADOOP_DEFAULT_LIBEXEC_DIR}"
# shellcheck disable=SC2034
HADOOP_NEW_CONFIG=true
if [[ -f "${HADOOP_LIBEXEC_DIR}/yarn-config.sh" ]]; then
  . "${HADOOP_LIBEXEC_DIR}/yarn-config.sh"
else
  echo "ERROR: Cannot execute ${HADOOP_LIBEXEC_DIR}/yarn-config.sh." 2>&1
  exit 1
fi

# stop nodemanager
echo "Stopping nodemanagers"
hadoop_uservar_su yarn nodemanager "${HADOOP_YARN_HOME}/bin/yarn" \
    --config "${HADOOP_CONF_DIR}" \
    --workers \
    --daemon stop \
    nodemanager

# stop resourceManager
HARM=$("${HADOOP_HDFS_HOME}/bin/hdfs" getconf -confKey yarn.resourcemanager.ha.enabled 2>&-)
if [[ ${HARM} = "false" ]]; then
  echo "Stopping resourcemanager"
  hadoop_uservar_su yarn resourcemanager "${HADOOP_YARN_HOME}/bin/yarn" \
      --config "${HADOOP_CONF_DIR}" \
      --daemon stop \
      resourcemanager
else
  logicals=$("${HADOOP_HDFS_HOME}/bin/hdfs" getconf -confKey yarn.resourcemanager.ha.rm-ids 2>&-)
  logicals=${logicals//,/ }
  for id in ${logicals}
  do
      rmhost=$("${HADOOP_HDFS_HOME}/bin/hdfs" getconf -confKey "yarn.resourcemanager.hostname.${id}" 2>&-)
      RMHOSTS="${RMHOSTS} ${rmhost}"
  done
  echo "Stopping resourcemanagers on [${RMHOSTS}]"
  hadoop_uservar_su yarn resourcemanager "${HADOOP_YARN_HOME}/bin/yarn" \
      --config "${HADOOP_CONF_DIR}" \
      --daemon stop \
      --workers \
      --hostnames "${RMHOSTS}" \
      resourcemanager
fi

# stop proxyserver
PROXYSERVER=$("${HADOOP_HDFS_HOME}/bin/hdfs" getconf -confKey  yarn.web-proxy.address 2>&- | cut -f1 -d:)
if [[ -n ${PROXYSERVER} ]]; then
  echo "Stopping proxy server [${PROXYSERVER}]"
  hadoop_uservar_su yarn proxyserver "${HADOOP_YARN_HOME}/bin/yarn" \
      --config "${HADOOP_CONF_DIR}" \
      --workers \
      --hostnames "${PROXYSERVER}" \
      --daemon stop \
      proxyserver
fi
YARN_RESOURCEMANAGER_USER=root
HADOOP_SECURE_DN_USER=yarn
YARN_NODEMANAGER_USER=root

3.3 Building the image

sudo docker image build --file hadoop.dockerfile --tag hadoop/nozookeeper:v1 .
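
A quick way to confirm both images are now available locally (names as used above):

docker image ls | grep -E 'ubuntuforhadoop|nozookeeper'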

4. Starting the Containers

4.1 Creating a network

The purpose of creating a dedicated network is to be able to assign each container a fixed IP address when it is started.

yay@yay-ThinkPad-T470:~/software/dockerfileForHBase$ docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
c02cc96fb47c        bridge              bridge              local
827bcf1293c0        host                host                local
2d8cd675265a        none                null                local
yay@yay-ThinkPad-T470:~/software/dockerfileForHBase$ docker network create --driver bridge --subnet=172.18.12.0/16 --gateway=172.18.1.1 mynet
...
yay@yay-ThinkPad-T470:~/software/dockerfileForHBase$ docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
c02cc96fb47c        bridge              bridge              local
827bcf1293c0        host                host                local
8fccbdc49d54        mynet               bridge              local
2d8cd675265a        none                null                local
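
If you want to double-check the subnet and gateway Docker actually recorded for the new network, docker network inspect shows them; for example:

docker network inspect mynet | grep -E '"Subnet"|"Gateway"'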

4.2 Starting hadoop-master, hadoop-slave1, hadoop-slave2, and hadoop-slave3

Note that each container is started with its container name, hostname, network, fixed IP address, and the /etc/hosts entries for the other nodes (via --add-host):

docker run -it --name master -h hadoop-master --network=mynet --ip 172.18.12.1 --add-host hadoop-slave1:172.18.12.2 --add-host hadoop-slave2:172.18.12.3 --add-host hadoop-slave3:172.18.12.4 -d -P -p 2020:22 -p 9870:9870 -p 8088:8088 hadoop/nozookeeper:v1

docker run -it --name slave1 -h hadoop-slave1 --network=mynet --ip 172.18.12.2  --add-host hadoop-master:172.18.12.1 --add-host hadoop-slave2:172.18.12.3 --add-host hadoop-slave3:172.18.12.4 -d -P -p 2021:22 hadoop/nozookeeper:v1

docker run -it --name slave2 -h hadoop-slave2 --network=mynet --ip 172.18.12.3  --add-host hadoop-master:172.18.12.1 --add-host hadoop-slave1:172.18.12.2 --add-host hadoop-slave3:172.18.12.4 -d -P -p 2022:22 hadoop/nozookeeper:v1

docker run -it --name slave3 -h hadoop-slave3 --network=mynet --ip 172.18.12.4  --add-host hadoop-master:172.18.12.1 --add-host hadoop-slave1:172.18.12.2 --add-host hadoop-slave2:172.18.12.3  -d -P -p 2023:22 hadoop/nozookeeper:v1

They can also be chained together with &&:

docker run -it --name master -h hadoop-master --network=mynet --ip 172.18.12.1 --add-host hadoop-slave1:172.18.12.2 --add-host hadoop-slave2:172.18.12.3 --add-host hadoop-slave3:172.18.12.4 -d -P -p 2020:22 -p 9870:9870 -p 8088:8088 hadoop/nozookeeper:v1 && docker run -it --name slave1 -h hadoop-slave1 --network=mynet --ip 172.18.12.2  --add-host hadoop-master:172.18.12.1 --add-host hadoop-slave2:172.18.12.3 --add-host hadoop-slave3:172.18.12.4 -d -P -p 2021:22 hadoop/nozookeeper:v1 && docker run -it --name slave2 -h hadoop-slave2 --network=mynet --ip 172.18.12.3  --add-host hadoop-master:172.18.12.1 --add-host hadoop-slave1:172.18.12.2 --add-host hadoop-slave3:172.18.12.4 -d -P -p 2022:22 hadoop/nozookeeper:v1 && docker run -it --name slave3 -h hadoop-slave3 --network=mynet --ip 172.18.12.4  --add-host hadoop-master:172.18.12.1 --add-host hadoop-slave1:172.18.12.2 --add-host hadoop-slave2:172.18.12.3  -d -P -p 2023:22 hadoop/nozookeeper:v1
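
If the one-liner becomes hard to maintain, the same four containers can be started from a small script. This is only a sketch using the exact names, IPs, and port mappings from the commands above:

#!/usr/bin/env bash
# Start hadoop-master and the three slaves on mynet with fixed IPs,
# mapping port 22 of each container to 2020..2023 on the host.
IMAGE=hadoop/nozookeeper:v1
declare -A IP=( [hadoop-master]=172.18.12.1 [hadoop-slave1]=172.18.12.2
                [hadoop-slave2]=172.18.12.3 [hadoop-slave3]=172.18.12.4 )
port=2020
for host in hadoop-master hadoop-slave1 hadoop-slave2 hadoop-slave3; do
  # Build the --add-host arguments for every node except the one being started
  add_hosts=""
  for other in "${!IP[@]}"; do
    [ "$other" = "$host" ] || add_hosts+=" --add-host $other:${IP[$other]}"
  done
  # Only the master publishes the HDFS (9870) and YARN (8088) web UI ports
  extra=""
  if [ "$host" = "hadoop-master" ]; then extra="-p 9870:9870 -p 8088:8088"; fi
  docker run -itd -P --name "${host#hadoop-}" -h "$host" --network=mynet \
    --ip "${IP[$host]}" $add_hosts -p "$port:22" $extra "$IMAGE"
  port=$((port + 1))
done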

4.3 Run hdfs namenode -format in the master container

yay@yay-ThinkPad-T470:~/software/dockerfileForHadoop$ docker exec -it master /bin/bash
root@hadoop-master:/# ./usr/local/hadoop/bin/hdfs namenode -format
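
If the format succeeds, the NameNode directory configured in hdfs-site.xml should now contain the initial metadata; a quick check inside the master container:

ls /home/hadoop/dfs/name/current   # should list VERSION, seen_txid and an initial fsimage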

4.4 Run start-all.sh in the master container

root@hadoop-master:/# start-all.sh
WARNING: HADOOP_PREFIX has been replaced by HADOOP_HOME. Using value of HADOOP_PREFIX.
WARNING: HADOOP_PREFIX has been replaced by HADOOP_HOME. Using value of HADOOP_PREFIX.
WARNING: HADOOP_SECURE_DN_USER has been replaced by HDFS_DATANODE_SECURE_USER. Using value of HADOOP_SECURE_DN_USER.
Starting namenodes on [hadoop-master]
hadoop-master: Warning: Permanently added 'hadoop-master,172.18.12.1' (ECDSA) to the list of known hosts.
Starting datanodes
hadoop-slave1: Warning: Permanently added 'hadoop-slave1,172.18.12.2' (ECDSA) to the list of known hosts.
hadoop-slave3: Warning: Permanently added 'hadoop-slave3,172.18.12.4' (ECDSA) to the list of known hosts.
hadoop-slave2: Warning: Permanently added 'hadoop-slave2,172.18.12.3' (ECDSA) to the list of known hosts.
hadoop-slave3: WARNING: /usr/local/hadoop-3.2.1/logs does not exist. Creating.
hadoop-slave2: WARNING: /usr/local/hadoop-3.2.1/logs does not exist. Creating.
hadoop-slave1: WARNING: /usr/local/hadoop-3.2.1/logs does not exist. Creating.
Starting secondary namenodes [hadoop-master]
WARNING: HADOOP_PREFIX has been replaced by HADOOP_HOME. Using value of HADOOP_PREFIX.
WARNING: HADOOP_PREFIX has been replaced by HADOOP_HOME. Using value of HADOOP_PREFIX.
Starting resourcemanager
Starting nodemanagers
root@hadoop-master:/# 

Use jps to check the status of the master and slave nodes:

root@hadoop-master:/# jps
1392 Jps
900 ResourceManager
1047 NodeManager
407 DataNode
632 SecondaryNameNode
270 NameNode
root@hadoop-master:/# exit
exit
yay@yay-ThinkPad-T470:~/software/dockerfileForHadoop$ docker exec -it slave1 /bin/bash
root@hadoop-slave1:/# jps
178 NodeManager
295 Jps
60 DataNode
root@hadoop-slave1:/# exit
exit
yay@yay-ThinkPad-T470:~/software/dockerfileForHadoop$ docker exec -it slave2 /bin/bash
root@hadoop-slave2:/# jps
292 Jps
57 DataNode
175 NodeManager
root@hadoop-slave2:/# exit
exit
yay@yay-ThinkPad-T470:~/software/dockerfileForHadoop$ docker exec -it slave3 /bin/bash
root@hadoop-slave3:/# jps
292 Jps
57 DataNode
175 NodeManager
root@hadoop-slave3:/# 

Note: SSH was configured only in the Dockerfile; no further SSH setup was done after the containers were started.
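
To confirm that the key pair baked into the image really does give passwordless SSH between the nodes, you can run the following from inside the master container; each command should print the slave's hostname without asking for a password:

ssh hadoop-slave1 hostname
ssh hadoop-slave2 hostname
ssh hadoop-slave3 hostname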

4.5 Run stop-all.sh in the master container

yay@yay-ThinkPad-T470:~/software/dockerfileForHadoop$ docker exec -it master /bin/bash
root@hadoop-master:/# stop-all.sh
WARNING: HADOOP_PREFIX has been replaced by HADOOP_HOME. Using value of HADOOP_PREFIX.
WARNING: HADOOP_PREFIX has been replaced by HADOOP_HOME. Using value of HADOOP_PREFIX.
WARNING: HADOOP_SECURE_DN_USER has been replaced by HDFS_DATANODE_SECURE_USER. Using value of HADOOP_SECURE_DN_USER.
Stopping namenodes on [hadoop-master]
Stopping datanodes
Stopping secondary namenodes [hadoop-master]
WARNING: HADOOP_PREFIX has been replaced by HADOOP_HOME. Using value of HADOOP_PREFIX.
WARNING: HADOOP_PREFIX has been replaced by HADOOP_HOME. Using value of HADOOP_PREFIX.
Stopping nodemanagers
Stopping resourcemanager
root@hadoop-master:/# jps
2313 Jps
root@hadoop-master:/# exit
exit
yay@yay-ThinkPad-T470:~/software/dockerfileForHadoop$ docker exec -it slave3 /bin/bash
root@hadoop-slave3:/# jps
459 Jps
root@hadoop-slave3:/# 

The MapReduce management web UI (screenshot omitted).

5. Stopping All Containers (general housekeeping, not specific to this topic)

List all container IDs:

yay@yay-ThinkPad-T470:~/software/dockerfileForHadoop$ docker ps -aq
34b959936f6a
32da063264ca
43d836e58198
40d552711425

Stop all containers:

yay@yay-ThinkPad-T470:~/software/dockerfileForHadoop$ docker stop $(docker ps -aq)
34b959936f6a
32da063264ca
43d836e58198
40d552711425
yay@yay-ThinkPad-T470:~/software/dockerfileForHadoop$ 

Remove all containers:

docker rm $(docker ps -aq)
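
Stopping and removing can also be collapsed into a single step, since docker rm -f force-removes running containers:

docker rm -f $(docker ps -aq)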