Cluster Configuration
Assume the cluster nodes are the following (cat /etc/hosts):
172.16.100.101 master
172.16.100.102 node1
172.16.100.103 node2
172.16.100.104 node3
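The four entries above can also be generated (for appending to /etc/hosts on every machine) with a small loop; a sketch assuming the sequential 172.16.100.101-104 addressing shown:

```shell
# Emit the host entries from the base address (assumption: sequential
# addressing starting at .101, as in the table above).
base="172.16.100"
i=101
for h in master node1 node2 node3; do
  echo "$base.$i $h"
  i=$((i + 1))
done
```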
Installing Kerberos
Configuring the master node
1. On the master node, run:
yum install krb5-server krb5-workstation -y
2. Edit /etc/krb5.conf as follows:
...
[libdefaults]
 default_realm = HADOOP.COM
 dns_lookup_realm = false
 dns_lookup_kdc = false
...
[realms]
 HADOOP.COM = {
  kdc = master
  admin_server = master
 }
[domain_realm]
 .hadoop.com = HADOOP.COM
 hadoop.com = HADOOP.COM
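The kinit -R renewal check later in this guide only succeeds if tickets are issued as renewable. If renewals fail, [libdefaults] can also carry lifetime settings; a hedged fragment (the 24h/7d values are illustrative, not from the original):

```
[libdefaults]
 ticket_lifetime = 24h
 renew_lifetime = 7d
```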
3. Edit /var/kerberos/krb5kdc/kdc.conf as follows:
...
[realms]
 HADOOP.COM = {
  max_renewable_life = 7d
  ...
 }
On CentOS 5.6 and later, AES-256 is the default encryption type, so you need to install the Java JCE unlimited-strength policy files yourself.
4. Initialize the KDC database on the master node:
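If installing the JCE policy files is not desirable, an alternative is to keep AES-256 out of the realm entirely by restricting supported_enctypes in kdc.conf before creating the database. A sketch (this setting and the enctype list are an assumption, not part of the original setup):

```
[realms]
 HADOOP.COM = {
  max_renewable_life = 7d
  supported_enctypes = aes128-cts:normal des3-hmac-sha1:normal arcfour-hmac:normal
 }
```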
kdb5_util create -s -r HADOOP.COM
To re-initialize the database, delete the principal files first, then run the create command again:
rm -rf /var/kerberos/krb5kdc/principal*
kdb5_util create -s -r HADOOP.COM
5. Create the admin principal:
kadmin.local -q "addprinc admin/admin"
6. Grant the admin principal full privileges by editing /var/kerberos/krb5kdc/kadm5.acl as follows:
*/admin@HADOOP.COM *
7. Start the services and enable them at boot:
service krb5kdc start && service kadmin start
chkconfig krb5kdc on && chkconfig kadmin on
Configuring the nodes
1. On each node, run:
yum install krb5-workstation -y
2. Copy /etc/krb5.conf from the master node to every node.
3. Run:
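A dry-run sketch of that copy (assumes root ssh access to the nodes; remove the leading echo to actually perform the scp):

```shell
# Print the copy command for each node; drop 'echo' to run for real.
for n in node1 node2 node3; do
  echo scp /etc/krb5.conf "root@$n:/etc/krb5.conf"
done
```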
kinit admin/admin
kinit -R
If neither command reports an error, both login and ticket renewal work.
CDH Configuration
The CDH defaults are fine; just make sure Kerberos management is handed over to CDH.
The default encryption type is rc4-hmac. If you write your own jar to access HiveServer2, be sure to set the default encryption types in /etc/krb5.conf:
...
[libdefaults]
default_tkt_enctypes = rc4-hmac
default_tgs_enctypes = rc4-hmac
...
Otherwise the HiveServer2 node reports:
Caused by: GSSException: Failure unspecified at GSS-API level (Mechanism level: AES256 CTS mode with HMAC SHA1-96 encryption type not in permitted_enctypes list)
Log in with the keytab file that CDH manages:
kinit -kt /var/run/cloudera-scm-agent/process/`ls -lrt /var/run/cloudera-scm-agent/process/ | awk '{print $9}' | grep NAMENODE | tail -1`/hdfs.keytab hdfs/node1@HADOOP.COM
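The backtick pipeline in the command above picks the newest NAMENODE process directory: ls -lrt lists oldest first, so tail -1 keeps the most recent match. The selection logic, exercised on sample directory names (hypothetical names in the style the Cloudera agent creates):

```shell
# Simulate the listing with sample process-directory names and apply the
# same grep | tail filter; the last NAMENODE entry is the newest one.
printf '%s\n' 30-hdfs-DATANODE 42-hdfs-NAMENODE 57-hdfs-NAMENODE \
  | grep NAMENODE | tail -1
```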
Or, on the master node, run:
kadmin.local -q 'xst -norandkey -k hdfs.keytab hdfs/node1@HADOOP.COM'
to export the keytab file, and then log in with it.