Saturday, March 25, 2017

Enabling SSL for Hadoop (HDFS, YARN, MR2) on Sandbox HDP 2.5.0


Notes:
Docker needs to be installed beforehand.
If a Docker-based Sandbox container has already been created, the ports used for HTTPS need to be opened up by some other means.

1) On the host running the Sandbox (e.g. Ubuntu), download the script and source it
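If the script is not there yet, it can be downloaded first (the same URL is used in the Ambari section further down):
curl https://raw.githubusercontent.com/hajimeo/samples/master/bash/start_hdp.sh -O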
source ./start_hdp.sh

2) If the Sandbox is not installed yet, install the Docker version of the Sandbox
f_docker_sandbox_install

3) Create a working folder
mkdir ssl_setup; cd ssl_setup

4) Generate a self-signed certificate and the keystores
f_ssl_self_signed_cert "/C=AU/ST=QLD/O=Hortonworks/CN=*.hortonworks.com" "server"

ls -ltr
total 28
-rw-r--r-- 1 root root 1679 Mar 25 08:35 server.key
-rw-r--r-- 1 root root  968 Mar 25 08:35 server.csr
-rw-r--r-- 1 root root 1131 Mar 25 08:35 server.crt
-rw-r--r-- 1 root root 2450 Mar 25 08:35 server.p12
-rw-r--r-- 1 root root 2147 Mar 25 08:35 server.keystore.jks
-rw-r--r-- 1 root root  857 Mar 25 08:35 server.truststore.jks
-rw-r--r-- 1 root root  857 Mar 25 08:35 client.truststore.jks
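As a quick sanity check of the generated files (assuming the keystore password "hadoop" and the truststore password "changeit" that appear in ssl-server.xml below):
openssl x509 -in ./server.crt -noout -subject -dates
keytool -list -keystore ./server.keystore.jks -storepass hadoop
keytool -list -keystore ./client.truststore.jks -storepass changeit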

5) Copy the certificates to the Sandbox
ssh root@sandbox.hortonworks.com -t "mkdir -p ${g_SERVER_KEY_LOCATION%/}"

scp ./*.jks root@sandbox.hortonworks.com:${g_SERVER_KEY_LOCATION%/}/

ssh root@sandbox.hortonworks.com -t "chmod 755 $g_SERVER_KEY_LOCATION
chown root:hadoop ${g_SERVER_KEY_LOCATION%/}/*.jks
chmod 440 ${g_SERVER_KEY_LOCATION%/}/$g_KEYSTORE_FILE
chmod 440 ${g_SERVER_KEY_LOCATION%/}/$g_TRUSTSTORE_FILE
chmod 444 ${g_SERVER_KEY_LOCATION%/}/$g_CLIENT_TRUSTSTORE_FILE"
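To confirm the copy and the permissions on the Sandbox side:
ssh root@sandbox.hortonworks.com -t "ls -l ${g_SERVER_KEY_LOCATION%/}/"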


6) Update the following properties (from Ambari) and restart the affected services. They can also be set from the command line; see the sketch after this list.

core-site.xml
hadoop.ssl.require.client.cert=false
hadoop.ssl.hostname.verifier=DEFAULT
hadoop.ssl.keystores.factory.class=org.apache.hadoop.security.ssl.FileBasedKeyStoresFactory
hadoop.ssl.server.conf=ssl-server.xml
hadoop.ssl.client.conf=ssl-client.xml
hdfs-site.xml
dfs.http.policy=HTTPS_ONLY    # HTTP_AND_HTTPS?
dfs.client.https.need-auth=false # would true be better?
dfs.datanode.https.address=0.0.0.0:50475
dfs.namenode.https-address=sandbox.hortonworks.com:50470    # does 0.0.0.0 not work?
mapred-site.xml
mapreduce.jobhistory.http.policy=HTTPS_ONLY
mapreduce.jobhistory.webapp.https.address=0.0.0.0:19889
yarn-site.xml
yarn.http.policy=HTTPS_ONLY
yarn.log.server.url=https://sandbox.hortonworks.com:19889/jobhistory/logs
yarn.resourcemanager.webapp.https.address=sandbox.hortonworks.com:8090
yarn.nodemanager.webapp.https.address=0.0.0.0:8044
ssl-server.xml
ssl.server.keystore.password=hadoop
ssl.server.keystore.location=/etc/hadoop/conf/secure/server.keystore.jks
ssl.server.keystore.type=jks
ssl.server.keystore.keypassword=hadoop
ssl.server.truststore.location=/etc/hadoop/conf/secure/server.truststore.jks
ssl.server.truststore.password=changeit
ssl.server.truststore.type=jks
ssl-client.xml
#ssl.client.keystore.location=/etc/security/clientKeys/keystore.jks # not needed?
ssl.client.truststore.location=/etc/hadoop/conf/secure/client.truststore.jks
ssl.client.truststore.password=changeit
tez-site.xml
tez.runtime.shuffle.ssl.enable=true
tez.runtime.shuffle.keep-alive.enabled=true
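A minimal sketch of setting one of these from the shell with Ambari's bundled configs.sh instead of the UI (assuming the default admin/admin credentials and a cluster named "Sandbox"; adjust for your environment):
/var/lib/ambari-server/resources/scripts/configs.sh -u admin -p admin set sandbox.hortonworks.com Sandbox hdfs-site "dfs.http.policy" "HTTPS_ONLY"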

Note: To use the HDFS UI via Knox, the NameNode certificate (server.crt) probably needs to be added to Knox's truststore (cacerts).
 Knox's certificate may also need to be exported and added to the NameNode's truststore?
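A sketch of that import, assuming Knox trusts the JDK's default cacerts with the default "changeit" password (the alias is illustrative):
keytool -import -noprompt -alias sandbox-namenode -file ./server.crt -keystore $JAVA_HOME/jre/lib/security/cacerts -storepass changeit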

Thursday, March 23, 2017

Making (already Kerberized) WebHDFS highly available with HAProxy

http://qiita.com/saka1_p/items/3634ba70f9ecd74b0860
https://www.haproxy.com/doc/aloha/7.0/haproxy/healthchecks.html

Node1 is the HAProxy server
Node2 is NameNode1
Node3 is NameNode2

1) Install HAProxy
[root@node1 ~]# yum install -y haproxy
[root@node1 ~]# cp -p /etc/haproxy/haproxy.cfg /etc/haproxy/haproxy.cfg.orig
[root@node1 ~]# vim /etc/haproxy/haproxy.cfg
...
#---------------------------------------------------------------------
# main frontend which proxys to the backends
#---------------------------------------------------------------------
frontend  main *:50070
    default_backend             app

#---------------------------------------------------------------------
# round robin balancing between the various backends
#---------------------------------------------------------------------
backend app
    balance     roundrobin
    option      httpchk GET /webhdfs/v1/?op=CHECKACCESS
    http-check expect rstatus ([23][0-9][0-9]|401)
    server  node2 node2.localdomain:50070 check
    server  node3 node3.localdomain:50070 check
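Then validate the config file and (re)start HAProxy; a CentOS 6 style init is assumed here:
[root@node1 ~]# haproxy -c -f /etc/haproxy/haproxy.cfg
[root@node1 ~]# service haproxy restart
[root@node1 ~]# chkconfig haproxy on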

2) On the KDC server (or via kadmin, e.g. kadmin -p admin/admin), create a SPNEGO keytab for HAProxy
kadmin.local -q "addprinc -randkey HTTP/node1.localdomain@HO-UBU02"

[root@node1 ~]# mv /etc/security/keytabs/spnego.service.keytab /etc/security/keytabs/spnego.service.keytab.old

Note: ktadd appears to increment the kvno
[root@node1 ~]# kadmin -p ambari/admin -q "ktadd -k /etc/security/keytabs/spnego.service.keytab HTTP/node1.localdomain@HO-UBU02"
Authenticating as principal ambari/admin with password.
Password for ambari/admin@HO-UBU02:
Entry for principal HTTP/node1.localdomain@HO-UBU02 with kvno 5, encryption type aes256-cts-hmac-sha1-96 added to keytab WRFILE:/etc/security/keytabs/spnego.service.keytab.
Entry for principal HTTP/node1.localdomain@HO-UBU02 with kvno 5, encryption type aes128-cts-hmac-sha1-96 added to keytab WRFILE:/etc/security/keytabs/spnego.service.keytab.
Entry for principal HTTP/node1.localdomain@HO-UBU02 with kvno 5, encryption type des3-cbc-sha1 added to keytab WRFILE:/etc/security/keytabs/spnego.service.keytab.
Entry for principal HTTP/node1.localdomain@HO-UBU02 with kvno 5, encryption type arcfour-hmac added to keytab WRFILE:/etc/security/keytabs/spnego.service.keytab.

Verify:
[root@node1 ~]# klist -kte /etc/security/keytabs/spnego.service.keytab
Keytab name: FILE:/etc/security/keytabs/spnego.service.keytab
KVNO Timestamp         Principal
---- ----------------- --------------------------------------------------------
   5 03/22/17 08:38:43 HTTP/node1.localdomain@HO-UBU02 (aes256-cts-hmac-sha1-96)
   5 03/22/17 08:38:43 HTTP/node1.localdomain@HO-UBU02 (aes128-cts-hmac-sha1-96)
   5 03/22/17 08:38:43 HTTP/node1.localdomain@HO-UBU02 (des3-cbc-sha1)
   5 03/22/17 08:38:43 HTTP/node1.localdomain@HO-UBU02 (arcfour-hmac)

Copy to each NameNode:
[root@node1 ~]# scp /etc/security/keytabs/spnego.service.keytab node2.localdomain:/tmp/node1.spnego.service.keytab
spnego.service.keytab                                                               100%  306     0.3KB/s   00:00
[root@node1 ~]# scp /etc/security/keytabs/spnego.service.keytab node3.localdomain:/tmp/node1.spnego.service.keytab
spnego.service.keytab                                                               100%  306     0.3KB/s   00:00

3) Merge the keytabs on both NameNodes:
First, check the location of the SPNEGO keytab file
[root@node2 ~]# grep 'dfs.web.authentication.kerberos.keytab' -A1 /etc/hadoop/conf/hdfs-site.xml
      <name>dfs.web.authentication.kerberos.keytab</name>
      <value>/etc/security/keytabs/spnego.service.keytab</value>
[root@node2 ~]# mv /etc/security/keytabs/spnego.service.keytab /etc/security/keytabs/spnego.service.keytab.orig

Merge the two keytabs with ktutil:
[root@node2 ~]# ktutil
ktutil:  rkt /etc/security/keytabs/spnego.service.keytab.orig
ktutil:  rkt /tmp/node1.spnego.service.keytab
ktutil:  wkt /etc/security/keytabs/spnego.service.keytab
ktutil:  quit

Verify:
[root@node2 ~]# klist -kte /etc/security/keytabs/spnego.service.keytab
Keytab name: FILE:/etc/security/keytabs/spnego.service.keytab
KVNO Timestamp         Principal
---- ----------------- --------------------------------------------------------
   2 03/22/17 08:45:02 HTTP/node2.localdomain@HO-UBU02 (aes256-cts-hmac-sha1-96)
   2 03/22/17 08:45:02 HTTP/node2.localdomain@HO-UBU02 (aes128-cts-hmac-sha1-96)
   2 03/22/17 08:45:02 HTTP/node2.localdomain@HO-UBU02 (des3-cbc-sha1)
   2 03/22/17 08:45:02 HTTP/node2.localdomain@HO-UBU02 (arcfour-hmac)
   5 03/22/17 08:45:02 HTTP/node1.localdomain@HO-UBU02 (aes256-cts-hmac-sha1-96)
   5 03/22/17 08:45:02 HTTP/node1.localdomain@HO-UBU02 (aes128-cts-hmac-sha1-96)
   5 03/22/17 08:45:02 HTTP/node1.localdomain@HO-UBU02 (des3-cbc-sha1)
   5 03/22/17 08:45:02 HTTP/node1.localdomain@HO-UBU02 (arcfour-hmac)

Fix the file permissions:
[root@node2 ~]# chown root:hadoop /etc/security/keytabs/spnego.service.keytab
[root@node2 ~]# chmod 440 /etc/security/keytabs/spnego.service.keytab
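Optionally confirm that both principals in the merged keytab can actually authenticate (this also catches the kvno issue mentioned in the next step):
[root@node2 ~]# kinit -kt /etc/security/keytabs/spnego.service.keytab HTTP/node2.localdomain@HO-UBU02 && kdestroy
[root@node2 ~]# kinit -kt /etc/security/keytabs/spnego.service.keytab HTTP/node1.localdomain@HO-UBU02 && kdestroy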

4) Run the above steps on Node3 as well
TODO: do the KVNOs need to be the same? "Specified version of key is not available"

5) From Ambari, change dfs.web.authentication.kerberos.principal to "*"
Then restart HDFS.
As it stands, "*" would also be used for the kinit performed by Ambari Alerts, so change /usr/lib/python2.6/site-packages/ambari_agent/alerts/base_alert.py on every host as below (or update all of the Alert JSON definitions):
    if 'kerberos_principal' in uri_structure:
      kerberos_principal = uri_structure['kerberos_principal']
      if kerberos_principal == "*":
        kerberos_principal = 'HTTP/node1.localdomain@HO-UBU02'

Log in to the Ambari database (psql -Uambari ambari) and run the UPDATE statement below (the SELECT first shows the affected definitions):
select label, alert_source from alert_definition where alert_source like '%{hdfs-site/dfs.web.authentication.kerberos.principal}%';

update alert_definition set alert_source = replace(alert_source, '{hdfs-site/dfs.web.authentication.kerberos.principal}', '{hdfs-site/dfs.namenode.kerberos.internal.spnego.principal}') where alert_source like '%{hdfs-site/dfs.web.authentication.kerberos.principal}%' and component_name in ('NAMENODE', 'JOURNALNODE', 'DATANODE');

Run the following commands on the Ambari Server:
cd /var/lib/ambari-server/resources/common-services/HDFS/2.1.0.2.0/package/alerts
sed -i_$(date +"%Y%m%d%H%M%S").bak 's/dfs.web.authentication.kerberos.principal/dfs.namenode.kerberos.internal.spnego.principal/' *.py
ambari-server restart

From the Ambari UI, the Alerts need to be disabled and then re-enabled.

6) Test
[root@node1 ~]# curl --negotiate -u : -X GET 'http://node3.localdomain:50070/webhdfs/v1/?op=CHECKACCESS'
[root@node1 ~]# curl --negotiate -u : -X GET 'http://node2.localdomain:50070/webhdfs/v1/?op=CHECKACCESS'
{"RemoteException":{"exception":"StandbyException","javaClassName":"org.apache.hadoop.ipc.StandbyException","message":"Operation category READ is not supported in state standby"}}[root@node1 ~]#
[root@node1 ~]# curl -s -I --negotiate -u : 'http://node1.localdomain:50070/webhdfs/v1/?op=CHECKACCESS' | grep ^HTTP
HTTP/1.1 401 Authentication required
HTTP/1.1 200 OK

If you see "HTTP/1.1 403 GSSException: Failure unspecified at GSS-API level (Mechanism level: Checksum failed)", check dfs.web.authentication.kerberos.principal




Tuesday, March 21, 2017

Using TLSv1.2 with Ambari 2.4.2 => TLSv1.2 "only" is not possible (AMBARI-17666)

Note: because of https://issues.apache.org/jira/browse/AMBARI-18910, this most likely cannot be done on versions earlier than Ambari 2.4.2.

1) First, enable SSL for Ambari
http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.5.3/bk_security/content/set_up_ssl_for_ambari.html
http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.5.3/bk_security/content/_set_up_truststore_for_ambari_server.html

Download the setup script
[root@sandbox ~]# curl https://raw.githubusercontent.com/hajimeo/samples/master/bash/start_hdp.sh -O
[root@sandbox ~]# source ./start_hdp.sh

Create a certificate
[root@sandbox ~]# cd /etc/ambari-server/conf
[root@sandbox conf]# f_ssl_self_signed_cert "/C=AU/ST=QLD/O=Hortonworks/CN=sandbox.hortonworks.com" "server"

[root@sandbox conf]# ambari-server setup-security
Using python  /usr/bin/python
Security setup options...
===========================================================================
Choose one of the following options:
  [1] Enable HTTPS for Ambari server.
  [2] Encrypt passwords stored in ambari.properties file.
  [3] Setup Ambari kerberos JAAS configuration.
  [4] Setup truststore.
  [5] Import certificate to truststore.
===========================================================================
Enter choice, (1-5): 1
Do you want to configure HTTPS [y/n] (y)?
SSL port [8443] ? 8080     # NOTE: on the Sandbox, the default 8443 clashes with Knox's port, so be careful
Enter path to Certificate: /etc/ambari-server/conf/server.crt
Enter path to Private Key: /etc/ambari-server/conf/server.key
Please enter password for Private Key:
WARNING: Common Name in Certificate: sandbox.hortonworks.com does not match the server FQDN: ho-ubu03.openstacklocal
WARNING: Unable to validate Certificate hostname
Importing and saving Certificate...done.
Ambari server URL changed. To make use of the Tez View in Ambari please update the property tez.tez-ui.history-url.base in tez-site
Adjusting ambari-server permissions and ownership...

[root@sandbox conf]# ambari-server setup-security
Using python  /usr/bin/python
Security setup options...
===========================================================================
Choose one of the following options:
  [1] Enable HTTPS for Ambari server.
  [2] Encrypt passwords stored in ambari.properties file.
  [3] Setup Ambari kerberos JAAS configuration.
  [4] Setup truststore.
  [5] Import certificate to truststore.
===========================================================================
Enter choice, (1-5): 4
Do you want to configure a truststore [y/n] (y)?
TrustStore type [jks/jceks/pkcs12] (jks):jks
Path to TrustStore file :/etc/ambari-server/conf/ambari-server.jks
Password for TrustStore: (changeit)
Re-enter password: (changeit)
Ambari Server 'setup-security' completed successfully.

[root@sandbox conf]# keytool -import -file ./server.crt -alias ambari-server -noprompt -storepass changeit -keypass hadoop -keystore /etc/ambari-server/conf/ambari-server.jks
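A quick check that the certificate is now in the truststore:
[root@sandbox conf]# keytool -list -keystore /etc/ambari-server/conf/ambari-server.jks -storepass changeit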


[root@sandbox conf]# ambari-server restart
At this point, confirm that Ambari can be accessed over SSL (HTTPS)

2) Edit ambari.properties, then restart again
security.server.disabled.protocols=TLSv1.1|SSLv2Hello|SSLv3
Because of https://issues.apache.org/jira/browse/AMBARI-17666, do not disable TLSv1
TODO: what about security.server.disabled.ciphers?
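After the restart, a handshake with one of the disabled protocols should now fail, while the TLSv1.2 check in step 3 still succeeds; for example:
echo -n | openssl s_client -connect sandbox.hortonworks.com:8080 -tls1_1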

3) Verify:
root@ho-ubu03:~# ./test_ciphers.sh sandbox.hortonworks.com 8080 2>/dev/null
Obtaining cipher list from sandbox.hortonworks.com:8080 with OpenSSL 1.0.1f 6 Jan 2014.
ECDHE-RSA-AES256-GCM-SHA384     YES
ECDHE-RSA-AES256-SHA384 YES
ECDHE-RSA-AES256-SHA    YES
DHE-RSA-AES256-GCM-SHA384       YES
DHE-RSA-AES256-SHA256   YES
DHE-RSA-AES256-SHA      YES
AES256-GCM-SHA384       YES
AES256-SHA256   YES
AES256-SHA      YES
ECDHE-RSA-DES-CBC3-SHA  YES
EDH-RSA-DES-CBC3-SHA    YES
DES-CBC3-SHA    YES
ECDHE-RSA-AES128-GCM-SHA256     YES
ECDHE-RSA-AES128-SHA256 YES
ECDHE-RSA-AES128-SHA    YES
DHE-RSA-AES128-GCM-SHA256       YES
DHE-RSA-AES128-SHA256   YES
DHE-RSA-AES128-SHA      YES
AES128-GCM-SHA256       YES
AES128-SHA256   YES
AES128-SHA      YES
root@ho-ubu03:~# echo -n | openssl s_client -connect sandbox.hortonworks.com:8080 -tls1_2 

Reference 1: values that can be configured
factory.setIncludeProtocols(new String[] {"SSLv2Hello","SSLv3","TLSv1","TLSv1.1","TLSv1.2"});

Reference 2: to display the list of TLSv1.2 ciphers
[root@sandbox ~]# openssl ciphers -v | grep TLSv1.2 | wc -l
28

Reference 3: Ambari's Jetty: /usr/lib/ambari-server/jetty-server-8.1.19.v20160209.jar

Reference 4: https://www.eclipse.org/jetty/documentation/9.4.x/configuring-ssl.html
TLS v1.2: The protocol which should be used wherever possible. All CBC based ciphers are supported since Java 7, the new GCM modes are supported since Java 8.
However, changing java.home in ambari.properties also changes JAVA_HOME for the other services.
Instead, modifying "java_exe = get_java_exe_path()" in /usr/sbin/ambari_server_main.py (around line 280) apparently lets you point at a specific Java executable.
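For example (an untested sketch; the OpenJDK path is only an illustration, use the bin/java of the JDK you actually want), the line could be patched with sed:
[root@sandbox ~]# cp -p /usr/sbin/ambari_server_main.py /usr/sbin/ambari_server_main.py.orig
[root@sandbox ~]# sed -i 's|java_exe = get_java_exe_path()|java_exe = "/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.101-3.b13.el6_8.x86_64/jre/bin/java"|' /usr/sbin/ambari_server_main.py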

Reference 5: display the ciphers the JVM can use
https://confluence.atlassian.com/stashkb/list-ciphers-used-by-jvm-679609085.html
[root@sandbox ~]# wget https://confluence.atlassian.com/stashkb/files/679609085/679772359/1/1414093373406/Ciphers.java
[root@sandbox ~]# grep 'java.home' /etc/ambari-server/conf/ambari.properties
java.home=/usr/jdk64/jdk1.7.0_67
[root@sandbox ~]# /usr/jdk64/jdk1.7.0_67/bin/javac Ciphers.java
[root@sandbox ~]# /usr/jdk64/jdk1.7.0_67/bin/java Ciphers > jdk1.7_ciphers.out



TODO: another approach?
grep jdk.tls.disabledAlgorithms /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.101-3.b13.el6_8.x86_64/jre/lib/security/java.security
#   jdk.tls.disabledAlgorithms=MD5, SSLv3, DSA, RSA keySize < 2048
jdk.tls.disabledAlgorithms=SSLv3, RC4, MD5withRSA, DH keySize < 768
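For example (not verified here), mirroring the protocols disabled via ambari.properties above, that line might become:
jdk.tls.disabledAlgorithms=SSLv3, TLSv1.1, RC4, MD5withRSA, DH keySize < 768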