Tuesday, February 6, 2018

hive --version

[root@sandbox-hdp ~]# export CLASSPATH='/usr/hdp/2.6.3.0-235/hadoop/conf:/usr/hdp/2.6.3.0-235/hadoop/lib/*:/usr/hdp/2.6.3.0-235/hadoop/.//*:/usr/hdp/2.6.3.0-235/hadoop-hdfs/./:/usr/hdp/2.6.3.0-235/hadoop-hdfs/lib/*:/usr/hdp/2.6.3.0-235/hadoop-hdfs/.//*:/usr/hdp/2.6.3.0-235/hadoop-yarn/lib/*:/usr/hdp/2.6.3.0-235/hadoop-yarn/.//*:/usr/hdp/2.6.3.0-235/hadoop-mapreduce/lib/*:/usr/hdp/2.6.3.0-235/hadoop-mapreduce/.//*::jdbc-mysql.jar:mysql-connector-java-5.1.17.jar:mysql-connector-java-5.1.37.jar:mysql-connector-java.jar:/usr/hdp/2.6.3.0-235/tez/*:/usr/hdp/2.6.3.0-235/tez/lib/*:/usr/hdp/2.6.3.0-235/tez/conf'
[root@sandbox-hdp ~]# /usr/lib/jvm/java/bin/java -Xmx250m -Dhdp.version= -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/var/log/hadoop/root -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/usr/hdp/2.6.3.0-235/hadoop -Dhadoop.id.str=root -Dhadoop.root.logger=INFO,console -Djava.library.path=:/lib/native/Linux-amd64-64:/usr/hdp/2.6.3.0-235/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Xmx250m -Dhadoop.security.logger=INFO,NullAppender org.apache.hadoop.util.RunJar /usr/hdp/2.6.3.0-235/hive/lib/hive-exec-1.2.1000.2.6.3.0-235.jar org.apache.hive.common.util.HiveVersionInfo
Hive 1.2.1000.2.6.3.0-235
Subversion git://ctr-e134-1499953498516-254436-01-000004.hwx.site/grid/0/jenkins/workspace/HDP-parallel-centos6/SOURCES/hive -r 5f360bda08bb5489fbb3189b5aeaaf58029ed4b5
Compiled by jenkins on Mon Oct 30 02:48:31 UTC 2017
From source with checksum 94298cc1f5f5bf0f3470f3ea2e92d646

[root@sandbox-hdp ~]# zipgrep '1.2.1000.2.6.3.0-235' /usr/hdp/2.6.3.0-235/hive/lib/hive-exec-1.2.1000.2.6.3.0-235.jar
META-INF/MANIFEST.MF:Specification-Version: 1.2.1000.2.6.3.0-235
META-INF/MANIFEST.MF:Implementation-Version: 1.2.1000.2.6.3.0-235
META-INF/DEPENDENCIES:  - Hive Ant Utilities (http://hive.apache.org/hive-ant) org.apache.hive:hive-ant:jar:1.2.1000.2.6.3.0-235
META-INF/DEPENDENCIES:  - Hive...
org/apache/hive/common/package-info.class:Binary file (standard input) matches
...
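The same Specification/Implementation-Version strings can be read straight out of the jar manifest without starting a JVM; a minimal sketch, assuming unzip is available:

unzip -p /usr/hdp/2.6.3.0-235/hive/lib/hive-exec-1.2.1000.2.6.3.0-235.jar META-INF/MANIFEST.MF | grep -i version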

HiveVersionInfo.java
public class HiveVersionInfo {
  private static final Log LOG = LogFactory.getLog(HiveVersionInfo.class);

  private static Package myPackage;
  private static HiveVersionAnnotation version;

  static {
    myPackage = HiveVersionAnnotation.class.getPackage();
    version = myPackage.getAnnotation(HiveVersionAnnotation.class);
  }

https://docs.oracle.com/javase/7/docs/api/java/lang/Package.html
Package objects contain version information about the implementation and specification of a Java package. This versioning information is retrieved and made available by the ClassLoader instance that loaded the class(es). Typically, it is stored in the manifest that is distributed with the classes.

https://docs.oracle.com/javase/tutorial/deployment/jar/packageman.html
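To dump the HiveVersionAnnotation values recorded in a given jar, one option is javap; a rough sketch, assuming a JDK on the PATH (paths as in this environment):

# extract the annotated package-info.class, then disassemble it;
# -v prints the RuntimeVisibleAnnotations section carrying the version fields
unzip -o /usr/hdp/2.6.3.0-235/hive/lib/hive-exec-1.2.1000.2.6.3.0-235.jar 'org/apache/hive/common/package-info.class' -d /tmp
javap -v -classpath /tmp org.apache.hive.common.package-info | grep -A10 RuntimeVisibleAnnotations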

strace output (excerpt):
[pid 13244] read(372, "\312\376\272\276\0\0\0004\0\31\1\0#org/apache/hive/common/package-info\7\0\1\1\0\20java/lang/Object\7\0\3\1\0\21package-info.java\1\0.Lorg/apache/hive/common/HiveVersionAnnotation;\1\0\7version\1\0\0212.1.0.2.6.3.0-235\1\0\fshortVersion\1\0\0102.1.2000\1\0\10revision\1\0(a193ce8bbba5814dd743592a854aa0bc26e6809f\1\0\6branch\1\0\32(HEAD detached at a193ce8)\1\0\4user\1\0\7jenkins\1\0\4date\1\0\34Mon Oct 30 02:48:10 UTC 2017\1\0\3url\1\0rgit://ctr-e134-1499953498516-254436-01-000013.hwx.site/grid/0/jenkins/workspace/HDP-parallel-centos6/SOURCES/hive2\1\0\vsrcChecksum\1\0 eb20828b2f4543b30d85a59e81f61782\1\0\nSourceFile\1\0\31RuntimeVisibleAnnotations\26\0\0\2\0\4\0\0\0\0\0\0\0\2\0\27\0\0\0\2\0\5\0\30\0\0\0.\0\1\0\6\0\10\0\7s\0\10\0\ts\0\n\0\vs\0\f\0\rs\0\16\0\17s\0\20\0\21s\0\22\0\23s\0\24\0\25s\0\26", 632) = 632
[pid 13244] write(1, "Hive 2.1.0.2.6.3.0-235", 22 <unfinished ...>
Hive 2.1.0.2.6.3.0-235[pid 13259] <... futex resumed> )       = -1 EAGAIN (Resource temporarily unavailable)
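A trace like the one above can be captured with something like this (a sketch; exact flags may vary):

# -f follows child processes; the class bytes show up in read() calls
strace -f -e trace=read,write -o /tmp/hive-version.strace hive --version
grep -a 'package-info' /tmp/hive-version.strace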

hive/common/src/scripts/saveVersion.sh
...
cat << EOF | \
  sed -e "s/VERSION/$version/" -e "s/SHORTVERSION/$shortversion/" \
      -e "s/USER/$user/" -e "s/DATE/$date/" \
      -e "s|URL|$url|" -e "s/REV/$revision/" \
      -e "s|BRANCH|$branch|" -e "s/SRCCHECKSUM/$srcChecksum/" \
      > $src_dir/gen/org/apache/hive/common/package-info.java
/*
 * Generated by saveVersion.sh
 */
@HiveVersionAnnotation(version="VERSION", shortVersion="SHORTVERSION",
                         revision="REV", branch="BRANCH",
                         user="USER", date="DATE", url="URL",
                         srcChecksum="SRCCHECKSUM")
package org.apache.hive.common;
EOF

Friday, December 29, 2017

Sending Ranger Solr Plugin audits to Ambari Infra on a Kerberized HDP cluster (Unofficial)

Prerequisites:

- Solr and the Ranger Solr Plugin are installed
- mPack 2.2.9 and HDP 2.6.2 or later are recommended (RANGER-1446, RANGER-1658)
- HDFS audits are confirmed to be written to Ambari Infra correctly


1) Copy /etc/hadoop/conf/ranger-hdfs-audit.xml from the NameNode host to the following file on the Solr host, as in the sketch below:
/opt/lucidworks-hdpsearch/solr/server/solr-webapp/webapp/WEB-INF/classes/ranger-solr-audit.xml 
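For example (a sketch run on the Solr host; NAMENODE_HOST is a placeholder):

scp NAMENODE_HOST:/etc/hadoop/conf/ranger-hdfs-audit.xml /opt/lucidworks-hdpsearch/solr/server/solr-webapp/webapp/WEB-INF/classes/ranger-solr-audit.xml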

2) Edit ranger-solr-audit.xml as follows, then restart Solr from Ambari (a verification sketch follows the property list)

xasecure.audit.destination.solr.batch.filespool.dir = /var/log/solr/audit/solr/spool 
xasecure.audit.jaas.Client.option.keyTab = /etc/security/keytabs/solr.service.keytab 
xasecure.audit.jaas.Client.option.principal = solr/_HOST@YOUR_PRINCIPAL 
xasecure.audit.solr.solr_url = (empty value) 
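To confirm the audits are arriving, one hedged check against the Ambari Infra Solr; the 8886 port, the keytab/principal names, and the evtTime field are assumptions from a typical HDP layout:

# get a ticket, then query the latest documents in ranger_audits
kinit -kt /etc/security/keytabs/solr.service.keytab solr/$(hostname -f)
curl --negotiate -u : "http://$(hostname -f):8886/solr/ranger_audits/select?q=*:*&sort=evtTime+desc&rows=3&wt=json"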

3) Edit /var/lib/ambari-agent/cache/common-services/RANGER/0.4.0/package/scripts/setup_ranger_xml.py (the added part is the ('solr', 'solr') entry):

service_default_principals_map = [('hdfs', 'nn'), ('hbase', 'hbase'), ('hive', 'hive'), ('kafka', 'kafka'), ('kms', 'rangerkms'), 
('knox', 'knox'), ('nifi', 'nifi'), ('storm', 'storm'), ('yanr', 'yarn'), ('solr', 'solr')]

Because this changes the Ambari Agent cache, the corresponding file on the Ambari Server side needs to be changed as well.
If .pyc or .pyo files exist, delete them, e.g. with the sketch below.
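For example (a sketch; the Ambari Server resources path is an assumption):

# remove stale compiled copies so the edited .py is re-read
find /var/lib/ambari-agent/cache/common-services/RANGER -name 'setup_ranger_xml.py[co]' -delete
find /var/lib/ambari-server/resources/common-services/RANGER -name 'setup_ranger_xml.py[co]' -delete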

4) Restart Ranger Admin from Ambari (this adds the user above to the Solr role, so the 403 errors stop occurring)

Ambari 2.6.0 may not work under a Japanese locale

yumrpm.py changed significantly starting with Ambari 2.6.0.
It appears to run "yum list available" and "yum list installed" when installing or starting a service.
If yum produces non-standard output at that point, the service install or start may fail.
By "non-standard output" I mean: with Red Hat Satellite, yum plugins, and the like, "yum list xxxx" prints a few extra lines at the beginning and end of the list.
The Satellite case is apparently fixed in Ambari 2.6.2.

The (potential) problem is that some of the code assumes English output, as in the line below:

    return self._lookup_packages(cmd, 'Available Packages')

In a Japanese locale environment, yum output is also in Japanese, so it never matches the string above. In that case _lookup_packages() falls back to ignoring the first three lines, which works in some cases and not in others.
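You can see the difference directly; a quick sketch comparing the header lines yum prints in each locale (the English header is what the code greps for):

LANG=C yum list installed 2>/dev/null | head -3
LANG=ja_JP.utf8 yum list installed 2>/dev/null | head -3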

A simple workaround is to add "export LANG=C" to /var/lib/ambari-agent/bin/ambari-agent, as in the sketch below.
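One hedged way to apply it (check the script on your hosts before editing):

# insert the export near the top of the agent launcher, then restart the agent
sed -i '2i export LANG=C' /var/lib/ambari-agent/bin/ambari-agent
ambari-agent restart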


Notes:

[root@sandbox-hdp ~]# cat /etc/sysconfig/i18n
LANG="en_US.UTF-8"
SYSFONT="latarcyrheb-sun16"

Let's switch it to Japanese:
[root@sandbox-hdp ~]# cat /etc/sysconfig/i18n
LANG="ja_JP.utf8"
SYSFONT="latarcyrheb-sun16"

After logging out and back in, or:
[root@sandbox-hdp ~]# . /etc/sysconfig/i18n
[root@sandbox-hdp ~]# locale
LANG=ja_JP.utf8
LC_CTYPE="ja_JP.utf8"
LC_NUMERIC="ja_JP.utf8"
LC_TIME="ja_JP.utf8"
LC_COLLATE="ja_JP.utf8"
LC_MONETARY="ja_JP.utf8"
LC_MESSAGES="ja_JP.utf8"
LC_PAPER="ja_JP.utf8"
LC_NAME="ja_JP.utf8"
LC_ADDRESS="ja_JP.utf8"
LC_TELEPHONE="ja_JP.utf8"
LC_MEASUREMENT="ja_JP.utf8"
LC_IDENTIFICATION="ja_JP.utf8"
LC_ALL=

TODO: On CentOS 7, would it be "localectl set-locale LANG=ja_JP.utf8;export LC_CTYPE=ja_JP.UTF-8"?

[root@sandbox-hdp ~]# yum list installed | head
読み込んだプラグイン:fastestmirror, ovl, priorities
インストール済みパッケージ
ConsoleKit.x86_64                       0.4.1-6.el6              @base
ConsoleKit-libs.x86_64                  0.4.1-6.el6              @base
GConf2.x86_64                           2.28.0-7.el6             @base
MAKEDEV.x86_64                          3.24-6.el6               @CentOS/6.8
ORBit2.x86_64                           2.14.17-6.el6_8          @base
PyQt4.x86_64                            4.6.2-9.el6              @base
R.x86_64                                3.4.1-1.el6              @epel
R-core.x86_64                           3.4.1-1.el6              @epel

TODO: Possibly another workaround? (impact unknown)
repositories.legacy-override.enabled=true

Friday, December 15, 2017

TODO: Sandbox HDP 2.6.1: Ambari Infra fails to start

Starting Ambari Infra right after creating the Sandbox fails with this error:
Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/common-services/AMBARI_INFRA/0.1.0/package/scripts/infra_solr.py", line 123, in <module>
    InfraSolr().execute()
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 329, in execute
    method(env)
  File "/var/lib/ambari-agent/cache/common-services/AMBARI_INFRA/0.1.0/package/scripts/infra_solr.py", line 46, in start
    self.configure(env)
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 119, in locking_configure
    original_configure(obj, *args, **kw)
  File "/var/lib/ambari-agent/cache/common-services/AMBARI_INFRA/0.1.0/package/scripts/infra_solr.py", line 41, in configure
    setup_infra_solr(name = 'server')
  File "/var/lib/ambari-agent/cache/common-services/AMBARI_INFRA/0.1.0/package/scripts/setup_infra_solr.py", line 118, in setup_infra_solr
    security_json_location=security_json_file_location
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/functions/solr_cloud_util.py", line 159, in setup_kerberos_plugin
    Execute(setup_kerberos_plugin_cmd)
  File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 155, in __init__
    self.env.run()
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 160, in run
    self.run_action(resource, action)
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 124, in run_action
    provider_action()
  File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 262, in action_run
    tries=self.resource.tries, try_sleep=self.resource.try_sleep)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 72, in inner
    result = function(command, **kwargs)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 102, in checked_call
    tries=tries, try_sleep=try_sleep, timeout_kill_strategy=timeout_kill_strategy)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 150, in _call_wrapper
    result = _call(command, **kwargs_copy)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 303, in _call
    raise ExecutionFailed(err_msg, code, out, err)
resource_management.core.exceptions.ExecutionFailed: Execution of 'ambari-sudo.sh JAVA_HOME=/usr/lib/jvm/java /usr/lib/ambari-infra-solr-client/solrCloudCli.sh --zookeeper-connect-string sandbox.hortonworks.com:2181 --znode /infra-solr --setup-kerberos-plugin' returned 1. Using default ZkCredentialsProvider
Client environment:zookeeper.version=3.4.6-1569965, built on 02/20/2014 09:09 GMT
Client environment:host.name=sandbox.hortonworks.com
Client environment:java.version=1.8.0_141
Client environment:java.vendor=Oracle Corporation
Client environment:java.home=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.141-2.b16.el6_9.x86_64/jre
Client environment:java.class.path=/usr/lib/ambari-infra-solr-client:/usr/lib/ambari-infra-solr-client/libs/log4j-1.2.17.jar:/usr/lib/ambari-infra-solr-client/libs/junit-4.10.jar:/usr/lib/ambari-infra-solr-client/libs/commons-cli-1.3.1.jar:/usr/lib/ambari-infra-solr-client/libs/noggit-0.6.jar:/usr/lib/ambari-infra-solr-client/libs/jackson-core-asl-1.9.9.jar:/usr/lib/ambari-infra-solr-client/libs/stax2-api-3.1.4.jar:/usr/lib/ambari-infra-solr-client/libs/jcl-over-slf4j-1.7.7.jar:/usr/lib/ambari-infra-solr-client/libs/tools-1.7.0.jar:/usr/lib/ambari-infra-solr-client/libs/slf4j-api-1.7.2.jar:/usr/lib/ambari-infra-solr-client/libs/solr-solrj-5.5.2.jar:/usr/lib/ambari-infra-solr-client/libs/guava-16.0.jar:/usr/lib/ambari-infra-solr-client/libs/commons-io-2.1.jar:/usr/lib/ambari-infra-solr-client/libs/commons-collections-3.2.2.jar:/usr/lib/ambari-infra-solr-client/libs/httpmime-4.4.1.jar:/usr/lib/ambari-infra-solr-client/libs/easymock-3.4.jar:/usr/lib/ambari-infra-solr-client/libs/utility-1.0.0.0-SNAPSHOT.jar:/usr/lib/ambari-infra-solr-client/libs/objenesis-2.2.jar:/usr/lib/ambari-infra-solr-client/libs/zookeeper-3.4.6.jar:/usr/lib/ambari-infra-solr-client/libs/antlr-2.7.7.jar:/usr/lib/ambari-infra-solr-client/libs/commons-lang-2.5.jar:/usr/lib/ambari-infra-solr-client/libs/antlr4-runtime-4.5.3.jar:/usr/lib/ambari-infra-solr-client/libs/slf4j-log4j12-1.7.2.jar:/usr/lib/ambari-infra-solr-client/libs/httpclient-4.4.1.jar:/usr/lib/ambari-infra-solr-client/libs/commons-beanutils-1.9.2.jar:/usr/lib/ambari-infra-solr-client/libs/httpcore-4.4.1.jar:/usr/lib/ambari-infra-solr-client/libs/commons-logging-1.1.1.jar:/usr/lib/ambari-infra-solr-client/libs/woodstox-core-asl-4.4.1.jar:/usr/lib/ambari-infra-solr-client/libs/commons-codec-1.8.jar:/usr/lib/ambari-infra-solr-client/libs/checkstyle-6.19.jar:/usr/lib/ambari-infra-solr-client/libs/jackson-mapper-asl-1.9.13.jar:/usr/lib/ambari-infra-solr-client/libs/hamcrest-core-1.1.jar:/usr/lib/ambari-infra-solr-client/libs/ambari-logsearch-solr-client-2.5.1.0.159.jar
Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
Client environment:java.io.tmpdir=/tmp
Client environment:java.compiler=<NA>
Client environment:os.name=Linux
Client environment:os.arch=amd64
Client environment:os.version=3.13.0-86-generic
Client environment:user.name=root
Client environment:user.home=/root
Client environment:user.dir=/var/lib/ambari-agent
Initiating client connection, connectString=sandbox.hortonworks.com:2181 sessionTimeout=15000 watcher=org.apache.solr.common.cloud.SolrZkClient$3@5e91993f
Waiting for client to connect to ZooKeeper
Opening socket connection to server sandbox.hortonworks.com/172.18.0.2:2181. Will not attempt to authenticate using SASL (unknown error)
Socket connection established to sandbox.hortonworks.com/172.18.0.2:2181, initiating session
Session establishment complete on server sandbox.hortonworks.com/172.18.0.2:2181, sessionid = 0x160548c0ea90005, negotiated timeout = 15000
Watcher org.apache.solr.common.cloud.ConnectionManager@350d2264 name:ZooKeeperConnection Watcher:sandbox.hortonworks.com:2181 got event WatchedEvent state:SyncConnected type:None path:null path:null type:None
Client is connected to ZooKeeper
Using default ZkACLProvider
Setup kerberos plugin in security.json
KeeperErrorCode = NoAuth for /infra-solr/security.json
org.apache.zookeeper.KeeperException$NoAuthException: KeeperErrorCode = NoAuth for /infra-solr/security.json
 at org.apache.zookeeper.KeeperException.create(KeeperException.java:113)
 at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
 at org.apache.zookeeper.ZooKeeper.setData(ZooKeeper.java:1270)
 at org.apache.solr.common.cloud.SolrZkClient$8.execute(SolrZkClient.java:362)
 at org.apache.solr.common.cloud.SolrZkClient$8.execute(SolrZkClient.java:359)
 at org.apache.solr.common.cloud.ZkCmdExecutor.retryOperation(ZkCmdExecutor.java:60)
 at org.apache.solr.common.cloud.SolrZkClient.setData(SolrZkClient.java:359)
 at org.apache.solr.common.cloud.SolrZkClient.setData(SolrZkClient.java:546)
 at org.apache.ambari.logsearch.solr.commands.EnableKerberosPluginSolrZkCommand.putFileContent(EnableKerberosPluginSolrZkCommand.java:63)
 at org.apache.ambari.logsearch.solr.commands.EnableKerberosPluginSolrZkCommand.executeZkCommand(EnableKerberosPluginSolrZkCommand.java:54)
 at org.apache.ambari.logsearch.solr.commands.EnableKerberosPluginSolrZkCommand.executeZkCommand(EnableKerberosPluginSolrZkCommand.java:32)
 at org.apache.ambari.logsearch.solr.commands.AbstractZookeeperRetryCommand.createAndProcessRequest(AbstractZookeeperRetryCommand.java:38)
 at org.apache.ambari.logsearch.solr.commands.AbstractRetryCommand.retry(AbstractRetryCommand.java:45)
 at org.apache.ambari.logsearch.solr.commands.AbstractRetryCommand.run(AbstractRetryCommand.java:40)
 at org.apache.ambari.logsearch.solr.AmbariSolrCloudClient.setupKerberosPlugin(AmbariSolrCloudClient.java:162)
 at org.apache.ambari.logsearch.solr.AmbariSolrCloudCLI.main(AmbariSolrCloudCLI.java:518)
... (snip) ...
Maximum retries exceeded: 5
Return code: 1
stdout:   /var/lib/ambari-agent/data/output-187.txt
2017-12-15 04:05:17,819 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2017-12-15 04:05:17,973 - Stack Feature Version Info: stack_version=2.6, version=2.6.1.0-129, current_cluster_version=2.6.1.0-129 -> 2.6.1.0-129
2017-12-15 04:05:17,974 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
User Group mapping (user_group) is missing in the hostLevelParams
2017-12-15 04:05:17,975 - Group['livy'] {}
2017-12-15 04:05:17,976 - Group['spark'] {}
2017-12-15 04:05:17,976 - Group['ranger'] {}
2017-12-15 04:05:17,977 - Group['zeppelin'] {}
2017-12-15 04:05:17,977 - Group['hadoop'] {}
2017-12-15 04:05:17,977 - Group['users'] {}
2017-12-15 04:05:17,977 - Group['knox'] {}
2017-12-15 04:05:17,978 - User['hive'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-12-15 04:05:17,978 - User['storm'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-12-15 04:05:17,979 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-12-15 04:05:17,983 - User['infra-solr'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-12-15 04:05:17,985 - User['oozie'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users']}
2017-12-15 04:05:17,986 - User['atlas'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-12-15 04:05:17,987 - User['falcon'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users']}
2017-12-15 04:05:17,988 - User['ranger'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['ranger']}
2017-12-15 04:05:17,988 - User['tez'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users']}
2017-12-15 04:05:17,989 - User['zeppelin'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['zeppelin', 'hadoop']}
2017-12-15 04:05:17,990 - User['livy'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-12-15 04:05:17,991 - User['spark'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-12-15 04:05:17,992 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users']}
2017-12-15 04:05:17,992 - User['flume'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-12-15 04:05:17,993 - User['kafka'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-12-15 04:05:17,994 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-12-15 04:05:17,995 - User['sqoop'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-12-15 04:05:17,995 - User['yarn'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-12-15 04:05:17,996 - User['mapred'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-12-15 04:05:17,997 - User['hbase'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-12-15 04:05:17,998 - User['knox'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-12-15 04:05:17,998 - User['hcat'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-12-15 04:05:17,999 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-12-15 04:05:18,001 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2017-12-15 04:05:18,052 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] due to not_if
2017-12-15 04:05:18,055 - Directory['/tmp/hbase-hbase'] {'owner': 'hbase', 'create_parents': True, 'mode': 0775, 'cd_access': 'a'}
2017-12-15 04:05:18,056 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-12-15 04:05:18,057 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] {'not_if': '(test $(id -u hbase) -gt 1000) || (false)'}
2017-12-15 04:05:18,106 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] due to not_if
2017-12-15 04:05:18,107 - Group['hdfs'] {}
2017-12-15 04:05:18,107 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': ['hadoop', 'hdfs']}
2017-12-15 04:05:18,108 - FS Type: 
2017-12-15 04:05:18,108 - Directory['/etc/hadoop'] {'mode': 0755}
2017-12-15 04:05:18,130 - File['/usr/hdp/current/hadoop-client/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
2017-12-15 04:05:18,131 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 01777}
2017-12-15 04:05:18,152 - Execute[('setenforce', '0')] {'not_if': '(! which getenforce ) || (which getenforce && getenforce | grep -q Disabled)', 'sudo': True, 'only_if': 'test -f /selinux/enforce'}
2017-12-15 04:05:18,206 - Skipping Execute[('setenforce', '0')] due to not_if
2017-12-15 04:05:18,207 - Directory['/var/log/hadoop'] {'owner': 'root', 'create_parents': True, 'group': 'hadoop', 'mode': 0775, 'cd_access': 'a'}
2017-12-15 04:05:18,209 - Directory['/var/run/hadoop'] {'owner': 'root', 'create_parents': True, 'group': 'root', 'cd_access': 'a'}
2017-12-15 04:05:18,210 - Directory['/tmp/hadoop-hdfs'] {'owner': 'hdfs', 'create_parents': True, 'cd_access': 'a'}
2017-12-15 04:05:18,214 - File['/usr/hdp/current/hadoop-client/conf/commons-logging.properties'] {'content': Template('commons-logging.properties.j2'), 'owner': 'hdfs'}
2017-12-15 04:05:18,217 - File['/usr/hdp/current/hadoop-client/conf/health_check'] {'content': Template('health_check.j2'), 'owner': 'hdfs'}
2017-12-15 04:05:18,224 - File['/usr/hdp/current/hadoop-client/conf/log4j.properties'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop', 'mode': 0644}
2017-12-15 04:05:18,236 - File['/usr/hdp/current/hadoop-client/conf/hadoop-metrics2.properties'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
2017-12-15 04:05:18,237 - File['/usr/hdp/current/hadoop-client/conf/task-log4j.properties'] {'content': StaticFile('task-log4j.properties'), 'mode': 0755}
2017-12-15 04:05:18,238 - File['/usr/hdp/current/hadoop-client/conf/configuration.xsl'] {'owner': 'hdfs', 'group': 'hadoop'}
2017-12-15 04:05:18,244 - File['/etc/hadoop/conf/topology_mappings.data'] {'owner': 'hdfs', 'content': Template('topology_mappings.data.j2'), 'only_if': 'test -d /etc/hadoop/conf', 'group': 'hadoop'}
2017-12-15 04:05:18,293 - File['/etc/hadoop/conf/topology_script.py'] {'content': StaticFile('topology_script.py'), 'only_if': 'test -d /etc/hadoop/conf', 'mode': 0755}
2017-12-15 04:05:18,702 - Directory['/var/log/ambari-infra-solr'] {'owner': 'infra-solr', 'group': 'hadoop', 'create_parents': True, 'mode': 0755, 'cd_access': 'a'}
2017-12-15 04:05:18,704 - Directory['/var/run/ambari-infra-solr'] {'owner': 'infra-solr', 'group': 'hadoop', 'create_parents': True, 'mode': 0755, 'cd_access': 'a'}
2017-12-15 04:05:18,705 - Directory['/opt/ambari_infra_solr/data'] {'owner': 'infra-solr', 'group': 'hadoop', 'create_parents': True, 'mode': 0755, 'cd_access': 'a'}
2017-12-15 04:05:18,706 - Directory['/opt/ambari_infra_solr/data/resources'] {'owner': 'infra-solr', 'create_parents': True, 'group': 'hadoop', 'mode': 0755, 'cd_access': 'a'}
2017-12-15 04:05:18,707 - Directory['/usr/lib/ambari-infra-solr'] {'group': 'hadoop', 'cd_access': 'a', 'create_parents': True, 'recursive_ownership': True, 'owner': 'infra-solr', 'mode': 0755}
2017-12-15 04:05:18,707 - Changing owner for /usr/lib/ambari-infra-solr from 1025 to infra-solr
2017-12-15 04:05:18,707 - Changing group for /usr/lib/ambari-infra-solr from 1025 to hadoop
2017-12-15 04:05:19,030 - Directory['/etc/ambari-infra-solr/conf'] {'group': 'hadoop', 'cd_access': 'a', 'create_parents': True, 'mode': 0755, 'owner': 'infra-solr', 'recursive_ownership': True}
2017-12-15 04:05:19,031 - File['/var/log/ambari-infra-solr/solr-install.log'] {'content': '', 'owner': 'infra-solr', 'group': 'hadoop', 'mode': 0644}
2017-12-15 04:05:19,031 - Writing File['/var/log/ambari-infra-solr/solr-install.log'] because it doesn't exist
2017-12-15 04:05:19,031 - Changing owner for /var/log/ambari-infra-solr/solr-install.log from 0 to infra-solr
2017-12-15 04:05:19,032 - Changing group for /var/log/ambari-infra-solr/solr-install.log from 0 to hadoop
2017-12-15 04:05:19,045 - File['/etc/ambari-infra-solr/conf/infra-solr-env.sh'] {'owner': 'infra-solr', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0755}
2017-12-15 04:05:19,047 - File['/opt/ambari_infra_solr/data/solr.xml'] {'owner': 'infra-solr', 'content': InlineTemplate(...), 'group': 'hadoop'}
2017-12-15 04:05:19,049 - File['/etc/ambari-infra-solr/conf/log4j.properties'] {'owner': 'infra-solr', 'content': InlineTemplate(...), 'group': 'hadoop'}
2017-12-15 04:05:19,055 - File['/etc/ambari-infra-solr/conf/custom-security.json'] {'owner': 'infra-solr', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0640}
2017-12-15 04:05:19,056 - Execute['ambari-sudo.sh JAVA_HOME=/usr/lib/jvm/java /usr/lib/ambari-infra-solr-client/solrCloudCli.sh --zookeeper-connect-string sandbox.hortonworks.com:2181 --znode /infra-solr --create-znode --retry 30 --interval 5'] {}
2017-12-15 04:05:19,744 - Execute['ambari-sudo.sh JAVA_HOME=/usr/lib/jvm/java /usr/lib/ambari-infra-solr-client/solrCloudCli.sh --zookeeper-connect-string sandbox.hortonworks.com:2181/infra-solr --cluster-prop --property-name urlScheme --property-value http'] {}
2017-12-15 04:05:20,411 - Execute['ambari-sudo.sh JAVA_HOME=/usr/lib/jvm/java /usr/lib/ambari-infra-solr-client/solrCloudCli.sh --zookeeper-connect-string sandbox.hortonworks.com:2181 --znode /infra-solr --setup-kerberos-plugin'] {}

Command failed after 1 tries

Investigation:
[root@sandbox ~]# ls -ltra /usr/hdp/current/ranger-admin/contrib/solr_for_audit_setup/conf/solrconfig.xml
-r-xr--r-- 1 ranger ranger 73711 May 31  2017 /usr/hdp/current/ranger-admin/contrib/solr_for_audit_setup/conf/solrconfig.xml

Is Kerberos required?
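To check whether the NoAuth comes from the znode ACL, inspect it with the ZooKeeper CLI; a sketch (the client path follows the HDP layout):

/usr/hdp/current/zookeeper-client/bin/zkCli.sh -server sandbox.hortonworks.com:2181 <<'EOF'
getAcl /infra-solr/security.json
EOF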

Thursday, November 30, 2017

Setting up Hadoop Group Mapping with the Knox Demo LDAP on the HDP 2.6 Sandbox

Reference: https://docs.hortonworks.com/HDPDocuments/Ambari-2.6.0.0/bk_ambari-security/content/setting_up_hadoop_group_mappping_for_ldap_ad.html

1. If necessary, add users and groups to /etc/knox/conf/users.ldif from Ambari:
Knox => Configs => Advanced users-ldif

2. Start the Knox Demo LDAP
To start it from the command line:
sudo -u knox -i /usr/hdp/current/knox-server/bin/ldap.sh start
or
sudo -u knox -i java -jar /usr/hdp/current/knox-server/bin/ldap.jar /usr/hdp/current/knox-server/conf &

Verification:
yum install -y openldap-clients
ldapsearch -H 'ldap://sandbox-hdp.hortonworks.com:33389/' -x -D 'uid=admin,ou=people,dc=hadoop,dc=apache,dc=org' -w admin-password '(objectclass=person)' uid
ldapsearch -H 'ldap://sandbox-hdp.hortonworks.com:33389/' -x -D 'uid=admin,ou=people,dc=hadoop,dc=apache,dc=org' -w admin-password '(objectclass=groupOfNames)' member cn
ldapsearch -H 'ldap://sandbox-hdp.hortonworks.com:33389/' -x -D 'uid=admin,ou=people,dc=hadoop,dc=apache,dc=org' -w admin-password -b 'dc=hadoop,dc=apache,dc=org' '(&(objectclass=person)(uid=sam))'

3. From Ambari, add the properties below under HDFS => Configs => Custom core-site
hadoop.security.group.mapping=org.apache.hadoop.security.LdapGroupsMapping
hadoop.security.group.mapping.ldap.bind.user=uid=admin,ou=people,dc=hadoop,dc=apache,dc=org
hadoop.security.group.mapping.ldap.url=ldap://sandbox-hdp.hortonworks.com:33389/dc=hadoop,dc=apache,dc=org
#hadoop.security.group.mapping.ldap.base=
hadoop.security.group.mapping.ldap.search.filter.user=(&(objectclass=person)(uid={0}))
hadoop.security.group.mapping.ldap.search.filter.group=(objectclass=groupofnames)
hadoop.security.group.mapping.ldap.search.attr.member=member
hadoop.security.group.mapping.ldap.search.attr.group.name=cn
# add as a PASSWORD-type property
hadoop.security.group.mapping.ldap.bind.password=admin-password
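
Side note: instead of a clear-text password in core-site, Hadoop can also read the bind password from a file via the standard hadoop.security.group.mapping.ldap.bind.password.file property; a minimal sketch (the file path here is arbitrary):

echo -n 'admin-password' > /etc/hadoop/conf/ldap-bind.password
chown hdfs:hadoop /etc/hadoop/conf/ldap-bind.password
chmod 400 /etc/hadoop/conf/ldap-bind.password
# then set hadoop.security.group.mapping.ldap.bind.password.file=/etc/hadoop/conf/ldap-bind.password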


For a composite mapping:
hadoop.security.group.mapping=org.apache.hadoop.security.CompositeGroupsMapping
hadoop.security.group.mapping.providers=shell4services,ldap-demo4users
hadoop.security.group.mapping.provider.shell4services=org.apache.hadoop.security.ShellBasedUnixGroupsMapping
hadoop.security.group.mapping.provider.ldap-demo4users=org.apache.hadoop.security.LdapGroupsMapping
hadoop.security.group.mapping.provider.ldap-demo4users.ldap.url=ldap://sandbox-hdp.hortonworks.com:33389/dc=hadoop,dc=apache,dc=org
hadoop.security.group.mapping.provider.ldap-demo4users.ldap.bind.user=uid=admin,ou=people,dc=hadoop,dc=apache,dc=org
#hadoop.security.group.mapping.provider.ldap-demo4users.ldap.base=
hadoop.security.group.mapping.provider.ldap-demo4users.ldap.search.filter.user=(&(objectclass=person)(uid={0}))
hadoop.security.group.mapping.provider.ldap-demo4users.ldap.search.filter.group=(objectclass=groupofnames)
hadoop.security.group.mapping.provider.ldap-demo4users.ldap.search.attr.member=member
hadoop.security.group.mapping.provider.ldap-demo4users.ldap.search.attr.group.name=cn
# add as a PASSWORD-type property
hadoop.security.group.mapping.provider.ldap-demo4users.ldap.bind.password=admin-password

4. Restart HDFS, YARN, and MapReduce2
Because everything restarts, the commands below should not be necessary:
sudo -u hdfs -i hdfs dfsadmin -refreshUserToGroupsMappings
sudo -u yarn -i yarn rmadmin -refreshUserToGroupsMappings

5. Verify: if it worked, the following command should print something
sudo -u hdfs -i hdfs groups sam

Ambari: failed with database inconsistency errors

Ambari occasionally reports errors like the one below. What exactly is inconsistent?

ERROR - Required config(s): ranger-hbase-security,ranger-hbase-policymgr-ssl,ranger-hbase-audit,ranger-hbase-plugin-properties is(are) not available for service HBASE with service config version 12 in cluster TEST

DatabaseConsistencyCheckHelper.java

# Where the error is raised
                  serviceConfigsFromStack.removeAll(serviceConfigsFromDB);
                  if (!serviceConfigsFromStack.isEmpty()) {
                    LOG.error("Required config(s): {} is(are) not available for service {} with service config version {} in cluster {}",
                            StringUtils.join(serviceConfigsFromStack, ","), serviceName, Integer.toString(serviceVersion), clusterName);
                    errorAvailable = true;
                  }

# The query used for the check
    String GET_SERVICES_WITH_CONFIGS_QUERY = "
SELECT  c.cluster_name,
  cs.service_name,
  cc.type_name,
  sc.version
FROM clusterservices cs
  JOIN serviceconfig sc ON cs.service_name = sc.service_name AND cs.cluster_id = sc.cluster_id
  JOIN serviceconfigmapping scm ON sc.service_config_id = scm.service_config_id
  JOIN clusterconfig cc ON scm.config_id = cc.config_id AND sc.cluster_id = cc.cluster_id
  JOIN clusters c ON cc.cluster_id = c.cluster_id AND sc.stack_id = c.desired_stack_id
WHERE sc.group_id IS NULL
      AND sc.service_config_id = (SELECT max(service_config_id)
                                  FROM serviceconfig sc2
                                  WHERE sc2.service_name = sc.service_name AND sc2.cluster_id = sc.cluster_id)
GROUP BY c.cluster_name, cs.service_name, cc.type_name, sc.version
";

# Ambari log
07 Feb 2017 14:48:36,889  INFO [main] ClusterImpl:352 - Service config types loaded: {PIG=[pig-properties, pig-env, pig-log4j], KAFKA=[ranger-kafka-policymgr-ssl, kafka-log4j, kafka-env, kafka-broker, ranger-kafka-security, ranger-kafka-plugin-properties, ranger-kafka-audit], LOGSEARCH=[logsearch-service_logs-solrconfig, logfeeder-log4j, logsearch-admin-json, logsearch-env, logfeeder-env, logsearch-audit_logs-solrconfig, logfeeder-properties, logsearch-properties, logsearch-log4j], RANGER_KMS=[kms-properties, ranger-kms-security, ranger-kms-site, kms-site, kms-env, dbks-site, ranger-kms-audit, ranger-kms-policymgr-ssl, kms-log4j], MAPREDUCE2=[mapred-site, mapred-env], SLIDER=[slider-log4j, slider-env, slider-client], HIVE=[webhcat-env, ranger-hive-plugin-properties, hive-exec-log4j, ranger-hive-policymgr-ssl, hive-env, webhcat-site, hive-log4j, ranger-hive-audit, hive-site, webhcat-log4j, hiveserver2-site, hcat-env, ranger-hive-security], TEZ=[tez-env, tez-site], HBASE=[ranger-hbase-security, hbase-policy, hbase-env, hbase-log4j, hbase-site, ranger-hbase-policymgr-ssl, ranger-hbase-audit, ranger-hbase-plugin-properties], RANGER=[admin-properties, ranger-admin-site, usersync-properties, ranger-site, ranger-env, ranger-ugsync-site], OOZIE=[oozie-log4j, oozie-env, oozie-site], FLUME=[flume-env, flume-conf], MAHOUT=[mahout-log4j, mahout-env], HDFS=[ssl-server, hdfs-log4j, ranger-hdfs-audit, ranger-hdfs-plugin-properties, ssl-client, hdfs-site, ranger-hdfs-policymgr-ssl, hadoop-policy, ranger-hdfs-security, hadoop-env, core-site], AMBARI_METRICS=[ams-ssl-client, ams-ssl-server, ams-hbase-log4j, ams-hbase-policy, ams-hbase-security-site, ams-grafana-env, ams-hbase-env, ams-env, ams-log4j, ams-grafana-ini, ams-site, ams-hbase-site], SPARK=[spark-thrift-sparkconf, spark-log4j-properties, spark-defaults, spark-javaopts-properties, spark-metrics-properties, spark-hive-site-override, spark-env], SMARTSENSE=[hst-log4j, hst-server-conf, activity-zeppelin-shiro, activity-log4j, activity-zeppelin-site, anonymization-rules, activity-zeppelin-env, activity-zeppelin-interpreter, activity-env, activity-conf, hst-agent-conf], AMBARI_INFRA=[infra-solr-client-log4j, infra-solr-env, infra-solr-xml, infra-solr-log4j], YARN=[ranger-yarn-policymgr-ssl, yarn-site, ranger-yarn-audit, ranger-yarn-security, yarn-env, ranger-yarn-plugin-properties, capacity-scheduler, yarn-log4j], FALCON=[falcon-runtime.properties, falcon-log4j, falcon-client.properties, falcon-startup.properties, falcon-env], SQOOP=[sqoop-site, sqoop-env], ATLAS=[atlas-log4j, atlas-env, application-properties], ZOOKEEPER=[zoo.cfg, zookeeper-log4j, zookeeper-env], STORM=[ranger-storm-plugin-properties, storm-site, ranger-storm-audit, storm-cluster-log4j, storm-worker-log4j, ranger-storm-policymgr-ssl, ranger-storm-security, storm-env], GANGLIA=[ganglia-env], KNOX=[knoxsso-topology, ranger-knox-security, users-ldif, knox-env, ranger-knox-plugin-properties, gateway-log4j, gateway-site, ranger-knox-policymgr-ssl, ranger-knox-audit, topology, admin-topology, ldap-log4j], KERBEROS=[kerberos-env, krb5-conf], ACCUMULO=[accumulo-env, accumulo-log4j, client, accumulo-site]}

Incidentally, in Ambari 2.4.2 the check is run with a Java command:
java -cp '/etc/ambari-server/conf:/usr/lib/ambari-server/*:/usr/share/java/postgresql-jdbc.jar' org.apache.ambari.server.checks.DatabaseConsistencyChecker
In Ambari 2.2.x it is CheckDatabaseHelper.

SQL used for the fix:
SELECT  service_name,
  max(service_config_id) AS service_config_id,
  max(version)           AS version
FROM serviceconfig
WHERE service_name IN ('HBASE') AND version IN (12)
GROUP BY service_name;
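To run it by hand against the default embedded Postgres, a sketch (user/db names assume the Ambari defaults, password "bigdata"):

psql -U ambari ambari -c "SELECT service_name, max(service_config_id) AS service_config_id, max(version) AS version FROM serviceconfig WHERE service_name IN ('HBASE') AND version IN (12) GROUP BY service_name;"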


Trying Knox SSO on HDP 2.6.1 / Ambari 2.5.1.0 (Sandbox)

https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.2/bk_security/content/setting_up_knox_sso_for_ambari.html

Requirements for Knox SSO

- Ambari Server must be configured for LDAP (ambari-server setup-ldap)
- and the users must already be imported (ambari-server sync-ldap)
- Knox and Ambari Server must be in the same domain
- The hostname must be of the form "{somehost}.{someorganisation}.{someTLD}",
  so sandbox.hortonworks.com is OK, but node1.localdomain is not

1. Knox configuration

knoxsso.redirect.whitelist.regex needs to be changed:
^https?:\/\/(sandbox-hdp\.hortonworks\.com|172\.18\.0\.2|172\.26\.74\.244|localhost|127\.0\.0\.1|0:0:0:0:0:0:0:1|::1):[0-9].*$

knoxsso.cookie.secure.only may need to be false, or it may not work?

Export the Knox certificate in advance (it is needed during ambari-server setup-sso):
keytool -export -alias gateway-identity -rfc -file ./gateway.crt -keystore /usr/hdp/current/knox-server/data/security/keystores/gateway.jks
or
echo -n | openssl s_client -connect sandbox-hdp.hortonworks.com:8443 -showcerts 2>&1 | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' | tee ./gateway.crt

Just in case, check the CN:
openssl x509 -noout -subject -in ./gateway.crt
subject= /C=US/ST=Test/L=Test/O=Hadoop/OU=Test/CN=sandbox-hdp.hortonworks.com

At this point, decide what to do with Ambari's local "admin" user.
If you import LDAP users as-is, the local "admin" becomes an EXTERNAL user because the LDAP user has the same name; that is a hassle later when you disable SSO or when the Demo LDAP is down.
So create another admin candidate via Knox => Advanced users-ldif:

# entry for sample user adminldap
dn: uid=adminldap,ou=people,dc=hadoop,dc=apache,dc=org
objectclass:top
objectclass:person
objectclass:organizationalPerson
objectclass:inetOrgPerson
cn: adminldap
sn: adminldap
uid: adminldap
userPassword:admin-password

Restart the Knox Demo LDAP from Ambari (Stop/Start). Occasionally it looks stopped but is actually still running.
Then:
#sudo -u hdfs kinit -kt /etc/security/keytabs/hdfs.headless.keytab hdfs-sandbox
sudo -u hdfs hdfs dfs -mkdir /user/adminldap
useradd adminldap

Ambari LDAP setup

After starting the Knox Demo LDAP:
_LDAP_SERVER="sandbox.hortonworks.com:33389"
ambari-server setup-ldap --ldap-url=${_LDAP_SERVER} --ldap-user-class=person --ldap-user-attr=uid --ldap-group-class=groupofnames --ldap-ssl=false --ldap-secondary-url="" --ldap-referral="" --ldap-group-attr=cn --ldap-member-attr=member --ldap-dn=dn --ldap-base-dn=dc=hadoop,dc=apache,dc=org --ldap-bind-anonym=false --ldap-manager-dn=uid=admin,ou=people,dc=hadoop,dc=apache,dc=org --ldap-manager-password=admin-password --ldap-sync-username-collisions-behavior=skip --ldap-save-settings

* "skip"にしないと、"admin"がLDAPユーザの"admin"に換えられてしまいます。(それでもよければSkipでも可)

In addition, the following property needs to be added:
echo "authentication.ldap.pagination.enabled=false" >> /etc/ambari-server/conf/ambari.properties
then restart Ambari Server:
ambari-server restart

Ambari sync-ldap

ambari-server sync-ldap --ldap-sync-admin-name=admin --ldap-sync-admin-password=admin --all
"--ldap-sync-"と言いながら、AmbariローカルユーザのAdminです。

Test that you can log in as an LDAP user.
Also, if you created an admin candidate, grant it Ambari Admin now.

"ambari-server setup-sso"

Provider URL = https://sandbox-hdp.hortonworks.com:8443/gateway/knoxsso/api/v1/websso

When pasting the certificate, do not paste the first and last (BEGIN/END) lines. Otherwise:
/var/log/ambari-server/ambari-server.log:08 Nov 2017 09:34:33,680 ERROR [ambari-client-thread-1470] Configuration:5133 - Unable to parse public certificate file. JWT auth will be disabled.

After ambari-server setup-sso, the settings look like this:
[root@sandbox ~]# grep -iw jwt /etc/ambari-server/conf/*
/etc/ambari-server/conf/ambari.properties:authentication.jwt.enabled=true
/etc/ambari-server/conf/ambari.properties:authentication.jwt.providerUrl=https://sandbox.hortonworks.com:8443/gateway/knoxsso/api/v1/websso
/etc/ambari-server/conf/ambari.properties:authentication.jwt.publicKey=/etc/ambari-server/conf/jwt-cert.pem

ambari-server restart
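
As a quick, hedged smoke test after the restart: an unauthenticated browser-style request to Ambari should come back with a redirect toward the KnoxSSO provider URL:

curl -s -o /dev/null -w '%{http_code} %{redirect_url}\n' -H 'Accept: text/html' 'http://sandbox-hdp.hortonworks.com:8080/'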