Friday, December 29, 2017

Pointing the Ranger Solr Plugin audit destination at Ambari Infra on a Kerberized HDP cluster (Unofficial)

Prerequisites:

Install Solr and the Ranger Solr Plugin
Recommended: mPack 2.2.9 and HDP 2.6.2 or later (RANGER-1446, RANGER-1658)
Confirm that HDFS audit events are already being written to Ambari Infra correctly


1) Copy /etc/hadoop/conf/ranger-hdfs-audit.xml from the NameNode host to the following file on the Solr host:
/opt/lucidworks-hdpsearch/solr/server/solr-webapp/webapp/WEB-INF/classes/ranger-solr-audit.xml 
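For example, a minimal sketch of that copy run from the Solr host (the NameNode hostname and the solr file owner are assumptions; adjust to your environment):

scp namenode-host:/etc/hadoop/conf/ranger-hdfs-audit.xml /opt/lucidworks-hdpsearch/solr/server/solr-webapp/webapp/WEB-INF/classes/ranger-solr-audit.xml
chown solr:solr /opt/lucidworks-hdpsearch/solr/server/solr-webapp/webapp/WEB-INF/classes/ranger-solr-audit.xml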

2) Edit ranger-solr-audit.xml as below, then restart Solr from Ambari.

xasecure.audit.destination.solr.batch.filespool.dir = /var/log/solr/audit/solr/spool 
xasecure.audit.jaas.Client.option.keyTab = /etc/security/keytabs/solr.service.keytab 
xasecure.audit.jaas.Client.option.principal = solr/_HOST@YOUR_PRINCIPAL 
xasecure.audit.solr.solr_url = (empty value) 

3) Edit /var/lib/ambari-agent/cache/common-services/RANGER/0.4.0/package/scripts/setup_ranger_xml.py (add the part originally shown in bold, i.e. the ('solr', 'solr') entry at the end of the list):

service_default_principals_map = [('hdfs', 'nn'), ('hbase', 'hbase'), ('hive', 'hive'), ('kafka', 'kafka'), ('kms', 'rangerkms'), 
('knox', 'knox'), ('nifi', 'nifi'), ('storm', 'storm'), ('yanr', 'yarn'), ('solr', 'solr')]

Since this changes the Ambari Agent cache, the corresponding file on the Ambari Server side must be changed as well.
If .pyc or .pyo files exist, delete them.
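A minimal clean-up sketch (the Ambari Server resource path below is an assumption based on a default install):

# on the Ambari Agent host(s): remove compiled copies of the edited script
find /var/lib/ambari-agent/cache/common-services/RANGER/0.4.0/package/scripts -name 'setup_ranger_xml.py[co]' -delete
# on the Ambari Server host: apply the same edit to its copy, then remove compiled files
find /var/lib/ambari-server/resources/common-services/RANGER/0.4.0/package/scripts -name 'setup_ranger_xml.py[co]' -delete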

4) Restart Ranger Admin from Ambari (this adds the users above to the Solr role, so the 403 errors stop occurring).

Ambari 2.6.0 may not work properly under a Japanese locale

Starting with Ambari 2.6.0, yumrpm.py changed significantly.
It appears to run "yum list available" and "yum list installed" when installing or starting services.
If those commands produce non-standard output, installing or starting a service may fail.
By "non-standard output" I mean that when you use Red Hat Satellite, yum plugins, and the like, "yum list xxxx" prints a few extra lines at the beginning and end of the list.
The Satellite case is apparently fixed in Ambari 2.6.2.

The (potential) problem is that some of the code assumes English output, as in the line below.

    return self._lookup_packages(cmd, 'Available Packages')

With a Japanese locale, yum output is also in Japanese, so it never matches the string above. In that case _lookup_packages() falls back to skipping the first three lines, which works in some cases and fails in others.

A simple workaround is to add "export LANG=C" to /var/lib/ambari-agent/bin/ambari-agent.
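A minimal sketch of that workaround (inserting right after the shebang is my own choice; any early line in the script should do):

sed -i '1a export LANG=C' /var/lib/ambari-agent/bin/ambari-agent
ambari-agent restart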


Notes:

[root@sandbox-hdp ~]# cat /etc/sysconfig/i18n
LANG="en_US.UTF-8"
SYSFONT="latarcyrheb-sun16"

Let's switch it to Japanese:
[root@sandbox-hdp ~]# cat /etc/sysconfig/i18n
LANG="ja_JP.utf8"
SYSFONT="latarcyrheb-sun16"

After logging out and back in, or:
[root@sandbox-hdp ~]# . /etc/sysconfig/i18n
[root@sandbox-hdp ~]# locale
LANG=ja_JP.utf8
LC_CTYPE="ja_JP.utf8"
LC_NUMERIC="ja_JP.utf8"
LC_TIME="ja_JP.utf8"
LC_COLLATE="ja_JP.utf8"
LC_MONETARY="ja_JP.utf8"
LC_MESSAGES="ja_JP.utf8"
LC_PAPER="ja_JP.utf8"
LC_NAME="ja_JP.utf8"
LC_ADDRESS="ja_JP.utf8"
LC_TELEPHONE="ja_JP.utf8"
LC_MEASUREMENT="ja_JP.utf8"
LC_IDENTIFICATION="ja_JP.utf8"
LC_ALL=

TODO: On CentOS 7, would it be "localectl set-locale LANG=ja_JP.utf8; export LC_CTYPE=ja_JP.UTF-8"?

[root@sandbox-hdp ~]# yum list installed | head
読み込んだプラグイン:fastestmirror, ovl, priorities
インストール済みパッケージ
ConsoleKit.x86_64                       0.4.1-6.el6              @base
ConsoleKit-libs.x86_64                  0.4.1-6.el6              @base
GConf2.x86_64                           2.28.0-7.el6             @base
MAKEDEV.x86_64                          3.24-6.el6               @CentOS/6.8
ORBit2.x86_64                           2.14.17-6.el6_8          @base
PyQt4.x86_64                            4.6.2-9.el6              @base
R.x86_64                                3.4.1-1.el6              @epel
R-core.x86_64                           3.4.1-1.el6              @epel

TODO: Possibly another workaround? (Unclear what side effects it would have)
repositories.legacy-override.enabled=true

Friday, December 15, 2017

TODO: Sandbox HDP 2.6.1: Ambari Infra fails to start

Starting Ambari Infra right after creating the Sandbox fails with the following error:
Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/common-services/AMBARI_INFRA/0.1.0/package/scripts/infra_solr.py", line 123, in <module>
    InfraSolr().execute()
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 329, in execute
    method(env)
  File "/var/lib/ambari-agent/cache/common-services/AMBARI_INFRA/0.1.0/package/scripts/infra_solr.py", line 46, in start
    self.configure(env)
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 119, in locking_configure
    original_configure(obj, *args, **kw)
  File "/var/lib/ambari-agent/cache/common-services/AMBARI_INFRA/0.1.0/package/scripts/infra_solr.py", line 41, in configure
    setup_infra_solr(name = 'server')
  File "/var/lib/ambari-agent/cache/common-services/AMBARI_INFRA/0.1.0/package/scripts/setup_infra_solr.py", line 118, in setup_infra_solr
    security_json_location=security_json_file_location
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/functions/solr_cloud_util.py", line 159, in setup_kerberos_plugin
    Execute(setup_kerberos_plugin_cmd)
  File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 155, in __init__
    self.env.run()
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 160, in run
    self.run_action(resource, action)
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 124, in run_action
    provider_action()
  File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 262, in action_run
    tries=self.resource.tries, try_sleep=self.resource.try_sleep)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 72, in inner
    result = function(command, **kwargs)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 102, in checked_call
    tries=tries, try_sleep=try_sleep, timeout_kill_strategy=timeout_kill_strategy)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 150, in _call_wrapper
    result = _call(command, **kwargs_copy)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 303, in _call
    raise ExecutionFailed(err_msg, code, out, err)
resource_management.core.exceptions.ExecutionFailed: Execution of 'ambari-sudo.sh JAVA_HOME=/usr/lib/jvm/java /usr/lib/ambari-infra-solr-client/solrCloudCli.sh --zookeeper-connect-string sandbox.hortonworks.com:2181 --znode /infra-solr --setup-kerberos-plugin' returned 1. Using default ZkCredentialsProvider
Client environment:zookeeper.version=3.4.6-1569965, built on 02/20/2014 09:09 GMT
Client environment:host.name=sandbox.hortonworks.com
Client environment:java.version=1.8.0_141
Client environment:java.vendor=Oracle Corporation
Client environment:java.home=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.141-2.b16.el6_9.x86_64/jre
Client environment:java.class.path=/usr/lib/ambari-infra-solr-client:/usr/lib/ambari-infra-solr-client/libs/log4j-1.2.17.jar:/usr/lib/ambari-infra-solr-client/libs/junit-4.10.jar:/usr/lib/ambari-infra-solr-client/libs/commons-cli-1.3.1.jar:/usr/lib/ambari-infra-solr-client/libs/noggit-0.6.jar:/usr/lib/ambari-infra-solr-client/libs/jackson-core-asl-1.9.9.jar:/usr/lib/ambari-infra-solr-client/libs/stax2-api-3.1.4.jar:/usr/lib/ambari-infra-solr-client/libs/jcl-over-slf4j-1.7.7.jar:/usr/lib/ambari-infra-solr-client/libs/tools-1.7.0.jar:/usr/lib/ambari-infra-solr-client/libs/slf4j-api-1.7.2.jar:/usr/lib/ambari-infra-solr-client/libs/solr-solrj-5.5.2.jar:/usr/lib/ambari-infra-solr-client/libs/guava-16.0.jar:/usr/lib/ambari-infra-solr-client/libs/commons-io-2.1.jar:/usr/lib/ambari-infra-solr-client/libs/commons-collections-3.2.2.jar:/usr/lib/ambari-infra-solr-client/libs/httpmime-4.4.1.jar:/usr/lib/ambari-infra-solr-client/libs/easymock-3.4.jar:/usr/lib/ambari-infra-solr-client/libs/utility-1.0.0.0-SNAPSHOT.jar:/usr/lib/ambari-infra-solr-client/libs/objenesis-2.2.jar:/usr/lib/ambari-infra-solr-client/libs/zookeeper-3.4.6.jar:/usr/lib/ambari-infra-solr-client/libs/antlr-2.7.7.jar:/usr/lib/ambari-infra-solr-client/libs/commons-lang-2.5.jar:/usr/lib/ambari-infra-solr-client/libs/antlr4-runtime-4.5.3.jar:/usr/lib/ambari-infra-solr-client/libs/slf4j-log4j12-1.7.2.jar:/usr/lib/ambari-infra-solr-client/libs/httpclient-4.4.1.jar:/usr/lib/ambari-infra-solr-client/libs/commons-beanutils-1.9.2.jar:/usr/lib/ambari-infra-solr-client/libs/httpcore-4.4.1.jar:/usr/lib/ambari-infra-solr-client/libs/commons-logging-1.1.1.jar:/usr/lib/ambari-infra-solr-client/libs/woodstox-core-asl-4.4.1.jar:/usr/lib/ambari-infra-solr-client/libs/commons-codec-1.8.jar:/usr/lib/ambari-infra-solr-client/libs/checkstyle-6.19.jar:/usr/lib/ambari-infra-solr-client/libs/jackson-mapper-asl-1.9.13.jar:/usr/lib/ambari-infra-solr-client/libs/hamcrest-core-1.1.jar:/usr/lib/ambari-infra-solr-client/libs/ambari-logsearch-solr-client-2.5.1.0.159.jar
Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
Client environment:java.io.tmpdir=/tmp
Client environment:java.compiler=<NA>
Client environment:os.name=Linux
Client environment:os.arch=amd64
Client environment:os.version=3.13.0-86-generic
Client environment:user.name=root
Client environment:user.home=/root
Client environment:user.dir=/var/lib/ambari-agent
Initiating client connection, connectString=sandbox.hortonworks.com:2181 sessionTimeout=15000 watcher=org.apache.solr.common.cloud.SolrZkClient$3@5e91993f
Waiting for client to connect to ZooKeeper
Opening socket connection to server sandbox.hortonworks.com/172.18.0.2:2181. Will not attempt to authenticate using SASL (unknown error)
Socket connection established to sandbox.hortonworks.com/172.18.0.2:2181, initiating session
Session establishment complete on server sandbox.hortonworks.com/172.18.0.2:2181, sessionid = 0x160548c0ea90005, negotiated timeout = 15000
Watcher org.apache.solr.common.cloud.ConnectionManager@350d2264 name:ZooKeeperConnection Watcher:sandbox.hortonworks.com:2181 got event WatchedEvent state:SyncConnected type:None path:null path:null type:None
Client is connected to ZooKeeper
Using default ZkACLProvider
Setup kerberos plugin in security.json
KeeperErrorCode = NoAuth for /infra-solr/security.json
org.apache.zookeeper.KeeperException$NoAuthException: KeeperErrorCode = NoAuth for /infra-solr/security.json
 at org.apache.zookeeper.KeeperException.create(KeeperException.java:113)
 at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
 at org.apache.zookeeper.ZooKeeper.setData(ZooKeeper.java:1270)
 at org.apache.solr.common.cloud.SolrZkClient$8.execute(SolrZkClient.java:362)
 at org.apache.solr.common.cloud.SolrZkClient$8.execute(SolrZkClient.java:359)
 at org.apache.solr.common.cloud.ZkCmdExecutor.retryOperation(ZkCmdExecutor.java:60)
 at org.apache.solr.common.cloud.SolrZkClient.setData(SolrZkClient.java:359)
 at org.apache.solr.common.cloud.SolrZkClient.setData(SolrZkClient.java:546)
 at org.apache.ambari.logsearch.solr.commands.EnableKerberosPluginSolrZkCommand.putFileContent(EnableKerberosPluginSolrZkCommand.java:63)
 at org.apache.ambari.logsearch.solr.commands.EnableKerberosPluginSolrZkCommand.executeZkCommand(EnableKerberosPluginSolrZkCommand.java:54)
 at org.apache.ambari.logsearch.solr.commands.EnableKerberosPluginSolrZkCommand.executeZkCommand(EnableKerberosPluginSolrZkCommand.java:32)
 at org.apache.ambari.logsearch.solr.commands.AbstractZookeeperRetryCommand.createAndProcessRequest(AbstractZookeeperRetryCommand.java:38)
 at org.apache.ambari.logsearch.solr.commands.AbstractRetryCommand.retry(AbstractRetryCommand.java:45)
 at org.apache.ambari.logsearch.solr.commands.AbstractRetryCommand.run(AbstractRetryCommand.java:40)
 at org.apache.ambari.logsearch.solr.AmbariSolrCloudClient.setupKerberosPlugin(AmbariSolrCloudClient.java:162)
 at org.apache.ambari.logsearch.solr.AmbariSolrCloudCLI.main(AmbariSolrCloudCLI.java:518)
... (snip) ...
Maximum retries exceeded: 5
Return code: 1
stdout:   /var/lib/ambari-agent/data/output-187.txt
2017-12-15 04:05:17,819 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2017-12-15 04:05:17,973 - Stack Feature Version Info: stack_version=2.6, version=2.6.1.0-129, current_cluster_version=2.6.1.0-129 -> 2.6.1.0-129
2017-12-15 04:05:17,974 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
User Group mapping (user_group) is missing in the hostLevelParams
2017-12-15 04:05:17,975 - Group['livy'] {}
2017-12-15 04:05:17,976 - Group['spark'] {}
2017-12-15 04:05:17,976 - Group['ranger'] {}
2017-12-15 04:05:17,977 - Group['zeppelin'] {}
2017-12-15 04:05:17,977 - Group['hadoop'] {}
2017-12-15 04:05:17,977 - Group['users'] {}
2017-12-15 04:05:17,977 - Group['knox'] {}
2017-12-15 04:05:17,978 - User['hive'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-12-15 04:05:17,978 - User['storm'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-12-15 04:05:17,979 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-12-15 04:05:17,983 - User['infra-solr'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-12-15 04:05:17,985 - User['oozie'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users']}
2017-12-15 04:05:17,986 - User['atlas'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-12-15 04:05:17,987 - User['falcon'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users']}
2017-12-15 04:05:17,988 - User['ranger'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['ranger']}
2017-12-15 04:05:17,988 - User['tez'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users']}
2017-12-15 04:05:17,989 - User['zeppelin'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['zeppelin', 'hadoop']}
2017-12-15 04:05:17,990 - User['livy'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-12-15 04:05:17,991 - User['spark'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-12-15 04:05:17,992 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users']}
2017-12-15 04:05:17,992 - User['flume'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-12-15 04:05:17,993 - User['kafka'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-12-15 04:05:17,994 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-12-15 04:05:17,995 - User['sqoop'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-12-15 04:05:17,995 - User['yarn'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-12-15 04:05:17,996 - User['mapred'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-12-15 04:05:17,997 - User['hbase'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-12-15 04:05:17,998 - User['knox'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-12-15 04:05:17,998 - User['hcat'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-12-15 04:05:17,999 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-12-15 04:05:18,001 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2017-12-15 04:05:18,052 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] due to not_if
2017-12-15 04:05:18,055 - Directory['/tmp/hbase-hbase'] {'owner': 'hbase', 'create_parents': True, 'mode': 0775, 'cd_access': 'a'}
2017-12-15 04:05:18,056 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-12-15 04:05:18,057 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] {'not_if': '(test $(id -u hbase) -gt 1000) || (false)'}
2017-12-15 04:05:18,106 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] due to not_if
2017-12-15 04:05:18,107 - Group['hdfs'] {}
2017-12-15 04:05:18,107 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': ['hadoop', 'hdfs']}
2017-12-15 04:05:18,108 - FS Type: 
2017-12-15 04:05:18,108 - Directory['/etc/hadoop'] {'mode': 0755}
2017-12-15 04:05:18,130 - File['/usr/hdp/current/hadoop-client/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
2017-12-15 04:05:18,131 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 01777}
2017-12-15 04:05:18,152 - Execute[('setenforce', '0')] {'not_if': '(! which getenforce ) || (which getenforce && getenforce | grep -q Disabled)', 'sudo': True, 'only_if': 'test -f /selinux/enforce'}
2017-12-15 04:05:18,206 - Skipping Execute[('setenforce', '0')] due to not_if
2017-12-15 04:05:18,207 - Directory['/var/log/hadoop'] {'owner': 'root', 'create_parents': True, 'group': 'hadoop', 'mode': 0775, 'cd_access': 'a'}
2017-12-15 04:05:18,209 - Directory['/var/run/hadoop'] {'owner': 'root', 'create_parents': True, 'group': 'root', 'cd_access': 'a'}
2017-12-15 04:05:18,210 - Directory['/tmp/hadoop-hdfs'] {'owner': 'hdfs', 'create_parents': True, 'cd_access': 'a'}
2017-12-15 04:05:18,214 - File['/usr/hdp/current/hadoop-client/conf/commons-logging.properties'] {'content': Template('commons-logging.properties.j2'), 'owner': 'hdfs'}
2017-12-15 04:05:18,217 - File['/usr/hdp/current/hadoop-client/conf/health_check'] {'content': Template('health_check.j2'), 'owner': 'hdfs'}
2017-12-15 04:05:18,224 - File['/usr/hdp/current/hadoop-client/conf/log4j.properties'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop', 'mode': 0644}
2017-12-15 04:05:18,236 - File['/usr/hdp/current/hadoop-client/conf/hadoop-metrics2.properties'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
2017-12-15 04:05:18,237 - File['/usr/hdp/current/hadoop-client/conf/task-log4j.properties'] {'content': StaticFile('task-log4j.properties'), 'mode': 0755}
2017-12-15 04:05:18,238 - File['/usr/hdp/current/hadoop-client/conf/configuration.xsl'] {'owner': 'hdfs', 'group': 'hadoop'}
2017-12-15 04:05:18,244 - File['/etc/hadoop/conf/topology_mappings.data'] {'owner': 'hdfs', 'content': Template('topology_mappings.data.j2'), 'only_if': 'test -d /etc/hadoop/conf', 'group': 'hadoop'}
2017-12-15 04:05:18,293 - File['/etc/hadoop/conf/topology_script.py'] {'content': StaticFile('topology_script.py'), 'only_if': 'test -d /etc/hadoop/conf', 'mode': 0755}
2017-12-15 04:05:18,702 - Directory['/var/log/ambari-infra-solr'] {'owner': 'infra-solr', 'group': 'hadoop', 'create_parents': True, 'mode': 0755, 'cd_access': 'a'}
2017-12-15 04:05:18,704 - Directory['/var/run/ambari-infra-solr'] {'owner': 'infra-solr', 'group': 'hadoop', 'create_parents': True, 'mode': 0755, 'cd_access': 'a'}
2017-12-15 04:05:18,705 - Directory['/opt/ambari_infra_solr/data'] {'owner': 'infra-solr', 'group': 'hadoop', 'create_parents': True, 'mode': 0755, 'cd_access': 'a'}
2017-12-15 04:05:18,706 - Directory['/opt/ambari_infra_solr/data/resources'] {'owner': 'infra-solr', 'create_parents': True, 'group': 'hadoop', 'mode': 0755, 'cd_access': 'a'}
2017-12-15 04:05:18,707 - Directory['/usr/lib/ambari-infra-solr'] {'group': 'hadoop', 'cd_access': 'a', 'create_parents': True, 'recursive_ownership': True, 'owner': 'infra-solr', 'mode': 0755}
2017-12-15 04:05:18,707 - Changing owner for /usr/lib/ambari-infra-solr from 1025 to infra-solr
2017-12-15 04:05:18,707 - Changing group for /usr/lib/ambari-infra-solr from 1025 to hadoop
2017-12-15 04:05:19,030 - Directory['/etc/ambari-infra-solr/conf'] {'group': 'hadoop', 'cd_access': 'a', 'create_parents': True, 'mode': 0755, 'owner': 'infra-solr', 'recursive_ownership': True}
2017-12-15 04:05:19,031 - File['/var/log/ambari-infra-solr/solr-install.log'] {'content': '', 'owner': 'infra-solr', 'group': 'hadoop', 'mode': 0644}
2017-12-15 04:05:19,031 - Writing File['/var/log/ambari-infra-solr/solr-install.log'] because it doesn't exist
2017-12-15 04:05:19,031 - Changing owner for /var/log/ambari-infra-solr/solr-install.log from 0 to infra-solr
2017-12-15 04:05:19,032 - Changing group for /var/log/ambari-infra-solr/solr-install.log from 0 to hadoop
2017-12-15 04:05:19,045 - File['/etc/ambari-infra-solr/conf/infra-solr-env.sh'] {'owner': 'infra-solr', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0755}
2017-12-15 04:05:19,047 - File['/opt/ambari_infra_solr/data/solr.xml'] {'owner': 'infra-solr', 'content': InlineTemplate(...), 'group': 'hadoop'}
2017-12-15 04:05:19,049 - File['/etc/ambari-infra-solr/conf/log4j.properties'] {'owner': 'infra-solr', 'content': InlineTemplate(...), 'group': 'hadoop'}
2017-12-15 04:05:19,055 - File['/etc/ambari-infra-solr/conf/custom-security.json'] {'owner': 'infra-solr', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0640}
2017-12-15 04:05:19,056 - Execute['ambari-sudo.sh JAVA_HOME=/usr/lib/jvm/java /usr/lib/ambari-infra-solr-client/solrCloudCli.sh --zookeeper-connect-string sandbox.hortonworks.com:2181 --znode /infra-solr --create-znode --retry 30 --interval 5'] {}
2017-12-15 04:05:19,744 - Execute['ambari-sudo.sh JAVA_HOME=/usr/lib/jvm/java /usr/lib/ambari-infra-solr-client/solrCloudCli.sh --zookeeper-connect-string sandbox.hortonworks.com:2181/infra-solr --cluster-prop --property-name urlScheme --property-value http'] {}
2017-12-15 04:05:20,411 - Execute['ambari-sudo.sh JAVA_HOME=/usr/lib/jvm/java /usr/lib/ambari-infra-solr-client/solrCloudCli.sh --zookeeper-connect-string sandbox.hortonworks.com:2181 --znode /infra-solr --setup-kerberos-plugin'] {}

Command failed after 1 tries

Investigation:
[root@sandbox ~]# ls -ltra /usr/hdp/current/ranger-admin/contrib/solr_for_audit_setup/conf/solrconfig.xml
-r-xr--r-- 1 ranger ranger 73711 May 31  2017 /usr/hdp/current/ranger-admin/contrib/solr_for_audit_setup/conf/solrconfig.xml

Is Kerberos (a ticket) needed here?
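One way to narrow this down might be to check the ACLs on the znode that returned NoAuth (a sketch, assuming the stock HDP zkCli.sh location):

/usr/hdp/current/zookeeper-client/bin/zkCli.sh -server sandbox.hortonworks.com:2181 <<'EOF'
getAcl /infra-solr
getAcl /infra-solr/security.json
EOF

If getAcl itself also comes back with NoAuth, that already confirms the znode is locked down and the client needs to authenticate (e.g. as the infra-solr principal).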

Thursday, November 30, 2017

Configuring Hadoop Group Mapping against the Knox Demo LDAP on the HDP 2.6 Sandbox

Reference: https://docs.hortonworks.com/HDPDocuments/Ambari-2.6.0.0/bk_ambari-security/content/setting_up_hadoop_group_mappping_for_ldap_ad.html

1. If needed, add users and groups to /etc/knox/conf/users.ldif from Ambari:
Knox => Configs => Advanced users-ldif

2. Start the Knox Demo LDAP.
To start it from the command line:
sudo -u knox -i /usr/hdp/current/knox-server/bin/ldap.sh start
or
sudo -u knox -i java -jar /usr/hdp/current/knox-server/bin/ldap.jar /usr/hdp/current/knox-server/conf &

Verify:
yum install -y openldap-clients
ldapsearch -H 'ldap://sandbox-hdp.hortonworks.com:33389/' -x -D 'uid=admin,ou=people,dc=hadoop,dc=apache,dc=org' -w admin-password '(objectclass=person)' uid
ldapsearch -H 'ldap://sandbox-hdp.hortonworks.com:33389/' -x -D 'uid=admin,ou=people,dc=hadoop,dc=apache,dc=org' -w admin-password '(objectclass=groupOfNames)' member cn
ldapsearch -H 'ldap://sandbox-hdp.hortonworks.com:33389/' -x -D 'uid=admin,ou=people,dc=hadoop,dc=apache,dc=org' -w admin-password -b 'dc=hadoop,dc=apache,dc=org' '(&(objectclass=person)(uid=sam))'

3. From Ambari, add the following properties under HDFS => Configs => Custom core-site:
hadoop.security.group.mapping=org.apache.hadoop.security.LdapGroupsMapping
hadoop.security.group.mapping.ldap.bind.user=uid=admin,ou=people,dc=hadoop,dc=apache,dc=org
hadoop.security.group.mapping.ldap.url=ldap://sandbox-hdp.hortonworks.com:33389/dc=hadoop,dc=apache,dc=org
#hadoop.security.group.mapping.ldap.base=
hadoop.security.group.mapping.ldap.search.filter.user=(&(objectclass=person)(uid={0}))
hadoop.security.group.mapping.ldap.search.filter.group=(objectclass=groupofnames)
hadoop.security.group.mapping.ldap.search.attr.member=member
hadoop.security.group.mapping.ldap.search.attr.group.name=cn
# as a PASSWORD-type property
hadoop.security.group.mapping.ldap.bind.password=admin-password


If using the composite mapping:
hadoop.security.group.mapping=org.apache.hadoop.security.CompositeGroupsMapping
hadoop.security.group.mapping.providers=shell4services,ldap-demo4users
hadoop.security.group.mapping.provider.shell4services=org.apache.hadoop.security.ShellBasedUnixGroupsMapping
hadoop.security.group.mapping.provider.ldap-demo4users=org.apache.hadoop.security.LdapGroupsMapping
hadoop.security.group.mapping.provider.ldap-demo4users.ldap.url=ldap://sandbox-hdp.hortonworks.com:33389/dc=hadoop,dc=apache,dc=org
hadoop.security.group.mapping.provider.ldap-demo4users.ldap.bind.user=uid=admin,ou=people,dc=hadoop,dc=apache,dc=org
#hadoop.security.group.mapping.provider.ldap-demo4users.ldap.base=
hadoop.security.group.mapping.provider.ldap-demo4users.ldap.search.filter.user=(&(objectclass=person)(uid={0}))
hadoop.security.group.mapping.provider.ldap-demo4users.ldap.search.filter.group=(objectclass=groupofnames)
hadoop.security.group.mapping.provider.ldap-demo4users.ldap.search.attr.member=member
hadoop.security.group.mapping.provider.ldap-demo4users.ldap.search.attr.group.name=cn
# as a PASSWORD-type property
hadoop.security.group.mapping.provider.ldap-demo4users.ldap.bind.password=admin-password

4. Restart HDFS, YARN, and MapReduce2.
Because of the restart, the following commands should not be necessary:
sudo -u hdfs -i hdfs dfsadmin -refreshUserToGroupsMappings
sudo -u yarn -i yarn rmadmin -refreshUserToGroupsMappings

5. Verify: if everything worked, the following command should return something:
sudo -u hdfs -i hdfs groups sam

Ambari: failed with database inconsistency errors

Ambari occasionally reports errors like the one below; what exactly is "inconsistent"?

ERROR - Required config(s): ranger-hbase-security,ranger-hbase-policymgr-ssl,ranger-hbase-audit,ranger-hbase-plugin-properties is(are) not available for service HBASE with service config version 12 in cluster TEST

DatabaseConsistencyCheckHelper.java

# Where the error is raised
                  serviceConfigsFromStack.removeAll(serviceConfigsFromDB);
                  if (!serviceConfigsFromStack.isEmpty()) {
                    LOG.error("Required config(s): {} is(are) not available for service {} with service config version {} in cluster {}",
                            StringUtils.join(serviceConfigsFromStack, ","), serviceName, Integer.toString(serviceVersion), clusterName);
                    errorAvailable = true;
                  }

# The query used for the check
    String GET_SERVICES_WITH_CONFIGS_QUERY = "
SELECT c.cluster_name,
  cs.service_name,
  cc.type_name,
  sc.version
FROM clusterservices cs
  JOIN serviceconfig sc ON cs.service_name = sc.service_name AND cs.cluster_id = sc.cluster_id
  JOIN serviceconfigmapping scm ON sc.service_config_id = scm.service_config_id
  JOIN clusterconfig cc ON scm.config_id = cc.config_id AND sc.cluster_id = cc.cluster_id
  JOIN clusters c ON cc.cluster_id = c.cluster_id AND sc.stack_id = c.desired_stack_id
WHERE sc.group_id IS NULL
  AND sc.service_config_id = (SELECT max(service_config_id)
                              FROM serviceconfig sc2
                              WHERE sc2.service_name = sc.service_name AND sc2.cluster_id = sc.cluster_id)
GROUP BY c.cluster_name, cs.service_name, cc.type_name, sc.version
";

# Ambari log
07 Feb 2017 14:48:36,889  INFO [main] ClusterImpl:352 - Service config types loaded: {PIG=[pig-properties, pig-env, pig-log4j], KAFKA=[ranger-kafka-policymgr-ssl, kafka-log4j, kafka-env, kafka-broker, ranger-kafka-security, ranger-kafka-plugin-properties, ranger-kafka-audit], LOGSEARCH=[logsearch-service_logs-solrconfig, logfeeder-log4j, logsearch-admin-json, logsearch-env, logfeeder-env, logsearch-audit_logs-solrconfig, logfeeder-properties, logsearch-properties, logsearch-log4j], RANGER_KMS=[kms-properties, ranger-kms-security, ranger-kms-site, kms-site, kms-env, dbks-site, ranger-kms-audit, ranger-kms-policymgr-ssl, kms-log4j], MAPREDUCE2=[mapred-site, mapred-env], SLIDER=[slider-log4j, slider-env, slider-client], HIVE=[webhcat-env, ranger-hive-plugin-properties, hive-exec-log4j, ranger-hive-policymgr-ssl, hive-env, webhcat-site, hive-log4j, ranger-hive-audit, hive-site, webhcat-log4j, hiveserver2-site, hcat-env, ranger-hive-security], TEZ=[tez-env, tez-site], HBASE=[ranger-hbase-security, hbase-policy, hbase-env, hbase-log4j, hbase-site, ranger-hbase-policymgr-ssl, ranger-hbase-audit, ranger-hbase-plugin-properties], RANGER=[admin-properties, ranger-admin-site, usersync-properties, ranger-site, ranger-env, ranger-ugsync-site], OOZIE=[oozie-log4j, oozie-env, oozie-site], FLUME=[flume-env, flume-conf], MAHOUT=[mahout-log4j, mahout-env], HDFS=[ssl-server, hdfs-log4j, ranger-hdfs-audit, ranger-hdfs-plugin-properties, ssl-client, hdfs-site, ranger-hdfs-policymgr-ssl, hadoop-policy, ranger-hdfs-security, hadoop-env, core-site], AMBARI_METRICS=[ams-ssl-client, ams-ssl-server, ams-hbase-log4j, ams-hbase-policy, ams-hbase-security-site, ams-grafana-env, ams-hbase-env, ams-env, ams-log4j, ams-grafana-ini, ams-site, ams-hbase-site], SPARK=[spark-thrift-sparkconf, spark-log4j-properties, spark-defaults, spark-javaopts-properties, spark-metrics-properties, spark-hive-site-override, spark-env], SMARTSENSE=[hst-log4j, hst-server-conf, activity-zeppelin-shiro, activity-log4j, activity-zeppelin-site, anonymization-rules, activity-zeppelin-env, activity-zeppelin-interpreter, activity-env, activity-conf, hst-agent-conf], AMBARI_INFRA=[infra-solr-client-log4j, infra-solr-env, infra-solr-xml, infra-solr-log4j], YARN=[ranger-yarn-policymgr-ssl, yarn-site, ranger-yarn-audit, ranger-yarn-security, yarn-env, ranger-yarn-plugin-properties, capacity-scheduler, yarn-log4j], FALCON=[falcon-runtime.properties, falcon-log4j, falcon-client.properties, falcon-startup.properties, falcon-env], SQOOP=[sqoop-site, sqoop-env], ATLAS=[atlas-log4j, atlas-env, application-properties], ZOOKEEPER=[zoo.cfg, zookeeper-log4j, zookeeper-env], STORM=[ranger-storm-plugin-properties, storm-site, ranger-storm-audit, storm-cluster-log4j, storm-worker-log4j, ranger-storm-policymgr-ssl, ranger-storm-security, storm-env], GANGLIA=[ganglia-env], KNOX=[knoxsso-topology, ranger-knox-security, users-ldif, knox-env, ranger-knox-plugin-properties, gateway-log4j, gateway-site, ranger-knox-policymgr-ssl, ranger-knox-audit, topology, admin-topology, ldap-log4j], KERBEROS=[kerberos-env, krb5-conf], ACCUMULO=[accumulo-env, accumulo-log4j, client, accumulo-site]}

Incidentally, in Ambari 2.4.2 the check is run with the following Java command:
java -cp '/etc/ambari-server/conf:/usr/lib/ambari-server/*:/usr/share/java/postgresql-jdbc.jar' org.apache.ambari.server.checks.DatabaseConsistencyChecker
In Ambari 2.2.x it is CheckDatabaseHelper.

SQL used when fixing this (it identifies the serviceconfig rows mentioned in the error):
SELECT  service_name,
  max(service_config_id) AS service_config_id,
  max(version)           AS version
FROM serviceconfig
WHERE service_name IN ('HBASE') AND version IN (12)
GROUP BY service_name;


Trying Knox SSO on HDP 2.6.1 / Ambari 2.5.1.0 (Sandbox)

https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.2/bk_security/content/setting_up_knox_sso_for_ambari.html

What Knox SSO needs:

- Ambari Server must be configured for LDAP (ambari-server setup-ldap)
- and the users must already be imported (ambari-server sync-ldap)
- Knox and Ambari Server must be in the same domain
- The hostname must look like "{somehost}.{someorganisation}.{someTLD}",
  so sandbox.hortonworks.com is OK, but node1.localdomain is not

1. Knox configuration

knoxsso.redirect.whitelist.regex needs to be changed:
^https?:\/\/(sandbox-hdp\.hortonworks\.com|172\.18\.0\.2|172\.26\.74\.244|localhost|127\.0\.0\.1|0:0:0:0:0:0:0:1|::1):[0-9].*$

It may not work unless knoxsso.cookie.secure.only is set to false.

Export the Knox certificate in advance (it is needed during ambari-server setup-sso):
keytool -export -alias gateway-identity -rfc -file ./gateway.crt -keystore /usr/hdp/current/knox-server/data/security/keystores/gateway.jks
or
echo -n | openssl s_client -connect sandbox-hdp.hortonworks.com:8443 -showcerts 2>&1 | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' | tee ./gateway.crt

Double-check the CN just in case:
openssl x509 -noout -subject -in ./gateway.crt
subject= /C=US/ST=Test/L=Test/O=Hadoop/OU=Test/CN=sandbox-hdp.hortonworks.com

At this point, decide what to do with Ambari's "admin" (LOCAL) user.
If you just import as-is, the name collides and "admin" becomes an EXTERNAL user, which is a pain later if you turn SSO off or the Demo LDAP happens to be down.
So create another admin candidate via Knox => Advanced users-ldif:

# entry for sample user adminldap
dn: uid=adminldap,ou=people,dc=hadoop,dc=apache,dc=org
objectclass:top
objectclass:person
objectclass:organizationalPerson
objectclass:inetOrgPerson
cn: adminldap
sn: adminldap
uid: adminldap
userPassword:admin-password

Restart (Stop/Start) the Knox Demo LDAP from Ambari. It sometimes looks stopped when it actually is not.
Then:
#sudo -u hdfs kinit -kt /etc/security/keytabs/hdfs.headless.keytab hdfs-sandbox
sudo -u hdfs hdfs dfs -mkdir /user/adminldap
useradd adminldap

Ambari LDAP setup

With the Knox Demo LDAP running:
_LDAP_SERVER="sandbox.hortonworks.com:33389"
ambari-server setup-ldap --ldap-url=${_LDAP_SERVER} --ldap-user-class=person --ldap-user-attr=uid --ldap-group-class=groupofnames --ldap-ssl=false --ldap-secondary-url="" --ldap-referral="" --ldap-group-attr=cn --ldap-member-attr=member --ldap-dn=dn --ldap-base-dn=dc=hadoop,dc=apache,dc=org --ldap-bind-anonym=false --ldap-manager-dn=uid=admin,ou=people,dc=hadoop,dc=apache,dc=org --ldap-manager-password=admin-password --ldap-sync-username-collisions-behavior=skip --ldap-save-settings

* Unless you set this to "skip", the local "admin" gets replaced by the LDAP user "admin" (if you are fine with that, you do not have to pick skip).

Also, the following property needs to be added:
echo "authentication.ldap.pagination.enabled=false" >> /etc/ambari-server/conf/ambari.properties
Then restart Ambari Server:
ambari-server restart

Ambari sync-ldap

ambari-server sync-ldap --ldap-sync-admin-name=admin --ldap-sync-admin-password=admin --all
Despite the "--ldap-sync-" prefix, the admin name and password here are those of the Ambari local admin user.

Test that you can log in as an LDAP user.
If you created the extra admin candidate, grant it Ambari Admin now.

"ambari-server setup-sso"

Provider URL = https://sandbox-hdp.hortonworks.com:8443/gateway/knoxsso/api/v1/websso

If you paste the certificate without its first and last (BEGIN/END CERTIFICATE) lines, you get:
/var/log/ambari-server/ambari-server.log:08 Nov 2017 09:34:33,680 ERROR [ambari-client-thread-1470] Configuration:5133 - Unable to parse public certificate file. JWT auth will be disabled.

After ambari-server setup-sso, the configuration looks like this:
[root@sandbox ~]# grep -iw jwt /etc/ambari-server/conf/*
/etc/ambari-server/conf/ambari.properties:authentication.jwt.enabled=true
/etc/ambari-server/conf/ambari.properties:authentication.jwt.providerUrl=https://sandbox.hortonworks.com:8443/gateway/knoxsso/api/v1/websso
/etc/ambari-server/conf/ambari.properties:authentication.jwt.publicKey=/etc/ambari-server/conf/jwt-cert.pem

ambari-server restart

Wednesday, November 1, 2017

Trying to switch the HDP 2.6.1 Sandbox database from PostgreSQL to MySQL

Not sure whether this will actually work, but let's give it a try.

Reference: https://docs.hortonworks.com/HDPDocuments/Ambari-2.5.2.0/bk_ambari-administration/content/using_ambari_with_mysql.html

Check that the MySQL Connector is present

[root@sandbox ambari-server]# lsof -nPp `cat /var/run/ambari-server/ambari-server.pid` | grep mysql
java    20058 root  160r   REG              252,1    819803  1052266 /usr/share/java/mysql-connector-java-5.1.17.jar


[root@sandbox ambari-server]# zipgrep 'Bundle-Version' /usr/share/java/mysql-connector-java.jar

META-INF/MANIFEST.MF:Bundle-Version: 5.1.17
[root@sandbox ambari-server]# rm /usr/share/java/mysql-connector-java.jar
rm: remove symbolic link `/usr/share/java/mysql-connector-java.jar'? y
[root@sandbox ambari-server]# ln -s /usr/share/java/mysql-connector-java-5.1.37.jar /usr/share/java/mysql-connector-java.jar

Stop Ambari Server

[root@sandbox ~]# ambari-server stop
...

Change the database to MySQL

[root@sandbox ~]# ambari-server setup
Using python  /usr/bin/python
Setup ambari-server
Checking SELinux...
SELinux status is 'disabled'
Customize user account for ambari-server daemon [y/n] (n)?
Adjusting ambari-server permissions and ownership...
Checking firewall status...
WARNING: iptables is running. Confirm the necessary Ambari ports are accessible. Refer to the Ambari documentation for more details on ports.
OK to continue [y/n] (y)?
Checking JDK...
Do you want to change Oracle JDK [y/n] (n)?
Completing setup...
Configuring database...
Enter advanced database configuration [y/n] (n)? y
Configuring database...
==============================================================================
Choose one of the following options:
[1] - PostgreSQL (Embedded)
[2] - Oracle
[3] - MySQL / MariaDB
[4] - PostgreSQL
[5] - Microsoft SQL Server (Tech Preview)
[6] - SQL Anywhere
[7] - BDB
==============================================================================
Enter choice (1): 3
Hostname (localhost):
Port (3306):
Database name (ambari):
Username (ambari):
Enter Database Password (bigdata):
Configuring ambari database...
Configuring remote database connection properties...
WARNING: Before starting Ambari Server, you must run the following DDL against the database to create the schema: /var/lib/ambari-server/resources/Ambari-DDL-MySQL-CREATE.sql
Proceed with configuring remote database connection properties [y/n] (y)?
Extracting system views...
............
Adjusting ambari-server permissions and ownership...
Ambari Server 'setup' completed successfully.

TODO: Table-name case differs between PostgreSQL and MySQL? (ClusterHostMapping)

[root@sandbox ~]# vim /etc/my.cnf
...
lower_case_table_names = 1
[root@sandbox ~]# service mysqld restart
Stopping mysqld:                                           [  OK  ]
Starting mysqld:                                           [  OK  ]
[root@sandbox ~]#
Update: lower_case_table_names may affect other services (Hive, Oozie).

Create the Ambari database and Ambari user

[root@sandbox ~]# mysql -u root -phadoop
...
mysql>
CREATE DATABASE ambari;
CREATE USER 'ambari'@'%' IDENTIFIED BY 'bigdata';
GRANT ALL PRIVILEGES ON *.* TO 'ambari'@'%'; 
CREATE USER 'ambari'@'localhost' IDENTIFIED BY 'bigdata';
GRANT ALL PRIVILEGES ON *.* TO 'ambari'@'localhost';
CREATE USER 'ambari'@'sandbox.hortonworks.com' IDENTIFIED BY 'bigdata';
GRANT ALL PRIVILEGES ON *.* TO 'ambari'@'sandbox.hortonworks.com';
FLUSH PRIVILEGES;

Load the default schema

[root@sandbox ~]# mysql -u ambari -pbigdata ambari < /var/lib/ambari-server/resources/Ambari-DDL-MySQL-CREATE.sql

Create a dump, excluding the tables we do not care about

[root@sandbox ~]# pg_dump -a --column-inserts  -T alert_history -T host_role_command -T execution_command -T request -T role_success_criteria -T stage -T requestresourcefilter -T requestoperationlevel -T upgrade_item -T upgrade_item -T servicecomponent_history  -T upgrade  -T topology_logical_task -T topology_host_task -T topology_host_request -T topology_host_request -T topology_host_request -T topology_host_request -T topology_request -Uambari ambari -f ambari_data.sql

Transform it

[root@sandbox ~]# grep -oE '^INSERT INTO [^ ]+' ambari_data.sql | sort | uniq | sed 's/INSERT INTO/TRUNCATE/g' | sed 's/$/;/g' > ambari_delete.sql
[root@sandbox ~]# sed -i '1s/^/SET FOREIGN_KEY_CHECKS=0;\n/' ambari_delete.sql

[root@sandbox ~]# grep -vw '^SET' ambari_data.sql > ambari_data_no_SET.sql
[root@sandbox ~]# sed 's/(key, value)/(`key`, `value`)/g' ambari_data_no_SET.sql > ambari_data_no_SET_no_keywords.sql
[root@sandbox ~]# sed -i '1s/^/SET FOREIGN_KEY_CHECKS=0;\n/' ambari_data_no_SET_no_keywords.sql

Truncate, then insert

[root@sandbox ~]# mysql -u ambari -pbigdata ambari < ambari_delete.sql
[root@sandbox ~]# mysql -u ambari -pbigdata ambari < ambari_data_no_SET_no_keywords.sql
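As a rough sanity check afterwards, comparing row counts of a few tables between the two databases may be worthwhile (a sketch, not from the original steps; the table names are only examples from the default Ambari schema, and it assumes the same passwordless Postgres access that pg_dump used above):

for t in users clusters hosts; do
  echo "== $t =="
  psql -U ambari -d ambari -t -c "SELECT count(*) FROM $t"
  mysql -u ambari -pbigdata ambari -N -e "SELECT count(*) FROM $t"
done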

Start Ambari Server

[root@sandbox ~]# ambari-server start
...
[root@sandbox ~]# service postgresql stop

Stopping postgresql service:                               [  OK  ]


Problem:

01 Nov 2017 09:42:17,606  WARN [C3P0PooledConnectionPoolManager[identityToken->1bqrg1u9rx02fyjjc9tfe|5d708ef6]-HelperThread-#1] StatementUtils:223 - Statement close FAILED.
com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'OPTION SQL_SELECT_LIMIT=DEFAULT' at line 1
        at sun.reflect.GeneratedConstructorAccessor209.newInstance(Unknown Source)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
        at com.mysql.jdbc.Util.handleNewInstance(Util.java:411)
        at com.mysql.jdbc.Util.getInstance(Util.java:386)
        at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:1052)
        at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3597)
        at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3529)
        at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:1990)
        at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2151)
        at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2619)
...


This apparently happens when the JDBC driver does not match the MySQL server version.
Updating the JDBC driver made it stop.
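A minimal sketch of that driver swap, reusing the symlink approach from earlier (the 5.1.40 version number is only an example; use whatever connector jar you actually placed in /usr/share/java):

rm -f /usr/share/java/mysql-connector-java.jar
ln -s /usr/share/java/mysql-connector-java-5.1.40.jar /usr/share/java/mysql-connector-java.jar
ambari-server restart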

Friday, October 27, 2017

Searching Windows AD users from a Mac with ldapsearch + Kerberos (GSSAPI)

Edit krb5.conf

Add the AD realm and server address under [realms] (a hostname is probably better than an IP):
$ sudo vim /private/etc/krb5.conf
  HDP.LOCALDOMAIN = {
    admin_server = 192.168.0.21
    kdc = 192.168.0.21
  }

Log in and check the ticket

$ kinit
hosako@HDP.LOCALDOMAIN's password:
HW11970:~ hosako$ klist
Credentials cache: API:301A5EDD-3897-4E1E-A4CE-52C35E56D494
        Principal: hosako@HDP.LOCALDOMAIN

  Issued                Expires               Principal
Oct 27 08:09:07 2017  Oct 27 18:09:07 2017  krbtgt/HDP.LOCALDOMAIN@HDP.LOCALDOMAIN

Search for my own account

Using an IP address does not work:
$ ldapsearch -Y GSSAPI -R HDP.LOCALDOMAIN -U "hosako@hdp.localdomain" -b "dc=hdp,dc=localdomain" -h 192.168.0.21 "(sAMAccountName=hosako)"
SASL/GSSAPI authentication started
ldap_sasl_interactive_bind_s: Local error (-2)
additional info: SASL(-1): generic failure: GSSAPI Error:  Miscellaneous failure (see text (Server (krbtgt/168.0.21@HDP.LOCALDOMAIN) unknown while looking up ...

Using the FQDN works:
$ ldapsearch -LL -Y GSSAPI -b "dc=hdp,dc=localdomain" -h WIN-TEST.hdp.localdomain "(sAMAccountName=hosako)" dn
SASL/GSSAPI authentication started
SASL username: hosako@HDP.LOCALDOMAIN
SASL SSF: 112
SASL data security layer installed.
version: 1

dn: CN=Hajime Osako,CN=Users,DC=hdp,DC=localdomain
...

Search by Service Principal Name too:
$ ldapsearch -LL -Y GSSAPI -b "dc=hdp,dc=localdomain" -h WIN-TEST.hdp.localdomain "(serviceprincipalname=HTTP*)" dn
SASL/GSSAPI authentication started
SASL username: hosako@HDP.LOCALDOMAIN
SASL SSF: 112
SASL data security layer installed.
version: 1

dn: CN=HTTP/sandbox.hortonworks.com,OU=Hadoop,DC=hdp,DC=localdomain
...




Friday, October 20, 2017

Work in progress: building an HDP + AD test environment (LDAPS / Forest)


1. Create 1 Linux VM (Sandbox), 2 Windows VMs

Use private IPs (10.1.0.x) so the VMs can reach each other (which means editing their hosts files)

2. Set up AD as a new Forest on one of the Windows VMs (HDP.LOCALDOMAIN)

Add "Active Directory Domain Services" (AD CS should not be installed at the same time)
Then, during configuration, select "Add a new forest"

3. Set up ldaps (AD CS Configuration)

Ref: http://gregtechnobabble.blogspot.in/2012/11/enabling-ldap-ssl-in-windows-2012-part-1.html

  • Open Server Manager
  • Add roles and features
  • Role-based or feature-based installation
  • Tick "Active Directory Certificate Services" and required services
  • Nothing to add in "Select features" page, so "Next" twice
  • In Select role services, select "Certification Authority"
  • In Confirm installation selections, tick "Restart the destination server ..."
  • Install! and close
  • In Server Manager, in the top right, should see the flag with a warning icon
  • Click this, and select "Post-deployment Configuration", or click "AD CS"
  • Click on "Configure Active Directory Certificate Services..."
  • Check the credential which will be used to configure, then Next
  • Tick "Certification Authority"
  • If "Enterprise CA" cannot be selected, add your user to the "Enterprise Admins" group; after that it becomes selectable
  • Root CA, and Next
  • Create private key, Next
  • Default (sha1), Next
  • In "CA Name" page, review, Next.... until Confirmation page.
  • Click Configure
  • This shouldn't take long (a few seconds), then Close.
  • Reboot

Export root CA certificate (to use in truststore/browser etc.)

  • Start certsrv (Certification Authority console)
  • Right click the server, then Properties
  • From General tab, click "View Certificate" button
  • From Details tab, click "Copy to File" button
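To use the exported root CA from the Linux side (for example in a Java truststore for LDAPS clients), a minimal sketch, assuming the exported file was copied over as ./ad-root-ca.cer and that the truststore name and password are placeholders:

keytool -importcert -noprompt -alias ad-root-ca -file ./ad-root-ca.cer -keystore ./ad-truststore.jks -storepass changeit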

Verify:

ldapsearch -h 192.168.0.21 -D Hortonworks@hdp.localdomain -W -b 'DC=HDP,DC=LOCALDOMAIN' '(sAMAccountName=Hortonworks*)'
ldapsearch -H ldaps://192.168.0.21:636 -D Hortonworks@hdp.localdomain -W -b 'DC=HDP,DC=LOCALDOMAIN' '(sAMAccountName=Hortonworks*)'

If you get a certificate error because the certificate is self-signed, AD is at least serving on the LDAPS port.


4. Create a new container "hadoop"

ref: https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/Enable%20Kerberos%20in%20Ambari%20with%20Existing%20Active%20Directory


References:

https://gist.github.com/magnetikonline/0ccdabfec58eb1929c997d22e7341e45
https://rms-digicert.ne.jp/howto/install/install_directory-ldap-2012.html

How to dump the certificate:
openssl s_client -showcerts -connect 192.168.8.21:636

Thursday, October 12, 2017

Using the Knox Demo LDAP for Ranger Usersync on the HDP Sandbox 2.6.0

https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.2/bk_security/content/ranger_user_sync_settings.html

First, check that the Demo LDAP is running

[root@sandbox ~]# netstat -lopen | grep 33389
tcp        0      0 0.0.0.0:33389               0.0.0.0:*                   LISTEN      522        1325846    84375/java          off (0.00/0/0)

[root@sandbox ~]# ldapsearch -x -h `hostname -f`:33389 -D 'uid=admin,ou=people,dc=hadoop,dc=apache,dc=org' -w admin-password -s sub '(uid=admin)'
# extended LDIF
#
# LDAPv3
# base <> (default) with scope subtree
# filter: (uid=admin)
# requesting: ALL
#

# admin, people, hadoop.apache.org
dn: uid=admin,ou=people,dc=hadoop,dc=apache,dc=org
sn: Admin
cn: Admin
objectclass: top
objectclass: person
objectclass: organizationalPerson
objectclass: inetOrgPerson
userpassword:: YWRtaW4tcGFzc3dvcmQ=
uid: admin

# search result
search: 2
result: 0 Success

# numResponses: 2
# numEntries: 1



Configure Usersync in the Ambari UI

http://sandbox.hortonworks.com:8080/api/v1/clusters/Sandbox/configurations/service_config_versions?service_name=RANGER&is_current=true
...
"type" : "ranger-ugsync-site",
...
"properties" : {
    "ranger.usersync.credstore.filename" : "/usr/hdp/current/ranger-usersync/conf/ugsync.jceks",
    "ranger.usersync.enabled" : "true",
    ...
    "ranger.usersync.group.searchenabled" : "false",
    ...
    "ranger.usersync.group.usermapsyncenabled" : "false",
    ...
    "ranger.usersync.ldap.bindalias" : "testldapalias",
    "ranger.usersync.ldap.binddn" : "uid=admin,ou=people,dc=hadoop,dc=apache,dc=org",
    "ranger.usersync.ldap.bindkeystore" : "",
    "ranger.usersync.ldap.deltasync" : "true",
    "ranger.usersync.ldap.groupname.caseconversion" : "none",
    "ranger.usersync.ldap.ldapbindpassword" : "SECRET:ranger-ugsync-site:4:ranger.usersync.ldap.ldapbindpassword",
    "ranger.usersync.ldap.referral" : "ignore",
    "ranger.usersync.ldap.searchBase" : "dc=hadoop,dc=apache,dc=org",
    "ranger.usersync.ldap.url" : "ldap://sandbox.hortonworks.com:33389",
    "ranger.usersync.ldap.user.groupnameattribute" : "memberof, ismemberof",
    "ranger.usersync.ldap.user.nameattribute" : "uid",
    "ranger.usersync.ldap.user.objectclass" : "person",
    "ranger.usersync.ldap.user.searchbase" : "dc=hadoop,dc=apache,dc=org",
    "ranger.usersync.ldap.user.searchfilter" : "(objectclass=person)",
    "ranger.usersync.ldap.user.searchscope" : "sub",
    "ranger.usersync.ldap.username.caseconversion" : "none",
    ...
    "ranger.usersync.user.searchenabled" : "false"
},

Verify:
sudo -u ranger java -cp "/usr/hdp/current/ranger-admin/cred/lib/*" org.apache.ranger.credentialapi.buildks list -provider /usr/hdp/current/ranger-usersync/conf/ugsync.jceks


Monday, October 9, 2017

Running HDP's Knox Demo LDAP standalone

mkdir conf
cp /etc/knox/conf/users.ldif ./conf/
cp /usr/hdp/current/knox-server/bin/ldap.jar ./
cp /usr/hdp/current/knox-server/lib/gateway-demo-ldap-*.jar ./
cp /usr/hdp/current/knox-server/dep/apacheds-all-*.jar ./

# Run it once and let it fail on purpose, so that the ldap.cfg file gets created
java -jar ./ldap.jar

grep class.path ldap.cfg
class.path=./*.jar;../lib/*.jar;../dep/*.jar;../ext;../ext/*.jar

ls -l
total 10356
-rw-r--r-- 1 root root 10527192 Oct  9 01:20 apacheds-all-2.0.0-M16.jar
drwxr-xr-x 2 root root     4096 Oct  9 01:11 conf
-rw-r--r-- 1 root root    37589 Oct  9 01:18 gateway-demo-ldap-0.12.0.2.6.0.3-8.jar
-rw-r--r-- 1 root root      321 Oct  9 01:20 ldap.cfg
-rw-r--r-- 1 root root    23750 Oct  9 01:11 ldap.jar

nohup java -jar ./ldap.jar &
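To confirm the standalone instance is answering, the same kind of ldapsearch used elsewhere in this blog should work (requires openldap-clients):

ldapsearch -x -h localhost:33389 -D 'uid=admin,ou=people,dc=hadoop,dc=apache,dc=org' -w admin-password -s sub '(uid=admin)'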


Ref:
https://repo.hortonworks.com/service/local/repositories/central/content/org/apache/knox/gateway-demo-ldap-launcher/1.4.0/gateway-demo-ldap-launcher-1.4.0.jar
https://repo.hortonworks.com/service/local/repositories/central/content/org/apache/knox/gateway-demo-ldap/1.4.0/gateway-demo-ldap-1.4.0.jar

Wednesday, September 27, 2017

Using Oracle XE for Ambari on the HDP Sandbox (2.6.1)

https://community.hortonworks.com/content/supportkb/49135/how-to-install-oracle-express-xe-on-sandbox.html
https://docs.hortonworks.com/HDPDocuments/Ambari-2.2.2.0/bk_ambari_reference_guide/content/_using_ambari_with_oracle.html
http://www.oracle.com/webfolder/technetwork/tutorials/obe/db/sqldev/r31/datapump_OBE/datapump.html


1. Install Oracle XE

On the Docker host (Ubuntu), check that there is at least 2 GB of swap; if not, add it:
dd if=/dev/zero of=/var/swap.file count=2560 bs=1M
chmod go= /var/swap.file
mkswap /var/swap.file
grep -qw swap /etc/fstab || echo "/var/swap.file swap swap defaults 0 0" >> /etc/fstab
swapon /var/swap.file

Create a container from any CentOS-based image you have handy (the Sandbox one works):
docker run --name oracle --hostname "node100.localdomain" --network=hdp --ip=172.17.130.100  --privileged -d hdp/base:6.8  /usr/sbin/sshd -D

Inside the container, download the Oracle XE Linux 64-bit rpm zip file from the Oracle website and unzip it somewhere convenient.
Check that /dev/shm is at least 2 GB.
If it is not:
mount -t tmpfs shmfs -o size=2g /dev/shm

Install the Oracle rpm:
cd ./Disk1
rpm -ivh oracle-xe-11.2.0-*.0.x86_64.rpm
/etc/init.d/oracle-xe configure

Verify:

su - oracle
. /u01/app/oracle/product/11.2.0/xe/bin/oracle_env.sh
sqlplus / as sysdba


2. Configure Oracle for Ambari

CREATE USER ambari IDENTIFIED BY bigdata default tablespace USERS temporary tablespace TEMP;
GRANT unlimited tablespace to ambari;
GRANT create session to ambari;
GRANT create TABLE to ambari;
GRANT create SEQUENCE to ambari;

To create a new database:

sqlplus ambari/bigdata < /var/lib/ambari-server/resources/Ambari-DDL-Oracle-CREATE.sql

To import an existing database:

SELECT directory_name, directory_path FROM dba_directories WHERE directory_name='DATA_PUMP_DIR';
/u01/app/oracle/admin/XE/dpdump/

mv ambari.dmp /u01/app/oracle/admin/XE/dpdump/

If port 1521 is not mapped yet:
. ./start_hdp.sh
f_port_forward 1521 sandbox.hortonworks.com 1521
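If you are not using the start_hdp.sh helper above, a plain SSH port forward from the PC/Mac is one alternative (a sketch; the docker-host address is a placeholder and it assumes sandbox.hortonworks.com resolves on that host):

ssh -L 1521:sandbox.hortonworks.com:1521 root@<docker-host>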

Start Oracle SQL Developer on your PC/Mac.
Create a New Connection (as the SYSTEM user).
From View => DBA, click the add-connection button and reuse the connection created above.
Right-click Data Pump and choose the Data Pump Import Wizard.
In Step 1, type the .dmp file name under File Names (Type of Import: Tables or Schema).
In Step 2, select all tables.
In Step 3, set the Schema to AMBARI under Re-Map Schemas and the Destination to USERS under Re-Map Tablespaces.
In Step 4, set Action On Table if Table Exists to Replace (a command-line alternative with impdp is sketched below).
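As an alternative to the wizard, the same import could probably be done with the impdp utility on the Oracle host (a sketch; SOURCE_SCHEMA and SOURCE_TBS are placeholders for whatever schema and tablespace are recorded in the dump, and it assumes ambari.dmp is already in DATA_PUMP_DIR as above):

su - oracle
. /u01/app/oracle/product/11.2.0/xe/bin/oracle_env.sh
impdp system directory=DATA_PUMP_DIR dumpfile=ambari.dmp remap_schema=SOURCE_SCHEMA:AMBARI remap_tablespace=SOURCE_TBS:USERS table_exists_action=replace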

tail -f /u01/app/oracle/admin/XE/dpdump/IMPORT.LOG

If it worked, you should see lots of tables in SQL Developer under Other Users => AMBARI => Tables (Filtered).

After creating a new database or importing, run ambari-server setup.

Before that:
ln -s /u01/app/oracle/product/11.2.0/xe/jdbc/lib/ojdbc6.jar /usr/share/java/ojdbc6.jar
echo 'server.jdbc.driver.path=/usr/share/java/ojdbc6.jar' >> /etc/ambari-server/conf/ambari.properties

If you imported an existing database, the admin user's password needs to be changed:
[root@sandbox ~]# su - oracle
-bash-4.1$ . /u01/app/oracle/product/11.2.0/xe/bin/oracle_env.sh
-bash-4.1$ sqlplus ambari/bigdata

SQL*Plus: Release 11.2.0.2.0 Production on Wed Sep 27 06:23:08 2017

Copyright (c) 1982, 2011, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Express Edition Release 11.2.0.2.0 - 64bit Production

SQL> UPDATE users SET user_password='538916f8943ec225d97a9a86a2c6ec0818c1cd400e09e03b660fdaaec4af29ddbb6f2b1033b81b00' WHERE user_name='admin' and user_type='LOCAL';

1 row updated.

Start the Ambari setup:
ambari-server stop
ambari-server setup
Using python  /usr/bin/python
Setup ambari-server
Checking SELinux...
SELinux status is 'disabled'
Customize user account for ambari-server daemon [y/n] (n)?
Adjusting ambari-server permissions and ownership...
Checking firewall status...
WARNING: iptables is running. Confirm the necessary Ambari ports are accessible. Refer to the Ambari documentation for more details on ports.
OK to continue [y/n] (y)?
Checking JDK...
Do you want to change Oracle JDK [y/n] (n)?
Completing setup...
Configuring database...
Enter advanced database configuration [y/n] (n)? y
Configuring database...
==============================================================================
Choose one of the following options:
[1] - PostgreSQL (Embedded)
[2] - Oracle
[3] - MySQL / MariaDB
[4] - PostgreSQL
[5] - Microsoft SQL Server (Tech Preview)
[6] - SQL Anywhere
[7] - BDB
==============================================================================
Enter choice (2): 2
Hostname (localhost):
Port (1521):
Select Oracle identifier type:
1 - Service Name
2 - SID
(1): 2
SID (XE):
Username (ambari):
Enter Database Password (bigdata):
Configuring ambari database...
Configuring remote database connection properties...
WARNING: Before starting Ambari Server, you must run the following DDL against the database to create the schema: /var/lib/ambari-server/resources/Ambari-DDL-Oracle-CREATE.sql'
Proceed with configuring remote database connection properties [y/n] (y)?
Extracting system views...
............
Adjusting ambari-server permissions and ownership...
Ambari Server 'setup' completed successfully.

ambari-server start

Note: if you stopped the container and need to start Oracle again:
[root@sandbox ~]# rm -rf /var/tmp/.oracle
[root@sandbox ~]# mount -t tmpfs shmfs -o size=2g /dev/shm
[root@sandbox ~]# su - oracle
-bash-4.1$ . /u01/app/oracle/product/11.2.0/xe/bin/oracle_env.sh
-bash-4.1$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.2.0 Production on Tue Oct 3 12:30:23 2017

Copyright (c) 1982, 2011, Oracle.  All rights reserved.

Connected to an idle instance.

SQL> startup
ORACLE instance started.

Total System Global Area 1068937216 bytes
Fixed Size                  2233344 bytes
Variable Size             729811968 bytes
Database Buffers          331350016 bytes
Redo Buffers                5541888 bytes
Database mounted.
Database opened.
SQL> Disconnected from Oracle Database 11g Express Edition Release 11.2.0.2.0 - 64bit Production
-bash-4.1$ lsnrctl start


Note 2: with HDF, a strange error appears:
30 Jan 2018 01:23:53,156  WARN [Stack Version Loading Thread] RepoVdfCallable:142 - Could not load version definition for HDP-2.6 identified by http://public-repo-1.hortonworks.com/HDP/ubuntu12/2.x/updates/2.6.4.0/HDP-2.6.4.0-91.xml. null
javax.xml.bind.UnmarshalException
 - with linked exception:
[org.xml.sax.SAXParseException; lineNumber: 54; columnNumber: 15; cvc-complex-type.2.4.d: Invalid content was found starting with element 'tags'. No child element is expected at this point.]
        at com.sun.xml.bind.v2.runtime.unmarshaller.UnmarshallerImpl.handleStreamException(UnmarshallerImpl.java:431)
        at com.sun.xml.bind.v2.runtime.unmarshaller.UnmarshallerImpl.unmarshal0(UnmarshallerImpl.java:368)
        at com.sun.xml.bind.v2.runtime.unmarshaller.UnmarshallerImpl.unmarshal(UnmarshallerImpl.java:338)
        at org.apache.ambari.server.state.repository.VersionDefinitionXml.load(VersionDefinitionXml.java:442)
...
30 Jan 2018 01:23:53,238 ERROR [main] AmbariServer:1073 - Failed to run the Ambari Server
org.apache.ambari.server.AmbariException: An error occured during updating current repository versions with stack repositories.
        at org.apache.ambari.server.stack.UpdateActiveRepoVersionOnStartup.process(UpdateActiveRepoVersionOnStartup.java:99)
        at org.apache.ambari.server.orm.AmbariJpaLocalTxnInterceptor.invoke(AmbariJpaLocalTxnInterceptor.java:128)
        at org.apache.ambari.server.controller.AmbariServer.main(AmbariServer.java:1061)
Caused by: java.lang.NullPointerException
        at org.apache.ambari.server.stack.UpdateActiveRepoVersionOnStartup.updateRepoVersion(UpdateActiveRepoVersionOnStartup.java:106)
        at org.apache.ambari.server.stack.UpdateActiveRepoVersionOnStartup.process(UpdateActiveRepoVersionOnStartup.java:92)
        ... 2 more

Workaround:
ambari-server install-mpack --mpack=http://public-repo-1.hortonworks.com/HDF/centos6/3.x/updates/3.0.2.0/tars/hdf_ambari_mp/hdf-ambari-mpack-3.0.2.0-76.tar.gz --verbose