Tuesday, March 5, 2019

Ambari HDP > Cannot start the HDFS NameNode

Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/namenode.py", line 408, in <module>
    NameNode().execute()
  File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 352, in execute
    method(env)
  File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/namenode.py", line 141, in start
    upgrade_suspended=params.upgrade_suspended, env=env)
  File "/usr/lib/ambari-agent/lib/ambari_commons/os_family_impl.py", line 89, in thunk
    return fn(*args, **kwargs)
  File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/hdfs_namenode.py", line 173, in namenode
    create_log_dir=True
  File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/utils.py", line 277, in service
    Execute(daemon_cmd, not_if=process_id_exists_command, environment=hadoop_env_exports)
  File "/usr/lib/ambari-agent/lib/resource_management/core/base.py", line 166, in __init__
    self.env.run()
  File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 160, in run
    self.run_action(resource, action)
  File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 124, in run_action
    provider_action()
  File "/usr/lib/ambari-agent/lib/resource_management/core/providers/system.py", line 263, in action_run
    returns=self.resource.returns)
  File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 72, in inner
    result = function(command, **kwargs)
  File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 102, in checked_call
    tries=tries, try_sleep=try_sleep, timeout_kill_strategy=timeout_kill_strategy, returns=returns)
  File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 150, in _call_wrapper
    result = _call(command, **kwargs_copy)
  File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 314, in _call
    raise ExecutionFailed(err_msg, code, out, err)
resource_management.core.exceptions.ExecutionFailed: Execution of 'ambari-sudo.sh su hdfs -l -s /bin/bash -c 'ulimit -c unlimited ;  /usr/hdp/2.6.5.0-292/hadoop/sbin/hadoop-daemon.sh --config /usr/hdp/2.6.5.0-292/hadoop/conf start namenode'' returned 1. starting namenode, logging to /var/log/hadoop/hdfs/hadoop-hdfs-namenode-node1.sethdesktop.localdomain.out
Error occurred during initialization of VM
GC triggered before VM initialization completed. Try increasing NewSize, current value 192K.

Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/namenode.py", line 408, in <module>
    NameNode().execute()
  File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 352, in execute
    method(env)
  File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/namenode.py", line 141, in start
    upgrade_suspended=params.upgrade_suspended, env=env)
  File "/usr/lib/ambari-agent/lib/ambari_commons/os_family_impl.py", line 89, in thunk
    return fn(*args, **kwargs)
  File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/hdfs_namenode.py", line 121, in namenode
    format_namenode()
  File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/hdfs_namenode.py", line 341, in format_namenode
    logoutput=True
  File "/usr/lib/ambari-agent/lib/resource_management/core/base.py", line 166, in __init__
    self.env.run()
  File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 160, in run
    self.run_action(resource, action)
  File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 124, in run_action
    provider_action()
  File "/usr/lib/ambari-agent/lib/resource_management/core/providers/system.py", line 263, in action_run
    returns=self.resource.returns)
  File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 72, in inner
    result = function(command, **kwargs)
  File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 102, in checked_call
    tries=tries, try_sleep=try_sleep, timeout_kill_strategy=timeout_kill_strategy, returns=returns)
  File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 150, in _call_wrapper
    result = _call(command, **kwargs_copy)
  File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 314, in _call
    raise ExecutionFailed(err_msg, code, out, err)
resource_management.core.exceptions.ExecutionFailed: Execution of 'hdfs --config /usr/hdp/2.6.5.0-292/hadoop/conf namenode -format -nonInteractive' returned 1. Error occurred during initialization of VM
GC triggered before VM initialization completed. Try increasing NewSize, current value 192K.


Ambari 2.7.3 with HDP 2.6.5

As a workaround, add export HADOOP_CLIENT_OPTS="-XX:NewSize=1024k" at the beginning of the hadoop-env template.
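
For reference, the top of the hadoop-env template then looks something like this (a minimal sketch; only the export line comes from the note above):

# Workaround for "GC triggered before VM initialization completed. Try increasing NewSize"
export HADOOP_CLIENT_OPTS="-XX:NewSize=1024k"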

Friday, December 21, 2018

Grav on Docker

This has nothing to do with Hadoop or HDP, but I am testing whether Grav can serve as a substitute LMS.

https://getgrav.org/
https://github.com/getgrav/docker-grav

Install (first time only)

mkdir getgrav
cd getgrav
curl -O https://raw.githubusercontent.com/getgrav/docker-grav/master/Dockerfile
docker build -t grav:latest .

Start

docker run -t -i -d \
  -v /sys/fs/cgroup:/sys/fs/cgroup:ro -v /var/tmp/share:/var/tmp/share \
  -p 8888:80/tcp --name=grav grav:latest

The -v options are probably not necessary.
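
To confirm Grav is responding (a quick sanity check against the port mapped above):

curl -sI http://localhost:8888/ | head -1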

Things I did first

  • Added the learn2 theme, then Enable and Activate
  • Configuration => Site: updated Site Title etc.
  • apt-get install vim -y
  • https://github.com/getgrav/grav-theme-learn2/blob/develop/templates/partials/logo.html.twig
    Tried https://maketext.io/ as well, but ended up with:
    <text fill="white" x="0" y="85" font-size="124">Xxxxx</text>


Memo: Things to set up before using Google BigQuery

Ref: https://cloud.google.com/bigquery/docs/quickstarts/quickstart-web-ui
https://www.youtube.com/watch?v=qqbYrQGSibQ
https://codelabs.developers.google.com/codelabs/cloud-bigquery-wikipedia/#0

Set up the cloud side first

Clicking [ ENABLE THE API ] tells you to create a project.
When trying to create a New Project from the Google Cloud Platform console:

Google Cloud Platform service has been disabled. Please contact your administrator to restore service in G Suite Admin console.
https://stackoverflow.com/questions/45603145/unable-to-create-project-in-google-cloud-cloud-service-disabled-by-admin-plea
Check in your GSuite Admin console Apps -> Additional Google services if Google Developers Console is enabled:


Create a dataset (schema)

https://cloud.google.com/bigquery/docs/quickstarts/quickstart-web-ui?hl=en_US&_ga=2.91297939.-865685195.1545114774#create_a_dataset

The first time, open OPEN THE SERVICE ACCOUNTS PAGE from the link above.
To change a Role afterwards, use the IAM page.
BigQuery Admin and Storage Admin may be required.
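
If granting the roles from the CLI instead, something like the following should work (the project ID and service account are placeholders):

gcloud projects add-iam-policy-binding <project_id> \
    --member="serviceAccount:<sa_name>@<project_id>.iam.gserviceaccount.com" \
    --role="roles/bigquery.admin"
# repeat with --role="roles/storage.admin" if needed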

Note: a dataset's location cannot be changed after creation.

To set permissions per dataset (it does not seem possible per table):
https://cloud.google.com/bigquery/docs/dataset-access-controls


Create a JSON key file for authentication

This is tedious.
https://cloud.google.com/iam/docs/creating-managing-service-accounts
https://www.magellanic-clouds.com/blocks/guide/create-gcp-service-account-key/
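
A rough CLI equivalent (names are placeholders; the key is written to key.json):

gcloud iam service-accounts create <sa_name>
gcloud iam service-accounts keys create key.json \
    --iam-account=<sa_name>@<project_id>.iam.gserviceaccount.com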

Create a Google Cloud Storage bucket

How-to: https://cloud.google.com/storage/docs/creating-buckets
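
From the CLI it would be roughly the following (bucket name and location are placeholders; see the location note further below):

gsutil mb -l us-central1 gs://<bucket_name>/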

Incidentally, for uploading files (the screenshot that was here is omitted):
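
From the CLI, an upload would be something like this (bucket name is a placeholder):

gsutil cp ./some_file.csv gs://<bucket_name>/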
To create a table from a file, select the dataset (schema) first, then Create Table (screenshots omitted).
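
A rough CLI equivalent, using the "bq" command introduced below (--autodetect infers the schema; all names are placeholders):

bq load --autodetect <dataset_name>.<new_table> gs://<bucket_name>/some_file.csv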
Actually using it produced the error below. Possibly because the bucket's location was set to Regional?
com.google.cloud.bigquery.BigQueryException: Cannot read in location: asia-east1
        at com.google.cloud.bigquery.spi.v2.HttpBigQueryRpc.translate(HttpBigQueryRpc.java:102)
        at com.google.cloud.bigquery.spi.v2.HttpBigQueryRpc.getQueryResults(HttpBigQueryRpc.java:428)
        at com.google.cloud.bigquery.BigQueryImpl$23.call(BigQueryImpl.java:909)
        at com.google.cloud.bigquery.BigQueryImpl$23.call(BigQueryImpl.java:904)
        at com.google.api.gax.retrying.DirectRetryingExecutor.submit(DirectRetryingExecutor.java:105)
        at com.google.cloud.RetryHelper.run(RetryHelper.java:76)
        at com.google.cloud.RetryHelper.runWithRetries(RetryHelper.java:50)
        at com.google.cloud.bigquery.BigQueryImpl.getQueryResults(BigQueryImpl.java:903)
        at com.google.cloud.bigquery.BigQueryImpl.getQueryResults(BigQueryImpl.java:887)
        at com.google.cloud.bigquery.Job$1.call(Job.java:329)
        at com.google.cloud.bigquery.Job$1.call(Job.java:326)
        at com.google.api.gax.retrying.DirectRetryingExecutor.submit(DirectRetryingExecutor.java:105)
        at com.google.cloud.RetryHelper.run(RetryHelper.java:76)
        at com.google.cloud.RetryHelper.poll(RetryHelper.java:64)
        at com.google.cloud.bigquery.Job.waitForQueryResults(Job.java:325)
        at com.google.cloud.bigquery.Job.waitFor(Job.java:240)

https://issuetracker.google.com/issues/76127552#comment11
Is it still necessary to use us-central1?
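
If so, the dataset location can be pinned at creation time with bq's global --location flag (names are placeholders):

bq --location=us-central1 mk --dataset <project_id>:<dataset_name>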

TODO: Configure quotas

How-to: https://cloud.google.com/bigquery/docs/custom-quotas

Open the quota management page: https://console.cloud.google.com/iam-admin/quotas
Select BigQuery API from the Service pull-down and change the required limits.

Install the "bq" command

How-to: https://cloud.google.com/sdk/docs/

Download: https://dl.google.com/dl/cloudsdk/channels/rapid/downloads/google-cloud-sdk-260.0.0-darwin-x86_64.tar.gz
It is better to move the extracted folder to a proper location; if you leave it under Downloads, that is the directory that ends up on PATH...

Command reference: https://cloud.google.com/bigquery/docs/reference/bq-cli-reference
Quickstart: https://cloud.google.com/bigquery/docs/quickstarts/quickstart-command-line

Help examples: "bq show --help" or "bq ls --help | less"

List tables: bq ls --max_results=1000 --format=prettyjson project_id:dataset_name    # the project's FriendlyName does not work here

none:       ...
pretty:     formatted table output  
sparse:     simpler table output  
prettyjson: easy-to-read JSON format  
json:       maximally compact JSON  
csv:        csv format with header

Useful queries

https://cloud.google.com/bigquery/docs/information-schema-datasets
https://cloud.google.com/bigquery/docs/information-schema-tables

-- SHOW DATABASES
SELECT catalog_name, schema_name, location FROM INFORMATION_SCHEMA.SCHEMATA;

-- SHOW TABLES
SELECT * FROM <table_schema>.INFORMATION_SCHEMA.TABLES;
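
These can also be run from the CLI; INFORMATION_SCHEMA requires standard SQL, hence the flag:

bq query --nouse_legacy_sql 'SELECT catalog_name, schema_name, location FROM INFORMATION_SCHEMA.SCHEMATA'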


Java API

https://cloud.google.com/bigquery/docs/quickstarts/quickstart-client-libraries#client-libraries-install-java
https://cloud.google.com/bigquery/docs/tables

java -classpath ...  com.google.cloud.examples.bigquery.BigQueryExample


Friday, November 16, 2018

Docker (remote) API

Ref: https://docs.docker.com/develop/sdk/
Ref: https://docs.docker.com/develop/sdk/examples/

root@ubuntu:~# docker -v
Docker version 17.05.0-ce, build 89658be

With Docker version 17.05, the API version is 1.29.
Ref: https://docs.docker.com/engine/api/v1.29/


Output a specific container's logs

https://docs.docker.com/engine/api/v1.29/#operation/ContainerLogs

curl --unix-socket /var/run/docker.sock "http:/v1.29/containers/21b9f7a811ee/logs?stdout=1"

List containers

curl -s -f --unix-socket /var/run/docker.sock http:/v1.29/containers/json | python -m json.tool

Filtering the list

Ref: https://stackoverflow.com/questions/28054203/docker-remote-api-filter-exited

curl -s -f --unix-socket /var/run/docker.sock "http:/v1.29/containers/json" -G --data-urlencode 'all=1' --data-urlencode 'filters={"status":["running"]}' | python -m json.tool

Without double quotes (in the filters JSON), it returned a 500 error.
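
Along the same lines, listing only exited containers should look like this (an untested sketch based on the Stack Overflow reference above):

curl -s -f --unix-socket /var/run/docker.sock "http:/v1.29/containers/json" -G --data-urlencode 'all=1' --data-urlencode 'filters={"status":["exited"]}' | python -m json.tool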

Wednesday, October 31, 2018

HAProxy notes

Check the version and the config file in use

[root@node1 ~]# ps -elf | grep haproxy
4 S root     25713     1  0  80   0 - 11182 do_wai Oct30 ?        00:00:00 /usr/sbin/haproxy-systemd-wrapper -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid
0 S root     25714 25713  0  80   0 - 12730 do_wai Oct30 ?        00:00:00 /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -Ds
1 S root     25715 25714  0  80   0 - 12730 ep_pol Oct30 ?        00:01:14 /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -Ds

[root@node1 ~]# /usr/sbin/haproxy -v
HA-Proxy version 1.5.18 2016/05/10

Copyright 2000-2016 Willy Tarreau <willy@haproxy.org>



Enabling logging:
[root@node1 log]# vim /etc/rsyslog.conf
    ...
    $ModLoad imudp
    $UDPServerRun 514

[root@node1 log]# service rsyslog restart
Redirecting to /bin/systemctl restart rsyslog.service
[root@node1 log]# service haproxy restart

Redirecting to /bin/systemctl restart haproxy.service
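
Note: depending on the "log" directive in haproxy.cfg (e.g. log 127.0.0.1 local2), rsyslog may also need a rule like the following; the local2 facility here is an assumption, not taken from the config above:

# /etc/rsyslog.d/haproxy.conf
local2.*    /var/log/haproxy.log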

How to interpret the health check log format, and log-related config options

Refs:
https://cbonte.github.io/haproxy-dconv/1.5/configuration.html#8.2
https://cbonte.github.io/haproxy-dconv/1.7/configuration.html#8.2
https://www.haproxy.com/documentation/aloha/8-0/traffic-management/lb-layer7/health-checks/


Example config (part):
frontend postgres_tcp_front
  mode tcp
  bind *:<some port>
  option tcplog
  log global

  default_backend postgres_tcp_backend

backend postgres_tcp_backend
  option log-health-checks
  mode tcp
  option httpchk GET / HTTP/1.1
  http-check expect status 200

  server <server name> <server address>:5432 resolvers some_dns check inter 2000 rise 2 fall 3 port XXXX

tcplog = "TCP format, which is more advanced. This format is enabled when "option
    tcplog" is set on the frontend"
inter = interval between two consecutive health checks
rise = "Number of consecutive valid health checks before considering the server as UP"
port = the TCP port the health check connects to. If not set, the port from the server line is used; if set, it overrides it


Example log line:
[WARNING] 302/165834 (9050) : Health check for server postgres_tcp_backend/<server name> failed, reason: Layer7 wrong status, code: 503, info: "HTTP status check returned code <3C>503<3E>", check duration: 12ms, status: 0/2 DOWN.

302 = day of the year (days since 1st Jan)
165834 = hhmmss
(9050) = PID
0/2 DOWN = 0 successful checks out of the 2 (from the 'rise' setting) needed before the server is considered UP again

Wednesday, October 10, 2018

Slightly modifying Scala code or classes

Using the same approach as above, let's try modifying Scala as well.

Install Scala

function f_setup_scala() {
    local _ver="${1:-2.12.3}"
    local _extract_dir="${2:-/opt}"
    local _inst_dir="${3:-/usr/local/scala}"

    if [ ! -d "${_extract_dir%/}/scala-${_ver}" ]; then
        if [ ! -s "${_extract_dir%/}/scala-${_ver}.tgz" ]; then
            curl --retry 3 -C - -o "${_extract_dir%/}/scala-${_ver}.tgz" "https://downloads.lightbend.com/scala/${_ver}/scala-${_ver}.tgz" || return $?
        fi
        tar -xf "${_extract_dir%/}/scala-${_ver}.tgz" -C "${_extract_dir%/}/" || return $?
        chmod a+x ${_extract_dir%/}/scala-${_ver}/bin/*
    fi
    [ -d "${_inst_dir%/}" ] || ln -s "${_extract_dir%/}/scala-${_ver}" "${_inst_dir%/}"
    export SCALA_HOME=${_inst_dir%/}
    export PATH=$PATH:$SCALA_HOME/bin
}
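
Usage example (the version and directories fall back to the defaults above):

f_setup_scala            # installs Scala 2.12.3 under /opt and symlinks /usr/local/scala
f_setup_scala 2.12.8     # or specify another version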

After creating the class file you want to rewrite, compile it

vim TestUtils.scala

export CLASSPATH=...(snip)...

scalac TestUtils.scala

If the CLASSPATH should be the same as a running process's, the following function can be used

function f_javaenvs() {
    local _port="${1}"
    # Find the PID of the process listening on the given port
    local _p=`lsof -ti:${_port}`
    if [ -z "${_p}" ]; then
        echo "Nothing running on port ${_port}"
        return 11
    fi
    local _user="`stat -c '%U' /proc/${_p}`"
    # Derive JAVA_HOME from the java binary of the running process
    local _dir="$(dirname `readlink /proc/${_p}/exe` 2>/dev/null)"
    export JAVA_HOME="$(dirname $_dir)"
    # Extract java.class.path from the running JVM via jcmd
    export CLASSPATH=".:`sudo -u ${_user} $JAVA_HOME/bin/jcmd ${_p} VM.system_properties | sed -nr 's/^java.class.path=(.+$)/\1/p' | sed 's/[\]:/:/g'`"
}
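
For example, to reuse the CLASSPATH of whatever is listening on port 8080 (the port number is just an example):

f_javaenvs 8080 && scalac TestUtils.scala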

Find the jar file that needs to be replaced

function f_jargrep() {
    local _cmd="jar -tf"
    # Fall back to 'less' (lesspipe can list jar contents) if 'jar' is not available
    which jar &>/dev/null || _cmd="less"
    find -L ${2:-./} -type f -name '*.jar' -print0 | xargs -0 -n1 -I {} bash -c ''${_cmd}' {} | grep -w '$1' && echo {}'
}
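
For example, to find which jar under /usr/local/test/lib contains the class:

f_jargrep 'TestUtils.class' /usr/local/test/lib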

After making a backup (important!), replace the class in the jar

$JAVA_HOME/bin/jar -uf /usr/local/test/lib/test.jar com/test/utils/TestUtils*.class

Verify

$JAVA_HOME/bin/jar -tvf /usr/local/test/lib/test.jar | grep TestUtils
$JAVA_HOME/bin/javap -cp /usr/local/test/lib/test.jar -private com.test.utils.TestUtils

Monday, August 6, 2018

FreeIPA with Ambari 2.7.0

FreeIPA is officially supported starting from Ambari 2.7.0!


I made FreeIPA install commands for my home-made Docker script:
https://github.com/hajimeo/samples/blob/558d31e7f4030f18b34ede7738ec57fe4ad5b7a7/bash/setup_security.sh#L1154-L1181

Unless IPv6 is enabled on the loopback interface, the installation does not succeed, and unless the D-Bus service is restarted, commands during the installation time out (a Docker bug?).


After the installation, it looks like this.

After that, Kerberize the cluster with the usual wizard from Ambari.
TODO: I could not figure out how to satisfy "A password policy is in place that sets no expiry for created principals", so for now I set the expiry to 10 years.
Also, type admin for the Admin Principal.
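
A sketch of that 10-year change, assuming the default global_policy applies to the created principals (--maxlife is in days):

ipa pwpolicy-mod global_policy --maxlife=3650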

Impressions:
It stopped working after two or three days; apparently the KDC database file could not be found.
Reinstalled and investigating.


References:

https://github.com/apache/ambari/blob/6fce9b1ed5686814aa13454144e4b3ce89ad9b31/ambari-server/docs/security/kerberos/kerberos_service.md
Where Ambari creates a principal (org.apache.ambari.server.serveraction.kerberos.IPAKerberosOperationHandler#createPrincipal):
https://github.com/apache/ambari/blob/6f223af4b1cfe73b57041453e998a70619c394d0/ambari-server/src/main/java/org/apache/ambari/server/serveraction/kerberos/IPAKerberosOperationHandler.java#L151
Where the IPA command is located:
https://github.com/apache/ambari/blob/6f223af4b1cfe73b57041453e998a70619c394d0/ambari-server/src/main/java/org/apache/ambari/server/serveraction/kerberos/IPAKerberosOperationHandler.java#L83