Tuesday, November 29, 2016

Debugging Sqoop with JDB

In the example below, I want to check with JDB what isOraOopEnabled returns.

1)
vim /usr/hdp/current/hadoop-client/bin/hadoop.distro

2) Find the following line:
    exec "$JAVA" $JAVA_HEAP_MAX $HADOOP_OPTS $CLASS "$@"

3) Change it like this:
    if [ -n "$HADOOP_JDB" ]; then
      echo "export CLASSPATH=$CLASSPATH"
      echo "${JAVA_HOME}/bin/jdb" $JAVA_HEAP_MAX $HADOOP_OPTS $CLASS "$@"
    else
      exec "$JAVA" $JAVA_HEAP_MAX $HADOOP_OPTS $CLASS "$@"
    fi


I'm using echo instead of exec because exec didn't work well for me...

4) Run it:
[sqoop@node4 ~]$ HADOOP_JDB="Y" sqoop import > jdb_sqoop_import.sh

5) Open "jdb_sqoop_import.sh" and delete the unneeded lines ("Warning: /usr/hdp/...", "Please set $ACCUMULO_HOME ..", etc.).
Also, append "$@" to the end of the last line.
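For reference, after this editing the tail of "jdb_sqoop_import.sh" should look roughly like the sketch below. The actual classpath, heap size and -D options are whatever the modified hadoop.distro echoed in your environment, so treat these two lines as placeholders only:

export CLASSPATH=/usr/hdp/current/hadoop-client/conf:...        # placeholder: the real (much longer) classpath echoed by the modified script
/usr/jdk64/jdk1.8.0_60/bin/jdb -Xmx1000m org.apache.sqoop.Sqoop import "$@"     # "$@" appended by hand in this step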

6) Run it.

bash ./jdb_sqoop_import.sh --direct --verbose --connect jdbc:oracle:thin:@192.168.8.22:1521/XE --username ambari --password bigdata --query 'SELECT * FROM ambari.hosts WHERE $CONDITIONS' --num-mappers 2 --split-by 'ORA_HASH(ROWID)' --target-dir ambari.hosts

7) JDB should start, so try commands such as "help".

8) Set a breakpoint on isOraOopEnabled, then run:

> stop in org.apache.sqoop.manager.oracle.OraOopManagerFactory.isOraOopEnabled
> run


> stop at org.apache.sqoop.manager.oracle.OraOopManagerFactory:101
> run # or cont
> step
> eval OraOopUtilities.getMinNumberOfImportMappersAcceptedByOraOop(sqoopOptions.getConf())


9) It should stop at isOraOopEnabled.
After that, make full use of step, next, locals, where, print, and eval (see the JDB usage documentation for details).
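For reference, once the breakpoint is hit, a session might look something like the lines below (prompts and output abbreviated; variable names such as sqoopOptions depend on the Sqoop version, so treat them as examples):

main[1] where
main[1] locals
main[1] print sqoopOptions.getConnectString()
main[1] next
main[1] cont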

Accessing WebHDFS with curl while Kerberos is ON, and checking the DEBUG logs


[hdfs@node3 hdfs]$ export HADOOP_OPTS="$HADOOP_OPTS -Dsun.security.krb5.debug=true -Djava.security.debug=gssloginconfig,configfile,configparser,logincontext"
[hdfs@node3 hdfs]$ kill `cat /var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid`; sleep 3; /usr/hdp/current/hadoop-client/sbin/hadoop-daemon.sh --config /usr/hdp/current/hadoop-client/conf start namenode
[hdfs@node3 hdfs]$ tail -f hadoop-hdfs-namenode-node3.localdomain.out

From a different node:
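(Note that --negotiate only works if the client user already has a valid Kerberos ticket; the principal below is just the one used in this environment.)

[hajime@node1 ~]$ kinit hajime
[hajime@node1 ~]$ klist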
[hajime@node1 ~]$ curl -sS -L -v -w '%{http_code}' -X GET --negotiate -u : 'http://node3.localdomain:50070/webhdfs/v1/tmp?op=GETFILESTATUS&user.name=incorrect_user'

Back on node3 (the NameNode):
Found KeyTab /etc/security/keytabs/spnego.service.keytab for HTTP/node3.localdomain@HO-UBU02
Found KeyTab /etc/security/keytabs/spnego.service.keytab for HTTP/node3.localdomain@HO-UBU02
Entered Krb5Context.acceptSecContext with state=STATE_NEW
>>> KeyTabInputStream, readName(): HO-UBU02
>>> KeyTabInputStream, readName(): HTTP
>>> KeyTabInputStream, readName(): node3.localdomain
>>> KeyTab: load() entry length: 66; type: 17
>>> KeyTabInputStream, readName(): HO-UBU02
>>> KeyTabInputStream, readName(): HTTP
>>> KeyTabInputStream, readName(): node3.localdomain
>>> KeyTab: load() entry length: 66; type: 23
>>> KeyTabInputStream, readName(): HO-UBU02
>>> KeyTabInputStream, readName(): HTTP
>>> KeyTabInputStream, readName(): node3.localdomain
>>> KeyTab: load() entry length: 58; type: 3
>>> KeyTabInputStream, readName(): HO-UBU02
>>> KeyTabInputStream, readName(): HTTP
>>> KeyTabInputStream, readName(): node3.localdomain
>>> KeyTab: load() entry length: 82; type: 18
>>> KeyTabInputStream, readName(): HO-UBU02
>>> KeyTabInputStream, readName(): HTTP
>>> KeyTabInputStream, readName(): node3.localdomain
>>> KeyTab: load() entry length: 74; type: 16
Looking for keys for: HTTP/node3.localdomain@HO-UBU02
Added key: 16version: 1
Added key: 18version: 1
Found unsupported keytype (3) for HTTP/node3.localdomain@HO-UBU02
Added key: 23version: 1
Added key: 17version: 1
>>> EType: sun.security.krb5.internal.crypto.Aes256CtsHmacSha1EType
Using builtin default etypes for permitted_enctypes
default etypes for permitted_enctypes: 18 17 16 23.
>>> EType: sun.security.krb5.internal.crypto.Aes256CtsHmacSha1EType
MemoryCache: add 1480393332/553854/1D11869D8DDC6C3FDAE645FD45DEA27B/hajime@HO-UBU02 to hajime@HO-UBU02|HTTP/node3.localdomain@HO-UBU02
>>> KrbApReq: authenticate succeed.
Krb5Context setting peerSeqNumber to: 1055634594
Krb5Context setting mySeqNumber to: 1055634594
Nov 29, 2016 4:22:11 AM com.sun.jersey.api.core.PackagesResourceConfig init
INFO: Scanning for root resource and provider classes in the packages:
  org.apache.hadoop.hdfs.server.namenode.web.resources
  org.apache.hadoop.hdfs.web.resources
Found ticket for nn/node3.localdomain@HO-UBU02 to go to krbtgt/HO-UBU02@HO-UBU02 expiring on Tue Nov 29 14:18:04 UTC 2016
Entered Krb5Context.initSecContext with state=STATE_NEW
Found ticket for nn/node3.localdomain@HO-UBU02 to go to krbtgt/HO-UBU02@HO-UBU02 expiring on Tue Nov 29 14:18:04 UTC 2016
Found ticket for nn/node3.localdomain@HO-UBU02 to go to jn/node2.localdomain@HO-UBU02 expiring on Tue Nov 29 14:18:04 UTC 2016
Found ticket for nn/node3.localdomain@HO-UBU02 to go to jn/node3.localdomain@HO-UBU02 expiring on Tue Nov 29 14:18:04 UTC 2016
Found ticket for nn/node3.localdomain@HO-UBU02 to go to jn/node1.localdomain@HO-UBU02 expiring on Tue Nov 29 14:18:04 UTC 2016
Found ticket for nn/node3.localdomain@HO-UBU02 to go to nn/node2.localdomain@HO-UBU02 expiring on Tue Nov 29 14:18:04 UTC 2016
Found service ticket in the subjectTicket (hex) =
0000: 61 82 01 5E 30 82 01 5A   A0 03 02 01 05 A1 0A 1B  a..^0..Z........
...
0160: C7 7D                                              ..

Client Principal = nn/node3.localdomain@HO-UBU02
Server Principal = nn/node2.localdomain@HO-UBU02
Session Key = EncryptionKey: keyType=18 keyBytes (hex dump)=
0000: B3 F2 F3 5D 03 A2 01 B6   E7 D8 B2 87 82 FC 2B 6A  ...]..........+j
0010: A8 FD 37 68 E7 EC 74 68   22 D6 AD 63 C3 F5 06 E0  ..7h..th"..c....


Forwardable Ticket true
Forwarded Ticket false
Proxiable Ticket false
Proxy Ticket false
Postdated Ticket false
Renewable Ticket false
Initial Ticket false
Auth Time = Tue Nov 29 04:18:04 UTC 2016
Start Time = Tue Nov 29 04:20:11 UTC 2016
End Time = Tue Nov 29 14:18:04 UTC 2016
Renew Till = null
Client Addresses  Null
...

Trying out Ambari's HostCleanup.py

[root@node5 ~]# python /usr/lib/python2.6/site-packages/ambari_agent/HostCleanup.py --silent --skip=users --verbose
INFO:HostCleanup:
Killing pid's: ['']
INFO:HostCleanup:Deleting packages: ['']
INFO:HostCleanup:
Deleting directories: ['/etc/hadoop', '/etc/ambari-metrics-monitor', '/var/run/hadoop', '/var/run/ambari-metrics-monitor', '/var/log/hadoop', '/var/log/ambari-metrics-monitor', '/usr/lib/flume', '/usr/lib/storm', '/tmp/hadoop-hdfs']
INFO:HostCleanup:
Deleting additional directories: ['/etc/hadoop', '/etc/ambari-metrics-monitor', '/var/run/hadoop', '/var/run/ambari-metrics-monitor', '/var/log/hadoop', '/var/log/ambari-metrics-monitor', '/usr/lib/flume', '/usr/lib/storm', '/tmp/hadoop-hdfs']
INFO:HostCleanup:Path doesn't exists: /tmp/hadoop-hdfs
INFO:HostCleanup:
Deleting repo files: ['/etc/yum.repos.d/ambari.repo']
INFO:HostCleanup:
Erasing alternatives:{'symlink_list': [''], 'target_list': ['']}
INFO:HostCleanup:Path doesn't exists:
INFO:HostCleanup:Clean-up completed. The output is at /var/lib/ambari-agent/data/hostcleanup.result

At this point, it looks like services and components are not removed from Ambari, but the config files are deleted.
After reinstalling the clients from Ambari and restarting the components, things seem to work without problems?

Trying out jcmd ManagementAgent

Note: Java 7u4 or later is required

[hdfs@node2 ~]$ /usr/jdk64/jdk1.8.0_60/bin/jcmd 31219 help
31219:
The following commands are available:
JFR.stop
JFR.start
JFR.dump
JFR.check
VM.native_memory
VM.check_commercial_features
VM.unlock_commercial_features
ManagementAgent.stop
ManagementAgent.start_local
ManagementAgent.start
GC.rotate_log
Thread.print
GC.class_stats
GC.class_histogram
GC.heap_dump
GC.run_finalization
GC.run
VM.uptime
VM.flags
VM.system_properties
VM.command_line
VM.version
help
For more information about a specific command use 'help <command>'.
[hdfs@node2 ~]$

[hdfs@node2 ~]$ /usr/jdk64/jdk1.8.0_60/bin/jcmd 31219 ManagementAgent.start
31219:
java.lang.RuntimeException: Invalid option specified
(presumably because no jmxremote.* options were given; see the working example further down)


[hdfs@node2 ~]$ /usr/jdk64/jdk1.8.0_60/bin/jcmd 31219 ManagementAgent.start_local
31219:
Command executed successfully

[hdfs@node2 ~]$ /usr/jdk64/jdk1.8.0_60/bin/jstat -J-Djstat.showUnsupported=true -snap 31219 | grep sun.management.JMXConnectorServer.address
sun.management.JMXConnectorServer.address="service:jmx:rmi://127.0.0.1/stub/rO0ABXNyAC5qYXZheC5tYW5hZ2VtZW50LnJlbW90ZS5ybWkuUk1JU2VydmVySW1wbF9TdHViAAAAAAAAAAICAAB4cgAaamF2YS5ybWkuc2VydmVyLlJlbW90ZVN0dWLp/tzJi+FlGgIAAHhyABxqYXZhLnJtaS5zZXJ2ZXIuUmVtb3RlT2JqZWN002G0kQxhMx4DAAB4cHc3AAtVbmljYXN0UmVmMgAADDE3Mi4xNy4xMDAuMgAAtag+Sx7jYZTOeW5ym7MAAAFX1UxwhIABAHg="

[hdfs@node2 ~]$ hdfs jmxget -localVM "service:jmx:rmi://127.0.0.1/stub/rO0ABXNyAC5qYXZheC5tYW5hZ2VtZW50LnJlbW90ZS5ybWkuUk1JU2VydmVySW1wbF9TdHViAAAAAAAAAAICAAB4cgAaamF2YS5ybWkuc2VydmVyLlJlbW90ZVN0dWLp/tzJi+FlGgIAAHhyABxqYXZhLnJtaS5zZXJ2ZXIuUmVtb3RlT2JqZWN002G0kQxhMx4DAAB4cHc3AAtVbmljYXN0UmVmMgAADDE3Mi4xNy4xMDAuMgAAtag+Sx7jYZTOeW5ym7MAAAFX1UxwhIABAHg=" 2>&1 | head
init: server=localhost;port=;service=NameNode;localVMUrl=service:jmx:rmi://127.0.0.1/stub/rO0ABXNyAC5qYXZheC5tYW5hZ2VtZW50LnJlbW90ZS5ybWkuUk1JU2VydmVySW1wbF9TdHViAAAAAAAAAAICAAB4cgAaamF2YS5ybWkuc2VydmVyLlJlbW90ZVN0dWLp/tzJi+FlGgIAAHhyABxqYXZhLnJtaS5zZXJ2ZXIuUmVtb3RlT2JqZWN002G0kQxhMx4DAAB4cHc3AAtVbmljYXN0UmVmMgAADDE3Mi4xNy4xMDAuMgAAtag+Sx7jYZTOeW5ym7MAAAFX1UxwhIABAHg=
url string for local pid = service:jmx:rmi://127.0.0.1/stub/rO0ABXNyAC5qYXZheC5tYW5hZ2VtZW50LnJlbW90ZS5ybWkuUk1JU2VydmVySW1wbF9TdHViAAAAAAAAAAICAAB4cgAaamF2YS5ybWkuc2VydmVyLlJlbW90ZVN0dWLp/tzJi+FlGgIAAHhyABxqYXZhLnJtaS5zZXJ2ZXIuUmVtb3RlT2JqZWN002G0kQxhMx4DAAB4cHc3AAtVbmljYXN0UmVmMgAADDE3Mi4xNy4xMDAuMgAAtag+Sx7jYZTOeW5ym7MAAAFX1UxwhIABAHg= = service:jmx:rmi://127.0.0.1/stub/rO0ABXNyAC5qYXZheC5tYW5hZ2VtZW50LnJlbW90ZS5ybWkuUk1JU2VydmVySW1wbF9TdHViAAAAAAAAAAICAAB4cgAaamF2YS5ybWkuc2VydmVyLlJlbW90ZVN0dWLp/tzJi+FlGgIAAHhyABxqYXZhLnJtaS5zZXJ2ZXIuUmVtb3RlT2JqZWN002G0kQxhMx4DAAB4cHc3AAtVbmljYXN0UmVmMgAADDE3Mi4xNy4xMDAuMgAAtag+Sx7jYZTOeW5ym7MAAAFX1UxwhIABAHg=
Create RMI connector and connect to the RMI connector serverservice:jmx:rmi://127.0.0.1/stub/rO0ABXNyAC5qYXZheC5tYW5hZ2VtZW50LnJlbW90ZS5ybWkuUk1JU2VydmVySW1wbF9TdHViAAAAAAAAAAICAAB4cgAaamF2YS5ybWkuc2VydmVyLlJlbW90ZVN0dWLp/tzJi+FlGgIAAHhyABxqYXZhLnJtaS5zZXJ2ZXIuUmVtb3RlT2JqZWN002G0kQxhMx4DAAB4cHc3AAtVbmljYXN0UmVmMgAADDE3Mi4xNy4xMDAuMgAAtag+Sx7jYZTOeW5ym7MAAAFX1UxwhIABAHg=
Get an MBeanServerConnection
Domains:
        Domain = Hadoop
        Domain = JMImplementation
        Domain = com.sun.management

[root@sandbox-hdp ~]# jcmd `cat /var/run/ambari-server/ambari-server.pid` ManagementAgent.start jmxremote.port=5005 jmxremote.authenticate=false jmxremote.ssl=false
51141:
Command executed successfully
[root@sandbox-hdp ~]# jstat -J-Djstat.showUnsupported=true -snap `cat /var/run/ambari-server/ambari-server.pid` | grep -i jmx
sun.management.JMXConnectorServer.0.authenticate="false"
sun.management.JMXConnectorServer.0.remoteAddress="service:jmx:rmi:///jndi/rmi://sandbox-hdp.hortonworks.com:5005/jmxrmi"
sun.management.JMXConnectorServer.0.ssl="false"
sun.management.JMXConnectorServer.0.sslNeedClientAuth="false"
sun.management.JMXConnectorServer.0.sslRegistry="false"

But the above doesn't allow jconsole to connect...
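One thing that might be worth trying (not verified here) is to also pin the RMI server port to the same value, since jconsole often fails when that port is chosen randomly. Whether the jmxremote.rmi.port option is accepted depends on the JDK version:

[root@sandbox-hdp ~]# jcmd `cat /var/run/ambari-server/ambari-server.pid` ManagementAgent.stop
[root@sandbox-hdp ~]# jcmd `cat /var/run/ambari-server/ambari-server.pid` ManagementAgent.start jmxremote.port=5005 jmxremote.rmi.port=5005 jmxremote.authenticate=false jmxremote.ssl=false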

Monday, November 28, 2016

Hello world spark sbt on sandbox (HDP 2.5.0)

Preparation

1) Log in to the Docker version of the HDP Sandbox
ssh -p 2222 root@sandbox.hortonworks.com

2) Install SBT and Vim
http://www.scala-sbt.org/release/docs/Installing-sbt-on-Linux.html
curl https://bintray.com/sbt/rpm/rpm | tee /etc/yum.repos.d/bintray-sbt-rpm.repo
yum install -y sbt vim

2.1) Vim is hard to read as-is, so tweak it a little
http://bsnyderblog.blogspot.com.au/2012/12/vim-syntax-highlighting-for-scala-bash.html
mkdir -p ~/.vim/{ftdetect,indent,syntax} && for d in ftdetect indent syntax ; do curl -o ~/.vim/$d/scala.vim https://raw.githubusercontent.com/derekwyatt/vim-scala/master/$d/scala.vim; done

The actual work

1) Create a working folder and edit the necessary files
http://spark.apache.org/docs/1.6.2/quick-start.html#self-contained-applications
mkdir scala && cd ./scala
mkdir -p ./src/main/scala
vim simple.sbt
name := "Simple Project"

version := "1.0"

scalaVersion := "2.10.5"

libraryDependencies += "org.apache.spark" %% "spark-core" % "1.6.2"

vim ./src/main/scala/SimpleApp.scala
/* SimpleApp.scala */
import org.apache.spark.SparkContext
import org.apache.spark.SparkContext._
import org.apache.spark.SparkConf

object SimpleApp {
  def main(args: Array[String]) {
    val logFile = "YOUR_SPARK_HOME/README.md" // Should be some file on your system
    val conf = new SparkConf().setAppName("Simple Application")
    val sc = new SparkContext(conf)
    val logData = sc.textFile(logFile, 2).cache()
    val numAs = logData.filter(line => line.contains("a")).count()
    val numBs = logData.filter(line => line.contains("b")).count()
    println("Lines with a: %s, Lines with b: %s".format(numAs, numBs))
  }
}

2) Package it
sbt package
...
[info] Packaging /root/scala/target/scala-2.10/simple-project_2.10-1.0.jar ...
[info] Done packaging.
[success] Total time: 98 s, completed Nov 24, 2016 11:35:26 PM

2.1) Prepare the HDFS side (the folder name is odd because changing the program would be a hassle)
hdfs dfs -mkdir YOUR_SPARK_HOME
locate README.md
hdfs dfs -put /usr/lib/hue/ext/thirdparty/js/test-runner/mootools-runner/README.md YOUR_SPARK_HOME

3) Submit the job!
[root@sandbox hdfs]# spark-submit --class "SimpleApp" --master local[1] --driver-memory 512m --executor-memory 512m --executor-cores 1 /root/scala/target/scala-2.10/simple-project_2.10-1.0.jar 2>/dev/null
Lines with a: 23, Lines with b: 10
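For reference only (not tested in this post), the same jar should also be submittable to YARN on the sandbox with something along these lines:

[root@sandbox hdfs]# spark-submit --class "SimpleApp" --master yarn --deploy-mode client --driver-memory 512m --executor-memory 512m --num-executors 1 /root/scala/target/scala-2.10/simple-project_2.10-1.0.jar 2>/dev/null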


3.1) Also trying it on Windows
http://www.ics.uci.edu/~shantas/Install_Spark_on_Windows10.pdf
https://wiki.apache.org/hadoop/WindowsProblems
Set the environment variable %HADOOP_HOME% to point to the directory above the BIN dir containing WINUTILS.EXE.

C:\Apps\spark-1.6.2-bin-hadoop2.6\bin>spark-submit --class "HdfsDeleteApp" c:\Users\Hajime\Desktop\hdfsdeleteapp-project_2.10-1.0.jar 2>nul 

Monday, November 21, 2016

Quickly checking whether a patch has been applied

This is similar to an earlier topic: a similar way to check whether a patch has been applied, without using an IDE or the like.

Example: https://issues.apache.org/jira/secure/attachment/12790079/AMBARI-15100-trunk_4.patch
Looking at this patch, you can see that a putMetric method was added.
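As a side note, which classes and methods a patch adds can also be checked quickly by grepping the added ("+") lines of the patch file itself, for example:

curl -s -O https://issues.apache.org/jira/secure/attachment/12790079/AMBARI-15100-trunk_4.patch
grep '^+.*putMetric' AMBARI-15100-trunk_4.patch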

Log in to a node where the Ambari Metrics System is installed and find the AMS PIDs with something like ps auxwww | grep metrics.
Two likely-looking processes were found, so look for the jar files via /proc.

ls -l /proc/{3195,3241}/fd | grep .jar$

Alternatively, since the paths in the patch suggest that the file name probably contains ambari-metrics-common:

[root@node1 ~]# ls -l /proc/{3195,3241}/fd | grep -E ambari-metrics-common.*\.jar$
lr-x------ 1 ams hadoop 64 Nov 21 02:42 86 -> /usr/lib/ambari-metrics-collector/ambari-metrics-common-2.2.2.0.460.jar
[root@node1 ~]# less /usr/lib/ambari-metrics-collector/ambari-metrics-common-2.2.2.0.460.jar | grep TimelineMetricsCache
-rw-r--r--  2.0 unx     5208 b- defN 16-May-05 18:35 org/apache/hadoop/metrics2/sink/timeline/cache/TimelineMetricsCache.class
-rw-r--r--  2.0 unx     3120 b- defN 16-May-05 18:35 org/apache/hadoop/metrics2/sink/timeline/cache/TimelineMetricsCache$TimelineMetricHolder.class
-rw-r--r--  2.0 unx     3251 b- defN 16-May-05 18:35 org/apache/hadoop/metrics2/sink/timeline/cache/TimelineMetricsCache$TimelineMetricWrapper.class 
[root@node1 ~]# /usr/jdk64/jdk1.8.0_60/bin/javap -classpath /usr/lib/ambari-metrics-collector/ambari-metrics-common-2.2.2.0.460.jar org.apache.hadoop.metrics2.sink.timeline.cache.TimelineMetricsCache
Compiled from "TimelineMetricsCache.java"
public class org.apache.hadoop.metrics2.sink.timeline.cache.TimelineMetricsCache {
  public static final int MAX_RECS_PER_NAME_DEFAULT;
  public static final int MAX_EVICTION_TIME_MILLIS;
  public org.apache.hadoop.metrics2.sink.timeline.cache.TimelineMetricsCache(int, int);
  public org.apache.hadoop.metrics2.sink.timeline.cache.TimelineMetricsCache(int, int, boolean);
  public org.apache.hadoop.metrics2.sink.timeline.TimelineMetric getTimelineMetric(java.lang.String);
  public int getMaxEvictionTimeInMillis();
  public void putTimelineMetric(org.apache.hadoop.metrics2.sink.timeline.TimelineMetric);
  public void putTimelineMetric(org.apache.hadoop.metrics2.sink.timeline.TimelineMetric, boolean);
  static int access$000(org.apache.hadoop.metrics2.sink.timeline.cache.TimelineMetricsCache);
  static int access$100(org.apache.hadoop.metrics2.sink.timeline.cache.TimelineMetricsCache);
  static org.apache.commons.logging.Log access$200();
  static {};
}
Hmm, there is no putMetric here.

[root@node1 ~]# zipgrep putMetric /usr/lib/ambari-metrics-collector/ambari-metrics-common-2.2.2.0.460.jar
org/apache/hadoop/metrics2/sink/timeline/cache/TimelineMetricsCache$TimelineMetricHolder.class:Binary file (standard input) matches
org/apache/hadoop/metrics2/sink/timeline/cache/TimelineMetricsCache$TimelineMetricWrapper.class:Binary file (standard input) matches
[root@node1 ~]# /usr/jdk64/jdk1.8.0_60/bin/javap -classpath /usr/lib/ambari-metrics-collector/ambari-metrics-common-2.2.2.0.460.jar org.apache.hadoop.metrics2.sink.timeline.cache.TimelineMetricsCache\$TimelineMetricHolder
Compiled from "TimelineMetricsCache.java"
class org.apache.hadoop.metrics2.sink.timeline.cache.TimelineMetricsCache$TimelineMetricHolder extends java.util.concurrent.ConcurrentSkipListMap<java.lang.String, org.apache.hadoop.metrics2.sink.timeline.cache.TimelineMetricsCache$TimelineMetricWrapper> {
  final org.apache.hadoop.metrics2.sink.timeline.cache.TimelineMetricsCache this$0;
  org.apache.hadoop.metrics2.sink.timeline.cache.TimelineMetricsCache$TimelineMetricHolder(org.apache.hadoop.metrics2.sink.timeline.cache.TimelineMetricsCache);
  public org.apache.hadoop.metrics2.sink.timeline.TimelineMetric evict(java.lang.String);
  public void put(java.lang.String, org.apache.hadoop.metrics2.sink.timeline.TimelineMetric);
}
[root@node1 ~]# /usr/jdk64/jdk1.8.0_60/bin/javap -classpath /usr/lib/ambari-metrics-collector/ambari-metrics-common-2.2.2.0.460.jar org.apache.hadoop.metrics2.sink.timeline.cache.TimelineMetricsCache\$TimelineMetricWrapper
Compiled from "TimelineMetricsCache.java"
class org.apache.hadoop.metrics2.sink.timeline.cache.TimelineMetricsCache$TimelineMetricWrapper {
  final org.apache.hadoop.metrics2.sink.timeline.cache.TimelineMetricsCache this$0;
  org.apache.hadoop.metrics2.sink.timeline.cache.TimelineMetricsCache$TimelineMetricWrapper(org.apache.hadoop.metrics2.sink.timeline.cache.TimelineMetricsCache, org.apache.hadoop.metrics2.sink.timeline.TimelineMetric);
  public synchronized void putMetric(org.apache.hadoop.metrics2.sink.timeline.TimelineMetric);
  public synchronized long getTimeDiff();
  public synchronized org.apache.hadoop.metrics2.sink.timeline.TimelineMetric getTimelineMetric();
}


Wednesday, November 9, 2016

Installing a service on a specific host with the Ambari API

Using Grafana as an example

curl -u admin:admin -H "X-Requested-By:ambari" -i -X POST http://localhost:8080/api/v1/clusters/${_CLS}/services/AMBARI_METRICS/components/METRICS_GRAFANA
curl -u admin:admin -H "X-Requested-By:ambari" -i -X POST -d '{"host_components":[{"HostRoles":{"component_name":"METRICS_GRAFANA"}}]}' \
http://localhost:8080/api/v1/clusters/${_CLS}/hosts?Hosts/host_name=${_HOST}
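Whether the host component was actually registered can be confirmed with a GET against the same resource:

curl -u admin:admin -H "X-Requested-By:ambari" "http://localhost:8080/api/v1/clusters/${_CLS}/hosts/${_HOST}/host_components/METRICS_GRAFANA"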

PostgreSQL log:
LOG:  execute <unnamed>: INSERT INTO servicecomponentdesiredstate (component_name, desired_state, service_name, cluster_id, desired_stack_id) VALUES ($1, $2, $3, $4, $5)
DETAIL:  parameters: $1 = 'METRICS_GRAFANA', $2 = 'INSTALLED', $3 = 'AMBARI_METRICS', $4 = '2', $5 = '4'
LOG:  execute <unnamed>: INSERT INTO hostcomponentdesiredstate (admin_state, desired_state, maintenance_state, restart_required, security_state, host_id, desired_stack_id, service_name, cluster_id, component_name) VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10)
DETAIL:  parameters: $1 = NULL, $2 = 'INIT', $3 = 'OFF', $4 = '0', $5 = 'UNSECURED', $6 = '4', $7 = '4', $8 = 'AMBARI_METRICS', $9 = '2', $10 = 'METRICS_GRAFANA'
LOG:  execute <unnamed>: INSERT INTO hostcomponentstate (id, current_state, security_state, upgrade_state, version, host_id, service_name, cluster_id, component_name, current_stack_id) VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10)
DETAIL:  parameters: $1 = '453', $2 = 'INIT', $3 = 'UNSECURED', $4 = 'NONE', $5 = 'UNKNOWN', $6 = '4', $7 = 'AMBARI_METRICS', $8 = '2', $9 = 'METRICS_GRAFANA', $10 = '4'

Install:
curl -u admin:admin -H "X-Requested-By:ambari" -X PUT -d '{"RequestInfo":{"context":"Install Grafana","operation_level":{"level":"HOST_COMPONENT","cluster_name":"'${_CLS}'","host_name":"'${_HOST}'","service_name":"AMBARI_METRICS"}},"Body":{"HostRoles":{"state":"INSTALLED"}}}' http://localhost:8080/api/v1/clusters/bne_c1/hosts/node1.localdomain/host_components/METRICS_GRAFANA
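The PUT above should return a request id in its JSON response; the progress of that install request can then be followed via the Requests API (the id 42 below is just a placeholder):

curl -u admin:admin -H "X-Requested-By:ambari" "http://localhost:8080/api/v1/clusters/${_CLS}/requests/42"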


Now try deleting it:
curl -u admin:admin -H "X-Requested-By: ambari" -X DELETE  http://localhost:8080/api/v1/clusters/${_CLS}/services/AMBARI_METRICS/components/METRICS_GRAFANA

LOG:  execute <unnamed>: DELETE FROM hostcomponentdesiredstate WHERE ((((host_id = $1) AND (cluster_id = $2)) AND (component_name = $3)) AND (service_name = $4))
DETAIL:  parameters: $1 = '4', $2 = '2', $3 = 'METRICS_GRAFANA', $4 = 'AMBARI_METRICS'
LOG:  execute <unnamed>: DELETE FROM hostcomponentstate WHERE (id = $1)
DETAIL:  parameters: $1 = '453'
LOG:  execute <unnamed>: DELETE FROM servicecomponentdesiredstate WHERE (((cluster_id = $1) AND (component_name = $2)) AND (service_name = $3))
DETAIL:  parameters: $1 = '2', $2 = 'METRICS_GRAFANA', $3 = 'AMBARI_METRICS'