Friday, February 24, 2017

HostCleanup does not remove (uninstall) packages properly

Basic information:

https://cwiki.apache.org/confluence/display/AMBARI/Host+Cleanup+for+Ambari+and+Stack

HostCleanup.py creates the following result files:

[root@node4 ~]# ls -ltr /var/lib/ambari-agent/data/*.result
-rw-r--r-- 1 root root   855 Feb 24 02:20 /var/lib/ambari-agent/data/hostcheck.result
-rw-r--r-- 1 root root   911 Feb 24 02:20 /var/lib/ambari-agent/data/hostcheck_custom_actions.result
-rw-r--r-- 1 root root 21052 Feb 24 03:46 /var/lib/ambari-agent/data/hostcleanup.result

hostcheckfile

[root@node4 ~]# cat /var/lib/ambari-agent/data/hostcheck.result
[metadata]
created = 2017-02-24 02:20:05.757245

[users]
usr_list = hive,zookeeper,ambari-qa,tez,hdfs,yarn,hcat,mapred
usr_homedir_list = /home/hive,/home/zookeeper,/home/ambari-qa,/home/tez,/home/hdfs,/home/yarn,/home/hcat,/home/mapred

[alternatives]
symlink_list =
target_list =

[directories]
dir_list = /etc/hadoop,/etc/hive,/etc/zookeeper,/etc/hive-hcatalog,/etc/tez,/etc/hive-webhcat,/etc/pig,/var/run/hadoop,/var/run/hive,/var/run/zookeeper,/var/run/hive-hcatalog,/var/run/webhcat,/var/run/hadoop-yarn,/var/run/hadoop-mapreduce,/var/log/hadoop,/var/log/hive,/var/log/zookeeper,/var/log/hive-hcatalog,/var/log/hadoop-yarn,/var/log/hadoop-mapreduce,/var/lib/hive,/var/lib/hadoop-hdfs,/var/lib/hadoop-yarn,/var/lib/hadoop-mapreduce,/tmp/hadoop-hdfs,/hadoop/zookeeper,/hadoop/hdfs,/hadoop/yarn,/usr/hdp/current

[processes]
proc_list = 15757,17427


hostcheckfileca

[root@node4 ~]# cat /var/lib/ambari-agent/data/hostcheck_custom_actions.result
[metadata]
created = 2017-02-24 02:20:26.754150

[packages]
pkg_list = hadoop_2_5_3_0_37-hdfs.x86_64,zookeeper_2_5_3_0_37.noarch,hdp-select.noarch,hadoop_2_5_3_0_37-yarn.x86_64,hadoop_2_5_3_0_37-libhdfs.x86_64,atlas-metadata_2_5_3_0_37-hive-plugin.noarch,ranger_2_5_3_0_37-yarn-plugin.x86_64,hive_2_5_3_0_37-hcatalog.noarch,pig_2_5_3_0_37.noarch,spark_2_5_3_0_37-yarn-shuffle.noarch,hive2_2_5_3_0_37.noarch,ranger_2_5_3_0_37-hdfs-plugin.x86_64,zookeeper_2_5_3_0_37-server.noarch,tez_hive2_2_5_3_0_37.noarch,hive_2_5_3_0_37.noarch,hadoop_2_5_3_0_37.x86_64,hive_2_5_3_0_37-webhcat.noarch,spark2_2_5_3_0_37-yarn-shuffle.noarch,tez_2_5_3_0_37.noarch,hadoop_2_5_3_0_37-mapreduce.x86_64,hadoop_2_5_3_0_37-client.x86_64,datafu_2_5_3_0_37.noarch,hive2_2_5_3_0_37-jdbc.noarch,hive_2_5_3_0_37-jdbc.noarch,ranger_2_5_3_0_37-hive-plugin.x86_64,bigtop-jsvc.x86_64

[repositories]
repo_list = HDP-2.5,Updates-ambari-2.4.2.0
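
To cross-check which of the packages in pkg_list above are actually still installed on the host, independent of the result file (a quick sketch; adjust the version string to your stack):

[root@node4 ~]# yum list installed 2>/dev/null | grep -E '_2_5_3_0_37|hdp-select'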

If we delete a result file and rerun:

[root@node4 ~]# rm /var/lib/ambari-agent/data/hostcheck_custom_actions.result
rm: remove regular file `/var/lib/ambari-agent/data/hostcheck_custom_actions.result'? y
[root@node4 ~]# python /usr/lib/python2.6/site-packages/ambari_agent/HostCleanup.py --skip=users
Host Check results not found. There is no /var/lib/ambari-agent/data/hostcheck_custom_actions.result. Do you want to run host checks [y/n] (y)

INFO:HostCleanup:Executing command: /var/lib/ambari-agent/ambari-sudo.sh /var/lib/ambari-agent/cache/custom_actions/scripts/check_host.py ACTIONEXECUTE /tmp/tmpSni9Le /var/lib/ambari-agent/cache/custom_actions /tmp/tmpRX36M4 INFO /var/lib/ambari-agent/tmp

TODO: what exactly are these Host Checks?

[root@node4 ~]# cat /tmp/tmpSni9Le
{"commandParams": {"check_execute_list": "*BEFORE_CLEANUP_HOST_CHECKS*"}}

[root@node4 ~]# cat /tmp/tmpRX36M4
{"transparentHugePage": {"message": "", "exit_code": 0}, "last_agent_env_check": {"transparentHugePage": "", "hostHealth": {"agentTimeStampAtReporting": 1487908261777, "activeJavaProcs": [], "liveServices": [{"status": "Unhealthy", "name": "ntpd", "desc": "ntpd: unrecognized service\n"}]}, "reverseLookup": true, "alternatives": [], "umask": "18", "firewallName": "iptables", "stackFoldersAndFiles": [], "existingUsers": [{"status": "Available", "name": "hive", "homeDir": "/home/hive"}, {"status": "Available", "name": "zookeeper", "homeDir": "/home/zookeeper"}, {"status": "Available", "name": "ambari-qa", "homeDir": "/home/ambari-qa"}, {"status": "Available", "name": "tez", "homeDir": "/home/tez"}, {"status": "Available", "name": "hdfs", "homeDir": "/home/hdfs"}, {"status": "Available", "name": "yarn", "homeDir": "/home/yarn"}, {"status": "Available", "name": "hcat", "homeDir": "/home/hcat"}, {"status": "Available", "name": "mapred", "homeDir": "/home/mapred"}], "firewallRunning": true}, "installed_packages": [], "existing_repos": ["Updates-ambari-2.4.2.0"]}
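
The one-line JSON above is hard to read; piping it through Python's built-in json.tool pretty-prints it:

[root@node4 ~]# python -m json.tool < /tmp/tmpRX36M4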

Notes:
1. HDFS: dfs.datanode.data.dir, dfs.namenode.name.dir, dfs.journalnode.edits.dir
2. On reinstall, /etc/hadoop/<version>/0 was not created
3. /etc/hadoop/conf was not symlinked to /usr/hdp/current/hadoop-client/conf (a possible manual fix is sketched below)
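
If the conf symlink is missing after a reinstall, a possible manual workaround for item 3, assuming the standard HDP layout (/etc/hadoop/<version>/0 itself is normally created by conf-select):

[root@node4 ~]# ln -s /usr/hdp/current/hadoop-client/conf /etc/hadoop/conf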

Wednesday, February 15, 2017

Integrating Zeppelin on the HDP 2.5 Sandbox with Knox's Demo LDAP

https://community.hortonworks.com/articles/76938/zeppelin-ldap-authentication-with-openldap.html

Referring to the earlier post on trying the Knox Demo LDAP with HDP 2.5.3, change shiro_ini_content under Advanced zeppelin-env to the following (everything under [users] is commented out):

[main]
ldapRealm = org.apache.shiro.realm.ldap.JndiLdapRealm
ldapRealm.userDnTemplate = uid={0},ou=people,dc=hadoop,dc=apache,dc=org
ldapRealm.contextFactory.url = ldap://sandbox.hortonworks.com:33389
ldapRealm.contextFactory.authenticationMechanism = SIMPLE
sessionManager = org.apache.shiro.web.session.mgt.DefaultWebSessionManager
securityManager.sessionManager = $sessionManager
securityManager.sessionManager.globalSessionTimeout = 86400000
shiro.loginUrl = /api/login

[urls]
/** = authc

Then all that's left is to restart Zeppelin; a quick test is shown below.
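
Once it is back up, the LDAP login can be tested without the UI via Zeppelin's REST login API (assuming the Sandbox host/port from below and the Knox demo-LDAP admin user; adjust to your environment):

curl -i --data 'userName=admin&password=admin-password' http://sandbox.hortonworks.com:9995/api/login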
Note: the configuration appears to have changed in HDP 2.5.3 and 2.6.0:
http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.0/bk_zeppelin-component-guide/content/config-secure-prod.html
https://zeppelin.apache.org/docs/0.6.0/security/shiroauthentication.html

Sunday, February 12, 2017

Starting the Ambari Agent from Python


Note: only the bare minimum is imported; for example, PIDFILE is not set. (Perhaps it is not needed?)

A. Using shell "export":
[root@node9 ~]# export PYTHON=/usr/bin/python2.6
[root@node9 ~]# export PYTHONPATH=/usr/lib/python2.6/site-packages:/usr/lib/python2.6/site-packages/ambari_commons
[root@node9 ~]# cd /usr/lib/python2.6/site-packages/ambari_agent/
[root@node9 ambari_agent]# python
import sys, os

B. Using Python's "sys.path":
#cd /var/lib/ambari-agent
python
import sys, os
sys.path.append("/usr/lib/python2.6/site-packages")
sys.path.append("/usr/lib/python2.6/site-packages/ambari_agent")
sys.path.append("/usr/lib/ambari-agent")
# TODO: are the following also needed?
sys.path.append("/usr/lib/python2.6/site-packages/ambari_commons")
sys.path.append("/usr/lib/ambari-agent/lib")
sys.path.append("/usr/lib/ambari-agent/lib/resource_management/libraries/functions")
sys.path.append("/var/lib/ambari-agent/tmp")

Common setup:
from ambari_agent.AmbariConfig import AmbariConfig
from ambari_agent.Controller import Controller
config = AmbariConfig()
config.read(AmbariConfig.getConfigFile())
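
I did not get as far as actually starting the agent here, but presumably it would continue roughly as below. This is an untested sketch: the Controller constructor signature differs between Ambari versions (newer ones also take the server hostname), so check ambari_agent/main.py for your version first.

# hypothetical, untested; constructor signature varies by Ambari version
controller = Controller(config)   # some versions: Controller(config, server_hostname)
controller.start()                # Controller is a threading.Thread subclass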


Trying to load the config for an operation
# load config for /var/lib/ambari-agent/cache/stacks/HDP/2.2/services/HBaseRest./package/scripts/params.py
#sys.path.append("/var/lib/ambari-agent/cache/stacks/HDP/2.2/services/HBaseRest./package/scripts")
import json
from resource_management.libraries.script.config_dictionary import ConfigDictionary, UnknownConfiguration
from resource_management.libraries import Script

command_data_file = "/var/lib/ambari-agent/data/command-1016.json"
with open(command_data_file) as f:
    Script.config = ConfigDictionary(json.load(f))

config = Script.get_config()
config['configurations'].keys()
[u'spark-defaults', u'ranger-knox-plugin-properties', u'ranger-hdfs-audit', u'zeppelin-config', u'ranger-hdfs-policymgr-ssl', u'pig-env', u'anonymization-rules', u'ranger-knox-audit', u'ranger-kafka-plugin-properties', u'slider-env', u'usersyn...
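
Individual config types from that list can then be read like a dict, the same way the service params.py scripts do (which keys exist depends on the command JSON):

config['configurations']['spark-defaults']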


Trying to use the HBase config (failed)
sys.path.append("/var/lib/ambari-agent/cache/common-services/HBASE/0.96.0.2.0/package/scripts")
from hbase import hbase
hbase(name='client')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/lib/python2.6/site-packages/ambari_commons/os_family_impl.py", line 89, in thunk
    return fn(*args, **kwargs)
  File "/var/lib/ambari-agent/cache/common-services/HBASE/0.96.0.2.0/package/scripts/hbase.py", line 47, in hbase
    import params
  File "/var/lib/ambari-agent/cache/common-services/HBASE/0.96.0.2.0/package/scripts/params.py", line 26, in <module>
    from params_linux import *
  File "/var/lib/ambari-agent/cache/common-services/HBASE/0.96.0.2.0/package/scripts/params_linux.py", line 20, in <module>
    import status_params
  File "/var/lib/ambari-agent/cache/common-services/HBASE/0.96.0.2.0/package/scripts/status_params.py", line 46, in <module>
    pid_dir = config['configurations']['hbase-env']['hbase_pid_dir']
TypeError: 'NoneType' object is unsubscriptable
(The TypeError means Script.get_config() returned None inside status_params.py — presumably the command JSON has to be loaded in the same session, as in the previous section, before importing these scripts.)

sys.path used when HBase is started:
['/var/lib/ambari-agent/cache/stacks/HDP/2.2/services/HBaseRest./package/scripts', '/usr/lib/python2.6/site-packages', '/var/lib/ambari-agent', '/usr/lib64/python26.zip', '/usr/lib64/python2.6', '/usr/lib64/python2.6/plat-linux2', '/usr/lib64/python2.6/lib-tk', '/usr/lib64/python2.6/lib-old', '/usr/lib64/python2.6/lib-dynload', '/usr/lib64/python2.6/site-packages', '/usr/lib64/python2.6/site-packages/gtk-2.0', '/usr/lib/python2.6/site-packages/setuptools-0.6c11-py2.6.egg-info', '/usr/lib/python2.6/site-packages/setuptools-0.6c11-py2.6.egg-info', '/usr/lib/python2.6/site-packages/setuptools-0.6c11-py2.6.egg-info']

sys.argv used when HBase is started:
['/var/lib/ambari-agent/cache/stacks/HDP/2.2/services/HBaseRest./package/scripts/client.py', 'RESTART', '/var/lib/ambari-agent/data/command-1016.json', '/var/lib/ambari-agent/cache/stacks/HDP/2.2/services/HBaseRest./package', '/var/lib/ambari-agent/data/structured-out-1016.json', 'INFO', '/var/lib/ambari-agent/tmp']

Example of how to print the above from a service script. Note: pasting the print line alone into params.py fails on Python 2.6 with a SyntaxError (the caret lands under file=sys.stderr), because print-as-a-function needs a __future__ import:

# /var/lib/ambari-agent/cache/stacks/HDP/2.2/services/HBaseRest./package/scripts/params.py
from __future__ import print_function
import sys
print(str(sys.path), file=sys.stderr)

Reference: https://cwiki.apache.org/confluence/display/AMBARI/Defining+a+Custom+Stack+and+Services


Misc.:
export PYTHON=/usr/bin/python2.6
export PYTHONPATH=/usr/lib/python2.6/site-packages:/usr/lib/python2.6/site-packages/ambari_commons
#export PYTHONPATH=/usr/lib/python2.6/site-packages:/usr/lib/python2.6/site-packages/ambari_commons:/usr/lib/ambari-agent/lib/resource_management/libraries/functions:/var/lib/ambari-agent/tmp
export AMBARI_PASSPHRASE=DEV
PATH=/usr/sbin:/sbin:/usr/lib/ambari-server/*:/usr/lib64/qt-3.3/bin:/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/bin:/var/lib/ambari-agent /usr/bin/python2 -mtrace --trace /usr/lib/python2.6/site-packages/ambari_agent/AmbariAgent.py stop


Memo: what to check when the Number of Under-Replicated Blocks does not go down

NameNode log (note the "20 have no source": those blocks have no live replica available to copy from, so re-replication alone cannot fix them):
INFO  BlockStateChange (BlockManager.java:computeReplicationWorkForBlocks(1527)) - BLOCK* neededReplications = 2030564, pendingReplications = 1497.
INFO  blockmanagement.BlockManager (BlockManager.java:computeReplicationWorkForBlocks(1534)) - Blocks chosen but could not be replicated = 20; of which 0 have no target, 20 have no source, 0 are UC, 0 are abandoned, 0 already have enough replicas.



As a last resort, force re-replication:
https://community.hortonworks.com/articles/4427/fix-under-replicated-blocks-in-hdfs-manually.html

su - <hdfs_user>

hdfs fsck / | grep 'Under replicated' | awk -F':' '{print $1}' >> /tmp/under_replicated_files 

for hdfsfile in `cat /tmp/under_replicated_files`; do echo "Fixing $hdfsfile :" ;  hadoop fs -setrep 3 $hdfsfile; done
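
To watch progress afterwards, re-count what fsck still reports as under-replicated:

hdfs fsck / | grep 'Under replicated' | wc -l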



Investigating: HDFS JMX: how is DataNode Disk Capacity calculated?

1) Find the DataNode ports (I had forgotten them)
[root@sandbox ~]# ps auxwww | grep ^hdfs | grep DataNode
hdfs     25716  0.4  1.9 1042212 312496 ?      Sl    2016 266:45 /usr/lib/jvm/java/bin/java -Dproc_datanode -Xmx250m -Dhdp.version=2.5.0.0-1245 -XX:+PrintClassHistogramAfterFullGC -XX:+PrintClassHistogramBeforeFullGC -Djava.net.preferIPv4Stack=true -Dhdp.version= -XX:+PrintClassHistogramAfterFul...

[root@sandbox ~]# lsof -nPp 25716 | grep LISTEN
java    25716 hdfs  360u  IPv4           61851996      0t0      TCP *:50010 (LISTEN)
java    25716 hdfs  365u  IPv4           61852005      0t0      TCP 127.0.0.1:35369 (LISTEN)
java    25716 hdfs  469u  IPv4           61851242      0t0      TCP *:50075 (LISTEN)
java    25716 hdfs  474u  IPv4           61848210      0t0      TCP *:8010 (LISTEN)

2) Check the DN's JMX page
[root@sandbox ~]# curl -L -s http://sandbox.hortonworks.com:50075/jmx | grep -w Capacity -B 3
    "name" : "Hadoop:service=DataNode,name=FSDatasetState-108756b4-ee13-404e-8c94-e0897451be59",
    "modelerType" : "org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl",
    "Remaining" : 94382731264,
    "Capacity" : 167994830848,
--

3) What are the actual values?
[root@sandbox ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
rootfs                 40G   23G   16G  60% /
/dev/mapper/docker-253:1-132787-63ad60aad76a8b9b37b207791b8300dea7818705d026ad050969393d962922ee
                       40G   23G   16G  60% /
tmpfs                 7.9G     0  7.9G   0% /dev
tmpfs                 7.9G     0  7.9G   0% /sys/fs/cgroup
/dev/vda1             158G   63G   89G  42% /hadoop
/dev/vda1             158G   63G   89G  42% /etc/resolv.conf
/dev/vda1             158G   63G   89G  42% /etc/hostname
/dev/vda1             158G   63G   89G  42% /etc/hosts
shm                    64M     0   64M   0% /dev/shm
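
Comparing the two: Capacity 167994830848 bytes ≈ 156.5 GiB and Remaining 94382731264 bytes ≈ 87.9 GiB, which line up with the 158G size and 89G avail of /dev/vda1, where /hadoop (the DataNode data directories) is mounted. So Capacity appears to be the capacity of the filesystem backing the data dirs (presumably minus dfs.datanode.du.reserved, if set). Quick arithmetic check:

[root@sandbox ~]# echo '167994830848/1024^3; 94382731264/1024^3' | bc -l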





Investigating: Zeppelin: Interpreter

According to http://zeppelin.apache.org/docs/latest/manual/interpreters.html there are "Shared", "Scoped", and "Isolated" modes; I am investigating whether the difference is visible just from the process list.

http://ho-ubu03.openstacklocal:9995/#/interpreter

==> zeppelin-zeppelin-sandbox.hortonworks.com.log <==
 INFO [2016-12-08 04:47:19,586] ({qtp1250172175-78} NotebookServer.java[sendNote]:423) - New operation from 10.42.80.62 : 65073 : anonymous : GET_NOTE : 2BQKC1P49
 INFO [2016-12-08 04:47:29,708] ({qtp1250172175-90} InterpreterFactory.java[createInterpretersForNote]:616) - Create interpreter instance jdbc for note 2BQKC1P49
 INFO [2016-12-08 04:47:29,709] ({qtp1250172175-90} InterpreterFactory.java[createInterpretersForNote]:648) - Interpreter org.apache.zeppelin.jdbc.JDBCInterpreter 1783724414 created
 INFO [2016-12-08 04:47:29,713] ({pool-1-thread-5} SchedulerFactory.java[jobStarted]:131) - Job paragraph_1467882454542_218269314 started by scheduler org.apache.zeppelin.interpreter.remote.RemoteInterpretershared_session184513978
 INFO [2016-12-08 04:47:29,714] ({pool-1-thread-5} Paragraph.java[jobRun]:254) - run paragraph 20160707-143734_1332455999 using jdbc org.apache.zeppelin.interpreter.LazyOpenInterpreter@6a51797e
 INFO [2016-12-08 04:47:29,715] ({pool-1-thread-5} RemoteInterpreterProcess.java[reference]:148) - Run interpreter process [/usr/hdp/current/zeppelin-server/bin/interpreter.sh, -d, /usr/hdp/current/zeppelin-server/interpreter/jdbc, -p, 56479, -l, /usr/hdp/current/zeppelin-server/local-repo/2BW4S5NJ4]
 INFO [2016-12-08 04:47:31,227] ({pool-1-thread-5} RemoteInterpreter.java[init]:180) - Create remote interpreter org.apache.zeppelin.jdbc.JDBCInterpreter
 INFO [2016-12-08 04:47:31,359] ({pool-1-thread-5} RemoteInterpreter.java[pushAngularObjectRegistryToRemote]:465) - Push local angular object registry from ZeppelinServer to remote interpreter group 2BW4S5NJ4:shared_process
 INFO [2016-12-08 04:47:31,673] ({pool-1-thread-5} NotebookServer.java[afterStatusChange]:1141) - Job 20160707-143734_1332455999 is finished
 INFO [2016-12-08 04:47:31,710] ({pool-1-thread-5} SchedulerFactory.java[jobFinished]:137) - Job paragraph_1467882454542_218269314 finished by scheduler org.apache.zeppelin.interpreter.remote.RemoteInterpretershared_session184513978
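
One way to compare the modes from the process list: each interpreter JVM is launched via interpreter.sh (as in the log above) and runs RemoteInterpreterServer, so listing those while running paragraphs from different notes should show one JVM per interpreter group in shared mode versus one per note in isolated mode (my reading of the docs, not yet verified):

ps auxwww | grep '[R]emoteInterpreterServer'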

Wednesday, February 8, 2017

Investigating: looking at a Knox TCP dump in Wireshark

1) Capture packets on the Knox server
tcpdump -n -s 0 -i eth0 -w /tmp/knox.pcap port 8443

2) Extract the private key from gateway.jks
keytool -importkeystore -srckeystore ./gateway.jks -destkeystore gateway.p12 -deststoretype PKCS12 -srcalias gateway-identity -srcstorepass hadoop -srckeypass hadoop -deststorepass changeit -destkeypass changeit
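
To confirm the conversion worked (it should list a single PrivateKeyEntry for gateway-identity):

keytool -list -keystore ./gateway.p12 -storetype PKCS12 -storepass changeit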

TODO: does Wireshark take the p12 directly rather than PEM format?
#openssl pkcs12 -in gateway.p12 -out gateway-key.pem -nocerts -nodes
#Or
#openssl pkcs12 -in gateway.p12 -out gateway-key.pem

3) Configure Wireshark following https://support.citrix.com/article/CTX116557
Where I got stuck: with the PEM above, entering the password gives an error:
Could not load PKCS#12 key file: could not load PKCS#12 in PEM format: Base64 unexpected header error.
Also, the Protocol field must be lowercase "http"; otherwise you get a could not find dissector for 'HTTP' error.


Note: strace might be easier?
strace -tf -e trace=network,read,write -s 1000 -o ./strace.out -p `cat /var/run/knox/gateway.pid` &
strace -tf -e trace=poll,select,connect,recvfrom,sendto -s 32 -o ./strace.out -p `cat /var/run/knox/gateway.pid` &

Incidentally, SSL/HTTPS can be disabled by adding ssl.enabled=false to gateway-site.xml.


Tuesday, February 7, 2017

Creating a truststore for Beeline (via Knox)


1) Create the certificate in PEM format
echo "" | openssl s_client -connect node7.localdomain:8443 -prexit 2>/dev/null | sed -n -e '/BEGIN\ CERTIFICATE/,/END\ CERTIFICATE/p;/END\ CERTIFICATE/q' > knox-cert.pem
Or:
keytool -exportcert -rfc -file ./knox-cert.pem -keystore /usr/hdp/current/knox-server/data/security/keystores/gateway.jks -alias gateway-identity [-storepass XXXXXXXX]

2) Create a new truststore
keytool -import -alias knox -keystore ./myNewTrustStore.jks -file ./knox-cert.pem -noprompt -storepass changeit
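
To double-check the new truststore (the knox alias should show up as trustedCertEntry):

keytool -list -keystore ./myNewTrustStore.jks -storepass changeit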

3) Test with Beeline
beeline --verbose -u "jdbc:hive2://node7.localdomain:8443/;ssl=true;sslTrustStore=./myNewTrustStore.jks;trustStorePassword=changeit;transportMode=http;httpPath=gateway/default/hive" -n admin -p admin-password -e "show databases;"

Notes:
"node7.localdomain" is the server where Knox is installed.
" 2>/dev/null" is optional; without it you just see some extra noise lines.
For some reason the certificate is printed twice, hence the ";/END\ CERTIFICATE/q".
" -p admin-password" is probably better replaced with a password file, e.g. " -w ./.passwords".

Thursday, February 2, 2017

Changing the Java used only by YARN ATS (Application Timeline Server) on HDP

On the current HDP 2.4.2.0, hadoop-env.sh is read from yarn-env.sh, so even after changing JAVA_HOME in yarn-env.sh as below, starting from the command line does not pick it up. (Operating only through Ambari seems to be fine.)

/etc/hadoop/conf/yarn-env.sh
      export HADOOP_LIBEXEC_DIR={{hadoop_libexec_dir}}
if [ "$command" == "timelineserver" ]; then
    export JAVA_HOME=/usr/lib/jvm/jre-1.8.0-openjdk.x86_64
else
    export JAVA_HOME={{java64_home}}
fi

Ambari, on the other hand, runs the command below, so changing hadoop-env.sh as follows seems to work. (It may stop working with a newer Ambari or HDP version.)
/usr/hdp/current/hadoop-yarn-timelineserver/sbin/yarn-daemon.sh --config /usr/hdp/current/hadoop-client/conf start timelineserver

/etc/hadoop/conf/hadoop-env.sh
# The java implementation to use.  Required.
if [ "$1" == "timelineserver" -o "$2" == "timelineserver" ]; then
    export JAVA_HOME=/usr/lib/jvm/jre-1.8.0-openjdk.x86_64
else
    export JAVA_HOME={{java_home}}
fi
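
After restarting ATS from Ambari, you can verify which Java it actually picked up (ATS runs the ApplicationHistoryServer class; with ps aux output, the first token of the command column is the java binary):

ps auxwww | grep '[A]pplicationHistoryServer' | awk '{print $11}'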

Alternatively, change yarn-env.sh as shown first, and have hadoop-env.sh set JAVA_HOME only when it is not already set, as below. In that case, however, if JAVA_HOME is already set somewhere (for example in a user's profile), an unpredictable Java may end up being used.
/etc/hadoop/conf/hadoop-env.sh
# The java implementation to use.  Required.
if [ -z "$JAVA_HOME" ]; then
    export JAVA_HOME=/usr/jdk64/jdk1.8.0_60
fi