Mastering Spark Cluster Setup and Testing
Recommended hardware: an Intel i5 or better, with at least 8 GB of RAM.

1. Install VMware Workstation. The latest official release is recommended; download it from:
https://my.vmware.com/cn/web/vmware/details?downloadGroup=WKST-1210-WIN&productId=524&rPId=9763

2. Run VMware Workstation, create three virtual machines, and install Ubuntu on each. Ubuntu download:
https://www.ubuntu.com/download/alternative-downloads
I used ubuntu-14.04.5-desktop-amd64.iso.
The virtual machines must be configured for network access. Here we use NAT (network address translation), so the guests share the host's IP connection.
Note 1: you can fully install one machine first, then produce the other two with VMware's clone feature.
Note 2: after installing the OS, it is recommended to install VMware Tools, both so that files can be copied between host and guest and so that the guest can be displayed full screen. The steps are as follows:
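Before going further, it is worth confirming that each guest can actually reach the network through NAT. A minimal check, assuming the guest's NAT interface is eth0 (interface names may differ on your VM):

# show the IP address assigned by the VMware NAT DHCP service
ip addr show eth0
# confirm outbound connectivity through the host
ping -c 3 www.ubuntu.com
# if ping works but apt-get does not, check the DNS entries in /etc/resolv.conf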

a. tar -xzvf VMwareTools-9.6.0-1294478.tar.gz
b. cd vmware-tools-distrib/
c. sudo ./vmware-install.pl
d. press Enter through all of the prompts
e. the exact steps can differ between versions; search for "Ubuntu install VMware Tools" if yours does not match

3. To simplify permission issues in the following steps, configure the system so that root can log in:
a. enter root mode in a terminal: sudo -s
b. gedit /etc/lightdm/lightdm.conf
c. append at the end of the file:
greeter-show-manual-login=true
allow-guest=false

g. Edit core-site.xml; a minimal configuration:
<property>
  <name>hadoop.tmp.dir</name>
  <value>/usr/local/hadoop/hadoop-2.6.4/tmp</value>
  <description>A base for other temporary directories.</description>
</property>
<property>
  <name>hadoop.native.lib</name>
  <value>true</value>
  <description>Should native hadoop libraries, if present, be used.</description>
</property>

h. Edit hdfs-site.xml. The following is a minimal configuration; see the official reference for the full list of options:
http://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/hdfs-default.xml
gedit hdfs-site.xml:
<property>
  <name>dfs.replication</name>
  <value>2</value>
</property>
<property>
  <name>dfs.namenode.name.dir</name>
  <value>/usr/local/hadoop/hadoop-2.6.4/dfs/name</value>
</property>
<property>
  <name>dfs.datanode.data.dir</name>
  <value>/usr/local/hadoop/hadoop-2.6.4/dfs/data</value>
</property>
Note: if the directories given for dfs.namenode.name.dir and dfs.datanode.data.dir do not exist, start-dfs.sh will later fail with an error.
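Since start-dfs.sh fails when these directories are missing, it is safest to create them up front. A minimal sketch, assuming the paths from the configuration above:

# on master: create the namenode metadata directory
mkdir -p /usr/local/hadoop/hadoop-2.6.4/dfs/name
# on each worker: create the datanode storage directory
mkdir -p /usr/local/hadoop/hadoop-2.6.4/dfs/data
# also create hadoop.tmp.dir from core-site.xml on every node
mkdir -p /usr/local/hadoop/hadoop-2.6.4/tmp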

i. Edit mapred-site.xml. The following is a minimal configuration; see the official reference for the full list of options:
http://hadoop.apache.org/docs/stable/hadoop-mapreduce-client/hadoop-mapreduce-client-core/mapred-default.xml
Note: MRv1 Hadoop does not use YARN as its resource manager; its configuration is:
gedit mapred-site.xml (without yarn):
<property>
  <name>mapred.job.tracker</name>
  <value>master:9001</value>
</property>
MRv2 Hadoop uses YARN as its resource manager; its configuration is:
vim mapred-site.xml (with yarn):
<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
</property>

j. Edit yarn-site.xml. The following is a minimal configuration; see the official reference for the full list of options:
http://hadoop.apache.org/docs/stable/hadoop-yarn/hadoop-yarn-common/yarn-default.xml
gedit yarn-site.xml:
<property>
  <name>yarn.resourcemanager.hostname</name>
  <value>master</value>
</property>
<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle</value>
</property>
Note: YARN is Hadoop's resource manager for the whole distributed (big data) cluster. It manages and allocates cluster resources, and on top of it several compute frameworks, such as Spark, MapReduce, and Storm, can run side by side on the same cluster.
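The same configuration files must be present on every node. One way to do this is to push the edited files from the master; a sketch, assuming passwordless SSH between the nodes and an identical installation path on the workers:

for node in worker1 worker2 worker3; do
  # copy the edited configuration files to each worker
  scp /usr/local/hadoop/hadoop-2.6.4/etc/hadoop/*.xml \
      $node:/usr/local/hadoop/hadoop-2.6.4/etc/hadoop/
done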

12. Start and verify the hadoop cluster:
a. Format the hdfs file system: hadoop namenode -format (or, equivalently, hdfs namenode -format):
root@master:/usr/local/hadoop/hadoop-2.6.0/bin# hadoop namenode -format
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.
16/03/03 14:38:15 INFO namenode.NameNode: STARTUP_MSG:
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = master/192.168.85.130
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 2.6.0
This command starts the namenode, formats it, and then shuts it down. After formatting, the following files exist on the namenode:
root@master:/usr/local/hadoop/hadoop-2.6.0/dfs/name# ls
current
root@master:/usr/local/hadoop/hadoop-2.6.0/dfs/name# cd current/
root@master:/usr/local/hadoop/hadoop-2.6.0/dfs/name/current# ls
fsimage_0000000000000000000  fsimage_0000000000000000000.md5  seen_txid  VERSION
The VERSION file contains the following:
root@master:/usr/local/hadoop/hadoop-2.6.0/dfs/name/current# more VERSION
#Thu Mar 03 16:54:31 CST 2016
namespaceID=1103891
clusterID=CID-69035837-029a-45a3-b0b3-1d662751eb43
cTime=0
storageType=NAME_NODE
blockpoolID=BP-996551254-192.168.85.130-1456995271763
layoutVersion=-60
root@master:/usr/local/hadoop/hadoop-2.6.0/dfs/name/current#
The command does not create any files under the datanodes' dfs.datanode.data.dir directories:
root@worker1:/usr/local/hadoop/hadoop-2.6.0/dfs/data# ls
root@worker1:/usr/local/hadoop/hadoop-2.6.0/dfs/data#
For the details of this command, see the official documentation:
http://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/HDFSCommands.html#namenode

b. Start hdfs: start-dfs.sh
root@master:/usr/local/hadoop/hadoop-2.6.0/dfs# start-dfs.sh
16/03/03 16:57:43 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [master]
master: starting namenode, logging to /usr/local/hadoop/hadoop-2.6.0/logs/hadoop-root-namenode-master.out
worker1: starting datanode, logging to /usr/local/hadoop/hadoop-2.6.0/logs/hadoop-root-datanode-worker1.out
worker2: starting datanode, logging to /usr/local/hadoop/hadoop-2.6.0/logs/hadoop-root-datanode-worker2.out
worker3: starting datanode, logging to /usr/local/hadoop/hadoop-2.6.0/logs/hadoop-root-datanode-worker3.out
Starting secondary namenodes [master]
master: starting secondarynamenode, logging to /usr/local/hadoop/hadoop-2.6.0/logs/hadoop-root-secondarynamenode-master.out
16/03/03 16:57:58 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
root@master:/usr/local/hadoop/hadoop-2.6.0/dfs#
Use jps to verify that HDFS started successfully:
root@master:/usr/local/hadoop/hadoop-2.6.0/bin# jps
3600 NameNode
3926 Jps
3815 SecondaryNameNode
Check HDFS through the web UI: http://master:50070
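Besides jps and the web UI, dfsadmin gives a quick cluster-level view from the command line. A sketch:

# report capacity and the list of live datanodes;
# with this setup, three live datanodes (worker1..worker3) are expected
hdfs dfsadmin -report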

[NameNode web UI, http://master:50070/dfshealth.html#tab-overview: Overview 'master:9000' (active). Started Thu Mar 03 16:57:44 CST 2016; Version 2.6.0; Cluster ID CID-69035837-029a-45a3-b0b3-1d662751eb43; Block Pool ID BP-996551254-192.168.85.130-1456995271763. DFS Used 72 KB (0%), Non DFS Used 18.82 GB, DFS Remaining 33.96 GB (64.35%), Block Pool Used 72 KB (0%). Live Nodes 3 (Decommissioned: 0), Dead Nodes 0 (Decommissioned: 0), no under-replicated blocks.]

Note 1: after the first start of hdfs, a current directory appears under each datanode's dfs.datanode.data.dir. The BP directory under it matches the blockpoolID field of the VERSION file in the current subdirectory of the namenode's dfs.namenode.name.dir, and the clusterID in the VERSION file generated there matches the clusterID in the namenode's VERSION file:
root@worker3:/usr/local/hadoop/hadoop-2.6.0/dfs/data# ls
current  in_use.lock
root@worker3:/usr/local/hadoop/hadoop-2.6.0/dfs/data# cd current/
root@worker3:/usr/local/hadoop/hadoop-2.6.0/dfs/data/current# ls
BP-996551254-192.168.85.130-1456995271763  VERSION
root@worker3:/usr/local/hadoop/hadoop-2.6.0/dfs/data/current# more VERSION
#Thu Mar 03 16:57:50 CST 2016
storageID=DS-773e81f4-39f9-4a20-9f36-b48952d06848
clusterID=CID-69035837-029a-45a3-b0b3-1d662751eb43
cTime=0
datanodeUuid=db5rt1b7-6592-46ff-af4e-c99a0ee75b80
storageType=DATA_NODE
layoutVersion=-56
root@worker3:/usr/local/hadoop/hadoop-2.6.0/dfs/data/current#
If hdfs namenode -format is executed again later, the namenode's VERSION file changes:
root@master:/usr/local/hadoop/hadoop-2.6.0/dfs/name/current# more VERSION
namespaceID=2001999531
clusterID=CID-d216d552-e79e-4d9c-8c6d-f9b412205090
cTime=0
storageType=NAME_NODE
blockpoolID=BP-1484997606-192.168.85.130-1457136293776
layoutVersion=-60
but the datanodes' BP directories and VERSION files do not change:
root@worker2:/usr/local/hadoop/hadoop-2.6.0/dfs/data/current# ls
BP-996551254-192.168.85.130-1456995271763  VERSION
root@worker2:/usr/local/hadoop/hadoop-2.6.0/dfs/data/current# more VERSION
#Fri Mar 04 19:03:10 EST 2016
storageID=DS-a9f0dfd3-cdc0-4810-ab49-49579b1ee3b2
clusterID=CID-69035837-029a-45a3-b0b3-1d662751eb43
cTime=0
datanodeUuid=f005a5B-e346fe-94fa-8061c8ac0fb0
storageType=DATA_NODE
layoutVersion=-56
root@worker2:/usr/local/hadoop/hadoop-2.6.0/dfs/data/current#
On the next start-dfs.sh, the namenode starts fine, but the datanodes cannot start and register with the namenode, because their VERSION files no longer match the namenode's. Therefore: before every hdfs namenode -format, the datanodes' data folders must be emptied! (The namenode's name folder does not need to be cleared, and neither do the tmp folders on the namenode and datanodes.)

Note 2: some people like to use start-all.sh. It simply runs start-dfs.sh followed by start-yarn.sh, as both the script itself and the deprecation notice it prints show; using start-dfs.sh and start-yarn.sh separately is recommended instead of start-all.sh:
# Start all hadoop daemons. Run this on master node.
echo "This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh"
bin=`dirname "${BASH_SOURCE-$0}"`
bin=`cd "$bin"; pwd`
DEFAULT_LIBEXEC_DIR="$bin"/../libexec
HADOOP_LIBEXEC_DIR=${HADOOP_LIBEXEC_DIR:-$DEFAULT_LIBEXEC_DIR}
. $HADOOP_LIBEXEC_DIR/hadoop-config.sh
# start hdfs daemons if hdfs is present
if [ -f "${HADOOP_HDFS_HOME}"/sbin/start-dfs.sh ]; then
  "${HADOOP_HDFS_HOME}"/sbin/start-dfs.sh --config $HADOOP_CONF_DIR
fi
# start yarn daemons if yarn is present
if [ -f "${HADOOP_YARN_HOME}"/sbin/start-yarn.sh ]; then
  "${HADOOP_YARN_HOME}"/sbin/start-yarn.sh --config $HADOOP_CONF_DIR
fi

c. Start yarn: start-yarn.sh
root@master:/usr/local/hadoop/hadoop-2.6.0/dfs/name/current# start-yarn.sh
starting yarn daemons
starting resourcemanager, logging to /usr/local/hadoop/hadoop-2.6.0/logs/yarn-root-resourcemanager-master.out
worker3: starting nodemanager, logging to /usr/local/hadoop/hadoop-2.6.0/logs/yarn-root-nodemanager-worker3.out
worker2: starting nodemanager, logging to /usr/local/hadoop/hadoop-2.6.0/logs/yarn-root-nodemanager-worker2.out
worker1: starting nodemanager, logging to /usr/local/hadoop/hadoop-2.6.0/logs/yarn-root-nodemanager-worker1.out
root@master:/usr/local/hadoop/hadoop-2.6.0/dfs/name/current#
Use jps to verify that yarn started successfully:
root@master:/usr/local/hadoop/hadoop-2.6.0/dfs/name/current# jps
9480 ResourceManager
8908 NameNode
9116 SecondaryNameNode
9743 Jps
root@worker1:~# jps
(a NodeManager process should now be listed on each worker)
Check yarn through the web UI: http://master:8088/ and http://worker1:8042/
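The yarn CLI can confirm the same thing from the command line. A sketch:

# list registered nodemanagers; all three workers should show state RUNNING
yarn node -list
# show cluster-level application state (empty at this point)
yarn application -list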

[ResourceManager web UI, http://master:8088/cluster: the Applications page loads, confirming that YARN is up.]
d. Start the JobHistory server (with mr-jobhistory-daemon.sh start historyserver), then verify with jps:
root@master:/usr/local/hadoop/hadoop-2.6.0/dfs/name/current# jps
9878 JobHistoryServer
9480 ResourceManager
8908 NameNode
9116 SecondaryNameNode
9948 Jps
root@master:/usr/local/hadoop/hadoop-2.6.0/dfs/name/current#
Check the JobHistory server through the web UI: http://master:19888
[JobHistory web UI, http://master:19888/jobhistory: the Retired Jobs table is empty ('No data available in table'), as no job has run yet.]

e. Verify the hadoop cluster.
Create directories:
hdfs dfs -mkdir -p /data/wordcount
hdfs dfs -mkdir -p /output
Upload files:
hdfs dfs -put /usr/local/hadoop/hadoop-2.6.0/etc/hadoop/*.xml /data/wordcount
Check that the upload succeeded:
hdfs dfs -ls /data/wordcount
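With the input uploaded, the cluster can be exercised end to end with the bundled wordcount example. A sketch; the jar path and version must match your installation, and the output directory must not already exist:

# run the example MapReduce job over the uploaded xml files
hadoop jar /usr/local/hadoop/hadoop-2.6.0/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0.jar \
  wordcount /data/wordcount /output/wordcount
# inspect the result
hdfs dfs -cat /output/wordcount/part-r-00000 | head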

(Continuing the root-login configuration from step 3:) save the file.
d. Set a password for the root account: sudo passwd root
e. After rebooting, you can log in as root: reboot
Note 1: if the system reports that vim is not installed, install it with apt-get install vim.
Note 2: after switching to root login, you may see the following error:
Error found when loading /root/.profile
stdin: is not a tty
As a result the session will not be configured correctly.
You should fix the problem as soon as feasible.
Fix 1: in /root/.profile, replace mesg n with tty -s && mesg n, then reboot.
Fix 2: copy the .profile from a non-root account's home directory to /root/, for example cp /home/<non-root user>/.profile /root/, then reboot.

On each node, set the node's name, and configure the mapping between IP addresses and hostnames:
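The mapping itself lives in /etc/hosts on every node. A sketch, using the master IP that appears in the namenode logs above (192.168.85.130); the worker addresses here are placeholders that must match your own VMs:

# /etc/hosts (identical on all nodes)
192.168.85.130  master
192.168.85.131  worker1
192.168.85.132  worker2
192.168.85.133  worker3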

root@master:~# hdfs dfs -ls /data/wordcount
16/03/05 08:36:28 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Found 10 items
-rw-r--r--   2 root supergroup ... /data/wordcount/capacity-scheduler.xml
(... the other uploaded *.xml files)
Start the Spark history server:
root@master:/usr/local/spark/spark-1.6.0-bin-hadoop2.6/conf# start-history-server.sh
starting org.apache.spark.deploy.history.HistoryServer, logging to /usr/local/spark/spark-1.6.0-bin-hadoop2.6/logs/spark-root-org.apache.spark.deploy.history.HistoryServer-1-master.out
root@master:/usr/local/spark/spark-1.6.0-bin-hadoop2.6/conf# jps
5826 NameNode
7107 Master
7636 Jps
6046 SecondaryNameNode
root@master:/usr/local/spark/spark-1.6.0-bin-hadoop2.6/conf#

As jps shows, the history server failed to start. A closer look at the log shows that the specified log directory does not exist:
root@master:/usr/local/spark/spark-1.6.0-bin-hadoop2.6/conf# more /usr/local/spark/spark-1.6.0-bin-hadoop2.6/logs/spark-root-org.apache.spark.deploy.history.HistoryServer-1-master.out
Spark Command: /usr/lib/java/jdk1.8.0_60/bin/java -cp /usr/local/spark/spark-1.6.0-bin-hadoop2.6/conf/:/usr/local/spark/spark-1.6.0-bin-hadoop2.6/lib/spark-assembly-1.6.0-hadoop2.6.0.jar:/usr/local/spark/spark-1.6.0-bin-hadoop2.6/lib/datanucleus-api-jdo-3.2.6.jar:/usr/local/spark/spark-1.6.0-bin-hadoop2.6/lib/datanucleus-core-3.2.10.jar:/usr/local/spark/spark-1.6.0-bin-hadoop2.6/lib/datanucleus-rdbms-3.2.9.jar:/usr/local/hadoop/hadoop-2.6.0/etc/hadoop/ -Xms1g -Xmx1g org.apache.spark.deploy.history.HistoryServer
16/03/05 16:47:15 INFO history.HistoryServer: Registered signal handlers for TERM, HUP, INT
16/03/05 16:47:15 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
16/03/05 16:47:15 INFO spark.SecurityManager: Changing view acls to: root
16/03/05 16:47:15 INFO spark.SecurityManager: Changing modify acls to: root
16/03/05 16:47:15 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); users with modify permissions: Set(root)
Exception in thread "main" java.lang.reflect.InvocationTargetException
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
    at org.apache.spark.deploy.history.HistoryServer$.main(HistoryServer.scala:235)
    at org.apache.spark.deploy.history.HistoryServer.main(HistoryServer.scala)
Caused by: java.lang.IllegalArgumentException: Log directory specified does not exist: hdfs://master:9000/historyserverforspark.
    at org.apache.spark.deploy.history.FsHistoryProvider.org$apache$spark$deploy$history$FsHistoryProvider$$startPolling(FsHistoryProvider.scala:168)
    at org.apache.spark.deploy.history.FsHistoryProvider.initialize(FsHistoryProvider.scala:120)
    at org.apache.spark.deploy.history.FsHistoryProvider.<init>(FsHistoryProvider.scala:116)
    at org.apache.spark.deploy.history.FsHistoryProvider.<init>(FsHistoryProvider.scala:49)
    ... 6 more
root@master:/usr/local/spark/spark-1.6.0-bin-hadoop2.6/conf#
That directory is the one we specified in spark-defaults.conf (in fact, if you do not set spark.eventLog.dir and spark.history.fs.logDirectory, the history server cannot be started at all, because the system does not know where to store your history information):

# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#    http://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Default system properties included when running spark-submit.
# This is useful for setting default environmental settings.
# Example:
spark.master                     spark://master:7077
spark.eventLog.enabled           true
spark.eventLog.dir               hdfs://master:9000/historyserverforspark
spark.yarn.historyServer.address master:18080
spark.history.fs.logDirectory    hdfs://master:9000/historyserverforspark
spark.serializer                 org.apache.spark.serializer.KryoSerializer
spark.driver.memory              5g
spark.executor.extraJavaOptions  -XX:+PrintGCDetails -Dkey=value -Dnumbers="one two three"

We only need to create that directory with the hdfs command hdfs dfs -mkdir -p /historyserverforspark and start the history server again; jps then shows an additional

HistoryServer process:
root@master:~# hdfs dfs -mkdir -p /historyserverforspark
16/03/05 16:59:22 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
root@master:~#
(Network connection: change these three settings; they must not be the same across the virtual machines.)
a. gedit /etc/hostname, naming the four nodes master, worker1, worker2, and worker3
b. Save, and reboot the virtual machine for the change to take effect
c. Verify that the hostname change took effect with the hostname command
[HDFS Browse Directory web UI, http://master:50070/explorer.html#/: shows the created directories (e.g. /data) with permissions drwxr-xr-x, owner root, group supergroup.]
root@master:~# start-history-server.sh
starting org.apache.spark.deploy.history.HistoryServer, logging to /usr/local/spark/spark-1.6.0-bin-hadoop2.6/logs/spark-root-org.apache.spark.deploy.history.HistoryServer-1-master.out
root@master:~# jps
5826 NameNode
7107 Master
7849 HistoryServer
7900 Jps
6046 SecondaryNameNode
root@master:~#
f. Use the web UI to check whether the history server started successfully: http://master:18080

[Spark History Server web UI, http://master:18080: 'Event log directory: hdfs://master:9000/historyserverforspark. No completed applications found! Did you specify the correct logging directory? Please verify your setting of spark.history.fs.logDirectory and whether you have the permissions to access it. It is also possible that your application did not run to completion or did not stop the SparkContext. Show incomplete applications.']

Note 1: start-all.sh simply runs start-master.sh followed by start-slaves.sh:
# Start all spark daemons.
# Starts the master on this node.
# Starts a worker on each node specified in conf/slaves
if [ -z "${SPARK_HOME}" ]; then
  export SPARK_HOME="$(cd "`dirname "$0"`"/..; pwd)"
fi
TACHYON_STR=""
while (( "$#" )); do
  case $1 in
    --with-tachyon)
      TACHYON_STR="--with-tachyon"
      ;;
  esac
  shift
done
# Load the Spark configuration
. "${SPARK_HOME}/sbin/spark-config.sh"
# Start Master
"${SPARK_HOME}/sbin"/start-master.sh $TACHYON_STR
# Start Workers
"${SPARK_HOME}/sbin"/start-slaves.sh $TACHYON_STR

Best practice: run start-all.sh on the master node.

Note 2: start-master.sh first tries to read SPARK_MASTER_IP from spark-env.sh. If it is set, the master process is started on the node that parameter points to (which may or may not be the current node); if it is not set, the master process is started on the current node:
root@master:/usr/local/spark/spark-1.6.0-bin-hadoop2.6/sbin#
if [ "$SPARK_MASTER_PORT" = "" ]; then
  SPARK_MASTER_PORT=7077
fi
if [ "$SPARK_MASTER_IP" = "" ]; then
  SPARK_MASTER_IP=`hostname`
fi
if [ "$SPARK_MASTER_WEBUI_PORT" = "" ]; then
  SPARK_MASTER_WEBUI_PORT=8080
fi
"${SPARK_HOME}/sbin"/spark-daemon.sh start $CLASS 1 \
  --ip $SPARK_MASTER_IP --port $SPARK_MASTER_PORT --webui-port $SPARK_MASTER_WEBUI_PORT
if [ "$START_TACHYON" == "true" ]; then
  "${SPARK_HOME}"/tachyon/bin/tachyon bootstrap-conf $SPARK_MASTER_IP
  "${SPARK_HOME}"/tachyon/bin/tachyon format -s
  "${SPARK_HOME}"/tachyon/bin/tachyon-start.sh master
fi
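So the usual way to pin the master explicitly is through conf/spark-env.sh. A sketch; SPARK_MASTER_IP is the variable name used by the Spark 1.x scripts shown above:

# conf/spark-env.sh on every node
export SPARK_MASTER_IP=master        # host that should run the Master process
export SPARK_MASTER_PORT=7077        # optional; this is the default anyway
export SPARK_MASTER_WEBUI_PORT=8080  # optional; default master web UI port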

Best practice: run start-master.sh on the master node.

Note 3: start-slaves.sh calls start-slave.sh on every node listed in the slaves file, starting a worker process there and registering it with a particular master. That master is determined as follows: the script first tries to read SPARK_MASTER_IP from spark-env.sh; if it is set, the node that parameter points to (which may or may not be the current node) is taken as the master node; if it is not set, the current node is treated as the master. If the master process is not running on that node, the worker processes keep retrying their registration against it.
Best practice: run start-slaves.sh on the master node.

Note 4: start-slave.sh can start a worker node dynamically and register it with the master. Once the spark cluster is already running and a new node becomes available later, there is no need to stop the whole cluster; executing this command on the newly available node starts a worker there and registers it with the master. Note that when using this command, the master must be specified on the command line:
# NOTE: This exact class name is matched downstream by SparkSubmit.
# Any changes need to be reflected there.
CLASS="org.apache.spark.deploy.worker.Worker"
if [[ $# -lt 1 ]] || [[ "$@" = *--help ]] || [[ "$@" = *-h ]]; then
  echo "Usage: ./sbin/start-slave.sh [options] <master>"
