CCTC 2016 · 聚效廣告 · 劉憶智 (Liu Yizhi) — Beyond MLLib: Scale up Advanced Machine Learning on Spark
Scale up Advanced Machine Learning on Spark
Speaker: Qihoo 360 / MVAD · DMLC member · MXNet committer

Algorithms in Spark ML

- Classification and Regression
  - Linear models
  - Naive Bayes
  - Decision Trees
  - Ensembles of Trees
  - Isotonic Regression
- Collaborative Filtering
  - ALS
  - …
- Clustering
  - Latent Dirichlet Allocation
  - …
- Dimensionality Reduction
  - …

Computational Model

(Diagram slide.)

Advanced ML Algorithms

- XGBoost is an optimized distributed gradient boosting library designed to be highly efficient, flexible and portable.
- MXNet is a deep learning framework designed for both efficiency and flexibility.

(Slide: define the network — p(cat), p(dog); computational cost, layers, memory limit.)

Train a Deep Network on MXNet (declaratively)

```scala
val data = Symbol.Variable("data")
// the hidden size of fc1 is cut off in the original slide; 128 matches the
// standard MXNet MLP example
val fc1 = Symbol.FullyConnected(name = "fc1")(Map("data" -> data, "num_hidden" -> 128))
val act1 = Symbol.Activation(name = "relu1")(Map("data" -> fc1, "act_type" -> "relu"))
val fc2 = Symbol.FullyConnected(name = "fc2")(Map("data" -> act1, "num_hidden" -> 64))
val act2 = Symbol.Activation(name = "relu2")(Map("data" -> fc2, "act_type" -> "relu"))
val fc3 = Symbol.FullyConnected(name = "fc3")(Map("data" -> act2, "num_hidden" -> 10))
val mlp = Symbol.SoftmaxOutput(name = "sm")(Map("data" -> fc3))

val model = FeedForward.newBuilder(mlp)
  .setContext(Context.cpu())
  .setNumEpoch(10)
  .setOptimizer(new SGD(learningRate = 0.1f, momentum = 0.9f, wd = 0.0001f))
  .setTrainData(trainDataIter)
  .setEvalData(valDataIter)
  .build()

val probArrays = model.predict(valDataIter)
// in this case, we do not have multiple outputs
require(probArrays.length == 1)
val prob = probArrays(0)
// get predicted labels
val py = NDArray.argmaxChannel(prob)
// deal with the predicted labels `py`
```

(Chart slide: comparing MXNet to other frameworks on single forward-backward performance.)

GPU Support

```scala
// the device is chosen here; the arithmetic below is device-transparent
val weight = NDArray.empty(Shape(3, 2), Context.gpu())
weight -= eta * (grad + lambda * weight)
val weightOnCpu = weight.copyTo(Context.cpu())
```

One code path for CPU and GPU (mshadow):

```cpp
template<typename xpu>
void UpdateSGD(Tensor<xpu, 2> weight, const Tensor<xpu, 2> &grad,
               float eta, float lambda) {
  weight -= eta * (grad + lambda * weight);
}
```

GPU Support: mshadow (dmlc/mshadow)

- Efficient: every expression you write is lazily evaluated and compiled into optimized code.
  - No temporary memory allocation happens for the expressions you write.
  - mshadow generates a specific kernel for every expression at compile time.
- Device invariant: you write one piece of code and it runs on both CPU and GPU.
- Simple: mshadow lets you write machine learning code using expressions.
- Whitebox: put a `float*` into the Tensor struct and take the benefit of the package; no memory allocation happens unless explicitly requested.
- Lightweight library: a small amount of code supports the functions frequently used in machine learning.
- Extendable: users can write simple functions that plug into mshadow and run on GPU/CPU; no CUDA experience is required.
- Multi-GPU and distributed ML: the mshadow-ps interface lets users write efficient multi-GPU and distributed programs in a unified way.

Distributed training with parameter servers

```scala
val envs = Map(
  "DMLC_ROLE" -> role,
  "DMLC_PS_ROOT_URI" -> schedulerHost,
  "DMLC_PS_ROOT_PORT" -> schedulerPort,
  "DMLC_NUM_WORKER" -> numWorker,
  "DMLC_NUM_SERVER" -> numServer)

if (role == "server" || role == "scheduler") {
  // scheduler & server
  KVStoreServer.start()
} else {
  // worker: synchronous update
  val kv = KVStore.create("dist_sync")
  model.fit(trainData = train, kvStore = kv)
}
```

- Asynchronous execution: create the store with `KVStore.create("dist_async")` instead of `"dist_sync"`.

Why Spark?

- Spark has become the de facto platform for big-data processing.
- Combine ETL with the machine learning pipeline.

MXNet on Spark API

```scala
val mxnet = new MXNet()
  .setBatchSize(128)
  .setContext(Context.gpu(0))
  .setDimension(Shape(784))
  .setNetwork(mlp)                // mlp will be serialized to workers
  .setNumServer(2)                // set up server
  .setNumWorker(4)                //   and worker numbers
  .setLabelName("softmax_label")
  .setExecutorClasspath(classpaths)
  .setOptimizer(new SGD(learningRate = 0.01f, momentum = 0.9f, wd = 0.00001f))

val model = mxnet.train(trainData)
model.save(modelPath)
// local predictions
val predictions = model.predict(testData)
```

Server Components on Spark

- Servers are launched by mapping over `Range(0, numServer)` in Spark tasks. (Diagram slide.)

(Chart slide: MXNet GPU vs. MXNet CPU vs. Spark ML, training 660,000 samples on 5 EC2 nodes.)

Open issues

- The engine is a singleton?
- CPU & GPU allocation on the cluster?
- What if there are not enough resources in the cluster?

Multi-GPU training loop (pseudocode from the slide):

```
execs = [symbol.bind(mx.gpu(i)) for i in range(ngpu)]
% w -= learning_rate * grad
kvstore.set_updater(…)
% iterating on data
for dbatch in train_iter:
    % iterating on GPUs
    for i in range(ngpu):
        % read a data partition
        copy_data_slice(dbatch, execs[i])
        % pull the parameters
        for key in update_keys:
            kvstore.pull(key, execs[i].weight_array[key])
        % compute the gradient
        execs[i].forward(is_train=True)
        execs[i].backward()
        % push the gradient
        for key in update_keys:
            kvstore.push(key, execs[i].grad_array[key])
```

XGBoost

(Chart slide: single-machine performance.)

Using XGBoost on a single machine:

```scala
val params = new mutable.HashMap[String, Any]()
params += "eta" -> 1.0
params += "max_depth" -> 2
params += "silent" -> 1
params += "objective" -> "binary:logistic"

val watches = new mutable.HashMap[String, DMatrix]
watches += "train" -> trainMax
watches += "test" -> testMax

val round = 2
// train a model
val booster = XGBoost.train(trainMax, params.toMap, round, watches.toMap)
// predict on the test set
val predicts = booster.predict(testMax)
```

XGBoost on Spark — wrap the training data in an RDD:

```scala
val trainRDD = MLUtils.loadLibSVMFile(sc, inputTrainPath).repartition(args(1).toInt)
val xgboostModel = XGBoost.train(trainRDD, paramMap, numRound, numWorkers)
// testSet is an RDD containing test set data represented as
// org.apache.spark.mllib.regression.LabeledPoint
val testSet = MLUtils.loadLibSVMFile(sc, inputTestPath)
// local prediction
// import methods in DataUtils to convert Iterator[org.apache.spark.mllib.regression.LabeledPoint]
// to Iterator[ml.dmlc.xgboost4j.LabeledPoint] automatically
import DataUtils._
xgboostModel.predict(new DMatrix(testSet.collect().iterator))
```

Rabit

- Portable: rabit is lightweight and runs everywhere.
  - Rabit is a library instead of a framework; it only relies on a mechanism to start programs, which most frameworks provide.
- Scalable and flexible: rabit runs fast.
  - Rabit programs use Allreduce to communicate, and do not suffer the between-iteration cost of the MapReduce abstraction.
  - Programs can call rabit functions in any order, as opposed to frameworks where callbacks are offered and called [text cut off in the source].
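The mshadow bullets describe lazy expression templates: building an expression only records work, and evaluation happens in one fused pass at assignment time, with no temporaries. A toy Python analogue (class names invented; this is not mshadow's API) sketches the idea:

```python
# Toy sketch of mshadow-style lazy expressions. Building `a * b + a` records
# the computation; nothing is evaluated (and no intermediate buffer is
# materialized) until it is assigned into a destination tensor.

class Expr:
    def __init__(self, fn):
        self.fn = fn  # elementwise function: index -> value

    def __add__(self, other):
        return Expr(lambda i: self.fn(i) + other.fn(i))

    def __mul__(self, other):
        return Expr(lambda i: self.fn(i) * other.fn(i))

class Tensor(Expr):
    def __init__(self, data):
        self.data = list(data)
        super().__init__(lambda i: self.data[i])

    def assign(self, expr):
        # a single fused loop: one "kernel", no intermediate buffers
        for i in range(len(self.data)):
            self.data[i] = expr.fn(i)

a, b, out = Tensor([1.0, 2.0]), Tensor([3.0, 4.0]), Tensor([0.0, 0.0])
out.assign(a * b + a)   # evaluated in one pass, only at assignment time
print(out.data)  # [4.0, 10.0]
```

mshadow does the same thing at C++ compile time, so the fused loop becomes a single CPU or CUDA kernel.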
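The multi-GPU pull/compute/push pseudocode can be mimicked with an in-process toy key-value store. Everything below (`ToyKVStore`, the one-parameter least-squares model, the learning rate) is invented for illustration and is not the real MXNet KVStore API:

```python
# Toy illustration of the data-parallel pull/compute/push loop: each "device"
# pulls the shared weight, computes a gradient on its own data slice, and
# pushes it back; the store applies a user-defined updater on every push.

class ToyKVStore:
    def __init__(self, init_weights, updater):
        self.store = dict(init_weights)   # key -> scalar weight
        self.updater = updater

    def pull(self, key):
        return self.store[key]

    def push(self, key, grad):
        self.store[key] = self.updater(self.store[key], grad)

def sgd_updater(w, grad, lr=0.1):
    # w -= learning_rate * grad, as in the slide's set_updater
    return w - lr * grad

# two "devices", each seeing its own slice of the batch; the true model is y = 2x
data_slices = [[1.0, 2.0], [3.0, 4.0]]
kv = ToyKVStore({"w": 0.0}, sgd_updater)

for step in range(50):                    # iterating on data
    for dev_data in data_slices:          # iterating on "GPUs"
        w = kv.pull("w")                  # pull the parameters
        # gradient of mean squared error for y = w * x against targets 2x
        grad = sum(2 * (w * x - 2.0 * x) * x for x in dev_data) / len(dev_data)
        kv.push("w", grad)                # push the gradient

print(round(kv.pull("w"), 2))  # 2.0
```

The real KVStore differs in the important ways the slides discuss: pushes can be aggregated synchronously (`dist_sync`) or applied as they arrive (`dist_async`), and the store lives on separate server processes.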
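The `binary:logistic` objective set in the XGBoost example reduces, per instance, to the standard log-loss gradient and hessian with respect to the raw margin; this sketch (function names invented) computes the pair that a gradient-boosting objective of this kind consumes:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def binary_logistic_grad_hess(pred_margin, label):
    """First- and second-order terms of log loss w.r.t. the raw margin,
    as used by a binary:logistic-style boosting objective."""
    p = sigmoid(pred_margin)
    grad = p - label          # d(loss)/d(margin)
    hess = p * (1.0 - p)      # d^2(loss)/d(margin)^2
    return grad, hess

g, h = binary_logistic_grad_hess(0.0, 1.0)
print(g, h)  # at margin 0 with label 1: grad = -0.5, hess = 0.25
```

Each boosting round fits a tree to these per-instance (grad, hess) pairs, which is why the objective string is the only thing the user has to change to switch loss functions.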
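Rabit's Allreduce leaves every worker holding the reduced result in place between iterations, rather than shuffling through a MapReduce barrier. A minimal single-process sketch of sum-Allreduce semantics (no networking; names invented):

```python
def allreduce_sum(worker_buffers):
    """Sum the i-th element across all workers, then give every worker the
    full result -- the semantics of a sum-Allreduce, without the networking."""
    reduced = [sum(vals) for vals in zip(*worker_buffers)]
    # "broadcast" phase: every worker ends up holding its own copy of the result
    return [list(reduced) for _ in worker_buffers]

# e.g. three workers, each holding a local gradient vector
workers = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
print(allreduce_sum(workers))  # every worker gets [9.0, 12.0]
```

This is the primitive distributed XGBoost iterates on: each worker contributes its local histogram/gradient statistics and receives the global sum without any central reducer.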
