Parallel Programming
Instructor: Zhang Weizhe (張偉哲)
Computer Network and Information Security Technique Research Center,
School of Computer Science and Technology, Harbin Institute of Technology

Programming Using the Message-Passing Paradigm

Outline
- Introduction
- Programming with MPI
- Programming with PVM
- Comparison of MPI and PVM

A Parallel Machine Model
The cluster: a set of nodes connected by an interconnect. A node can communicate with other nodes by sending and receiving messages over the interconnection network. Each node is itself a von Neumann computer.

Principles of Message-Passing Programming
- Each processor in a message-passing program runs a separate process (sub-program, task).
- The logical view of a machine supporting the message-passing paradigm consists of p processes, each with its own exclusive address space. All variables are private.
- Each data element must belong to one of the partitions of the space; hence, data must be explicitly partitioned and placed.
- Processes communicate via special subroutine calls. All interactions (read-only or read/write) require the cooperation of two processes: the process that has the data and the process that wants to access the data.

Principles of Message-Passing Programming
- SPMD (Single Program Multiple Data): the same program runs everywhere; each process only knows and operates on a small part of the data.
- MPMD (Multiple Program Multiple Data): each process performs a different function (input, problem setup, solution, output, display).

Messages
Messages are packets of data moving between processes. The message-passing system has to be told the following information:
- Sending process
- Source location
- Data type
- Data length
- Receiving process(es)
- Destination location
- Destination size

Message Passing
Message-passing programs are often written using the asynchronous or loosely synchronous paradigms. A synchronous communication does not complete until the message has been received; an asynchronous communication completes as soon as the message is on its way.
Outline
- Introduction
- Programming with MPI
- Programming with PVM
- Comparison of MPI and PVM

What is MPI?
- The development of MPI started in April 1992.
- MPI was designed by the MPI Forum (a diverse collection of implementors, library writers, and end users) quite independently of any specific implementation.
- Website: //mpi/

What is MPI?
- MPI defines a standard library for message passing that can be used to develop portable message-passing programs using either C or Fortran.
- A fixed set of processes is created at program initialization, one process per processor:

      mpirun -np 5 program

- Each process knows its personal number (rank).
- Each process knows the number of all processes.
- Each process can communicate with other processes.
- A process cannot create new processes (in MPI-1).

MPI: the Message Passing Interface
The minimal set of MPI routines:

  MPI_Init        Initializes MPI.
  MPI_Finalize    Terminates MPI.
  MPI_Comm_size   Determines the number of processes.
  MPI_Comm_rank   Determines the label of the calling process.
  MPI_Send        Sends a message.
  MPI_Recv        Receives a message.

Starting and Terminating the MPI Library
- MPI_Init is called prior to any calls to other MPI routines. Its purpose is to initialize the MPI environment.
- MPI_Finalize is called at the end of the computation, and it performs various clean-up tasks to terminate the MPI environment.
- The prototypes of these two functions are:

      int MPI_Init(int *argc, char ***argv)
      int MPI_Finalize()

- MPI_Init also strips off any MPI-related command-line arguments.
- All MPI routines, data types, and constants are prefixed by "MPI_". The return code for successful completion is MPI_SUCCESS.

Communicators
- A communicator defines a communication domain: a set of processes that are allowed to communicate with each other.
- Information about communication domains is stored in variables of type MPI_Comm.
- Communicators are used as arguments to all message-transfer MPI routines.
- A process can belong to many different (possibly overlapping) communication domains.
- MPI defines a default communicator called MPI_COMM_WORLD which includes all the processes.

Querying Information
The MPI_Comm_size and MPI_Comm_rank functions are used to determine the number of processes and the label of the calling process, respectively. The calling sequences of these routines are as follows:

      int MPI_Comm_size(MPI_Comm comm, int *size)
      int MPI_Comm_rank(MPI_Comm comm, int *rank)
The rank of a process is an integer that ranges from zero up to the size of the communicator minus one.

Our First MPI Program

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        int npes, myrank;
        MPI_Init(&argc, &argv);
        MPI_Comm_size(MPI_COMM_WORLD, &npes);
        MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
        printf("From process %d out of %d, Hello World!\n",
               myrank, npes);
        MPI_Finalize();
        return 0;
    }

Parallel Programming with MPI
- Communication
  - Basic send/receive (blocking)
  - Non-blocking
  - Collective
- Synchronization
  - Implicit in point-to-point communication
  - Global synchronization via collective communication
- Parallel I/O (MPI-2)

Basic Sending and Receiving Messages
The basic functions for sending and receiving messages in MPI are MPI_Send and MPI_Recv, respectively. The calling sequences of these routines are as follows:

      int MPI_Send(void *buf, int count, MPI_Datatype datatype,
                   int dest, int tag, MPI_Comm comm)
      int MPI_Recv(void *buf, int count, MPI_Datatype datatype,
                   int source, int tag,
                   MPI_Comm comm, MPI_Status *status)
MPI_Send
- The message to be sent is determined by pointing to the memory block (buffer) which contains the message. The triad used to point to the buffer (buf, count, type) is included in the parameters of practically all data-passing functions.
- The processes between which data is passed must belong to the communicator specified in the MPI_Send call.
- The parameter tag is used only when it is necessary to differentiate among the messages being passed; otherwise, an arbitrary integer number can be used as the parameter value.

MPI_Recv

Sending and Receiving Messages
- MPI allows the specification of wildcard arguments for both source and tag.
- If source is set to MPI_ANY_SOURCE, then any process of the communication domain can be the source of the message.
- If tag is set to MPI_ANY_TAG, then messages with any tag are accepted.
- On the receive side, the message must be of length equal to or less than the length field specified.

MPI Datatypes
  MPI datatype            C datatype
  MPI_CHAR                signed char
  MPI_SHORT               signed short int
  MPI_INT                 signed int
  MPI_LONG                signed long int
  MPI_UNSIGNED_CHAR       unsigned char
  MPI_UNSIGNED_SHORT      unsigned short int
  MPI_UNSIGNED            unsigned int
  MPI_UNSIGNED_LONG       unsigned long int
  MPI_FLOAT               float
  MPI_DOUBLE              double
  MPI_LONG_DOUBLE         long double
  MPI_BYTE                (untyped bytes)
  MPI_PACKED              (packed data)

Point-to-point Example

Process 0:
    #define TAG 999
    float a[10];
    int dest = 1;
    MPI_Send(a, 10, MPI_FLOAT, dest, TAG, MPI_COMM_WORLD);

Process 1:
    #define TAG 999
    MPI_Status status;
    int count;
    float b[20];
    int sender = 0;
    MPI_Recv(b, 20, MPI_FLOAT, sender, TAG, MPI_COMM_WORLD, &status);
    MPI_Get_count(&status, MPI_FLOAT, &count);

Non-blocking Communication
In order to overlap communication with computation, MPI provides a pair of functions for performing non-blocking send and receive operations:

      int MPI_Isend(void *buf, int count, MPI_Datatype datatype,
                    int dest, int tag, MPI_Comm comm,
                    MPI_Request *request)
      int MPI_Irecv(void *buf, int count, MPI_Datatype datatype,
                    int source, int tag, MPI_Comm comm,
                    MPI_Request *request)

These operations return before the operations have been completed. The function MPI_Test tests whether or not the non-blocking send or receive operation identified by its request has finished:

      int MPI_Test(MPI_Request *request, int *flag,
                   MPI_Status *status)
Non-blocking Communication
The following scheme of combining computation with the execution of a non-blocking communication operation is possible: start the non-blocking operation, perform computations that do not depend on the message being transferred, and then test or wait for the completion of the operation.
Evaluating MPI Program Execution Time
- The execution time needs to be known in order to estimate the speedup obtained by the parallel computation.
- The time of the current moment of the program execution is obtained by means of the following function:

      double MPI_Wtime(void)

- The accuracy of the time measurement can depend on the environment of the parallel program execution. The following function can be used to determine the current value of the time measurement accuracy:

      double MPI_Wtick(void)

MPI Collective Communication
- Routines that send message(s) to a group of processes or receive message(s) from a group of processes.
- Potentially more efficient than point-to-point communication.
- Examples: Broadcast, Reduction, Barrier, Scatter, Gather, All-to-all.

Collective Communication - Broadcast
- There is the need for transmitting the values of the vector X to all the parallel processes.
- An evident way is to use the MPI point-to-point communication functions discussed above to provide all the required data transmissions.
- The repetition of point-to-point transmissions sums up the latencies of the communication operations; the required data transmissions can be executed with a smaller number of iterations.

Collective Communication - Broadcast
The one-to-all broadcast operation is:
      int MPI_Bcast(void *buf, int count, MPI_Datatype datatype,
                    int source, MPI_Comm comm)

Sends data from the root to all other processes in a group.

Collective Communication - Broadcast

Collective Communication - Reduce
The all-to-one reduction operation is:
      int MPI_Reduce(void *sendbuf, void *recvbuf, int count,
                     MPI_Datatype datatype, MPI_Op op, int target,
                     MPI_Comm comm)

- Combines data from all processes in the group.
- Performs an (associative) reduction operation (e.g., MPI_SUM, MPI_MAX).
- Returns the data to one process (the target).

Collective Communication - Reduce
The basic MPI operation types for the data reduction functions include MPI_SUM, MPI_PROD, MPI_MAX, and MPI_MIN.

Collective Communication - Synchronization
The barrier synchronization operation is performed in MPI using:
      int MPI_Barrier(MPI_Comm comm)
A barrier operation synchronizes a number of processes: no process returns from the MPI_Barrier call until all processes in the communicator have entered it.

Collective Communication - Scatter
The corresponding scatter operation is:
      int MPI_Scatter(void *sendbuf, int sendcount,
                      MPI_Datatype senddatatype, void *recvbuf,
                      int recvcount, MPI_Datatype recvdatatype,
                      int source, MPI_Comm comm)
Sends each element of an array in the root to a separate process.

Collective Communication - Gather
Gathering data from all the processes to one process is the reverse of data scattering. The following MPI function provides the execution of this operation:
      int MPI_Gather(void *sendbuf, int sendcount,
                     MPI_Datatype senddatatype, void *recvbuf,
                     int recvcount, MPI_Datatype recvdatatype,
                     int target, MPI_Comm comm)
Collects data from a set of processes.

Other Collective Communication
MPI also provides the MPI_Allgather function, in which the data are gathered at all the processes:
      int MPI_Allgather(void *sendbuf, int sendcount,
                        MPI_Datatype senddatatype, void *recvbuf,
                        int recvcount, MPI_Datatype recvdatatype,
                        MPI_Comm comm)

If the result of the reduction operation is needed by all processes, MPI provides:
      int MPI_Allreduce(void *sendbuf, void *recvbuf, int count,
                        MPI_Datatype datatype, MPI_Op op, MPI_Comm comm)

To compute prefix sums, MPI provides:
      int MPI_Scan(void *sendbuf, void *recvbuf, int count,
                   MPI_Datatype datatype, MPI_Op op, MPI_Comm comm)
Other Collective Communication
The all-to-all personalized communication operation is performed by:
      int MPI_Alltoall(void *sendbuf, int sendcount,
                       MPI_Datatype senddatatype, void *recvbuf,
                       int recvcount, MPI_Datatype recvdatatype,
                       MPI_Comm comm)

Using this core set of collective operations, a number of programs can be greatly simplified.

Topologies and Embeddings
- MPI allows a programmer to organize processors into logical k-d meshes.
- The processor ids in MPI_COMM_WORLD can be mapped to other communicators (corresponding to higher-dimensional meshes) in many ways.
- The goodness of any such mapping is determined by the interaction pattern of the underlying program and the topology of the machine.
- MPI does not provide the programmer any control over these mappings.
Topologies and Embeddings
Different ways to map a set of processes to a two-dimensional grid: (a) and (b) show a row-wise and a column-wise mapping of these processes; (c) shows a mapping that follows a space-filling curve (dotted line); (d) shows a mapping in which neighboring processes are directly connected in a hypercube.

Creating and Using Cartesian Topologies
We can create cartesian topologies using the function:
      int MPI_Cart_create(MPI_Comm comm_old, int ndims,
                          int *dims, int *periods, int reorder,
                          MPI_Comm *comm_cart)

- This function takes the processes in the old communicator and creates a new communicator with ndims dimensions.
- Each processor can now be identified in this new cartesian topology by a vector of dimension ndims.

Example: Calculating π
- The value of the constant π can be computed by means of the integral
      π = ∫ from 0 to 1 of 4 / (1 + x²) dx
- To compute this integral, the method of rectangles can be used for numerical integration.

Example: Calculating π
- A cyclic scheme can be used to distribute the calculations among the processors.
- The partial sums that were calculated on different processors have to be summed.

Calculating π - Sequential Program
    #include <stdio.h>

    int num_steps = 1000;
    double width;

    int main(void)
    {
        int i;
        double x, pi, sum = 0.0;
        width = 1.0 / (double)num_steps;
        for (i = 1; i <= num_steps; i++) {
            x = (i - 0.5) * width;
            sum = sum + 4.0 / (1.0 + x * x);
        }
        pi = sum * width;
        printf("pi = %f\n", pi);
        return 0;
    }

MPI Example - Calculating π

Collective Communication - Calculating π

Outline
- Introduction
- Programming with MPI
- Programming with PVM
- Comparison of MPI and PVM

What is PVM?
- The development of PVM started in summer 1989 at Oak Ridge National Laboratory (ORNL).
- PVM is a software package that allows a heterogeneous collection of workstations (the host pool) to function as a single high-performance parallel machine (a virtual machine).
- PVM, through its virtual machine, provides a simple yet useful distributed operating system.
- It has a daemon running on all computers making up the virtual machine.

PVM Resources
- Website:
/pvm/pvm_home.html
- Book: PVM: Parallel Virtual Machine - A Users' Guide and Tutorial for Networked Parallel Computing
  Al Geist, Adam Beguelin, Jack Dongarra, Weicheng Jiang, Robert Manchek, Vaidy Sunderam
/pvm3/book/pvm-book.html
How PVM is Designed

Basic PVM Functions
- pvm_mytid: enrolls the calling process into PVM and generates a unique task identifier if this process is not already enrolled in PVM. If the calling process is already enrolled in PVM, this routine simply returns the process's tid.

      tid = pvm_mytid();

- pvm_spawn: starts new PVM processes. The programmer can specify the machine architecture and machine name where processes are to be spawned.

      numt = pvm_spawn("worker", 0, PvmTaskDefault, "", 1, &tids[i]);

Basic PVM Functions
- pvm_exit: tells the local pvmd that this process is leaving PVM. This routine should be called by all PVM processes before they exit.
- pvm_addhosts: adds hosts to the virtual machine. The names should have the same syntax as lines of a pvmd hostfile.

      pvm_addhosts(hostarray, 4, infoarray);

- pvm_delhosts: deletes hosts from the virtual machine.
      pvm_delhosts(hostarray, 4);

Basic PVM Functions
- pvm_send: immediately sends the data in the message buffer to the specified destination task. This is a blocking send operation. Returns 0 if successful, < 0 otherwise.
      pvm_send(tids[1], MSGTAG);

- pvm_mcast: multicasts a message stored in the active send buffer to the tasks specified in tids[]. The message is not sent to the caller even if listed in the array of tids.
      pvm_mcast(tids, ntask, msgtag);

Basic PVM Functions
- pvm_recv: blocks the receiving process until a message with the specified tag has arrived from the specified tid. The message is then placed in a new active receive buffer, which also clears the current receive buffer.
      pvm_recv(tid, msgtag);

- pvm_nrecv: same as pvm_recv, except that a non-blocking receive operation is performed. If the specified message has arrived, this routine returns the buffer id of the new receive buffer. If the message has not arrived, it returns 0. If an error occurs, an integer < 0 is returned.
      pvm_nrecv(tid, msgtag);

Basic PVM Functions
      pvm_barrier("worker", 5);
      pvm_bcast("worker", msgtag);
      pvm_gather(&getmatrix, &myrow, 10, PVM_INT, msgtag, "workers", root);
      pvm_scatter(&getmyrow, &matrix, 10, PVM_INT,
                  msgtag, "workers", root);
      pvm_reduce(PvmMax, &myvals, 10, PVM_INT, msgtag, "workers", root);

PVM Example: Hello World!

Master program:
    #include <stdio.h>
    #include "pvm3.h"
    main()
    {
        int cc, tid;
        char buf[100];
        printf("i'm t%x\n", pvm_mytid());
        cc = pvm_spawn("hello_other", 0, 0, "", 1, &tid);
        if (cc == 1) {
            cc = pvm_recv(-1, -1);
            pvm_bufinfo(cc, 0, 0, &tid);
            pvm_upkstr(buf);
            printf("from t%x: %s\n", tid, buf);
        } else
            printf("can't start hello_other\n");
        pvm_exit();
        exit(0);
    }

Slave program (hello_other):
    #include "pvm3.h"
    main()
    {
        int ptid;
        char buf[100];
        ptid = pvm_parent();
        strcpy(buf, "hello, world from ");
        gethostname(buf + strlen(buf), 64);
        pvm_initsend(PvmDataDefault);
        pvm_pkstr(buf);
        pvm_send(ptid, 1);
        pvm_exit();
        exit(0);
    }

Setup to Use PVM
- Set PVM_ROOT and PVM_ARCH in your .cshrc file.
- Build PVM for each architecture type.
- Create a .rhosts file on each host listing all the hosts you wish to use.
- Create a $HOME/.xpvm_hosts file listing all the hosts you wish to use prepended by an "&".

Starting PVM
Before we go over the steps to compile and run parallel PVM programs, you should be sure you can start up PVM and configure a virtual machine. On any host on which PVM has been installed you can type
    % pvm
and you should get back a PVM console prompt signifying that PVM is now running on this host. You can add hosts to your virtual machine by typing at the console prompt
    pvm> add hostname
and you can delete hosts (except the one you are on) from your virtual machine by typing
    pvm> delete hostname
If you get the message "Can't Start pvmd," then check the common startup problems section and try again.

Starting PVM
To see what the present virtual machine looks like, you can type
    pvm> conf
To see what PVM tasks are running on the virtual machine, you type
    pvm> ps -a
Of course you don't have any tasks running yet; that's in the next section. If you type "quit" at the console prompt, the console will quit but your virtual machine and tasks will continue to run. At any Unix prompt on any host in the virtual machine, you can type
    % pvm
and you will get the message "pvm already running" and the console prompt. When you are finished with the virtual machine, you should type
    pvm> halt
This command kills any PVM tasks, shuts down the virtual machine, and exits the console. This is the recommended method to stop PVM because it makes sure that the virtual machine shuts down cleanly. You should practice starting and stopping and adding hosts to PVM until you are comfortable with the PVM console. A full description of the PVM console and its many command options is given at the end of this chapter.

Outline
- Introduction
- Programming with MPI
- Programming with PVM
- Comparison of MPI and PVM

PVM and MPI Goals
- PVM
  - A distributed operating system
  - Portability
  - Heterogeneity
  - Handling communication failures
- MPI
  - A library for writing application programs, not a distributed operating system
  - Portability
  - High performance
  - Heterogeneity
  - Well-defined behavior

What is Not Different?
- Portability
  - Source code written for one architecture can be copied to a second architecture, compiled, and executed without modification (to some extent).
- Support for MPMD programs as well as SPMD
- Interoperability: the ability of different implementations of the same specification to exchange messages
- Heterogeneity (to some extent)

PVM and MPI are systems designed to provide users with libraries for writing portable, heterogeneous, MPMD programs.

Process Control
- The ability to start and stop tasks, to find out which tasks are running, and possibly where they are running.
- PVM
  contains all of these capabilities: it can spawn/kill tasks dynamically.
- MPI-1 has no defined method to start a new task.
- MPI-2 contains functions to start a group of tasks and to send a kill signal to a group of tasks.

Resource Control
- PVM
  is inherently dynamic in nature, and it has a rich set of resource control functions. Hosts can be added or deleted, which supports load balancing, task migration, fault tolerance, and efficiency.
- MPI is specifically designed to be static in nature to improve performance.

Virtual Topology - only for MPI
- Convenient process naming
- Naming scheme to fit the communication pattern
- Simplifies writing of code
- Can allow MPI to optimize communications