Nobel Prize highlights neural networks’ physics roots
The road to the modern machine-learning marvels was paved with ideas from statistical mechanics and collective phenomena.
Johanna L. Miller
Physics Today 77 (12), 12–16 (2024); https://doi.org/10.1063/pt.qjmx.snxw
[Photos: John Hopfield (Matt Raspanti/Princeton University) and Geoffrey Hinton (Johnny Guatto/University of Toronto).]
“Garbage in, garbage out.” According to the old adage from computer science, what you get from a computer is no better than what you give it. And it would seem to imply that because computers can’t think for themselves, they can never do anything more sophisticated than what they’ve been explicitly instructed to.

But that last part appears to be no longer true. Neural networks—computing architectures, inspired by the human brain, in which signals are passed among nodes called artificial neurons—have, in recent years, been producing wave after wave of stunning results. (See, for example, page 17 of this issue.) Individual artificial neurons perform only the most elementary of computations. But when brought together in large enough numbers, and when fed on enough training data, they acquire capabilities uncannily reminiscent of human intelligence, seemingly out of nowhere.

Physicists are no strangers to the idea of unexpected phenomena emerging from simpler building blocks. A few elementary particles and the rules of their interactions combine to yield almost the whole of the visible world: superconductors, plasmas, and everything in between. Why shouldn’t a physics approach to emergent complexity be applied to neural networks too?

Indeed, it was—and still is—as showcased by this year’s Nobel Prize in Physics, which goes to Princeton University’s John Hopfield and the University of Toronto’s Geoffrey Hinton. Beginning in the early 1980s, Hopfield laid the conceptual foundations for physics-based thinking about brain-inspired information processing; Hinton was at the forefront of the decades-long effort to build on those ideas to develop the algorithms used by neural-network models today.
Glassy memory
It was far from obvious, at first, that neural networks would ever grow to be so powerful. As recently as 2011, the flashiest milestones in AI were being achieved by another approach entirely. IBM Watson, the computer that beat Ken Jennings and Brad Rutter at Jeopardy!, was not a neural network: It was explicitly programmed with rules for language processing, information retrieval, and logical reasoning. And many researchers thought that was the way to go to create practical AI machines.
In contrast, the early work on neural networks was curiosity-driven research, inspired more by real brains than by computers and their applications. But the nature of the interdisciplinary connection was subtle. “The questions Hopfield addressed are not unrelated to things neuroscientists were worried about,” says Princeton’s William Bialek. “But this isn’t about ‘application of physics to X’; rather, it’s about introducing a whole point of view that just didn’t exist before.”
By the 1980s, neuroscientists had known for decades that the brain is composed of neurons, which are connected to one another via synapses and alternate between periods of high and low electrical activity (colloquially, “firing” and “not firing”), and they were studying systems of a few neurons to understand how one neuron’s firing affected those it was connected to. “Some thought of neurons in terms of logic gates, like in electronics,” says Stanford University’s Jay McClelland.
In a landmark 1982 paper, Hopfield took a different approach.¹ In physics, he argued, many important properties of large-scale systems are independent of small-scale details. All materials conduct sound waves, for example, irrespective of exactly how their atoms or molecules interact. Microscopic forces might affect the speed of sound or other acoustic properties, but studying the forces among three or four atoms reveals little about how the concept of sound waves emerges in the first place.
So he wrote down a model of a network of neurons, with an eye more toward computational and mathematical simplicity than neurobiological realism. The model, now known as a Hopfield network, is sketched in figure 1. (The figure shows a five-neuron network for ease of illustration; Hopfield was simulating networks of 30 to 100 neurons.) Each neuron can be in state 1, for firing, or state 0, for not firing. And each neuron was connected to all the others via coupling constants that could have any positive or negative value, depending on whether each synapse favors or disfavors the neurons to both be firing at the same time.
That’s exactly the same form as a spin glass, a famously thorny system from condensed-matter physics. (See Physics Today, December 2021, page 17.) Unlike a ferromagnet, in which the couplings are all positive and the system has a clear ground state with all its spins aligned, a spin glass almost always lacks a state that satisfies all its spins’ energetic preferences simultaneously. Its energy landscape is complex, with many local energy minima.
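In the conventional notation (the article doesn’t write it out), the shared form is an energy function over the neuron states $s_i$, with the couplings $w_{ij}$ playing the role of the spin-glass exchange constants:

$$E = -\tfrac{1}{2}\sum_{i \neq j} w_{ij}\, s_i s_j,$$

where each $s_i$ is 1 or 0 (or ±1 in the spin language), and any single-neuron update toward its energetically preferred state can only lower $E$.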
FIGURE 1. A HOPFIELD NETWORK, formally equivalent to a spin glass, functions as an associative memory: When presented with a partially recalled state, it uses an energy-lowering algorithm to fill in the gaps. The memories are stored in the strengths of the connections among the nodes. When John Hopfield showed that with the right combination of connection weights, the network could store many memories simultaneously, he set the stage for physics-based thinking about neural networks. (Figure by Freddie Pagani; rabbit photo by JM Ligero Loarte/Wikimedia Commons/CC BY 3.0.)
Hopfield argued that the landscape could serve as a memory, with each of the energy-minimizing configurations serving as a state to be remembered. And he presented an elegant way of setting the connection strengths—inspired by what happens at real synapses—so that the memory would store any desired collection of states.
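In its simplest form (written here for ±1 states; Hopfield’s 1982 paper used an equivalent 0/1 convention), that prescription is a Hebb-like sum over the $p$ patterns $\xi^{\mu}$ to be stored:

$$w_{ij} = \frac{1}{N}\sum_{\mu=1}^{p} \xi_i^{\mu}\,\xi_j^{\mu}, \qquad w_{ii}=0,$$

so each pair of neurons that tends to be active together across the stored patterns acquires a more positive coupling, echoing the behavior of real synapses.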
But the Hopfield network is fundamentally different from an ordinary computer memory. In a computer, each item of data to be stored is encoded as a string of ones and zeros in a specific place, and it’s recalled by going back to that place and reading out the string. In a Hopfield network, all the items are stored simultaneously in the coupling strengths of the whole network. And they can be recalled associatively, by giving the network a starting point that shares just a few features with one of the remembered states and allowing it to relax to the nearest energy minimum. More often than not, it will recall the desired memory. (See also the articles by Haim Sompolinsky, Physics Today, December 1988, page 70, and John Hopfield, Physics Today, February 1994, page 40.)
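A minimal sketch of the scheme in Python/NumPy, using ±1 states and the Hebbian weights above (the function names, network size, and corruption level are illustrative choices, not from the paper):

```python
import numpy as np

def store(patterns):
    """Hebbian weights from a (p, N) array of +/-1 patterns."""
    n = patterns.shape[1]
    w = patterns.T @ patterns / n           # sum of outer products over the stored patterns
    np.fill_diagonal(w, 0.0)                # no self-coupling
    return w

def recall(w, cue, sweeps=10):
    """Asynchronous, energy-lowering updates starting from a partial cue."""
    s = cue.copy()
    for _ in range(sweeps):
        for i in np.random.permutation(len(s)):
            s[i] = 1 if w[i] @ s >= 0 else -1   # flip to the locally preferred state
    return s

rng = np.random.default_rng(0)
memories = rng.choice([-1, 1], size=(3, 100))    # three random 100-neuron memories
w = store(memories)
cue = memories[0].copy()
cue[:30] = rng.choice([-1, 1], size=30)          # corrupt 30 of the 100 neurons
print(np.array_equal(recall(w, cue), memories[0]))   # usually True: the memory is recovered
```

Each update can only lower the network’s energy, so the state slides downhill into the nearest stored minimum, which is exactly the associative recall described above.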
Those are both things that happen in real brains. “It was known experimentally in higher animals that brain activity was well spread out, and it involved many neurons,” says Hopfield. And associative memory is something you’ve directly experienced if you’ve ever recalled a song you’ve heard before after hearing one random line.
Hopfield’s model was a vast simplification of a real brain. Real neurons are intrinsically dynamic, not characterized by static states, and real neuron connections are not symmetric. But in a way, those differences were features, not bugs: They showed that collective, associative memory was an emergent large-scale phenomenon, robust against small-scale details.
Learning how to learn
“Not only is Hopfield a very good physicist, but the Hopfield model is excellent physics by itself,” says Leo van Hemmen, of the Technical University of Munich. Still, its 1982 formulation left many intriguing open questions. Hopfield had focused on simulations to show how the system relaxes to an energy minimum; would the model admit a more robust analytical treatment? How many states could the model remember, and what would happen if it was overloaded? Were there better ways of setting the connection strengths than the one Hopfield proposed?
Those questions, and others, were taken on by a flurry of physics-trained researchers who were inspired by Hopfield’s work and entered the neural-network field over the 1980s. “Physicists are versatile, curious, and arrogant—in a positive way,” says Eytan Domany, of the Weizmann Institute of Science in Israel. “They’re willing to study thoroughly and then tackle a problem they’ve never seen before, if it’s interesting. And everyone is excited about understanding the brain.”
Another part of the appeal was in how Hopfield had taken a traditional physics problem and turned it on its head. “In most energy-landscape problems, you’re given the microscopic interactions, and you ask, What is the ground state? What are the local minima? What is the entire landscape?” says Haim Sompolinsky, of the Hebrew University of Jerusalem. “The 1982 paper did the opposite. We start with the ground states that we want: the memories. And we ask, What are the microscopic interactions that will support those as ground states?”
From there, it was a short conceptual leap to ask, What if the coupling strengths themselves can evolve on their own energy landscape? That is, instead of being preprogrammed with parameters to encode specific memories, can the system improve itself by learning?
Machine learning in neural networks had been tried before. The perceptron—a neural-network-like device that sorted images into simple categories, such as circles and squares—dates back to the 1950s. When provided with a series of training images and a simple algorithm for updating its connections between neurons, it could eventually learn to correctly classify even images it hadn’t seen before.
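The updating algorithm the article alludes to is, in its textbook form, the perceptron rule: nudge the weights toward any example the device currently gets wrong. A minimal sketch (binary ±1 labels; the data and names here are illustrative):

```python
import numpy as np

def train_perceptron(x, y, epochs=100, lr=0.1):
    """Perceptron rule: w <- w + lr * y_i * x_i whenever example i is misclassified."""
    w, b = np.zeros(x.shape[1]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(x, y):
            if yi * (w @ xi + b) <= 0:       # wrong (or undecided) prediction
                w += lr * yi * xi
                b += lr * yi
    return w, b

# Toy linearly separable data: the class is the sign of the first coordinate.
rng = np.random.default_rng(1)
x = rng.normal(size=(200, 2))
y = np.sign(x[:, 0])
w, b = train_perceptron(x, y)
print(np.mean(np.sign(x @ w + b) == y))      # ~1.0 on separable data
```

When a separating set of weights exists, the loop settles on one; when it doesn’t, it keeps cycling, which is the failure mode described next.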
But the perceptron didn’t always work: With the way the network was structured, sometimes there wasn’t any way of setting the connection strengths to perform the desired classification. “When that happened, you could iterate forever, and the algorithm would never converge,” says van Hemmen. “That was a big shock.” Without a guiding principle to chart a path forward, the field had stalled.
FIGURE 2. A BOLTZMANN MACHINE extends the Hopfield network in two ways: It augments the network to include hidden nodes (shown in the center of the network in gray) that aren’t involved in encoding the data, and it operates at a nonzero effective temperature, so that the entire space of configurations can be characterized by a Boltzmann probability distribution. Geoffrey Hinton and colleagues developed a way to train the Boltzmann machine as a generative model: When presented with several inputs that all shared a common feature, it produced more items of the same type. (Figure by Freddie Pagani.)
Finding common ground
Hinton didn’t come to neural networks from a background in physics. But his collaborator Terrence Sejnowski—who’d earned his PhD under Hopfield in 1978—did. Together, they extended the Hopfield network into something they called the Boltzmann machine, which vastly extended the model’s capabilities by explicitly drawing on concepts from statistical physics.²
In Hopfield’s 1982 simulations, he’d effectively considered the spin-glass network at zero temperature: He allowed the system to evolve its state only in ways that would lower its overall energy. So whatever the starting state, it rolled into a nearby local energy minimum and stayed there.
“Terry and I immediately started thinking about the stochastic version, with nonzero temperature,” says Hinton. Instead of a deterministic energy-lowering rule, they used a Monte Carlo algorithm that allowed the system to occasionally jump into a state of higher energy. Given enough time, a stochastic simulation of the network would explore the entire energy landscape, and it would settle into a Boltzmann probability distribution, with all the low-energy states—regardless of whether they’re local energy minima—represented with high probability.
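A minimal sketch of one such Monte Carlo sweep, written here in the Metropolis style with the same ±1 notation as before (the original work’s exact update schedule differs in details):

```python
import numpy as np

def stochastic_sweep(w, s, temperature, rng):
    """Visit each unit once; flips that raise the energy by d_energy are still
    accepted with probability exp(-d_energy / temperature)."""
    for i in rng.permutation(len(s)):
        d_energy = 2 * s[i] * (w[i] @ s)       # energy cost of flipping unit i
        if d_energy <= 0 or rng.random() < np.exp(-d_energy / temperature):
            s[i] = -s[i]
    return s
```

Run long enough, the network visits configurations with probabilities proportional to $e^{-E/T}$, which is the Boltzmann distribution the machine is named for.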
“And in 1983, we discovered a really beautiful way to do learning,” Hinton says. When the network was supplied with training data, they iteratively updated the connection strengths so that the data states had high probability in the Boltzmann distribution.³ Moreover, when the input data had something in common—like the images of the numeral 3 in figure 2—then other high-probability states would share the same common features.
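In its standard textbook form, the rule compares pairwise correlations measured with the training data clamped on the visible units against the same correlations in the freely running network, and adjusts each weight by the difference:

$$\Delta w_{ij} \;\propto\; \langle s_i s_j\rangle_{\text{data}} - \langle s_i s_j\rangle_{\text{model}}.$$

Estimating the second average requires letting the network wander to equilibrium, which is the source of the slowness mentioned below.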
The key ingredient for that kind of commonality finding was augmenting the network to include more nodes than just the ones that encode the data. Those hidden nodes, represented in gray in figure 2, allow the system to capture higher-level correlations among the data.
In principle, the Boltzmann machine could be used for machine recognition of handwriting or for distinguishing normal from emergency conditions in a facility such as a power plant. Unfortunately, the Boltzmann machine’s learning algorithm is prohibitively slow for most practical applications. It remained a topic of academic research, but it didn’t find much real-world use—until it made a surprising reappearance years later.
How the networks work
Around the same time, Hinton was working with cognitive scientist David Rumelhart on another learning algorithm, which would become the secret sauce of almost all of today’s neural networks: backpropagation.⁴ The algorithm was developed for a different kind of network architecture, called a feedforward network, shown in figure 3. In contrast to the Hopfield network and Boltzmann machine, with their bidirectional connections among nodes, signals in a feedforward network flow in one direction only: from a layer of input neurons, through some number of hidden layers, to the output. A similar architecture had been used in the multilayer perceptron.
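Structurally, a forward pass is nothing more than alternating matrix multiplications and elementwise nonlinearities. A sketch (the layer sizes and the sigmoid nonlinearity are illustrative choices, not specified in the article):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, weights, biases):
    """Signals flow one way: input layer -> hidden layers -> output layer."""
    a = x
    for w, b in zip(weights, biases):
        a = sigmoid(a @ w + b)
    return a

rng = np.random.default_rng(0)
sizes = [4, 8, 8, 3]                      # input, two hidden layers, output
weights = [rng.normal(0, 0.5, (m, n)) for m, n in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]
print(forward(rng.normal(size=4), weights, biases))   # three output activations
```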
Suppose you want to train a feedforward network to classify images. You give it a picture of a rabbit, and you want it to produce the output message “This is a rabbit.” But something is wrong, and instead you get the output “This is a turtle.” How do you get things back on track? The network might have dozens or hundreds—or today, trillions—of internode connections that contribute to the output, each with its own numerical weight. There’s a dizzying number of ways to adjust them all to try to get the output you want.
Backpropagation solves that problem through gradient descent: First, you define an error function that quantifies how far the output you got is from the output you want. Then, calculate the partial derivatives of the error function with respect to each of the internodal weights—a simple matter of repeatedly applying calculus’s chain rule. Finally, use those derivatives to adjust the weights in a way that decreases the error.
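Here is a minimal sketch of those three steps for a two-layer network (squared-error loss, sigmoid activations, a single training example; all of those choices are illustrative, made to keep the chain rule short):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_step(x, target, w1, w2, lr=0.5):
    # Forward pass.
    h = sigmoid(x @ w1)                         # hidden activations
    y = sigmoid(h @ w2)                         # output activations
    # Step 1: error function, the squared distance from the desired output.
    error = 0.5 * np.sum((y - target) ** 2)
    # Step 2: partial derivatives, obtained by applying the chain rule layer by layer.
    delta2 = (y - target) * y * (1 - y)         # dE/d(output pre-activation)
    delta1 = (delta2 @ w2.T) * h * (1 - h)      # propagated back to the hidden layer
    # Step 3: gradient descent, adjusting every weight to decrease the error.
    w2 -= lr * np.outer(h, delta2)
    w1 -= lr * np.outer(x, delta1)
    return error

rng = np.random.default_rng(0)
w1, w2 = rng.normal(0, 1, (4, 8)), rng.normal(0, 1, (8, 2))
x, target = rng.normal(size=4), np.array([1.0, 0.0])
for _ in range(200):
    error = train_step(x, target, w1, w2)
print(error)    # driven toward zero over the repetitions
```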
It might take many repetitions to get the error close enough to zero—and you’ll want to make sure that the network gives the right output for many inputs, not just one. But those basic steps are used to train all kinds of networks, including proof-of-concept image classifiers and large language models, such as ChatGPT.

FIGURE 3. A FEEDFORWARD NETWORK, trained by backpropagation, is the basic structure of the neural networks used today. By passing numerical signals from an input layer through hidden layers to an output layer, feedforward networks perform functions that include image classification and text generation. (Figure by Freddie Pagani; rabbit photo by JM Ligero Loarte/Wikimedia Commons/CC BY 3.0; haiku generated by GPT-4, OpenAI, 22 October 2024.) [The figure’s example outputs are the label “Rabbit” for an image input and, for the prompt “Write a haiku about a rabbit,” the lines “Soft ears in the grass, / Hopping through the morning dew, / Nature’s quiet joy.”]
Gradient descent is intuitively elegant, and it wasn’t conceptually new. “But several elements had to come together to get the backpropagation idea to work,” says McClelland. “For one thing, you can’t take the derivative of something if it’s not differentiable.” Real neurons operate more or less in discrete on and off states, and the original Hopfield network, Boltzmann machine, and perceptron were all discrete models. For backpropagation to work, it was necessary to shift to a model in which the node states can take a continuum of values. But those continuous-valued networks had already been introduced, including in a 1984 paper by Hopfield.⁵
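A common smooth stand-in for the on/off threshold (and the one used in the sketches above) is the logistic function

$$\sigma(z) = \frac{1}{1+e^{-z}}, \qquad \sigma'(z) = \sigma(z)\bigl(1-\sigma(z)\bigr),$$

whose everywhere-defined derivative is exactly what the chain rule consumes.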
A second innovation had to wait for longer. Backpropagation worked well for networks with just a couple of layers. But when the layer count approached five or more—a trifling number by today’s standards—some of the partial derivatives were so small that the training took an impractically long time.
In the early 2000s, Hinton found a solution, and it involved his old Boltzmann machine—or rather, a so-called restricted version of it, in which the only connections are those between one hidden neuron and one visible (non-hidden) neuron.⁶ Restricted Boltzmann machines (RBMs) are easy to model computationally, because each group of neurons—visible and hidden—can be updated all at once, and the connection weights can all be adjusted together in a single step. Hinton’s idea was to isolate pairs of successive layers in a feedforward network, train them as if they were RBMs to get the weights approximately right, and then fine-tune the whole network using backpropagation.
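A minimal sketch of one such single-step weight adjustment, in the contrastive-divergence style commonly used for RBMs (single training example, no bias terms; the function name is mine):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def rbm_update(v_data, w, lr=0.01, rng=None):
    """One contrastive-divergence update. Because connections run only between
    the visible and hidden groups, each whole group is sampled in one block."""
    if rng is None:
        rng = np.random.default_rng()
    p_h = sigmoid(v_data @ w)                      # all hidden units at once, given the data
    h = (rng.random(p_h.shape) < p_h).astype(float)
    p_v = sigmoid(h @ w.T)                         # reconstruct all visible units at once
    p_h_recon = sigmoid(p_v @ w)
    w += lr * (np.outer(v_data, p_h) - np.outer(p_v, p_h_recon))   # all weights together
    return w
```

In the greedy pretraining scheme, each pair of successive layers is trained this way in turn, and the resulting weights become the starting point for backpropagation.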
“It was kind of a hacky thing, but it worked, and people got very excited,” says Graham Taylor, of the University of Guelph in Canada, who earned his PhD under Hinton in 2009. “It was now possible to train networks with five, six, seven layers. People called them ‘deep’ networks, and they started using the term ‘deep learning.’”
The RBM hack wasn’t used for long. Computing power was advancing so quickly—particularly with the realization that graphics processing units (GPUs) were ideally suited to the computations needed for neural networks—that within a few years, it was possible to do backpropagation on even larger networks from a cold start, with no RBMs required.

“If RBM learning hadn’t happened, would GPUs have come along anyway?” asks Taylor. “That’s arguable. But the excitement around RBMs changed the landscape: It led to the recruitment and training of new students and to new ways of thinking. I think at the very least, it wouldn’t have happened the same way.”
What’s new is old
Today’s networks use hundreds or thousands of layers, but their form is little changed from what Hinton described. “I learned about neural networks from books from the 1980s,” says Bernhard Mehlig, of the University of Gothenburg in Sweden. “When I started teaching it, I realized that not much is new. It’s essentially the old stuff.” Mehlig notes that in a textbook he wrote, published in 2021, part 1 of 3 is about Hopfield, and part 2 is about Hinton.
Neural networks now influence a vast number of human endeavors: They’re involved in data analysis, web searches, and creating graphics. Are they intelligent? It’s easy to dismiss the question out of hand. “There have always been lots of things that machines can do better than humans,” says the University of Maryland’s Sankar Das Sarma. “That has nothing to do with becoming human. ChatGPT is fabulously good at some things, but at many others, it’s not even as good as a two-year-old baby.”
An illustrative comparison is the vast data gap between today’s neural networks and humans.⁷ A literate 20-year-old may have read and heard a few hundred million words in life so far. Large language models, in contrast, are trained on hundreds of billions of words, a number that grows with each new release. When you account for the fact that ChatGPT has the advantage of a thousand times as much life experience as you do, its abilities may seem less like intelligence. But perhaps it doesn’t matter if AI fumbles with some tasks if it’s good at the right combination of others.
Hinton and Hopfield have both spoken about the dangers of unchecked AI. Among their arguments is the idea that once machines become capable of breaking up goals into subgoals, they’ll quickly deduce that they can make almost any task easier for themselves by consolidating their own power. And because neural networks are often tasked with writing code for other computers, stopping the damage is not as simple as pulling the plug on a single machine.
“There are also imminent risks that we’re facing right now,” says Mehlig. “There are computer-written texts and fake images that are being used to trick people and influence elections. I think that by talking about computers taking over the world, people take the imminent dangers less seriously.”
What can physicists do?
Much of the unease stems from the fact that so little is known about what neural networks are really doing: How do billions of matrix multiplications add up to the ability to find protein structures or write poetry? “People at the big companies are more interested in producing revenue, not understanding,” says Das Sarma. “Understanding takes longer. The job of theorists is to understand phenomena, and this is a huge physical phenomenon, waiting to be understood by us. Physicists should be interested in this.”
“It’s hard not to be excited by what’s going on, and it’s hard not to notice that we don’t understand,” says Bialek. “If you want to say that things are emergent, what’s the order parameter, and what is it that’s emerged? Physics has a way of making that question more precise. Will that approach yield insight? We’ll see.”
For now, the biggest questions are still overwhelming. “If there were something obvious that came to mind, there would be a horde of people trying to solve it,” says Hopfield. “But there isn’t a horde of people working on this, because nobody knows where to start.”
But a few smaller-scale questions are more tractable. For example, why does backpropagation so reliably reduce the network error to near zero, rather than getting stuck in high-lying local minima like the Hopfield network does? “There was a beautiful piece of work on this a few years ago by Surya Ganguli at Stanford,” says Sara Solla, of Northwestern University. “He found that most high-lying minima are really saddle points: It’s a minimum in many dimensions, but there’s always one in which it’s not. So if you keep kicking, you eventually find your way out.”
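The textbook picture of such a point is the surface

$$f(x, y) = x^2 - y^2,$$

which looks like a minimum if you probe only the $x$ direction but falls away along $y$; in a space of millions of weights, a random kick is overwhelmingly likely to find one of the falling directions, and gradient descent escapes.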
When physics-trained