




Deep Learning
Ian Goodfellow
Yoshua Bengio
Aaron Courville
Contents
Website
Acknowledgments
Notation
1 Introduction
1.1 Who Should Read This Book?
1.2 Historical Trends in Deep Learning
I Applied Math and Machine Learning Basics
2 Linear Algebra
2.1 Scalars, Vectors, Matrices and Tensors
2.2 Multiplying Matrices and Vectors
2.3 Identity and Inverse Matrices
2.4 Linear Dependence and Span
2.5 Norms
2.6 Special Kinds of Matrices and Vectors
2.7 Eigendecomposition
2.8 Singular Value Decomposition
2.9 The Moore-Penrose Pseudoinverse
2.10 The Trace Operator
2.11 The Determinant
2.12 Example: Principal Components Analysis
3 Probability and Information Theory
3.1 Why Probability?
3.2 Random Variables
3.3 Probability Distributions
3.4 Marginal Probability
3.5 Conditional Probability
3.6 The Chain Rule of Conditional Probabilities
3.7 Independence and Conditional Independence
3.8 Expectation, Variance and Covariance
3.9 Common Probability Distributions
3.10 Useful Properties of Common Functions
3.11 Bayes' Rule
3.12 Technical Details of Continuous Variables
3.13 Information Theory
3.14 Structured Probabilistic Models
4 Numerical Computation
4.1 Overflow and Underflow
4.2 Poor Conditioning
4.3 Gradient-Based Optimization
4.4 Constrained Optimization
4.5 Example: Linear Least Squares
5 Machine Learning Basics
5.1 Learning Algorithms
5.2 Capacity, Overfitting and Underfitting
5.3 Hyperparameters and Validation Sets
5.4 Estimators, Bias and Variance
5.5 Maximum Likelihood Estimation
5.6 Bayesian Statistics
5.7 Supervised Learning Algorithms
5.8 Unsupervised Learning Algorithms
5.9 Stochastic Gradient Descent
5.10 Building a Machine Learning Algorithm
5.11 Challenges Motivating Deep Learning
II Deep Networks: Modern Practices
6 Deep Feedforward Networks
6.1 Example: Learning XOR
6.2 Gradient-Based Learning
6.3 Hidden Units
6.4 Architecture Design
6.5 Back-Propagation and Other Differentiation Algorithms
6.6 Historical Notes
7 Regularization for Deep Learning
7.1 Parameter Norm Penalties
7.2 Norm Penalties as Constrained Optimization
7.3 Regularization and Under-Constrained Problems
7.4 Dataset Augmentation
7.5 Noise Robustness
7.6 Semi-Supervised Learning
7.7 Multi-Task Learning
7.8 Early Stopping
7.9 Parameter Tying and Parameter Sharing
7.10 Sparse Representations
7.11 Bagging and Other Ensemble Methods
7.12 Dropout
7.13 Adversarial Training
7.14 Tangent Distance, Tangent Prop, and Manifold Tangent Classifier
8 Optimization for Training Deep Models
8.1 How Learning Differs from Pure Optimization
8.2 Challenges in Neural Network Optimization
8.3 Basic Algorithms
8.4 Parameter Initialization Strategies
8.5 Algorithms with Adaptive Learning Rates
8.6 Approximate Second-Order Methods
8.7 Optimization Strategies and Meta-Algorithms
9 Convolutional Networks
9.1 The Convolution Operation
9.2 Motivation
9.3 Pooling
9.4 Convolution and Pooling as an Infinitely Strong Prior
9.5 Variants of the Basic Convolution Function
9.6 Structured Outputs
9.7 Data Types
9.8 Efficient Convolution Algorithms
9.9 Random or Unsupervised Features
9.10 The Neuroscientific Basis for Convolutional Networks
9.11 Convolutional Networks and the History of Deep Learning
10 Sequence Modeling: Recurrent and Recursive Nets
10.1 Unfolding Computational Graphs
10.2 Recurrent Neural Networks
10.3 Bidirectional RNNs
10.4 Encoder-Decoder Sequence-to-Sequence Architectures
10.5 Deep Recurrent Networks
10.6 Recursive Neural Networks
10.7 The Challenge of Long-Term Dependencies
10.8 Echo State Networks
10.9 Leaky Units and Other Strategies for Multiple Time Scales
10.10 The Long Short-Term Memory and Other Gated RNNs
10.11 Optimization for Long-Term Dependencies
10.12 Explicit Memory
11 Practical Methodology
11.1 Performance Metrics
11.2 Default Baseline Models
11.3 Determining Whether to Gather More Data
11.4 Selecting Hyperparameters
11.5 Debugging Strategies
11.6 Example: Multi-Digit Number Recognition
12 Applications
12.1 Large-Scale Deep Learning
12.2 Computer Vision
12.3 Speech Recognition
12.4 Natural Language Processing
12.5 Other Applications
III Deep Learning Research
13 Linear Factor Models
13.1 Probabilistic PCA and Factor Analysis
13.2 Independent Component Analysis (ICA)
13.3 Slow Feature Analysis
13.4 Sparse Coding
13.5 Manifold Interpretation of PCA
14 Autoencoders
14.1 Undercomplete Autoencoders
14.2 Regularized Autoencoders
14.3 Representational Power, Layer Size and Depth
14.4 Stochastic Encoders and Decoders
14.5 Denoising Autoencoders
14.6 Learning Manifolds with Autoencoders
14.7 Contractive Autoencoders
14.8 Predictive Sparse Decomposition
14.9 Applications of Autoencoders
15 Representation Learning
15.1 Greedy Layer-Wise Unsupervised Pretraining
15.2 Transfer Learning and Domain Adaptation
15.3 Semi-Supervised Disentangling of Causal Factors
15.4 Distributed Representation
15.5 Exponential Gains from Depth
15.6 Providing Clues to Discover Underlying Causes
16 Structured Probabilistic Models for Deep Learning
16.1 The Challenge of Unstructured Modeling
16.2 Using Graphs to Describe Model Structure
16.3 Sampling from Graphical Models
16.4 Advantages of Structured Modeling
16.5 Learning about Dependencies
16.6 Inference and Approximate Inference
16.7 The Deep Learning Approach to Structured Probabilistic Models
17 Monte Carlo Methods
17.1 Sampling and Monte Carlo Methods
17.2 Importance Sampling
17.3 Markov Chain Monte Carlo Methods
17.4 Gibbs Sampling
17.5 The Challenge of Mixing between Separated Modes
18 Confronting the Partition Function
18.1 The Log-Likelihood Gradient
18.2 Stochastic Maximum Likelihood and Contrastive Divergence
18.3 Pseudolikelihood
18.4 Score Matching and Ratio Matching
18.5 Denoising Score Matching
18.6 Noise-Contrastive Estimation
18.7 Estimating the Partition Function
19 Approximate Inference
19.1 Inference as Optimization
19.2 Expectation Maximization
19.3 MAP Inference and Sparse Coding
19.4 Variational Inference and Learning
19.5 Learned Approximate Inference
20 Deep Generative Models
20.1 Boltzmann Machines
20.2 Restricted Boltzmann Machines
20.3 Deep Belief Networks
20.4 Deep Boltzmann Machines
20.5 Boltzmann Machines for Real-Valued Data
20.6 Convolutional Boltzmann Machines
20.7 Boltzmann Machines for Structured or Sequential Outputs
20.8 Other Boltzmann Machines
20.9 Back-Propagation through Random Operations
20.10 Directed Generative Nets
20.11 Drawing Samples from Autoencoders
20.12 Generative Stochastic Networks
20.13 Other Generation Schemes
20.14 Evaluating Generative Models
20.15 Conclusion
Bibliography
Index
Website
www.deeplearning
This book is accompanied by the above website. The website provides a variety of supplementary material, including exercises, lecture slides, corrections of mistakes, and other resources that should be useful to both readers and instructors.
Acknowledgments
This book would not have been possible without the contributions of many people.
We would like to thank those who commented on our proposal for the book and helped plan its contents and organization: Guillaume Alain, Kyunghyun Cho, Çağlar Gülçehre, David Krueger, Hugo Larochelle, Razvan Pascanu and Thomas Rohée.
We would like to thank the people who offered feedback on the content of the book itself. Some offered feedback on many chapters: Martín Abadi, Guillaume Alain, Ion Androutsopoulos, Fred Bertsch, Olexa Bilaniuk, Ufuk Can Biçici, Matko Bošnjak, John Boersma, Greg Brockman, Alexandre de Brébisson, Pierre Luc Carrier, Sarath Chandar, Pawel Chilinski, Mark Daoust, Oleg Dashevskii, Laurent Dinh, Stephan Dreseitl, Jim Fan, Miao Fan, Meire Fortunato, Frédéric Francis, Nando de Freitas, Çağlar Gülçehre, Jurgen Van Gael, Javier Alonso García, Jonathan Hunt, Gopi Jeyaram, Chingiz Kabytayev, Lukasz Kaiser, Varun Kanade, Asifullah Khan, Akiel Khan, John King, Diederik P. Kingma, Yann LeCun, Rudolf Mathey, Matías Mattamala, Abhinav Maurya, Kevin Murphy, Oleg Mürk, Roman Novak, Augustus Q. Odena, Simon Pavlik, Karl Pichotta, Eddie Pierce, Kari Pulli, Roussel Rahman, Tapani Raiko, Anurag Ranjan, Johannes Roith, Mihaela Rosca, Halis Sak, César Salgado, Grigory Sapunov, Yoshinori Sasaki, Mike Schuster, Julian Serban, Nir Shabat, Ken Shirriff, Andre Simpelo, Scott Stanley, David Sussillo, Ilya Sutskever, Carles Gelada Sáez, Graham Taylor, Valentin Tolmer, Massimiliano Tomassoli, An Tran, Shubhendu Trivedi, Alexey Umnov, Vincent Vanhoucke, Marco Visentini-Scarzanella, Martin Vita, David Warde-Farley, Dustin Webb, Kelvin Xu, Wei Xue, Ke Yang, Li Yao, Zygmunt Zając and Ozan Çağlayan.
We would also like to thank those who provided us with useful feedback on individual chapters:
• Notation: Zhang Yuanhang.
• Chapter 1, Introduction: Yusuf Akgul, Sebastien Bratieres, Samira Ebrahimi, Charlie Gorichanaz, Brendan Loudermilk, Eric Morris, Cosmin Parvulescu and Alfredo Solano.
• Chapter 2, Linear Algebra: Amjad Almahairi, Nikola Banić, Kevin Bennett, Philippe Castonguay, Oscar Chang, Eric Fosler-Lussier, Andrey Khalyavin, Sergey Oreshkov, István Petrás, Dennis Prangle, Thomas Rohée, Gitanjali Gulve Sehgal, Colby Toland, Alessandro Vitale and Bob Welland.
• Chapter 3, Probability and Information Theory: John Philip Anderson, Kai Arulkumaran, Vincent Dumoulin, Rui Fa, Stephan Gouws, Artem Oboturov, Antti Rasmus, Alexey Surkov and Volker Tresp.
• Chapter 4, Numerical Computation: Tran Lam An, Ian Fischer and Hu Yuhuang.
• Chapter 5, Machine Learning Basics: Dzmitry Bahdanau, Justin Domingue, Nikhil Garg, Makoto Otsuka, Bob Pepin, Philip Popien, Emmanuel Rayner, Peter Shepard, Kee-Bong Song, Zheng Sun and Andy Wu.
• Chapter 6, Deep Feedforward Networks: Uriel Berdugo, Fabrizio Bottarel, Elizabeth Burl, Ishan Durugkar, Jeff Hlywa, Jong Wook Kim, David Krueger and Aditya Kumar Praharaj.
• Chapter 7, Regularization for Deep Learning: Morten Kolbæk, Kshitij Lauria, Inkyu Lee, Sunil Mohan, Hai Phong Phan and Joshua Salisbury.
• Chapter 8, Optimization for Training Deep Models: Marcel Ackermann, Peter Armitage, Rowel Atienza, Andrew Brock, Tegan Maharaj, James Martens, Kashif Rasul, Klaus Strobl and Nicholas Turner.
• Chapter 9, Convolutional Networks: Martín Arjovsky, Eugene Brevdo, Konstantin Divilov, Eric Jensen, Mehdi Mirza, Alex Paino, Marjorie Sayer, Ryan Stout and Wentao Wu.
• Chapter 10, Sequence Modeling: Recurrent and Recursive Nets: Gökçen Eraslan, Steven Hickson, Razvan Pascanu, Lorenzo von Ritter, Rui Rodrigues, Dmitriy Serdyuk, Dongyu Shi and Kaiyu Yang.
• Chapter 11, Practical Methodology: Daniel Beckstein.
• Chapter 12, Applications: George Dahl, Vladimir Nekrasov and Ribana Roscher.
• Chapter 13, Linear Factor Models: Jayanth Koushik.
• Chapter 15, Representation Learning: Kunal Ghosh.
• Chapter 16, Structured Probabilistic Models for Deep Learning: Minh Lê and Anton Varfolom.
• Chapter 18, Confronting the Partition Function: Sam Bowman.
• Chapter 19, Approximate Inference: Yujia Bao.
• Chapter 20, Deep Generative Models: Nicolas Chapados, Daniel Galvez, Wenming Ma, Fady Medhat, Shakir Mohamed and Grégoire Montavon.
• Bibliography: Lukas Michelbacher and Leslie N. Smith.
We also want to thank those who allowed us to reproduce images, figures or data from their publications. We indicate their contributions in the figure captions throughout the text.
We would like to thank Lu Wang for writing pdf2htmlEX, which we used to make the web version of the book, and for offering support to improve the quality of the resulting HTML.
We would like to thank Ian's wife Daniela Flori Goodfellow for patiently supporting Ian during the writing of the book as well as for help with proofreading.
We would like to thank the Google Brain team for providing an intellectual environment where Ian could devote a tremendous amount of time to writing this book and receive feedback and guidance from colleagues. We would especially like to thank Ian's former manager, Greg Corrado, and his current manager, Samy Bengio, for their support of this project. Finally, we would like to thank Geoffrey Hinton for encouragement when writing was difficult.
Notation
This section provides a concise reference describing the notation used throughout this book. If you are unfamiliar with any of the corresponding mathematical concepts, we describe most of these ideas in chapters 2–4.
Numbers and Arrays
a          A scalar (integer or real)
a          A vector
A          A matrix
A          A tensor
I_n        Identity matrix with n rows and n columns
I          Identity matrix with dimensionality implied by context
e^(i)      Standard basis vector [0, ..., 0, 1, 0, ..., 0] with a 1 at position i
diag(a)    A square, diagonal matrix with diagonal entries given by a
a          A scalar random variable
a          A vector-valued random variable
A          A matrix-valued random variable
Sets and Graphs
A               A set
R               The set of real numbers
{0, 1}          The set containing 0 and 1
{0, 1, ..., n}  The set of all integers between 0 and n
[a, b]          The real interval including a and b
(a, b]          The real interval excluding a but including b
A \ B           Set subtraction, i.e., the set containing the elements of A that are not in B
G               A graph
Pa_G(x_i)       The parents of x_i in G
Indexing
a_i        Element i of vector a, with indexing starting at 1
a_{-i}     All elements of vector a except for element i
A_{i,j}    Element i, j of matrix A
A_{i,:}    Row i of matrix A
A_{:,i}    Column i of matrix A
A_{i,j,k}  Element (i, j, k) of a 3-D tensor A
A_{:,:,i}  2-D slice of a 3-D tensor
a_i        Element i of the random vector a
Linear Algebra Operations
A^T        Transpose of matrix A
A^+        Moore-Penrose pseudoinverse of A
A ⊙ B      Element-wise (Hadamard) product of A and B
det(A)     Determinant of A
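As an illustrative aside (not part of the book's text), these four operators map directly onto NumPy routines; NumPy is an assumed stand-in here, since the book prescribes no particular library:

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[5.0, 6.0], [7.0, 8.0]])

At = A.T                 # A^T, transpose
Ap = np.linalg.pinv(A)   # A^+, Moore-Penrose pseudoinverse
H = A * B                # A ⊙ B, element-wise (Hadamard) product
d = np.linalg.det(A)     # det(A)

# For an invertible A, the pseudoinverse coincides with the inverse,
# so A^+ A recovers the identity matrix.
assert np.allclose(Ap @ A, np.eye(2))
# det of a 2x2 matrix [[a, b], [c, d]] is ad - bc.
assert np.isclose(d, 1.0 * 4.0 - 2.0 * 3.0)
```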
Calculus
dy/dx                   Derivative of y with respect to x
∂y/∂x                   Partial derivative of y with respect to x
∇_x y                   Gradient of y with respect to x
∇_X y                   Matrix derivatives of y with respect to X
∇_X y                   Tensor containing derivatives of y with respect to X
∂f/∂x                   Jacobian matrix J ∈ R^(m×n) of f : R^n → R^m
∇_x^2 f(x) or H(f)(x)   The Hessian matrix of f at input point x
∫ f(x) dx               Definite integral over the entire domain of x
∫_S f(x) dx             Definite integral with respect to x over the set S
Probability and Information Theory
a ⊥ b                     The random variables a and b are independent
a ⊥ b | c                 They are conditionally independent given c
P(a)                      A probability distribution over a discrete variable
p(a)                      A probability distribution over a continuous variable, or over a variable whose type has not been specified
a ~ P                     Random variable a has distribution P
E_{x~P}[f(x)] or E f(x)   Expectation of f(x) with respect to P(x)
Var(f(x))                 Variance of f(x) under P(x)
Cov(f(x), g(x))           Covariance of f(x) and g(x) under P(x)
H(x)                      Shannon entropy of the random variable x
D_KL(P ‖ Q)               Kullback-Leibler divergence of P and Q
N(x; μ, Σ)                Gaussian distribution over x with mean μ and covariance Σ
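As a concrete illustration of this notation (using a hypothetical two-outcome distribution, not an example from the text), the expectation, variance, and Shannon entropy of a discrete P follow directly from their definitions:

```python
import math

# A hypothetical discrete distribution P over {0, 1} with P(a = 1) = 0.25.
p = {0: 0.75, 1: 0.25}

def f(x):
    # An arbitrary function of the random variable, chosen for illustration.
    return 2 * x + 1

# E_{x~P}[f(x)] = sum over x of P(x) f(x)
E = sum(px * f(x) for x, px in p.items())

# Var(f(x)) = E[(f(x) - E[f(x)])^2]
Var = sum(px * (f(x) - E) ** 2 for x, px in p.items())

# H(x) = -sum over x of P(x) log P(x)  (Shannon entropy, in nats)
H = -sum(px * math.log(px) for px in p.values())

print(E, Var, H)  # 1.5, 0.75, ~0.5623
```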
Functions
f : A → B       The function f with domain A and range B
f ∘ g           Composition of the functions f and g
f(x; θ)         A function of x parametrized by θ. (Sometimes we write f(x) and omit the argument θ to lighten notation.)
log x           Natural logarithm of x
σ(x)            Logistic sigmoid, 1 / (1 + exp(−x))
ζ(x)            Softplus, log(1 + exp(x))
||x||_p         L^p norm of x
||x||           L^2 norm of x
x^+             Positive part of x, i.e., max(0, x)
1_condition     is 1 if the condition is true, 0 otherwise
Sometimes we use a function f whose argument is a scalar but apply it to a vector, matrix, or tensor: f(x), f(X), or f(X). This denotes the application of f to the array element-wise. For example, if C = σ(X), then C_{i,j,k} = σ(X_{i,j,k}) for all valid values of i, j and k.
Datasets and Distributions
p_data          The data generating distribution
p̂_data          The empirical distribution defined by the training set
X               A set of training examples
x^(i)           The i-th example (input) from a dataset
y^(i) or y^(i)  The target associated with x^(i) for supervised learning
X               The m × n matrix with input example x^(i) in row X_{i,:}
Chapter 1
Introduction
Inventors have long dreamed of creating machines that think. This desire dates back to at least the time of ancient Greece. The mythical figures Pygmalion, Daedalus, and Hephaestus may all be interpreted as legendary inventors, and Galatea, Talos, and Pandora may all be regarded as artificial life (Ovid and Martin, 2004; Sparkes, 1996; Tandy, 1997).
When programmable computers were first conceived, people wondered whether such machines might become intelligent, over a hundred years before one was built (Lovelace, 1842). Today, artificial intelligence (AI) is a thriving field with many practical applications and active research topics. We look to intelligent software to automate routine labor, understand speech or images, make diagnoses in medicine and support basic scientific research.
In the early days of artificial intelligence, the field rapidly tackled and solved problems that are intellectually difficult for human beings but relatively straightforward for computers—problems that can be described by a list of formal, mathematical rules. The true challenge to artificial intelligence proved to be solving the tasks that are easy for people to perform but hard for people to describe formally—problems that we solve intuitively, that feel automatic, like recognizing spoken words or faces in images.
This book is about a solution to these more intuitive problems. This solution is to allow computers to learn from experience and understand the world in terms of a hierarchy of concepts, with each concept defined in terms of its relation to simpler concepts. By gathering knowledge from experience, this approach avoids the need for human operators to formally specify all of the knowledge that the computer needs. The hierarchy of concepts allows the computer to learn complicated concepts by building them out of simpler ones. If we draw a graph showing how these concepts are built on top of each other, the graph is deep, with many layers. For this reason, we call this approach to AI deep learning.
Many of the early successes of AI took place in relatively sterile and formal environments and did not require computers to have much knowledge about the world. For example, IBM's Deep Blue chess-playing system defeated world champion Garry Kasparov in 1997 (Hsu, 2002). Chess is of course a very simple world, containing only sixty-four locations and thirty-two pieces that can move in only rigidly circumscribed ways. Devising a successful chess strategy is a tremendous accomplishment, but the challenge is not due to the difficulty of describing the set of chess pieces and allowable moves to the computer. Chess can be completely described by a very brief list of completely formal rules, easily provided ahead of time by the programmer.
Ironically, abstract and formal tasks that are among the most difficult mental undertakings for a human being are among the easiest for a computer. Computers have long been able to defeat even the best human chess player, but are only recently matching some of the abilities of average human beings to recognize objects or speech. A person's everyday life requires an immense amount of knowledge about the world. Much of this knowledge is subjective and intuitive, and therefore difficult to articulate in a formal way. Computers need to capture this same knowledge in order to behave in an intelligent way. One of the key challenges in artificial intelligence is how to get this informal knowledge into a computer.
Several artificial intelligence projects have sought to hard-code knowledge about the world in formal languages. A computer can reason about statements in these formal languages automatically using logical inference rules. This is known as the knowledge base approach to artificial intelligence. None of these projects has led to a major success. One of the most famous such projects is Cyc (Lenat and Guha, 1989). Cyc is an inference engine and a database of statements in a language called CycL. These statements are entered by a staff of human supervisors. It is an unwieldy process. People struggle to devise formal rules with enough complexity to accurately describe the world. For example, Cyc failed to understand a story about a person named Fred shaving in the morning.