Lesson 2: Data Preprocessing Techniques
Outline
- Why preprocess the data?
- Data cleaning
- Data integration and transformation
- Data reduction
- Discretization and concept hierarchy generation
- Summary

Why Data Preprocessing?
Data in the real world is dirty:
- Incomplete: lacking attribute values, lacking certain attributes of interest, or containing only aggregate data, e.g., occupation = "".
- Noisy: containing errors or outliers, e.g., Salary = "-10".
- Inconsistent: containing discrepancies in codes or names, e.g., Age = "42" but Birthday = "03/07/1997"; a rating that was "1, 2, 3" is now "A, B, C"; discrepancies between duplicate records.

Why Is Data Dirty?
- Incomplete data comes from "n/a" values at collection time, from different considerations between the time the data was collected and the time it is analyzed, and from human, hardware, or software problems.
- Noisy data comes from the processes of data collection, entry, and transmission.
- Inconsistent data comes from different data sources and from functional dependency violations.

Why Is Data Preprocessing Important?
- No quality data, no quality mining results! Quality decisions must be based on quality data; e.g., duplicate or missing data may cause incorrect or even misleading statistics.
- A data warehouse needs consistent integration of quality data.
- "Data extraction, cleaning, and transformation comprises the majority of the work of building a data warehouse." (Bill Inmon)

Multi-Dimensional Measure of Data Quality
- A well-accepted multidimensional view: accuracy, completeness, consistency, timeliness, believability, value added, interpretability, accessibility.
- Broad categories: intrinsic, contextual, representational, and accessibility.

Major Tasks in Data Preprocessing
- Data cleaning: fill in missing values, smooth noisy data, identify or remove outliers, and resolve inconsistencies.
- Data integration: integration of multiple databases, data cubes, or files.
- Data transformation: normalization and aggregation.
- Data reduction: obtains a representation reduced in volume that produces the same or similar analytical results.
- Data discretization: part of data reduction, of particular importance for numerical data.

(Figure: forms of data preprocessing.)

Data Cleaning
- Importance: "Data cleaning is one of the three biggest problems in data warehousing" (Ralph Kimball); "Data cleaning is the number one problem in data warehousing" (DCI survey).
- Data cleaning tasks: fill in missing values; identify outliers and smooth out noisy data; correct inconsistent data; resolve redundancy caused by data integration.

Missing Data
- Data is not always available; e.g., many tuples have no recorded value for several attributes, such as customer income in sales data.
- Missing data may be due to equipment malfunction; deletion because a value was inconsistent with other recorded data; data not entered due to misunderstanding; certain data not being considered important at the time of entry; or failure to register the history or changes of the data.
- Missing data may need to be inferred.

How to Handle Missing Data?
- Ignore the tuple: usually done when the class label is missing (assuming the task is classification); not effective when the percentage of missing values per attribute varies considerably.
- Fill in the missing value manually: tedious, and perhaps infeasible.
- Fill it in automatically with: a global constant such as "unknown" (in effect a new class?!); the attribute mean; the attribute mean over all samples belonging to the same class (smarter); or the most probable value (inference-based, e.g., a Bayesian formula or a decision tree).

Noisy Data
- Noise: random error or variance in a measured variable.
- Incorrect attribute values may be due to faulty data collection instruments, data entry problems, data transmission problems, technology limitations, or inconsistency in naming conventions.
- Other data problems that require data cleaning: duplicate records, incomplete data, inconsistent data.

How to Handle Noisy Data?
- Binning: first sort the data and partition it into (equi-depth) bins, then smooth by bin means, bin medians, bin boundaries, etc.
- Clustering: detect and remove outliers.
- Combined computer and human inspection: detect suspicious values and check them by hand (e.g., deal with possible outliers).
- Regression: smooth by fitting the data to regression functions.

Simple Discretization Methods: Binning
- Equal-width (distance) partitioning divides the range into N intervals of equal size (a uniform grid). If A and B are the lowest and highest values of the attribute, the interval width is W = (B - A) / N. It is the most straightforward method, but outliers may dominate the presentation, and skewed data is not handled well.
- Equal-depth (frequency) partitioning divides the range into N intervals each containing approximately the same number of samples. It gives good data scaling, but managing categorical attributes can be tricky.
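The equal-width rule above (W = (B - A) / N) can be sketched in a few lines; the helper name is illustrative, not from the slides.

```python
# Equal-width partitioning: with lowest value A, highest value B and N
# intervals, each interval has width W = (B - A) / N, and a value v is
# assigned to bin floor((v - A) / W).

def equal_width_bins(values, n):
    """Return the 0-indexed equal-width bin of each value."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / n
    # Clamp so the maximum value falls in bin n - 1 rather than bin n.
    return [min(int((v - lo) / width), n - 1) for v in values]

prices = [4, 8, 9, 15, 21, 21, 24, 25, 26, 28, 29, 34]
print(equal_width_bins(prices, 3))  # [0, 0, 0, 1, 1, 1, 2, 2, 2, 2, 2, 2]
```

Here the width is (34 - 4) / 3 = 10; the outlier sensitivity noted above is easy to see, since a single very large price would stretch every bin.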

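Two of the automatic fill-in strategies listed earlier, the attribute mean and the class-conditional attribute mean, can be sketched on a toy list-of-dicts dataset; the field and function names are assumptions for illustration.

```python
from statistics import mean

def fill_with_mean(rows, attr):
    """Replace None in attr with the overall attribute mean."""
    m = mean(r[attr] for r in rows if r[attr] is not None)
    return [dict(r, **{attr: m}) if r[attr] is None else r for r in rows]

def fill_with_class_mean(rows, attr, label):
    """Smarter variant: use the mean of samples from the same class."""
    out = []
    for r in rows:
        if r[attr] is None:
            same = [s[attr] for s in rows
                    if s[label] == r[label] and s[attr] is not None]
            r = dict(r, **{attr: mean(same)})
        out.append(r)
    return out

data = [{"income": 30, "cls": "A"}, {"income": None, "cls": "A"},
        {"income": 50, "cls": "A"}, {"income": 90, "cls": "B"}]
print(fill_with_class_mean(data, "income", "cls")[1]["income"])  # 40
```

The class-conditional version ignores the class-B income of 90, so the imputed value (40, the mean of 30 and 50) stays closer to the tuple's own subpopulation.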
Binning Methods for Data Smoothing
Sorted data for price (in dollars): 4, 8, 9, 15, 21, 21, 24, 25, 26, 28, 29, 34
- Partition into (equi-depth) bins:
  - Bin 1: 4, 8, 9, 15
  - Bin 2: 21, 21, 24, 25
  - Bin 3: 26, 28, 29, 34
- Smoothing by bin means:
  - Bin 1: 9, 9, 9, 9
  - Bin 2: 23, 23, 23, 23
  - Bin 3: 29, 29, 29, 29
- Smoothing by bin boundaries:
  - Bin 1: 4, 4, 4, 15
  - Bin 2: 21, 21, 25, 25
  - Bin 3: 26, 26, 26, 34
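The smoothing steps in the price example above can be reproduced with a small sketch (helper names are illustrative): smoothing by bin means replaces every value with its bin's mean, and smoothing by bin boundaries snaps each value to the nearer of the bin's two end points.

```python
def smooth_by_means(bins):
    """Replace each value with its (rounded) bin mean."""
    return [[round(sum(b) / len(b))] * len(b) for b in bins]

def smooth_by_boundaries(bins):
    """Snap each value to the closer of its bin's min and max."""
    out = []
    for b in bins:
        lo, hi = b[0], b[-1]  # bins hold sorted values
        out.append([lo if v - lo <= hi - v else hi for v in b])
    return out

bins = [[4, 8, 9, 15], [21, 21, 24, 25], [26, 28, 29, 34]]
print(smooth_by_means(bins))       # [[9, 9, 9, 9], [23, 23, 23, 23], [29, 29, 29, 29]]
print(smooth_by_boundaries(bins))  # [[4, 4, 4, 15], [21, 21, 25, 25], [26, 26, 26, 34]]
```

Both outputs match the worked numbers in the example.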

Cluster Analysis
(Figure: data points grouped into clusters; values falling outside all clusters are treated as outliers.)

Regression
(Figure: data fitted to the regression line y = x + 1; an observed point (X1, Y1) is smoothed to (X1, Y1') on the line.)

Data Integration
- Data integration: combines data from multiple sources into a coherent store.
- Schema integration: integrate metadata from different sources. Entity identification problem: identify real-world entities across multiple data sources, e.g., A.cust-id and B.cust-#.
- Detecting and resolving data value conflicts: for the same real-world entity, attribute values from different sources differ; possible reasons include different representations and different scales, e.g., metric vs. British units.

Handling Redundancy in Data Integration
- Redundant data occur often when integrating multiple databases: the same attribute may have different names in different databases, and one attribute may be a "derived" attribute in another table, e.g., annual revenue.
- Redundant data may be detected by correlation analysis.
- Careful integration of data from multiple sources may help reduce or avoid redundancies and inconsistencies and improve mining speed and quality.

Data Transformation
- Smoothing: remove noise from the data.
- Aggregation: summarization, data cube construction.
- Generalization: concept hierarchy climbing.
- Normalization: scale values to fall within a small, specified range: min-max normalization, z-score normalization, or normalization by decimal scaling.
- Attribute/feature construction: new attributes constructed from the given ones.

Data Transformation: Normalization
- Min-max normalization: v' = (v - min_A) / (max_A - min_A) * (new_max_A - new_min_A) + new_min_A
- Z-score normalization: v' = (v - mean_A) / stand_dev_A
- Normalization by decimal scaling: v' = v / 10^j, where j is the smallest integer such that max(|v'|) < 1

Data Reduction Strategies
- A data warehouse may store terabytes of data, so complex data analysis or mining may take a very long time to run on the complete data set.
- Data reduction: obtain a reduced representation of the data set that is much smaller in volume yet produces the same (or almost the same) analytical results.
- Strategies: data cube aggregation; dimensionality reduction (remove unimportant attributes); data compression; numerosity reduction (fit data into models); discretization and concept hierarchy generation.

Data Cube Aggregation
- The lowest level of a data cube holds the aggregated data for an individual entity of interest, e.g., a customer in a phone-calling data warehouse.
- Multiple levels of aggregation in data cubes further reduce the size of the data to deal with.
- Reference appropriate levels: use the smallest representation that is sufficient to solve the task.
- Queries about aggregated information should be answered using the data cube when possible.

Dimensionality Reduction
- Feature selection (i.e., attribute subset selection): select a minimum set of features such that the probability distribution of the different classes, given the values of those features, is as close as possible to the original distribution given the values of all features. A side effect is fewer patterns in the mining result, which are easier to understand.
- Heuristic methods (the number of choices is exponential): step-wise forward selection; step-wise backward elimination; combined forward selection and backward elimination; decision-tree induction.

Example of Decision Tree Induction
Initial attribute set: {A1, A2, A3, A4, A5, A6}. (Figure: a decision tree that tests A4 at the root and A1 and A6 below it, with Class 1/Class 2 leaves.) The reduced attribute set is {A1, A4, A6}.
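Step-wise forward selection, listed among the heuristics above, is a greedy loop: start from the empty set and repeatedly add the feature that helps most. The `score` function below is a toy stand-in for whatever model-quality measure is actually used; all names are illustrative.

```python
def forward_select(all_feats, score, k):
    """Greedy step-wise forward selection of k features."""
    chosen = []
    while len(chosen) < k:
        # Add the single remaining feature that improves the score most.
        best = max((f for f in all_feats if f not in chosen),
                   key=lambda f: score(chosen + [f]))
        chosen.append(best)
    return chosen

# Toy additive score; a real use would evaluate a classifier instead.
weights = {"A1": 3, "A2": 1, "A3": 0, "A4": 5, "A5": 0, "A6": 2}
toy_score = lambda feats: sum(weights[f] for f in feats)
print(forward_select(list(weights), toy_score, 3))  # ['A4', 'A1', 'A6']
```

With these toy weights the result happens to mirror the decision-tree example above, which also kept {A1, A4, A6}.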

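The three normalization formulas above can be sketched as follows; the function names are illustrative, and `pstdev` (population standard deviation) stands in for stand_dev_A.

```python
from statistics import mean, pstdev

def min_max(values, new_min=0.0, new_max=1.0):
    """Min-max: map [min, max] linearly onto [new_min, new_max]."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) * (new_max - new_min) + new_min
            for v in values]

def z_score(values):
    """Z-score: center on the mean, scale by the standard deviation."""
    m, s = mean(values), pstdev(values)
    return [(v - m) / s for v in values]

def decimal_scaling(values):
    """Divide by the smallest power of 10 that brings all |v'| below 1."""
    j = 0
    while max(abs(v) for v in values) / 10 ** j >= 1:
        j += 1
    return [v / 10 ** j for v in values]

print(min_max([20, 40, 60]))         # [0.0, 0.5, 1.0]
print(decimal_scaling([-986, 917]))  # [-0.986, 0.917]
```

For [-986, 917] the largest absolute value is 986, so j = 3 and every value is divided by 1,000, as in the decimal-scaling definition above.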
Heuristic Feature Selection Methods
- There are 2^d possible sub-features of d features.
- Several heuristic feature selection methods exist: best single features under the feature independence assumption (chosen by significance tests); best step-wise feature selection (the best single feature is picked first, then the next best feature conditioned on the first, and so on); step-wise feature elimination (repeatedly eliminate the worst feature); best combined feature selection and elimination; optimal branch and bound (use feature elimination and backtracking).

Data Compression
- String compression: there are extensive theories and well-tuned algorithms; typically lossless, but only limited manipulation is possible without expansion.
- Audio/video compression: typically lossy compression, with progressive refinement; sometimes small fragments of the signal can be reconstructed without reconstructing the whole.
- Time sequences are not audio: they are typically short and vary slowly with time.
- (Figure: lossless compression restores the original data exactly; lossy compression yields only an approximation.)

Wavelet Transformation
- Discrete wavelet transform (DWT): linear signal processing, multiresolution analysis.
- Compressed approximation: store only a small fraction of the strongest wavelet coefficients.
- Similar to the discrete Fourier transform (DFT), but gives better lossy compression and is localized in space.
- Method: the length L must be an integer power of 2 (pad with 0s when necessary). Each transform has two functions, smoothing and difference, which apply to pairs of data and yield two data sets of length L/2; the two functions are applied recursively until the desired length is reached.
- Example wavelet families: Haar-2, Daubechies-4.

Principal Component Analysis
- Given N data vectors in k dimensions, find c <= k orthogonal vectors that can best be used to represent the data.
- The original data set is reduced to one consisting of N data vectors on c principal components (reduced dimensions); each data vector is a linear combination of the c principal component vectors.
- Works for numeric data only; used when the number of dimensions is large.
- (Figure: principal axes Y1 and Y2 of data spread in the X1-X2 plane.)

Numerosity Reduction
- Parametric methods: assume the data fits some model, estimate the model parameters, store only the parameters, and discard the data (except possible outliers). Log-linear models obtain the value at a point in m-D space as a product over appropriate marginal subspaces.
- Non-parametric methods: do not assume models. The major families are histograms, clustering, and sampling.

Regression and Log-Linear Models
- Linear regression: data are modeled to fit a straight line, often using the least-squares method to fit the line.
- Multiple regression: allows a response variable Y to be modeled as a linear function of a multidimensional feature vector.
- Log-linear model: approximates discrete multidimensional probability distributions.

Regression Analysis and Log-Linear Models
- Linear regression: Y = α + β X. The two parameters α and β specify the line and are estimated from the data at hand, by applying the least-squares criterion to the known values Y1, Y2, ..., X1, X2, ....
- Multiple regression: Y = b0 + b1 X1 + b2 X2. Many nonlinear functions can be transformed into this form.
- Log-linear models: the multi-way table of joint probabilities is approximated by a product of lower-order tables, e.g., p(a, b, c, d) = αab βac χad δbcd.

Histograms
- A popular data reduction technique: divide the data into buckets and store the average (or sum) for each bucket.
- Histograms can be constructed optimally in one dimension using dynamic programming; the problem is related to quantization.

Clustering
- Partition the data set into clusters, and store only the cluster representations.
- Can be very effective if the data is clustered, but not if the data is "smeared".
- Hierarchical clustering is possible, with the result stored in multi-dimensional index tree structures.
- There are many choices of clustering definitions and clustering algorithms; these are detailed further later.

Sampling
- Allows a mining algorithm to run with complexity that is potentially sub-linear in the size of the data: choose a representative subset of the data.
- Simple random sampling may perform very poorly in the presence of skew, which motivates adaptive sampling methods.
- Stratified sampling: approximate the percentage of each class (or subpopulation of interest) in the overall database; used in conjunction with skewed data.
- Note that sampling may not reduce database I/Os (data is read a page at a time).
- (Figures: raw data reduced by SRSWOR, simple random sampling without replacement; by SRSWR, with replacement; and by cluster/stratified sampling.)

Hierarchical Reduction
- Use a multi-resolution structure with different degrees of reduction.
- Hierarchical clustering is often performed but tends to define partitions of data sets rather than "clusters".
- Parametric methods are usually not amenable to hierarchical representation.
- Hierarchical aggregation: an index tree hierarchically divides a data set into partitions by the value ranges of some attributes; each partition can be considered a bucket, so an index tree with aggregates stored at each node is a hierarchical histogram.

Discretization
- Three types of attributes: nominal (values from an unordered set), ordinal (values from an ordered set), and continuous (real numbers).
- Discretization divides the range of a continuous attribute into intervals. Some classification algorithms only accept categorical attributes; discretization also reduces data size and prepares the data for further analysis.
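The sampling schemes described above (SRSWOR, SRSWR, and stratified sampling) can be sketched as follows; the helper names are illustrative, and the draws are seeded for reproducibility.

```python
import random

def srswor(data, n, seed=0):
    """Simple random sample without replacement."""
    return random.Random(seed).sample(data, n)

def srswr(data, n, seed=0):
    """Simple random sample with replacement."""
    rng = random.Random(seed)
    return [rng.choice(data) for _ in range(n)]

def stratified(rows, label, frac, seed=0):
    """Sample each class separately to preserve class proportions."""
    rng = random.Random(seed)
    out = []
    for stratum in {r[label] for r in rows}:
        group = [r for r in rows if r[label] == stratum]
        out += rng.sample(group, max(1, int(len(group) * frac)))
    return out

rows = [{"cls": "A"}] * 8 + [{"cls": "B"}] * 2
sample = stratified(rows, "cls", 0.5)
print(len(sample))  # 5  (4 from class A, 1 from class B)
```

On this skewed toy data a plain 50% SRS could easily miss class B entirely; the stratified version keeps roughly the original 8:2 proportion, as the slide recommends for skewed data.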

Discretization and Concept Hierarchy
- Discretization: reduce the number of values for a given continuous attribute by dividing the range of the attribute into intervals. Interval labels can then be used to replace actual data values.

- Concept hierarchies: reduce the data by collecting and replacing low-level concepts (such as numeric values for the attribute age) with higher-level concepts (such as young, middle-aged, or senior).

Discretization and Concept Hierarchy Generation for Numeric Data
- Binning (see the earlier sections)
- Histogram analysis (see the earlier sections)
- Clustering analysis (see the earlier sections)
- Entropy-based discretization
- Segmentation by natural partitioning

Entropy-Based Discretization
- Given a set of samples S, if S is partitioned into two intervals S1 and S2 using boundary T, the entropy after partitioning is E(S, T) = (|S1| / |S|) Ent(S1) + (|S2| / |S|) Ent(S2).
- The boundary that minimizes the entropy function over all possible boundaries is selected as a binary discretization.
- The process is applied recursively to the partitions obtained, until some stopping criterion is met.
- Experiments show that it may reduce data size and improve classification accuracy.

Segmentation by Natural Partitioning
A simple 3-4-5 rule can be used to segment numeric data into relatively uniform, "natural" intervals:
- If an interval covers 3, 6, 7, or 9 distinct values at the most significant digit, partition the range into 3 equi-width intervals.
- If it covers 2, 4, or 8 distinct values at the most significant digit, partition the range into 4 intervals.
- If it covers 1, 5, or 10 distinct values at the most significant digit, partition the range into 5 intervals.

Example of the 3-4-5 Rule (attribute: profit)
- Step 1: Min = -$351, Low (the 5th percentile) = -$159, High (the 95th percentile) = $1,838, Max = $4,700.
- Step 2: the most significant digit is at the $1,000 position, so Low is rounded to -$1,000 and High to $2,000.
- Step 3: the interval (-$1,000 to $2,000) covers 3 distinct values at the most significant digit, so it is split into 3 equi-width intervals: (-$1,000 to $0), ($0 to $1,000), and ($1,000 to $2,000).
- Step 4: adjusting for the actual Min and Max yields the top-level intervals (-$400 to $0), ($0 to $1,000), ($1,000 to $2,000), and ($2,000 to $5,000), each of which is further subdivided, e.g., ($0 to $1,000) into ($0 to $200), ($200 to $400), ($400 to $600), ($600 to $800), and ($800 to $1,000).

Concept Hierarchy Generation for Categorical Data
- Specification of a partial ordering of attributes explicitly at the schema level by users or experts: street < city < state < country.
- Specification of a portion of a hierarchy by explicit data grouping: {Urbana, Champaign, Chicago} < Illinois.
- Specification of a set of attributes: the system automatically generates the partial ordering by analyzing the number of distinct values, e.g., street < city < state < country.
- Specification of only a partial set of attributes.
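The entropy-based split above can be sketched directly from the formula E(S, T) = (|S1| / |S|) Ent(S1) + (|S2| / |S|) Ent(S2); only a single binary split is shown (the recursive application and the stopping criterion are omitted), and the helper names are illustrative.

```python
from math import log2

def entropy(labels):
    """Ent of a class-label list: -sum(p * log2(p)) over the classes."""
    n = len(labels)
    return -sum((c / n) * log2(c / n)
                for c in (labels.count(l) for l in set(labels)))

def best_split(samples):
    """samples: sorted (value, class_label) pairs; return boundary T
    minimizing the weighted entropy E(S, T)."""
    n = len(samples)
    best_t, best_e = None, float("inf")
    for i in range(1, n):
        t = (samples[i - 1][0] + samples[i][0]) / 2  # candidate boundary
        left = [c for v, c in samples if v <= t]
        right = [c for v, c in samples if v > t]
        e = (len(left) / n) * entropy(left) + (len(right) / n) * entropy(right)
        if e < best_e:
            best_t, best_e = t, e
    return best_t

data = [(1, "a"), (2, "a"), (3, "a"), (10, "b"), (11, "b")]
print(best_split(data))  # 6.5
```

The boundary 6.5 separates the two classes perfectly, giving E(S, T) = 0, the minimum over all candidate boundaries.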
