
A New Video Denoising Method using Texture Metric and Adaptive Structure Variance

Yiwei Zhang¹, Ge Li*², Xiaoqiang Guo³, Wenmin Wang⁴, Ronggang Wang⁵
¹ ² ⁴ ⁵ School of Electronic and Computer Engineering, Peking University Shenzhen Graduate School, Lishui Road 2199, Nanshan District, Shenzhen, Guangdong Province, China
³ Academy of Broadcasting Science, Fuxingmen Outer Street 2, Xicheng District, Beijing, China

Abstract: In this paper, an innovative method is proposed for video denoising. The method consists of two major procedures. First, a new adaptive superpixel texture metric is proposed; this texture metric is calculated so that it relates to the different parts of a video stream. Then a new adaptive structure variance is estimated by adopting fine and coarse structures. Finally, a noise filter based on the estimated weights of the different structures in the video stream is applied. By comparison, the proposed method outperforms traditional state-of-the-art methods, especially in block artifact reduction.

Index Terms: video denoising, preprocessing, texture metric, structure variance, superpixel

I. INTRODUCTION

Denoising is a well explored topic in the field of image and video pre-processing. It is the foundation for plenty of applications, such as object detection, behavior analysis, video coding and computer vision. Over the past few decades, the performance of image denoising has been efficiently improved by employing much more elaborate models of natural images. By now, many good denoising algorithms [1], [2], [3], [4], [5], [6] have been proposed, and [7] presents a comprehensive comparison of them. Even so, the original purpose of denoising is to remove unexpected noise from a corrupted image or video, yet the artifacts caused by these algorithms have a severe effect on the quality of the image or video. Thus a good denoising method should introduce as few artifacts as possible.

So far, BM3D [4] and BM4D [5] are two of the state-of-the-art methods for image and video denoising. Grouping similar 2-D blocks into a 3-D group and performing a transformation are the major contributions of BM3D. The method adopts a collaborative filter to perform image reconstruction, which retains better details. Deriving from BM3D, BM4D groups similar 3-D spatiotemporal volumes into a 4-D group.

This project was supported by the Shenzhen Peacock Plan, the Science and Technology Planning Project of Guangdong Province (No. 2014B ), and the 863 Project under Grant No. 2015AA015905.

Those two methods perform well in noise reduction. However, the block artifacts introduced by BM3D and BM4D still need to be improved, as shown in Fig. 1(b). In this paper, an innovative method is proposed which reduces noise well and, in the meantime, shows fewer blocking artifacts.

Fig. 1. (a) The first frame of Crowded3. (b) Denoising result of BM4D. (c) Denoising result of our method.

To achieve better denoising performance, the approach is composed of two stages: obtainment of the adaptive superpixel texture metric, and combination of fine and coarse structure. First, we utilize superpixels and the Singular Value Decomposition (SVD) to obtain a texture metric ρ for each path of each frame. By grouping all of the path metrics ρ, we obtain a brand new texture metric P. P is also utilized to estimate the proportion of fine structure and coarse structure in the video stream. Second, a new adaptive structure variance is obtained by adopting fine and coarse structure based on P, which fits well for different scenarios. Finally, the denoised video stream is obtained by applying a noise filter based on the estimated weights of the different structures.

Contribution: two major procedures are proposed in this paper. First, an innovative adaptive texture metric based on superpixels is introduced, which is related to the different parts of a video stream. Second, a new adaptive structure variance is obtained by utilizing fine and coarse structures to perform filtering. By comparison, our proposed method outperforms traditional state-of-the-art methods, especially in block artifact reduction, as shown in Fig. 1(c).

The remainder of this paper is organized as follows. Section II gives the details of our method. Section III gives the experimental results of our approach and comparisons with two state-of-the-art algorithms. Finally, the paper is concluded in Section IV.

II. THE PROPOSED METHOD

The aim of the proposed method is to remove noise from a contaminated video, retain much more texture detail, and produce fewer blocking artifacts. In the literature, plenty of noise models have been applied in video or image processing. In this study, we adopt an additive white Gaussian noise (AWGN) model. Consider an observed video stream as a noisy image sequence z over X × T, defined as

z(x, t) = y(x, t) + η(x, t),   x ∈ X, t ∈ T   (1)

where y(·,·) is a matrix representing the original (unknown) video, η(·,·) ~ N(0, σ²) is i.i.d. AWGN, and (x, t) is the 3-D spatiotemporal coordinate in gray space or the 4-D spatiotemporal coordinate in color space. For simplicity, only the gray space is studied, thus X ⊂ Z², T ⊂ Z. To obtain y(·,·), we propose a method designed as

y(x, t) = P ∘ y_f(x, t) + (1 − P) ∘ y_c(x, t)   (2)

where ∘ denotes the element-wise product, P is a texture metric matrix containing the overall characteristics of the superpixels of the video stream, derived in Section II-A, and y_f(·,·) and y_c(·,·) are the outputs of noise filters with structure variance σ_f² and σ_c² respectively, as shown in Section II-B.
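As a concrete illustration of (2), the short sketch below blends two pre-filtered frames with a per-pixel weight map. The function name and the assumption that P has already been scaled to [0, 1] are ours, not the paper's.

import numpy as np

def combine_structures(y_f: np.ndarray, y_c: np.ndarray, P: np.ndarray) -> np.ndarray:
    """Element-wise blend of fine- and coarse-structure outputs, as in eq. (2).

    y_f, y_c : the same frame filtered with the fine / coarse structure variance.
    P        : per-pixel texture metric map (assumed to lie in [0, 1]).
    """
    P = np.clip(P, 0.0, 1.0)            # guard: keep the weight map in a valid range
    return P * y_f + (1.0 - P) * y_c    # y = P o y_f + (1 - P) o y_c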

A. Texture Metric Based on Superpixel

In video denoising, some texture details may be smoothed away by the filtering, causing unwanted artifacts. To overcome this defect, we first adopt the metric P to estimate the video texture. Inspired by [8], P is calculated as follows.

1) Calculation of Orientation Gradient: When the gradients of neighbouring pixels are isotropic, the local texture is smooth; when the gradients are anisotropic, the texture is sharp. We therefore adopt the gradient operators in (3) and (4), a central-difference pair and the normalized Sobel pair, as the basic operators for texture estimation:

D_h = (1/2) [ 0 0 0 ; −1 0 1 ; 0 0 0 ],   D_v = (1/2) [ 0 −1 0 ; 0 0 0 ; 0 1 0 ]   (3)

D_h = (1/8) [ −1 0 1 ; −2 0 2 ; −1 0 1 ],   D_v = (1/8) [ −1 −2 −1 ; 0 0 0 ; 1 2 1 ]   (4)

where D_h and D_v are the horizontal and vertical filters respectively. For the i-th frame I_i of a video, we obtain its gradients as

G_{i,h}, G_{i,v} = I_i ∗ D_h, I_i ∗ D_v,   1 ≤ i ≤ N_f   (5)

where G_{i,h} and G_{i,v} are the horizontal and vertical gradients of I_i respectively, and N_f is the number of video frames.
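A minimal sketch of the gradient step (5), assuming the normalized Sobel pair in (4); scipy's 2-D convolution is used, and the function and constant names are illustrative only.

import numpy as np
from scipy.ndimage import convolve

# Normalized Sobel operators, as in eq. (4).
D_H = np.array([[-1, 0, 1],
                [-2, 0, 2],
                [-1, 0, 1]], dtype=float) / 8.0
D_V = D_H.T  # the vertical operator is the transpose of the horizontal one

def frame_gradients(frame: np.ndarray):
    """Return the horizontal and vertical gradients (G_h, G_v) of one frame, eq. (5)."""
    g_h = convolve(frame.astype(float), D_H, mode='reflect')
    g_v = convolve(frame.astype(float), D_V, mode='reflect')
    return g_h, g_v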

2) Superpixel Segmentation: There exist many algorithms to segment an image. In this paper, we use SLIC [9] to split each frame into many superpixel paths. For the frame I_i, we divide it into N_i paths. Then we obtain the superpixel label set of I_i, defined as

L_i = {1, ..., N_i}   (6)

and the flag index of each superpixel path is defined as

F_{i,k} = k,   k ∈ L_i   (7)

where S_{i,k} represents the k-th superpixel path of I_i.

3) Label of Superpixel Gradient Path: Here we define G_{i,k,h} and G_{i,k,v} as the horizontal and vertical gradients of S_{i,k} respectively, as shown in equation (8):

G_{i,k,h}, G_{i,k,v} = G_{i,h}(x, y), G_{i,v}(x, y),   (x, y) ∈ X_{i,k}   (8)

where X_{i,k} denotes the set of coordinates of S_{i,k}. Then the gradient G_{i,k} of S_{i,k} is obtained by joining G_{i,k,h} and G_{i,k,v}:

G_{i,k} = [ G_{i,k,h}(m)  G_{i,k,v}(m) ],   m ∈ {1, ..., ω_{i,k}}   (9)

where ω_{i,k} represents the scale (number of pixels) of S_{i,k}, so that G_{i,k} is a ω_{i,k} × 2 matrix, and G_{i,k,h}(m), G_{i,k,v}(m) denote the gradients of S_{i,k} at the coordinate (x_m, y_m).
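The sketch below groups the per-pixel gradients of one frame into the ω_{i,k} × 2 matrices of (9), using scikit-image's SLIC implementation [9] for the segmentation step; the helper name, the segment count and the compactness value are assumptions of this sketch.

import numpy as np
from skimage.segmentation import slic

def superpixel_gradient_paths(frame, g_h, g_v, n_segments=300):
    """Split a gray frame into superpixel paths and collect G_{i,k}, eqs. (6)-(9)."""
    # channel_axis=None marks the input as single-channel (scikit-image >= 0.19).
    labels = slic(frame, n_segments=n_segments, compactness=0.1, channel_axis=None)
    paths = {}
    for k in np.unique(labels):
        mask = labels == k                          # coordinate set X_{i,k} of path k
        # omega_{i,k} x 2 matrix: one row [G_h(m), G_v(m)] per pixel of the path
        paths[k] = np.stack([g_h[mask], g_v[mask]], axis=1)
    return labels, paths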

4) Adaptive Superpixel SVD: We define a correlative covariance matrix C as

C = G_{i,k}^T G_{i,k}   (10)

For simplicity, we use G in place of G_{i,k}. After the SVD of G, G is represented as

G = U [ s_1 0 ; 0 s_2 ] V^T   (11)

U = [ u_{n,1}  u_{n,2} ],   V = [ v_11 v_12 ; v_21 v_22 ]   (12)

where U and V are both orthonormal matrices, defined in equation (12), and s_1 and s_2 are the principal singular values of the matrix G. Combining (11) and (12), we obtain

C = [ v_11² s_1² + v_21² s_2²   v_11 v_21 (s_1² − s_2²) ; v_11 v_21 (s_1² − s_2²)   v_21² s_1² + v_11² s_2² ]   (13)

However, in practice G is interfered with by noise, thus we use Ĝ to represent the real gradient of the superpixel-noise path and Ĉ to represent its correlative covariance. As a result, equations (11) and (13) are modified into (14) and (15):

Ĝ = Û [ ŝ_1 0 ; 0 ŝ_2 ] V̂^T   (14)

Ĉ = [ v̂_11² ŝ_1² + v̂_21² ŝ_2²   v̂_11 v̂_21 (ŝ_1² − ŝ_2²) ; v̂_11 v̂_21 (ŝ_1² − ŝ_2²)   v̂_21² ŝ_1² + v̂_11² ŝ_2² ]   (15)

Here we set the gradient of the superpixel-noise path to

Ĝ = G + G_η   (16)

so that

Ĉ = Ĝ^T Ĝ = G^T G + G^T G_η + G_η^T G + G_η^T G_η   (17)

Since C is unknown, we use E(Ĉ) to estimate E(C):

E(Ĉ) = E(G^T G + G^T G_η + G_η^T G + G_η^T G_η)   (18)

Assuming the noise is AWGN, G and G_η satisfy

E(G^T G_η) = E(G_η^T G) = 0,   E(G_η^T G_η) = ξ ω_{i,k} σ² I   (19)

where ξ = 1/2 when using operator (3) and ξ = 3/16 when using operator (4). Thus, the relationship between the original and the realistic singular values is obtained as

ŝ_1² − ŝ_2² = α (s_1² − s_2²),   ŝ_1² + ŝ_2² = β (s_1² + s_2²) + ξ γ ω_{i,k} σ²   (20)

α = (v_11 v_21) / (v̂_11 v̂_21),   β = (v_11² + v_21²) / (v̂_11² + v̂_21²),   γ = 2 / (v̂_11² + v̂_21²)   (21)

Inspired by [10], [11], we adopt

ρ̂ = (ŝ_1² − ŝ_2²) / (ŝ_1² + ŝ_2²)   (22)

to characterize the noise-texture of the superpixel path. Since α, β, γ, ξ and ω_{i,k} are constants in a fixed path, ρ̂ only changes with s_1, s_2 and σ². Now, supposing σ² is constant, ρ̂ will be larger when there is a prominent texture (s_1 >> s_2) in the path. As a result, we define

ρ = (s_1² − s_2²) / (s_1² + s_2²)   (23)

as a texture metric for the superpixel path.
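A small sketch of this texture measure: the singular values of each ω_{i,k} × 2 gradient matrix are obtained with numpy's SVD and turned into ρ = (s_1² − s_2²)/(s_1² + s_2²) as in (22)-(23); the epsilon guard is an implementation detail of the sketch, not part of the paper.

import numpy as np

def path_texture_metric(G_path: np.ndarray, eps: float = 1e-12) -> float:
    """rho for one superpixel path, eqs. (22)-(23).

    G_path : omega x 2 matrix of [horizontal, vertical] gradients of the path.
    Values close to 1 indicate a strongly oriented (fine) texture.
    """
    s = np.linalg.svd(G_path, compute_uv=False)        # singular values, s1 >= s2
    s1, s2 = s[0], (s[1] if s.size > 1 else 0.0)
    return float((s1 ** 2 - s2 ** 2) / (s1 ** 2 + s2 ** 2 + eps))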

5) Obtainment of Texture Metric: Considering that the texture varies with the superpixel paths, we use ρ_{i,k} to denote the texture metric of S_{i,k}. More explicitly, we introduce the superpixel texture metric path P̂_{i,k} corresponding to S_{i,k}, defined as

P̂_{i,k} = { (x, y) | V(x, y) = ρ_{i,k}, (x, y) ∈ X_{i,k} }   (24)

where V(x, y) denotes the value of P̂_{i,k} at the coordinate (x, y). Then P is obtained by aggregating all P̂_{i,k}, described as

P = { P̂_{i,k} | i ∈ N_f, k ∈ L_i }   (25)
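As an illustration of (24)-(25), the sketch below writes each path's ρ value into a per-pixel map, which later serves as the blending weight P; it reuses the hypothetical labels and per-path structures from the earlier sketches.

import numpy as np

def build_texture_map(labels: np.ndarray, rho_per_path: dict) -> np.ndarray:
    """Assemble the texture metric map P of one frame, eqs. (24)-(25).

    labels       : superpixel label map of the frame (e.g. from SLIC).
    rho_per_path : mapping {path label k: rho_{i,k}}.
    Every pixel of path k receives the value rho_{i,k}.
    """
    P = np.zeros(labels.shape, dtype=float)
    for k, rho in rho_per_path.items():
        P[labels == k] = rho
    return P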

B. Combination of Fine and Coarse Structure

In order to reconstruct the video stream with good denoising performance and fewer block artifacts, we adopt a new adaptive structure variance. This variance is estimated by adopting fine and coarse structures. Then a noise filter based on the estimated weights of the different structures is applied. The details are described below.

1) Obtaining Structure Variance: First, we use the calculated ρ_{i,k} as a test statistic to decide whether a superpixel path has a stronger texture. By introducing the eigenvalues and condition numbers of random matrices [12], we obtain the probability density f_p(ρ_{i,k}) of ρ_{i,k}, described as

f_p(ρ_{i,k}) = (ω_{i,k} − 1) ρ_{i,k} (1 − ρ_{i,k}²)^{(ω_{i,k} − 3)/2}   (26)

Thus we obtain the expected value of ρ_{i,k}, described in equation (27); Fig. 2 gives a more intuitive description:

E(ρ_{i,k}) = π (ω_{i,k} − 1)!! / (2 ω_{i,k}!!)   if ω_{i,k} is even,
E(ρ_{i,k}) = (ω_{i,k} − 1)!! / ω_{i,k}!!   if ω_{i,k} is odd.   (27)

Fig. 2. Expected value of ρ_{i,k}.

Then we set a threshold τ equal to E(ρ_{i,k}), and mark S_{i,k} as a fine structure path if ρ_{i,k} > τ and as a coarse structure path if ρ_{i,k} < τ. Finally, the variances of all S_{i,k} are calculated and labelled as σ²_{f,i,k} for the paths corresponding to fine structure and σ²_{c,i,k} for the paths corresponding to coarse structure.
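To make the thresholding step concrete, the sketch below evaluates E(ρ_{i,k}) and labels a path as fine or coarse structure. The closed form of (27) is computed through log-gamma functions, an equivalent of the double-factorial expression that avoids overflow for large superpixels; the function names are ours, not the paper's.

import numpy as np
from scipy.special import gammaln

def expected_rho(omega: int) -> float:
    """E(rho_{i,k}) for a path of omega pixels, eq. (27).

    Uses E(rho) = (omega - 1) * sqrt(pi)/4 * Gamma((omega-1)/2) / Gamma((omega+2)/2),
    evaluated in log-space so that large paths do not overflow.
    """
    return (omega - 1) * np.sqrt(np.pi) / 4.0 * np.exp(
        gammaln((omega - 1) / 2.0) - gammaln((omega + 2) / 2.0))

def classify_path(rho: float, omega: int) -> str:
    """Mark a superpixel path as fine or coarse structure with tau = E(rho)."""
    return 'fine' if rho > expected_rho(omega) else 'coarse'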

2) Filtering with Weights: First, the fine structure variance σ_f² and the coarse structure variance σ_c² of the video stream are obtained by averaging the labelled path variances over all frames and scaling them with the weights w_f and w_c respectively:

σ_f² = w_f · mean_{i ∈ N_f, k}{ σ²_{f,i,k} },   σ_c² = w_c · mean_{i ∈ N_f, k}{ σ²_{c,i,k} }   (28)

where w_f and w_c are utilized to adjust σ_f² and σ_c² respectively; the general weight w is computed from the constants a, b and c together with min{ mean(σ²_{f,i,k}), mean(σ²_{c,i,k}) }. Then the fine structure stream y_f(·,·) and the coarse structure stream y_c(·,·) are obtained respectively as

y_f(x, t) = Filter{ z(x, t), σ_f² },   y_c(x, t) = Filter{ z(x, t), σ_c² }   (29)

where Filter{ z(x, t), σ_f² } and Filter{ z(x, t), σ_c² } denote the process of filtering z(·,·) with σ_f² and σ_c² respectively. We observe that ρ_{i,k} estimates the probability of the two structures, so we utilize P as the weight for filtering. Finally, the reconstructed video stream is obtained by equation (2).
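Putting Section II-B together, the sketch below stands in for Filter{·, σ²} with a plain Gaussian filter; the paper does not specify a particular base filter here, so the filter choice, the mapping from variance to filter strength and the variable names are all assumptions of this sketch.

import numpy as np
from scipy.ndimage import gaussian_filter

def denoise_frame(z, P, sigma_f2, sigma_c2):
    """Filter one noisy frame with the fine and coarse structure variances, then blend via eq. (2).

    z        : noisy frame.
    P        : per-pixel texture metric map (Section II-A).
    sigma_f2 : fine structure variance, weaker smoothing for textured paths.
    sigma_c2 : coarse structure variance, stronger smoothing for flat paths.
    A Gaussian filter is only a stand-in for Filter{., sigma^2}.
    """
    y_f = gaussian_filter(z, sigma=np.sqrt(sigma_f2))
    y_c = gaussian_filter(z, sigma=np.sqrt(sigma_c2))
    return P * y_f + (1.0 - P) * y_c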

III. EXPERIMENT

Simulation results are given in comparison with BM3D and BM4D. All the methods are tested and compared on 40 video streams under various noise levels and 13 video streams. Section III-A and Section III-B give the subjective and objective assessment respectively. The major parameters of the proposed method are set as in Table II.

TABLE II. Major parameters of the proposed method (δ, c, a, b).

A. Subjective Assessment

As shown in Fig. 3(b)-(c), our method can adaptively adjust the proportion between fine structure and coarse structure to retain detailed texture and reduce block artifacts in the process of video denoising. As shown in Fig. 3(d), our method retains as much detailed texture as BM4D and produces fewer block artifacts than BM4D.

Fig. 3. Denoising results of the video stream named Standard2. (a) The first frame of the video. (b) to (d) respectively denote the filtering results of the different algorithms: BM3D, BM4D, and ours.

B. Objective Assessment

The performance comparison is shown in Table I.

TABLE I. Comparison of PSNR.
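For the objective comparison in Table I, the PSNR between an original and a denoised frame can be computed as below; the peak value of 255 assumes 8-bit video and is an assumption of this sketch.

import numpy as np

def psnr(reference: np.ndarray, denoised: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB between a clean frame and its denoised estimate."""
    mse = np.mean((reference.astype(float) - denoised.astype(float)) ** 2)
    if mse == 0:
        return float('inf')
    return 10.0 * np.log10(peak ** 2 / mse)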

IV. CONCLUSION

In this paper, we present an innovative video denoising method combining a new adaptive texture metric based on superpixels and a new structure variance. By utilizing the metric to weight the fine and coarse structures of the video stream, the major artifacts produced by traditional methods are eliminated dramatically. Experimental results show that the proposed method achieves better performance compared with two state-of-the-art methods.

REFERENCES

[1] H. Talebi and P. Milanfar, "Global image denoising," IEEE Transactions on Image Processing, vol. 23, no. 2, pp. 755–768, 2014.
[2] W. Dong, G. Li, G. Shi, X. Li, and Y. Ma, "Low-rank tensor approximation with Laplacian scale mixture modeling for multiframe image denoising," in Proceedings of the IEEE International Conference on Computer Vision, 2015, pp. 442–449.
[3] H. Yue, X. Sun, J. Yang, and F. Wu, "Image denoising by exploring external and internal correlations," IEEE Transactions on Image Processing, vol. 24, no. 6, pp. 1967–1982, 2015.
[4] K. Dabov, A. Foi, and K. Egiazarian, "Video denoising by sparse 3D transform-domain collaborative filtering," in 15th European Signal Processing Conference. IEEE, 2007, pp. 145–149.
[5] M. Maggioni, G. Boracchi, A. Foi, and K. Egiazarian, "Video denoising, deblocking, and enhancement through separable 4-D nonlocal spatiotemporal transforms," IEEE Transactions on Image Processing, vol. 21, no. 9, pp. 3952–3966, 2012.
[6] Z. Liu, L. Yuan, X. Tang, M. Uyttendaele, and J. Sun, "Fast burst images denoising," ACM Transactions on Graphics (TOG), vol. 33, no. 6, p. 232, 2014.
[7] L. Shao, R. Yan, X. Li, and Y. Liu, "From heuristic optimization to dictionary learning: a review and comprehensive comparison of image denoising algorithms," IEEE Transactions on Cybernetics, vol. 44, no. 7, pp. 1001–1013, 2014.
[8] L. Ding, G. Li, R. Wang, and W. Wang, "Video pre-processing with JND-based Gaussian filtering of superpixels," in IS&T/SPIE Electronic Imaging. International Society for Optics and Photonics, 2015, pp. 941004–941004.
[9] R. Achanta, A. Shaji, K. Smith, A. Lucchi, P. Fua, and S. Susstrunk, "SLIC superpixels compared to state-of-the-art superpixel methods," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 34, no. 11, pp. 2274–2282, 2012.
[10] L. Li, R. Wang, W. Wang, and W. Gao, "A low-light image enhancement method for both denoising and contrast enlarging," in 2015 IEEE International Conference on Image Processing (ICIP). IEEE, 2015, pp. 3730–.
[11] X. Zhu and P. Milanfar, "Automatic parameter selection for denoising algorithms using a no-reference measure of image content," IEEE Transactions on Image Processing, vol. 19, no. 12, pp. 3116–3132, 2010.
[12] A. Edelman, "Eigenvalues and condition numbers of random matrices," SIAM Journal on Matrix Analysis and Applications, vol. 9, no. 4, pp. 543–560, 1988.
