




An Accurate and Robust Strip-Edge-Based Structured Light Method for Shiny Surface Micromeasurement in 3-D (Chinese translation)
Zhan Song, Ronald Chung, Senior Member, IEEE, and Xiao-Ting Zhang

Keywords: edge detection, shiny surface, structured light system (SLS), 3-D reconstruction

Introduction. By working principle, 3-D measuring systems can be classified into coordinate measuring machines (CMMs), time-of-flight systems [3], stereo vision [4], shape from shading [5], laser scanning [6], and structured light systems (SLSs). Each approach has its own limitations, advantages, and cost. Compared with CMMs, optical methods are non-contact and fast. For high-accuracy applications, however, a coating layer (typically 3-5 μm thick and unevenly distributed over the surface) introduces a distinct effect on the final measurement result. Because of its physical scanning nature, laser scanning is limited in operation speed, and its accuracy is also affected by laser speckle [14]. Commercial systems include the Konica Minolta VIVID 9i and the FARO Laser ScanArm; there are also systems designed specifically for micrometer-level measurement. In one such setup, the sensor is set 30 mm above the mechanical stage and measures the target spot illuminated by the laser beam at 300 Hz, with a reported depth resolution of 0.01 mm. In [16], a laser line scanning system was studied, with a reported percentage error of about 9%. Instead of a laser line, a light strip projected by a projection device can also be used with the same scanning strategy [17].

Coding strategies fall into spatial and temporal schemes [12], [18]. In spatial coding, the codeword of a pattern element is defined by the pattern values in its neighborhood. Temporal coding lets SLSs attain higher data density: a sequence of n binary strip patterns encodes the scene, dividing the illuminated region into 2^n subregions, and the pixel centers or the edges of the Gray code patterns [24] are usually encoded. To achieve higher measurement accuracy, methods such as phase shifting [9]-[11], [25], [26] and line shifting [27] shown in Fig. 1, together with various phase unwrapping algorithms [28], [29], have been proposed. In practice, a series of Gray codes is usually used to improve the robustness of the unwrapping process; by combining the local phase value with the global Gray code value, a unique codeword is obtained for each image point. In an experiment on a sphere of 150-mm diameter, an average error of 0.05 mm was reported [25]. In [26], a sinusoidal-shifting method was proposed for the 3-D measurement of flip-chip solder bumps; measuring a standard 1-mm gauge block gave an average accuracy of 2 μm, although the results suffered from the strong reflections of the bump surfaces. In the line-shifting method [27], the sinusoidal periodic pattern is replaced by a parallel-line pattern, as shown in Fig. 1(b). By shifting the line pattern six times in the x- and y-directions, respectively, six images for the x-direction and six for the y-direction are obtained. Since the line pattern is also periodic, it has inherent ambiguity, but Gray codes are introduced to reduce it. In a flatness-measurement experiment, a standard deviation of 0.028 mm was reported over a 200 x 200 mm planar region. In [30], a passive and active stereo vision system was used: passive stereo detects the surface boundary, while active stereo with projected structured light performs the 3-D reconstruction. In an experiment with an 11.65 x 7.35 cm rectangular plate, the errors in the x- and y-directions were within 1 mm.

Shiny surfaces remain a challenge. Owing to reflections, the sinusoidal structure of the projected patterns [9]-[11], [25], [26] is destroyed in the image data. To remove the periodic ambiguity of the strip patterns, traditional Gray code patterns are introduced; to detect the strip edges more precisely, both positive and negative patterns are used.

Strip-edge-based coding strategy. The codeword is formed as

P = G + S,  G ∈ {0, 1, 2, ..., (2^n ? 1)},  S ∈ {0, 1, 2, ..., (m ? 1)}

where S is the local codeword generated by the strip edges, G is the global Gray codeword, and P is the final unique codeword. Fig. 2 illustrates the coding strategy of Gray code combined with strip-shifting patterns: on top, a series of Gray code patterns (with n = 9) is used to construct the subregions.

Subpixel strip-edge detection. Limited by optics (modulation transfer function, depth of field, etc.) and worsened by surface reflection, the projected strip edges are blurred in the captured images, as in Fig. 3(a). This makes the crossing of the positive profile f_p and negative profile f_n hard to identify, so zero crossings cannot be used directly. With the difference profiles f_D+ and f_D?, the zero-crossing positions of their second derivatives are located separately, and the subpixel edge position is obtained as

x_edge = ( x{?2 f_D+ = 0} + x{?2 f_D? = 0} ) / 2.

System calibration and 3-D depth computation. Fig. 4 shows the geometric relation among the camera, projector, and world coordinate frames. Once the geometry is calibrated, the depth of any surface point P can be computed by triangulation from the two corresponding points in the camera and the projector. Superscripts C and P denote camera or projector parameters, and m and M denote image and scene points. The intrinsic parameters are collected into 3 x 3 matrices A_c and A_p, whose entries are the scale factors along the u- and v-axes of the sensor array, the skew of the sensor axes, and the origin (u0, v0) of the sensor plane [32]. R and T denote the rotation and translation between the camera and projector coordinate frames; the camera and projector each have six extrinsic parameters, collected into a 4 x 4 matrix. In calibration, the board shown in Fig. 5(a) is used to calibrate the camera; the printed pattern (gray levels 128 and 255) constitutes the calibration target, used in place of the object surface during calibration. Details of the calibration can be found in [33]. The procedure yields the intrinsics A_c and A_p, the distortion parameters of camera and projector, and their extrinsics R_c, T_c, R_p, T_p with respect to a common world frame; from R_c, T_c, R_p, T_p, the camera-projector transform E follows from (6). Given a strip-edge point extracted on the camera image plane and the corresponding strip-edge point on the projector pattern plane, both associated with the same scene point, and assuming the 2-D position is coded along the x-dimension, correspondences are established by matching codewords, and the depth z_c of the associated scene point follows from classical triangulation [8].

Experiments. The system is configured with an off-the-shelf pico-projector (3M, LCoS) and a camera running at about 30 frames/s, set at an angle of about 30°; a larger angle improves measurement accuracy but brings more occlusion and scattering. A standard PC platform (Core 2 Duo 3.3 GHz, 4-GB RAM) performs the 3-D computation. Accuracy evaluation: as shown in Fig. 7(a), a plane facing the projector head-on was translated toward it 15 times, 0.1 mm per step; after each move the plane was reconstructed, 16 planes in total. The fitted plane normal relative to the projector frame was [0.027, 0.055, 0.998] (a unit normal, i.e., nearly frontal); the results are shown in Fig. 7(b). Further experiments include the 3-D reconstruction of a shiny coin, shown in Fig. 9(a) and (b).

Comparison. In the four-step phase-shifting method, each point (x, y) has four intensity values I1, I2, I3, I4, from which the phase is solved as φ = arctan[(I4 ? I2)/(I1 ? I3)]. In the line-shifting method, the peak centers of the projected lines are found by linear interpolation at the sign changes of the convolved image [27]. The same Gray code strategy is used to remove the periodic ambiguity of the sinusoidal and line patterns, and the same procedure as in Section III-C is applied. Fig. 14 shows an experiment on a ball grid array (BGA) sample: the bumps were reconstructed by phase shifting, line shifting, and the proposed method, with results in Fig. 14(b)-(d). Visual measurement of shiny surfaces remains a challenging problem; coating introduces a small but uneven thickness. To evaluate this quantitatively, the coated reference and the original shiny surface (Fig. 11) were compared in Geomagic. As shown in Fig. 14, a standard BGA sample with bump height 0.4 mm was reconstructed; with the proposed method, shown in Fig. 14(d), the average bump height was 0.396 mm with a standard deviation of only 0.012 mm.

Conclusion and future work. Experiments on shiny coins, metal workpieces, and BGA bumps demonstrate the system's strong robustness and high measurement accuracy. Regions occluded from the projector cannot be coded and reconstructed; an additional camera mounted on the other side of the projector could address this. The current system needs about 3 s for a complete scan; in the future, a pico-projector with faster response time will be investigated.

References:
[1] A. N. Belbachir, M. Hofst?tter, M. Litzenberger, and P. Sch?n, "High-speed embedded-object analysis using a dual-line timed-address-event temporal-contrast vision sensor," IEEE Trans. Ind. Electron., vol. 58, no. 3, pp. 770-783, Mar. 2011.
[2] IEEE Trans. Ind. Electron., vol. 55, no. 1, pp. 348-363, Jan. 2008.
[3] H. Cho and S. W. Kim, "Mobile robot localization using biased chirp spread spectrum ranging," IEEE Trans. Ind. Electron., vol. 57, no. 8, pp. 2826-2835, Aug. 2010.
[4] E. Grosso and M. Tistarelli, "Active/dynamic stereo vision," IEEE Trans. Pattern Anal. Mach. Intell., vol. 17, no. 9, pp. 868-879, Sep. 1995.
[5] S. Y. Cho and W. S. Chow, "A neural-learning-based reflectance model for 3-D shape reconstruction," IEEE Trans. Ind. Electron., vol. 47, no. 6, pp. 1346-1350, Dec. 2000.
[6] F. Marino, P. De Ruvo, G. De Ruvo, M. Nitti, and E. Stella, "HiPER 3-D: An omnidirectional sensor for high precision environmental 3-D reconstruction," IEEE Trans. Ind. Electron., vol. 59, no. 1, pp. 579-591, Jan. 2012.
[7] B. Curless, "From range scans to 3D models," ACM SIGGRAPH Comput. Graph., vol. 33, no. 4, pp. 38-41, Nov. 1999.
[8] R. Hartley and P. Sturm, "Triangulation," Comput. Vis. Image Understanding, vol. 68, no. 2, pp. 146-157, Nov. 1997.
[9] D. Bergmann, "New approach for automatic surface reconstruction with coded light," in Proc. SPIE—Remote Sensing and Reconstruction for Three-Dimensional Objects and Scenes, 1995, vol. 2572.
[10] S. Zhang, "Phase unwrapping error reduction framework for a multiple-wavelength phase-shifting algorithm," Opt. Eng., vol. 48, no. 10, p. 105601, Oct. 2009.
[11] L. Zhang, B. Curless, and S. M. Seitz, "Rapid shape acquisition using color structured light and multi-pass dynamic programming," in Proc. Int. Symp. 3D Data Process., Visual., Transm., 2002, pp. 24-36.
[12] J. Salvi, S. Fernandez, T. Pribanic, and X. Llado, "A state of the art in structured light patterns for surface profilometry," Pattern Recognit., vol. 43, no. 8, pp. 2666-2680, Aug. 2010.
[13] A. Weckenmann, G. Peggs, and J. Hoffmann, "Probing systems for dimensional micro- and nano-metrology," Meas. Sci. Technol., vol. 17, no. 3, pp. 504-509, Mar. 2006.
[14] S. Jecic and N. Drvar, "The assessment of structured light and laser scanning methods in 3D shape measurements," in Proc. 4th Int. Congr. Croatian Soc. Mech., 2003, pp. 237-244.
[15] M. Yao and B. G. Xu, "Evaluating wrinkles on laminated plastic sheets using 3D laser scanning," Meas. Sci. Technol., vol. 18, no. 12, pp. 3724-3730, Dec. 2007.
[16] G. Saeed and Y. M. Zhang, "Weld pool surface depth measurement using a calibrated camera and structured-light," Meas. Sci. Technol., vol. 18, no. 8, pp. 2570-2578, Aug. 2007.
[17] L. Yao, L. Z. Ma, and D. Wu, "Low cost 3D shape acquisition system using strip shifting pattern," Digit. Human Model., ser. Lecture Notes in Computer Science, vol. 4561, pp. 276-285, 2007.
[18] J. Salvi, J. Pagès, and J. Batlle, "Pattern codification strategies in structured light systems," Pattern Recognit., vol. 37, no. 4, pp. 827-849, Apr. 2004.
[19] K. C. Wong, P. Y. Niu, and X. He, "Fast acquisition of dense depth data by a new structured-light scheme," Comput. Vis. Image Understanding, vol. 98, no. 3, pp. 398-422, Jun. 2005.
[20] Z. Song and R. Chung, "Determining both surface position and orientation in structured-light-based sensing," IEEE Trans. Pattern Anal. Mach. Intell., vol. 32, no. 10, pp. 1770-1780, Oct. 2010.
[21] I. C. Albitar, P. Graebling, and C. Doignon, "Robust structured light coding for 3D reconstruction computer vision," in Proc. Int. Conf. Comput. Vis., 2007, pp. 1-6.
[22] J. Salvi, J. Batlle, and E. Mouaddib, "A robust-coded pattern projection for dynamic 3D scene measurement," Pattern Recognit. Lett., vol. 19, no. 1, pp. 1055-1065, Sep. 1998.
[23] Y. C. Hsieh, "Decoding structured light patterns for three-dimensional imaging systems," Pattern Recognit., vol. 34, no. 2, pp. 343-349, Feb. 2001.
[24] H. B. Wu, Y. Chen, M. Y. Wu, C. R. Guan, and X. Y. Yu, "3D measurement technology by structured light using stripe-edge-based Gray code," J. Phys., Conf. Ser., vol. 48, pp. 537-541, 2006.
[25] F. Sadlo and T. Weyrich, "A practical structured light acquisition system for point-based geometry and texture," in Proc. Eur. Symp. Point-Based Graph., 2005, pp. 89-98.
[26] H. N. Yen, D. M. Tsai, and S. K. Feng, "Full-field 3D flip-chip solder bumps measurement using DLP-based phase shifting technique," IEEE Trans. Adv. Packag., vol. 31, no. 4, pp. 830-840, Nov. 2008.
[27] J. Gühring, "Dense 3D surface acquisition by structured light using off-the-shelf components," in Proc. SPIE, 2000, vol. 4309.
[28] T. Pribanic, H. Dzapo, and J. Salvi, "Efficient and low-cost 3D structured light system based on a modified number-theoretic approach," EURASIP J. Adv. Signal Process., vol. 2010, pp. 474389-1-474389-11, 2010.
[29] V. I. Gushov and Y. N. Solodkin, "Automatic processing of fringe patterns in integer interferometers," Opt. Lasers Eng., vol. 14, no. 4/5, pp. 311-324, 1991.
[30] C. Y. Chen and Y. F. Zheng, "Passive and active stereo vision for smooth surface detection of deformed plates," IEEE Trans. Ind. Electron., vol. 42, no. 3, pp. 300-306, Jun. 1995.
[31] D. Ziou and S. Tabbone, "Edge detection techniques: An overview," Int. J. Pattern Recognit. Image Anal., vol. 8, no. 4, pp. 537-559, 1998.
[32] Z. Zhang, "A flexible new technique for camera calibration," IEEE Trans. Pattern Anal. Mach. Intell., vol. 22, no. 11, pp. 1330-1334, Nov. 2000.
[33] Z. Song and R. Chung, "Use of LCD panel for calibrating structured-light-based range sensing system," IEEE Trans. Instrum. Meas., vol. 57, no. 11, pp. 2623-2630, Nov. 2008.
[34] R. Hartley and A. Zisserman, Multiple View Geometry in Computer Vision. Cambridge, U.K.: Cambridge Univ. Press, 2004.

3-D Scene Reconstruction via Multiple Structured-Light-Based Commercial Depth Cameras (Chinese translation)

Depth cameras are used for applications such as capturing facial expressions [2], among others, and can be combined with stereo vision [12], [13]. Fig. 1 shows the situation when two depth sensors capture the same scene: when both cameras are switched on simultaneously, the quality of the depth information degrades markedly. The interference among multiple depth cameras must therefore be resolved. In prior work, to operate multiple structured-light depth cameras (SLDCs) in the same scene, the authors used a correlation algorithm [6]; the method is very effective for real-time depth reconstruction. When several SLDCs observe overlapping regions of the same environment, the depth obtained by each camera may be erroneous (Fig. 1). Two main issues therefore arise when performing 3-D reconstruction with multiple SLDCs: first, how to correctly distinguish the patterns; second, the projector-camera association is not fixed (for example, when an additional infrared camera is added). We name the projectors P1, P2, ..., PM and the cameras C1, C2, ..., CN. Each projector emits a random but temporally invariant pattern. A scene point is denoted X = (x, y, z).
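Both systems above ultimately recover depth by triangulating two corresponding rays (camera-projector in the first paper, camera-camera here). A minimal midpoint-triangulation sketch under the assumption of known ray origins and directions; this is a generic formulation, not either paper's exact linear solve:

```python
# Midpoint triangulation of two rays p1 + t*d1 and p2 + s*d2:
# find the closest points on each ray and return their midpoint.
def triangulate_midpoint(p1, d1, p2, d2):
    def dot(a, b): return sum(x * y for x, y in zip(a, b))
    def sub(a, b): return [x - y for x, y in zip(a, b)]
    def add(a, b): return [x + y for x, y in zip(a, b)]
    def scale(a, k): return [x * k for x in a]

    r = sub(p1, p2)
    a = dot(d1, d1); b = dot(d1, d2); c = dot(d2, d2)
    d = dot(d1, r);  e = dot(d2, r)
    denom = a * c - b * b          # zero only for parallel rays
    t = (b * e - c * d) / denom    # parameter on ray 1
    s = (a * e - b * d) / denom    # parameter on ray 2
    q1 = add(p1, scale(d1, t))     # closest point on ray 1
    q2 = add(p2, scale(d2, s))     # closest point on ray 2
    return scale(add(q1, q2), 0.5)
```

For exactly intersecting rays the midpoint coincides with the intersection; for noisy correspondences it returns the point halfway between the two skew rays, a common stand-in for the least-squares triangulation of [8].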
The scene point X projects into the m-th projector P_m and the n-th camera C_n as

x_Pm = P_Pm X,  x_Cn = P_Cn X

where P_Pm and P_Cn denote the corresponding projection matrices, and x_Cn is the image of X in the n-th camera. The intensity observed at the corresponding image point is modeled as a linear combination of the projected patterns

I_C = Σ_m α_m I_0(x_Pm)

where α_m (α_m ≥ 0) is the reflectance weight of the corresponding pattern.

Likelihood from the camera-observation constraint. When only the camera-observation constraint is considered, 3-D reconstruction is a very typical multi-view stereo (MVS) problem. One of the biggest problems of MVS is featureless surfaces; moreover, when the number of depth cameras is small (e.g., two or three), MVS may still yield unreliable information because of self- and mutual occlusions. In our implementation, we compute the mean-removed cross correlation (MRCC) between corresponding patches; the patch observed in camera C_n, extracted with (1), is denoted I_Cn. The MRCC is computed as

MRCC(I_Ci, I_Cj) = Σ (I_Ci ? Ī_Ci)(I_Cj ? Ī_Cj) / sqrt( Σ (I_Ci ? Ī_Ci)^2 · Σ (I_Cj ? Ī_Cj)^2 )

where Ī_Ci and Ī_Cj are the mean intensities of the corresponding patches in cameras C_i and C_j. The maximum MRCC over all camera pairs is taken as the camera-observation likelihood of the whole system:

L_C = max_{i,j} MRCC(I_Ci, I_Cj).

Likelihood from the projector-camera constraint. The projector-camera constraint requires a more elaborate treatment because the linear weights α_m are unknown. In practice, α_m depends at least on the distance from the surface point to each projector and on the surface orientation. The weights are estimated by a least-residual fit,

α_m = argmin_{α_m} | Σ_m α_m I_0(x_Pm) ? I_C |

and the projector-camera likelihood L_P is then computed with the MRCC between the predicted pattern I_0 and the observation. The overall likelihood combines L_C and L_P.

Experiments. Building such a setup with real hardware is a very challenging problem, and multiple depth cameras interfere with one another through their random patterns. Four synthetic scenes (e.g., a table and a barrel) were therefore rendered with the popular ray-tracing software POV-Ray [11]; in each scene, two depth cameras observe the scene. Fig. 5(a) and (b) show the depth recovered by each camera independently with the cross-correlation-based method (MCC); Fig. 5(d) shows MVS, where the depth map of the central viewpoint is reconstructed by maximizing the likelihood for each ray during a plane sweep. Fig. 5(e) shows the result of the proposed method, which, in terms of PSNR, performs far better than direct depth merging or MVS.

References:
J. Batlle, E. Mouaddib, and J. Salvi, "Recent progress in coded structured light as a technique to solve the correspondence problem: A survey," Pattern Recognition, vol. 31, no. 7, pp. 963-982, 1998.
Q. Cai, D. Gallup, C. Zhang, and Z. Zhang, "3D deformable face tracking with a commodity depth camera," in ECCV, 2010.
R. Crabb, C. Tracey, A. Puranik, and J. Davis, "Real-time foreground segmentation via range and color imaging," in CVPR Workshop on ToF-Camera Based Computer Vision, 2008.
Y. Cui, S. Schuon, D. Chan, S. Thrun, and C. Theobalt, "3D shape scanning with a time-of-flight camera," in CVPR, 2010.
O. Faugeras et al., "Real-time correlation-based stereo: Algorithm, implementations and applications," INRIA Tech. Rep. 2013, 1993.
Y.-S. Kang and Y.-S. Ho, "High-quality multi-view depth generation using multiple color and depth cameras," in ICME, 2010.
V. Kolmogorov and R. Zabih, "Multi-camera scene reconstruction via graph cuts," in ECCV, 2002.
K. Kutulakos and S. Seitz, "A theory of shape by space carving," IJCV, vol. 38, no. 3, pp. 199-218, 2000.
J. Zhu, L. Wang, R. Yang, and J. Davis, "Fusion of time-of-flight depth and stereo for high accuracy depth maps," in CVPR, 2008.
Q. Yang, K.-H. Tan, B. Culbertson, and J. Apostolopoulos, "Fusion of active and passive sensors for fast 3D capture," in Multimedia Signal Processing (MMSP), 2010.
J. Sun, N.-N. Zheng, and H.-Y. Shum, "Stereo matching using belief propagation," IEEE Trans. on PAMI, vol. 25, no. 7, pp. 787-800, 2003.

IEEE TRANSACTIONS ON INDUSTRIAL ELECTRONICS, VOL. 60, NO. 3, MARCH 2013
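The mean-removed cross correlation used by the multi-camera likelihood above can be sketched as follows. The per-pair maximum follows L_C as given; the patch extraction itself and the exact normalization are assumptions of this sketch:

```python
import math

def mrcc(patch_a, patch_b):
    # Mean-removed (zero-mean, normalized) cross-correlation
    # of two equal-length intensity patches; result in [-1, 1].
    n = len(patch_a)
    ma = sum(patch_a) / n
    mb = sum(patch_b) / n
    da = [a - ma for a in patch_a]
    db = [b - mb for b in patch_b]
    num = sum(x * y for x, y in zip(da, db))
    den = math.sqrt(sum(x * x for x in da) * sum(y * y for y in db))
    return num / den if den else 0.0

def camera_likelihood(patches):
    # L_C: maximum MRCC over all camera pairs, as in the text.
    best = -1.0
    for i in range(len(patches)):
        for j in range(i + 1, len(patches)):
            best = max(best, mrcc(patches[i], patches[j]))
    return best
```

Removing the mean makes the score invariant to the additive intensity offsets that differ between cameras, which is why MRCC rather than raw correlation is used here.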
An Accurate and Robust Strip-Edge-Based Structured Light Means for Shiny Surface Micromeasurement in 3-D

Abstract—Three-dimensional measurement of shiny or reflective surface is a challenging issue for optical-based instrumentations. In this paper, we present a novel structured light approach for direct measurement of shiny targets so as to skip the coating preprocedure. In comparison with traditional image-intensity-based structured light coding strategies like sinusoidal patterns, information is encoded at the strip edges in the illuminated patterns. With strip edges generally better preserved than individual image intensity in the image data in the presence of surface reflections, such a coding strategy is more robust. To remove the periodic ambiguity within strip patterns, traditional Gray code patterns are adopted. To localize the strip edges more precisely, both positive and negative strip patterns are used, and an edge detector of subpixel accuracy is proposed for strip-edge localization. The experimental setup is configured with merely an off-the-shelf pico-projector and a camera. Extensive experiments including accuracy evaluation, comparison with previous structured light algorithms, and the reconstruction of some real shiny objects are shown to demonstrate the system's accuracy and endurance against the reflective nature of the targets.

Index Terms—Edge detection, shiny surface, structured light system (SLS), 3-D reconstruction.

Introduction

WITH the development of microfabrication and electronic packaging technology, there is in industry an increasing need of demanding three-dimensional (3-D) information in micrometer-level precision for surface inspection and quality control purposes [1], [2]. Even in mundane tasks like coin anticounterfeiting and signature recognition, it has been recognized that 3-D measurements that are necessarily at micrometer level constitute another level of capability beyond the existing 2-D methods. However, instruments and means for micrometer-level 3-D measurement in adequate accuracy and economy are still lacking. The fact that many of the target objects are shiny or reflective poses additional challenge to the measurement process.

(Manuscript received May 21, 2011; revised August 12, 2011, November 2011, December 12, 2011, and January 11, 2012; accepted February 13, 2012. Date of publication February 24, 2012; date of current version October 16, 2012. This work was supported in part by the National Natural Science Foundation of China (NSFC) under Grant 61002040, in part by NSFC-GuangDong under Grant 10171782619-2000007, and in part by the Introduced Innovative R&D Team of Guangdong Province-Robot and Intelligent Information Technology Team. Z. Song and X. T. Zhang are with the Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China. R. Chung is with the Department of Mechanical and Automation Engineering, The Chinese University of Hong Kong, Shatin, Hong Kong. Color versions of one or more of the figures in this paper are available online.)

By the working principles, 3-D measuring systems can be classified into coordinate measuring machines (CMMs), time of flight systems [3], stereo vision [4], shape from shading [5], laser scanning [6], and structured light systems (SLSs). Each approach comes with its own limitations, advantages, and cost. Compared with CMMs, optical-based methods have the advantages of being noncontact and fast and have been widely adopted in the commercial sector [7]. In these methods, laser beams or structured light patterns are projected onto the object surface, and the reflections are captured by imaging devices. Depth information at these illuminated areas can then be calculated via triangulation [8].

However, existing optical methods generally encounter difficulties with shiny or specular objects. Surfaces that are shiny generally have most of the incoming light beams reflected off the surfaces to various directions other than that of the imaging device, and that greatly compromises the quality of the captured images. Subject to the low image quality, existing structured light means which are usually intensity based [9]–[11] are difficult to operate. The often-used practice is to coat the shiny surfaces with a thin layer of white powder, and coating the surfaces with a thin opaque lacquer just for measuring purpose is a common practice. However, for applications where high accuracy is demanded, the coating (typically of thickness of 3–5 μm, which could be unevenly distributed over the surface) will induce distinct effect to the final measurement accuracy. More importantly, such a treatment complicates the whole scanning procedure and could cause corrosions to the target surface. All these make existing structured light scanning techniques impractical in many applications [12].

In this paper, a novel strip-edge-based structured light coding strategy is presented. Information encoded in the pattern is not individual image intensity but strip-edge position. Compared with raw image intensities, strip edges have precise locations in the image data that are more distinguishable despite the presence of influence from the specular nature of the imaged scene to the overall image-intensity distribution. That makes its coding information more robust. To remove the periodic ambiguity with strip patterns, the traditional Gray code patterns are adopted. By the use of both positive and negative strip patterns and a zero-crossing-based feature detector that we developed, strip edges can be extracted accurately. We show experimental results from a system setup that is configured with merely an off-the-shelf pico-projector and an industrial camera. Both devices are consumer grade to make the system economical and applicable for wide industrial applications.

This paper is organized as follows. Section II gives a brief review of the previous work on optical 3-D measuring technologies. The proposed structured light coding strategy, strip-edge detector, system calibration, and 3-D reconstruction are described in Section III. Experiments on accuracy evaluation, micrometer-level measurement of some real shiny objects, and comparisons with some traditional methods are presented in Section IV. A conclusion and possible future work are offered in Section V.

Previous Work

This work focuses on how high-accuracy 3-D measurement can be conducted over shiny micro-objects. Traditional mechanical probing means adopted in CMMs can achieve high precision, but at the expense of measuring speed [13]. Noncontact techniques, including laser scanning and SLS, have been advancing quickly in the last decade. Such techniques are triangulation-based optical measuring technologies. Laser scanner is operable on almost all surfaces, but because of its physical scanning-based nature, the operation speed is limited. The measuring accuracy is also affected by laser speckle effect [14]. SLSs consist of a projection device and cameras. By projecting some specially designed light patterns onto the target surface and imaging the illuminated scene, image points under the illuminations can each be labeled with a unique codeword. Such codewords are present in the image data. Since codewords on the projection side are known a priori, point-to-point correspondences between the image plane and the projection plane can be established readily. Three-dimensional information at such positions can then be determined via triangulation [8]. SLSs have the benefits of demanding only a simple setup, low cost, and fast operation speed, although its performance has certain dependence upon the surface condition of the inspected object and upon the working environment.

In the laser scanning approach, a beam of laser light is passed over the object while a camera mounted aside is to record the position of the projected laser profiles. There are an increasing number of off-the-shelf 3-D laser scanners available in the market. Differences of the various systems lie mainly on the laser strength, wavelength, working distance, and scanning speed. Representative systems in the market include Konica Minolta VIVID 9i and FARO Laser ScanArm. There are also systems specifically for measuring in micrometer level, with a resolution of 0.1 × 0.5 × 0.05 mm in a measuring field of 150 × 25 mm. In [15], a laser displacement sensor was used to evaluate wrinkles on laminated plastic sheets. The sensor uses spot laser illumination and performs position detection through subpixel peak detection. The laser displacement equipment is slightly tilted to minimize the effect of specular reflection from plastic surfaces. The sensor is set at a height of 30 mm above the plane of the mechanical stage and measures the distance to the target spot illuminated by the laser beam at a rate of 300 Hz. In the experiment, a depth resolution of 0.01 mm was reported.

Fig. 1. (a) Sinusoidal pattern is shifted four times to let the phase value at each image point be determined [25]. (b) Line pattern is shifted six times to encode each image point on the line [27].

In [16], a laser line scanning system is studied. In the implementation, the camera and the laser are first calibrated. Having found the top and bottom boundaries of the laser line in the image, the center of the laser line between the top and bottom boundaries is found along its length. These center points are used for ray tracing and depth calculation. When measuring a weld pool of depth about 0.38 mm, a percentage error of about 9% was reported in the experiment. Instead of laser line, a light strip illuminated by a projection device can also be used with the same scanning strategy [17].

In the use of structured light pattern, the coding strategies can be categorized to spatial coding and temporal coding schemes [12], [18]. In the spatial coding scheme, codeword at a pattern element is defined by the pattern values in the neighborhood, which can be about various gray levels [19], colors [20], or some geometrical primitives [21] in the pattern. De-Bruijn sequences [22] and M-array [23] are the main coding schemes often employed. Since unique label at any pattern position comes with only a certain "spread" of pattern values in the vicinity of the position, the pattern positions that can be uniquely coded, and in turn their depth values subsequently recovered, cannot be too dense. In the temporal coding scheme, coding is achieved not in the spatial domain but in the time domain. A series of patterns is projected at different instants onto the target surface. The codeword for any given pixel on the camera's image plane is decided by the illuminations at that pixel over time. SLSs using this encoding scheme can result in stronger data density and higher accuracy in the measurement. In particular, Gray code is a widely used coding scheme because of its simplicity and robustness. If a binary Gray code pattern of codeword length n is to be used, an image sequence consisting of n + 1 binary strip patterns need be projected sequentially to encode the scene. With that, the scene in the image can be separated into 2^n subregions, and the pixel center or the Gray code pattern's edges [24] are usually encoded.
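The temporal Gray-code scheme just described (n patterns, 2^n subregions, each labeled by an n-bit Gray codeword) can be sketched as follows. The subregion width m and the G·m + S combination of the global Gray codeword with the strip-shift index are assumptions of this sketch, chosen so that every column gets a unique code; the paper writes the combination abstractly as P = G + S:

```python
def int_to_gray(i):
    # Standard binary-reflected Gray code of integer i.
    return i ^ (i >> 1)

def gray_to_int(g):
    # Inverse of int_to_gray.
    i = 0
    while g:
        i ^= g
        g >>= 1
    return i

def gray_code_patterns(n, width):
    # Pattern k is bit k (MSB first) of the Gray code of each
    # column's subregion index; m is the subregion width in pixels.
    m = width // (1 << n)
    patterns = []
    for k in range(n):
        row = [(int_to_gray(x // m) >> (n - 1 - k)) & 1 for x in range(width)]
        patterns.append(row)
    return patterns

def decode_codeword(bits, shift_index, m):
    # bits: the n illuminations observed at one pixel, MSB first.
    g = 0
    for b in bits:
        g = (g << 1) | b
    G = gray_to_int(g)      # global subregion index
    S = shift_index         # local strip-shift code, 0..m-1
    return G * m + S        # assumed combination into a unique code P
```

Because adjacent Gray codewords differ in exactly one bit, a one-pixel decoding error at a subregion boundary perturbs the code by at most one subregion, which is the property the scheme relies on.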
To achieve higher measurement accuracy, phase shifting [9]–[11], [25], [26] and line shifting [27] are usually used as shown in Fig. 1. To solve the periodic ambiguity between sinusoidal and line patterns, various phase unwrap algorithms have been proposed [28], [29]. In real applications, to improve the robustness of the unwrap procedures, a series of Gray code patterns are usually used. By combining the local phase value and the global Gray code value, a unique codeword for each image point can be obtained. In experimenting with a sphere of 150-mm diameter, an average deviation of 0.05 mm was reported [25]. In [26], a sinusoidal shifting SLS is proposed for the 3-D measurement of flip chip solder bumps. In the measurement of a standard 1-mm gauge block, an average accuracy of 2 μm was obtained. However, subject to strong reflections of bump surfaces, the reconstructed 3-D models of the bumps have obvious discrepancy with the actual ones. In the line-shifting method [27], the sinusoidal periodic profile is substituted by a parallel line pattern as shown in Fig. 1(b). By consecutively shifting the line pattern six times in the x- and y-directions, respectively, six images for the x-coordinate and six images for the y-coordinate can be obtained. Since the line pattern is also periodic, it has inherent ambiguity, but the Gray code patterns can be brought in to remove it. A standard deviation of 0.028 mm is reported in measuring a planar region of 200 × 200 mm. In [30], a passive and active stereo vision system was used for reconstructing the surface of the deformed plate. In the system, passive stereo vision is used to detect the surface boundary, while active stereo vision with a projection of structured light is used for 3-D reconstruction. In an experiment with a rectangular plate of size 11.65 × 7.35 cm, at a working distance of about 40 cm, the measuring error in depth was within 2 mm, and the error in the x- and y-directions was within 1 mm.

Shiny surface with strong reflective nature is still a challenge to optical-based instrumentations including structured light means. Sinusoidal structure in the projected pattern [9]–[11], [25], [26] is destroyed in the image data owing to the strong reflections, and the projected lines are also usually flooded and undistinguishable in the captured image [27]. To improve the structured light pattern's antireflection capability, a strip-edge-based coding strategy is proposed in this paper. Compared with the image-intensity-based sinusoidal pattern and the thin-line-based line-shifting pattern, the strips can be better preserved in the image data in the presence of surface reflections, and that makes the coding procedure more robust. To remove the periodic ambiguity within strip patterns, traditional Gray code patterns are brought in. To determine the strip edges more precisely, both the positive and negative patterns are used. In particular, an improved zero-crossing edge detector is proposed for strip-edge localization in subpixel accuracy.

Strip-Edge-Based Structured Light: Pattern Design and Strip-Edge Detection

The major processes involved in the devised system are thus the following. In the structured light pattern, a unique codeword is assigned to each strip-edge point, which is a combination of a local strip-edge code value and a global Gray code value. The illuminated object surface is then imaged, and the strip edges in the image data are precisely located. With the codewords, correspondences between the illuminated pattern and the image plane can be established, and spatial depth at the strip-edge points can be determined. Compared with the raw image intensities and thin image lines that are vulnerable to the reflective nature of shiny surfaces, edges of binary strips in the illuminated pattern are more localizable and better preserved in the image data despite the specular nature of the object surfaces. Higher localizability of the edges comes with more accurate reconstruction and higher robustness of the measurement.

Fig. 2. Coding strategy of Gray code combined with strip shifting pattern. Top: Series of Gray code patterns (with n = 9) is used to construct 256 subregions each with a unique codeword. Bottom: Strip pattern of width 4 pixels is shifted three times to encode positions within each subregion. Strip edges of the shifting pattern will be detected and encoded in finer accuracy.

In the implementation, a series of (n + 1) Gray code patterns is first projected in order to divide the target surface into 2^n subregions (it is not 2^(n+1) regions because the all-zero codeword is not distinguishable in the image data and generally not used), each with a unique n-bit-long Gray codeword. Suppose that each of such subregions is m pixels wide on the projector's pattern generation plane. Then, a strip pattern in half of the finest strip width of the Gray code sequence is shifted m ? 1 times, in steps of one pixel. Such a periodic pattern, by itself, has a periodic ambiguity of m pixels (i.e., the width of the strip pattern on the projector's pattern generation plane) in distinguishing different points of the target surface. However, by combining the two codewords together, one from the Gray code and the other from the strip pattern, the m subdivisions can be introduced to each of the 2^n subregions to achieve finer 3-D reconstruction. The procedure can be expressed as

P = G + S,  G ∈ {0, 1, 2, ..., (2^n ? 1)},  S ∈ {0, 1, 2, ..., (m ? 1)}

where S is the local codeword generated by the strip pattern, G is the global Gray codeword used to remove periodic ambiguity among strip patterns, and P is the final unique codeword. For a pattern generation plane of 1024-pixel width in the projector, Gray code of 9-bit length (i.e., n = 9) can separate it into 256 subregions. Then, a strip pattern of width 4 pixels is shifted three times in step length of 1 pixel as illustrated by Fig. 2. Upon the shifting, strip edges appear at all positions within each subregion.

Fig. 3. (a) Zero-crossing edge detector. (b) Without surface reflection.
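The zero-crossing detector of Fig. 3 locates an edge at the sign change of the second derivative of the strip profile and averages the estimates from the positive and negative patterns, x_edge = (x{?2 f_D+ = 0} + x{?2 f_D? = 0}) / 2. A 1-D sketch using a discrete second difference and linear interpolation at the sign change; the construction of the difference profiles from the captured images is simplified here:

```python
def subpixel_zero_crossing(signal):
    # Discrete second difference, then linear interpolation at the
    # positive-to-negative sign change (the inflection of a rising edge).
    d2 = [signal[i - 1] - 2 * signal[i] + signal[i + 1]
          for i in range(1, len(signal) - 1)]
    for i in range(len(d2) - 1):
        if d2[i] > 0 and d2[i + 1] <= 0:
            frac = d2[i] / (d2[i] - d2[i + 1])
            return i + 1 + frac     # +1 offsets the derivative window
    return None

def strip_edge(profile_pos, profile_neg):
    # Average the zero crossings of the positive strip profile and the
    # (inverted) negative strip profile, as in the x_edge formula above.
    xp = subpixel_zero_crossing(profile_pos)
    xn = subpixel_zero_crossing([-v for v in profile_neg])
    if xp is None or xn is None:
        return None
    return 0.5 * (xp + xn)
```

Averaging the two estimates cancels the bias that a one-sided blur or reflection gradient introduces into either profile alone, which is the rationale for projecting both positive and negative patterns.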