
Undergraduate Graduation Project: Foreign Literature Translation

Title: Sensor Technology
Major: *   Class: *   Name: *   Supervisor: *   College: *
Attachments: 1. Translation of the foreign-language material; 2. Original text

Multiple Classifier Systems for Multisensor Data Fusion

I. INTRODUCTION

In many applications of pattern recognition and automated identification, it is not uncommon for data obtained from different sensors monitoring a physical phenomenon to provide complementary information. A suitable combination of such information is usually referred to as data or information fusion, and can lead to improved accuracy and confidence of the classification decision compared to a decision based on any of the individual data sources alone.

We have previously introduced Learn++, an ensemble-of-classifiers based approach, as an effective automated classification algorithm that is capable of learning incrementally. The algorithm is capable of acquiring novel information from additional data that become available after the classification system has already been designed. To achieve incremental learning, Learn++ generates an ensemble of classifiers (experts), where each classifier is trained on the currently available database. Recognizing the conceptual similarity between data fusion and incremental learning, we discuss a similar approach for data fusion: employ an ensemble of experts, each trained on data provided by one of the sources, and then strategically

combine their outputs. We have observed that the performance of such a system in decision-making applications is significantly and consistently better than that of a decision based on a single data source, across several benchmark and real-world databases.

The applications for such a system are numerous, wherever data available from multiple sources (or multiple sensors) generated by the same application may contain complementary information. For instance, in non-destructive evaluation of pipelines, defect information may be obtained from eddy current, magnetic flux leakage images, ultrasonic scans or thermal imaging; or different pieces of diagnostic information may be obtained from several different medical tests, such as blood analysis, electrocardiography or electroencephalography, or from medical imaging devices, such as ultrasonic, magnetic resonance or positron emission scans. Intuitively,

if such information from multiple sources can be appropriately combined, the performance of a classification system (in detecting whether there is a defect, or whether a diagnostic decision can be made) can be improved. Consequently, both incremental learning and data fusion involve learning from different sets of data. In incremental learning, supplementary information must be extracted from new datasets, which may include instances from new classes. In data fusion, complementary information must be extracted from new datasets, which may represent the data using different features. Traditional

methods are generally based on probability theory (Bayes' theorem, Kalman filtering), or on decision theory such as Dempster-Shafer (DS) theory and its variations, which were primarily developed for military applications, most notably target detection and tracking. Ensemble-of-classifiers based approaches seek to provide a fresh and more general solution for a broader spectrum of applications. It should also be noted that in several applications, such as the nondestructive testing and medical diagnostics mentioned above, the data obtained from different sources may have been generated by different physical modalities, and therefore the features obtained may be heterogeneous. While probability or decision theory based approaches become more complicated in such cases, heterogeneous features can easily be accommodated by an ensemble based system, as discussed below.

An ensemble system combines the outputs of several diverse classifiers or experts. The diversity among the classifiers allows different decision boundaries to be generated by using slightly different training parameters, such as different training datasets. The intuition is that each expert will make a different error, and strategically combining these classifiers can reduce the total error. Ensemble systems have attracted a great deal of attention over the last decade due to their reported superiority over single-classifier systems in a variety of applications. Recognizing the potential of this approach for incremental

learning applications, we have recently developed Learn++, and shown that Learn++ is indeed capable of incrementally learning from new data. Furthermore, the algorithm does not require access to previously used data, does not forget previously acquired knowledge, and is able to accommodate instances from classes previously unseen in earlier training. The general approach in Learn++, much like that of other ensemble algorithms such as AdaBoost, is to create an ensemble of classifiers, where each classifier learns a subset of the dataset. The classifiers are then combined using weighted majority voting. In this contribution, we review the Learn++ algorithm, suitably modified for data fusion applications. In essence, for each dataset generated from a different source and/or using different features, Learn++ generates a new ensemble of classifiers, which are then combined using weighted majority voting.

II. LEARN++

The pseudocode of the Learn++ algorithm, as applied to the data fusion problem, is provided in Figure 1 and described in detail in the following paragraphs. For each database FSk, k=1,...,K, consisting of a different set of features and submitted to Learn++, the inputs to the algorithm are: (i) a sequence Sk of mk training data instances xi along with their correct labels yi; (ii) a supervised classification algorithm BaseClassifier, generating individual classifiers (henceforth, hypotheses); and (iii) an integer Tk, the number of classifiers to be generated for the kth database. Each hypothesis ht, generated during the tth iteration of the algorithm, is trained on a different subset of the training data. This is achieved by initializing a set of weights for the training data, wt, and a distribution Dt obtained from wt (step 1). According to this distribution, a training subset TRt is drawn from the training data Sk (step 2). The distribution Dt determines which instances of the training data are more likely to be selected into the training subset TRt. The base classifier is trained on TRt in step 3, which returns the tth hypothesis ht. The error of this hypothesis, εt, is computed on the current database Sk as the sum of the distribution weights of the misclassified instances (step 4). This error is required to be less than 1/2 to ensure that a minimum reasonable performance can be expected from ht. If this is the case, the hypothesis ht is accepted and the

error is normalized to obtain the normalized error βt (step 5).

Fig. 1. The Learn++ algorithm for data fusion

If εt ≥ 1/2, the current hypothesis is discarded, and a new training subset is selected by returning to step 2. All t hypotheses generated thus far are then combined using weighted majority voting (WMV) to obtain a composite hypothesis Ht. In WMV, each hypothesis is assigned a weight that is inversely proportional to its error, giving a higher weight to classifiers with smaller training error. The error of the composite hypothesis Ht is then computed in a similar fashion, as the sum of the distribution weights of the instances that are misclassified by Ht (step 6). The normalized composite error Bt is obtained, which is then used for updating the distribution weights assigned to individual instances in step 7. The distribution weights of the instances correctly classified by the composite hypothesis Ht are reduced by a factor of Bt; hence, when the distribution is re-normalized in step 1 of the next iteration, the weights of the misclassified instances are effectively increased. We note that this weight update rule, based on the performance of the current ensemble, facilitates learning from new data. This is because, when a new dataset is introduced (particularly one with new classes or features), the existing ensemble (Ht) is likely to misclassify the instances that have not yet been properly learned; hence the weights of these instances are increased, forcing the algorithm to focus on the new data.

An additional set of weights is introduced in data fusion applications, one for each ensemble. These weights represent the importance and reliability of the particular data source, and can be assigned based on prior experience (e.g., for diagnosing a neurological disorder we may know

that magnetic resonance imaging (MRI) is more reliable than electroencephalography, and we may therefore choose to give a higher weight to the classifiers trained with MRI data), or they can be based on the performance of the ensemble trained on the particular feature set on its own training data. We have calculated such a weight for the kth dataset based on the training performance of the ensemble trained on that dataset, and adjusted the voting weights accordingly. The adjusted weight of each classifier is then used during the weighted majority voting for the final hypothesis Hfinal.
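The per-source training loop and the weighted majority vote described above can be sketched in code. This is a minimal illustration under stated assumptions, not the authors' implementation: decision stumps (the hypothetical helper `train_stump`) stand in for BaseClassifier, the distribution update is simplified to use the individual hypothesis ht rather than the composite hypothesis Ht of steps 6 and 7, the per-source reliability weights are omitted, and the log(1/β) voting weights follow the AdaBoost convention on which Learn++ builds.

```python
import math
import random

def train_stump(X, y, D):
    """Pick the (feature, threshold, polarity) stump minimizing D-weighted error."""
    best = None
    for f in range(len(X[0])):
        for thr in sorted({x[f] for x in X}):
            for pol in (1, -1):
                err = sum(d for d, x, t in zip(D, X, y)
                          if (pol if x[f] >= thr else -pol) != t)
                if best is None or err < best[0]:
                    best = (err, (f, thr, pol))
    return best[1]

def stump_predict(stump, x):
    f, thr, pol = stump
    return pol if x[f] >= thr else -pol

def learn_fusion(datasets, T=5, seed=0):
    """Train one ensemble per feature set; return a list of [(stump, vote_weight)]."""
    rng = random.Random(seed)
    ensembles = []
    for X, y in datasets:            # one training sequence S_k per data source
        m = len(X)
        D = [1.0 / m] * m            # distribution D_t over instances (step 1)
        ensemble = []
        for _ in range(T):
            idx = rng.choices(range(m), weights=D, k=m)      # draw TR_t (step 2)
            stump = train_stump([X[i] for i in idx],         # train on TR_t (step 3)
                                [y[i] for i in idx], [1.0 / m] * m)
            # error on the full current database S_k (step 4)
            err = sum(d for d, x, t in zip(D, X, y) if stump_predict(stump, x) != t)
            if err >= 0.5:           # discard the hypothesis and redraw
                continue
            e = max(err, 1e-10)
            beta = e / (1.0 - e)                             # normalized error (step 5)
            ensemble.append((stump, math.log(1.0 / beta)))   # WMV weight: log(1/beta)
            # reduce the weights of correctly classified instances, then re-normalize
            D = [d * (beta if stump_predict(stump, x) == t else 1.0)
                 for d, x, t in zip(D, X, y)]
            s = sum(D)
            D = [d / s for d in D]
        ensembles.append(ensemble)
    return ensembles

def fused_predict(ensembles, xs):
    """Weighted majority vote over all ensembles; xs holds one vector per feature set."""
    score = sum(w * stump_predict(st, x)
                for ens, x in zip(ensembles, xs)
                for st, w in ens)
    return 1 if score >= 0 else -1
```

Each feature set contributes its own ensemble (labels are ±1 here), so an instance misread by the classifiers of one source can be outvoted by the ensembles trained on the other sources.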

The schematic representation of the algorithm is provided in Figure 2. Simulation results of Learn++ on incremental learning using several datasets, as well as comparisons to other incremental learning methods such as Fuzzy ARTMAP, can be found in [11]. The simulation results of Learn++ on two applications of data fusion are presented below; they include additional detail, updated results and further insight beyond those presented in [14], [15]. Both are real-world applications, one involving the combination of ultrasonic and magnetic flux leakage data for identification of pipeline defects, and the other involving the combination of chemical sensor data from several sensors for identification of volatile organic compounds.

Fig. 2. Schematic representation of the algorithm

THE EFFECT OF THICKNESS ON ZnO THIN FILM CO GAS SENSOR

3. Results and discussion

3.1 Structural Characteristics

The sensing mechanism of semiconducting ZnO belongs to the surface-controlled type, where the gas sensitivity is determined by the surface adsorption sites/area. The surface morphology varies significantly with the film thickness, and therefore the total adsorption area exposed to the target gas may also vary with the film thickness. In this paper, the adsorption area as a function of ZnO film thickness was controlled by changing the deposition time among 30, 60, 90, 120, 150 and 180 min. The ZnO film thickness increased from 89 nm to 510 nm as the deposition time varied from 30 to 180 min.

Figure 2. EDAX spectrum of ZnO thin film

As shown in Fig. 2, the spectra indicate that the prominent peak is the Zn line, followed by the O line. The other peak in the spectra is the Au peak from the gold-coating treatment applied in preparation for SEM observation.

Fig. 3. SEM morphology of the surface of ZnO films with various thicknesses: (a) 89 nm (b) 240 nm (c) 425 nm

The surface morphologies of the ZnO films of various thicknesses are shown in Fig. 3. It is seen that the films are smooth in morphology, with the grain size becoming much more developed as the film thickness increased.

3.2 Sensing Characteristics of the thin film sensors

In general, the sensitivity of the sensor is affected by the operating temperature: a higher temperature enhances the surface reaction of the thin films and gives higher sensitivity within a certain temperature range.

Fig. 4. The relation between sensitivity and operating temperature of ZnO films under a 2100 ppm CO atmosphere

Fig. 4 shows the sensitivity as a function of operating temperature from 50 °C to 350 °C for the various thicknesses of deposited zinc oxide film under a 2100 ppm CO concentration. A rapid increase in sensitivity was observed as the operating temperature was increased to 200 °C and reached

a maximum for the 89 nm film, decreasing thereafter with further increase in operating temperature. It can be inferred from this observation that the sensitivity increased as the film thickness decreased, and that the optimum operating temperature for the 89 nm film is the lowest compared with the other films. The

sensitivity of a metal oxide semiconductor sensor is mainly determined by the interaction between the target gas and the surface of the sensor. The greater the surface area of the material, the stronger the interaction between the adsorbed gases and the sensor surface, i.e. the higher the gas sensing sensitivity. It can be observed from the SEM morphologies shown in Fig. 3 that the grain size is smallest in the 89 nm film and largest in the 425 nm one, so the total grain-boundary area is largest for the 89 nm film in this study.

Fig. 5. Transient responses of ZnO films (89 nm thick) exposed to various CO concentrations at 300 °C

Fig. 5 shows the sensitivity as a function of CO gas concentration for the as-deposited 89 nm ZnO film at 300 °C. The sensitivity increased as the CO gas concentration rose from 400 to 2100 ppm, and dropped sharply when the CO gas was removed. The sensor thus shows a good response to the different CO concentrations. Moreover, the sensor reached its maximum sensitivity at almost the same time for the different CO concentrations, consistent with the conclusion that the response time is controlled by the operating temperature.

Fig. 6. The relation between gas sensitivity and operating temperature of the 89 nm ZnO film under various CO concentrations

Fig. 6 shows the sensitivity as a function of operating temperature for various CO concentrations. It can be seen that the sensitivity of the ZnO film increased as the CO concentration was raised from 200 to 2100 ppm.

Fig. 7. The effect of operating temperature on the sensitivity of the 89 nm ZnO film under a 2100 ppm CO atmosphere

Fig. 7 shows the dynamic variation of sensitivity with operating time at different operating temperatures for the 89 nm film under a 2100 ppm CO atmosphere. The sensitivity was observed to reach its maximum as the operating temperature was raised to 350 °C.

4. Conclusions

The structure and CO gas sensing characteristics of ZnO thin films of various thicknesses deposited with an RF magnetron sputtering system were investigated. The structural characterization showed that the grain size increased with increasing film thickness, reducing the total adsorption area and hence lowering the sensing sensitivity. The gas sensing sensitivity also increased with increasing CO gas concentration, and raising the operating temperature improved both the sensitivity and the response time. The highest sensitivity in this study was obtained for the 89 nm film at an operating temperature of 200 °C.
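Neither the sensitivity nor the response time is defined by a formula in this excerpt. A convention commonly used for reducing gases such as CO, and assumed here, is S = Ra/Rg (film resistance in air over resistance in the test gas), with the response time taken as the time for the response to reach 90% of its final value. A minimal sketch under those assumptions:

```python
def sensitivity(r_air, r_gas):
    # Assumed definition S = Ra / Rg; the excerpt does not state the formula.
    return r_air / r_gas

def t90(times, response):
    # Response time: first instant the response reaches 90% of its final value.
    target = 0.9 * response[-1]
    for t, s in zip(times, response):
        if s >= target:
            return t
    return None

def best_operating_temperature(temps, sensitivities):
    # Operating temperature at which the measured sensitivity peaks (cf. Fig. 4).
    return max(zip(sensitivities, temps))[1]
```

For example, applying `best_operating_temperature` to a sensitivity-temperature curve like that of the 89 nm film in Fig. 4 would pick out the 200 °C optimum reported in the conclusions.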
