




English Original

Fault Diagnosis of Three Phase Induction Motor Using Neural Network Techniques

Abstract: Fault diagnosis of induction motors is gaining importance in industry because of the need to increase reliability and to decrease the possible loss of production due to machine breakdown. Environmental stress and many other causes lead to different faults in induction motors. Many researchers have proposed techniques for fault detection and diagnosis; however, many of the techniques presently available require a good deal of expertise to apply successfully. Simpler approaches are needed that allow relatively unskilled operators to make reliable decisions without a diagnosis specialist examining the data. In this paper a simple, reliable and economical Neural Network (NN) based fault classifier is proposed, in which the stator current is used as the input signal from the motor. Thirteen statistical parameters are extracted from the stator current and PCA is used to select the proper inputs. Data are generated from experiments on a specially designed 2 HP, 4 pole, 50 Hz three phase induction motor. For classification, NNs such as MLP and SVM and statistical classifiers based on CART and Discriminant Analysis are verified. The robustness of the classifier to noise is also verified on unseen data by introducing controlled Gaussian and uniform noise in the input and output.

Index Terms: Induction motor, Fault diagnosis, MLP, SVM, CART, Discriminant Analysis, PCA

I. INTRODUCTION

Induction motors play an important role as prime movers in manufacturing, the process industry and transportation due to their reliability and simplicity of construction. In spite of their robustness and reliability they do occasionally fail, and unpredicted downtime is obviously costly, so they require constant attention. Faults in induction motors may not only interrupt production but also increase costs, decrease product quality and affect the safety of operators. If the lifetime of induction machines were extended and the efficiency of manufacturing lines improved, it would lead to smaller production expenses and lower prices for the end user. In order to keep machines in good condition, techniques such as fault monitoring, fault detection and fault diagnosis have become increasingly essential. The most common faults of induction motors are bearing failures, stator phase winding failures, broken rotor bars or cracked rotor end-rings, and air-gap irregularities.

The objective of this research is to develop an alternative neural network based incipient fault-detection scheme that overcomes the limitations of existing schemes: they are costly, applicable mainly to large motors, and require many design parameters which, especially for machines that have been operating for a long time, are not easily available. Compared to existing schemes, the proposed scheme is simple, accurate, reliable and economical. This work is based on real-time data, so the proposed neural network based classifier demonstrates its actual feasibility in a real industrial situation. Four different neural network structures are presented in this paper with their respective performances, and about 100% classification accuracy is achieved.

II. FAULT CLASSIFICATION USING NN

The proposed fault detection and diagnosis scheme consists of four procedures, as shown in Fig. 1 (a minimal code sketch of the pipeline follows this list):
1. Data collection & acquisition
2. Feature extraction
3. Feature selection
4. Fault classification
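The sketch below (in Python; the paper itself used NeuroSolutions and XLSTAT rather than code) only illustrates how the four steps chain together at inference time. The helper names `feature_fn`, `pca` and `classifier` are placeholders for the components developed in the remainder of this section, not the authors' implementation.

```python
# Minimal pipeline sketch; helper names are placeholders, not the paper's code.
import numpy as np

def diagnose(current_window, feature_fn, pca, classifier):
    """Classify one acquired stator-current window (step 1 supplies the window,
    e.g. 2500 samples captured with a maximum signal frequency of 5 kHz)."""
    features = feature_fn(current_window)               # step 2: 13 statistics
    reduced = pca.transform(features.reshape(1, -1))    # step 3: PCA selection
    return classifier.predict(reduced)[0]               # step 4: H / I / E / B
```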
A. Data Collection and Data Acquisition

In this paper the most common faults, namely stator winding inter-turn short (I), rotor dynamic eccentricity (E) and both of them together (B), are considered.

Fig. 1. General block diagram of the proposed classifier

For experimentation and data generation a specially designed 2 HP, three phase, 4 pole, 415 V, 50 Hz induction motor is used. The experimental set-up is shown in Fig. 2.

Fig. 2. Experimental setup

The load of the motor was changed by adjusting the spring balance and belt. Three AC current probes were used to measure the stator current signals for testing the fault diagnosis system. The maximum frequency of the signal used was 5 kHz and the number of sampled data points was 2500. In the time waveforms of the stator currents shown in Fig. 3, no conspicuous difference exists among the different conditions.

Fig. 3. Experimental waveforms of stator current

B. Feature Extraction

A feature extraction method is needed in order to classify the faults. To classify the different faults, statistical parameters are used; to be precise, sample statistics are calculated from the current data. Overall, thirteen parameters are calculated as the input feature space. The minimum set of statistics examined includes the root mean square (RMS) of the zero-mean signal (which is the standard deviation), the maximum and minimum values, the skewness coefficient and the kurtosis coefficient. Pearson's coefficient of skewness, $g_2$, is defined by

$g_2 = \dfrac{3(\bar{x} - \tilde{x})}{S_x}$   (1)

where $\bar{x}$ denotes the mean, $\tilde{x}$ denotes the median and $S_x$ denotes the sample standard deviation. The sample coefficient of variation $v_r$ is defined by

$v_r = \dfrac{S_x}{\bar{x}}$   (2)

The $r$-th sample moment about the sample mean for a data set is given by

$m_r = \dfrac{\sum_{i=1}^{n}(x_i - \bar{x})^r}{n}$   (3)

$m_2$ describes the spread about the center, $m_3$ refers to the skewness about the center, and $m_4$ indicates how much of the data is massed at the center. The second, third and fourth moments are used to define the sample coefficient of skewness $g_3$ and the sample coefficient of kurtosis $g_4$ as follows:

$g_3 = \dfrac{m_3}{m_2^{3/2}}$   (4)

$g_4 = \dfrac{m_4}{m_2^{2}}$   (5)

The sample covariance between dimensions $j$ and $k$ is defined as

$c_{jk} = \dfrac{\sum_{i=1}^{n}(x_{ij} - \bar{x}_j)(x_{ik} - \bar{x}_k)}{n-1}$   (6)

The ordinary correlation coefficient for dimensions $j$ and $k$, $r_{jk}$, is defined as

$r_{jk} = \dfrac{c_{jk}}{S_j S_k}$   (7)

C. Feature Selection

Before a feature set is fed into a classifier, the most superior features providing dominant fault-related information should be selected from the feature set, and irrelevant or redundant features must be discarded to improve classifier performance and avoid the curse of dimensionality. Here the Principal Component Analysis (PCA) technique is used to select the most superior features from the original feature set. Principal Components (PCs) are computed by the Pearson rule. Fig. 4 shows the eigenvalues, which reflect the quality of the projection from the 13-dimensional space to a lower number of dimensions. A short code sketch of the feature computation and the PCA selection is given after Fig. 4.

Fig. 4. Principal components, eigenvalues and percent variability
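As a concrete illustration of Sections II-B and II-C, the sketch below computes a 13-element statistical feature vector from one current window and reduces it with PCA. The paper does not enumerate its 13 parameters explicitly, so the exact composition below is an assumption built from the statistics named in the text (the per-pair covariance and correlation of eqs. (6)-(7) are left out); keeping five principal components follows the selection reported in Fig. 5.

```python
# Assumed 13-element feature vector (eqs. (1)-(5)) plus PCA selection.
import numpy as np
from sklearn.decomposition import PCA

def current_features(x):
    """x: one stator-current window (e.g. 2500 samples)."""
    x = np.asarray(x, dtype=float)
    mean, median = x.mean(), np.median(x)
    sd = x.std(ddof=1)                                   # RMS of the zero-mean signal
    m2, m3, m4 = (np.mean((x - mean) ** r) for r in (2, 3, 4))
    return np.array([
        mean, median, sd, x.max(), x.min(),
        3.0 * (mean - median) / sd,                      # Pearson skewness, eq. (1)
        sd / mean if mean != 0 else 0.0,                 # coeff. of variation, eq. (2)
        m2, m3, m4,                                      # central moments, eq. (3)
        m3 / m2 ** 1.5,                                  # skewness g3, eq. (4)
        m4 / m2 ** 2,                                    # kurtosis g4, eq. (5)
        x.max() - x.min(),                               # assumed 13th entry
    ])

# X: (n_windows, 13) matrix of feature vectors built from the experiments.
def select_features(X, n_components=5):
    pca = PCA(n_components=n_components).fit(X)
    return pca, pca.transform(X)                         # five PCs, as in Fig. 5
```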
D. Fault Classifier

(1) MLP NN Based Classifier

A simple Multilayer Perceptron (MLP) neural network is proposed as a fault classifier. Four processing elements are used in the output layer for the four motor conditions, namely Healthy, Inter-turn fault, Eccentricity and Both faults. From the results shown in Fig. 5, five PCs are selected as inputs; hence the number of PEs in the input layer is five.

Fig. 5(a). Variation of average MSE on training and CV data with number of PCs as input
Fig. 5(b). Variation of average classification accuracy on test, training and CV data with number of PCs as input

The randomized data are fed to the neural network, which is retrained five times with different random weight initializations so as to remove bias and to ensure true learning and generalization for different hidden layers. This also removes any dependence of the NN's performance on the choice of initial connection weights. It is observed that an MLP with a single hidden layer gives better performance. The number of processing elements (PEs) in the hidden layer is then varied; the network is trained, and the minimum MSE is obtained when 5 PEs are used in the hidden layer, as indicated in Fig. 6.

Fig. 6. Variation of average MSE with number of PEs in the hidden layer

Various transfer functions, namely Tanh, Sigmoid, Linear-tanh, Linear-sigmoid, Softmax, Bias axon and Linear axon, and learning rules, namely Momentum, Conjugate Gradient, Quick Propagation, Delta-Bar-Delta and Step, are verified for training, cross-validation and testing. The minimum MSE and the average classification accuracy on the training and CV data sets are compared; the tanh transfer function with the Momentum learning rule gives the best results. With the above experimentation, the MLP NN classifier is finally designed with the following specifications (a scikit-learn approximation is sketched after this list):

Number of inputs: 5
Number of hidden layers: 1
Number of PEs in the hidden layer: 4
Hidden layer: transfer function tanh; learning rule Momentum; step size 0.6; momentum 0.5
Output layer: transfer function tanh; learning rule Momentum; step size 0.1; momentum 0.5
Number of connection weights: 44
Training time required per epoch per exemplar: 0.0063 ms
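The paper's MLP was built and trained in NeuroSolutions; as a rough stand-in, the sketch below configures a comparable network in scikit-learn. scikit-learn cannot assign separate step sizes to the hidden and output layers and fixes the output activation itself, so this is an approximation of the design above rather than a reproduction of it.

```python
# Approximate scikit-learn counterpart of the MLP specified above.
from sklearn.neural_network import MLPClassifier

mlp = MLPClassifier(
    hidden_layer_sizes=(4,),     # one hidden layer with 4 PEs
    activation="tanh",           # tanh units in the hidden layer
    solver="sgd",                # gradient descent with momentum
    momentum=0.5,
    learning_rate_init=0.1,      # single step size (paper: 0.6 hidden, 0.1 output)
    max_iter=2000,
    random_state=0,
)

# X_pca_*: (n_samples, 5) selected principal components; y_*: labels H/I/E/B
# mlp.fit(X_pca_train, y_train)
# accuracy = mlp.score(X_pca_test, y_test)
```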
(2) SVM NN Based Classifier

The support vector machine (SVM) is a kind of classifier motivated by two concepts. First, transforming data into a high-dimensional space can turn complex problems (with complex decision surfaces) into simpler problems that can use linear discriminant functions. Second, SVMs are motivated by the idea of training on and using only those inputs that are near the decision surface, since they provide the most information about the classification. The approach can be extended to the multi-class case. SVM training always seeks a globally optimized solution and avoids over-fitting, so it has the ability to deal with a large number of features.

Generalized algorithm for the classifier: for N-dimensional space data $x_i$ ($i = 1, \dots, N$), the algorithm can be extended to a network by substituting the inner product of patterns in the input space with a kernel function, leading to the following quadratic optimization problem:

$J(\alpha) = \sum_{i=1}^{N}\alpha_i - \dfrac{1}{2}\sum_{i=1}^{N}\sum_{j=1}^{N}\alpha_i\alpha_j d_i d_j\, G(x_i - x_j, 2\sigma^2)$   (8)

subject to

$\sum_{i=1}^{N} d_i\alpha_i = 0, \qquad \alpha_i \ge 0, \quad i = 1, \dots, N$   (9)

where $G(x, 2\sigma^2)$ represents a Gaussian function, $N$ is the number of samples and the $\alpha_i$ are a set of multipliers (one for each sample). Define

$g(x_i) = d_i\left(\sum_{j=1}^{N} d_j\alpha_j\, G(x_i - x_j, 2\sigma^2) + b\right)$   (10)

and

$M = \min_i g(x_i)$   (11)

and choose a common starting multiplier $\alpha_i$, a learning rate $\eta$ and a small threshold $t$. Then, while $M > t$, a pattern $x_i$ is chosen, the update $\Delta\alpha_i = \eta\,(1 - g(x_i))$ is calculated and the update is performed:

if $\alpha_i(n) + \Delta\alpha_i > 0$: $\quad\alpha_i(n+1) = \alpha_i(n) + \Delta\alpha_i, \qquad b(n+1) = b(n) + d_i\,\Delta\alpha_i$   (12)

if $\alpha_i(n) + \Delta\alpha_i \le 0$: $\quad\alpha_i(n+1) = \alpha_i(n), \qquad b(n+1) = b(n)$   (13)

After adaptation, only some of the $\alpha_i$ are different from zero; these are called the support vectors. It is easy to implement the kernel Adatron algorithm since $g(x_i)$ can be computed locally to each multiplier, provided that the desired response is available in the input file. In fact, the expression for $g(x_i)$ resembles the multiplication of an error with an activation, so it can be included in the framework of neural network learning. The Adatron algorithm essentially prunes the RBF network so that its output for testing is given by

$f(x) = \operatorname{sgn}\left(\sum_{i \in \text{support vectors}} d_i\alpha_i\, G(x - x_i, 2\sigma^2) + b\right)$   (14)

and the cost function in the error criterion is

$J(t) = \dfrac{1}{2}\sum_i \left(d_i(t) - \tanh(y_i(t))\right)^2$   (15)

The number of PCs used as input and the step size are selected by checking the average minimum MSE and the average classification accuracy; the results are shown in Fig. 7.

Fig. 7(a). Variation of average MSE on training and CV data with number of PCs as input
Fig. 7(b). Variation of average classification accuracy on test, training and CV data with number of PCs as input

Finally, the SVM based classifier is designed with the following specifications:

Number of inputs: 5
Step size: 0.7
Time required per epoch per exemplar: 0.693 ms
Number of connection weights: 264

The designed classifier is trained and tested using similar datasets and the results are shown in Fig. 8 and Fig. 9. A short numpy sketch of the kernel Adatron update of eqs. (8)-(14) follows the figure captions.

Fig. 8. Variation of average minimum MSE on test, CV and training data with number of rows shifted (n)
Fig. 9. Variation of average minimum MSE on training and CV with various groups
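For illustration, here is a compact numpy sketch of the kernel Adatron update of eqs. (8)-(14) for a two-class problem with labels $d_i \in \{-1, +1\}$; the four motor conditions would require a one-vs-rest arrangement. The kernel width `sigma`, the learning rate `eta`, the starting multipliers and the number of sweeps are assumptions, and the fixed sweep count stands in for the margin-based stopping rule of eq. (11).

```python
# Sketch of the kernel Adatron update rule (binary case, assumed settings).
import numpy as np

def gaussian_kernel(X, Y, sigma=1.0):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def kernel_adatron(X, d, eta=0.1, sigma=1.0, sweeps=200):
    n = len(d)
    alpha, b = np.ones(n), 0.0               # common starting multipliers and bias
    K = gaussian_kernel(X, X, sigma)
    for _ in range(sweeps):                  # fixed sweeps instead of M > t, eq. (11)
        for i in range(n):
            g_i = d[i] * (np.sum(d * alpha * K[i]) + b)   # eq. (10)
            delta = eta * (1.0 - g_i)                     # update step
            if alpha[i] + delta > 0:                      # eq. (12)
                alpha[i] += delta
                b += d[i] * delta
            # else: eq. (13) -- multiplier and bias left unchanged
    return alpha, b

def predict(X_new, X, d, alpha, b, sigma=1.0):
    K = gaussian_kernel(X_new, X, sigma)
    return np.sign(K @ (d * alpha) + b)                   # eq. (14)
```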
(3) Classification and Regression Trees (CART)

CART induces strictly binary trees through a process of binary recursive partitioning of the feature space of a data set. The first phase is called tree building, and the other is tree pruning. The classification tree is developed using XLSTAT-2009. Various methods, measures and maximum tree depths are checked and the results are shown in Fig. 10. It is observed that the optimum average classification accuracy on the test data and the CV data is 90.91 and 80 percent, respectively.

Fig. 10(a). Variation of average classification accuracy on test and CV data with method and measure of trees
Fig. 10(b). Variation of average classification accuracy on test and CV data with depth of trees

(4) Discriminant Analysis

Discriminant analysis is a technique for classifying a set of observations into predefined classes. The purpose is to determine the class of an observation based on a set of variables known as predictors or input variables. The model is built from a set of observations for which the classes are known. Based on the training set, the technique constructs a set of linear functions of the predictors, known as discriminant functions, of the form $L = b_1 x_1 + b_2 x_2 + \dots + b_n x_n + c$, where the $b$'s are discriminant coefficients, the $x$'s are the input variables or predictors and $c$ is a constant. Discriminant analysis is performed using XLSTAT-2009. Various models are checked and the results are shown in Fig. 11. It is observed that the optimum average classification accuracy on the test data and the CV data is 91.77 and 80 percent, respectively.

Fig. 11. Variation of average classification accuracy on test and CV data with model of DA

III. NOISE SUSTAINABILITY OF CLASSIFIER

Since the proposed classifier is to be used in real time, where measurement noise is anticipated, it is necessary to check the robustness of the classifier to noise. To check the robustness, uniform and Gaussian noise with zero mean and variance varying from 1 to 20% is introduced in the input and output, and the average classification accuracy on the testing (i.e., unseen) data is checked. It is seen that the SVM based classifier is the most robust in the sense that it can sustain both uniform and Gaussian noise with 14% and 20% variance in the input and output, respectively. The results are shown in Table I (G: Gaussian noise, U: uniform noise). A small code sketch of this noise-injection check is given at the end of the paper.

IV. RESULTS AND DISCUSSION

In this paper the authors evaluated the performance of the developed ANN based classifiers for the detection of four conditions of a three phase induction motor and examined the results. The MLP NN and the SVM are optimally designed, and after completion of training the learned networks are tested to detect the different types of faults. The step size is varied in the SVM and a step size of 0.7 is found to be optimum. These results confirm that the proposed feature selection method based on PCA can select the most superior features from the original feature set and is therefore a powerful feature selection method. The proposed classifier is also sufficiently robust to noise, in the sense that it gives satisfactory results for uniform and Gaussian noise with 14% variance in the input and 20% variance in the output. Comparative results are shown in Fig. 12 and Table II.

Fig. 12. Comparative analysis of the various classifiers with respect to average classification accuracy

TABLE II. Comparative results of NN based classifiers
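As a final illustration of the robustness check of Section III, the sketch below perturbs unseen test features with zero-mean Gaussian or uniform noise and re-scores a trained classifier. The paper does not state how the "1 to 20% variance" is scaled, so expressing the noise variance as a fraction of each feature's own variance is an assumption, as are the helper names in the commented usage.

```python
# Assumed noise-injection check: perturb unseen inputs, re-measure accuracy.
import numpy as np

def add_noise(X, level, kind="gaussian", seed=0):
    """level: noise variance as a fraction of each feature's variance (assumed)."""
    rng = np.random.default_rng(seed)
    scale = np.sqrt(level * X.var(axis=0))         # per-feature noise std. dev.
    if kind == "gaussian":
        return X + rng.normal(0.0, scale, size=X.shape)
    half_width = np.sqrt(3.0) * scale              # uniform noise with the same variance
    return X + rng.uniform(-half_width, half_width, size=X.shape)

# for level in (0.01, 0.05, 0.10, 0.14, 0.20):
#     for kind in ("gaussian", "uniform"):
#         acc = classifier.score(add_noise(X_test_pca, level, kind), y_test)
```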