Advanced Digital Signal Processing (Modern Digital Signal Processing)

Chapter 3  Adaptive Filter

3.1 Introduction

Basic Form of the Adaptive Filter
[Figure: the input signal x(n) feeds an adaptive filter with adjustable parameters, producing the output signal y(n); y(n) is subtracted from the desired (supervising) signal d(n) to form the error e(n), which drives the adaptive algorithm.]
- Adaptive linear filter: the adaptive filter is linear.
Wiener Filter & Adaptive Linear Filter
- Wiener filter: h(n), with x(n) = s(n) + v(n); y(n) is an estimation of s(n).
  Optimum criterion: MMSE. The statistics of s(n) and v(n) are known; h(n) is nonadjustable. The signals are stationary random signals.
- Adaptive linear filter: w(n), with input x(n); y(n) is an estimation of d(n).
  Optimum criteria: MMSE or others. The statistics of s(n) and v(n) are unknown, but a supervising signal d(n) or error e(n) is available. The signals may be deterministic, stationary, or non-stationary random signals.
The Classes of Adaptive Linear Filter
- By the length of the linear filter:
  FIR: always stable; good convergence properties; possibly linear-phase.
  IIR: probably less estimation error (residual) than FIR.
- By the structure of the linear filter:
  Transversal.
  Lattice: fast convergence; insensitive to finite word-length effects; modular structure.
- By the adaptive algorithm:
  Least mean square (LMS); recursive least square (RLS); other variants of LMS or RLS.
Only the transversal adaptive FIR filters are discussed.
Performances of the Adaptive Filter
- Convergence rate of the adaptive algorithm.
- Misadjustment.
- Computational complexity of the adaptive algorithm.
- Expected properties of the adaptive filter structure: high modularity, parallelism, and concurrency (suitable for implementation with VLSI).
- Numerical stability and numerical accuracy.
- Robustness: the adaptive algorithm is insensitive to the initial values.
3.2 Transversal Adaptive FIR Filter
- Multiple-input adaptive linear combiner.
- Single-input adaptive FIR filter.
- Optimum solution (MMSE) of the adaptive FIR filter: the solution for the FIR Wiener filter.
3.3 MSE Performance Surface

MSE Performance Function
- A quadratic function with a single global optimum.
- One weight: a parabola. Two weights: a paraboloid. More than two weights: a hyper-paraboloid.
- L+1 weights: a hyper-paraboloid in the (L+2)-dimensional space.
Weight Deviation Vector
- Weight deviation vector: v(n) = w(n) - w*, the deviation of the weight vector w(n) from the optimal weight vector w*.
- Any departure of w(n) from w* causes an excess mean-square error with a quadratic form.
- The performance function in the v(n) coordinate system: the v(n) coordinate system is a shifting of the w(n) coordinate system.
Principal Axes Coordinate System
- The principal axes coordinate system is a rotation of the v(n) coordinate system.
- The performance function can also be written in the principal axes coordinate system.
- Three views of the same surface: the natural (w) coordinate system, the shifted (v) coordinate system, and the principal axes coordinate system.
Performance Surface

Searching the Performance Surface
The objective of adaptive algorithms is to search for the single optimum point of the performance surface from an arbitrary start point.
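The search can be made concrete with a small numerical sketch. The two-weight example below is invented for illustration: R, p, and the desired-signal power are assumed values, not taken from the slides.

```python
import numpy as np

# Hypothetical two-weight example (R, p, sigma_d2 are assumed demo values).
R = np.array([[1.0, 0.5],
              [0.5, 1.0]])      # input autocorrelation matrix
p = np.array([0.7, 0.3])         # cross-correlation of input and desired signal
sigma_d2 = 1.0                   # power of the desired signal d(n)

def mse(w):
    # Quadratic performance function: xi(w) = sigma_d^2 - 2 p^T w + w^T R w
    return sigma_d2 - 2 * p @ w + w @ R @ w

# The single global optimum (the Wiener solution) is w* = R^{-1} p.
w_star = np.linalg.solve(R, p)
```

Every start point lies on the same paraboloid, and sliding down it toward w_star is exactly what the adaptive algorithms of the following sections do iteratively.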
3.4 LMS Adaptive Algorithm

The Gradient of the Performance Surface
- The gradient can be expressed in the natural coordinate system, the shifted coordinate system, or the principal axes coordinate system.
- μ is the step size or adaptive constant. It governs the stability of the algorithm, the misadjustment, and the rate of convergence.
Steepest Descent Method (B. Widrow, 1959)
- Basic principle: search for the optimum point along the negative gradient direction; such a direction is the one with the steepest descent of the performance function. The iteration can be written in the natural, shifted, or principal axes coordinate system.
- Sufficient condition for convergence: along the k-th principal axis the weight deviation is multiplied by the geometric ratio (1 - 2μλ_k) at each iteration; requiring |1 - 2μλ_k| < 1 for every eigenvalue λ_k of the input autocorrelation matrix R gives the sufficient condition 0 < μ < 1/λ_max.
- Transition process: the convergence takes place independently along each of the principal axes. As the iterative process advances, the rate of convergence on each axis is governed by a unique geometric ratio determined by the corresponding eigenvalue: a ratio of magnitude 1 or more is unstable; a negative ratio is stable but with damped vibration; a ratio between 0 and 1 is stable and converges gradually.
- The μ should be a balance between the stability and the convergence rate.
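A minimal sketch of the steepest descent iteration, using assumed values for R and p and the bound 0 < μ < 1/λ_max derived above (Widrow's convention, in which the gradient of the MSE surface is 2(Rw - p)):

```python
import numpy as np

# Assumed statistics for the demo (not from a measured signal).
R = np.array([[2.0, 1.0],
              [1.0, 2.0]])
p = np.array([1.0, 0.5])
w_star = np.linalg.solve(R, p)     # optimum the iteration should reach

lam_max = np.max(np.linalg.eigvalsh(R))
mu = 0.9 / lam_max                 # inside the sufficient condition 0 < mu < 1/lam_max

w = np.zeros(2)                    # arbitrary start point
for _ in range(200):
    grad = 2 * (R @ w - p)         # exact gradient: requires KNOWN statistics
    w = w - mu * grad              # step along the negative gradient direction
```

Each principal axis converges with its own geometric ratio 1 - 2μλ_k: with μ = 0.9/λ_max the eigenvalues 3 and 1 give ratios -0.8 (stable with damped vibration) and 0.4 (gradual convergence).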
Limitations of the Steepest Descent Algorithm
- The modification of the weights in each iteration is proportional to the gradient. When the weight deviation is very small, the weight modification in each iteration is also very small, hence the convergence rate of the steepest descent algorithm is slow.
- The steepest descent algorithm is not applicable if the statistics of the random signal are unknown.
Least Mean Square (LMS) Algorithm
- Estimation of the gradient: the instantaneous value -2e(n)x(n) replaces the true gradient; it is an unbiased estimation.
- The LMS algorithm: w(n+1) = w(n) + 2μe(n)x(n).
- Learning curves of a weight: smooth for a deterministic signal, fluctuating for a stationary random signal; the average of 50 learning curves approaches the smooth curve.
- Misadjustment: the normalized excess mean square error.
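The LMS recursion can be sketched as follows; the 4-tap system h_true generating d(n), the white input, and the step size are all invented for the demo:

```python
import numpy as np

rng = np.random.default_rng(0)

h_true = np.array([0.8, -0.4, 0.2, 0.1])   # hypothetical system generating d(n)
L = len(h_true)
N = 5000
mu = 0.01                                  # step size: speed vs misadjustment tradeoff

x = rng.standard_normal(N)                 # stationary white input
d = np.convolve(x, h_true)[:N]             # desired signal

w = np.zeros(L)
for n in range(L - 1, N):
    xv = x[n - L + 1:n + 1][::-1]          # [x(n), x(n-1), ..., x(n-L+1)]
    e = d[n] - w @ xv                      # error e(n) = d(n) - y(n)
    w = w + 2 * mu * e * xv                # LMS: w(n+1) = w(n) + 2*mu*e(n)*x(n)
```

The instantaneous product e(n)x(n) replaces the true gradient; averaged over many runs, the learning curves approach the smooth steepest descent behaviour.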
Hence, with some rational assumptions, it can be justified that after the transition process the misadjustment is M = μ·tr(R), i.e., the misadjustment is proportional to the step size μ.
The μ should be a tradeoff between the rate of convergence and the misadjustment.
Some variable step-size algorithms, in which the μ is reduced gradually along with the transition process, may sometimes be adopted to accommodate the requirements for both the convergence rate and the misadjustment.
Comments on the LMS Algorithm
- Simplicity and a low computation load.
- Relatively slow convergence rate and a long transition process.
- The BP algorithm of the feed-forward neural network is a generalization of the LMS algorithm.
3.5 RLS Adaptive Algorithm

Least Square (LS) Estimation
- Transversal FIR filter.
- Optimum criterion: an accumulated error function, J(n) = Σ_{i=0}^{n} λ^{n-i} e²(i), where λ is the forgetting factor, 0 < λ ≤ 1; λ < 1 is suitable for nonstationary signals.
- LS estimation: let R(n) = Σ_{i=0}^{n} λ^{n-i} x(i)xᵀ(i) and p(n) = Σ_{i=0}^{n} λ^{n-i} d(i)x(i). Then the optimum weight vector satisfies R(n)w(n) = p(n), the Wiener-Hopf equation obtained by substituting the expectation with the weighted sum. It is an unbiased and consistent estimation when the observation noise is white.
- It is cumbersome and almost impractical to calculate the M×M inverse matrix R⁻¹(n) at each instant n, hence such an LS estimation is hardly used in real-time applications.
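The cost complaint can be made concrete. In the invented identification problem below, a direct (non-recursive) LS solver must redo an M×M solve at every single instant; the tiny diagonal initializer is an assumption added so the early, rank-deficient solves exist:

```python
import numpy as np

rng = np.random.default_rng(1)
M, N, lam = 3, 400, 0.99                    # filter length, samples, forgetting factor
h_true = np.array([0.5, -0.3, 0.2])         # hypothetical unknown system
x = rng.standard_normal(N)
d = np.convolve(x, h_true)[:N]

R = 1e-6 * np.eye(M)                        # tiny regularizer so early solves exist
p = np.zeros(M)
for n in range(M - 1, N):
    xv = x[n - M + 1:n + 1][::-1]
    R = lam * R + np.outer(xv, xv)          # R(n) = lam*R(n-1) + x(n)x(n)^T
    p = lam * p + d[n] * xv                 # p(n) = lam*p(n-1) + d(n)x(n)
    w = np.linalg.solve(R, p)               # an O(M^3) solve at EVERY instant n
```

The per-sample O(M³) solve inside the loop is exactly what the recursive formulation removes.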
Recursive Least Square (RLS) Algorithm
- The matrix inversion lemma: let A and B be two positive definite M×M matrices related by A = B⁻¹ + C·D⁻¹·Cᵀ, where D is an N×N positive definite matrix and C is an M×N matrix. Then the inverse matrix of A can be expressed as A⁻¹ = B - B·C·(D + Cᵀ·B·C)⁻¹·Cᵀ·B.
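A quick numerical check of the lemma; B, C, and D below are arbitrary assumed matrices of the stated types:

```python
import numpy as np

rng = np.random.default_rng(2)
M, N = 4, 2
B = 2.0 * np.eye(M)                         # M x M positive definite
D = np.eye(N)                               # N x N positive definite
C = rng.standard_normal((M, N))             # M x N

# A = B^{-1} + C D^{-1} C^T
A = np.linalg.inv(B) + C @ np.linalg.inv(D) @ C.T
# Lemma: A^{-1} = B - B C (D + C^T B C)^{-1} C^T B
A_inv = B - B @ C @ np.linalg.inv(D + C.T @ B @ C) @ C.T @ B
```

In RLS the lemma is applied with N = 1, so the inner inverse collapses to a scalar division.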
- A recursive solution for R⁻¹(n) follows from the lemma, and with it a recursive solution for w(n).
- RLS algorithm: initialization; gain vector; prediction error; weight updating; T(n) updating, where T(n) denotes R⁻¹(n).
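Putting the steps together, a compact sketch of one common form of the recursion; the signal, filter length, λ, and the initialization constant δ are assumed demo values:

```python
import numpy as np

rng = np.random.default_rng(3)
M, N, lam, delta = 4, 300, 0.99, 100.0
h_true = np.array([0.6, -0.3, 0.2, -0.1])   # hypothetical unknown system
x = rng.standard_normal(N)
d = np.convolve(x, h_true)[:N]

# Initialization: w(0) = 0, T(0) = delta * I (T(n) stands for R^{-1}(n))
w = np.zeros(M)
T = delta * np.eye(M)

for n in range(M - 1, N):
    xv = x[n - M + 1:n + 1][::-1]
    k = T @ xv / (lam + xv @ T @ xv)        # gain vector
    e = d[n] - w @ xv                       # a priori prediction error
    w = w + k * e                           # weight updating
    T = (T - np.outer(k, xv @ T)) / lam     # T(n) updating via the inversion lemma
```

Each iteration costs O(M²) instead of the O(M³) solve of plain LS estimation.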
Comparison between the LMS and RLS Algorithms
- The LMS is simpler and has a lower computational complexity than the RLS.
- The RLS usually converges more quickly than the LMS.
- The RLS is more suitable for non-stationary random signals.
3.6 Applications of Adaptive Filter

Adaptive Modeling (System Identification)
- Noise-free case: x(n) is a known input signal, often pseudorandom (white) noise exerted on the unknown system deliberately; d(n) is the desired signal, i.e. the output of the unknown system.
- Noise-included case: N(n) is observation noise, uncorrelated with x(n). This makes it possible to obtain a system model online.
- Model: the adaptive filter.
  Linear model: an FIR or IIR filter with the corresponding adaptive algorithms.
  Nonlinear model: e.g. a feed-forward neural network (with the BP algorithm).
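A sketch of the noise-included case, using the LMS filter of Section 3.4 as the linear model; the plant, noise level, and step size are invented for the demo:

```python
import numpy as np

rng = np.random.default_rng(4)
h_true = np.array([1.0, 0.5, -0.25])        # hypothetical unknown plant
M, N, mu = 3, 20000, 0.005

x = rng.standard_normal(N)                  # pseudorandom white probe signal
noise = 0.1 * rng.standard_normal(N)        # observation noise N(n), uncorrelated with x
d = np.convolve(x, h_true)[:N] + noise      # measured (noisy) plant output

w = np.zeros(M)
for n in range(M - 1, N):
    xv = x[n - M + 1:n + 1][::-1]
    e = d[n] - w @ xv
    w = w + 2 * mu * e * xv                 # LMS model update
```

Because N(n) is uncorrelated with x(n), the noise perturbs the weights but not their mean, so the model still converges to h_true on average.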
Adaptive Inverse Filtering (Inverse Modeling)
- Delayed inverse modeling. The delay makes the inverse system H⁻¹(z) causal. If H(z) is not minimum phase (so that H⁻¹(z) is unstable), then an FIR adaptive filter can be used to approximate the inverse system.
- Adaptive channel equalizer: p(n) is a pilot signal (training signal), uncorrelated with x(n) and known by the receiver; its delayed version serves as the desired signal for the adaptive equalizer.
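A sketch of the pilot-trained equalizer: an LMS FIR filter learns a delayed inverse of an assumed channel H(z) = 1 + 0.5z⁻¹; the channel, delay, filter length, and step size are all invented for the demo:

```python
import numpy as np

rng = np.random.default_rng(5)
channel = np.array([1.0, 0.5])              # hypothetical channel H(z) = 1 + 0.5 z^-1
M, N, delay, mu = 8, 20000, 4, 0.002

p_sig = rng.choice([-1.0, 1.0], size=N)     # pilot signal p(n), known at the receiver
x = np.convolve(p_sig, channel)[:N]         # received (distorted) signal

w = np.zeros(M)
for n in range(M - 1, N):
    xv = x[n - M + 1:n + 1][::-1]
    e = p_sig[n - delay] - w @ xv           # desired signal: the DELAYED pilot
    w = w + 2 * mu * e * xv

combined = np.convolve(channel, w)          # should approximate a pure delay z^-delay
```

The cascade channel*equalizer approaches a unit spike at the chosen delay; a longer delay generally eases the causality constraint on the inverse.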
Adaptive Cancelling
- Basic principle [Figure: the reference signal x(n) feeds the adaptive filter, whose output, the secondary signal y(n), is subtracted from the primary signal d(n) to form the error or residual signal e(n), which drives the adaptive algorithm.]
- The reference signal x(n) should be correlated with all or some parts of the primary signal d(n). Only the correlated parts of x(n) and d(n) can be cancelled.
- Applications:
  Cancelling the maternal heartbeat in fetal electrocardiography: s(n) is the fetal electrocardiogram, x(n) the maternal one.
  Cancelling noise in speech signals.
  Adaptive echo cancellation in long-distance telephony.
  Adaptive notch filter (cancelling a single-frequency interference).
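A minimal cancelling sketch: the primary signal is an assumed slow sinusoid s(n) (standing in for the signal of interest) plus interference that reaches the primary sensor through an invented 3-tap path from the reference x(n):

```python
import numpy as np

rng = np.random.default_rng(6)
N, M, mu = 20000, 4, 0.002

t = np.arange(N)
s = np.sin(2 * np.pi * 0.01 * t)                 # signal of interest, uncorrelated with x
x = rng.standard_normal(N)                       # reference: the interference source
interf = np.convolve(x, [0.8, 0.4, -0.2])[:N]    # interference at the primary sensor
d = s + interf                                   # primary signal d(n)

w = np.zeros(M)
e_hist = np.zeros(N)
for n in range(M - 1, N):
    xv = x[n - M + 1:n + 1][::-1]
    y = w @ xv                                   # estimate of the correlated part of d(n)
    e = d[n] - y                                 # residual: the cleaned signal
    w = w + 2 * mu * e * xv
    e_hist[n] = e
```

Only the part of d(n) correlated with x(n) is removed, so the residual e(n) converges to s(n).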
Active Noise Control (ANC)
- ANC system [Figure: the reference input x(n) drives the adaptive filter W(z); its output, the secondary noise y(n), passes through the secondary sound path C(z) before combining with the primary noise d(n) to form the residual noise e(n).]
- Filtered-x LMS algorithm: in an ANC system, the secondary sound path between the adaptive filter and the combiner makes x(n)e(n) an incorrect estimation of the performance surface gradient, which will probably lead the LMS algorithm to be divergent. The filtered-x LMS algorithm rectifies the gradient estimation by filtering x(n) with an estimation of the secondary sound path.
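A sketch of the filtered-x LMS loop. The secondary path C(z), the primary path, and all lengths are invented, and the code assumes a perfect estimate of C(z) is available for filtering the reference:

```python
import numpy as np

rng = np.random.default_rng(7)
N, M, mu = 30000, 8, 0.001
c = np.array([0.0, 0.9, 0.3])                  # hypothetical secondary sound path C(z)

x = rng.standard_normal(N)                     # reference noise
d = np.convolve(x, [0.0, 0.5, -0.3, 0.2, 0.1])[:N]   # primary noise at the error sensor

xf = np.convolve(x, c)[:N]                     # x(n) filtered by the ESTIMATE of C(z)
w = np.zeros(M)
y = np.zeros(N)                                # adaptive filter output (secondary source)
e_hist = np.zeros(N)

for n in range(M - 1, N):
    xv = x[n - M + 1:n + 1][::-1]
    y[n] = w @ xv
    yc = c @ y[n - 2:n + 1][::-1]              # secondary noise after the REAL path C(z)
    e = d[n] - yc                              # residual noise at the error sensor
    xfv = xf[n - M + 1:n + 1][::-1]
    w = w + 2 * mu * e * xfv                   # update with the FILTERED reference
    e_hist[n] = e
```

Updating with x(n) itself would mis-estimate the gradient because of the delay inside C(z); filtering x(n) through the path estimate restores a consistent gradient direction.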