By applying the Schur determinant formula [8], [11],

$$|\mathbf{J}_\theta| = \begin{vmatrix} \mathbf{J}_a & \mathbf{J}_{ab} \\ \mathbf{J}_{ba} & \mathbf{J}_b \end{vmatrix} = |\mathbf{J}_a|\,|\mathbf{J}_b - \mathbf{J}_{ba}\mathbf{J}_a^{-1}\mathbf{J}_{ab}| = |\mathbf{J}_b|\,|\mathbf{J}_a - \mathbf{J}_{ab}\mathbf{J}_b^{-1}\mathbf{J}_{ba}|,$$

along with $|\mathbf{J}^{-1}| = |\mathbf{J}|^{-1}$, to (5)–(7), we can now state the CRB analogs of the chain rule (3),

$$\mathrm{CRB}(a, b) = \mathrm{CRB}(a \,|\, b)\,\mathrm{CRB}(b), \qquad (8)$$

and of Bayes' rule (4),

$$\mathrm{CRB}(a \,|\, b) = \frac{\mathrm{CRB}(b \,|\, a)\,\mathrm{CRB}(a)}{\mathrm{CRB}(b)}. \qquad (9)$$
The results are, of course, symmetric, i.e., one can interchange $a$ and $b$.

From (8) we see that the joint error bound for $a$ and $b$ equals the error bound for $a$ when $b$ is known, multiplied by the error bound for $b$. More interestingly, (9) tells us that the error bound for $a$ is equal to the bound for $a$ when $b$ is known, multiplied by a factor, viz. $\mathrm{CRB}(b)/\mathrm{CRB}(b \,|\, a) \geq 1$, that quantifies the influence of $b$ on one's ability to estimate $a$.
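These decompositions are easy to check numerically. The sketch below assumes that (5)–(7) define the scalar CRB measures as determinants of the corresponding CRB blocks, consistent with the derivation above (e.g., $\mathrm{CRB}(a \,|\, b) = |\mathbf{J}_a^{-1}|$ and $\mathrm{CRB}(a) = |(\mathbf{J}_a - \mathbf{J}_{ab}\mathbf{J}_b^{-1}\mathbf{J}_{ba})^{-1}|$); the dimensions and random seed are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
det, inv = np.linalg.det, np.linalg.inv

# Random symmetric positive-definite FIM partitioned over (a, b),
# with dim(a) = 2 and dim(b) = 3.
G = rng.standard_normal((10, 5))
J = G.T @ G
ka = 2
Ja, Jab = J[:ka, :ka], J[:ka, ka:]
Jba, Jb = J[ka:, :ka], J[ka:, ka:]

CRB_ab    = det(inv(J))                        # CRB(a, b): joint bound
CRB_a_g_b = det(inv(Ja))                       # CRB(a | b): b known
CRB_b_g_a = det(inv(Jb))                       # CRB(b | a): a known
CRB_a = det(inv(Ja - Jab @ inv(Jb) @ Jba))     # CRB(a): b unknown
CRB_b = det(inv(Jb - Jba @ inv(Ja) @ Jab))     # CRB(b): a unknown

print(np.isclose(CRB_ab, CRB_a_g_b * CRB_b))             # chain rule (8)
print(np.isclose(CRB_a_g_b, CRB_b_g_a * CRB_a / CRB_b))  # Bayes' rule (9)
print(CRB_b / CRB_b_g_a >= 1)                            # influence factor
```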
REMARK 1
The rules can be applied to cases with any number of additional parameters, besides $a$ and $b$. Consider, for instance, the case of $a$, $b$, and $c$, where $c$ is an unknown nuisance parameter. Then applying the chain rule twice yields

$$\mathrm{CRB}(a, b, c) = \mathrm{CRB}(c \,|\, a, b)\,\mathrm{CRB}(a \,|\, b)\,\mathrm{CRB}(b) = \mathrm{CRB}(c \,|\, a, b)\,\mathrm{CRB}(b \,|\, a)\,\mathrm{CRB}(a), \qquad (10)$$

where the factors without $c$ signify that the nuisance parameter is unknown. Combining the two expressions in (10) yields the analog of Bayes' rule (9) for any number of additional parameters.
The joint error bound for a set of parameters $a_1, a_2, a_3, \ldots$ can be similarly decomposed by a recursive application of the chain rule to analyze their interdependency and its impact on estimation; a sketch of this recursion follows below.
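The following sketch verifies both orderings in (10) on a random three-block FIM. The helper `crb` (a name introduced here purely for illustration) deletes the rows and columns of the known parameters before inverting, so that all remaining parameters act as unknown nuisances:

```python
import numpy as np

rng = np.random.default_rng(1)
det, inv = np.linalg.det, np.linalg.inv

# Random positive-definite FIM for theta = (a, b, c), two entries each.
G = rng.standard_normal((12, 6))
J = G.T @ G
idx = {'a': np.arange(0, 2), 'b': np.arange(2, 4), 'c': np.arange(4, 6)}

def crb(target, known=''):
    """Determinant-based CRB of `target` when the parameters in `known`
    are given; the remaining parameters are unknown nuisances."""
    free = [p for p in 'abc' if p not in known]
    rows = np.concatenate([idx[p] for p in free])
    C = inv(J[np.ix_(rows, rows)])               # CRB matrix of free params
    labels = np.concatenate([[p] * 2 for p in free])
    keep = np.flatnonzero(np.isin(labels, list(target)))
    return det(C[np.ix_(keep, keep)])

joint = crb('abc')
d1 = crb('c', known='ab') * crb('a', known='b') * crb('b')
d2 = crb('c', known='ab') * crb('b', known='a') * crb('a')
print(np.isclose(joint, d1), np.isclose(joint, d2))  # True True
```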
REMARK 2
The CRB analog of Bayes' rule (9) generalizes the result in [7], which concerns only scalar parameters $a$ and $b$ amid a vector of nuisance parameters $c$. Our proof of (9) is also more direct than the one in [7].
REMARK 3
These results are also applicable to the posterior, or Bayesian, CRB (PCRB), in which $\theta$ is modeled as a random variable with a prior distribution. The PCRB is valid for the entire class of estimators $\hat{\theta}$, whether biased or not [2]. The posterior Cramér–Rao inequality is then $\mathbf{P}_{\hat{\theta}} \succeq \bar{\mathbf{J}}^{-1}$, where

$$\bar{\mathbf{J}} \triangleq \mathbb{E}\big[-\partial_\theta^2 \ln p(\mathbf{y}, \theta)\big]$$

is the Bayesian Fisher information matrix, $p(\mathbf{y}, \theta)$ is the joint pdf, and the expectation is with respect to this pdf. Letting $\theta = [a^\top\ b^\top]^\top$, the matrix can be partitioned correspondingly,

$$\bar{\mathbf{J}} = \begin{bmatrix} \bar{\mathbf{J}}_a & \bar{\mathbf{J}}_{ab} \\ \bar{\mathbf{J}}_{ba} & \bar{\mathbf{J}}_b \end{bmatrix},$$

and thereby the results (8)–(10) can be applied to the PCRB as well.
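As a concrete PCRB instance, consider a linear Gaussian model $\mathbf{y} = \mathbf{H}\theta + \mathbf{w}$ with $\mathbf{w} \sim \mathcal{N}(\mathbf{0}, v\mathbf{I})$ and prior $\theta \sim \mathcal{N}(\mathbf{0}, \boldsymbol{\Sigma})$, a standard textbook case chosen here for illustration, for which the Bayesian FIM reduces to $\bar{\mathbf{J}} = \mathbf{H}^\top\mathbf{H}/v + \boldsymbol{\Sigma}^{-1}$. The decompositions then carry over verbatim:

```python
import numpy as np

rng = np.random.default_rng(2)
det, inv = np.linalg.det, np.linalg.inv

# Linear Gaussian model y = H theta + w, w ~ N(0, vI), prior theta ~ N(0, Sigma):
# the Bayesian FIM is the data FIM plus the prior information.
n, ka, kb = 20, 2, 3
H = rng.standard_normal((n, ka + kb))
v = 0.5
L = rng.standard_normal((ka + kb, ka + kb))
Sigma = L @ L.T + np.eye(ka + kb)              # a valid prior covariance
Jbar = H.T @ H / v + inv(Sigma)

# Partition over theta = [a^T b^T]^T and reuse the chain rule (8).
Ja, Jab = Jbar[:ka, :ka], Jbar[:ka, ka:]
Jba, Jb = Jbar[ka:, :ka], Jbar[ka:, ka:]
PCRB_ab    = det(inv(Jbar))
PCRB_a_g_b = det(inv(Ja))
PCRB_b     = det(inv(Jb - Jba @ inv(Ja) @ Jab))
print(np.isclose(PCRB_ab, PCRB_a_g_b * PCRB_b))   # True
```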
EXAMPLES
Next, we illustrate via two examples how a decomposition like (9) can be used for analysis. The examples show that, by quantifying the impact of nuisance parameters, it is possible to study the trade-off between obtaining them through independent side information and estimating them jointly with the parameters of interest.
LINEAR MIXED MODEL
Consider a linear model $\mathbf{y} = \mathbf{A}\mathbf{x} + \mathbf{B}\mathbf{z} + \mathbf{w} \in \mathbb{R}^n$, where $\mathbf{w}$ is Gaussian noise with covariance matrix $v\mathbf{I}$, and $\mathbf{x} \in \mathbb{R}^{k_x}$ and $\mathbf{z} \in \mathbb{R}^{k_z}$ are unknown parameters. The matrices are known and $\operatorname{rank}([\mathbf{A}\ \mathbf{B}]) = k_x + k_z < n$, which implies that the parameters $\mathbf{x}$ and $\mathbf{z}$ are embedded into two distinct range spaces, $\mathcal{R}(\mathbf{A})$ and $\mathcal{R}(\mathbf{B})$, respectively. Here $\mathcal{R}(\mathbf{A})$ denotes the linear subspace spanned by the columns of $\mathbf{A}$. Under these conditions the joint Fisher information matrix equals [9]

$$\mathbf{J}_\theta = \begin{bmatrix} \mathbf{J}_x & \mathbf{J}_{xz} & \mathbf{J}_{xv} \\ \mathbf{J}_{zx} & \mathbf{J}_z & \mathbf{J}_{zv} \\ \mathbf{J}_{vx} & \mathbf{J}_{vz} & \mathbf{J}_v \end{bmatrix} = \frac{1}{v}\begin{bmatrix} \mathbf{A}^\top\mathbf{A} & \mathbf{A}^\top\mathbf{B} & \mathbf{0} \\ \mathbf{B}^\top\mathbf{A} & \mathbf{B}^\top\mathbf{B} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} & \frac{n}{2v} \end{bmatrix}.$$
From this expression, we see that the bound for $v$ is independent of that for $\mathbf{x}$ and $\mathbf{z}$. That is, $\mathrm{CRB}(v, \mathbf{x}, \mathbf{z}) = \mathrm{CRB}(v)\,\mathrm{CRB}(\mathbf{x}, \mathbf{z})$. This is a CRB analog of the independence of random variables.
Furthermore, we obtain

$$\mathrm{CRB}(\mathbf{z} \,|\, \mathbf{x}) = |\mathbf{J}_z^{-1}| = |(v^{-1}\mathbf{B}^\top\mathbf{B})^{-1}| = v^{k_z}\,|\mathbf{B}^\top\mathbf{B}|^{-1}$$

and

$$\mathrm{CRB}(\mathbf{z}) = |(\mathbf{J}_z - \mathbf{J}_{zx}\mathbf{J}_x^{-1}\mathbf{J}_{xz})^{-1}| = |(v^{-1}\mathbf{B}^\top\mathbf{B} - v^{-1}\mathbf{B}^\top\mathbf{A}(\mathbf{A}^\top\mathbf{A})^{-1}\mathbf{A}^\top\mathbf{B})^{-1}| = v^{k_z}\,|\mathbf{B}^\top\boldsymbol{\Pi}_A^{\perp}\mathbf{B}|^{-1},$$

where $\boldsymbol{\Pi}_A^{\perp} = \mathbf{I} - \mathbf{A}(\mathbf{A}^\top\mathbf{A})^{-1}\mathbf{A}^\top$ is the projector onto the orthogonal complement of $\mathcal{R}(\mathbf{A})$.
The increase in the error bound for $\mathbf{x}$ due to the lack of information about $\mathbf{z}$ can now be quantified using (9):

$$\mathrm{CRB}(\mathbf{x}) = \frac{|\mathbf{B}^\top\mathbf{B}|}{|\mathbf{B}^\top\boldsymbol{\Pi}_A^{\perp}\mathbf{B}|}\,\mathrm{CRB}(\mathbf{x} \,|\, \mathbf{z}), \qquad (11)$$

where the factor $|\mathbf{B}^\top\boldsymbol{\Pi}_A^{\perp}\mathbf{B}|$ measures the alignment of $\mathcal{R}(\mathbf{A})$ and $\mathcal{R}(\mathbf{B})$. When the range spaces are orthogonal we have that $|\mathbf{B}^\top\boldsymbol{\Pi}_A^{\perp}\mathbf{B}| = |\mathbf{B}^\top\mathbf{B}|$, and by (11) the bound for $\mathbf{x}$ is unaffected by one's ignorance about $\mathbf{z}$. In scenarios where it is possible to obtain $\mathbf{z}$ through additional side information or calibration instead of estimation, the cost can be weighed against the reduction of the error bound for $\mathbf{x}$ by the given factor $|\mathbf{B}^\top\mathbf{B}|/|\mathbf{B}^\top\boldsymbol{\Pi}_A^{\perp}\mathbf{B}|$.
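A short numerical sketch, with arbitrary random $\mathbf{A}$, $\mathbf{B}$ and dimensions, confirms (11) and the orthogonal special case:

```python
import numpy as np

rng = np.random.default_rng(3)
det, inv = np.linalg.det, np.linalg.inv

n, kx, kz, v = 50, 3, 2, 0.1
A = rng.standard_normal((n, kx))
B = rng.standard_normal((n, kz))

P_A = np.eye(n) - A @ inv(A.T @ A) @ A.T   # projector onto R(A)-perp
P_B = np.eye(n) - B @ inv(B.T @ B) @ B.T   # projector onto R(B)-perp

CRB_x_g_z = det(inv(A.T @ A / v))          # CRB(x|z): z known
CRB_x = det(inv(A.T @ P_B @ A / v))        # CRB(x): z unknown (Schur complement)

factor = det(B.T @ B) / det(B.T @ P_A @ B) # alignment factor in (11)
print(np.isclose(CRB_x, factor * CRB_x_g_z))  # True

# Orthogonal range spaces: the factor collapses to 1.
Q, _ = np.linalg.qr(rng.standard_normal((n, kx + kz)))
A2, B2 = Q[:, :kx], Q[:, kx:]              # R(A2) orthogonal to R(B2)
P_A2 = np.eye(n) - A2 @ A2.T
print(np.isclose(det(B2.T @ B2) / det(B2.T @ P_A2 @ B2), 1.0))  # True
```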
This example has illustrated the interdependencies between the unknown parameters $\mathbf{x}$, $\mathbf{z}$, and $v$. Next we consider an example where the unknown parameters become asymptotically independent as the number of samples $n$ grows large.
SINE-WAVE FITTING
Sine-wave fitting is a problem that arises in system testing, e.g., of waveform recorders, and IEEE Standard 1057 formalizes procedures to do so (see [12] and references therein).

Consider $n$ uniform samples of a sinusoid in noise, $y(k) = \alpha\sin(\omega k + \varphi) + C + w(k)$, where $w(k)$ is a Gaussian white noise process with variance $v$ and $k = 0, 1, \ldots, n-1$. The amplitude $\alpha$ and phase $\varphi$ of the sinusoidal signal, along with the offset $C$, are of interest. In certain cases, the frequency $\omega$ of the test signal may be obtained separately from the estimation of $\alpha$, $\varphi$, and $C$. For simplicity, we first consider an alternative parameterization of the sinusoid: $\alpha\sin(\omega k + \varphi) = A\cos(\omega k) + B\sin(\omega k)$, where $A = \alpha\sin(\varphi)$ and $B = \alpha\cos(\varphi)$. The parameters are $\theta = [A\ B\ C\ \omega\ v]^\top$.
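Numerically, the FIM of this parameterization follows from the Jacobian of the mean, and the factor from (9) quantifies how much not knowing $A$, $B$, and $C$ inflates the frequency bound. In the sketch below the signal values and sample sizes are arbitrary assumptions, and the $v$-block of the FIM, which again decouples, is omitted:

```python
import numpy as np

inv = np.linalg.inv

# Arbitrary test values for y(k) = A cos(wk) + B sin(wk) + C + w(k).
A, B, C, w, v = 1.0, 0.5, 0.2, 0.3, 0.01

for n in (50, 500, 5000):
    k = np.arange(n)
    # Jacobian of the mean with respect to [A, B, C, omega].
    G = np.column_stack([
        np.cos(w * k),                                   # ds/dA
        np.sin(w * k),                                   # ds/dB
        np.ones(n),                                      # ds/dC
        k * (-A * np.sin(w * k) + B * np.cos(w * k)),    # ds/d(omega)
    ])
    J = G.T @ G / v                  # FIM of [A, B, C, omega]

    # Inflation factor CRB(omega)/CRB(omega | A, B, C) >= 1 from (9).
    schur = J[3, 3] - J[3, :3] @ inv(J[:3, :3]) @ J[:3, 3]
    print(n, J[3, 3] / schur)
```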