IEEE SIGNAL PROCESSING MAGAZINE [153] MARCH 2015
■ Low-rank tensor approximation via a rank-2 CPD was used to estimate A as the third factor matrix, which was then inverted to yield the sources. The accuracy of CPD was compromised as the components of the tensor $\mathcal{X}$ cannot be represented by rank-1 terms.
■ Low multilinear rank approximation via TKD for the multilinear rank (4, 4, 2) was able to retrieve the column space of the mixing matrix but could not find the individual mixing vectors because of the nonuniqueness of TKD.
■ BTD in multilinear rank-(2, 2, 1) terms matched the data structure [78]; it is remarkable that the sources were recovered using as few as six samples in the noise-free case.
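As a concrete illustration of the first point above, a rank-$R$ CPD can be computed by alternating least squares (ALS). The sketch below is a minimal numpy implementation for a third-order tensor; the unfolding conventions and the plain pseudoinverse-based updates are illustrative choices, not the algorithm used in the experiments reported here.

```python
import numpy as np

def cpd_als(X, R, n_iter=200, seed=0):
    """Rank-R CPD of a third-order tensor X via alternating least squares (ALS).

    Returns factor matrices A1, A2, A3 such that
    X[i, j, k] ~= sum_r A1[i, r] * A2[j, r] * A3[k, r].
    A minimal illustrative sketch, not a production implementation.
    """
    rng = np.random.default_rng(seed)
    I, J, K = X.shape
    A = [rng.standard_normal((n, R)) for n in (I, J, K)]

    # Mode-n unfoldings; column ordering matches the Khatri-Rao products below.
    X1 = X.reshape(I, J * K)                     # mode-1: columns indexed by (j, k)
    X2 = np.moveaxis(X, 1, 0).reshape(J, I * K)  # mode-2: columns indexed by (i, k)
    X3 = np.moveaxis(X, 2, 0).reshape(K, I * J)  # mode-3: columns indexed by (i, j)

    def khatri_rao(U, V):
        # Column-wise Kronecker product, shape (U.shape[0] * V.shape[0], R).
        return np.einsum('ir,jr->ijr', U, V).reshape(-1, R)

    for _ in range(n_iter):
        # Each update is the least-squares solution for one factor, others fixed.
        A[0] = X1 @ np.linalg.pinv(khatri_rao(A[1], A[2]).T)
        A[1] = X2 @ np.linalg.pinv(khatri_rao(A[0], A[2]).T)
        A[2] = X3 @ np.linalg.pinv(khatri_rao(A[0], A[1]).T)
    return A
```

For an exact low-rank tensor, the reconstruction `np.einsum('ir,jr,kr->ijk', *cpd_als(X, R))` typically converges to the data within a few hundred iterations.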
HIGHER-ORDER COMPRESSED SENSING (HO-CS)
The aim of CS is to provide a faithful reconstruction of a signal of interest, even when the set of available measurements is (much) smaller than the size of the original signal [80]–[83]. Formally, we have available $M$ (compressive) data samples $\mathbf{y} \in \mathbb{R}^{M}$, which are assumed to be linear transformations of the original signal $\mathbf{x} \in \mathbb{R}^{I}$ ($M < I$). In other words, $\mathbf{y} = \boldsymbol{\Phi}\mathbf{x}$, where the sensing matrix $\boldsymbol{\Phi} \in \mathbb{R}^{M \times I}$ is usually random. Since the projections are of a lower dimension than the original data, the reconstruction is an ill-posed inverse problem whose solution requires knowledge of the physics of the problem converted into constraints. For example, a two-dimensional image $\mathbf{X} \in \mathbb{R}^{I_1 \times I_2}$ can be vectorized as a long vector $\mathbf{x} = \mathrm{vec}(\mathbf{X}) \in \mathbb{R}^{I}$ ($I = I_1 I_2$) that admits a sparse representation in a known dictionary $\mathbf{B} \in \mathbb{R}^{I \times I}$, so that $\mathbf{x} = \mathbf{B}\mathbf{g}$, where the matrix $\mathbf{B}$ may be a wavelet or discrete cosine transform dictionary. Then, faithful recovery of the original signal $\mathbf{x}$ requires finding the sparsest vector $\mathbf{g}$ such that
$$\mathbf{y} = \mathbf{W}\mathbf{g}, \quad \text{with} \quad \|\mathbf{g}\|_0 \le K, \quad \mathbf{W} = \boldsymbol{\Phi}\mathbf{B}, \tag{13}$$
where $\|\cdot\|_0$ is the $\ell_0$-norm (number of nonzero entries) and $K \ll I$.
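The measurement model in (13) can be made concrete with a short numpy sketch. The sizes, the Gaussian choice of the sensing matrix, and the DCT dictionary below are illustrative assumptions, not values from the article.

```python
import numpy as np

# Sketch of the model in (13): y = Phi x, x = B g, hence y = W g with W = Phi B.
I, M, K = 128, 40, 5          # signal length, number of measurements, sparsity level

# Orthonormal DCT-II synthesis dictionary B (columns are DCT basis vectors).
n = np.arange(I)
C = np.sqrt(2.0 / I) * np.cos(np.pi * (n[None, :] + 0.5) * n[:, None] / I)
C[0, :] /= np.sqrt(2.0)       # analysis matrix; C @ C.T = identity
B = C.T

rng = np.random.default_rng(0)
g = np.zeros(I)
support = rng.choice(I, size=K, replace=False)
g[support] = rng.standard_normal(K)              # K-sparse coefficient vector

x = B @ g                                        # signal, sparse in the dictionary B
Phi = rng.standard_normal((M, I)) / np.sqrt(M)   # random Gaussian sensing matrix
y = Phi @ x                                      # M compressive measurements, M << I
W = Phi @ B                                      # composite dictionary of (13)
```

By construction `y` equals `W @ g`, so recovering the sparse `g` from `y` and `W` recovers `x = B @ g`.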
Since the $\ell_0$-norm minimization is not practical, alternative solutions involve iterative refinements of the estimates of vector $\mathbf{g}$ using greedy algorithms such as the orthogonal matching pursuit (OMP) algorithm, or the $\ell_1$-norm minimization algorithms ($\|\mathbf{g}\|_1 = \sum_{i=1}^{I} |g_i|$) [83]. Low coherence of the composite dictionary matrix $\mathbf{W}$ is a prerequisite for a satisfactory recovery of $\mathbf{g}$ (and hence $\mathbf{x}$): we need to choose $\boldsymbol{\Phi}$ and $\mathbf{B}$ so that the correlation between the columns of $\mathbf{W}$ is minimum [83].
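As a sketch of the greedy route, the following is a textbook-style OMP implementation together with the mutual coherence of W (the largest absolute correlation between distinct normalized columns). It follows the notation of (13) but is an illustrative implementation, not tied to any particular library.

```python
import numpy as np

def omp(W, y, K):
    """Orthogonal matching pursuit: greedily find a K-sparse g with y ~= W g.

    Textbook sketch: at each step, pick the atom most correlated with the
    residual, then refit the coefficients by least squares on the support.
    """
    M, I = W.shape
    Wn = W / np.linalg.norm(W, axis=0)   # unit-norm columns for correlation tests
    support, residual = [], y.copy()
    for _ in range(K):
        idx = int(np.argmax(np.abs(Wn.T @ residual)))  # most correlated atom
        if idx not in support:
            support.append(idx)
        coef, *_ = np.linalg.lstsq(W[:, support], y, rcond=None)
        residual = y - W[:, support] @ coef
    g = np.zeros(I)
    g[support] = coef
    return g

def mutual_coherence(W):
    """Largest absolute inner product between distinct normalized columns of W."""
    Wn = W / np.linalg.norm(W, axis=0)
    G = np.abs(Wn.T @ Wn)
    np.fill_diagonal(G, 0.0)
    return G.max()
```

For a random Gaussian W with sufficiently many rows relative to the sparsity K, OMP typically recovers the support exactly, and a small `mutual_coherence(W)` is the practical indicator that recovery is likely to succeed.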
When extending the CS framework to tensor data, we face two obstacles:
■ loss of information, such as spatial and contextual relationships in data, when a tensor $\mathcal{X} \in \mathbb{R}^{I_1 \times I_2 \times \cdots \times I_N}$ is vectorized.
[FIG6] The blind separation of the mixture of a pure sine wave and an exponentially modulated sine wave using PCA, ICA, CPD, TKD, and BTD. The sources $s_1$ and $s_2$ are correlated and of short duration; the symbols $\hat{s}_1$ and $\hat{s}_2$ denote the estimated sources. (a)–(c) Sources $s_1(t)$ and $s_2(t)$ and their estimates using PCA, ICA, CPD, TKD, and BTD; (d) average squared angular errors (SAE) in estimation of the sources.