Then, the singular values of $\mathbf{X}_{(n)}$ are the Frobenius norms of the corresponding slices of the core tensor $\mathcal{S}$: $\sigma_r^{(n)} = \|\mathcal{S}(:,:,\ldots,r,\ldots,:)\|_F$, $r = 1, \ldots, R_n$, with slices in the same mode being mutually orthogonal, i.e., their inner products are zero. The columns of $\mathbf{U}^{(n)}$ may thus be seen as multilinear singular vectors,
while the norms of the slices of the core are multilinear singular
values [15]. As in the matrix case, the multilinear singular values
govern the multilinear rank, while the multilinear singular vectors
allow, for each mode separately, an interpretation as in PCA [8].
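To make this concrete, the following NumPy sketch (ours, not from the article; unfold, nmode_product, and hosvd are illustrative names) computes a multilinear SVD of a random third-order tensor and verifies that the singular values of the first unfolding equal the Frobenius norms of the corresponding core slices:

import numpy as np

def unfold(T, mode):
    # Mode-n unfolding: mode-n fibers become the columns of a matrix
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def nmode_product(T, M, mode):
    # Mode-n product T x_n M (M multiplies the mode-n fibers)
    return np.moveaxis(np.tensordot(M, np.moveaxis(T, mode, 0), axes=1), 0, mode)

def hosvd(T):
    # Factors U^(n): left singular vectors of each mode-n unfolding
    U = [np.linalg.svd(unfold(T, n), full_matrices=False)[0]
         for n in range(T.ndim)]
    # Core: S = T x_1 U^(1)T x_2 U^(2)T x_3 U^(3)T
    S = T
    for n, Un in enumerate(U):
        S = nmode_product(S, Un.T, n)
    return S, U

rng = np.random.default_rng(0)
T = rng.standard_normal((4, 5, 6))
S, U = hosvd(T)

# Frobenius norms of the first-mode core slices (index 0 here) match
# the singular values of the first-mode unfolding of T
slice_norms = np.linalg.norm(unfold(S, 0), axis=1)
sing_values = np.linalg.svd(unfold(T, 0), compute_uv=False)
print(np.allclose(slice_norms, sing_values))   # True

The same check holds in every mode, since the factor matrices are orthogonal and multiplication by an orthogonal matrix preserves the norms of the core slices.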
LOW MULTILINEAR RANK APPROXIMATION
Analogous to PCA, a large-scale data tensor
X can be approxi-
mated by discarding the multilinear singular vectors and slices of
the core tensor that correspond to small multilinear singular val-
ues, i.e., through truncated matrix SVDs. Low multilinear rank
approximation is always well posed; however, the truncation is not
necessarily optimal in the LS sense, although a good estimate can
often be made as the approximation error corresponds to the
degree of truncation. When it comes to finding the best approxi-
mation, the ALS-type algorithms exhibit similar advantages and
drawbacks to those used for CPD [8], [70]. Optimization-based
algorithms exploiting second-order information have also been
proposed [71], [72].
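As a rough illustration of such truncation (our sketch; the tensor and the retained multilinear rank are arbitrary), the code below keeps only the leading mode-n singular vectors in each mode and reports the relative approximation error:

import numpy as np

def unfold(T, mode):
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def nmode_product(T, M, mode):
    return np.moveaxis(np.tensordot(M, np.moveaxis(T, mode, 0), axes=1), 0, mode)

def truncated_hosvd(T, ranks):
    # Keep only the leading ranks[n] left singular vectors per mode
    U = [np.linalg.svd(unfold(T, n), full_matrices=False)[0][:, :r]
         for n, r in enumerate(ranks)]
    S = T
    for n, Un in enumerate(U):
        S = nmode_product(S, Un.T, n)      # project onto the retained subspaces
    That = S
    for n, Un in enumerate(U):
        That = nmode_product(That, Un, n)  # expand back to the original size
    return That

rng = np.random.default_rng(1)
T = rng.standard_normal((20, 20, 20))
That = truncated_hosvd(T, (5, 5, 5))
# Not LS-optimal, but known to lie within a factor sqrt(N) (here N = 3)
# of the best multilinear rank-(5, 5, 5) approximation error
print(np.linalg.norm(T - That) / np.linalg.norm(T))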
CONSTRAINTS AND TUCKER-BASED
MULTIWAY COMPONENT ANALYSIS
Besides orthogonality, constraints that may help to find unique
basis vectors in a Tucker representation include statistical inde-
pendence, sparsity, smoothness, and nonnegativity [21], [73], [74].
Components of a data tensor seldom have the same properties in
its modes, and for physically meaningful representation, different
constraints may be required in different modes so as to match the
properties of the data at hand. Figure 1 illustrates the concept of
multiway component analysis (MWCA) and its flexibility in choos-
ing the modewise constraints; a Tucker representation of MWCA
naturally accommodates such diversities in different modes.
OTHER APPLICATIONS
We have shown that TKD may be considered a multilinear
extension of PCA [8]; it therefore generalizes signal subspace
techniques, with applications including classification, feature
extraction, and subspace-based harmonic retrieval [27], [39],
[75], [76]. For instance, a low multilinear rank approximation
achieved through TKD may yield a higher SNR than that of
the original raw data tensor, making TKD a very natural tool for
compression and signal enhancement [7], [8], [26].
BLOCK TERM DECOMPOSITIONS
We have already shown that CPD is unique under quite mild con-
ditions. A further advantage of tensors over matrices is that it is
even possible to relax the rank-1 constraint on the terms, thus
opening completely new possibilities in, e.g., BSS. For clarity, we
shall consider the third-order case, whereby, by replacing the
rank-1 matrices $\mathbf{b}_r^{(1)} \circ \mathbf{b}_r^{(2)} = \mathbf{b}_r^{(1)} \mathbf{b}_r^{(2)T}$ in (3) by low-rank matrices $\mathbf{A}_r \mathbf{B}_r^T$, the tensor $\mathcal{X}$ can be represented as [Figure 5(a)]
$$\mathcal{X} = \sum_{r=1}^{R} (\mathbf{A}_r \mathbf{B}_r^T) \circ \mathbf{c}_r. \qquad (11)$$
Figure 5(b) shows that we can even use terms that are only
required to have a low multilinear rank (see the “Tucker Decom-
position” section) to give
$$\mathcal{X} = \sum_{r=1}^{R} \mathcal{G}_r \times_1 \mathbf{A}_r \times_2 \mathbf{B}_r \times_3 \mathbf{C}_r. \qquad (12)$$
These so-called block term decompositions (BTDs) in (11) and
(12) admit the modeling of more complex signal components
than CPD and are unique under more restrictive but still fairly
natural conditions [77]–[79].
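As a minimal illustration of (11) (our sketch; all sizes and names are arbitrary), the code below constructs a tensor from low-rank block terms and confirms that an individual term has multilinear rank $(L_r, L_r, 1)$; actually computing a BTD from data requires dedicated algorithms [77]–[79]:

import numpy as np

def unfold(T, mode):
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

rng = np.random.default_rng(2)
I, J, K = 8, 9, 10
L = [2, 3]                # L_r, the rank of each matrix factor A_r B_r^T

X = np.zeros((I, J, K))
for Lr in L:
    Ar = rng.standard_normal((I, Lr))
    Br = rng.standard_normal((J, Lr))
    cr = rng.standard_normal(K)
    term = (Ar @ Br.T)[:, :, None] * cr[None, None, :]   # (A_r B_r^T) o c_r
    X += term

# The last term has multilinear rank (L_r, L_r, 1), as in Figure 5(a)
print([np.linalg.matrix_rank(unfold(term, n)) for n in range(3)])  # [3, 3, 1]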
EXAMPLE 2
To compare some standard and tensor approaches for the separa-
tion of short-duration correlated sources, BSS was performed on
five linear mixtures of the sources
$s_1(t) = \sin(6\pi t)$ and $s_2(t) = \exp(10t)\sin(20\pi t)$, which were contaminated by white Gaussian noise, to give the mixtures $\mathbf{X} = \mathbf{A}\mathbf{S} + \mathbf{E} \in \mathbb{R}^{5 \times 60}$, where
$\mathbf{S}(t) = [s_1(t), s_2(t)]^T$ and $\mathbf{A} \in \mathbb{R}^{5 \times 2}$ was a random matrix whose columns (mixing vectors) satisfy $\mathbf{a}_1^T \mathbf{a}_2 = 0.1$, $\|\mathbf{a}_1\|_2 = \|\mathbf{a}_2\|_2 = 1$.
The 3-Hz sine wave did not complete a full period over the 60 sam-
ples so that the two sources had a correlation degree of
$|\mathbf{s}_1^T \mathbf{s}_2| / (\|\mathbf{s}_1\|_2 \|\mathbf{s}_2\|_2) = 0.35$. The tensor approaches, CPD, TKD, and BTD, employed a third-order tensor $\mathcal{X}$ of size $24 \times 37 \times 5$ generated from five Hankel matrices whose elements obey $\mathcal{X}(i, j, k) = \mathbf{X}(k, i + j - 1)$ (see the section "Tensorization—Blessing of Dimensionality"; a construction sketch is given after the list below). The average squared angular error
(SAE) was used as the performance measure. Figure 6 shows the
simulation results, illustrating the following.
■ PCA failed since the mixing vectors were not orthogonal
and the source signals were correlated, both violating the
assumptions for PCA.
■ ICA [using the joint approximate diagonalization of
eigenmatrices (JADE) algorithm [10]] failed because the sig-
nals were not statistically independent, as assumed in ICA.
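For completeness, the Hankel tensorization used in this example can be sketched as follows (our code, not the article's; the mixing vectors satisfy the stated constraints, while the sampling instants and the noise level are assumptions, as the article does not specify them):

import numpy as np

def hankelize(X, I):
    # Map each row of X (K x N) to an I x (N - I + 1) Hankel matrix and
    # stack them: T[i, j, k] = X[k, i + j] (zero-based form of X(k, i+j-1))
    K, N = X.shape
    J = N - I + 1
    T = np.empty((I, J, K))
    for i in range(I):
        T[i] = X[:, i:i + J].T   # slice i holds samples i, i+1, ..., i+J-1
    return T

rng = np.random.default_rng(3)

# Sources as in the text; the sampling instants are an assumption, chosen
# so the 3-Hz sine does not complete a full period over the 60 samples
t = np.arange(60) / 256.0
S = np.vstack([np.sin(6 * np.pi * t),
               np.exp(10 * t) * np.sin(20 * np.pi * t)])

# Unit-norm mixing vectors with a1^T a2 = 0.1, as specified in the text
a1 = rng.standard_normal(5); a1 /= np.linalg.norm(a1)
v = rng.standard_normal(5);  v -= (a1 @ v) * a1;  v /= np.linalg.norm(v)
a2 = 0.1 * a1 + np.sqrt(1 - 0.1**2) * v
A = np.column_stack([a1, a2])

X = A @ S + 0.01 * rng.standard_normal((5, 60))   # noisy mixtures, 5 x 60
T = hankelize(X, 24)
print(T.shape)                                    # (24, 37, 5)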
[FIG5] BTDs find data components that are structurally more complex than the rank-1 terms in CPD. (a) Decomposition into terms with multilinear rank $(L_r, L_r, 1)$. (b) Decomposition into terms with multilinear rank $(L_r, M_r, N_r)$.