IEEE SIGNAL PROCESSING MAGAZINE [148] MARCH 2015
cases of (1), and owe their uniqueness to hard and restrictive
constraints such as triangularity and orthogonality. On the
other hand, certain properties of the factors in (1) can be repre-
sented by appropriate constraints, making possible the unique
estimation or extraction of such factors. These constraints
include statistical independence, sparsity, nonnegativity, expo-
nential structure, uncorrelatedness, constant modulus, finite
alphabet, smoothness, and unimodality. Indeed, the first four
properties form the basis of ICA [12]–[14], SCA [32], NMF [21],
and harmonic retrieval [35].
TENSORIZATION—BLESSING OF DIMENSIONALITY
While one-way (vectors) and two-way (matrices) algebraic struc-
tures were, respectively, introduced as natural representations
for segments of scalar measurements and measurements on a
grid, tensors were initially used purely for the mathematical
benefits they provide in data analysis; for instance, it seemed
natural to stack together excitation–emission spectroscopy
matrices in chemometrics into a third-order tensor [7].
The procedure of creating a data tensor from lower-dimen-
sional original data is referred to as tensorization, and we propose
the following taxonomy for tensor generation:
1) Rearrangement of lower-dimensional data structures:
Large-scale vectors or matrices are readily tensorized to
higher-order tensors and can be compressed through tensor
decompositions if they admit a low-rank tensor approxima-
tion; this principle facilitates big data analysis [23], [29], [30]
[see Figure 2(a)]. For instance, a one-way exponential signal
$x(k) = a z^k$ can be rearranged into a rank-1 Hankel matrix or
a Hankel tensor [36]
$$
\mathbf{H} =
\begin{bmatrix}
x(0) & x(1) & x(2) & \cdots \\
x(1) & x(2) & x(3) & \cdots \\
x(2) & x(3) & x(4) & \cdots \\
\vdots & \vdots & \vdots & \ddots
\end{bmatrix}
= a \, \mathbf{b} \circ \mathbf{b},
\qquad (2)
$$
where $\mathbf{b} = [1, z, z^2, \ldots]^T$. Also, in sensor array processing,
tensor structures naturally emerge when combining snap-
shots from identical subarrays [19].
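As a minimal sketch of the Hankel tensorization in (2) (our own illustration, assuming NumPy; the values of $a$ and $z$ are arbitrary), we can build the Hankel matrix of a sampled exponential and verify that it is exactly rank 1:

```python
import numpy as np

# Sample the one-way exponential signal x(k) = a * z**k of equation (2).
# z = 0.5 keeps all powers exactly representable in floating point.
a, z = 2.0, 0.5
x = a * z ** np.arange(8)            # x(0), x(1), ..., x(7)

# Hankel matrix H[i, j] = x(i + j), here a 4 x 4 block
H = np.array([[x[i + j] for j in range(4)] for i in range(4)])

# H equals a * (b outer b) with b = [1, z, z^2, ...]^T, hence rank 1
b = z ** np.arange(4)
assert np.linalg.matrix_rank(H) == 1
assert np.allclose(H, a * np.outer(b, b))
```

The same rearrangement extends to a third-order Hankel tensor by indexing $x(i + j + k)$, which is then a rank-1 outer product $a\,\mathbf{b} \circ \mathbf{b} \circ \mathbf{b}$.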
2) Mathematical construction: Among many such examples,
the
Nth-order moments (cumulants) of a vector-valued random
variable form an Nth-order tensor [9], while in second-order
ICA, snapshots of data statistics (covariance matrices) are effect-
ively slices of a third-order tensor [12], [37]. Also, a (channel ×
time) data matrix can be transformed into a (channel × time ×
frequency) or (channel × time × scale) tensor via time-frequency
or wavelet representations, a powerful procedure in multi-
channel electroencephalogram (EEG) analysis in brain sci-
ence [21], [38].
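To make the second-order ICA construction concrete, here is a brief sketch (our own illustration, assuming NumPy; the channel count, window length, and random data are arbitrary) of stacking per-snapshot covariance matrices into the frontal slices of a third-order tensor:

```python
import numpy as np

# Covariance matrices of successive snapshots of a (channel x time)
# recording become slices of a (channel x channel x snapshot) tensor,
# as in second-order ICA.
rng = np.random.default_rng(0)
channels, window, snapshots = 4, 100, 5
X = rng.standard_normal((channels, window * snapshots))

# One covariance matrix per snapshot -> one frontal slice per snapshot
C = np.stack(
    [np.cov(X[:, s * window:(s + 1) * window]) for s in range(snapshots)],
    axis=2,
)
assert C.shape == (channels, channels, snapshots)
```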
3) Experiment design: Multifaceted data can be naturally
stacked into a tensor; for instance, in wireless communica-
tions the so-called signal diversity (temporal, spatial, spec-
tral, etc.) corresponds to the order of the tensor [20]. In the
same spirit, the standard eigenfaces can be generalized to
tensor faces by combining images with different illumina-
tions, poses, and expressions [39], while the common modes
in EEG recordings across subjects, trials, and conditions are
best analyzed when combined together into a tensor [28].
4) Natural tensor data: Some data sources are readily gen-
erated as tensors [e.g., RGB color images, videos, three-
dimensional (3-D) light field displays] [40]. Also, in scientific
computing, we often need to evaluate a discretized multivariate
function; this is a natural tensor, as illustrated in Figure 2(b) for
a trivariate function $f(x, y, z)$ [23], [29], [30].
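The discretization in Figure 2(b) can be sketched as follows (our own illustration, assuming NumPy; the grid sizes and the separable test function are arbitrary choices):

```python
import numpy as np

# Discretizing a trivariate function f(x, y, z) on a regular grid
# yields a third-order tensor, as in Figure 2(b).
def f(x, y, z):
    return np.exp(-(x + y + z))      # separable, so the tensor is rank 1

x = np.linspace(0.0, 1.0, 10)
y = np.linspace(0.0, 1.0, 12)
z = np.linspace(0.0, 1.0, 14)
T = f(x[:, None, None], y[None, :, None], z[None, None, :])
assert T.shape == (10, 12, 14)

# For this separable f, T is an exact outer product of three vectors
T1 = np.einsum('i,j,k->ijk', np.exp(-x), np.exp(-y), np.exp(-z))
assert np.allclose(T, T1)
```

A smooth but nonseparable $f$ would instead give a tensor that is only approximately low rank, which is precisely what makes tensor decompositions useful for compressing such discretizations.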
The high dimensionality of the tensor format is therefore
associated with blessings, which include the possibilities to obtain
compact representations, the uniqueness of decompositions, the
flexibility in the choice of constraints, and the generality of com-
ponents that can be identified.
CANONICAL POLYADIC DECOMPOSITION
DEFINITION
A polyadic decomposition (PD) represents an Nth-order tensor
$\mathbf{X} \in \mathbb{R}^{I_1 \times I_2 \times \cdots \times I_N}$
as a linear combination of rank-1 tensors in the form
$$
\mathbf{X} = \sum_{r=1}^{R} \lambda_r \,
\mathbf{b}_r^{(1)} \circ \mathbf{b}_r^{(2)} \circ \cdots \circ \mathbf{b}_r^{(N)}.
\qquad (3)
$$
Equivalently, $\mathbf{X}$ is expressed as a multilinear product with a
diagonal core
$$
\mathbf{X} = \mathbf{D} \times_1 \mathbf{B}^{(1)} \times_2 \mathbf{B}^{(2)}
\cdots \times_N \mathbf{B}^{(N)}
= [\![ \mathbf{D}; \mathbf{B}^{(1)}, \mathbf{B}^{(2)}, \ldots, \mathbf{B}^{(N)} ]\!],
\qquad (4)
$$
where $\mathbf{D} = \operatorname{diag}_N(\lambda_1, \lambda_2, \ldots, \lambda_R)$
[cf. the matrix case in (1)].
Figure 3 illustrates these two interpretations for a third-order
[Figure 2 graphic: (a) a 64 × 1 vector (I = 2^6) reshaped into an 8 × 8 matrix and further into a 2 × 2 × 2 × 2 × 2 × 2 tensor written as a sum of rank-1 terms; (b) a 3-D grid of samples x_0, x_0 + Δx, x_0 + 2Δx (and likewise for y and z) of the function f(x, y, z).]
[FIG2] Construction of tensors. (a) The tensorization of a vector
or matrix into the so-called quantized format; in scientific
computing, this facilitates supercompression of large-scale
vectors or matrices. (b) The tensor is formed through the
discretization of a trivariate function f(x, y, z).
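The quantized format of Figure 2(a) can be sketched in a few lines (our own illustration, assuming NumPy; the choice of an exponential signal makes the compressibility exact):

```python
import numpy as np

# Quantization: a length-64 vector (I = 2**6) is reshaped into a
# 2 x 2 x 2 x 2 x 2 x 2 tensor, as in Figure 2(a). z = 0.5 keeps
# all powers exact in floating point.
x = 2.0 * 0.5 ** np.arange(64)       # x(k) = a * z**k
T = x.reshape((2,) * 6)
assert T.shape == (2, 2, 2, 2, 2, 2)

# Every unfolding of T is rank 1, which is what makes the quantized
# format "supercompressible" for such signals: the first mode-1
# unfolding below has its second row proportional to the first.
assert np.linalg.matrix_rank(T.reshape(2, 32)) == 1
```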