data handling since the size of vectorized data and the associated dictionary $\mathbf{B} \in \mathbb{R}^{I_1 I_2 \cdots I_N \times I_1 I_2 \cdots I_N}$ easily becomes prohibitively large (see the section “Large-Scale Data and the Curse of Dimensionality”), especially for tensors of high order.
Fortunately, tensor data are typically highly structured, a perfect match for compressive sampling, so that the CS framework relaxes data acquisition requirements, enables compact storage, and facilitates data completion (i.e., inpainting of missing samples due to a faulty sensor or unreliable measurement).
KRONECKER-CS FOR FIXED DICTIONARIES
In many applications, the dictionary and the sensing matrix admit a Kronecker structure (Kronecker-CS model), as illustrated in Figure 7(a) [84]. In this way, the global composite dictionary matrix becomes
$$\mathbf{W} = \mathbf{W}^{(N)} \otimes \mathbf{W}^{(N-1)} \otimes \cdots \otimes \mathbf{W}^{(1)},$$
where each term $\mathbf{W}^{(n)} = \mathbf{\Phi}^{(n)} \mathbf{B}^{(n)}$ has a reduced dimensionality, since $\mathbf{B}^{(n)} \in \mathbb{R}^{I_n \times I_n}$ and $\mathbf{\Phi}^{(n)} \in \mathbb{R}^{M_n \times I_n}$. Denote $M = M_1 M_2 \cdots M_N$ and $I = I_1 I_2 \cdots I_N$; then, since $M_n \leq I_n$ for $n = 1, 2, \ldots, N$, this reduces storage requirements by a factor of $IM / \sum_n M_n I_n$. The computation of $\mathbf{W}\mathbf{g}$ is affordable since $\mathbf{g}$ is sparse; however, computing $\mathbf{W}^{T}\mathbf{y}$ is expensive but can be efficiently implemented through a sequence of products involving much smaller matrices $\mathbf{W}^{(n)}$ [85]. We refer to [84] for links between the coherence of the factor matrices $\mathbf{W}^{(n)}$ and the coherence of the global composite dictionary matrix $\mathbf{W}$.
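To make the storage and computation arguments concrete, the following Python/NumPy sketch uses hypothetical small sizes and a helper `mode_n_product` introduced here purely for illustration: it builds mode-wise matrices $\mathbf{W}^{(n)} = \mathbf{\Phi}^{(n)}\mathbf{B}^{(n)}$, reports the storage reduction factor $IM/\sum_n M_n I_n$, and checks that $\mathbf{W}^{T}\mathbf{y}$ can be computed as a sequence of mode-$n$ products with the small $\mathbf{W}^{(n)}$ (the full Kronecker matrix is formed only to verify the result).

```python
import numpy as np

def mode_n_product(tensor, matrix, mode):
    """Multiply a tensor by a matrix along the given mode (0-indexed)."""
    return np.moveaxis(np.tensordot(matrix, tensor, axes=(1, mode)), 0, mode)

rng = np.random.default_rng(0)

# Hypothetical small sizes for N = 3 modes.
I_dims, M_dims = [16, 16, 16], [6, 6, 6]

# Mode-wise dictionaries B^(n) in R^{I_n x I_n} and sensing matrices Phi^(n) in R^{M_n x I_n}.
B = [rng.standard_normal((i, i)) for i in I_dims]
Phi = [rng.standard_normal((m, i)) for m, i in zip(M_dims, I_dims)]
W = [P @ D for P, D in zip(Phi, B)]                      # W^(n) = Phi^(n) B^(n)

# Storage reduction of the factored form relative to the global composite dictionary.
I_tot, M_tot = np.prod(I_dims), np.prod(M_dims)
factor = (I_tot * M_tot) / sum(m * i for m, i in zip(M_dims, I_dims))
print(f"storage reduction factor IM / sum_n(M_n I_n) = {factor:.0f}")

# Global dictionary W = W^(3) kron W^(2) kron W^(1), built here only for verification.
W_global = np.kron(np.kron(W[2], W[1]), W[0])

# W^T y computed as a sequence of small mode-n products on the measurement tensor.
Y = rng.standard_normal(M_dims)                          # (M_1 x M_2 x M_3) measurements
Z = Y.copy()
for n, Wn in enumerate(W):
    Z = mode_n_product(Z, Wn.T, n)
assert np.allclose(W_global.T @ Y.reshape(-1, order='F'), Z.reshape(-1, order='F'))
```

Even for these toy sizes the factored form stores roughly 3,000 times fewer entries than the global matrix; for realistic $I_n$ and higher order $N$ the gap grows rapidly.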
Figure 7 and Table 3 illustrate that the Kronecker-CS model is effectively a vectorized TKD with a sparse core. The tensor equivalent of the CS paradigm in (13) is therefore to find the sparsest core tensor $\mathcal{G}$ such that
$$\mathcal{Y} \cong \mathcal{G} \times_1 \mathbf{W}^{(1)} \times_2 \mathbf{W}^{(2)} \cdots \times_N \mathbf{W}^{(N)}, \qquad (14)$$
with $\| \mathcal{G} \|_0 \leq K$, for a given set of modewise dictionaries $\mathbf{B}^{(n)}$ and sensing matrices $\mathbf{\Phi}^{(n)}$, $n = 1, 2, \ldots, N$. Working with several small dictionary matrices, appearing in a Tucker representation, instead of a large global dictionary matrix is an example of the use of tensor structure for efficient representation; see also the section “Large-Scale Data and the Curse of Dimensionality.”
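As a sanity check on this equivalence, the following sketch (hypothetical sizes, with random factor matrices standing in for $\mathbf{\Phi}^{(n)}\mathbf{B}^{(n)}$) draws a core tensor $\mathcal{G}$ with $K$ nonzero entries, forms $\mathcal{Y}$ through mode-$n$ products as in (14), and verifies that $\mathrm{vec}(\mathcal{Y})$ equals the global Kronecker dictionary applied to the sparse vector $\mathrm{vec}(\mathcal{G})$.

```python
import numpy as np

def mode_n_product(tensor, matrix, mode):
    """Multiply a tensor by a matrix along the given mode (0-indexed)."""
    return np.moveaxis(np.tensordot(matrix, tensor, axes=(1, mode)), 0, mode)

rng = np.random.default_rng(1)
I_dims, M_dims, K = [10, 12, 14], [5, 6, 7], 4

# Mode-wise dictionaries W^(n) = Phi^(n) B^(n), drawn at random here for illustration.
W = [rng.standard_normal((m, i)) for m, i in zip(M_dims, I_dims)]

# Core tensor G with K nonzero entries (||G||_0 <= K).
G = np.zeros(I_dims)
G.flat[rng.choice(G.size, size=K, replace=False)] = rng.standard_normal(K)

# Tucker form (14): Y = G x_1 W^(1) x_2 W^(2) x_3 W^(3).
Y = G.copy()
for n, Wn in enumerate(W):
    Y = mode_n_product(Y, Wn, n)

# Equivalent vectorized Kronecker-CS model, with column-major (Fortran) vectorization:
# vec(Y) = (W^(3) kron W^(2) kron W^(1)) vec(G).
W_global = np.kron(np.kron(W[2], W[1]), W[0])
assert np.allclose(Y.reshape(-1, order='F'), W_global @ G.reshape(-1, order='F'))
```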
A higher-order extension of the OMP algorithm, referred to as the Kronecker-OMP algorithm [85], requires $K$ iterations to find the $K$ nonzero entries of the core tensor $\mathcal{G}$. Additional computational advantages can be gained if it can be assumed that the $K$ nonzero entries belong to a small subtensor of $\mathcal{G}$, as shown in Figure 7(b); such a structure is inherent to, e.g., hyperspectral imaging [85], [86] and 3-D astrophysical signals. More precisely, if the $K = L^N$ nonzero entries are located within a subtensor of size $(L \times L \times \cdots \times L)$, where $L \ll I_n$, then, by exploiting the block-tensor structure, the so-called $N$-way block OMP algorithm (N-BOMP) requires at most $NL$ iterations, which is linear in $N$.
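The following is a minimal sketch of a greedy, OMP-style pursuit that exploits this structure: the correlations $\mathbf{W}^{T}\mathbf{r}$ are computed through mode-$n$ products with the small $\mathbf{W}^{(n)}$, and only the selected columns of the global dictionary are ever formed. It illustrates the idea behind Kronecker-OMP rather than reproducing the exact algorithm of [85]; the function name `kron_omp` and all sizes are hypothetical.

```python
import numpy as np

def mode_n_product(tensor, matrix, mode):
    """Multiply a tensor by a matrix along the given mode (0-indexed)."""
    return np.moveaxis(np.tensordot(matrix, tensor, axes=(1, mode)), 0, mode)

def kron_omp(Y, W_list, K):
    """Greedy OMP-style pursuit for vec(Y) ~ (W^(N) kron ... kron W^(1)) vec(G),
    with ||G||_0 <= K, touching only the small factor matrices W^(n)."""
    I_dims = [Wn.shape[1] for Wn in W_list]
    y = Y.reshape(-1, order='F')                 # column-major vectorization
    support, columns = [], []
    residual = Y.copy()
    for _ in range(K):
        # Correlations W^T r as a sequence of small mode-n products.
        C = residual.copy()
        for n, Wn in enumerate(W_list):
            C = mode_n_product(C, Wn.T, n)
        idx = np.unravel_index(np.argmax(np.abs(C)), C.shape)
        support.append(idx)
        # Selected column of the global dictionary: Kronecker product of factor columns.
        col = W_list[-1][:, idx[-1]]
        for n in range(len(W_list) - 2, -1, -1):
            col = np.kron(col, W_list[n][:, idx[n]])
        columns.append(col)
        # Least-squares fit on the current support, then update the residual.
        A = np.column_stack(columns)
        coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
        residual = (y - A @ coeffs).reshape(Y.shape, order='F')
    G_hat = np.zeros(I_dims)
    for value, index in zip(coeffs, support):
        G_hat[index] = value
    return G_hat

# Hypothetical usage: attempt to recover a 3-sparse core from noiseless measurements.
rng = np.random.default_rng(2)
W_list = [rng.standard_normal((6, 10)) for _ in range(3)]
G = np.zeros((10, 10, 10))
G[(1, 2, 3)], G[(4, 0, 7)], G[(9, 9, 9)] = 1.0, -2.0, 0.5
Y = G.copy()
for n, Wn in enumerate(W_list):
    Y = mode_n_product(Y, Wn, n)
print(np.allclose(kron_omp(Y, W_list, K=3), G, atol=1e-8))
```

For well-conditioned random factors and a few nonzero core entries, such a greedy sketch typically recovers the support in $K$ iterations; the N-BOMP variant mentioned above instead exploits the block structure of the core to reduce the iteration count to at most $NL$.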
[FIG7] CS with a Kronecker-structured dictionary. OMP can perform faster if the sparse entries belong to a small subtensor, up to permutation of the columns of $\mathbf{W}^{(1)}$, $\mathbf{W}^{(2)}$, and $\mathbf{W}^{(3)}$.
[FIG8] The multidimensional CS of a 3-D hyperspectral image using the Tucker representation with a small sparse core in wavelet bases. (a) The Kronecker-CS of a 32-channel hyperspectral image. (b) The original hyperspectral image (RGB display). (c) The reconstruction (SP = 33%, PSNR = 35.51 dB; RGB display).
[Figure 7 panel content: (a) vector representation, in which the measurement vector $\mathbf{y} = \mathbf{W}\mathbf{g}$ of length $M_1 M_2 M_3$ is obtained from the sparse vector $\mathbf{g}$ of length $I_1 I_2 I_3$ through the composite dictionary $\mathbf{W} = \mathbf{W}^{(3)} \otimes \mathbf{W}^{(2)} \otimes \mathbf{W}^{(1)}$ of size $(M_1 M_2 M_3 \times I_1 I_2 I_3)$; (b) tensor representation, in which the $(M_1 \times M_2 \times M_3)$ measurement tensor is expressed as a block-sparse Tucker representation with a small block-sparse core tensor and mode-wise factor matrices.]
[Figure 8 panel content: sizes $(1{,}024 \times 1{,}024 \times 32)$ and $(256 \times 256 \times 32)$; sensing matrices $\mathbf{\Phi}^{(1)}$ of size $(M_1 \times I_1)$, $\mathbf{\Phi}^{(2)T}$ of size $(I_2 \times M_2)$, and $\mathbf{\Phi}^{(3)}$ of size $(M_3 \times I_3)$, with $M_1 = M_2 = 585$, $M_3 = 32$, $I_1 = I_2 = 1{,}024$, and $I_3 = 32$.]