
[standards IN A NUTSHELL] (continued)
IEEE SIGNAL PROCESSING MAGAZINE [176] MARCH 2015
INTRAPREDICTION
Intraprediction is used to reduce the redundancy in the spatial domain of the picture. AVS2 uses block partition-based directional prediction [5]. As shown in Figure 2, besides the square PU partitions, nonsquare partitions, called short distance intraprediction (SDIP), are adopted by AVS2 for more efficient intraluminance prediction [4], where the nearest reconstructed boundary pixels are used as the reference samples. In SDIP, a 2N×2N PU is horizontally or vertically partitioned into four prediction blocks. SDIP adapts better to the image content, especially in edge areas, but to limit complexity it is applied to all CU sizes except the 64×64 CU.
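The SDIP splitting just described can be sketched as follows. This is a minimal illustration: the block geometry (four quarter-height or quarter-width blocks) follows the text, but the coordinate convention and block ordering are assumptions, not taken from the standard.

```python
def sdip_partitions(two_n, direction):
    """Split a two_n x two_n PU into four SDIP prediction blocks.

    direction: 'horizontal' -> four (two_n x two_n/4) blocks stacked vertically
               'vertical'   -> four (two_n/4 x two_n) blocks side by side
    Returns a list of (x, y, width, height) tuples (illustrative convention).
    """
    quarter = two_n // 4
    if direction == 'horizontal':
        return [(0, i * quarter, two_n, quarter) for i in range(4)]
    if direction == 'vertical':
        return [(i * quarter, 0, quarter, two_n) for i in range(4)]
    raise ValueError("direction must be 'horizontal' or 'vertical'")

# A 32x32 PU (2N = 32) split horizontally yields four 32x8 blocks.
print(sdip_partitions(32, 'horizontal'))
```

Each of the four resulting blocks is then predicted from the nearest reconstructed boundary pixels, so the prediction distance stays short in the splitting direction.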
For each prediction block in these partition modes, a total of 33 prediction modes are supported for luminance: 30 angular modes [5], a plane mode, a bilinear mode, and a DC mode. Figure 3 shows the distribution of the prediction directions associated with the 30 angular modes. Each sample in a PU is predicted by projecting its location onto the reference pixels along the selected prediction direction. To improve intraprediction accuracy, subpixel-precision reference samples are interpolated whenever a projected reference sample falls on a noninteger position. The noninteger position is restricted to 1/32-sample precision to avoid floating-point operations, and a four-tap linear interpolation filter is used to generate the subpixel values.
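The 1/32-precision, four-tap interpolation step can be sketched roughly as below. The tap values here are purely illustrative placeholders, not the actual AVS2 filter coefficients, and the padding and rounding conventions are assumptions; the point is only that integer taps plus a shift avoid floating-point arithmetic.

```python
def interpolate_subpixel(ref, idx, frac, coeffs):
    """Four-tap interpolation at reference position idx + frac/32.

    ref    : list of integer reference samples (assumed padded at both ends)
    idx    : integer part of the projected position
    frac   : fractional offset in 1/32-sample units (0..31), selects coeffs
    coeffs : four integer taps summing to 64 (illustrative values only)
    """
    taps = ref[idx - 1: idx + 3]          # two samples on each side
    acc = sum(c * t for c, t in zip(coeffs, taps))
    return (acc + 32) >> 6                # round and normalize by 64

# Hypothetical taps for frac = 16 (halfway between two integer samples).
ref = [10, 20, 30, 40, 50, 60]
print(interpolate_subpixel(ref, 2, 16, (-4, 36, 36, -4)))  # -> 35
```

Because the fractional position is quantized to 1/32, the encoder and decoder can share a small table of 32 coefficient sets and stay entirely in integer arithmetic.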
For the chrominance components, the PU size is always N×N, and five prediction modes are supported: vertical prediction, horizontal prediction, bilinear prediction, DC prediction, and the prediction mode derived from the corresponding luminance prediction mode [6].
INTERPREDICTION
Compared to spatial intraprediction, interprediction exploits the temporal correlation between consecutive pictures to reduce temporal redundancy. Multireference prediction, with both short-term and long-term reference pictures, has been used since the H.264/AVC standard. AVS2 extends the use of long-term reference pictures further: a long-term reference can be constructed from a sequence of decoded pictures, e.g., the background picture used in surveillance coding, which is discussed separately later. For short-term reference prediction, AVS2 defines the F frame as a special P frame [7], in addition to the traditional P and B frames. More specifically, a P frame is a forward-predicted frame using a single reference picture, while a B frame is a bipredicted frame that consists of forward,
[FIG3] An illustration of directional prediction modes: the 30 angular modes (numbered 3–32) are distributed over three zones, alongside the nonangular modes DC (0), plane (1), and bilinear (2).
[FIG4] (a) Temporal multihypothesis mode, in which the motion vector (MV) pointing to one reference block is scaled by the ratio of temporal distances to locate a second reference block. (b) Spatial multihypothesis mode.
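The distance-based motion vector scaling suggested by Figure 4(a) can be sketched as follows. This is a simplified sketch assuming plain linear scaling by temporal distance; the fixed-point arithmetic, rounding, and clipping an actual codec would use are not specified in the text and are assumptions here.

```python
def scale_mv(mv, dist_target, dist_source):
    """Scale a motion vector from one reference distance to another.

    mv          : (mvx, mvy) motion vector toward the source reference
    dist_source : temporal distance from the current frame to that reference
    dist_target : temporal distance to the second (farther) reference
    Returns the linearly scaled motion vector (rounded to integers).
    """
    mvx, mvy = mv
    return (round(mvx * dist_target / dist_source),
            round(mvy * dist_target / dist_source))

# An MV of (8, -4) to ref1 at distance 2, scaled to ref2 at distance 4.
print(scale_mv((8, -4), 4, 2))  # -> (16, -8)
```

The scaled vector lets the decoder fetch a second reference block (ref_blk2 in the figure) without transmitting a second motion vector, which is the point of the temporal multihypothesis mode.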