The performance of AVS2 is evaluated with three coding configurations, all intra (AI), random access (RA), and low delay (LD), similar to the high-efficiency video coding (HEVC) common test conditions, and the Bjøntegaard delta bit rate is used to measure bit-rate savings. The ultrahigh-definition (UHD) and 1080p test sequences are the common test sequences used in AVS, including some of the test sequences used in HEVC, such as Traffic (UHD) and Kimono1 (1080p). All of these sequences, as well as the surveillance/videoconference sequences used for LD testing, are available on the AVS Web site [21].
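As an illustration of the metric (a minimal sketch, not part of the AVS2 reference software; the function name and the matched rate-PSNR inputs are assumptions following common practice), the Bjøntegaard delta bit rate fits a cubic polynomial to each codec's log bit rate as a function of PSNR, integrates both fits over the overlapping quality range, and converts the average log-rate difference into a percentage:

```python
import numpy as np

def bd_rate(rate_anchor, psnr_anchor, rate_test, psnr_test):
    """Bjontegaard delta bit rate (in percent) between two RD curves.

    Each curve is given as matched lists of bit rates and PSNR values
    (typically four points, one per QP). A negative result means the
    test codec needs fewer bits than the anchor at equal quality.
    """
    log_rate_anchor = np.log(np.asarray(rate_anchor, dtype=float))
    log_rate_test = np.log(np.asarray(rate_test, dtype=float))

    # Cubic fit of log bit rate as a function of PSNR for each codec.
    fit_anchor = np.polyfit(psnr_anchor, log_rate_anchor, 3)
    fit_test = np.polyfit(psnr_test, log_rate_test, 3)

    # Integrate both fits over the overlapping PSNR interval.
    lo = max(min(psnr_anchor), min(psnr_test))
    hi = min(max(psnr_anchor), max(psnr_test))
    int_anchor = (np.polyval(np.polyint(fit_anchor), hi)
                  - np.polyval(np.polyint(fit_anchor), lo))
    int_test = (np.polyval(np.polyint(fit_test), hi)
                - np.polyval(np.polyint(fit_test), lo))

    # Average log-rate gap, converted back to a percentage bit-rate change.
    avg_log_diff = (int_test - int_anchor) / (hi - lo)
    return (np.exp(avg_log_diff) - 1.0) * 100.0
```

Called with an anchor codec's and a test codec's rate-PSNR points, a return value of about -50 corresponds to the roughly 50% bit saving reported for the RA configuration in Table 3.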
Table 3 summarizes the rate-distortion performance of AVS2 for the three test cases. As shown in the table, for the RA and AI configurations, AVS2 achieves performance comparable to HEVC and outperforms AVS1 with significant bit savings, up to 50% for RA. For surveillance and videoconference video coding, AVS2 outperforms HEVC by 32.1%, and the curves in Figure 11 show the results on two surveillance video sequences. With coding configurations better suited to scene video coding, AVS2's gain is even more significant. It should be pointed out that these results were obtained with the current AVS2 reference software, RD9.2, which is still being optimized, so the performance of AVS2 may improve further.
CONCLUSIONS
This column gives an overview of the
upcoming AVS2 standard. AVS2 is an
application-oriented coding standard, and
different coding tools have been developed
according to various application charac-
teristics and requirements. For high-qual-
ity broadcasting, flexible prediction and
transform coding tools have been incorpo-
rated. For surveillance video and video-
conferencing applications, AVS2 bridges
video compression with machine vision by
incorporating smart coding tools such as background picture modeling and location/time information, thereby making video coding smarter and more efficient. Compared to the previous AVS1
coding standards, AVS2 achieves signifi-
cant improvement in coding efficiency
and flexibility. AVS2 has been developed in
accordance with AVS and IEEE IPR poli-
cies to ensure rapid licensing of essential
patents at competitive royalty rates. In the
development of AVS2, the favorability of
licensing terms was also considered in the
adoption of proposals for AVS standards,
and the formation of a patent pool is
expected in the near future.
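To give a flavor of the background picture modeling idea mentioned above (in spirit only; the actual AVS2 background modeling and reference-picture handling are more elaborate, and the function names here are hypothetical), a scene-video encoder can maintain a slowly updated background picture, code it once at high quality, and then predict mostly static blocks from it:

```python
import numpy as np

def update_background(background, frame, alpha=0.02):
    """Running-average background model for a static-camera scene.

    Slowly blends each new frame into the background estimate, so
    moving foreground objects fade out while the static scene remains.
    """
    return (1.0 - alpha) * background + alpha * frame

def build_background_picture(frames):
    """Builds an 8-bit background picture from a list of grayscale
    frames (2-D numpy arrays of identical shape)."""
    background = frames[0].astype(np.float64)
    for frame in frames[1:]:
        background = update_background(background, frame.astype(np.float64))
    return np.clip(np.rint(background), 0, 255).astype(np.uint8)
```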
Several directions are currently being
explored for future extensions of AVS2,
including three-dimensional video cod-
ing and media description for smarter
coding. Related standardization work has
started in the AVS Working Group.
RESOURCES
AVS documents and reference software
can be found in [21]. AVS product information can be found in [22].
ACKNOWLEDGMENT
This research was sponsored by the
National Science Foundation of China
under award 61322106.
AUTHORS
Siwei Ma (swma@pku.edu.cn) is a professor
at the National Engineering Lab of Video
Technology, Peking University, China, and a
cochair of the AVS Video Subgroup.
Tiejun Huang (tjhuang@pku.edu.cn) is
a professor at the National Engineering Lab
of Video Technology, Peking University,
China, and the secretary-general of the AVS
Working Group.
Cliff Reader (cliff@reader.com) is an
adjunct professor at the National
Engineering Lab of Video Technology, and
the chair of the AVS Intellectual Property
Rights Subgroup.
Wen Gao (wgao@pku.edu.cn) is a pro-
fessor at the National Engineering Lab of
Video Technology, Peking University, China,
and the chair of the AVS Working Group.
REFERENCES
[1] ITU-T, “HSTP-MCTB Media coding toolbox for
IPTV: Audio and video codecs,” technical paper, ITU-
T Study Group 16 Working Party 3 meeting, Geneva,
Switzerland, 10 July 2009.
[2] S. Ma, S. Wang, and W. Gao, “Overview of IEEE
1857 video coding standard,” in Proc. IEEE Int. Conf.
Image Processing, Melbourne, Australia, Sept.
2013, pp. 1500–1504.
[3] Q. Yu, S. Ma, Z. He, Y. Ling, Z. Shao, L. Yu, W. Li, X. Wang, Y. He, M. Gao, X. Zheng, J. Zheng, I.-K. Kim, S. Lee, and J. Park, “Suggested video platform for AVS2,” in Proc. 42nd AVS Meeting, Guilin, China, Sept. 2012, AVS_M2972.
[4] Q. Yu, X. Cao, W. Li, Y. Rong, Y. He, X. Zheng, and J. Zheng, “Short distance intra prediction,” in Proc. 46th AVS Meeting, Shenyang, China, Sept. 2013, AVS_M3171.
[5] Y. Piao, S. Lee, and C. Kim, “Modified intra mode
coding and angle adjustment,” in Proc. 48th AVS
Meeting, Beijing, China, Apr. 2014, AVS_M3304.
[6] Y. Piao, S. Lee, I.-K. Kim, and C. Kim, “Derived
mode (DM) for chroma intra prediction,” in Proc.
44th AVS Meeting, Luoyang, China, Mar. 2013, AVS_
M3042.
[7] Y. Lin and L. Yu, “F frame CE: Multi forward
hypothesis prediction,” in Proc. 48th AVS Meeting,
Beijing, China, Apr. 2014, AVS_M3326.
[8] Z. Shao and L. Yu, “Multi-hypothesis skip/direct
mode in P frame,” in Proc. 47th AVS Meeting, Shenzhen, China, Dec. 2013, AVS_M3256.
[9] Y. Ling, X. Zhu, L. Yu, J. Chen,
S. Lee, Y. Piao, and
C. Kim, “Multi-hypothesis mode for AVS2,” in Proc.
47th AVS Meeting, Shenzhen, China, Dec. 2013,
AVS_M3271.
[10] I.-K. Kim, S. Lee, Y. Piao, and C. Kim, “Direc-
tional multi-hypothesis prediction (DMH) for AVS2,”
in Proc. 45th AVS Meeting, Taicang, China, June
2013, AVS_M3094.
[11] H. Lv, R. Wang, Z. Wang, S. Dong, X. Xie, S. Ma, and T. Huang, “Sequence level adaptive interpolation filter for motion compensation,” in Proc. 47th AVS Meeting, Shenzhen, China, Dec. 2013, AVS_M3253.
[12] Z. Wang, H. Lv, X. Li, R. Wang, S. Dong, S. Ma,
T. Huang, and W. Gao, “Interpolation improve-
ment for chroma motion compensation,” in Proc.
48th AVS Meeting, Beijing, China, Apr. 2014, AVS_M3348.
[13] J. Ma, S. Ma, J. An, K. Zhang, and S. Lei, “Progressive motion vector precision,” in Proc. 44th AVS Meeting, Luoyang, China, Mar. 2013, AVS_M3049.
[14] S. Lee, I.-K. Kim, M.-S. Cheon, N. Shlyakhov,
and Y. Piao, “Proposal for AVS2.0 reference software,”
in Proc. 42nd AVS Meeting, Guilin, China, Sept.
2012, AVS_M2973.
[15] W. Li, Y. Yuan, X. Cao, Y. He, X. Zheng, and J. Zheng, “Non-square quad-tree transform,” in Proc.
45th AVS Meeting, Taicang, China, June 2013, AVS_
M3153.
[16] J. Wang, X. Wang, T. Ji, and D. He, “Two-level
transform coefficient coding,” in Proc. 43rd AVS
Meeting, Beijing, China, Dec. 2012, AVS_M3035.
[17] X. Wang, J. Wang, T. Ji, and D. He, “Intra
prediction mode based context design,” in Proc.
45th AVS Meeting, Taicang, China, June 2013,
AVS_M3103.
[18] J. Chen, S. Lee, C. Kim, C.-M. Fu, Y.-W. Huang, and S. Lei, “Sample adaptive offset for AVS2,” in Proc. 46th AVS Meeting, Shenyang, China, Sept. 2013, AVS_M3197.
[19] X. Zhang, J. Si, S. Wang, S. Ma, J. Cai, Q. Chen, Y.-W. Huang, and S. Lei, “Adaptive loop filter for AVS2,” in Proc. 48th AVS Meeting, Beijing, China, Apr. 2014, AVS_M3292.
[20] S. Dong, L. Zhao, P. Xing, and X. Zhang, “Sur-
veillance video coding platform for AVS2,” in Proc.
47th AVS Meeting, Shenzhen, China, Dec. 2013,
AVS_M3221.
[21] AVS Working Group Web Site. [Online]. Avail-
able: http://www.avs.org.cn
[22] AVS Industry Alliance Web Site. [Online]. Avail-
able: http://www.avsa.org.cn
[SP]