IEEE SIGNAL PROCESSING MAGAZINE [99] MARCH 2015
Department of Signal Processing and Acoustics at Aalto
University, Espoo, Finland. His research interests include
sound reproduction, headphone audio, and digital filtering. He
was a member of the organizing committee of the 2013 Audio
Engineering Society 51st International Conference on
Loudspeakers and Headphones, Helsinki, Finland.
Hannes Gamper (hannes.gamper@aalto.fi) received his
Ph.D. degree in media technology from Aalto University, Espoo,
Finland, in 2014. His doctoral research focused on enabling
technologies for audio-augmented reality. In 2012, he was a
visiting scholar at the Human Interface Technology Laboratory
(HIT Lab) in Christchurch, New Zealand. He currently works
as a postdoctoral researcher at Microsoft Research in Redmond,
Washington, United States, but the work reported here was
conducted outside of Microsoft Research. His research interests
include binaural modeling and the analysis, synthesis, and perception
of spatial sound.
Lauri Savioja (lauri.savioja@aalto.fi) received the M.Sc.
(Tech.) and D.Sc. (Tech.) degrees in computer science from the
Helsinki University of Technology (TKK), Espoo, Finland, in
1991 and 1999, respectively. The topic of his doctoral thesis
was room acoustic modeling. He worked at the TKK
Laboratory of Telecommunications Software and Multimedia
as a researcher, lecturer, and professor from 1995 until the
formation of Aalto University, where he is currently a professor
and heads the Department of Media Technology in the
School of Science. His research interests include room acoustics,
virtual reality, and parallel computing.