Zoom out Search Issue
IEEE SIGNAL PROCESSING MAGAZINE [30] MARCH 2015
acoustic media. He founded Sensear in 2006 and is also its technol-
ogy inventor. He is a member of the IEEE Signal Processing Society
Technical Committee on Audio and Acoustic Signal Processing and
is an associate editor of Journal of the Franklin Institute.
REFERENCES
[1]H.Luts,K.Eneman,J.Wouters,M.Schulte,M.Vormann,M.Buechler,N.Dil-
lier,R.Houben,W. A.Dreschler,M.Froehlich,H.Puder,G.Grimm,V.Hohmann,
A. Leijon, A. Lombard, D. Mauler, and A. Spriet, “Multicenter evaluation of signal
enhancement algorithms for hearing aids,” J. Acoust. Soc. Amer., vol. 127, no. 3, pp.
2054–2063, Mar. 2010.
[2]M.Brandstein and D.Ward,Microphone Arrays: Signal Processing Techniques
and Applications.New York:Springer,2001.
[3]J.Blauert,The Technology of Binaural Listening.New York:Springer,2013.
[4]S.Makino,T.-W.Lee, and H.Sawada,Blind Speech Separation.New York:
Springer, 2007.
[5]J.Blauert,Spatial Hearing: The Psychophysics of Human Sound Localisation.
Cambridge, MA: MIT, 1997.
[6] R. Martin, U. Heute, and C. Antweiler, Advances in Digital Speech Transmission.
Hoboken, NJ: Wiley, 2008.
[7] R. O. Schmidt, “Multiple emitter location and signal parameter estimation,” IEEE
Trans. Antennas Propag., vol. 34, pp. 276–280, Mar. 1986.
[8] C. Knapp and G. C. Carter, “The generalized correlation method for estimation
of time delay,” IEEE Trans. Acoust., Speech, Signal Processing, vol. 24, no. 4, pp.
320–327, 1976.
[9] J. Benesty, “Adaptive eigenvalue decomposition algorithm for passive acoustic
source localization,” J. Acoust. Soc. Amer., vol. 107, no. 1, pp. 384–391, Jan. 2000.
[10] A. Lombard, Y. Zheng, H. Buchner, and W. Kellermann, “TDOA estimation for
multiple sound sources in noisy and reverberant environments using broadband inde-
pendent component analysis,” IEEE Trans. Audio, Speech, Lang. Processing, vol. 19,
no. 6, pp. 1490–1503, Aug. 2011.
[11] R. O. Duda and W. L. Martens, “Range dependence of the response of a spherical
head model,” J. Acoust. Soc. Amer., vol. 104, no. 5, pp. 3048–3058, 1998.
[12] D. L. Wang and G. Brown, Computational Auditory Scene Analysis: Principles,
Algorithms and Applications.New York:IEEE Press/Wiley-Interscience,2007.
[13] T. May, S. van de Par, and A. Kohlrausch, “A probabilistic model for robust local-
ization based on a binaural auditory front-end,” IEEE Trans. Audio, Speech, Lang.
Processing, vol. 19, no. 1, pp. 1–13, Jan. 2011.
[14] J. Scheuing and B. Yang, “Disambiguation of TDOA estimation for multiple sourc-
es in reverberant environments,” IEEE Trans. Audio, Speech, Lang. Processing, vol.
16, no. 8, pp. 1479–1489
,2008.
[15] G. W. Elko, “Microphone array systems for hands-free telecommunication,”
Speech Commun., vol. 20, no. 3–4, pp. 229–240, 1996.
[16] J. M. Kates and M. R. Weiss, “A comparison of hearing-aid array-processing tech-
niques,” J. Acoust. Soc. Amer., vol. 99, no. 5, pp. 3138–3148, May 1996.
[17] S. Doclo and M. Moonen, “Superdirective beamforming robust against micro-
phone mismatch,” IEEE Trans. Audio, Speech, Lang. Processing, vol. 15, no. 2, pp.
617–631, Feb. 2007.
[18] E. Mabande, A. Schad, and W. Kellermann, “Design of robust superdirective
beamformers as a convex optimization problem,” in Proc. IEEE Int. Conf. Acoustics,
Speech, Signal Processing,Taipei, Taiwan,Apr.2009, pp. 77–80.
[19] T. Lotter and P. Vary, “Dual-channel speech enhancement by superdirective
beamforming,” EURASIP J. Appl. Signal Processing, vol. 2006, no. 1, pp. 175–175,
Jan. 2006.
[20] B. V. Veen and K. Buckley, “Beamforming: A versatile approach to spatial filter-
ing,” IEEE ASSP Mag., vol. 5, no. 2, pp. 4–24, 1988.
[21] J. Capon, “High resolution frequency-wavenumber spectrum analysis,” Proc.
IEEE, vol. 57, no. 7, pp. 1408–1418, Aug. 1969.
[22] L. J. Griffiths and C. W. Jim, “An alternative approach to linearly constrained
adaptive beamforming,” IEEE Trans. Antennas Propag., vol. 30, no. 1, pp. 27–34,
Jan. 1982.
[23] O. Hoshuyama, A. Sugiyama, and A. Hirano, “A robust adaptive beamformer for
microphone arrays with a blocking matrix using constrained adaptive filters,” IEEE
Trans. Signal Processing, vol. 47, no. 10, pp. 2677–2684, Oct. 1999.
[24] S. Gannot, D. Burshtein, and E. Weinstein, “Signal enhancement using beam-
forming and non-stationarity with applications to speech,” IEEE Trans. Signal Pro-
cessing, vol. 49, no. 8, pp. 1614–1626, Aug. 2001.
[25] A. Krueger, E. Warsitz, and R. Haeb-Umbach, “Speech enhancement with a GSC-
like structure employing eigenvector-based transfer function ratios estimation,” IEEE
Trans. Audio, Speech, Lang. Processing, vol. 19, no. 1, pp. 206–218, Jan. 2011.
[26] H. Cox, R. M. Zeskind, and M. M. Owen, “Robust adaptive beamforming,”
IEEE Trans. Acoust., Speech, Signal Processing, vol. 35, no. 10, pp. 1365–1376,
Oct. 1987.
[27] A. Spriet, M. Moonen, and J. Wouters, “Spatially pre-processed speech distortion
weighted multi-channel Wiener filtering for noise reduction,” Signal Process., vol. 84,
no. 12, pp. 2367–2387, Dec. 2004.
[28] F. L. Luo, J. Y. Yang, C. Pavlovic, and A. Nehorai, “Adaptive null-forming scheme
in digital hearing aids,” IEEE Trans. Signal Processing, vol. 50, no. 7, pp. 1583
–1590,
2002.
[29] J. B. Maj, L. Royackers, M. Moonen, and J. Wouters, “Comparison of adaptive
noise reduction algorithms in dual microphone hearing aids,” Speech Commun., vol.
48, no. 8, pp. 957–960, Aug. 2006.
[30] B. Cornelis, M. Moonen, and J. Wouters, “Speech intelligibility improvements
with hearing aids using bilateral and binaural adaptive multichannel Wiener filtering
based noise reduction,” J. Acoust. Soc. Amer., vol. 131, no. 6, pp. 4743–4755, June
2012.
[31] S. Doclo, A. Spriet, J. Wouters, and M. Moonen, “Frequency-domain criterion
for speech distortion weighted multichannel Wiener filter for robust noise reduction,”
Speech Commun., vol. 49, no. 7–8, pp. 636–656, July–Aug. 2007.
[32] L. W. Brooks and I. S. Reed, “Equivalence of the likelihood ratio processor, the
maximum signal-to-noise ratio filter, and the Wiener filter,” IEEE Trans. Aerosp. Elec-
tron. Syst., vol. 8, no. 5, pp. 690–692, 1972.
[33] L. Parra and C. Spence, “Convolutive blind separation of non-stationary sources,”
IEEE Trans. Speech Audio Processing, vol. 8, no. 3, pp. 320–327, May 2000.
[34] S. Araki, R. Mukai, S. Makino, T. Nishikawa, and H. Saruwatari, “The fundamental
limitation of frequency domain blind source separation for convolutive mixtures of
speech,” IEEE Trans. Speech Audio Processing, vol. 11, no. 2, pp. 109–116, Mar. 2003.
[35] H. Buchner, R. Aichner, and W. Kellermann, “A generalization of blind source
separation algorithms for convolutive mixtures based on second order statistics,” IEEE
Trans. Speech Audio Processing, vol. 13, no. 1, pp. 120–134, Jan. 2005.
[36] P. Smaragdis, “Blind separation of convolved mixtures in the frequency domain,”
Neurocomputing, vol. 22, nos. 1–3, pp. 21–34, Nov. 1998.
[37] H. Sawada, R. Mukai, S. Araki, and S. Makino, “A robust and precise method for
solving the permutation problem of frequency-domain blind source separation,” IEEE
Trans. Speech Audio Processing, vol. 12, no. 5, pp. 530–538, Sept. 2004.
[38] K. Matsuoka and S. Nakashima, “Minimal distortion principle for blind source
separation,” in Proc. Independent Component Analysis and Signal Separation,Dec.
2001, pp. 722–727.
[39] S. Araki, S. Makino, Y. Hinamoto, R. Mukai, T. Nishikawa, and H. Saruwatari,
“Equivalence between frequency domain blind source separation and frequency do-
main adaptive beamforming for convolutive mixtures,” EURASIP J. Appl. Signal Pro-
cess., vol. 2003, no. 11, pp. 1157–1166, Nov. 2003.
[40] Y. Zheng, K. Reindl, and W. Kellermann, “Analysis of dual-channel ICA-based
blocking matrix for improved noise estimation,” EURASIP J. Appl. Signal Processing,
vol. 26, 2014. Doi:10.1186/1687-6180-2014-26. [Online]. Available: http://asp.eurasip
journals.com/content/2014/1/26
[41] S. Nordholm, I. Claesson, and M. Dahl, “Adaptive microphone array employing
calibration signals: An analytical evaluation,”
IEEE Trans. Speech Audio Processing,
vol. 7, no. 3, pp. 241–252, May 1999.
[42] T. Gerkmann, C. Breithaupt, and R. Martin, “Improved a posteriori speech pres-
ence probability estimation based on a likelihood ratio with fixed priors,” IEEE Trans.
Audio, Speech, Lang. Processing, vol. 16, no. 5, pp. 910–919, July 2008.
[43]K.Reindl,S.Markovich-Golan,H.Barfuss,S.Gannot, and W.Kellermann,
“Geometrically constrained TRINICON-based relative transfer function estimation in
underdetermined scenarios,” in Proc. IEEE Workshop Applications of Signal Pro-
cessing to Audio and Acoustics (WASPAA),Oct.2013.
[44] T. Van den Bogaert, S. Doclo, J. Wouters, and M. Moonen, “The effect of multi-
microphone noise reduction systems on sound source localization by users of binaural
hearing aids,” J. Acoust. Soc. Amer., vol. 124, no. 1, pp. 484–497, July 2008.
[45] G. Grimm, V. Hohmann, and B. Kollmeier, “Increase and subjective evaluation
of feedback stability in hearing aids by a binaural coherence-based noise reduction
scheme,” IEEE Trans. Audio, Speech, Lang. Processing, vol. 17, no. 7, pp. 1408–1419,
Sept. 2009.
[46]K.Reindl,Y.Zheng,A.Schwarz,S.Meier,R.Maas,A.Sehr, and W.Kellermann,
“A stereophonic acoustic signal extraction scheme for noisy and reverberant environ-
ments,” Comput. Speech Lang. (CSL), vol. 27, no. 3, pp. 726–745, May 2012.
[47] D. Welker, J. Greenberg, J. Desloge, and P. Zurek, “Microphone-array hearing
aids with binaural output–Part II: A two-microphone adaptive system,” IEEE Trans.
Speech Audio Processing, vol. 5, no. 6, pp. 543–551, Nov. 1997.
[48] B. Cornelis, S. Doclo, T. Van den Bogaert, J. Wouters, and M. Moonen, “Theoreti-
cal analysis of binaural multi-microphone noise reduction techniques,” IEEE Trans.
Audio, Speech, Lang. Processing, vol. 18, no. 2, pp. 342–355, Feb. 2010.
[49] E. Hadad, S. Gannot, and S. Doclo, “Binaural linearly constrained minimum vari-
ance beamformer for hearing aid applications,” in Proc. Int. Workshop on Acoustic
Signal Enhancement (IWAENC), Aachen, Germany, Sept. 2012, pp. 117–120.
[50] D. Marquardt, V. Hohmann, and S. Doclo, “Perceptually motivated coherence
preservation in multi-channel Wiener filtering based noise reduction for binaural
hearing aids,” in Proc. IEEE Int. Conf. Acoustics, Speech, Signal Processing,Flor-
ence, Italy, May 2014, pp. 3688–3692.
[SP]
Previous Page | Contents | Zoom in | Zoom out | Front Cover | Search Issue | Next Page
q
q
M
M
q
q
M
M
q
M
THE WORLD’S NEWSSTAND
®
Previous Page | Contents | Zoom in | Zoom out | Front Cover | Search Issue | Next Page
q
q
M
M
q
q
M
M
q
M
THE WORLD’S NEWSSTAND
®