IEEE SIGNAL PROCESSING MAGAZINE [112] MARCH 2015
pie chart suggests that 61% of the subjects preferred the natural sound rendering, while only 33% preferred the conventional stereo rendering.
To sum up the subjective test results, we found that the natural sound rendering system using the various signal processing techniques explained in this article enhances the listening experience compared to a conventional stereo system. Additionally, adding head tracking to the system would further improve the natural sound rendering, as observed in several studies [10].
CONCLUSIONS AND FUTURE TRENDS
With the advent of low-cost, low-power, small-form-factor, and high-speed multicore embedded processors, we can now implement the aforementioned signal processing techniques in real time and embed the processors into the headphone design. However, various implementation issues should be carefully considered: the computational cost of sound scene decomposition, of HRTF/BRIR filtering in virtualization, and of equalization, as well as the latency in head tracking. One example of such a natural sound rendering system is the four-emitter 3-D audio headphone [39] developed at the Digital Signal Processing Lab at NTU. This system has been psychophysically validated and found to perform much better than the conventional stereo headphone playback system.
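The tension between filtering cost and head-tracking latency can be made concrete with a rough operation-count sketch. Direct-form FIR convolution with a long BRIR costs one multiply per tap per output sample, whereas FFT-based overlap-save convolution amortizes two FFTs and a spectral product over each block of new samples, trading extra buffering latency for fewer operations. The BRIR length, hop sizes, and the 5N log2 N FFT cost model below are illustrative assumptions, not figures from the article:

```python
import math

def direct_cost(brir_len):
    # Direct-form FIR: one multiply-accumulate per tap per output sample.
    return float(brir_len)

def overlap_save_cost(brir_len, hop):
    # Rough per-sample operation count for FFT-based overlap-save:
    # one forward FFT, one complex spectral product, and one inverse
    # FFT per block of `hop` new output samples.  The FFT length must
    # cover hop + brir_len - 1 samples, rounded up to a power of two.
    fft_len = 1
    while fft_len < hop + brir_len - 1:
        fft_len *= 2
    fft_ops = 5 * fft_len * math.log2(fft_len)   # textbook FFT flop estimate
    block_ops = 2 * fft_ops + 6 * fft_len        # two FFTs + complex products
    return block_ops / hop

brir_len = 24000  # e.g., a 0.5-s BRIR at 48 kHz (illustrative figure)
for hop in (512, 4096, 32768):
    print(f"hop={hop:6d}  direct={direct_cost(brir_len):8.0f}  "
          f"overlap-save={overlap_save_cost(brir_len, hop):8.0f} ops/sample")
```

Under this model, larger hop sizes cut the per-sample cost by orders of magnitude but add block-buffering delay, which is exactly the latency that head tracking must stay ahead of; partitioned-convolution schemes are a common compromise.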
Besides the five types of techniques discussed in this article, there have been other efforts to enhance the natural experience of headphone listening. To enable natural pass-through of sound from the outside world without coloration, headphones can be designed with suitable acoustically transparent materials. When this is not effective, microphones integrated into the headphones and associated signal processing techniques, such as equalization and active noise control, are employed. Headphones with built-in microphones open a new dimension: augmenting the listening experience with the physical world.
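A common building block behind such microphone-based processing, including active noise control, is an adaptive filter that learns the acoustic path from a reference microphone to the ear. The following is a minimal least-mean-squares (LMS) sketch, not the system described in the article: the leakage path `h`, the tap count, and the step size `mu` are hypothetical values chosen only so the toy simulation converges.

```python
import random

def lms_cancel(x, d, taps=8, mu=0.05):
    """Plain LMS: adapt an FIR filter w so that w*x tracks d,
    and return the residual e = d - w*x (the 'quieted' signal)."""
    w = [0.0] * taps        # adaptive filter coefficients
    buf = [0.0] * taps      # most recent reference samples
    e = []
    for xn, dn in zip(x, d):
        buf = [xn] + buf[:-1]
        y = sum(wi * bi for wi, bi in zip(w, buf))   # filter output
        en = dn - y                                  # residual error
        w = [wi + mu * en * bi for wi, bi in zip(w, buf)]  # LMS update
        e.append(en)
    return e

random.seed(0)
x = [random.uniform(-1, 1) for _ in range(4000)]     # reference mic signal
h = [0.5, -0.3, 0.2]                                 # hypothetical leakage path
d = [sum(h[k] * x[n - k] for k in range(len(h)) if n - k >= 0)
     for n in range(len(x))]                         # noise reaching the ear
e = lms_cancel(x, d)
before = sum(v * v for v in d[:500]) / 500
after = sum(v * v for v in e[-500:]) / 500
print(f"residual power: {before:.4f} -> {after:.6f}")
```

In a real headphone, the same adaptation principle operates under much harsher conditions (secondary-path delay, nonstationary noise), which is why practical ANC systems use filtered-x variants rather than this plain form.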
The future of headphones for assistive listening applications would be one where listeners cannot differentiate between the virtual acoustic space created by headphone playback and the real acoustic space. This would require a combined effort from the whole audio community, from headphone manufacturers and sound engineers to audio scientists. More information about the content production has to be distributed from the content developers to the end user to enhance the extraction process. Moreover, obtaining and exploiting every individual's anthropometric features or hearing profile is crucial for a natural listening experience. Finally, with more sensors, such as global positioning systems, gyroscopes, and microphones, integrated into headphones, future headphones are becoming more content-, location-, and listener-aware, and hence more intelligent and assistive.
ACKNOWLEDGMENTS
This work is supported by the Singapore National Research Foundation Proof-of-Concept program under grant NRF 2011 NRF-POC001-033. We thank the guest editors and reviewers for their constructive comments and suggestions.
[FIG6] Results of the subjective experiments: (a) MOS for the four measures (mean opinion score, natural sound rendering versus conventional stereo), (b) scatter plot of all scores, and (c) overall preference of the tracks (61% prefer natural sound rendering, 33% prefer conventional stereo, 6% not sure).