E-Prime User’s Guide
Chapter 3: Critical Timing
accurately, even when the computer or operating system code reports timing inaccurately.
However, you need to take the time to understand what precision you need, recognize artifacts in
your timing data, and apply methods to reduce or eliminate errors. In section 3.4, we give a brief review of the timing accuracy tests that we perform. You can download some of these tests
from the Psychology Software Tools web site and run them on your own computer.
There are thousands of hardware cards currently available for computers, with new cards and device driver software being released daily. A number of these hardware configurations fail to provide timing accuracy acceptable for professional research because they are optimized for the business world, not for real-time display. Your computer may also have come pre-configured with, or someone may have installed, a program that operates in the background, stealing computer cycles tens of times per second in a way that distorts timing. We provide the tools to check the configuration of a machine to determine whether this is a serious problem.
There has been a long history of attempts to achieve accurate behavioral timing via computerized
experimental research methods (Schneider & Shultz, 1973; Chute, 1986; Schneider, 1988;
Schneider, 1989; Segalowitz & Graves, 1990; Schneider, Zuccolotto, & Tirone, 1993; Cohen et al., 1994; Chute & Westall, 1996). As the complexity of computer hardware and operating
systems has increased, it has become more difficult to program and assess the precision of
behavioral experiments. As computer operating systems have developed more capacity to
perform multiple tasks (e.g., network communications and resource sharing, disk caching,
concurrent processing) and more virtual tasks (e.g., virtual memory management, interrupt
reflection, low-level hardware commands being routed through layers of virtual device drivers
rather than direct hardware access), more effort must be expended to achieve and verify accurate
timing.
Many researchers will say, “I want millisecond precision timing for my experiments.” It would
be helpful to take a moment and write down an operational definition of what that means. If you
were to look at two software packages that both claimed millisecond precision, what quantifiable
evidence would you want before you would be comfortable reporting scientific data with them?
If you interpret millisecond precision to mean that no individual time measurement is ever off by
more than a millisecond, then you cannot use modern personal computers running common
desktop operating systems to conduct the research. At times, the operating system takes processing time away from any running program. Even if the program reads a microsecond-precision clock, its measurements are only as precise as the processor's ability to actually read the clock at the moment it is needed.
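This limitation can be demonstrated with a short Python sketch (not E-Prime script; time.perf_counter_ns() simply serves as a convenient high-resolution clock, and the one-second test duration is arbitrary). The loop reads the clock as fast as possible and records the largest gap between two successive reads. On an idle machine most gaps are only a few microseconds, but whenever the operating system preempts the process the gap can jump to several milliseconds or more.

import time

# Poll a high-resolution clock in a tight loop for roughly one second and
# record the largest gap (in milliseconds) between two successive reads.
largest_gap_ms = 0.0
previous = time.perf_counter_ns()
deadline = previous + 1_000_000_000  # stop after ~1 second of polling

while True:
    now = time.perf_counter_ns()
    gap_ms = (now - previous) / 1_000_000
    if gap_ms > largest_gap_ms:
        largest_gap_ms = gap_ms
    previous = now
    if now >= deadline:
        break

print(f"Largest gap between successive clock reads: {largest_gap_ms:.3f} ms")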
For example, assume that a software application attempts to continuously read a hardware crystal clock that updates exactly every millisecond. If the application reads the clock for a period of 10 seconds (10,000 milliseconds), it should observe the value read from the clock change exactly 10,000 times, and the difference in time between sequential reads should always be either 0 or 1 millisecond. When such a test application is run on a modern operating system, however, it will often observe that the difference between sequential reads of the clock is occasionally significantly greater than 1 millisecond. Assuming that the hardware crystal clock itself runs independently and continuously, these results can only be explained by the application having its reading of the clock paused at times during execution (i.e., while the program is paused the clock continues to increment, so some individual clock ticks are “missed” by the software). We refer to the percentage of clock ticks that are missed by the software in this way as the “miss tick rate.” The maximum amount of time that elapses between two sequential reads of the clock is referred to as the “maximum miss tick duration.”
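These two quantities are easy to compute in a small test program. The following Python sketch (again, not E-Prime script; the millisecond tick counter is derived from time.monotonic_ns(), which stands in for the hardware crystal clock, and the function name miss_tick_test is ours) polls the clock for 10 seconds, counts how many millisecond ticks elapse versus how many pass unobserved, and tracks the largest gap between sequential reads.

import time

def miss_tick_test(duration_ms=10_000):
    def ms_now():
        # Millisecond "tick" counter standing in for a hardware crystal clock.
        return time.monotonic_ns() // 1_000_000

    start = last = ms_now()
    total_ticks = 0    # ticks that elapsed during the test period
    missed_ticks = 0   # ticks that passed without being individually observed
    max_gap = 0        # largest gap between sequential reads, in milliseconds

    while True:
        now = ms_now()
        gap = now - last
        if gap > 0:
            total_ticks += gap
            missed_ticks += gap - 1  # a 1 ms step means no tick was missed
            max_gap = max(max_gap, gap)
            last = now
        if now - start >= duration_ms:
            break

    miss_tick_rate = 100.0 * missed_ticks / total_ticks
    return miss_tick_rate, max_gap

rate, max_duration = miss_tick_test()
print(f"Miss tick rate: {rate:.2f}%   Maximum miss tick duration: {max_duration} ms")

On a well-configured machine the miss tick rate should be very low; a background process that steals computer cycles tens of times per second shows up as a higher miss tick rate and a longer maximum miss tick duration.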