Denormalized Operands
Denormalized values usually occur as the result of an operation that
underflows. On HP 9000 systems, the occurrence of a denormalized
operand or result, either at an intermediate stage of a computation or at
the end, can reduce the speed of an operation significantly. There are
several solutions to this problem:
• Change single-precision data to double-precision.
• Assign the value zero to data that would normally be denormalized.
• On systems that support it, enable flush-to-zero mode.
• Scale the entire data set upwards in magnitude so that the smallest
  values that occur are guaranteed to be normalized (a sketch of this
  approach follows the list).
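A minimal sketch of the last approach: multiply the data by a power of two
chosen so that intermediate results stay in the normalized range, then remove
the scale factor from the final result. The subroutine and the factor 2.0**40
are illustrative only; choose a factor from the range of your own data.
      SUBROUTINE SCALED_SUMSQ(V, N, RESULT)
      INTEGER N, I
      REAL V(N), RESULT, SCALE
C     The elements of V are small enough that V(I)*V(I) would underflow
C     to a denormalized value.  Scaling by a power of two keeps each
C     square normalized, and powers of two introduce no rounding error.
      SCALE = 2.0**40
      RESULT = 0.0
      DO 10 I = 1, N
   10 RESULT = RESULT + (V(I)*SCALE) * (V(I)*SCALE)
C     Remove the scale factor (each term carried a factor of SCALE**2).
      RESULT = RESULT / (SCALE*SCALE)
      RETURN
      END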
The first solution applies primarily to code that uses single-precision
data. Because the double-precision format has a much wider exponent
range, a value that is denormalized in single precision is normalized in
double precision. On HP 9000 systems, denormalized numbers are so costly
that it is worth converting to double precision even if this means
converting several operands from single precision to double precision,
performing several operations, and then converting the result back to
single precision. The code will still run much faster than it would if it
had to process a single denormalized operand.
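As a minimal sketch (the subroutine name and operands are illustrative, not
part of the original example), a chain of single-precision multiplications
whose intermediate products would underflow can be carried out in double
precision, with only the final value converted back:
      SUBROUTINE TRIPLE_PRODUCT(X, Y, Z, RESULT)
      REAL X, Y, Z, RESULT
      DOUBLE PRECISION DX, DY, DZ
C     Promote the single-precision operands to double precision so that
C     intermediate products that would be denormalized in single
C     precision remain normalized.
      DX = DBLE(X)
      DY = DBLE(Y)
      DZ = DBLE(Z)
C     Compute entirely in double precision, then convert the final value
C     back to single precision.  The final conversion may still yield a
C     denormalized single-precision value, but the intermediate
C     operations avoid denormalized operands.
      RESULT = SNGL(DX * DY * DZ)
      RETURN
      END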
The second solution is useful only if you can determine that your
algorithm will work equally well when a denormalized value is treated
as zero. In this case, you can explicitly assign it the value 0 before
entering a loop where it is repeatedly accessed. For example, if A is
denormalized, the following code will run very slowly on an HP 9000
system:
      SUBROUTINE VECTOR_SCALE(A, V)
      REAL A, V(1000)
      PRINT *, 'A IS', A
      DO 10 I = 1, 1000
   10 V(I) = V(I) * A
      RETURN
      END
If A is likely to be denormalized once in a while, it may be a good idea to
add the following line before the loop:
      IF (ABS(A) .LT. 1.1754944E-38) A = 0.0
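In context, the guarded subroutine might look like the following; the
threshold 1.1754944E-38 is the smallest normalized single-precision value,
so any smaller magnitude is treated as zero:
      SUBROUTINE VECTOR_SCALE(A, V)
      REAL A, V(1000)
      PRINT *, 'A IS', A
C     Treat a denormalized scale factor as zero so that the multiply
C     inside the loop never sees a denormalized operand.
      IF (ABS(A) .LT. 1.1754944E-38) A = 0.0
      DO 10 I = 1, 1000
   10 V(I) = V(I) * A
      RETURN
      END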