Sharing data among programs
If you are designing an application that requires multiple threads of control that share the
same data, the design can take either of two forms:

•  The program makes calls to the threads library:
      /usr/lib/libpthread.sl
   which creates multiple threads executing in a single process and therefore all sharing the
   same address space.

•  The application consists of several programs that run simultaneously in separate
   processes and that access an HP-UX shared memory segment.
The first approach is beyond the scope of this manual and requires an understanding of how
to call the threads library.¹ The second approach is described here.
To share data among several HP Fortran programs that are executing simultaneously in
separate processes, use the $HP$ SHARED_COMMON directive. This directive enables you to
create a common block that is accessible by HP Fortran programs executing in different
processes.
The $HP$ SHARED_COMMON directive causes the compiler to insert HP-UX system calls to
perform shared memory operations. To the programmer, the programs sharing the memory
segment appear as though they were program units in the same program, accessing a set of
common block variables.
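In outline, the directive accompanies the declaration of the common block to be shared in each
program unit that uses it. The fragment below is a minimal sketch only: the directive form shown
here (the directive keyword followed by the common block name), the block name /shared_data/,
and the variable names are assumptions for illustration; check the directive's reference
description for the exact syntax.

      PROGRAM shared_demo
      ! Sketch only: directive form and names are assumptions.
!$HP$ SHARED_COMMON /shared_data/
      COMMON /shared_data/ counter, status_flag
      INTEGER counter, status_flag

      status_flag = 1                  ! other processes attached to /shared_data/ see this value
      PRINT *, 'status_flag set to ', status_flag
      END PROGRAM shared_demo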
Following are two programs to illustrate how the $HP$ SHARED_COMMON directive works:

•  The first program, go_to_sleep.f90, must execute first. Because it executes first, it creates
   the shared memory segment and then enters a DO loop, where it waits until the second
   program starts to execute. You can use the ipcs -m command to confirm that a shared
   memory segment has been created.

•  When the second program, wake_up.f90, starts to execute, it writes to the shared common
   block variables, one of which causes go_to_sleep.f90 to break out of the DO loop and run
   to completion.
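The original listings are not reproduced in this excerpt; the sketches below show one way the
two programs might be written. They are minimal sketches, not the original listings: the
directive form !$HP$ SHARED_COMMON /flags/, the common block name /flags/, and the variable
names wake_flag and shared_value are assumptions.

      ! go_to_sleep.f90 -- run this first; it creates the shared memory segment.
      PROGRAM go_to_sleep
!$HP$ SHARED_COMMON /flags/            ! assumed directive form; see the directive reference
      COMMON /flags/ wake_flag, shared_value
      INTEGER wake_flag, shared_value

      wake_flag = 0
      PRINT *, 'Waiting for wake_up to run (check the segment with ipcs -m)...'
      DO                               ! spin until wake_up.f90 changes wake_flag
         IF (wake_flag /= 0) EXIT
      END DO
      PRINT *, 'Woken up; shared_value = ', shared_value
      END PROGRAM go_to_sleep

      ! wake_up.f90 -- run this second, from a separate process.
      PROGRAM wake_up
!$HP$ SHARED_COMMON /flags/            ! same assumed directive form as above
      COMMON /flags/ wake_flag, shared_value
      INTEGER wake_flag, shared_value

      shared_value = 42                ! data passed through the shared block
      wake_flag = 1                    ! releases go_to_sleep from its DO loop
      END PROGRAM wake_up

Compile each program separately (for example, f90 -o go_to_sleep go_to_sleep.f90), start
go_to_sleep in one shell, confirm the segment with ipcs -m, and then run wake_up from a
second shell.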
1. Specifying the +Oparallel option causes the compiler to transform eligible loops in
an HP Fortran program for parallel execution on HP 9000 systems. For information
about compiling for parallel execution, see “Compiling for parallel execution” on
page 162.