4.2.1. Broadcast RPC Synopsis
#include <rpc/pmap_clnt.h>
...
enum clnt_stat clnt_stat;
...
clnt_stat = clnt_broadcast(prognum, versnum, procnum,
        inproc, in, outproc, out, eachresult)
    u_long prognum;           /* program number */
    u_long versnum;           /* version number */
    u_long procnum;           /* procedure number */
    xdrproc_t inproc;         /* xdr routine for args */
    caddr_t in;               /* pointer to args */
    xdrproc_t outproc;        /* xdr routine for results */
    caddr_t out;              /* pointer to results */
    bool_t (*eachresult)();   /* call with each result gotten */
The procedure eachresult() is called each time a valid result is obtained. It returns a boolean that indicates
whether or not the user wants more responses.
bool_t done;
...
done = eachresult(resultsp, raddr)
    caddr_t resultsp;
    struct sockaddr_in *raddr;    /* Addr of responding machine */
If done is TRUE, then broadcasting stops and clnt_broadcast() returns successfully. Otherwise, the routine
waits for another response. The request is rebroadcast after a few seconds of waiting. If no responses come
back, the routine returns with RPC_TIMEDOUT.
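As a minimal sketch of this interface, the fragment below broadcasts a call to the null procedure (procedure 0, which takes no arguments and returns no results) and prints the address of each machine that responds. The program and version numbers shown are hypothetical placeholders, not part of the synopsis above.

#include <stdio.h>
#include <rpc/rpc.h>
#include <rpc/pmap_clnt.h>
#include <netinet/in.h>
#include <arpa/inet.h>

#define EXAMPLEPROG ((u_long)0x20000099)   /* hypothetical program number */
#define EXAMPLEVERS ((u_long)1)            /* hypothetical version number */

/*
 * Called once for each valid response; returning FALSE tells
 * clnt_broadcast() to keep waiting for further responses.
 */
bool_t
each_result(resultsp, raddr)
    caddr_t resultsp;
    struct sockaddr_in *raddr;
{
        printf("response from %s\n", inet_ntoa(raddr->sin_addr));
        return (FALSE);
}

main()
{
        enum clnt_stat stat;

        /* broadcast the null procedure: no arguments, no results */
        stat = clnt_broadcast(EXAMPLEPROG, EXAMPLEVERS, NULLPROC,
            xdr_void, (caddr_t)NULL, xdr_void, (caddr_t)NULL, each_result);
        if (stat != RPC_SUCCESS && stat != RPC_TIMEDOUT)
                fprintf(stderr, "broadcast error: %s\n", clnt_sperrno(stat));
        exit(0);
}

Because each_result() always returns FALSE, clnt_broadcast() keeps collecting responses until the broadcast times out and then returns RPC_TIMEDOUT.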
4.3. Batching
The RPC architecture is designed so that clients send a call message and wait for servers to reply that the call succeeded. This implies that clients do not compute while servers are processing a call. This is inefficient if the client does not want or need an acknowledgement for every message sent. It is possible for clients to continue computing while waiting for a response, using RPC batch facilities.

RPC messages can be placed in a “pipeline” of calls to a desired server; this is called batching. Batching assumes that: 1) each RPC call in the pipeline requires no response from the server, and the server does not send a response message; and 2) the pipeline of calls is transported on a reliable byte stream transport such as TCP/IP. Since the server does not respond to every call, the client can generate new calls in parallel with the server executing previous calls. Furthermore, the TCP/IP implementation can buffer up many call messages and send them to the server in one write() system call. This overlapped execution greatly decreases the interprocess communication overhead of the client and server processes, and the total elapsed time of a series of calls.
Since the batched calls are buffered, the client should eventually do a nonbatched call in order to flush the
pipeline.
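The essential idiom is sketched below, assuming the standard clnt_call() interface: a batched call passes a NULL XDR routine for the results and a zero timeout, and a final ordinary call (here the null procedure) with a normal timeout flushes the pipeline. The program, version, and procedure numbers are hypothetical placeholders, and the client handle is assumed to have been created with clnttcp_create(), since batching requires a reliable byte stream transport.

#include <stdio.h>
#include <sys/time.h>
#include <rpc/rpc.h>

#define EXAMPLEPROG      ((u_long)0x20000099)  /* hypothetical program number */
#define EXAMPLEVERS      ((u_long)1)           /* hypothetical version number */
#define EXAMPLEPROC_SEND ((u_long)1)           /* hypothetical batched procedure */

static struct timeval batch_timeout = { 0, 0 };   /* do not wait for a reply */
static struct timeval flush_timeout = { 25, 0 };  /* normal timeout for the flush */

/*
 * Queue one call in the pipeline: a NULL xdr routine for the
 * results and a zero timeout mark the call as batched.
 */
void
send_batched(client, msg)
    CLIENT *client;
    char *msg;
{
        if (clnt_call(client, EXAMPLEPROC_SEND,
            xdr_wrapstring, (caddr_t)&msg,
            NULL, (caddr_t)NULL, batch_timeout) != RPC_SUCCESS)
                clnt_perror(client, "batched call");
}

/*
 * A nonbatched call forces the buffered calls out to the
 * server and waits for its reply.
 */
void
flush_pipeline(client)
    CLIENT *client;
{
        if (clnt_call(client, NULLPROC, xdr_void, (caddr_t)NULL,
            xdr_void, (caddr_t)NULL, flush_timeout) != RPC_SUCCESS)
                clnt_perror(client, "batch flush");
}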
A contrived example of batching follows. Assume a string rendering service (like a window system) has
two similar calls: one renders a string and returns void results, while the other renders a string and remains
silent. The service (using the TCP/IP transport) may look like: