
You should use the first example shown above whenever a derived metric will be
calculated from metric1 and metric2, because a consistent set of data is required for
correct results.
You should use the second example shown above when both metrics are needed but
will not be combined or compared. In the latter case it is incorrect to request them
together, because the combined request might fail while separate requests would
succeed.
Every time an executable program is run, its performance characteristics are different.
If it is run under the same conditions, then the differences are slight. If it is run using
different options, databases, and so forth, then the performance metrics can be very
different. The Advisor has no way to tell whether performance data from different
datasets falls under the first or the second situation, so it never returns metrics
from different datasets together.
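The difference is easiest to see in a small model. The following Python sketch is
illustrative only: get_metrics, the datasets, and the metric names are invented
stand-ins, not the Caliper API. It models the rule that every metric in a single
request must come from one dataset.

    # Hypothetical model of the consistency rule; get_metrics and the
    # metric names are invented for illustration, not Caliper's API.
    datasets = [
        {"cpu_cycles": 1200, "instructions": 900},  # dataset 0
        {"cache_misses": 40},                       # dataset 1
    ]

    def get_metrics(*names):
        """Return values for every requested metric from a single dataset,
        or None when no one dataset can supply the whole list."""
        for data in datasets:
            if all(name in data for name in names):
                return [data[name] for name in names]
        return None

    # Combined request: required when the values will be combined or
    # compared, so that both describe the same run.
    print(get_metrics("cpu_cycles", "instructions"))  # [1200, 900]

    # This combined request fails because no single dataset holds both
    # metrics...
    print(get_metrics("cpu_cycles", "cache_misses"))  # None

    # ...yet the separate requests succeed, which is why independent
    # metrics should be requested independently.
    print(get_metrics("cpu_cycles"))    # [1200]
    print(get_metrics("cache_misses"))  # [40]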
Advanced Features
All of the datasets for the current analysis object are sequentially numbered, starting
at zero. Every accessor function defines a special n metric (a “metric” named n) that
can be used like any other metric. Its return value is the ID number of the dataset that
supplied all of the other metric values in the call.
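As a sketch of how the n metric behaves, here is the same kind of invented Python
model (again, the accessor name and metric names are illustrative, not the Caliper
API):

    # Hypothetical model of the special n metric.
    datasets = [
        {"cpu_cycles": 1200},  # dataset 0
        {"cpu_cycles": 1150},  # dataset 1
    ]

    def get_metrics(*names):
        """Answer the request from the first dataset that can supply it;
        the pseudo-metric "n" yields that dataset's ID number."""
        for ds_id, data in enumerate(datasets):
            if all(name in data for name in names if name != "n"):
                return [ds_id if name == "n" else data[name]
                        for name in names]
        return None

    # "n" is requested like any other metric and reports which dataset
    # supplied the other values in the same call.
    cycles, ds_id = get_metrics("cpu_cycles", "n")
    print(cycles, ds_id)  # 1200 0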
Simple rules typically use the base version of the accessor functions to retrieve a
matching set of metrics and perform their analysis on that set. More sophisticated
rules can use the nth version of the accessor functions to retrieve multiple sets of
data for the same metric list.
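The contrast can be sketched with the same invented model. Here get_metrics_nth is
a hypothetical stand-in for an nth accessor, and its extra argument is assumed to be
a dataset number; neither the names nor that assumption come from the Caliper API.

    # Hypothetical model contrasting the base and nth accessor versions.
    datasets = [
        {"cpu_cycles": 1200},   # dataset 0
        {"cpu_cycles": 1150},   # dataset 1
        {"instructions": 900},  # dataset 2
    ]

    def get_metrics_nth(n, *names):
        """Nth version: answer the request from dataset n only, returning
        None when that dataset cannot supply the whole metric list."""
        data = datasets[n]
        if all(name in data for name in names):
            return [data[name] for name in names]
        return None

    def get_metrics(*names):
        """Base version: answer from the first dataset that can supply
        the whole metric list."""
        for n in range(len(datasets)):
            values = get_metrics_nth(n, *names)
            if values is not None:
                return values
        return None

    # A simple rule analyzes one matching set of data...
    print(get_metrics("cpu_cycles"))  # [1200]

    # ...while a more sophisticated rule walks every dataset to gather
    # all of the sets that can answer the same metric list.
    for n in range(len(datasets)):
        values = get_metrics_nth(n, "cpu_cycles")
        if values is not None:
            print(f"dataset {n}: cpu_cycles = {values[0]}")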
A rule can use the special n metric together with the nth accessor functions to
chain several accessor calls: it retrieves performance data from some (initially
unknown) dataset, learns that dataset's ID number from n, and then retrieves further
metrics from the same dataset. This technique is known as chaining.
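A minimal sketch of chaining, under the same invented model and the same assumption
that the nth accessor's extra argument is a dataset number:

    # Hypothetical model of chaining; accessor names, metrics, and values
    # are invented for illustration.
    datasets = [
        {"instructions": 900},                      # dataset 0
        {"cpu_cycles": 1200, "stall_cycles": 300},  # dataset 1
    ]

    def get_metrics(*names):
        """Base version, with the special n metric reporting the ID of
        the dataset that answered the request."""
        for ds_id, data in enumerate(datasets):
            if all(name in data for name in names if name != "n"):
                return [ds_id if name == "n" else data[name]
                        for name in names]
        return None

    def get_metrics_nth(n, *names):
        """Nth version, restricted to dataset n."""
        data = datasets[n]
        if all(name in data for name in names):
            return [data[name] for name in names]
        return None

    # First call: some dataset, initially unknown, answers the request;
    # the n metric reveals which one it was.
    cycles, ds_id = get_metrics("cpu_cycles", "n")
    print(cycles, ds_id)  # 1200 1

    # Chained call: ask that same dataset for further metrics, so every
    # value describes the same run.
    stalls, = get_metrics_nth(ds_id, "stall_cycles")
    print(stalls)  # 300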