Tip The inputs in each of the data segments must be consistently exciting the system. For steady-state data, splitting the data into meaningful segments results in minimum information loss. Avoid making data segments too small.
• Manually replace outliers with NaNs and then use the misdata command to reconstruct the flagged data (see the sketch after this list). This approach treats outliers as missing data and is described in "Handling Missing Data" on page 1-90. Use this method when your data contains several inputs and outputs, and when you have difficulty finding reliable data segments in all variables.
• Remove outliers by prefiltering the data for high-frequency content because
outliers often result from abrupt changes. For more information about
filtering, see “Filtering Data” on page 1-107.
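The following sketch illustrates the first approach. It assumes an iddata object z and an index vector outlierIdx marking the outlier samples; both names are placeholders for your own data, not part of the toolbox.

    y = z.OutputData;          % extract the output signal(s)
    y(outlierIdx) = NaN;       % flag outliers as missing data
    z.OutputData = y;
    ze = misdata(z);           % reconstruct the flagged samples

After reconstruction, ze can be used for estimation in place of the original data.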
Note The estimation algorithm handles outliers automatically by assigning a smaller weight to outlier data. A robust error criterion applies an error penalty that is quadratic for small and moderate prediction errors, and is linear for large prediction errors. Because outliers produce large prediction errors, this approach gives a smaller weight to the corresponding data points during model estimation. The value LimitError (see Algorithm Properties) quantitatively distinguishes between moderate and large outliers.
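The following sketch is illustrative only and does not reproduce the toolbox's internal weighting. It plots a Huber-style penalty next to a purely quadratic one to show why errors beyond a threshold (playing the role of LimitError) contribute less to the criterion.

    e      = linspace(-5, 5, 201);                % range of prediction errors
    limit  = 1.6;                                 % threshold analogous to LimitError
    quad   = e.^2;                                % standard quadratic penalty
    robust = (abs(e) <= limit).*e.^2 + ...
             (abs(e) >  limit).*(2*limit*abs(e) - limit^2);
    plot(e, quad, e, robust)
    legend('Quadratic penalty', 'Robust penalty (linear for large errors)')
    xlabel('Prediction error'), ylabel('Penalty')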
Example – Extracting and Modeling Specific Data Segments
The following example shows how to create a multiexperiment, time-domain
data set by merging only the accurate-data segments and ignoring the rest.
Modeling multiexperiment data sets produces an average model for the
different experiments.
You cannot simply concatenate the good data segments because the transients at the connection points compromise the model. Instead, you must create a multiexperiment iddata object, where each experiment corresponds to a good segment of data, as follows:
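A minimal sketch of this approach, assuming a single iddata object z whose accurate portions lie at the hypothetical sample ranges shown; replace them with the segments identified in your own data.

    z1 = z(122:540);             % first accurate segment
    z2 = z(998:1450);            % second accurate segment
    zm = merge(z1, z2);          % multiexperiment iddata object
    m  = arx(zm, [2 2 1]);       % average model over both experiments

Because merge keeps the segments as separate experiments, no artificial transients are introduced at the segment boundaries during estimation.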