chi-square is computed for all other 2 × 2 tables. For tables with any number of
rows and columns, select Chi-square to calculate the Pearson chi-square and the
likelihood-ratio chi-square. When both table variables are quantitative,
Chi-square yields the linear-by-linear association test.
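As an illustrative sketch only (SciPy and the example counts are assumptions, not the program's own computation), the following shows how the Pearson chi-square, the likelihood-ratio chi-square, and the linear-by-linear association statistic can be obtained for a hypothetical 3 × 2 table, using integer scores 1, 2, 3 for the ordered categories:

import numpy as np
from scipy.stats import chi2, chi2_contingency

table = np.array([[20, 30],
                  [25, 25],
                  [10, 40]])            # hypothetical observed counts

# Pearson chi-square
pearson_chi2, p_pearson, dof, expected = chi2_contingency(table)

# Likelihood-ratio (G) chi-square
lr_chi2, p_lr, _, _ = chi2_contingency(table, lambda_="log-likelihood")

# Linear-by-linear association (Mantel-Haenszel): (N - 1) * r**2 computed
# from case-level scores, referred to a chi-square distribution with 1 df
n = table.sum()
row_scores = np.repeat(np.arange(1, table.shape[0] + 1), table.sum(axis=1))
col_scores = np.concatenate([np.repeat(np.arange(1, table.shape[1] + 1), row)
                             for row in table])
r = np.corrcoef(row_scores, col_scores)[0, 1]
linear_by_linear = (n - 1) * r ** 2
p_linear = chi2.sf(linear_by_linear, df=1)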
Correlations. For tables in which both rows and columns contain ordered values,
Correlations yields Spearman's correlation coefficient, rho (numeric data only).
Spearman's rho is a measure of association between rank orders. When both table
variables (factors) are quantitative, Correlations yields the Pearson correlation
coefficient, r, a measure of linear association between the variables.
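As a minimal sketch (the variable names and values are hypothetical, and SciPy is not part of the product), both coefficients can be computed from case-level data as follows:

from scipy.stats import pearsonr, spearmanr

satisfaction = [1, 2, 2, 3, 3, 3, 4, 4, 5, 5]    # ordered row variable
income_band  = [1, 1, 2, 2, 3, 3, 3, 4, 4, 5]    # ordered column variable

rho, p_rho = spearmanr(satisfaction, income_band)   # rank-order association
r, p_r = pearsonr(satisfaction, income_band)        # linear association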
Nominal. For nominal data (no intrinsic order, such as Catholic, Protestant, and
Jewish), you can select Phi (coefficient) and Cramér's V, Contingency coefficient,
Lambda (symmetric and asymmetric lambdas and Goodman and Kruskal's tau), and
Uncertainty coefficient.
Contingency coefficient. A measure of association based on chi-square. The
value ranges between zero and 1, with zero indicating no association between
the row and column variables and values close to 1 indicating a high degree of
association between the variables. The maximum value possible depends on the
number of rows and columns in a table.
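As a sketch under the usual definition C = sqrt(chi-square / (chi-square + N)), with hypothetical counts and SciPy used only for illustration:

import numpy as np
from scipy.stats import chi2_contingency
from scipy.stats.contingency import association

table = np.array([[20, 30], [25, 25], [10, 40]])
chi2_stat, _, _, _ = chi2_contingency(table)
n = table.sum()

C = np.sqrt(chi2_stat / (chi2_stat + n))        # contingency coefficient
C_scipy = association(table, method="pearson")  # same statistic via SciPy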
Phi and Cramér's V. Phi is a chi-square-based measure of association that involves
dividing the chi-square statistic by the sample size and taking the square root of
the result. Cramér's V is a chi-square-based measure of association that also
adjusts for the dimensions of the table, so its value stays between 0 and 1 for
tables of any size.
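A minimal sketch of both statistics, assuming a hypothetical table of counts (for a 2 × 2 table, phi and Cramér's V coincide):

import numpy as np
from scipy.stats import chi2_contingency
from scipy.stats.contingency import association

table = np.array([[20, 30], [25, 25], [10, 40]])
chi2_stat, _, _, _ = chi2_contingency(table, correction=False)
n = table.sum()

phi = np.sqrt(chi2_stat / n)                              # phi coefficient
cramers_v = np.sqrt(chi2_stat / (n * (min(table.shape) - 1)))
cramers_v_scipy = association(table, method="cramer")     # same V via SciPy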
Lambda. A measure of association which reflects the proportional reduction in
error when values of the independent variable are used to predict values of the
dependent variable. A value of 1 means that the independent variable perfectly
predicts the dependent variable. A value of 0 means that the independent variable
is no help in predicting the dependent variable.
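As a sketch of the asymmetric form of Goodman and Kruskal's lambda (hypothetical counts; the column variable is treated as dependent and the row variable as independent):

import numpy as np

table = np.array([[20, 30],
                  [25, 25],
                  [10, 40]])
n = table.sum()

# Prediction errors without the predictor: always guess the modal column
errors_without = n - table.sum(axis=0).max()
# Prediction errors with the predictor: guess each row's modal column
errors_with = n - table.max(axis=1).sum()

lambda_asymmetric = (errors_without - errors_with) / errors_without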
Uncertainty coefficient. A measure of association that indicates the proportional
reduction in error when values of one variable are used to predict values of the
other variable. For example, a value of 0.83 indicates that knowledge of one
variable reduces error in predicting values of the other variable by 83%. The
program calculates both symmetric and asymmetric versions of the uncertainty
coefficient.
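As a sketch of both versions, expressed through entropies of the row, column, and joint distributions (hypothetical counts; NumPy is used only for illustration):

import numpy as np

table = np.array([[20, 30],
                  [25, 25],
                  [10, 40]], dtype=float)
p = table / table.sum()               # joint probabilities

def entropy(probs):
    probs = probs[probs > 0]
    return -(probs * np.log(probs)).sum()

h_row = entropy(p.sum(axis=1))        # H(row)
h_col = entropy(p.sum(axis=0))        # H(column)
h_joint = entropy(p.ravel())          # H(row, column)
h_col_given_row = h_joint - h_row     # H(column | row)

# Asymmetric: proportional reduction in uncertainty about the column
# variable when the row variable is known
u_col_given_row = (h_col - h_col_given_row) / h_col
# Symmetric version
u_symmetric = 2 * (h_row + h_col - h_joint) / (h_row + h_col)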