SPSS Advanced Models 11.5
For more information about SPSS® software products, please visit our Web site at http://www.spss.com or contact SPSS Inc., 233 South Wacker Drive, 11th Floor, Chicago, IL 60606-6412. Tel: (312) 651-3000, Fax: (312) 651-3668. SPSS is a registered trademark, and the other product names are trademarks of SPSS Inc. for its proprietary computer software.
Preface

SPSS® 11.5 is a comprehensive system for analyzing data. SPSS can take data from almost any type of file and use them to generate tabulated reports, charts, and plots of distributions and trends, descriptive statistics, and complex statistical analyses. The Advanced Models option is an add-on enhancement that provides additional statistical analysis techniques. The procedures in Advanced Models must be used with the SPSS 11.5 Base and are completely integrated into that system.
Compatibility

The SPSS system is designed to operate on many computer systems. See the installation instructions that came with your system for specific information on minimum and recommended requirements.

Serial Numbers

Your serial number is your identification number with SPSS Inc. You will need this serial number when you call SPSS Inc. for information regarding support, payment, or an upgraded system. The serial number was provided with your Base system.
Technical Support

The services of SPSS Technical Support are available to registered customers of SPSS. Customers may contact Technical Support for assistance in using SPSS products or for installation help for one of the supported hardware environments. To reach Technical Support, see the SPSS Web site at http://www.spss.com, or call your local office, listed on page vii. Be prepared to identify yourself, your organization, and the serial number of your system.
About This Manual

This manual documents the graphical user interface. Illustrations of dialog boxes are taken from SPSS for Windows. Dialog boxes in other operating systems are similar. The Advanced Models command syntax is included in the SPSS 11.5 Syntax Reference Guide, available on the product CD-ROM.

Contacting SPSS

If you would like to be on our mailing list, contact one of our offices, listed on page vii, or visit our Web site at http://www.spss.com.
SPSS Inc.
Chicago, Illinois, U.S.A.
Tel: 1.312.651.3000 or 1.800.543.2185
www.spss.com/corpinfo
Customer Service: 1.800.521.1337
Sales: 1.800.543.2185, sales@spss.com
Training: 1.800.543.6607
Technical Support: 1.312.651.3410, support@spss.com

SPSS Denmark
Tel: +45.45.46.02.00
www.spss.com

SPSS Mexico SA de CV
Tel: +52.5.682.87.68
www.spss.com

SPSS East Africa
Tel: +254 2 577 262
www.spss.com

SPSS Miami
Tel: 1.305.627.5700
www.spss.com

SPSS Finland Oy
Tel: +358.9.4355.920
www.spss.
Contents

1 GLM Multivariate Analysis
  To Obtain a GLM Multivariate Analysis of Variance
  GLM Multivariate Model
  GLM Multivariate Contrasts
  GLM Multivariate Profile Plots
  GLM Multivariate Post Hoc Multiple Comparisons for Observed Means
  GLM Multivariate Save
  GLM Multivariate Options

3 Variance Components Analysis
  To Obtain a Variance Components Analysis
  Variance Components Model
  Variance Components Options
  Sums of Squares (Variance Components)
  Variance Components Save to New File
  VARCOMP Command Additional Features

4 Linear Mixed Models

6 General Loglinear Analysis
  To Obtain a General Loglinear Analysis
  General Loglinear Analysis Model
  General Loglinear Analysis Options
  To Specify Options
  General Loglinear Analysis Save
  GENLOG Command Additional Features

7 Logit Loglinear Analysis

9 Life Tables
  To Create a Life Table
  Life Tables Define Event for Status Variable
  Life Tables Define Range
  Life Tables Options
  SURVIVAL Command Additional Features

10 Kaplan-Meier Survival Analysis
  To Obtain a Kaplan-Meier Survival Analysis

12 Computing Time-Dependent Covariates
  To Compute a Time-Dependent Covariate
  Cox Regression with Time-Dependent Covariates Additional Features

Appendix A
Categorical Variable Coding Schemes
  Deviation
  Simple
  Helmert
Chapter 1
GLM Multivariate Analysis

The GLM Multivariate procedure provides regression analysis and analysis of variance for multiple dependent variables by one or more factor variables or covariates. The factor variables divide the population into groups. Using this general linear model procedure, you can test null hypotheses about the effects of factor variables on the means of various groupings of a joint distribution of dependent variables.
Profile plots (interaction plots) of these means allow you to visualize some of the relationships easily. The post hoc multiple comparison tests are performed for each dependent variable separately. Residuals, predicted values, Cook’s distance, and leverage values can be saved as new variables in your data file for checking assumptions.
for all cells are the same. Analysis of variance is robust to departures from normality, although the data should be symmetric. To check assumptions, you can use homogeneity of variances tests (including Box’s M) and spread-versus-level plots. You can also examine residuals and residual plots.

Related procedures. Use the Explore procedure to examine the data before doing an analysis of variance. For a single dependent variable, use GLM Univariate.
GLM Multivariate Model

Figure 1-2 Multivariate Model dialog box

Specify Model. A full factorial model contains all factor main effects, all covariate main effects, and all factor-by-factor interactions. It does not contain covariate interactions. Select Custom to specify only a subset of interactions or to specify factor-by-covariate interactions. You must indicate all of the terms to be included in the model.

Factors and Covariates.
Build Terms

For the selected factors and covariates:

Interaction. Creates the highest-level interaction term of all selected variables. This is the default.

Main effects. Creates a main-effects term for each variable selected.

All 2-way. Creates all possible two-way interactions of the selected variables.

All 3-way. Creates all possible three-way interactions of the selected variables.

All 4-way. Creates all possible four-way interactions of the selected variables.

All 5-way. Creates all possible five-way interactions of the selected variables.
Any regression model.

A purely nested design. (This form of nesting can be specified by using syntax.)

Type III. This method, the default, calculates the sums of squares of an effect in the design as the sums of squares adjusted for any other effects that do not contain it and orthogonal to any effects (if any) that contain it.
Contrasts are used to test whether the levels of an effect are significantly different from one another. You can specify a contrast for each factor in the model. Contrasts represent linear combinations of the parameters. Hypothesis testing is based on the null hypothesis LBM = 0, where L is the contrast coefficients matrix, M is the identity matrix, which has dimension equal to the number of dependent variables, and B is the parameter vector.
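The contrast hypothesis LBM = 0 is easiest to see with concrete coefficients. The sketch below builds deviation-style contrast coefficients (each level compared with the grand mean) for a factor with k levels. It is plain Python for illustration only, not SPSS syntax; the function name and row layout are our own choices.

```python
def deviation_contrasts(k):
    """Deviation contrast coefficient rows for a k-level factor.

    Row i compares level i's effect with the grand mean:
    coefficient (k-1)/k for the tested level, -1/k elsewhere.
    The last level serves as the omitted reference, so there
    are k-1 rows (illustrative layout, not SPSS's internal one).
    """
    rows = []
    for i in range(k - 1):
        rows.append([(k - 1) / k if j == i else -1 / k for j in range(k)])
    return rows

# Each row sums to zero, so it tests a pure comparison among
# level means rather than the overall mean.
L = deviation_contrasts(4)
for row in L:
    assert abs(sum(row)) < 1e-12
```

Stacking such rows gives the L matrix of the null hypothesis; with several dependent variables, M (here the identity) carries the hypothesis across them.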
GLM Multivariate Profile Plots

Figure 1-4 Multivariate Profile Plots dialog box

Profile plots (interaction plots) are useful for comparing marginal means in your model. Profile plots are created for each dependent variable. A profile plot is a line plot in which each point indicates the estimated marginal mean of a dependent variable (adjusted for covariates) at one level of a factor. The levels of a second factor can be used to make separate lines.
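With no covariates, the plotted points are simply the cell means of the dependent variable. A minimal Python sketch (illustrative only; the data and names are hypothetical) shows the quantities a profile plot draws:

```python
from collections import defaultdict

def cell_means(rows):
    """Mean of the dependent variable in each (A, B) factor cell.

    rows: iterable of (a_level, b_level, y) tuples. The result maps
    each cell to its mean -- the points of a profile plot with factor
    A on the horizontal axis and one line per level of factor B.
    """
    sums, counts = defaultdict(float), defaultdict(int)
    for a, b, y in rows:
        sums[(a, b)] += y
        counts[(a, b)] += 1
    return {cell: sums[cell] / counts[cell] for cell in sums}

# Hypothetical two-factor data: dose level by treatment group.
data = [("low", "ctrl", 10), ("low", "ctrl", 12),
        ("low", "treat", 15), ("high", "ctrl", 11),
        ("high", "treat", 20), ("high", "treat", 22)]
means = cell_means(data)
# Roughly parallel lines across the A levels would suggest
# little A-by-B interaction; crossing lines suggest interaction.
```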
After a plot is specified by selecting factors for the horizontal axis and, optionally, factors for separate lines and separate plots, the plot must be added to the Plots list.

GLM Multivariate Post Hoc Multiple Comparisons for Observed Means

Figure 1-6 Post Hoc Multiple Comparisons for Observed Means dialog box

Post hoc multiple comparison tests.
Hochberg’s GT2 is similar to Tukey’s honestly significant difference test, but the Studentized maximum modulus is used. Usually, Tukey’s test is more powerful.

Gabriel’s pairwise comparisons test also uses the Studentized maximum modulus and is generally more powerful than Hochberg’s GT2 when the cell sizes are unequal. Gabriel’s test may become liberal when the cell sizes vary greatly.

Dunnett’s pairwise multiple comparison t test compares a set of treatments against a single control mean.
Tests displayed. Pairwise comparisons are provided for LSD, Sidak, Bonferroni, Games and Howell, Tamhane’s T2 and T3, Dunnett’s C, and Dunnett’s T3. Homogeneous subsets for range tests are provided for S-N-K, Tukey’s-b, Duncan, R-E-G-W F, R-E-G-W Q, and Waller. Tukey’s honestly significant difference test, Hochberg’s GT2, Gabriel’s test, and Scheffé’s test are both multiple comparison tests and range tests.
residuals are also available. If a WLS variable was chosen, weighted unstandardized residuals are available.

Save to New File. Writes an SPSS data file containing a variance-covariance matrix of the parameter estimates in the model. Also, for each dependent variable, there will be a row of parameter estimates, a row of significance values for the t statistics corresponding to the parameter estimates, and a row of residual degrees of freedom.
Compare main effects. Provides uncorrected pairwise comparisons among estimated marginal means for any main effect in the model, for both between- and within-subjects factors. This item is available only if main effects are selected under the Display Means For list.

Confidence interval adjustment. Select least significant difference (LSD), Bonferroni, or Sidak adjustment to the confidence intervals and significance.
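The Bonferroni and Sidak adjustments both tighten the per-comparison significance level to protect the familywise error rate, but by slightly different formulas. A small Python illustration of the standard formulas (not SPSS code):

```python
def bonferroni_alpha(alpha, m):
    """Per-comparison significance level under Bonferroni: alpha / m."""
    return alpha / m

def sidak_alpha(alpha, m):
    """Per-comparison level under Sidak: solves 1 - (1 - a)**m = alpha,
    assuming the m comparisons are independent."""
    return 1 - (1 - alpha) ** (1 / m)

# With 3 pairwise comparisons at a familywise alpha of 0.05,
# Sidak is slightly less conservative than Bonferroni.
m, alpha = 3, 0.05
assert bonferroni_alpha(alpha, m) < sidak_alpha(alpha, m) < alpha
```

LSD applies no adjustment at all, which is why it is the least conservative of the three options.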
GLM Command Additional Features

These features may apply to univariate, multivariate, or repeated measures analysis. The SPSS command language also allows you to:

Specify nested effects in the design (using the DESIGN subcommand).

Specify tests of effects versus a linear combination of effects or a value (using the TEST subcommand).

Specify multiple contrasts (using the CONTRAST subcommand).

Include user-missing values (using the MISSING subcommand).
Chapter 2
GLM Repeated Measures

The GLM Repeated Measures procedure provides analysis of variance when the same measurement is made several times on each subject or case. If between-subjects factors are specified, they divide the population into groups. Using this general linear model procedure, you can test null hypotheses about the effects of both the between-subjects factors and the within-subjects factors. You can investigate interactions between factors as well as the effects of individual factors.
Residuals, predicted values, Cook’s distance, and leverage values can be saved as new variables in your data file for checking assumptions. Also available are a residual SSCP matrix (a square matrix of sums of squares and cross-products of residuals), a residual covariance matrix (the residual SSCP matrix divided by the degrees of freedom of the residuals), and the residual correlation matrix (the standardized form of the residual covariance matrix).
group. A within-subjects factor is defined for the group with the number of levels equal to the number of repetitions. For example, measurements of weight could be taken on different days. If measurements of the same property were taken on five days, the within-subjects factor could be specified as day with five levels. For multiple within-subjects factors, the number of measurements for each subject is equal to the product of the number of levels of each factor.
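The "product of the number of levels" rule can be sketched by enumerating the measurement slots a subject needs. The factor names and variable-naming scheme below are hypothetical, chosen only to illustrate the counting:

```python
from itertools import product

# Two hypothetical within-subjects factors: day (5 levels) and
# dose (2 levels). Each subject contributes one variable per
# level combination, so 5 * 2 = 10 measurements, ordered with
# the last-listed factor varying fastest.
days = [1, 2, 3, 4, 5]
doses = ["low", "high"]
slots = [f"wt_d{d}_{dose}" for d, dose in product(days, doses)]
assert len(slots) == len(days) * len(doses)
```

This ordering matters when assigning dependent variables in the Repeated Measures dialog box: the variables must be listed so that their positions match the factor-level combinations.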
Univariate or GLM Multivariate. If there are only two measurements for each subject (for example, pre-test and post-test), and there are no between-subjects factors, you can use the Paired-Samples T Test procedure.

GLM Repeated Measures Define Factor(s)

GLM Repeated Measures analyzes groups of related dependent variables that represent different measurements of the same attribute. This dialog box lets you define one or more within-subjects factors for use in GLM Repeated Measures.
Figure 2-1 Repeated Measures Define Factor(s) dialog box

Type a within-subjects factor name and its number of levels.

Click Add.

Repeat these steps for each within-subjects factor.

To define measure factors for a doubly multivariate repeated measures design:

Click Measure.

Figure 2-2 Expanded Repeated Measures Define Factor(s) dialog box

Type the measure name.

Click Add.
After defining all of your factors and measures:

Click Define.

Figure 2-3 Repeated Measures dialog box

Select a dependent variable that corresponds to each combination of within-subjects factors (and optionally, measures) on the list. To change positions of the variables, use the up and down pushbuttons. To make changes to the within-subjects factors, you can reopen the Repeated Measures Define Factor(s) dialog box without closing the main dialog box.
GLM Repeated Measures Model

Figure 2-4 Repeated Measures Model dialog box

Specify Model. A full factorial model contains all factor main effects, all covariate main effects, and all factor-by-factor interactions. It does not contain covariate interactions. Select Custom to specify only a subset of interactions or to specify factor-by-covariate interactions. You must indicate all of the terms to be included in the model.

Between-Subjects.
Main effects. Creates a main-effects term for each variable selected.

All 2-way. Creates all possible two-way interactions of the selected variables.

All 3-way. Creates all possible three-way interactions of the selected variables.

All 4-way. Creates all possible four-way interactions of the selected variables.

All 5-way. Creates all possible five-way interactions of the selected variables.

Sums of Squares

For the model, you can choose a type of sum of squares.
Type III. This method, the default, calculates the sums of squares of an effect in the design as the sums of squares adjusted for any other effects that do not contain it and orthogonal to any effects (if any) that contain it. The Type III sums of squares have one major advantage in that they are invariant with respect to the cell frequencies as long as the general form of estimability remains constant.
Hypothesis testing is based on the null hypothesis LBM = 0, where L is the contrast coefficients matrix, B is the parameter vector, and M is the average matrix that corresponds to the average transformation for the dependent variable. You can display this transformation matrix by selecting Transformation matrix in the Repeated Measures Options dialog box.
GLM Repeated Measures Profile Plots

Figure 2-6 Repeated Measures Profile Plots dialog box

Profile plots (interaction plots) are useful for comparing marginal means in your model. A profile plot is a line plot in which each point indicates the estimated marginal mean of a dependent variable (adjusted for any covariates) at one level of a factor. The levels of a second factor can be used to make separate lines. Each level in a third factor can be used to create a separate plot.
After a plot is specified by selecting factors for the horizontal axis and, optionally, factors for separate lines and separate plots, it must be added to the Plots list.

GLM Repeated Measures Post Hoc Multiple Comparisons for Observed Means

Figure 2-8 Post Hoc Multiple Comparisons for Observed Means dialog box

Post hoc multiple comparison tests.
Hochberg’s GT2 is similar to Tukey’s honestly significant difference test, but the Studentized maximum modulus is used. Usually, Tukey’s test is more powerful.

Gabriel’s pairwise comparisons test also uses the Studentized maximum modulus and is generally more powerful than Hochberg’s GT2 when the cell sizes are unequal. Gabriel’s test may become liberal when the cell sizes vary greatly.
Tests displayed. Pairwise comparisons are provided for LSD, Sidak, Bonferroni, Games and Howell, Tamhane’s T2 and T3, Dunnett’s C, and Dunnett’s T3. Homogeneous subsets for range tests are provided for S-N-K, Tukey’s-b, Duncan, R-E-G-W F, R-E-G-W Q, and Waller. Tukey’s honestly significant difference test, Hochberg’s GT2, Gabriel’s test, and Scheffé’s test are both multiple comparison tests and range tests.
Save to New File. Writes an SPSS data file containing a variance-covariance matrix of the parameter estimates in the model. Also, for each dependent variable, there will be a row of parameter estimates, a row of significance values for the t statistics corresponding to the parameter estimates, and a row of residual degrees of freedom. For a multivariate model, there are similar rows for each dependent variable.
Compare main effects. Provides uncorrected pairwise comparisons among estimated marginal means for any main effect in the model, for both between- and within-subjects factors. This item is available only if main effects are selected under the Display Means For list.

Confidence interval adjustment. Select least significant difference (LSD), Bonferroni, or Sidak adjustment to the confidence intervals and significance. This item is available only if Compare main effects is selected.
GLM Command Additional Features

These features may apply to univariate, multivariate, or repeated measures analysis. The SPSS command language also allows you to:

Specify nested effects in the design (using the DESIGN subcommand).

Specify tests of effects versus a linear combination of effects or a value (using the TEST subcommand).

Specify multiple contrasts (using the CONTRAST subcommand).

Include user-missing values (using the MISSING subcommand).
Chapter 3
Variance Components Analysis

The Variance Components procedure, for mixed-effects models, estimates the contribution of each random effect to the variance of the dependent variable. This procedure is particularly useful for analysis of mixed models such as split-plot, univariate repeated measures, and random block designs. By calculating variance components, you can determine where to focus attention in order to reduce the variance.
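The idea behind ANOVA-type variance component estimation can be sketched for the simplest case, a balanced one-way random-effects design: the within-groups mean square estimates the residual variance, and the excess of the between-groups mean square over it estimates the random-effect variance. The Python below is an illustrative method-of-moments sketch under these assumptions, not SPSS's implementation:

```python
def one_way_variance_components(groups):
    """ANOVA (method-of-moments) variance component estimates for a
    balanced one-way random-effects design.

    groups: list of equal-length lists, one per level of the random
    factor. Returns (sigma2_between, sigma2_within) using
    sigma2_within = MSW and sigma2_between = (MSB - MSW) / n.
    """
    k, n = len(groups), len(groups[0])
    grand = sum(sum(g) for g in groups) / (k * n)
    means = [sum(g) / n for g in groups]
    msb = n * sum((m - grand) ** 2 for m in means) / (k - 1)
    msw = sum((y - m) ** 2
              for g, m in zip(groups, means) for y in g) / (k * (n - 1))
    # ANOVA estimates can come out negative; truncate to zero here.
    return max((msb - msw) / n, 0.0), msw

# Three random-factor levels, two observations each (toy data).
groups = [[4.0, 6.0], [8.0, 10.0], [1.0, 3.0]]
sigma2_a, sigma2_e = one_way_variance_components(groups)
```

A large sigma2_a relative to sigma2_e would point to the random factor as the place to focus variance-reduction effort.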
Data. The dependent variable is quantitative. Factors are categorical. They can have numeric values or string values of up to eight characters. At least one of the factors must be random. That is, the levels of the factor must be a random sample of possible levels. Covariates are quantitative variables that are related to the dependent variable.

Assumptions. All methods assume that model parameters of a random effect have zero means and finite constant variances and are mutually uncorrelated.
35 Variance Components Analysis Figure 3-1 Variance Components dialog box Select a dependent variable. Select variables for Fixed Factor(s), Random Factor(s), and Covariate(s), as appropriate for your data. For specifying a weight variable, use WLS Weight.
Specify Model. A full factorial model contains all factor main effects, all covariate main effects, and all factor-by-factor interactions. It does not contain covariate interactions. Select Custom to specify only a subset of interactions or to specify factor-by-covariate interactions. You must indicate all of the terms to be included in the model.

Factors and Covariates. The factors and covariates are listed with (F) for a fixed factor, (R) for a random factor, and (C) for a covariate.

Model.
Variance Components Options

Figure 3-3 Variance Components Options dialog box

Method. You can choose one of four methods used to estimate the variance components.

MINQUE (minimum norm quadratic unbiased estimator) produces estimates that are invariant with respect to the fixed effects. If the data are normally distributed and the estimates are correct, this method produces the least variance among all unbiased estimators.
Random-Effect Priors. Uniform implies that all random effects and the residual term have an equal impact on the observations. The Zero scheme is equivalent to assuming zero random-effect variances. Available only for the MINQUE method.

Sum of Squares. Type I sums of squares are used for the hierarchical model, which is often used in variance component literature.
considered useful for an unbalanced model with no missing cells. In a factorial design with no missing cells, this method is equivalent to the Yates’ weighted-squares-of-means technique. The Type III sum-of-squares method is commonly used for:

Any models listed in Type I.

Any balanced or unbalanced model with no empty cells.
VARCOMP Command Additional Features

The SPSS command language also allows you to:

Specify nested effects in the design (using the DESIGN subcommand).

Include user-missing values (using the MISSING subcommand).

Specify EPS criteria (using the CRITERIA subcommand).

See the SPSS Syntax Reference Guide for complete syntax information.
Chapter 4
Linear Mixed Models

The Linear Mixed Models procedure expands the general linear model so that the data are permitted to exhibit correlated and non-constant variability. The mixed linear model, therefore, provides the flexibility of modeling not only the means of the data but their variances and covariances as well. The Linear Mixed Models procedure is also a flexible tool for fitting other models that can be formulated as mixed linear models.
Data. The dependent variable should be quantitative. Factors should be categorical and can have numeric values or string values. Covariates and the weight variable should be quantitative. Subjects and repeated variables may be of any type.

Assumptions. The dependent variable is assumed to be linearly related to the fixed factors, random factors, and covariates. The fixed effects model the mean of the dependent variable.
All of the variables specified in the Subjects list are used to define subjects for the residual covariance structure. You can use some or all of the variables to define subjects for the random-effects covariance structure.

Repeated. The variables specified in this list are used to identify repeated observations.
Selecting Subjects/Repeated Variables for Linear Mixed Models

From the menus choose:
Analyze
  Mixed Models
    Linear...

Figure 4-1 Linear Mixed Models: Specify Subjects/Repeated Variables dialog box

Optionally, select one or more subjects variables.

Optionally, select one or more repeated variables.

Optionally, select a residual covariance structure.

Click Continue.
Obtaining a Linear Mixed Models Analysis

Figure 4-2 Linear Mixed Models dialog box

Select a dependent variable.

Select at least one factor or covariate.

Click Fixed or Random and specify at least a fixed-effects or random-effects model.

Optionally, select a weighting variable.
Linear Mixed Models Fixed Effects

Figure 4-3 Linear Mixed Models: Fixed Effects dialog box

Fixed Effects. There is no default model, so you must explicitly specify the fixed effects. Alternatively, you can build nested or non-nested terms.

Include Intercept. The intercept is usually included in the model. If you can assume the data pass through the origin, you can exclude the intercept.

Sum of squares. The method of calculating the sums of squares.
All 2-Way. Creates all possible two-way interactions of the selected variables.

All 3-Way. Creates all possible three-way interactions of the selected variables.

All 4-Way. Creates all possible four-way interactions of the selected variables.

All 5-Way. Creates all possible five-way interactions of the selected variables.

Build Nested Terms

You can build nested terms for your model in this procedure.
A polynomial regression model in which any lower-order terms are specified before any higher-order terms.

A purely nested model in which the first-specified effect is nested within the second-specified effect, the second-specified effect is nested within the third, and so on. (This form of nesting can be specified only by using syntax.)

Type III. The default.
Linear Mixed Models Random Effects

Figure 4-4 Linear Mixed Models: Random Effects dialog box

Random Effects. There is no default model, so you must explicitly specify the random effects. Alternatively, you can build nested or non-nested terms. You can also choose to include an intercept term in the random-effects model. You can specify multiple random-effects models. After building the first model, click Next to build the next model.
Ante-Dependence: First Order
AR(1)
AR(1): Heterogeneous
ARMA(1,1)
Compound Symmetry
Compound Symmetry: Correlation Metric
Compound Symmetry: Heterogeneous
Diagonal
Factor Analytic: First Order
Factor Analytic: First Order, Heterogeneous
Huynh-Feldt
Scaled Identity
Toeplitz
Toeplitz: Heterogeneous
Unstructured
Unstructured: Correlation Metric
Variance Components

For more information, see the appendix Covariance Structures.

Subject Groupings.
Linear Mixed Models Estimation

Figure 4-5 Linear Mixed Models: Estimation dialog box

Method. Select maximum likelihood or restricted maximum likelihood estimation.

Iterations:

Maximum iterations. Specify a non-negative integer.

Maximum step-halvings. At each iteration, the step size is reduced by a factor of 0.5 until the log-likelihood increases or the maximum number of step-halvings is reached. Specify a positive integer.

Print iteration history for every n step(s).
Log-likelihood Convergence. Convergence is assumed if the absolute change or relative change in the log-likelihood function is less than the value specified, which must be non-negative. The criterion is not used if the value specified equals 0.

Parameter Convergence. Convergence is assumed if the maximum absolute change or maximum relative change in the parameter estimates is less than the value specified, which must be non-negative. The criterion is not used if the value specified equals 0.
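The step-halving and log-likelihood convergence rules described above can be sketched generically. The Python below is an illustrative gradient-ascent loop under those two rules, not the MIXED estimation algorithm itself; the objective and all names are hypothetical:

```python
def maximize(loglik, grad, x, step=1.0, max_iter=100,
             max_halvings=10, tol=1e-8):
    """Maximize a 1-D objective with step-halving and a relative
    log-likelihood convergence criterion (illustrative sketch)."""
    ll = loglik(x)
    for _ in range(max_iter):
        direction = grad(x)
        size = step
        candidate = x + size * direction
        for _ in range(max_halvings):
            if loglik(candidate) > ll:
                break                    # log-likelihood increased
            size *= 0.5                  # halve the step and retry
            candidate = x + size * direction
        x, new_ll = candidate, loglik(candidate)
        # Stop when the relative change in log-likelihood is tiny.
        if abs(new_ll - ll) <= tol * (abs(ll) + tol):
            break
        ll = new_ll
    return x

# Toy concave "log-likelihood" with its maximum at x = 3.
x_hat = maximize(lambda x: -(x - 3.0) ** 2,
                 lambda x: -2.0 * (x - 3.0), 0.0)
```

Setting the tolerance to 0 would disable the convergence check, mirroring the dialog-box behavior described above.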
Summary Statistics. Produces tables for:

Descriptive statistics. Displays the sample sizes, means, and standard deviations of the dependent variable and covariates (if specified). These statistics are displayed for each distinct level combination of the factors.

Case Processing Summary. Displays the sorted values of the factors, the repeated measures variables, the repeated measures subjects, and the random-effects subjects and their frequencies.

Model Statistics.
Linear Mixed Models EM Means

Figure 4-7 Linear Mixed Models: EM Means dialog box

Estimated Marginal Means of Fitted Models. This group allows you to request model-predicted estimated marginal means of the dependent variable in the cells and their standard errors for the specified factors. Moreover, you can request that factor levels of main effects be compared.

Factor(s) and Factor Interactions.
comparisons are made. If no reference category is selected, all pairwise comparisons will be constructed. The options for the reference category are the first, last, or custom-specified (in which case, you enter the value of the reference category).

Linear Mixed Models Save

Figure 4-8 Linear Mixed Models: Save dialog box

This dialog box allows you to save various model results to the working file.

Fixed Predicted Values.
MIXED Command Additional Features

The SPSS command language also allows you to:

Specify tests of effects versus a linear combination of effects or a value (using the TEST subcommand).

Include user-missing values (using the MISSING subcommand).

Compute estimated marginal means for specified values of covariates (using the WITH keyword of the EMMEANS subcommand).

Compare simple main effects of interactions (using the EMMEANS subcommand).
Chapter 5
Model Selection Loglinear Analysis

The Model Selection Loglinear Analysis procedure analyzes multiway crosstabulations (contingency tables). It fits hierarchical loglinear models to multidimensional crosstabulations using an iterative proportional-fitting algorithm. This procedure helps you find out which categorical variables are associated. To build models, forced entry and backward elimination methods are available.
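Iterative proportional fitting works by repeatedly rescaling the fitted table so that its margins match the observed margins for each term in the model. A minimal Python sketch for the simplest case, the independence model on a two-way table, illustrates the idea (this is not SPSS's code, and real hierarchical models cycle over more margins):

```python
def ipf_independence(table, iters=25):
    """Fit the independence model to a two-way table by iterative
    proportional fitting: start from a table of ones and alternately
    scale rows and columns to match the observed margins."""
    r, c = len(table), len(table[0])
    fit = [[1.0] * c for _ in range(r)]
    row_t = [sum(row) for row in table]
    col_t = [sum(table[i][j] for i in range(r)) for j in range(c)]
    for _ in range(iters):
        for i in range(r):                       # match row margins
            s = sum(fit[i])
            fit[i] = [x * row_t[i] / s for x in fit[i]]
        for j in range(c):                       # match column margins
            s = sum(fit[i][j] for i in range(r))
            for i in range(r):
                fit[i][j] *= col_t[j] / s
    return fit

observed = [[10, 20], [30, 40]]
expected = ipf_independence(observed)
# For this model the fitted cells equal
# row_total * column_total / grand_total.
```

Comparing the fitted and observed counts (via the goodness-of-fit statistics) is what tells you whether the variables are associated.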
Loglinear Analysis or Logit Loglinear Analysis. You can use Autorecode to recode string variables. If a numeric variable has empty categories, use Recode to create consecutive integer values.

To Obtain a Model Selection Loglinear Analysis

From the menus choose:
Analyze
  Loglinear
    Model Selection...

Figure 5-1 Model Selection Loglinear Analysis dialog box

Select two or more numeric categorical factors.

Select one or more factor variables in the Factor(s) list, and click Define Range.
Loglinear Analysis Define Range

Figure 5-2 Loglinear Analysis Define Range dialog box

You must indicate the range of categories for each factor variable. Values for Minimum and Maximum correspond to the lowest and highest categories of the factor variable. Both values must be integers, and the minimum value must be less than the maximum value. Cases with values outside of the bounds are excluded.
in the Factors list and then select Interaction from the Build Terms drop-down list. The resulting model will contain the specified 3-way interaction A*B*C, the 2-way interactions A*B, A*C, and B*C, and main effects for A, B, and C. Do not specify the lower-order relatives in the generating class.

Build Terms

For the selected factors and covariates:

Interaction. Creates the highest-level interaction term of all selected variables. This is the default.

Main effects.
Display for Saturated Model. For a saturated model, you can choose Parameter estimates. The parameter estimates may help determine which terms can be dropped from the model. An association table, which lists tests of partial association, is also available. This option is computationally expensive for tables with many factors.

Plot. For custom models, you can choose one or both types of plots, Residuals and Normal probability.
Chapter 6
General Loglinear Analysis

The General Loglinear Analysis procedure analyzes the frequency counts of observations falling into each cross-classification category in a crosstabulation or a contingency table. Each cross-classification in the table constitutes a cell, and each categorical variable is called a factor. The dependent variable is the number of cases (frequency) in a cell of the crosstabulation, and the explanatory variables are factors and covariates.
Contrast variables are continuous. They are used to compute generalized log-odds ratios. The values of the contrast variable are the coefficients for the linear combination of the logs of the expected cell counts. A cell structure variable assigns weights. For example, if some of the cells are structural zeros, the cell structure variable has a value of either 0 or 1. Do not use a cell structure variable to weight aggregated data. Instead, choose Weight Cases from the Data menu. Assumptions.
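The generalized log-odds ratio described above is simply a linear combination of the logs of the (expected) cell counts, with the contrast variable supplying the coefficients. A minimal Python sketch (the function name is hypothetical; GENLOG computes this quantity internally):

```python
import math

def generalized_log_odds_ratio(counts, contrast):
    """Linear combination of the logs of cell counts, with the
    contrast variable supplying the coefficients. Sketch of the
    quantity GENLOG reports, not SPSS's implementation."""
    return sum(c * math.log(n) for c, n in zip(contrast, counts))

# For a 2x2 table [a, b, c, d], the contrast (1, -1, -1, 1) gives the
# ordinary log odds ratio log(a*d / (b*c)):
counts = [10.0, 20.0, 5.0, 40.0]
glor = generalized_log_odds_ratio(counts, [1, -1, -1, 1])
print(round(glor, 4))          # equals log(10*40 / (20*5)) = log(4)
```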
To Obtain a General Loglinear Analysis From the menus choose: Analyze Loglinear General... Figure 6-1 General Loglinear Analysis dialog box In the General Loglinear Analysis dialog box, select up to 10 factor variables. Optionally, you can: Select cell covariates. Select a cell structure variable to define structural zeros or include an offset term. Select a contrast variable.
General Loglinear Analysis Model Figure 6-2 General Loglinear Analysis Model dialog box Specify Model. A saturated model contains all main effects and interactions involving factor variables. It does not contain covariate terms. Select Custom to specify only a subset of interactions or to specify factor-by-covariate interactions. Factors and Covariates. The factors and covariates are listed, with (Cov) indicating a covariate. Terms in Model. The model depends on the nature of your data.
All 4-way. Creates all possible four-way interactions of the selected variables. All 5-way. Creates all possible five-way interactions of the selected variables. General Loglinear Analysis Options Figure 6-3 General Loglinear Analysis Options dialog box The General Loglinear Analysis procedure displays model information and goodness-of-fit statistics. In addition, you can choose one or more of the following: Display.
To Specify Options From the menus choose: Analyze Loglinear General... In the General Loglinear Analysis or Logit Loglinear Analysis dialog box, click Options. General Loglinear Analysis Save Figure 6-4 General Loglinear Analysis Save dialog box Select the values you want to save as new variables in the working data file. The suffix n in the new variable names increments to make a unique name for each saved variable.
GENLOG Command Additional Features The SPSS command language also allows you to: Calculate linear combinations of observed cell frequencies and expected cell frequencies, and print residuals, standardized residuals, and adjusted residuals of that combination (using the GERESID subcommand). Change the default threshold value for redundancy checking (using the CRITERIA subcommand). Display the standardized residuals (using the PRINT subcommand).
Chapter 7 Logit Loglinear Analysis The Logit Loglinear Analysis procedure analyzes the relationship between dependent (or response) variables and independent (or explanatory) variables. The dependent variables are always categorical, while the independent variables can be categorical (factors). Other independent variables, cell covariates, can be continuous, but they are not applied on a case-by-case basis. The weighted covariate mean for a cell is applied to that cell.
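The treatment of cell covariates described above can be illustrated with a small sketch: the covariate mean over the cases falling in a cell is applied to the whole cell. This is not SPSS's implementation, and the data layout (a list of cell-label/covariate pairs) is hypothetical.

```python
def cell_covariate_means(cases):
    """For each cell (combination of factor levels), compute the mean
    of the covariate over the cases in that cell -- the single value
    the Logit Loglinear procedure applies to the whole cell.
    Illustrative sketch; the (cell, value) layout is hypothetical."""
    sums, counts = {}, {}
    for cell, x in cases:
        sums[cell] = sums.get(cell, 0.0) + x
        counts[cell] = counts.get(cell, 0) + 1
    return {cell: sums[cell] / counts[cell] for cell in sums}

cases = [("A1", 2.0), ("A1", 4.0), ("A2", 10.0)]
print(cell_covariate_means(cases))   # {'A1': 3.0, 'A2': 10.0}
```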
Statistics. Observed and expected frequencies; raw, adjusted, and deviance residuals; design matrix; parameter estimates; generalized log odds ratio; Wald statistic; and confidence intervals. Plots: adjusted residuals, deviance residuals, and normal probability plots. Data. The dependent variables are categorical. Factors are categorical. Cell covariates can be continuous, but when a covariate is in the model, SPSS applies the mean covariate value for cases in a cell to that cell.
To Obtain a Logit Loglinear Analysis From the menus choose: Analyze Loglinear Logit... Figure 7-1 Logit Loglinear Analysis dialog box In the Logit Loglinear Analysis dialog box, select one or more dependent variables. Select one or more factor variables. The total number of dependent and factor variables must be less than or equal to 10. Optionally, you can: Select cell covariates. Select a cell structure variable to define structural zeros or include an offset term.
Logit Loglinear Analysis Model Figure 7-2 Logit Loglinear Analysis Model dialog box Specify Model. A saturated model contains all main effects and interactions involving factor variables. It does not contain covariate terms. Select Custom to specify only a subset of interactions or to specify factor-by-covariate interactions. Factors and Covariates. The factors and covariates are listed, with (Cov) indicating a covariate. Terms in Model. The model depends on the nature of your data.
D1, D2, D1*D2 M1*D1, M1*D2, M1*D1*D2 M2*D1, M2*D2, M2*D1*D2. Include constant for dependent. Includes a constant for the dependent variable in a custom model. Build Terms For the selected factors and covariates: Interaction. Creates the highest-level interaction term of all selected variables. This is the default. Main effects. Creates a main-effects term for each variable selected. All 2-way. Creates all possible two-way interactions of the selected variables. All 3-way. Creates all possible three-way interactions of the selected variables.
The Logit Loglinear Analysis procedure displays model information and goodness-of-fit statistics. In addition, you can choose one or more of the following options: Display. Several statistics are available for display: observed and expected cell frequencies; raw, adjusted, and deviance residuals; a design matrix of the model; and parameter estimates for the model. Plot.
Select the values you want to save as new variables in the working data file. The suffix n in the new variable names increments to make a unique name for each saved variable. The saved values refer to the aggregated data (to cells in the contingency table), even if the data are recorded in individual observations in the Data Editor.
Chapter 8 Ordinal Regression Ordinal Regression allows you to model the dependence of a polytomous ordinal response on a set of predictors, which can be factors or covariates. The design of Ordinal Regression is based on the methodology of McCullagh (1980, 1998), and the procedure is referred to as PLUM in the syntax.
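The cumulative model underlying the procedure can be sketched briefly. Under McCullagh's methodology, the link function applied to the cumulative probability P(Y ≤ j) equals a threshold for category j minus the linear predictor from the covariates. The following Python fragment is an illustrative sketch of that model with a logit link, not SPSS's implementation; the function name is hypothetical.

```python
import math

def cumulative_logit_probs(thresholds, eta):
    """Category probabilities under a cumulative logit model:
    logit(P(Y <= j)) = theta_j - eta, where eta is the linear
    predictor from the covariates. Illustrative sketch of the model
    PLUM fits; thresholds must be strictly increasing."""
    cum = [1.0 / (1.0 + math.exp(-(t - eta))) for t in thresholds]
    cum.append(1.0)                      # P(Y <= last category) = 1
    # Individual category probabilities are successive differences.
    return [cum[0]] + [cum[j] - cum[j - 1] for j in range(1, len(cum))]

# Three ordered categories with thresholds (-1, 1) and eta = 0:
p = cumulative_logit_probs([-1.0, 1.0], 0.0)
print([round(x, 4) for x in p])          # three probabilities summing to 1
```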
standard errors, confidence intervals, and Cox and Snell’s, Nagelkerke’s, and McFadden’s R² statistics. Data. The dependent variable is assumed to be ordinal and can be numeric or string. The ordering is determined by sorting the values of the dependent variable in ascending order. The lowest value defines the first category. Factor variables are assumed to be categorical. Covariate variables must be numeric.
Select one dependent variable. Click OK. Ordinal Regression Options The Options dialog box allows you to adjust parameters used in the iterative estimation algorithm, choose a level of confidence for your parameter estimates, and select a link function. Figure 8-2 Ordinal Regression Options dialog box Iterations. You can customize the iterative algorithm. Maximum iterations. Specify a non-negative integer. If 0 is specified, the procedure returns the initial estimates.
Confidence interval. Specify a value greater than or equal to 0 and less than 100. Delta. The value added to zero cell frequencies. Specify a non-negative value less than 1. Singularity tolerance. Used for checking for highly dependent predictors. Select a value from the list of options. Link. Choose among the Cauchit, Complementary Log-log, Logit, Negative Log-log, and Probit functions.
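The five link functions listed above can be written out directly. The sketch below shows each applied to a cumulative probability p; SPSS applies them to P(Y ≤ j) internally, and the bisection-based probit stand-in is an assumption of this example (no closed form exists in Python's math module).

```python
import math

# The five PLUM link functions, applied to a probability p in (0, 1):
links = {
    "logit":   lambda p: math.log(p / (1 - p)),
    "cloglog": lambda p: math.log(-math.log(1 - p)),   # complementary log-log
    "nloglog": lambda p: -math.log(-math.log(p)),      # negative log-log
    "cauchit": lambda p: math.tan(math.pi * (p - 0.5)),
}

def probit(p, lo=-10.0, hi=10.0):
    """Inverse standard-normal CDF by bisection (stand-in for the
    probit link; a real implementation would use a library routine)."""
    cdf = lambda z: 0.5 * (1 + math.erf(z / math.sqrt(2)))
    for _ in range(80):
        mid = (lo + hi) / 2
        if cdf(mid) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

print(links["logit"](0.5))     # 0.0 -- every link maps p = 0.5 near 0
print(round(probit(0.975), 2))
```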
Summary statistics. Cox and Snell’s, Nagelkerke’s, and McFadden’s R² statistics. Parameter estimates. Parameter estimates, standard errors, and confidence intervals. Asymptotic correlation of parameter estimates. Matrix of parameter estimate correlations. Asymptotic covariance of parameter estimates. Matrix of parameter estimate covariances. Cell information.
Ordinal Regression Location Model The Location dialog box allows you to specify the location model for your analysis. Figure 8-4 Ordinal Regression Location dialog box Specify model. A main-effects model contains the covariate and factor main effects but no interaction effects. You can create a custom model to specify subsets of factor interactions or covariate interactions. Factors/covariates. The factors and covariates are listed with (F) for factor and (C) for covariate. Location model.
All 4-way. Creates all possible four-way interactions of the selected variables. All 5-way. Creates all possible five-way interactions of the selected variables. Ordinal Regression Scale Model The Scale dialog box allows you to specify the scale model for your analysis. Figure 8-5 Ordinal Regression Scale dialog box Factors/covariates. The factors and covariates are listed with (F) for factor and (C) for covariate. Scale model.
Ordinal Regression Command Additional Features You can customize your Ordinal Regression if you paste your selections into a syntax window and edit the resulting PLUM command syntax. The SPSS command language also allows you to: Create customized hypothesis tests by specifying null hypotheses as linear combinations of parameters. See the SPSS Syntax Reference Guide for complete syntax information.
Chapter 9 Life Tables There are many situations in which you would want to examine the distribution of times between two events, such as length of employment (time between being hired and leaving the company). However, this kind of data usually includes some cases for which the second event isn’t recorded (for example, people still working for the company at the end of the study).
Statistics. Number entering, number leaving, number exposed to risk, number of terminal events, proportion terminating, proportion surviving, cumulative proportion surviving (and standard error), probability density (and standard error), and hazard rate (and standard error) for each time interval for each group; median survival time for each group; and Wilcoxon (Gehan) test for comparing survival distributions between groups.
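The interval statistics listed above follow the standard actuarial recipe, which can be sketched in a few lines. This is an illustration of the method, not SPSS's implementation; it assumes the usual actuarial convention that cases withdrawn (censored) during an interval are exposed to risk for half of it.

```python
def life_table(intervals):
    """Actuarial life-table sketch. Each interval is a tuple
    (number entering, number withdrawn, number of terminal events).
    Withdrawn cases count as exposed for half the interval (the
    standard actuarial convention). Returns the cumulative
    proportion surviving at the end of each interval."""
    surviving = 1.0
    out = []
    for entering, withdrawn, deaths in intervals:
        exposed = entering - withdrawn / 2.0   # number exposed to risk
        q = deaths / exposed                   # proportion terminating
        surviving *= (1.0 - q)                 # cumulative proportion surviving
        out.append(surviving)
    return out

# 100 cases enter; 10 terminal events and 20 withdrawals in interval 1,
# then 5 terminal events among the 70 remaining:
print(life_table([(100, 20, 10), (70, 0, 5)]))
```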
Figure 9-1 Life Tables dialog box Select one numeric survival variable. Specify the time intervals to be examined. Select a status variable to define cases for which the terminal event has occurred. Click Define Event to specify the value of the status variable that indicates that an event occurred. Optionally, you can select a first-order factor variable. Actuarial tables for the survival variable are generated for each category of the factor variable.
Life Tables Define Event for Status Variable Figure 9-2 Life Tables Define Event for Status Variable dialog box Occurrences of the selected value or values for the status variable indicate that the terminal event has occurred for those cases. All other cases are considered to be censored. Enter either a single value or a range of values that identifies the event of interest.
Life Tables Options Figure 9-4 Life Tables Options dialog box You can control various aspects of your Life Tables analysis. Life tables. To suppress the display of life tables in the output, deselect Life tables. Plot. Allows you to request plots of the survival functions. If you have defined factor variable(s), plots are generated for each subgroup defined by the factor variable(s). Available plots are survival, log survival, hazard, density, and one minus survival.
Chapter 10 Kaplan-Meier Survival Analysis There are many situations in which you would want to examine the distribution of times between two events, such as length of employment (time between being hired and leaving the company). However, this kind of data usually includes some censored cases. Censored cases are cases for which the second event isn’t recorded (for example, people still working for the company at the end of the study).
Assumptions. Probabilities for the event of interest should depend only on time after the initial event—they are assumed to be stable with respect to absolute time. That is, cases that enter the study at different times (for example, patients who begin treatment at different times) should behave similarly. There should also be no systematic differences between censored and uncensored cases.
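The product-limit estimator the procedure computes can be sketched compactly. This is an illustration of the Kaplan-Meier method, not SPSS's implementation; as a simplification, cases censored at the same time as an event are kept in the risk set for that time.

```python
def kaplan_meier(times, events):
    """Product-limit (Kaplan-Meier) survival estimate.
    times: observed times; events: 1 = terminal event, 0 = censored.
    Returns (time, cumulative survival) at each event time."""
    data = sorted(zip(times, events))
    n = len(data)
    at_risk, s, curve = n, 1.0, []
    i = 0
    while i < n:
        t = data[i][0]
        d = w = 0                       # events and censorings at time t
        while i < n and data[i][0] == t:
            if data[i][1] == 1:
                d += 1
            else:
                w += 1
            i += 1
        if d > 0:
            s *= (at_risk - d) / at_risk
            curve.append((t, s))
        at_risk -= d + w                # censored cases leave the risk set
    return curve

# Five cases; censored cases shrink the risk set but not the estimate:
print(kaplan_meier([2, 3, 3, 5, 8], [1, 1, 0, 1, 0]))
```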
Select a time variable. Select a status variable to identify cases for which the terminal event has occurred. This variable can be numeric or short string. Then click Define Event. Optionally, you can select a factor variable to examine group differences. You can also select a strata variable, which will produce separate analyses for each level (stratum) of the variable.
Kaplan-Meier Compare Factor Levels Figure 10-3 Kaplan-Meier Compare Factor Levels dialog box You can request statistics to test the equality of the survival distributions for the different levels of the factor. Available statistics are log rank, Breslow, and Tarone-Ware. Select one of the alternatives to specify the comparisons to be made: pooled over strata, for each stratum, pairwise over strata, or pairwise for each stratum. Linear trend for factor levels.
Kaplan-Meier Options Figure 10-5 Kaplan-Meier Options dialog box You can request various output types from Kaplan-Meier analysis. Statistics. You can select statistics displayed for the survival functions computed, including survival table(s), mean and median survival, and quartiles. If you have included factor variables, separate statistics are generated for each group. Plots.
Chapter 11 Cox Regression Analysis Like Life Tables and Kaplan-Meier survival analysis, Cox Regression is a method for modeling time-to-event data in the presence of censored cases. However, Cox Regression allows you to include predictor variables (covariates) in your models. For example, you could construct a model of length of employment based on educational level and job category.
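The key property of the model the example above describes is that the baseline hazard cancels when two covariate profiles are compared, so the hazard ratio is constant over time. The sketch below illustrates this; the coefficient values are hypothetical, and this is not SPSS's estimation code.

```python
import math

def relative_hazard(coefs, profile_a, profile_b):
    """Ratio of hazards for two covariate profiles under the Cox
    model h(t|x) = h0(t) * exp(b'x). The baseline hazard h0(t)
    cancels, so the ratio is constant over time -- the proportional
    hazards assumption. Coefficients here are hypothetical."""
    eta_a = sum(b * x for b, x in zip(coefs, profile_a))
    eta_b = sum(b * x for b, x in zip(coefs, profile_b))
    return math.exp(eta_a - eta_b)

# Hypothetical b = 0.5 for years of education, -0.3 for job category;
# compare 12 vs. 16 years of education within the same job category:
print(round(relative_hazard([0.5, -0.3], [12, 1], [16, 1]), 4))
```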
Related procedures. If the proportional hazards assumption does not hold (see above), you may need to use the Cox with Time-Dependent Covariates procedure. If you have no covariates, or if you have only one categorical covariate, you can use the Life Tables or Kaplan-Meier procedure to examine survival or hazard functions for your sample(s).
Select one or more covariates. To include interaction terms, select all of the variables involved in the interaction and then select >a*b>. Optionally, you can compute separate models for different groups by defining a strata variable. Cox Regression Define Categorical Variables Figure 11-2 Cox Regression Define Categorical Covariates dialog box You can specify details of how the Cox Regression procedure will handle categorical variables. Covariates.
Simple. Each category of the predictor variable except the reference category is compared to the reference category. Difference. Each category of the predictor variable except the first category is compared to the average effect of previous categories. Also known as reverse Helmert contrasts. Helmert. Each category of the predictor variable except the last category is compared to the average effect of subsequent categories. Repeated. Each category of the predictor variable except the first category is compared to the category that precedes it.
Plots can help you to evaluate your estimated model and interpret the results. You can plot the survival, hazard, log-minus-log, and one-minus-survival functions. Because these functions depend on values of the covariates, you must use constant values for the covariates to plot the functions versus time. The default is to use the mean of each covariate as a constant value, but you can enter your own values for the plot using the Change Value control group.
Cox Regression Options Figure 11-5 Cox Regression Options dialog box You can control various aspects of your analysis and output. Model Statistics. You can obtain statistics for your model parameters, including confidence intervals for exp(B) and correlation of estimates. You can request these statistics either at each step or at the last step only. Probability for Stepwise. If you have selected a stepwise method, you can specify the probability for either entry or removal from the model.
COXREG Command Additional Features The SPSS command language also allows you to: Obtain frequency tables that consider cases lost to follow-up as a separate category from censored cases. Select a reference category, other than first or last, for the deviation, simple, and indicator contrast methods. Specify unequal spacing of categories for the polynomial contrast method. Specify additional iteration criteria. Control the treatment of missing values.
Chapter 12 Computing Time-Dependent Covariates There are certain situations in which you would want to compute a Cox Regression model but the proportional hazards assumption does not hold. That is, hazard ratios change across time; the values of one (or more) of your covariates are different at different time points. In such cases, you need to use an extended Cox Regression model, which allows you to specify time-dependent covariates.
the four weeks of your study (identified as BP1 to BP4), you can define your time-dependent covariate as (T_ < 1) * BP1 + (T_ ≥ 1 & T_ < 2) * BP2 + (T_ ≥ 2 & T_ < 3) * BP3 + (T_ ≥ 3 & T_ < 4) * BP4 Notice that exactly one of the terms in parentheses will be equal to 1 for any given case and the rest will all equal 0. In other words, this function means that if time is less than one week, use BP1; if it is at least one week but less than two weeks, use BP2; and so on.
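The piecewise expression above translates directly into a function: the indicator terms select the blood-pressure reading for the week containing time T_. The sketch below illustrates the arithmetic (Python treats the comparisons as 0/1, just as the SPSS expression does); it is not how SPSS evaluates the expression internally.

```python
def time_dependent_bp(t, bp):
    """Evaluate the piecewise time-dependent covariate from the text:
    exactly one indicator term is 1 for any time t in [0, 4), so the
    sum picks the reading for the week containing t.
    bp = [BP1, BP2, BP3, BP4]."""
    return (
        (t < 1) * bp[0]
        + (1 <= t < 2) * bp[1]
        + (2 <= t < 3) * bp[2]
        + (3 <= t < 4) * bp[3]
    )

bp = [120, 135, 128, 140]
print(time_dependent_bp(1.5, bp))   # 135 -- BP2, since 1 <= 1.5 < 2
```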
Figure 12-1 Compute Time-Dependent Covariate dialog box Enter an expression for the time-dependent covariate. Click Model to proceed with your Cox Regression. Note: Be sure to include the new variable T_COV_ as a covariate in your Cox Regression model. For more information about the model-building process, see Chapter 11.
Appendix A Categorical Variable Coding Schemes In many SPSS procedures, you can request automatic replacement of a categorical independent variable with a set of contrast variables, which will then be entered or removed from an equation as a block. You can specify how the set of contrast variables is to be coded, usually on the CONTRAST subcommand. This appendix explains and illustrates how different contrast types requested on CONTRAST actually work. Deviation Deviation from the grand mean.
To omit a category other than the last, specify the number of the omitted category in parentheses after the DEVIATION keyword. For example, the following subcommand obtains the deviations for the first and third categories and omits the second:

/CONTRAST(FACTOR)=DEVIATION(2)

Suppose that factor has three categories. The resulting contrast matrix will be

( 1/3  1/3  1/3 )
( 2/3 –1/3 –1/3 )
(–1/3 –1/3  2/3 )

Simple Simple contrasts. Compares each level of a factor to the last.
Suppose that factor has four categories and the second category is specified as the reference category. The resulting contrast matrix will be

( 1/4  1/4  1/4  1/4 )
(  1   –1    0    0  )
(  0   –1    1    0  )
(  0   –1    0    1  )

Helmert Helmert contrasts. Compares categories of an independent variable with the mean of the subsequent categories. The general matrix form is

mean     ( 1/k   1/k        1/k        ...   1/k      )
df(1)    (  1    –1/(k–1)   –1/(k–1)   ...   –1/(k–1) )
df(2)    (  0     1         –1/(k–2)   ...   –1/(k–2) )
.
.
df(k–2)  (  0     0     ...     1     –1/2     –1/2   )
df(k–1)  (  0     0     ...     0      1       –1     )
Difference Difference or reverse Helmert contrasts. Compares categories of an independent variable with the mean of the previous categories of the variable. The general matrix form is

mean     ( 1/k        1/k        1/k        ...   1/k )
df(1)    ( –1          1          0         ...    0  )
df(2)    ( –1/2       –1/2        1         ...    0  )
.
.
df(k–1)  ( –1/(k–1)   –1/(k–1)   –1/(k–1)   ...    1  )

where k is the number of categories for the independent variable.
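The Helmert and difference matrices above can be generated mechanically from their general forms. The sketch below is an illustration (the function names are hypothetical, and SPSS builds these matrices internally); exact fractions make the rows easy to verify.

```python
from fractions import Fraction

def helmert(k):
    """Helmert contrast matrix for k categories, means row first:
    each category vs. the mean of the subsequent categories."""
    rows = [[Fraction(1, k)] * k]
    for i in range(k - 1):
        row = [Fraction(0)] * i + [Fraction(1)]
        row += [Fraction(-1, k - 1 - i)] * (k - 1 - i)
        rows.append(row)
    return rows

def difference(k):
    """Difference (reverse Helmert) contrasts, means row first:
    each category vs. the mean of the previous categories."""
    rows = [[Fraction(1, k)] * k]
    for i in range(1, k):
        row = [Fraction(-1, i)] * i + [Fraction(1)]
        row += [Fraction(0)] * (k - 1 - i)
        rows.append(row)
    return rows

# Every contrast row (all rows but the means row) sums to zero:
for m in (helmert(4), difference(4)):
    print(all(sum(row) == 0 for row in m[1:]))   # True
```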
Suppose, for example, that the dosage administered to the second group is twice that given to the first group, and the dosage administered to the third group is three times that given to the first group. Then the treatment categories are equally spaced, and an appropriate metric for this situation consists of consecutive integers:

/CONTRAST(DRUG)=POLYNOMIAL(1,2,3)

If, however, the dosage administered to the second group is four times that given the first group, and the dosage given the third group is seven times that to the first, an appropriate metric is

/CONTRAST(DRUG)=POLYNOMIAL(1,4,7)

In either case, the result of the contrast is that the first degree of freedom for the drug factor contains the linear effect and the second contains the quadratic effect of the dosage levels.
Special A user-defined contrast. Allows entry of special contrasts in the form of square matrices with as many rows and columns as there are categories of the given independent variable. For MANOVA and LOGLINEAR, the first row entered is always the mean, or constant, effect and represents the set of weights indicating how to average other independent variables, if any, over the given variable. Generally, this contrast is a vector of ones.
Each row except the means row sums to 0. Products of each pair of disjoint rows sum to 0 as well:

Rows 2 and 3: (3)(0) + (–1)(2) + (–1)(–1) + (–1)(–1) = 0
Rows 2 and 4: (3)(0) + (–1)(0) + (–1)(1) + (–1)(–1) = 0
Rows 3 and 4: (0)(0) + (2)(0) + (–1)(1) + (–1)(–1) = 0

The special contrasts need not be orthogonal. However, they must not be linear combinations of each other.
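The two row conditions checked above are easy to verify mechanically for any candidate SPECIAL matrix. The sketch below reproduces exactly those checks (row sums and pairwise dot products of the contrast rows); the function name is hypothetical, and, as noted, orthogonality is desirable but not required.

```python
def check_special_contrast(matrix):
    """Verify the row conditions illustrated above for a user-defined
    (SPECIAL) contrast matrix: every row after the means row sums to
    0, and each pair of those rows has zero dot product."""
    rows = matrix[1:]                       # skip the means row
    sums_ok = all(sum(r) == 0 for r in rows)
    dots = [sum(a * b for a, b in zip(r1, r2))
            for i, r1 in enumerate(rows) for r2 in rows[i + 1:]]
    return sums_ok, dots

matrix = [
    [1,  1,  1,  1],    # means row: a vector of ones
    [3, -1, -1, -1],
    [0,  2, -1, -1],
    [0,  0,  1, -1],
]
print(check_special_contrast(matrix))   # (True, [0, 0, 0])
```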
Appendix B Covariance Structures This appendix provides additional information on covariance structures. Ante-Dependence: First-Order. This covariance structure has heterogenous variances and heterogenous correlations between adjacent elements. The correlation between two nonadjacent elements is the product of the correlations between the elements that lie between the elements of interest.

( σ1²                                      )
( σ2σ1ρ1       σ2²                         )
( σ3σ1ρ2ρ1     σ3σ2ρ2      σ3²             )
( σ4σ1ρ3ρ2ρ1   σ4σ2ρ3ρ2    σ4σ3ρ3    σ4²   )

AR(1): Heterogenous.
ARMA(1,1). This is a first-order autoregressive moving average structure. It has homogenous variances. The correlation between two elements is equal to phi*rho for adjacent elements, phi*(rho^2) for elements separated by a third, and so on. Rho and phi are the autoregressive and moving average parameters, respectively, and their values are constrained to lie between –1 and 1, inclusive. Compound Symmetry. This structure has constant variance and constant covariance.
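The two structures just described can be written out as matrix builders, following the verbal rules above (this is an illustrative sketch, not how SPSS parameterizes or estimates these structures).

```python
def compound_symmetry(k, var, cov):
    """Compound-symmetry matrix: constant variance on the diagonal,
    constant covariance everywhere off it."""
    return [[var if i == j else cov for j in range(k)] for i in range(k)]

def arma11(k, var, rho, phi):
    """ARMA(1,1) matrix following the description above: homogenous
    variance; covariance var*phi*rho for adjacent elements,
    var*phi*rho**2 for elements separated by a third, and so on."""
    def cell(i, j):
        lag = abs(i - j)
        return var if lag == 0 else var * phi * rho ** lag
    return [[cell(i, j) for j in range(k)] for i in range(k)]

m = arma11(4, 1.0, 0.5, 0.8)
print(m[0][1], m[0][2])   # phi*rho and phi*rho^2
```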
Factor Analytic: First-Order. This covariance structure has heterogenous variances that are composed of a term that is heterogenous across elements and a term that is homogenous across elements. The covariance between any two elements is the square root of the product of their heterogenous variance terms.

( λ1²+d                                  )
( λ2λ1     λ2²+d                         )
( λ3λ1     λ3λ2     λ3²+d                )
( λ4λ1     λ4λ2     λ4λ3     λ4²+d       )

Scaled Identity. This structure has constant variance. There is assumed to be no correlation between any elements.
Toeplitz. This covariance structure has homogenous variances and heterogenous correlations between elements. The correlation between adjacent elements is homogenous across pairs of adjacent elements. The correlation between elements separated by a third is again homogenous, and so on. Toeplitz: Heterogenous. This covariance structure has heterogenous variances and heterogenous correlations between elements.
Index analysis of covariance in GLM Multivariate, 1 analysis of variance in Variance Components, 37 ANOVA in GLM Multivariate, 1 in GLM Repeated Measures, 18 contrasts, 71, 101 in Cox Regression, 101 in General Loglinear Analysis, 63 in GLM Multivariate, 6, 7 in GLM Repeated Measures, 23, 24 in Logit Loglinear Analysis, 71 Cook’s distance in GLM Multivariate, 11 in GLM Repeated Measures, 28 correlation matrix, 52, 82 in Linear Mixed Models, 52 in Ordinal Regression, 82 covariance matrix, 52, 82 in Linear
related procedures, 99 saving new variables, 103 statistics, 99, 104 stepwise entry and removal, 104 string covariates, 101 survival function, 103 survival status variable, 104 time-dependent covariates, 107, 109 cross-products hypothesis and error matrices, 12 crosstabulation in Model Selection Loglinear Analysis, 57 cumulative frequencies, 82 in Ordinal Regression, 82 custom models in GLM Multivariate, 4 in GLM Repeated Measures, 21 in Model Selection Loglinear Analysis, 59 in Variance Componen
General Loglinear Analysis build terms, 66 cell covariates, 63 cell structures, 63 confidence intervals, 67 contrasts, 63 criteria, 67 display options, 67 distribution of cell counts, 63 factors, 63 interaction terms, 66 model specification, 66 plots, 67 residuals, 68 saving predicted values, 68 saving variables, 68 generalized log-odds ratio in General Loglinear Analysis, 63 generating class in Model Selection Loglinear Analysis, 59 GLM full factorial models, 4 GLM Multivariate build terms, 5 co
Kaplan-Meier command additional features, 97 comparing factor levels, 96 defining events, 95 example, 93 linear trend for factor levels, 96 mean and median survival time, 97 plots, 97 quartiles, 97 saving new variables, 96 statistics, 93, 97 survival status variables, 95 survival tables, 97 least significant difference in GLM Multivariate, 9 in GLM Repeated Measures, 26 Levene test in GLM Multivariate, 12 in GLM Repeated Measures, 29 leverage values in GLM Multivariate, 11 in GLM Repeated Measur
Model Selection Loglinear Analysis, 57 build terms, 60 defining factor ranges, 59 interaction terms, 60 models, 59 options, 60 multinomial logit models, 71 multivariate ANOVA, 1 multivariate GLM, 1 Nagelkerke R-square, 82 in Ordinal Regression, 82 nested terms, 47 in Linear Mixed Models, 47 Newton-Raphson method in General Loglinear Analysis, 63 in Logit Loglinear Analysis, 71 normal probability plots in Model Selection Loglinear Analysis, 60 observed frequencies, 82 in Ordinal Regression, 82 o
R-E-G-W Q in GLM Multivariate, 9 in GLM Repeated Measures, 26 repeated contrasts, 115 in GLM Multivariate, 6, 7 in GLM Repeated Measures, 23, 24 repeated measures variables, 42 in Linear Mixed Models, 42 residual covariance matrix, 52 in Linear Mixed Models, 52 residual plots in GLM Multivariate, 12 in GLM Repeated Measures, 29 residual SSCP in GLM Multivariate, 12 in GLM Repeated Measures, 29 residuals in General Loglinear Analysis, 68 in Linear Mixed Models, 55 in Logit Loglinear Analysis, 76 i
survival function in Life Tables, 87 t tests in GLM Multivariate, 12 in GLM Repeated Measures, 29 Tamhane’s T2 in GLM Multivariate, 9 in GLM Repeated Measures, 26 Tarone-Ware test in Kaplan-Meier, 96 test of parallel lines, 82 in Ordinal Regression, 82 Tukey’s honestly significant difference in GLM Multivariate, 9 in GLM Repeated Measures, 26 Tukey’s-b test in GLM Multivariate, 9 in GLM Repeated Measures, 26 unstandardized residuals in GLM Multivariate, 11 in GLM Repeated Measures, 28 Variance