SPSS has a user-friendly interface and powerful capabilities
Conducting statistics and interpreting outputs is easy in SPSS
The Statistical Package for the Social Sciences (SPSS; IBM Corp., Armonk, NY) is a statistical software application that allows researchers to enter and manipulate data and conduct a wide range of statistical analyses. Step-by-step methods for conducting and interpreting more than 60 statistical tests are available on Research Engineer, with videos coming soon. Click on a link below to access the methods for conducting and interpreting each statistical analysis in SPSS.
Parametric statistics are more powerful statistics
Non-parametric statistics are used with categorical and ordinal outcomes
As we continue our journey to break through the barriers associated with statistical lexicons, here is another dichotomy of popular statistical terms that are commonly spoken but not always understood.
Parametric statistics are used to assess differences and effects for continuous outcomes. These statistical tests include one-sample t-tests, independent samples t-tests, one-way ANOVA, repeated-measures ANOVA, ANCOVA, factorial ANOVA, multiple regression, MANOVA, and MANCOVA.
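As a rough illustration of what one of these parametric tests computes under the hood, here is a pure-Python sketch of the pooled-variance independent-samples t statistic (the "equal variances assumed" form). The function name and sample data are hypothetical, for illustration only.

```python
import math

def independent_t(group1, group2):
    """Independent-samples t statistic with pooled variance.

    Illustrative sketch of the equal-variances formula; the sample
    data passed in below are hypothetical.
    """
    n1, n2 = len(group1), len(group2)
    m1 = sum(group1) / n1
    m2 = sum(group2) / n2
    # Sample variances (denominator n - 1)
    v1 = sum((x - m1) ** 2 for x in group1) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in group2) / (n2 - 1)
    # Pooled variance weights each group by its degrees of freedom
    sp2 = ((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)
    t = (m1 - m2) / math.sqrt(sp2 * (1 / n1 + 1 / n2))
    df = n1 + n2 - 2
    return t, df

# Hypothetical continuous outcome measured in two independent groups
t, df = independent_t([5.1, 4.8, 5.6, 5.0], [4.2, 4.5, 4.1, 4.4])
```

The resulting t value would be compared against a t distribution with n1 + n2 − 2 degrees of freedom, which is the comparison SPSS reports in its Independent Samples Test output.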
Non-parametric statistics are used to assess differences and effects for:
1. Ordinal outcomes - One-sample median tests, Mann-Whitney U, Wilcoxon, Kruskal-Wallis, Friedman's ANOVA, proportional odds regression
2. Categorical outcomes - Chi-square, Chi-square Goodness-of-fit, odds ratio, relative risk, McNemar's, Cochran's Q, Kaplan-Meier, log-rank test, Cochran-Mantel-Haenszel, Cox regression, logistic regression, multinomial logistic regression
3. Small sample sizes (n < 30) - Smaller sample sizes make it harder to meet the statistical assumptions associated with parametric statistics. Non-parametric statistics can generate valid statistical inferences in these situations.
4. Violations of statistical assumptions for parametric tests - normality, homogeneity of variance, and normality of difference scores
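To make the categorical case concrete, here is a pure-Python sketch of the Pearson chi-square test of independence for a 2x2 table, computed from first principles (expected counts under independence, then the sum of squared deviations). The cell counts below are hypothetical.

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic for a 2x2 contingency table.

    Cells are laid out as [[a, b], [c, d]]; the counts passed in
    below are hypothetical.
    """
    observed = [[a, b], [c, d]]
    row_totals = [a + b, c + d]
    col_totals = [a + c, b + d]
    n = a + b + c + d
    chi2 = 0.0
    for i in range(2):
        for j in range(2):
            # Expected count under independence: row total * column total / n
            expected = row_totals[i] * col_totals[j] / n
            chi2 += (observed[i][j] - expected) ** 2 / expected
    return chi2  # compare against chi-square with 1 df (3.84 at alpha = .05)

chi2 = chi_square_2x2(30, 10, 15, 25)
```

Note that no distributional assumption about the outcome is required, which is exactly why tests like this remain valid when the assumptions of parametric statistics cannot be met.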
McNemar's can be used as a post hoc test
Significant main effects for Cochran's Q need to be explained
Non-parametric tests like chi-square, Fisher's exact test, Kruskal-Wallis, Cochran's Q, and Friedman's ANOVA do not have built-in post hoc analyses to explain significant main effects. To conduct these post hoc analyses, researchers have to turn to subsequent two-group non-parametric tests.
In a prior post, I explained how Mann-Whitney U tests are used in a post hoc fashion to follow up significant main effects found with Kruskal-Wallis analyses. This approach is pertinent to between-subjects designs.
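The Mann-Whitney U statistic used for these post hoc comparisons can be sketched in pure Python by direct pair counting, with ties counting one half. The function name and ordinal data below are hypothetical.

```python
def mann_whitney_u(group1, group2):
    """Mann-Whitney U by direct pair counting (ties count 0.5).

    Illustrative sketch; returns the smaller of the two U values,
    which is the one conventionally reported. The ordinal data
    passed in below are hypothetical.
    """
    # Count pairs where group1's value exceeds group2's (0.5 for ties)
    u1 = sum(1.0 if x > y else 0.5 if x == y else 0.0
             for x in group1 for y in group2)
    u2 = len(group1) * len(group2) - u1
    return min(u1, u2)

# Post hoc comparison of two of the Kruskal-Wallis groups
u = mann_whitney_u([3, 4, 2, 5], [1, 2, 1, 3])
```

When several of these pairwise comparisons follow one Kruskal-Wallis test, the alpha level would typically be adjusted (e.g., Bonferroni) for the number of comparisons made.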
If you are using a within-subjects design with three or more observations of a dichotomous categorical outcome, you use Cochran's Q test to assess main effects. If a significant main effect is found, McNemar's tests are then employed for post hoc pairwise comparisons. Significant post hoc tests (or relative risk calculations) provide evidence of significant differences across observations, or within subjects.
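The Cochran's Q omnibus test and a McNemar post hoc comparison can both be sketched in a few lines of pure Python. The function names and the 0/1 within-subjects data below are hypothetical, chosen only to show the flow from omnibus test to pairwise follow-up.

```python
def cochrans_q(data):
    """Cochran's Q for k repeated dichotomous (0/1) observations.

    `data` is a list of subjects, each a list of k binary outcomes.
    Illustrative sketch; the data passed in below are hypothetical.
    """
    k = len(data[0])
    col = [sum(row[j] for row in data) for j in range(k)]  # successes per observation
    row = [sum(r) for r in data]                           # successes per subject
    n = sum(row)
    q = (k - 1) * (k * sum(c * c for c in col) - n * n) \
        / (k * n - sum(r * r for r in row))
    return q  # compare against chi-square with k - 1 df

def mcnemar_chi2(pairs):
    """McNemar's chi-square from paired 0/1 outcomes (no continuity correction)."""
    b = sum(1 for x, y in pairs if x == 1 and y == 0)  # discordant: 1 -> 0
    c = sum(1 for x, y in pairs if x == 0 and y == 1)  # discordant: 0 -> 1
    return (b - c) ** 2 / (b + c)  # compare against chi-square with 1 df

# Hypothetical within-subjects data: 6 subjects, 3 observations each
data = [[1, 1, 0], [1, 0, 0], [1, 1, 1], [1, 0, 0], [1, 1, 0], [0, 0, 0]]
q = cochrans_q(data)

# Post hoc: compare observation 1 vs observation 3 with McNemar's test
chi2 = mcnemar_chi2([(row[0], row[2]) for row in data])
```

As with the Mann-Whitney follow-ups above the alpha level for the McNemar post hoc tests would typically be adjusted for the number of pairwise comparisons.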
Non-parametric statistics should be employed more often than they are in the literature. Many published studies use small sample sizes and ordinal or categorical outcomes. The statistical assumptions of the more powerful parametric statistics often cannot be met with these types of designs. Non-parametric statistics are robust to these violations and should be used accordingly. Post hoc analyses are important in non-parametric statistics, just as they are in parametric statistics.
Eric Heidel, Ph.D. is Owner and Operator of Scalë, LLC.
Hire A Statistician!
Copyright © 2019 Scalë. All Rights Reserved. Patent Pending.