SPSS has a user-friendly interface and powerful capabilities
Conducting statistical analyses and interpreting their output are easy in SPSS
Statistical Package for the Social Sciences (SPSS; IBM Corp., Armonk, NY) is a statistical software application that allows researchers to enter and manipulate data and conduct various statistical analyses. Step-by-step methods for conducting and interpreting over 60 statistical tests are available in Research Engineer. Videos will be coming soon. Click on a link below to gain access to the methods for conducting and interpreting the statistical analysis in SPSS.
Parametric statistics are more powerful statistics
Non-parametric statistics are used with categorical and ordinal outcomes
As we continue our journey to break through the barriers associated with statistical lexicons, here is another dichotomy of popular statistical terms that are commonly spoken but not always understood.
Parametric statistics are used to assess differences and effects for continuous outcomes. These statistical tests include one-sample t-tests, independent samples t-tests, one-way ANOVA, repeated-measures ANOVA, ANCOVA, factorial ANOVA, multiple regression, MANOVA, and MANCOVA.
Non-parametric statistics are used to assess differences and effects for:
1. Ordinal outcomes - One-sample median tests, Mann-Whitney U, Wilcoxon, Kruskal-Wallis, Friedman's ANOVA, Proportional odds regression
2. Categorical outcomes - Chi-square, Chi-square Goodness-of-fit, odds ratio, relative risk, McNemar's, Cochran's Q, Kaplan-Meier, log-rank test, Cochran-Mantel-Haenszel, Cox regression, logistic regression, multinomial logistic regression
3. Small sample sizes (n < 30) - Smaller sample sizes make it harder to meet the statistical assumptions associated with parametric statistics. Non-parametric statistics can generate valid statistical inferences in these situations.
4. Violations of statistical assumptions for parametric tests - Normality, Homogeneity of variance, Normality of difference scores
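The post's methods are for SPSS, but the parametric-versus-non-parametric decision above can be sketched outside SPSS as well. The following is a minimal Python illustration using scipy with simulated data (the group values and cutoffs are assumptions, not from the post): check the normality assumption with Shapiro-Wilk, then fall back from the independent-samples t-test to the Mann-Whitney U if it is violated.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
group_a = rng.normal(loc=50, scale=10, size=25)  # simulated continuous outcome
group_b = rng.normal(loc=55, scale=10, size=25)

# Check the normality assumption in each group (Shapiro-Wilk).
normal_a = stats.shapiro(group_a).pvalue > 0.05
normal_b = stats.shapiro(group_b).pvalue > 0.05

if normal_a and normal_b:
    # Assumption met: parametric independent-samples t-test
    result = stats.ttest_ind(group_a, group_b)
else:
    # Assumption violated: non-parametric Mann-Whitney U
    result = stats.mannwhitneyu(group_a, group_b)

print(f"statistic={result.statistic:.3f}, p={result.pvalue:.4f}")
```

The same logic applies to the other pairings in the lists above (e.g., one-way ANOVA vs. Kruskal-Wallis, repeated-measures ANOVA vs. Friedman's ANOVA).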
Multivariate statistical tests show evidence of association between predictor variables and an outcome while controlling for demographic, confounding, and other patient data.
Multivariate statistics are more reflective of real-world medicine
We covered between-subjects and within-subjects analyses in the first Statistical Designs post. Multivariate statistics will be the focus in Statistical Designs 2.
While the vast majority of statistics reported in the literature fall into the categories of between-subjects and within-subjects analyses, those analyses do not properly account for all of the variance and confounding effects that exist in reality. Multivariate statistics play an important role in empirical reasoning because they allow us to control for demographic, confounding, clinical, or prognostic variables that moderate, mediate, or otherwise affect the association between a predictor and an outcome variable. They are also much more representative of reality and of the true effects that exist within human populations.
Very few if any relationships or treatment effects in physiology, psychology, education, or life in general are bivariate in nature. Relationships and treatment effects in reality ARE multivariate, diverse, and confounded by any number of characteristics. Therefore, it makes sense that researchers should be conducting multivariate statistics to truly understand human phenomena.
That being said, it is important to use multivariate statistics ONLY when you are asking a multivariate research question. Throwing a bunch of variables into a model without some theoretical or conceptual reason for including them can yield false treatment effects and inflate the Type I error rate. These spurious variables also create "statistical noise" that detracts from a model's ability to detect significant associations.
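The Type I error point can be demonstrated with a short simulation (a sketch with made-up random data, not from the post): correlate an outcome with 20 purely random "predictors" and count how many appear significant by chance alone.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 100
outcome = rng.normal(size=n)  # outcome unrelated to any predictor

# 20 purely random "predictors" with no real association to the outcome
false_positives = 0
pvalues = []
for _ in range(20):
    predictor = rng.normal(size=n)
    r, p = stats.pearsonr(predictor, outcome)
    pvalues.append(p)
    if p < 0.05:
        false_positives += 1

# At alpha = .05, roughly 1 in 20 spurious predictors is expected to
# reach "significance" by chance alone.
print(f"{false_positives} of 20 random predictors reached p < .05")
```

This is why covariates should earn their place in a model through theory or prior evidence, not through a fishing expedition.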
Choosing the correct multivariate statistic to answer your question is simple. You choose the multivariate analysis based on the type of outcome variable.
1. Categorical outcomes - Logistic regression (dichotomous), multinomial logistic regression (polychotomous), Kaplan-Meier, Cochran-Mantel-Haenszel, Cox regression (dichotomous/survival/time-to-event)
2. Ordinal outcomes - Proportional odds regression
3. Continuous outcomes - Factorial ANOVA with fixed effects, factorial ANOVA with random effects, factorial ANOVA with mixed effects, ANCOVA, multiple regression, MANOVA, MANCOVA
4. Count outcomes - Poisson regression (variance equal to the mean) and negative binomial regression (variance larger than the mean, i.e., overdispersion)
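For count outcomes, the Poisson-versus-negative-binomial choice comes down to a dispersion check. Here is a minimal Python sketch with a hypothetical count variable (the data are invented for illustration):

```python
import numpy as np

# Hypothetical count outcome (e.g., number of hospital visits)
counts = np.array([0, 1, 1, 2, 2, 2, 3, 3, 4, 5, 7, 9, 12, 15])

mean = counts.mean()
variance = counts.var(ddof=1)  # sample variance

# Poisson regression assumes the variance equals the mean; when the
# variance clearly exceeds the mean (overdispersion), negative
# binomial regression is the better fit.
model = "negative binomial" if variance > mean else "Poisson"
print(f"mean={mean:.2f}, variance={variance:.2f} -> {model} regression")
```

In practice you would confirm overdispersion with a formal test or by comparing model fit, but the mean-variance comparison is the intuition behind the rule in item 4 above.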
Eric Heidel, Ph.D. is Owner and Operator of Scalë, LLC.
Hire A Statistician!
Copyright © 2019 Scalë. All Rights Reserved. Patent Pending.