# Effect size, sample size, and statistical power

## Choose an effect size to maximize statistical power and decrease sample size

Effect size, sample size, and statistical power are interdependent: changing any one of them will **ALWAYS** exact a predictable change in the other two.

An effect size is the **hypothesized difference** expected by researchers in an *a priori* fashion between **independent groups** (between-subjects analysis), **across time or observations** (within-subjects analysis), or the **magnitude and direction of association between constructs** (correlations and multivariate analyses).
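These three flavors of effect size can be sketched in Python (simulated data; all means, SDs, and sample sizes here are illustrative assumptions, not values from the text):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Between-subjects: Cohen's d for two independent groups (assumed means/SDs)
treatment = rng.normal(12.0, 3.0, 40)
control = rng.normal(10.0, 3.0, 40)
pooled_sd = np.sqrt((treatment.var(ddof=1) + control.var(ddof=1)) / 2)
d_between = (treatment.mean() - control.mean()) / pooled_sd

# Within-subjects: Cohen's d across time, same people observed twice
pre = rng.normal(10.0, 3.0, 40)
post = pre + rng.normal(1.5, 2.0, 40)            # later observation
d_within = (post - pre).mean() / (post - pre).std(ddof=1)

# Association between constructs: Pearson's r
x = rng.normal(0, 1, 40)
y = 0.5 * x + rng.normal(0, 1, 40)
r, _ = stats.pearsonr(x, y)

print(f"d (between) = {d_between:.2f}, d (within) = {d_within:.2f}, r = {r:.2f}")
```

Each quantity standardizes a raw difference (or association) so that effects can be compared across studies and carried into a power analysis.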

Effect size planning is perhaps the **HARDEST** part of designing a research study. Oftentimes, researchers have **NO IDEA** what type of effect size they are trying to detect.

First and foremost, when researchers cannot state the hypothesized differences in their outcomes, an **evidence-based measure of effect** yielded from a published study that is theoretically or conceptually similar to the phenomenon of interest should be used. Using an evidence-based measure of effect in an *a priori* power analysis shows **more empirical rigor** on the part of the researchers and **increases the internal validity** of the study with the use of published values.
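For instance, a published study usually reports group means and standard deviations, from which an evidence-based Cohen's d can be derived by hand. A minimal sketch (every number below is invented for illustration):

```python
import math

# Hypothetical summary statistics from a published, conceptually similar study
m1, sd1, n1 = 24.3, 5.1, 52   # intervention group (assumed values)
m2, sd2, n2 = 21.0, 5.6, 49   # comparison group (assumed values)

# Pooled standard deviation, weighted by each group's degrees of freedom
pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))

# Cohen's d: the evidence-based effect size to carry into an a priori power analysis
d = (m1 - m2) / pooled_sd
print(f"Evidence-based Cohen's d = {d:.2f}")
```

With these invented numbers the pooled SD comes out to about 5.35, giving d ≈ 0.62, which would then serve as the effect size input to the power analysis.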

Sample size is the **absolute number of participants sampled from a given population for purposes of running inferential statistics**. The word *inferential* denotes the basic empirical reasoning that we draw a **representative sample from a population** and then conduct statistics in order to **make inferences back to said population**. An important part of preliminary study planning is to **specify the inclusion and exclusion criteria** for participation in your study and then get an idea of how large a **participant pool** is available to you from which to draw a sample for purposes of running inferential statistics.

Due to the underlying algebra of inferential statistics, **large sample sizes** will drastically increase your chances of detecting a statistically significant finding; in other terms, they drastically **increase your statistical power**. Large sample sizes will also allow you to detect both **large and small effect sizes**, largely regardless of the outcome's scale of measurement, the research design, or the magnitude, variance, and direction of the effect.

**Small sample sizes will decrease** your chances of detecting statistically significant differences (your **statistical power**), especially with categorical and ordinal outcomes, between-subjects and multivariate designs, and small effect sizes.
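The sample size–power relationship can be seen directly in a quick Monte Carlo sketch (an assumed small true effect of d = 0.3 on a normally distributed outcome): the proportion of simulated studies that reach significance climbs steeply with the per-group n.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
d = 0.3           # assumed small true effect in the population
alpha = 0.05
n_sims = 2000

powers = {}
for n in (20, 80, 320):
    rejections = 0
    for _ in range(n_sims):
        a = rng.normal(d, 1.0, n)    # treatment group, true effect present
        b = rng.normal(0.0, 1.0, n)  # control group
        _, p = stats.ttest_ind(a, b)
        rejections += p < alpha
    powers[n] = rejections / n_sims
    print(f"n per group = {n:4d}  empirical power ≈ {powers[n]:.2f}")
```

With a small effect, a 20-per-group study detects it only a small fraction of the time, while the largest sample detects it almost always.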

Statistical power is the **probability that you, as a researcher, will reject the null hypothesis, given that the treatment effect actually exists in the population**. Basically, statistical power is the chance you have of finding a significant difference or main effect when running statistical analyses. Statistical power is what you are interested in when you ask, "**How many people do I need to find significance?**"
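That question is exactly what an *a priori* power analysis answers. A sketch using statsmodels, assuming a medium effect (d = 0.5), a two-tailed alpha of .05, and the conventional 80% power target:

```python
from statsmodels.stats.power import TTestIndPower

# Solve for n per group given effect size, alpha, and target power
analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.80,
                                   alternative='two-sided')
print(f"Required sample size per group: {n_per_group:.1f}")
```

Under these assumptions the answer is roughly 64 participants per group (round up, since you cannot recruit a fraction of a person).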

In the applied empirical sense, **measuring for large effect sizes increases statistical power**, while trying to detect **small effect sizes will decrease your statistical power**.

**Continuous outcomes increase statistical power** because of increased precision and accuracy in measurement.

**Categorical and ordinal outcomes decrease statistical power** because of decreased variance and objectivity of measurement.

**Within-subjects designs generate more statistical power** due to participants serving as their own controls.

**Between-subjects and multivariate designs require more observations to detect differences and therefore decrease statistical power**.
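The design difference can be illustrated with statsmodels' independent versus paired t-test power solvers (same assumed d = 0.5, alpha = .05, 80% power; note that the paired d is defined on the difference scores, so this is a rough comparison rather than an exact equivalence):

```python
from statsmodels.stats.power import TTestIndPower, TTestPower

d, alpha, target = 0.5, 0.05, 0.80   # assumed planning values

# Between-subjects: two independent groups, n per group
n_between = TTestIndPower().solve_power(effect_size=d, alpha=alpha, power=target)

# Within-subjects: each participant serves as their own control (paired t-test)
n_within = TTestPower().solve_power(effect_size=d, alpha=alpha, power=target)

print(f"Between-subjects: ~{n_between:.0f} per group "
      f"({2 * n_between:.0f} total); within-subjects: ~{n_within:.0f} total")
```

The within-subjects design reaches the same power target with far fewer total participants, which is the practical payoff of participants acting as their own controls.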