Another great post from Daniel Lakens.
- Determine the maximum sample size you are willing to collect (e.g., N = 400)
- Plan equally spaced analyses (e.g., four looks at the data, after 50, 100, 150, and 200 participants per condition in a two-sample t-test).
- Use an alpha level at each look that controls the overall Type 1 error rate (e.g., for four looks: 0.018, 0.019, 0.020, and 0.021; for three looks: 0.023, 0.023, and 0.024; for two looks: 0.031 and 0.030); the first sketch after this list checks these boundaries by simulation.
- Calculate one-sided p-values and JZS Bayes Factors (with a scale r on the effect size of 0.5) at every analysis; a sketch of the Bayes Factor computation follows this list. Stop when the effect is statistically significant and/or the JZS Bayes Factor is > 3. Stop when there is support for the null hypothesis based on a JZS Bayes Factor < 0.3. If the results are inconclusive, continue. In small samples (e.g., 50 participants per condition) the risk of incorrectly accepting the null hypothesis based on Bayes Factors is relatively high, so always interpret results from small samples with caution.
- When the maximum sample size is reached without the data providing convincing evidence for the null or the alternative hypothesis, interpret the Bayes Factor while acknowledging that it provides only weak support for either hypothesis. Conclude that, given the power you had to detect a small effect size (e.g., 91% power to detect a d = 0.3; checked in a sketch below), the true effect size is most likely either zero or small.
- Report the effect size and its 95% confidence interval (the last sketch below shows one way to compute both), and interpret it in relation to other findings in the literature or to theoretical predictions about the size of the effect.
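
None of the code below appears in the original post; these are minimal Python sketches of the steps above, under assumptions named in each lead-in and comment. First, a simulation estimating the overall Type 1 error rate of the four-look design under the null, assuming normally distributed data and a one-sided t-test at each look:

```python
# A minimal simulation sketch (not from the post): estimate the overall
# Type 1 error rate of the four-look design under the null hypothesis,
# assuming normally distributed data and a one-sided t-test at each look.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2024)
looks = [50, 100, 150, 200]            # participants per condition at each look
alphas = [0.018, 0.019, 0.020, 0.021]  # per-look alpha levels from the post
n_sims = 20_000

false_positives = 0
for _ in range(n_sims):
    # Simulate under H0: both conditions come from the same distribution.
    a = rng.standard_normal(looks[-1])
    b = rng.standard_normal(looks[-1])
    for n, alpha in zip(looks, alphas):
        p = stats.ttest_ind(a[:n], b[:n], alternative="greater").pvalue
        if p < alpha:                  # stop at the first significant look
            false_positives += 1
            break

# Should land close to the nominal .05 if the boundaries do their job.
print(f"Simulated overall Type 1 error rate: {false_positives / n_sims:.3f}")
```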
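
Second, a sketch of the JZS Bayes Factor for a two-sample t-test with scale r = 0.5, following Rouder et al. (2009): a Cauchy(0, r) prior on the standardized effect size, computed through its normal-by-inverse-gamma mixture representation. Note this is the two-sided version; a directional Bayes Factor would restrict the prior to one side instead.

```python
# JZS Bayes Factor for a two-sample t-test (Rouder et al., 2009):
# Cauchy(0, r) prior on the standardized effect, written as a mixture
# of normals over g ~ InverseGamma(1/2, r^2 / 2).
import numpy as np
from scipy import integrate

def jzs_bf10(t, n1, n2, r=0.5):
    """Bayes Factor for H1 (effect != 0) over H0 (effect = 0)."""
    nu = n1 + n2 - 2              # degrees of freedom
    n_eff = n1 * n2 / (n1 + n2)   # effective sample size for two groups
    # Marginal likelihood of t under H0, up to a constant shared with H1.
    m0 = (1 + t**2 / nu) ** (-(nu + 1) / 2)
    # Marginal likelihood under H1: integrate over the mixing parameter g.
    def integrand(g):
        return ((1 + n_eff * g) ** -0.5
                * (1 + t**2 / ((1 + n_eff * g) * nu)) ** (-(nu + 1) / 2)
                * r / np.sqrt(2 * np.pi) * g ** -1.5
                * np.exp(-r**2 / (2 * g)))
    m1, _ = integrate.quad(integrand, 0, np.inf)
    return m1 / m0

# Hypothetical t-values at the second look (100 participants per condition):
print(f"BF10 = {jzs_bf10(2.5, 100, 100):.2f}")  # > 3 would mean: stop for H1
print(f"BF10 = {jzs_bf10(0.1, 100, 100):.2f}")  # < 0.3 would mean: stop for H0
```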
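
Third, a quick check of the quoted 91% power figure. It matches d = 0.3 with 200 participants per condition and a one-sided test at alpha = .05; those specific test settings are an assumption on my part, not stated in the post.

```python
# Power check (test settings assumed): d = 0.3, n = 200 per condition,
# one-sided independent-samples t-test at alpha = .05.
from statsmodels.stats.power import TTestIndPower

power = TTestIndPower().power(effect_size=0.3, nobs1=200,
                              alpha=0.05, alternative="larger")
print(f"Power to detect d = 0.3 with n = 200 per condition: {power:.2f}")
```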
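
Finally, a sketch of the reporting step: Cohen's d with an approximate 95% confidence interval, using the common large-sample variance approximation for d rather than the exact interval based on the noncentral t distribution.

```python
# Cohen's d with an approximate 95% CI for two independent groups,
# using the large-sample variance approximation (Hedges & Olkin, 1985);
# an exact interval would invert the noncentral t distribution instead.
import numpy as np
from scipy import stats

def cohens_d_ci(x, y, conf=0.95):
    n1, n2 = len(x), len(y)
    # Pooled standard deviation across the two groups.
    sp = np.sqrt(((n1 - 1) * np.var(x, ddof=1) + (n2 - 1) * np.var(y, ddof=1))
                 / (n1 + n2 - 2))
    d = (np.mean(x) - np.mean(y)) / sp
    # Approximate standard error of d.
    se = np.sqrt((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))
    z = stats.norm.ppf(1 - (1 - conf) / 2)
    return d, (d - z * se, d + z * se)

# Simulated final sample (200 per condition, true d = 0.3) for illustration.
rng = np.random.default_rng(7)
x, y = rng.normal(0.3, 1, 200), rng.normal(0.0, 1, 200)
d, (lo, hi) = cohens_d_ci(x, y)
print(f"d = {d:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```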