Factorial Analysis of Variance: Within-Within

This blog post explores Factorial Analysis of Variance (ANOVA) in a within-within design, covering key assumptions, equations, and calculations. Readers will learn about effect size measures, post-hoc analysis options, and non-parametric alternatives. The post also provides steps to perform Factorial ANOVA using popular statistical software packages (jamovi, SPSS, and R) and guidance on reporting results in APA style, enabling readers to apply this statistical method in real-world research settings.

Factorial ANOVA
Within-within design
Comparing means

Author: Ovande Furtado
Affiliation: Cal State Northridge
Published: April 30, 2023

Learning Objectives

  1. Define within-within factorial ANOVA and describe its potential applications in kinesiology research.
  2. Understand the assumptions underlying within-within factorial ANOVA and how to check them.
  3. Know how to calculate the various sums of squares, degrees of freedom, and mean squares for within-within factorial ANOVA by hand.
  4. Interpret the results of within-within factorial ANOVA, including F-ratios and p-values, and understand how to make inferences about the effects of factors and their interactions.
  5. Understand how to conduct post-hoc tests to investigate significant main effects or interactions.
  6. Describe the advantages and limitations of within-within factorial ANOVA compared to other statistical methods.
  7. Understand how to use statistical software to conduct within-within factorial ANOVA and interpret the output.
  8. Identify real-world examples of within-within factorial ANOVA in kinesiology research and understand how it can be used to answer research questions.
  9. Be able to critically evaluate within-within factorial ANOVA studies in the kinesiology literature and identify potential limitations or confounding factors.
  10. Understand the importance of careful study design and data collection for within-within factorial ANOVA and how to avoid common pitfalls.

1 Decision

List of questions that a researcher may need to consider when deciding to use a within-subjects factorial ANOVA:

  1. Are both independent variables within-subjects (repeated-measures) factors, meaning every participant is measured under each combination of their levels? If yes, then a within-subjects factorial ANOVA may be appropriate.
  2. Is the dependent variable continuous? If not, then a within-subjects factorial ANOVA is not appropriate.
  3. Are there two or more independent variables? If not, then a within-subjects factorial ANOVA is not appropriate. In this case, use the one-way repeated-measures ANOVA.
  4. Are the data normally distributed? If the data are not normally distributed, then the researcher may consider a nonparametric alternative such as the Friedman test or the aligned rank transform for factorial designs.
  5. Is the assumption of sphericity met? If not, the researcher may apply a correction to the degrees of freedom, such as the Greenhouse-Geisser or Huynh-Feldt correction.

In summary, a within-subjects factorial ANOVA is appropriate when the dependent variable is continuous, there are two or more within-subjects independent variables measured on the same participants, and the data meet the assumptions of normality and sphericity.

Note

The website StatKat has several tools to help with this decision.

2 Sample data

Download1 the dataset: motor_performance.csv

This dataset consists of 30 participants who have undergone balance and strength training interventions. The dataset contains the following variables:

  1. ID: A unique identifier for each participant.
  2. Pretest_Balance: The Functional Reach Test (FRT) score for balance at the beginning of the intervention (pretest).
  3. Midtest_Balance: The FRT score for balance at the midpoint of the intervention (midtest).
  4. Posttest_Balance: The FRT score for balance at the end of the intervention (posttest).
  5. Pretest_Strength: The strength score at the beginning of the intervention (pretest).
  6. Midtest_Strength: The strength score at the midpoint of the intervention (midtest).
  7. Posttest_Strength: The strength score at the end of the intervention (posttest).
  8. Gender: The gender of each participant (Male or Female).

The dataset includes information on participants’ motor performance in terms of balance (FRT scores) and strength over three time points (pretest, midtest, and posttest) and their gender. This dataset can be used to investigate the effectiveness of balance and strength training interventions on motor performance over time and any potential differences in outcomes based on gender.

A lower FRT score suggests poorer balance and stability, which may be associated with a higher risk of falling. In contrast, a higher FRT score indicates better balance and stability, suggesting a lower risk of falling. By examining the effects of time and training type on FRT scores, this study aims to determine whether balance and strength training interventions can improve motor performance and, by extension, reduce the risk of falling among the participants.
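
If you plan to follow along in R, the sketch below reads the sample dataset and reshapes it into long format (one row per participant per condition per time point), the layout most R routines expect. This is a minimal sketch: the file and column names follow the description above, and the object name long_data is simply a convention reused in later examples in this post.

# Read the sample data and reshape it to long format
library(tidyr)

motor <- read.csv("motor_performance.csv")

# Column names such as Pretest_Balance are split into a Time part and a Condition part
long_data <- pivot_longer(
  motor,
  cols = -c(ID, Gender),
  names_to = c("Time", "Condition"),
  names_sep = "_",
  values_to = "Score"
)

head(long_data)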

3 Intro to within-subjects \(f\)ANOVA

In kinesiology, we often investigate the effects of multiple factors on various dependent variables. Take, for example, the study of strength training interventions. We might be interested in examining the impact of different exercise regimens and intensities on muscle strength, endurance, and flexibility. We need a statistical approach that accounts for multiple independent variables and their possible interactions to explore these relationships.

Enter the within-within \(f\)ANOVA (aka within-subjects \(f\)ANOVA)! This advanced statistical method allows us to examine the effects of two or more within-subject factors on a continuous dependent variable while accounting for the interactions between these factors. Using a within-within design, we can minimize the influence of extraneous variables and increase the statistical power of our analysis, making it an ideal choice for kinesiology research.

This blog post will cover the basics of within-within factorial ANOVA, including its assumptions, interpretation, and application. We will then delve into real-world examples from kinesiology research, demonstrating how this versatile tool can untangle the web of relationships between various factors in our studies. By the end of this post, you will have a deeper understanding of the within-within factorial ANOVA and be better equipped to incorporate it into your research projects.

The within-subjects \(f\)ANOVA allows us to examine the effects of two or more within-subject factors on a continuous dependent variable while accounting for the interactions between these factors.

4 Assumptions

Certain assumptions must be met for the within-within factorial ANOVA to produce valid results. These assumptions are important when designing your study and analyzing your data. Here are the main assumptions for this analysis:

  1. Normality: The dependent variable should be approximately normally distributed within each combination of the levels of the within-subject factors. While the within-within factorial ANOVA is fairly robust to moderate violations of normality, severe deviations from normality can compromise the validity of the results. It is essential to assess the normality of the data using visual methods like histograms, Q-Q plots, or statistical tests like the Shapiro-Wilk test.
  2. Sphericity: Sphericity is an assumption specific to within-subjects designs and refers to the equality of variances of the differences between levels of a within-subject factor. Violations of sphericity can lead to an increased likelihood of Type I errors (false positives). To assess sphericity, you can use Mauchly’s test. If sphericity is violated, you can apply corrections to the degrees of freedom, such as the Greenhouse-Geisser or Huynh-Feldt corrections, to adjust the F-test and maintain the validity of the results.
  3. Independence of observations: This assumption states that the observations within each combination of the levels of the within-subject factors should be independent of one another. Although within-subject designs inherently involve repeated measures on the same participants, the independence assumption still applies to the error term. Randomly assigning participants to the order of conditions and counterbalancing can help maintain this assumption.
  4. Homogeneity of error variances: The variances of the errors should be consistent across all combinations of the levels of the within-subject factors. This assumption can be assessed using Levene's test or visual inspection of residual plots. Data transformations or non-parametric tests might be considered if this assumption is violated.

By ensuring that these assumptions are met, you can confidently apply within-within factorial ANOVA to your kinesiology research and trust the validity of your results. Conversely, if any of these assumptions are violated, consider alternative statistical methods, transformations, or corrections to maintain the integrity of your analysis.
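
As an illustration of checking normality in R, the sketch below runs a Shapiro-Wilk test within each Condition x Time cell; sphericity (Mauchly's test) is reported automatically by most repeated-measures ANOVA routines, so it is usually read from the ANOVA output rather than computed separately. The long_data data frame and its column names are assumptions carried over from the Section 2 sketch.

# P-values from Shapiro-Wilk tests of normality within each Condition x Time cell
tapply(long_data$Score,
       list(long_data$Condition, long_data$Time),
       function(x) shapiro.test(x)$p.value)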

5 Equations

Equations needed for hand calculation of within-within factorial ANOVA:

Grand mean

\[\bar{Y} = \frac{\sum_{i=1}^a \sum_{j=1}^b \sum_{k=1}^n Y_{ijk}}{abn} \tag{1}\]

Factor A sum of squares

\[(SSA) = nb\sum_{i=1}^a (\bar{Y}_{i\cdot} - \bar{Y})^2 \tag{2}\]

Factor B sum of squares

\[(SSB) = na\sum_{j=1}^b (\bar{Y}_{\cdot j} - \bar{Y})^2 \tag{3}\]

Interaction sum of squares

\[(SSAB) = n\sum_{i=1}^a \sum_{j=1}^b (\bar{Y}_{ij} - \bar{Y}_{i\cdot} - \bar{Y}_{\cdot j} + \bar{Y})^2 \tag{4}\]

Total sum of squares

\[(SST) = \sum_{i=1}^a \sum_{j=1}^b \sum_{k=1}^n (Y_{ijk} - \bar{Y})^2 \tag{5}\]

Error sum of squares

\[(SSE) = SST - SSA - SSB - SSAB \tag{6}\]

Degrees of freedom

\[df_A = a - 1, \quad df_B = b - 1, \quad df_{AB} = (a-1)(b-1), \quad df_{error} = ab(n-1) \tag{7}\]

where \(a\) is the number of levels for factor A, \(b\) is the number of levels for factor B, and \(n\) is the number of participants (observations per cell). In a fully within-subjects design, the error term is further partitioned: each effect is tested against its own interaction with subjects, with \(df_{A \times S} = (a-1)(n-1)\), \(df_{B \times S} = (b-1)(n-1)\), and \(df_{AB \times S} = (a-1)(b-1)(n-1)\). These are the denominator degrees of freedom reported by statistical software and used in the example later in this post.

Mean squares

\[(MS) = \frac{\text{SS}}{\text{df}} \tag{8}\]

F-ratio

\[F = \frac{\text{MS}_{\text{effect}}}{\text{MS}_{\text{error}}} \tag{9}\]
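
To make the formulas concrete, the sketch below computes the grand mean, the marginal and cell means, and the main-effect and interaction sums of squares in R. It is a minimal sketch that assumes the long_data data frame built in the Section 2 sketch.

# Building blocks for the hand calculations
grand_mean <- mean(long_data$Score)

a_means    <- tapply(long_data$Score, long_data$Condition, mean)  # factor A marginal means
b_means    <- tapply(long_data$Score, long_data$Time, mean)       # factor B marginal means
cell_means <- tapply(long_data$Score,
                     list(long_data$Condition, long_data$Time), mean)

n <- length(unique(long_data$ID))  # participants (observations per cell)
a <- length(a_means)               # levels of factor A
b <- length(b_means)               # levels of factor B

SSA  <- n * b * sum((a_means - grand_mean)^2)
SSB  <- n * a * sum((b_means - grand_mean)^2)
SSAB <- n * sum((cell_means
                 - outer(a_means, rep(1, b))   # subtract row (A) means
                 - outer(rep(1, a), b_means)   # subtract column (B) means
                 + grand_mean)^2)
SST  <- sum((long_data$Score - grand_mean)^2)

c(SSA = SSA, SSB = SSB, SSAB = SSAB, SST = SST)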

6 F Distribution

The F distribution[1], also known as the Fisher-Snedecor distribution, is a continuous probability distribution that is widely used in statistical hypothesis testing, particularly in the analysis of variance (ANOVA). It is named after Ronald A. Fisher and George W. Snedecor, two prominent statisticians who contributed significantly to its development.

The F-distribution used in the within-within factorial ANOVA is the same as that used in One-Way ANOVA. The F-distribution is a continuous probability distribution that arises frequently as the null distribution of the test statistic in ANOVA, regardless of whether it is a One-Way or Factorial ANOVA.

However, the degrees of freedom for the F-distribution will differ between One-Way ANOVA and Factorial ANOVA. In One-Way ANOVA, the degrees of freedom are associated with the number of levels of a single independent variable.

In Factorial ANOVA, the degrees of freedom are associated with the number of levels of multiple independent variables and their interactions. When comparing F-ratios to critical F-values, you need to consider the appropriate degrees of freedom for your specific test. In both One-Way and Factorial ANOVA, you look up the critical F-value in an F-distribution table based on the numerator and denominator degrees of freedom and the chosen significance level (usually α = 0.05). If the calculated F-ratio is greater than the critical F-value, you can reject the null hypothesis and conclude that there is a significant effect.
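
For example, the critical value and the p-value for an observed F-ratio can be obtained directly in R with qf() and pf(); the degrees of freedom and F value below are taken from the interaction test reported in the example later in this post.

# Critical F-value and p-value for an observed F-ratio
df_num <- 2      # numerator degrees of freedom
df_den <- 58     # denominator degrees of freedom
f_obs  <- 5.80   # observed F-ratio

qf(0.95, df_num, df_den)                       # critical value at alpha = .05
pf(f_obs, df_num, df_den, lower.tail = FALSE)  # p-value for the observed F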

Some key characteristics of the F distribution are:

  1. It is always non-negative, as it is the ratio of two chi-square distributed quantities (each divided by its degrees of freedom).
  2. It is asymmetric and positively skewed, with a longer tail on the right side.
  3. The peak of the distribution shifts to the right (toward 1) as the degrees of freedom increase.
  4. As both degrees of freedom grow large, the distribution becomes increasingly concentrated around 1 and approximately normal in shape.
# Load required packages quietly
if (!require("pacman")) install.packages("pacman", quiet = TRUE)
suppressMessages(pacman::p_load("ggplot2", "ggthemes"))

# Set the parameters for the F distribution
df1 <- 10  # degrees of freedom for the numerator
df2 <- 20  # degrees of freedom for the denominator

# Probability density function (pdf) of the F distribution
f_pdf <- function(x) {
  df(x, df1, df2)
}

# Define the range of x values to plot
x_range <- seq(0, 5, length.out = 1000)

# Plot the F distribution using ggplot2
ggplot(data.frame(x = x_range, y = f_pdf(x_range)), aes(x = x, y = y)) +
  geom_line(color = "blue", linewidth = 1) +
  ggtitle(paste("F Distribution with df1 =", df1, "and df2 =", df2)) +
  xlab("F value") +
  ylab("Probability Density") +
  theme_minimal()

7 Measure of effect size

When conducting a within-subjects factorial ANOVA, one of the most critical aspects is the effect size. The effect size is a quantitative measure of the magnitude of the observed effect in a statistical analysis. This section will discuss the importance of effect size in within-subjects factorial ANOVA, the various measures used to calculate it, and the interpretation of these values.

Effect size is crucial for determining the practical significance of a statistical analysis. While p-values provide information about the probability of obtaining the observed results due to chance alone, effect size conveys the strength of the relationship between variables, which is vital for understanding the real-world implications of the findings. Additionally, effect sizes are essential for conducting power analyses and determining the appropriate sample size for future studies.

There are several ways to calculate the effect size for within-subjects factorial ANOVA, with the most commonly used measures being partial eta-squared (\(η^2_p\)) and generalized eta-squared (\(η^2_G\)). Both indices quantify the proportion of variance in the dependent variable that can be accounted for by each factor and their interactions.

Partial Eta-Squared (\(η^2_p\)): This measure of effect size is calculated as the ratio of the sum of squares for a specific effect (e.g., main effect or interaction) to the sum of the effect’s sum of squares and the error sum of squares. Partial eta-squared is widely used due to its ease of computation and interpretation.

Generalized Eta-Squared (\(η^2_G\)): This effect size measure extends partial eta-squared, considering the repeated-measures nature of within-subjects designs. It is calculated by dividing the sum of squares for a specific effect by the total sum of squares, including the subject variability. Generalized eta-squared is considered a more accurate estimate of effect size in within-subjects factorial ANOVA, especially when there is an imbalance in the number of observations across different cells of the design.
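
As a quick illustration, partial eta-squared is simply a ratio of sums of squares and can be computed directly from an ANOVA table; the SS values below are made-up placeholders.

# Partial eta-squared from the sums of squares in an ANOVA table
ss_effect <- 120   # hypothetical SS for the effect of interest
ss_error  <- 300   # hypothetical SS for that effect's error term
eta_p2    <- ss_effect / (ss_effect + ss_error)
eta_p2     # about 0.29, a "large" effect by the guidelines below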

Effect size values can be interpreted using the following guidelines:

  • Small effect: \(η^2_p\) or \(η^2_G\) ≥ 0.01
  • Medium effect: \(η^2_p\) or \(η^2_G\) ≥ 0.06
  • Large effect: \(η^2_p\) or \(η^2_G\) ≥ 0.14

When interpreting effect size values, it is essential to consider the research context and the specific variables under investigation. In some cases, even a small effect size can have significant practical implications, whereas, in others, a large effect size may not be as meaningful.

8 Post-Hoc analysis

When conducting a within-subjects factorial ANOVA and finding significant main effects or interactions, it is essential to perform post hoc analyses to further investigate the nature of these effects. Post hoc analyses help researchers identify where the significant differences lie between the levels of the factors or the specific combinations of factor levels that contribute to the interaction effect. This section will discuss the purpose of post hoc analyses, the different methods available for conducting these tests, and their interpretation.

Within-subjects factorial ANOVA provides information about the overall main effects and interactions between factors but does not pinpoint the differences between factor levels or their combinations. Post hoc analyses are follow-up tests that allow researchers to make pairwise comparisons between different factor levels or examine simple effects within an interaction, providing a more detailed understanding of the data and the relationships between variables.

Several post hoc tests are available for within-subjects factorial ANOVA, each with unique features and assumptions. Some of the most commonly used methods include:

  1. Bonferroni Correction: This method involves adjusting the significance level (α) for multiple comparisons by dividing the original α by the number of comparisons made. While the Bonferroni correction is straightforward, it can be overly conservative, increasing the likelihood of Type II errors (i.e., failing to detect a true effect).

  2. Tukey’s Honestly Significant Difference (HSD): Tukey’s HSD is a popular post hoc test that controls the family-wise error rate (i.e., the probability of making at least one Type I error across all comparisons). This test is more powerful than the Bonferroni correction, as it accounts for the interdependence of the comparisons.

  3. Hochberg’s GT2: This test is an alternative to Tukey’s HSD, designed for situations with unequal sample sizes. Hochberg’s GT2 controls the family-wise error rate and is considered more powerful than the Bonferroni correction.

  4. Simple Effects Analysis: In the case of significant interactions, researchers may conduct simple effects analysis to examine the effect of one factor at each level of the other factor(s). This method helps to disentangle the nature of the interaction and identify specific factor level combinations that contribute to the observed effect.

When interpreting post hoc analysis results, it is crucial to consider the adjusted p-values or confidence intervals provided by the chosen method. Pairwise comparisons or simple effects with adjusted p-values less than the significance level (e.g., 0.05) indicate statistically significant differences between the factor levels or combinations.
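
In R, for instance, Bonferroni-adjusted pairwise comparisons for a within-subjects factor can be obtained with pairwise.t.test(). This is a minimal sketch; long_data and its column names are assumptions carried over from the Section 2 sketch, and paired = TRUE relies on the rows being ordered identically within each time point (as they are when coming straight from that reshaping step).

# Bonferroni-adjusted pairwise comparisons across time points (paired, within-subjects)
pairwise.t.test(long_data$Score, long_data$Time,
                paired = TRUE, p.adjust.method = "bonferroni")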

Remember that post hoc analyses should be considered exploratory, as they are based on the original within-subjects factorial ANOVA results. Thus, findings should be interpreted cautiously, and replication in future studies is recommended to confirm the observed effects.

9 Result interpretation

To better understand the relationships between variables and determine the practical implications of their findings, researchers must follow several key steps when interpreting the results of a within-subjects factorial ANOVA.

Examine the Assumptions: To correctly interpret the results of a within-subjects factorial ANOVA, verifying that the data meets certain requirements, such as normality, sphericity, and the absence of outliers, is important. In addition, one may consider transforming the data or utilizing alternative statistical methods for any violations.

Analyze Main Effects and Interactions: Review the ANOVA summary table to determine the statistical significance of main effects and interactions:

  • Main Effects: To determine the significant main effect of a factor on the dependent variable, check the p-values linked to each factor in the analysis. A p-value lower than the predetermined significance level (for instance, 0.05) suggests that the factor has a significant main effect on the dependent variable.
  • Interactions: Review the p-values for the interaction terms between factors. If the p-value is less than 0.05, it indicates a significant interaction, which means that the effect of one factor depends on the level of the other factor(s).

Calculate and Interpret Effect Sizes: To better understand the observed effects and their practical significance, calculate effect sizes such as partial eta-squared (\(η^2_p\)) or generalized eta-squared (\(η^2_G\)) for each significant main effect or interaction. Refer to Section 7 for the interpretation of effect size.

Conduct Post Hoc Analysis: Once the ANOVA shows significant main effects or interactions, it is important to conduct post hoc tests to pinpoint the specific differences between factor levels or simple effects within interactions. The post hoc test you choose, such as Bonferroni, Tukey’s HSD, or Hochberg’s GT2, will depend on the design and sample sizes.

Interpret Post Hoc Analysis Results: To determine if there are any significant differences between the factor levels or combinations, it is important to review the adjusted p-values or confidence intervals provided by the selected post hoc test. Comparisons or effects with adjusted p-values below the significance level (such as 0.05) are considered statistically significant.

Report the Results: Present the findings from the within-subjects factorial ANOVA, emphasizing the main effects, interactions, effect sizes, and post hoc analysis results. Make sure the results are explained clearly and concisely in relation to the research question and their practical implications.

9.1 Interpreting Main Effects When Interaction is Significant

Interpreting the main effects becomes more complex when a significant interaction is present in a within-subjects factorial ANOVA. A significant interaction suggests that the effect of one factor depends on the level of the other factor(s). In such cases, focusing on interpreting the interaction rather than the main effects alone is essential. Here is how to go about it:

  1. Simple Effects Analysis: Conduct a simple effects analysis to disentangle the interaction. Simple effects analysis involves examining the effect of one factor at each level of the other factor(s). This helps identify specific combinations of factor levels contributing to the significant interaction.

  2. Post Hoc Tests for Simple Effects: If a simple effect is significant, perform post hoc tests to identify which pairwise comparisons are significantly different. Use appropriate post hoc tests like Bonferroni, Tukey’s HSD, or Hochberg’s GT2, depending on your study design and sample sizes.

  3. Graphical Representation: Plot the means of the dependent variable across the levels of one factor, with separate lines for each level of the other factor. This interaction plot will help visualize the nature of the interaction, making it easier to interpret the relationship between factors.

  4. Interpretation: Describe the pattern observed in the interaction plot, paying attention to the differences in the slopes of the lines. Explain how the effect of one factor changes depending on the level of the other factor(s). It is crucial to interpret the main effects in the context of the interaction, as the main effects alone may not provide a complete picture of the relationships between variables.

  5. Report the Results: Report the results of the interaction and the simple effects analysis, including any post hoc tests. Discuss the practical implications of these findings in relation to your research question.

Remember that the main effects should be interpreted with caution in the presence of a significant interaction. The interaction and the simple effects provide more meaningful insights into the relationships between the factors and the dependent variable.
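
As a sketch of the interaction plot described in step 3 above, the code below plots the cell means with one line per condition, again assuming the long_data data frame from the Section 2 sketch.

# Interaction plot: mean score at each time point, one line per condition
library(ggplot2)
library(dplyr)

cell_means <- long_data |>
  mutate(Time = factor(Time, levels = c("Pretest", "Midtest", "Posttest"))) |>
  group_by(Condition, Time) |>
  summarise(mean_score = mean(Score), .groups = "drop")

ggplot(cell_means, aes(x = Time, y = mean_score, group = Condition, color = Condition)) +
  geom_line() +
  geom_point() +
  labs(x = "Time point", y = "Mean performance score", color = "Condition") +
  theme_minimal()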

10 Example

A researcher wanted to investigate the effects of two different motor training interventions on motor performance over time. Specifically, the study aimed to determine whether balance training or strength training led to greater improvements in motor performance over three time points: pretest (T1), midtest (T2), and posttest (T3). To conduct the study, 30 participants were recruited, and each participant completed both the balance training and the strength training interventions, making training type a within-subjects factor. All participants completed motor performance tests at three time points: before the training intervention (pretest), at the middle of the training period (midtest), and after the training intervention (posttest). Motor performance was assessed using the Functional Reach Test (FRT), which measures a participant’s ability to reach forward while maintaining their balance.

10.1 Research question

The research question for this study was: Do balance training and strength training interventions have different effects on motor performance over time, as measured by the FRT?

The researcher performed a within-subjects factorial ANOVA to analyze the data and assess any significant differences in motor performance across the three time points and between the two intervention groups, as well as any interaction effects between time and intervention.

10.2 Data set up

Table 1: Within-subjects ANOVA data setup
ID Balance_T1 Balance_T2 Balance_T3 Strength_T1 Strength_T2 Strength_T3
1 20 23 26 19 22 25
2 25 28 31 24 27 30
3 28 31 34 27 30 33
4 22 24 27 21 23 26
5 27 30 33 26 29 32

10.3 Variables

In this study, there are several variables to consider:

  1. Time (within-subjects factor): This variable represents the three time points at which the motor performance assessments took place: T1 (pretest), T2 (midtest), and T3 (posttest). Time is a within-subjects factor because all participants were assessed at each of these three time points.

  2. Intervention Group (within-subjects factor): This variable represents the two different motor training interventions: balance training and strength training. Intervention group is a within-subjects factor because each participant underwent both interventions.

  3. Motor Performance (dependent variable): This variable represents the participants’ motor performance, as measured by the Functional Reach Test (FRT) scores. The variables include Balance_T1, Balance_T2, Balance_T3, Strength_T1, Strength_T2, and Strength_T3. The FRT scores are the dependent variables because they are expected to change as a result of the motor training interventions and the passage of time.

  4. Participant ID: This variable is a unique identifier for each participant in the study, ensuring that the data for each individual can be accurately tracked and analyzed. Participant ID is a nominal variable, and it is not directly involved in the statistical analysis of the study.

The main objective of this study is to analyze the interaction between Time and Intervention Group on motor performance, as well as any significant main effects of Time or Intervention Group.
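
Although the worked example below uses jamovi and SPSS, the same analysis can be run in R; the sketch below uses the afex package (one of several packages that fit repeated-measures ANOVAs) and assumes the long_data data frame from the Section 2 sketch.

# Two-factor (Condition x Time) within-subjects ANOVA with afex
library(afex)

fit <- aov_ez(id = "ID", dv = "Score", data = long_data,
              within = c("Condition", "Time"))
fit  # F-ratios, sphericity-corrected df, p-values, and generalized eta-squared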

10.4 Hypothesis Statements

\[ \textbf{Null Hypothesis (H0$_T$):} \enspace \mu_{T1} = \mu_{T2} = \mu_{T3} \]

This null hypothesis states that there is no significant difference in the means of motor performance across the three time points (pretest, midtest, and posttest) (i.e., the means are equal).

\[ \textbf{Null Hypothesis (H0$_I$):} \enspace \mu_{I1} = \mu_{I2} \]

This null hypothesis states that there is no significant difference in the means of motor performance between the balance and strength training interventions (i.e., the means are equal).

\[ \textbf{Null Hypothesis (H0$_{TI}$):} \enspace \] There is no significant interaction effect between Time and Intervention on motor performance.

\[ \textbf{Alternative Hypothesis (H1$_T$):} \enspace \mu_{Ti} \neq \mu_{Tj} \text{ for at least one pair } (i, j) \]

This alternative hypothesis states that there is a significant difference in the means of motor performance between at least two of the three time points (pretest, midtest, and posttest) (i.e., the means are not all equal).

\[ \textbf{Alternative Hypothesis (H1$_I$):} \enspace \mu_{I1} \neq \mu_{I2} \]

This alternative hypothesis states that there is a significant difference in the means of motor performance between the balance and strength training interventions (i.e., the means are not equal).

\[ \textbf{Alternative Hypothesis (H1$_{TI}$):} \enspace \] There is a significant interaction effect between Time and Intervention on motor performance.

10.5 Analyzing with jamovi

Jamovi is an open-source statistical software package that allows users to run various statistical analyses, including ANOVA (Analysis of Variance).

Download jamovi file (data+analysis+output)
Download jamovi output

Video tutorial[2]

  1. Install and open Jamovi: Download the latest version of Jamovi from the official website and install it on your computer. Once installed, open Jamovi.
  2. Import data: To import your dataset, click on the three horizontal lines in the top-left corner, navigate to ‘Open,’ and browse your computer to find your dataset file (e.g., a .csv or .xlsx file). Alternatively, you can simply drag and drop the dataset file onto the Jamovi window.
  3. Structure your dataset: For a within-subjects factorial ANOVA, make sure that your dataset is in the wide format, with each participant’s data in one row, and separate columns for each level of the within-subject factors. You should also have a unique identifier for each participant (e.g., Participant ID).
  4. Run the within-subjects factorial ANOVA: In the ‘Analyses’ tab, click on ‘ANOVA,’ and then select ‘Repeated Measures ANOVA.’ This will open the Repeated Measures ANOVA options panel.
  5. Specify factors and levels: In the options panel, specify the number of within-subject factors and their respective levels by clicking the ‘+ Factor’ button. Provide meaningful names for each factor and enter the correct number of levels for each factor.
  6. Assign variables: For each level of each factor, click the ‘+ Measure’ button and assign the appropriate variable (column) from your dataset. The variables you assign should correspond to the columns containing the data for each combination of factor levels.
  7. Set additional options (optional): You can customize the output of your analysis by selecting additional options, such as effect size, post-hoc tests, and plots. You can find these options in the ‘Repeated Measures ANOVA’ panel under the ‘Options’ and ‘Post Hoc Tests’ sections.
  8. Interpret results: After completing the steps above, Jamovi will automatically run the within-subjects factorial ANOVA and display the results in the ‘Results’ tab. Review the output to check for significant main effects and interactions. Pay close attention to the p-values, effect sizes, and any post-hoc tests you selected.

Remember that the specific steps and options may change in newer versions of Jamovi, so be sure to consult the latest documentation and tutorials if you encounter any difficulties. The steps above refer to Version 2.3.21.0.

10.6 Analyzing with SPSS

General steps for running a within-subjects factorial ANOVA in the latest version of IBM SPSS Statistics (v28 as of this writing):

  1. Open SPSS and create a new data set (or open your existing file).
  2. Enter your data into the Data Editor in wide format: one row per participant and one column for each combination of factor levels (e.g., Balance_T1 through Strength_T3).
  3. Click on “Analyze” in the top menu and select “General Linear Model” and then “Repeated Measures” from the dropdown menus.
  4. In the “Repeated Measures Define Factor(s)” dialog, enter a name and number of levels for each within-subjects factor (e.g., Condition with 2 levels and Time with 3 levels), clicking “Add” after each, and then click “Define”.
  5. Assign the columns of your dataset to the “Within-Subjects Variables” slots so that each column matches the correct combination of factor levels.
  6. Click on the “Options” button to request effect size estimates, and on the “EM Means” button to request estimated marginal means with pairwise comparisons (the “Post Hoc” button applies only to between-subjects factors).
  7. Click on the “Plots” button to specify any desired plots (such as interaction plots).
  8. Click on the “OK” button to perform the analysis.
  9. Examine the output tables (including Mauchly’s test of sphericity) and interpret the results.
Note

The specific steps and options may vary depending on the version of SPSS you are using, as well as the specific details of your data and analysis. The steps above are for version 28.

SPSS Syntax

SPSS syntax for the 2x3 within-within factorial ANOVA (Condition x Time) using the example dataset:

GLM Balance_T1 Balance_T2 Balance_T3 Strength_T1 Strength_T2 Strength_T3
  /WSFACTOR=Condition 2 Polynomial Time 3 Polynomial
  /METHOD=SSTYPE(3)
  /PRINT=DESCRIPTIVE ETASQ
  /EMMEANS=TABLES(Condition*Time) COMPARE(Time) ADJ(SIDAK)
  /CRITERIA=ALPHA(.05)
  /WSDESIGN=Condition Time Condition*Time.

A few notes on the syntax:

  • The variables listed after GLM are the repeated-measures columns, ordered so that the levels of the last factor named in WSFACTOR (here, Time) change fastest.

  • WSFACTOR names each within-subjects factor and its number of levels (Condition with 2 levels, Time with 3 levels). Adjust the names, the number of levels, and the variable list to match your own dataset.

  • PRINT=DESCRIPTIVE ETASQ requests descriptive statistics and partial eta-squared.

  • EMMEANS requests estimated marginal means with Sidak-adjusted pairwise comparisons of Time within each Condition.

  • WSDESIGN specifies the within-subjects effects to test: both main effects and their interaction.

10.7 Interpreting the results

Based on the results from the factorial ANOVA and post hoc test tables, the findings can be interpreted as follows for each factor and their interaction:

Condition: There was a significant main effect of Condition, F(1, 29) = 371.20, p < .001. This indicates that there were significant differences between the Balance and Strength conditions in terms of their impact on the outcome variable.

Time: There was a significant main effect of Time, F(2, 58) = 3529.19, p < .001. This indicates that there were significant differences between the different time points (Pretest, Midtest, and Posttest) in terms of their impact on the outcome variable.

Condition x Time Interaction: There was a significant interaction between Condition and Time, F(2, 58) = 5.80, p = 0.005. This indicates that the effect of Time on the outcome variable varied depending on the Condition.

10.8 APA Style

The results for this analysis can be written following the APA Style as shown below.

A Repeated Measures ANOVA was conducted to analyze the data. Results indicated significant main effects of Condition, F(1, 29) = 371.20, p < .001, and Time, F(2, 58) = 3529.19, p < .001. There was also a significant interaction between Condition and Time, F(2, 58) = 5.80, p = .005. Post-hoc tests revealed significant differences between all pairs of time points (Pretest, Midtest, and Posttest) within both the Balance and Strength conditions. Additionally, there were significant differences between the Balance and Strength conditions at all time points. In summary, the study demonstrated that motor performance differed significantly between the Balance and Strength conditions and improved over time in both conditions. The interaction effect also implies that the pattern of improvement over time is not identical between the two conditions.

11 Nonparametric

The closest nonparametric alternative to the within-subjects \(f\)ANOVA is the Friedman test, which is used when the assumptions of normality and sphericity are not met in a repeated measures design. Note that the Friedman test handles one within-subject factor at a time, so in factorial designs it is typically applied to each factor separately (or an aligned rank transform is used instead).

Here’s how to run the Friedman test in jamovi, SPSS, and R:

11.1 jamovi:

In jamovi, there isn’t a direct way to perform the Friedman test for a within-subjects factorial design. However, you can use the “WRS” module, which provides a collection of robust statistical methods, to perform a related test called the aligned rank transform (ART) for nonparametric factorial data. To do this:

  • Install the “WRS” module by clicking on “+ Modules” in the top right corner, search for “WRS”, and then click “Install”.

  • Load your data into jamovi.

  • Go to “Analyses” > “WRS” > “Aligned Rank Transform”, and specify your within-subjects factors and the dependent variable.

The results will be displayed in the output pane, including the test statistics and p-values.

11.2 SPSS:

In SPSS, there’s no direct way to perform the Friedman test for a factorial within-subjects design. One practical approach is to collapse the data over one within-subjects factor and run the Friedman test on the other factor (or to run it separately at each level of the other factor). To do this:

  • Load your data into SPSS.

  • Go to “Transform” > “Compute Variable” to create new variables holding the mean of the dependent variable across the levels of the factor you are collapsing over (for example, the mean of the Balance and Strength scores at each time point).

  • Go to “Analyze” > “Nonparametric Tests” > “Related Samples”.

  • Select “Friedman” as the test type, and add the aggregated variables (one per level of the remaining factor) to the “Test Variables” list.

  • Click “OK” to run the analysis, and the output will include the test statistic and p-value.

11.3 R

In R, you can use the friedman.test() function from the base stats package. The Friedman test handles a single within-subjects factor, so for a factorial design it is applied to one factor at a time (for example, after collapsing over, or subsetting by, the other factor); the PMCMRplus package provides post hoc tests for following up a significant Friedman test. To run the Friedman test:

# Load your data into R (long format: one row per participant per condition)
data <- read.csv("your_data.csv")

# Run the Friedman test for one within-subjects factor, blocking on participants.
# Replace DV, WithinFactor1, and Subject with the variable names in your dataset.
result <- friedman.test(DV ~ WithinFactor1 | Subject, data = data)

# Display the results
print(result)
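
For a genuinely factorial nonparametric analysis in R, the aligned rank transform mentioned in the jamovi section is also available through the ARTool package. The sketch below assumes the long_data data frame from the Section 2 sketch; art() requires the model to include all factors and their interactions, and the grouping variables to be coded as factors.

# Aligned rank transform for a factorial within-subjects design
install.packages("ARTool")
library(ARTool)

long_data$ID        <- factor(long_data$ID)
long_data$Condition <- factor(long_data$Condition)
long_data$Time      <- factor(long_data$Time)

m <- art(Score ~ Condition * Time + (1 | ID), data = long_data)
anova(m)  # F tests on the aligned-and-ranked responses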

References

1. Furtado, O. (2023, April 8). RandomStats - One-Way ANOVA [Blog]. RandomStats. https://drfurtado.github.io/randomstats/posts/04082023-one-way-anova/
2. Repeated-measures ANOVA (Jamovi). (2018, August 6). https://www.youtube.com/watch?v=m5JNwPgiMso

Footnotes

  1. Right-click on the link and save as…↩︎


Citation

BibTeX citation:
@misc{furtado2023,
  author = {Furtado, Ovande},
  title = {Factorial {Analysis} of {Variance:} {Within-Within}},
  date = {2023-04-30},
  url = {https://drfurtado.github.io/randomstats/posts/04282023-fanova-ww},
  langid = {en}
}
For attribution, please cite this work as:
1. Furtado, O. (2023, April 30). Factorial Analysis of Variance: Within-Within. RandomStats. https://drfurtado.github.io/randomstats/posts/04282023-fanova-ww