KIN 610: Quantitative Methods in Kinesiology

Chapter 12: Multiple Correlation and Multiple Regression

Ovande Furtado Jr., PhD.

Professor, Cal State Northridge

2026-03-01

FYI

This presentation is based on the following books. Unless otherwise specified, references are drawn from them.

Main sources:

  • Moore, D. S., Notz, W. I., & Fligner, M. (2021). The basic practice of statistics (9th ed.). W.H. Freeman.
  • Field, A. (2018). Discovering statistics using IBM SPSS statistics (5th ed.). SAGE Publications.
  • Furtado, O., Jr. (2026). Statistics for movement science: A hands-on guide with SPSS (1st ed.). https://drfurtado.github.io/sms

ClassShare App

You may be asked in class to go to the ClassShare App to answer questions.

SPSS Tutorial

Intro Question

  • A researcher wants to predict 20-meter sprint time from two variables: aerobic capacity (VO₂max) and lower-body strength. How can we determine the unique contribution of lower-body strength while accounting for the athlete’s VO₂max?
Click to reveal answer

Answer: We use multiple regression. Unlike bivariate regression, multiple regression enables us to look at the effect of one predictor while holding the other predictor constant, revealing each variable's independent contribution to the outcome.
  • Bivariate Regression models the relationship between ONE predictor and an outcome.
  • Multiple Regression models the relationship between TWO OR MORE predictors and a single outcome.
  • This better reflects the reality of Movement Science, where outcomes are multifactorial.

Learning Objectives

By the end of this chapter, you should be able to:

  • Explain the rationale for multiple regression and when it is appropriate
  • Compute and interpret multiple correlation (\(R\)) and \(R^2_{\text{adj}}\)
  • Distinguish between partial and semipartial (part) correlations and interpret each
  • Build and interpret a multiple regression model with OLS
  • Assess model fit using the Omnibus \(F\)-test
  • Interpret unstandardized (\(b\)) and standardized (\(\beta\)) regression coefficients
  • Detect and address multicollinearity among predictors using VIF
  • Check assumptions specific to multiple regression
  • Report transparently following APA formatting guidelines

Symbols

| Symbol | Name | Pronunciation | Definition |
|---|---|---|---|
| \(\hat{Y}\) | Predicted value | “y hat” | Value of \(Y\) predicted by the regression equation |
| \(b_0\) | Intercept | “b sub 0” | Predicted \(Y\) when all predictors equal 0 |
| \(b_i\) | Regression coefficient | “b sub i” | Change in \(\hat{Y}\) for a one-unit increase in \(X_i\), holding other variables constant |
| \(k\) | Number of predictors | “k” | Count of independent variables |
| \(R\) | Multiple correlation | “capital R” | Correlation between observed \(Y\) and predicted \(\hat{Y}\) |
| \(R^2\) | Coefficient of determination | “R squared” | Proportion of variance in \(Y\) explained by predictors |
| \(R^2_{\text{adj}}\) | Adjusted \(R^2\) | “adjusted R squared” | Proportion of variance explained, corrected for the number of predictors |
| \(\Delta R^2\) | Change in \(R^2\) | “delta R squared” | Increase in \(R^2\) when a new predictor is added |
| \(\beta\) | Standardized coefficient | “beta” | Regression coefficient expressed in standard deviation units |
| \(VIF\) | Variance Inflation Factor | “V-I-F” | Statistic quantifying the severity of multicollinearity |

What is Multiple Regression?

Multiple regression models the relationship between a single continuous outcome variable (\(Y\)) and two or more predictor variables (\(X_1, X_2, \ldots, X_k\))[1,2]. In this course, we focus primarily on Ordinary Least Squares (OLS) regression.

\[ \hat{Y} = b_0 + b_1 X_1 + b_2 X_2 + \cdots + b_k X_k \]

“Holding Constant” Property

Each regression coefficient (\(b_i\)) represents the unique effect of predictor \(X_i\) on \(Y\), holding all other predictors constant. This contrasts with bivariate regression, where the coefficient reflects the total (confounded) relationship. For example, if body mass and leg strength both predict jump height, \(b_\text{mass}\) in multiple regression shows the effect of mass independent of strength.

Why Use It?

  1. Prediction: Build models to predict outcomes based on multiple factors.
  2. Explanation: Identify which variables uniquely contribute to an outcome.
  3. Control: Adjust for confounding variables to isolate specific effects.

Worked Example: Conceptual Preview

A researcher measures 50 athletes and wants to predict vertical jump height (\(Y\)) from two variables: lower-body strength (\(X_1\)) and body mass (\(X_2\)).

\[ \hat{Y} = b_0 + b_1 \times \text{Strength} + b_2 \times \text{Body Mass} \]

Suppose the fitted model is:

\[ \hat{Y} = 12.5 + 0.20 \times \text{Strength} - 0.10 \times \text{Body Mass} \]

Interpretation:

  • Intercept (\(b_0 = 12.5\)): Predicted jump height when strength and body mass are both zero (often not interpretable)
  • Strength (\(b_1 = 0.20\)): Each 1 kg increase in strength → 0.20 cm increase in jump height, holding body mass constant
  • Body mass (\(b_2 = -0.10\)): Each 1 kg increase in body mass → 0.10 cm decrease in jump height, holding strength constant
Figure 1: Predicted vs. Observed jump height for a two-predictor model (simulated data)

Multiple Correlation (\(R\)) & \(R^2\)

Multiple correlation (\(R\)) quantifies the strength of the relationship between the set of predictors and the outcome. Unlike bivariate \(r\), it is never negative: \(0 \le R \le 1\)[1].

\(R^2\) (coefficient of determination) is the proportion of variance in \(Y\) explained by all the predictors together.

\[ R^2 = \frac{\text{SS}_{\text{regression}}}{\text{SS}_{\text{total}}} = 1 - \frac{\text{SS}_{\text{residual}}}{\text{SS}_{\text{total}}} \]

\(R^2\) Inflation Issue:

Adding new predictors never decreases \(R^2\) — even purely random noise variables typically nudge it upward. This creates a risk of overfitting[2].

Solution: Adjusted \(R^2\) (\(R^2_{\text{adj}}\))

Penalizes the model for having too many predictors relative to the sample size[3]:

\[ R^2_{\text{adj}} = 1 - \frac{(1 - R^2)(n - 1)}{n - k - 1} \]

  • Always report Adjusted \(R^2\) when using multiple predictors.
  • If Adjusted \(R^2\) drops when a variable is added, that variable adds little beyond chance.
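To make the penalty concrete, the formula above can be evaluated directly. A minimal Python sketch; the inputs (\(R^2 = .526\), \(n = 60\), \(k = 2\)) mirror this chapter's sprint example, and any other values can be substituted:

```python
# Minimal sketch of the adjusted-R^2 formula.

def adjusted_r2(r2: float, n: int, k: int) -> float:
    """R^2_adj = 1 - (1 - R^2)(n - 1) / (n - k - 1)."""
    return 1 - (1 - r2) * (n - 1) / (n - k - 1)

r2_adj = adjusted_r2(0.526, n=60, k=2)
print(round(r2_adj, 3))  # 0.509 -- matches SPSS's .510 up to rounding
```

Note that the adjusted value is always below the raw \(R^2\) whenever \(k \ge 1\).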

Calculate in SPSS

SPSS reports \(R\), \(R^2\), and Adjusted \(R^2\) in the Model Summary table. See the SPSS Tutorial: Model Summary for details on interpreting this output.

Check Question

When adding an extra variable to a multiple regression model, what happens to the Adjusted \(R^2\) if the new variable is practically useless?
Click to reveal answer

Answer: The Adjusted \(R^2\) will typically decrease. While classical \(R^2\) can only stay the same or go up, Adjusted \(R^2\) applies a penalty based on the number of predictors (\(k\)). A useless variable fails to increase the model's explanatory power enough to offset that penalty, so Adjusted \(R^2\) drops.

Partial vs. Semipartial Correlation

\(R^2\) and adjusted \(R^2\) tell us how well the predictors collectively explain the outcome, but they say nothing about the contribution of any individual predictor. Predictors also often overlap with the outcome and with each other, so we use special correlations to untangle this overlap[2].

Partial Correlation (\(r_{\text{partial}}\))

Measures the relationship between \(X_1\) and \(Y\) after filtering out the influence of other variables.

  • Think of it as the “pure” connection.
  • Answers: “If everyone had the exact same score on \(X_2\), how strongly would \(X_1\) and \(Y\) relate?”
  • Focus: The remaining variance in \(Y\).

Semipartial (Part) Correlation (\(r_{\text{semi}}\))

Measures the unique contribution of \(X_1\) to the total outcome (\(Y\)).

  • Answers: “How much new information does \(X_1\) bring to the whole model?”
  • Focus: The total variance in \(Y\).
  • Squaring it gives the Change in \(R^2\) (\(\Delta R^2\)), making it great for comparing predictors.

Simple Analogy

Imagine the variance in \(Y\) is a pie.

  • Partial: What percentage of the remaining pie did this predictor explain?
  • Semipartial (Part): What percentage of the entire pie did this predictor explain?

SPSS output

SPSS reports part and partial correlations when you check Part and partial correlations under Statistics in the regression dialog. See the SPSS Tutorial: Running the regression for setup instructions.

Visualizing Explained Variance


Understanding the Variance Regions

A = variance in \(Y\) uniquely explained by \(X_1\). B = variance shared by both \(X_1\) and \(X_2\) (confounded). C = variance uniquely explained by \(X_2\). D = unexplained (residual) variance.

| Statistic | Question it answers | Regions |
|---|---|---|
| \(R^2\) | How much of \(Y\) do all predictors together explain? | A + B + C |
| Semipartial \(r^2\) | If I add \(X_1\) last, how much does \(R^2\) go up? (\(= \Delta R^2\)) | A |
| Partial \(r^2\) | After removing \(X_2\)’s influence, how strongly are \(X_1\) and \(Y\) still related? | A ÷ (A + D) |
Figure 2: Venn diagram illustrating variance partitioning in multiple regression with two predictors (\(X_1\) and \(X_2\)).

Check Question

A researcher adds strength (\(X_1\)) to a model that already contains VO₂max (\(X_2\)). The semipartial correlation for strength is \(r_{\text{semi}} = 0.35\). How much additional variance in sprint time does strength uniquely explain?
Click to reveal answer

Answer: \(\Delta R^2 = r^2_{\text{semi}} = 0.35^2 = \mathbf{0.1225}\), or approximately 12.25%. This means strength uniquely explains about 12% of the total variance in sprint time, above and beyond what VO₂max already accounts for. This is the incremental contribution of strength to the model.

Building the Model: Choosing Predictors

Before fitting a multiple regression, careful planning prevents overfitting and ensures interpretable results[2,3].

Selection Criteria:

  1. Theory: Prior research and domain knowledge should guide predictor selection
  2. Parsimony: Fewer predictors reduce overfitting and improve interpretability
  3. Sample size rule of thumb: 10–20 observations per predictor

Example: To predict vertical jump height, theory suggests lower-body strength, body mass, and explosive power as relevant predictors. Including unrelated variables (e.g., shoe size) wastes degrees of freedom.

Avoid “Kitchen Sink” Models

Including every available variable leads to overfitting, multicollinearity, and uninterpretable results. Focus on theoretically motivated predictors[1].

Sample Size Warning

With \(n = 50\) athletes and \(k = 2\) predictors, we have adequate power: \(n/k = 25\). But with \(k = 8\) predictors: \(n/k = 6.25\) — too few observations per predictor, leading to unstable, untrustworthy models.

Assumptions of Multiple Regression

Multiple regression assumes the same conditions as bivariate regression, plus a multicollinearity check[1,2].

1. Linearity

The relationship between \(Y\) and each predictor \(X_i\) must be approximately linear. → Check: scatterplot of \(Y\) vs each \(X_i\) and residual plot.

2. Independence

Each observation must be independent (one data point per participant). → Check: study design.

3. Homoscedasticity

Variance of residuals should be constant across all predicted values. → Check: ZRESID vs. ZPRED residual plot (no funnel shape).

4. Normality of Residuals

Residuals should be approximately normally distributed (for inference). Less critical for large samples (CLT). → Check: Normal P-P plot of residuals.

5. No Extreme Outliers

Outliers with high leverage and large residuals can distort coefficients. → Check: Cook’s distance (values > 1 warrant investigation).

6. No Multicollinearity (NEW for multiple regression)

Predictors should not be so highly intercorrelated that their effects cannot be separated. → Check: VIF (> 10 = severe), Tolerance (< 0.10 = problem).

SPSS Diagnostics

SPSS generates all assumption diagnostics automatically. See the SPSS Tutorial: Checking Assumptions for step-by-step procedures.

Fitting the Model with OLS

Once predictors are chosen and assumptions checked, the model is estimated using Ordinary Least Squares (OLS) — the default method in SPSS[2].

OLS finds the set of coefficients (\(b_0, b_1, \ldots, b_k\)) that minimizes the sum of squared residuals:

\[ \text{Minimize: } \sum_{i=1}^{n} (Y_i - \hat{Y}_i)^2 \]

Why squared residuals?

  • Makes all errors positive (positive and negative errors don’t cancel out)
  • Penalizes large errors more heavily than small ones
  • Has a clean analytical solution via matrix algebra
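To see what "minimize the sum of squared residuals" produces, here is a pure-Python sketch that solves the OLS normal equations (\(X'X\,b = X'y\)) for a simulated two-predictor dataset. The variable names and true coefficients are illustrative only, not the chapter's data; SPSS does all of this for you:

```python
# OLS for two predictors via the normal equations, with a check that the
# fitted coefficients really do minimize the sum of squared residuals.
import random

def solve3(A, v):
    """Gauss-Jordan elimination for a 3x3 system (enough for b0, b1, b2)."""
    M = [row[:] + [v[i]] for i, row in enumerate(A)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(3):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * b for a, b in zip(M[r], M[col])]
    return [M[i][3] / M[i][i] for i in range(3)]

random.seed(2)
n = 200
x1 = [random.gauss(100, 15) for _ in range(n)]        # "strength" (illustrative)
x2 = [random.gauss(75, 10) for _ in range(n)]         # "body mass" (illustrative)
y = [12.5 + 0.20 * a - 0.10 * b + random.gauss(0, 2)  # true model + noise
     for a, b in zip(x1, x2)]

# Normal equations X'X b = X'y, with design columns [1, x1, x2]
X = [[1.0, a, b] for a, b in zip(x1, x2)]
XtX = [[sum(row[i] * row[j] for row in X) for j in range(3)] for i in range(3)]
Xty = [sum(row[i] * yi for row, yi in zip(X, y)) for i in range(3)]
b0, b1, b2 = solve3(XtX, Xty)
print(round(b1, 2), round(b2, 2))  # should land near the true 0.20 and -0.10

yhat = [b0 + b1 * a + b2 * c for a, c in zip(x1, x2)]
sse = sum((yi - yh) ** 2 for yi, yh in zip(y, yhat))
# Nudging any coefficient away from the OLS solution increases the SSE:
worse = sum((yi - (b0 + (b1 + 0.01) * a + b2 * c)) ** 2
            for yi, a, c in zip(y, x1, x2))
assert worse > sse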

In SPSS: Analyze → Regression → Linear with Method: Enter (simultaneous entry — all predictors added at once).

Figure 3: OLS regression minimizes the sum of squared vertical distances (residuals) from each point to the line

SPSS Setup

See the SPSS Tutorial: Running the regression for full setup including requesting confidence intervals, part/partial correlations, collinearity diagnostics, and residual plots.

Worked Example: SPSS Output Walkthrough

Using the SMS core dataset (\(n = 60\), pre-training): predicting 20-m sprint time (\(Y\)) from VO₂max (\(X_1\)) and Strength (\(X_2\)).

Model Summary:

| Statistic | Value |
|---|---|
| \(R\) | .725 |
| \(R^2\) | .526 |
| Adjusted \(R^2\) | .510 |
| Std. Error | .248 s |

ANOVA:

\(F(2, 57) = 31.64, p < .001\)

The overall model is statistically significant — knowing an athlete’s VO₂max and strength produces meaningfully better predictions than the null model, which simply predicts \(\bar{Y}\) for everyone (see the Omnibus F-Test slide for a full explanation).

Coefficients Table:

| Predictor | \(b\) | \(SE\) | \(\beta\) | \(t\) | \(p\) | VIF |
|---|---|---|---|---|---|---|
| Constant | 5.752 | .347 | — | 16.58 | < .001 | — |
| VO₂max | −.025 | .005 | −.549 | −4.92 | < .001 | 1.26 |
| Strength | −.012 | .003 | −.434 | −3.88 | .001 | 1.26 |

Interpretation:

  • VO₂max: Each 1 mL·kg⁻¹·min⁻¹ increase → 0.025 s decrease in sprint time, controlling for strength
  • Strength: Each 1 kg increase → 0.012 s decrease in sprint time, controlling for VO₂max

Compare to bivariate model

In bivariate regression, VO₂max alone explained \(R^2 = .414\) (41.4%). Adding strength increases this to \(R^2 = .526\) (52.6%) — a gain of 11.2 percentage points. See the SPSS Tutorial: Model Summary and Coefficients for full output interpretation.

Check Question

Using the equation \(\hat{y} = 5.752 - 0.025 \times \text{VO}_2\text{max} - 0.012 \times \text{Strength}\), predict the 20-m sprint time for an athlete with VO₂max = 45 mL·kg⁻¹·min⁻¹ and Strength = 80 kg.
Click to reveal answer

Answer: Calculation:
\(\hat{y} = 5.752 - 0.025(45) - 0.012(80) = 5.752 - 1.125 - 0.960 = \mathbf{3.67 \text{ s}}\).

Compare to bivariate:
Using only VO₂max: \(\hat{y} = 5.174 - 0.033(45) = 3.69\) s. The multiple regression model refines this estimate by also accounting for strength. Both predictor values fall within the observed ranges, so this is a valid (not extrapolated) prediction.

Interpreting Regression Coefficients

Each regression coefficient carries practical or clinical meaning. Let \(b_i\) be the coefficient of \(X_i\).

Unstandardized Coefficient (\(b\))

  • Represented in the measurement’s original units.
  • “A 1-unit increase in \(X\) associates with a \(b\)-unit change in \(Y\), holding all other predictors constant”.
  • Excellent for real-world predictions (e.g., sprint time in seconds, jump height in cm)[1].

Example: \(b_{\text{VO}_2\text{max}} = -0.025\) means each 1 mL·kg⁻¹·min⁻¹ increase in VO₂max is associated with a 0.025 s decrease in sprint time, holding strength constant.

Standardized Coefficient (\(\beta\))

  • Expressed in standard deviation (SD) units.
  • Removes metric scales to allow direct magnitude comparison between variables.
  • “A 1-SD increase in \(X\) associates with a \(\beta\)-SD change in \(Y\)”.
  • Helps answer: “Which variable possesses the strongest unique effect within the model?”

Example: \(\beta_{\text{VO}_2\text{max}} = -.549\) vs. \(\beta_{\text{Strength}} = -.434\). VO₂max has the stronger unique association with sprint time.

Always Report Both

Report unstandardized coefficients (\(b\)) for practical interpretation and prediction, and standardized coefficients (\(\beta\)) for comparing relative importance of predictors with different scales[2].
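The conversion between the two is simple: \(\beta_i = b_i \times (SD_{X_i} / SD_Y)\). A tiny sketch with hypothetical SD values (not from the chapter's dataset):

```python
# Sketch of the b -> beta conversion; all numbers are hypothetical.

def standardized_beta(b: float, sd_x: float, sd_y: float) -> float:
    """A 1-SD change in X corresponds to a beta-SD change in Y."""
    return b * sd_x / sd_y

# Hypothetical: strength slope b = 0.20 cm per kg, SD(strength) = 12 kg,
# SD(jump height) = 6 cm
beta = standardized_beta(0.20, sd_x=12, sd_y=6)
print(round(beta, 2))  # 0.4 -- a 1-SD strength increase predicts a 0.4-SD higher jump
```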

Statistical Significance of Coefficients

Each individual coefficient is tested with a t-test to determine whether it significantly contributes to the model[1,2].

Hypotheses:

  • H₀: \(b_i = 0\) (predictor has no unique effect)
  • H₁: \(b_i \ne 0\) (predictor has a unique effect)

Decision:

  • \(p < .05\): Reject H₀ → predictor contributes significantly to the model
  • \(p \ge .05\): Fail to reject H₀ → predictor does not add significant unique variance

From our worked example:

  • VO₂max: \(t(57) = -4.92, p < .001\) ✓ significant
  • Strength: \(t(57) = -3.88, p = .001\) ✓ significant

Statistical Significance ≠ Practical Importance

A predictor can be statistically significant (p < .05) yet practically trivial (small coefficient). With very large samples, even tiny effects become significant. Always examine:

  • Confidence intervals: Do they include only trivial effects?
  • Effect size (\(\beta\)): Is the standardized coefficient large enough to matter?
  • Context: Would a change of this magnitude influence outcomes in practice?

Why the VO₂max Slope Changed

A key insight: the VO₂max slope changed between the bivariate and multiple regression models.

| Model | VO₂max slope (\(b\)) |
|---|---|
| Bivariate (VO₂max only) | \(-0.033\) |
| Multiple (VO₂max + Strength) | \(-0.025\) |

Why did it decrease?

In the bivariate model, VO₂max’s coefficient captured both:

  1. Its own direct effect on sprint time
  2. Variance shared with strength (because VO₂max and strength correlate at \(r = .452\))

Once strength is included in the model, the shared variance is attributed to strength, and VO₂max’s coefficient reflects only its unique contribution.

Figure 4: Shared vs. unique variance: When both predictors are in the model, each coefficient reflects only the unique (non-overlapping) portion.

Evaluating the Model: Omnibus F-Test

The F-test evaluates whether the regression model as a whole predicts \(Y\) significantly better than the null model[1,2].

What is the null model?

The null model is an intercept-only model — it contains no predictors and simply predicts the sample mean (\(\bar{Y}\)) for every observation. For our sprint data, the null model predicts \(\bar{Y} = 3.89\) s for every athlete, regardless of their VO₂max or strength. The F-test asks: does including our predictors improve on that baseline?

Hypotheses:

  • H₀: All regression coefficients = 0 (\(R^2 = 0\))
  • H₁: At least one predictor contributes significantly (\(R^2 > 0\))

F-statistic:

\[ F = \frac{R^2 / k}{(1 - R^2) / (n - k - 1)} \]

Where \(k\) = number of predictors, \(n\) = sample size.

From our worked example:

\[ F(2, 57) = 31.64, \; p < .001 \]

  • df Regression = 2: One df per predictor (\(k = 2\))
  • df Residual = 57: \(n - k - 1 = 60 - 2 - 1 = 57\)

Decision: \(p < .001\) → Reject H₀. Including VO₂max and strength significantly improves predictions over the null model (\(\bar{Y}\) for everyone).
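The reported \(F\) can be reproduced from \(R^2\), \(n\), and \(k\) with the formula above. A quick Python check using the sprint model's numbers; expect a small rounding gap versus the SPSS value of 31.64, since \(R^2 = .526\) is itself rounded:

```python
# Reproducing the omnibus F from R^2, n, and k.

def f_statistic(r2: float, n: int, k: int) -> float:
    """F = (R^2 / k) / ((1 - R^2) / (n - k - 1)), with df = (k, n - k - 1)."""
    return (r2 / k) / ((1 - r2) / (n - k - 1))

F = f_statistic(0.526, n=60, k=2)
print(round(F, 2))  # 31.63
```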

The F-test answers ONE question

The omnibus F-test tells you the model as a whole is significant. It does NOT tell you which individual predictors are significant. Use the \(t\)-tests in the Coefficients table for that.

SPSS Output

The ANOVA table in SPSS provides \(F\), \(df\), and \(p\). See the SPSS Tutorial: ANOVA table for interpretation.

Multicollinearity: The Trap of Complexity

Multicollinearity occurs when two or more predictor variables are highly correlated with each other[1,2].

Why is it a problem?

  • Unstable Coefficients: Slight shifts in data cause large swings in \(b\) values.
  • Large Standard Errors: Confidence intervals blow up, lowering statistical power.
  • Nonsignificant Predictors: Truly important predictors may appear nonsignificant due to shared variance.
  • Loss of Interpretation: Because variables move together, the model cannot distinguish which one is truly responsible for the outcome.

Detecting Multicollinearity:

  1. Variance Inflation Factor (VIF):
    • VIF < 5: acceptable (reported directly in SPSS output).
    • VIF 5–10: moderate concern.
    • VIF > 10: severe multicollinearity (take action).
  2. Tolerance: \(1 / \text{VIF}\). Tolerance < 0.10 signals a problem.
  3. Correlation Matrix: flag predictor pairs with \(|r| > .80\).
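With exactly two predictors, the VIF can be computed by hand: each auxiliary \(R^2\) is just the squared predictor-predictor correlation, so both predictors share one VIF, \(1 / (1 - r^2)\). A sketch using the chapter's VO₂max-strength correlation of \(r = .452\):

```python
# Hand-computing VIF and tolerance for the two-predictor case.

def vif_two_predictors(r12: float) -> float:
    """VIF_j = 1 / (1 - R^2_j); with k = 2, R^2_j = r12^2 for both predictors."""
    return 1 / (1 - r12**2)

vif = vif_two_predictors(0.452)
tolerance = 1 / vif
print(round(vif, 2), round(tolerance, 2))  # 1.26 (as in the coefficients table) and 0.8
```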

SPSS VIF

SPSS reports VIF whenever you check Collinearity diagnostics under Statistics. Always request it — it is a required diagnostic for multiple regression. See the SPSS Tutorial: Collinearity Diagnostics.

Worked Example: Detecting Multicollinearity

A regression model predicts injury risk from three biomechanical variables.

VIF Values:

| Predictor | VIF | Diagnosis |
|---|---|---|
| Knee flexion angle (\(X_1\)) | 2.3 | ✓ No concern |
| Hip flexion angle (\(X_2\)) | 12.8 | ⚠ Severe |
| Ankle dorsiflexion (\(X_3\)) | 11.5 | ⚠ Severe |

Interpretation:

  • Knee flexion: VIF < 5 → no multicollinearity concern
  • Hip and ankle: VIF > 10 → severe multicollinearity. These two measures overlap too much for the model to separate their effects.

Action Plan:

  1. Check the predictor correlation: If \(r > .85\) between hip and ankle angles, the two are near-redundant.

  2. Remove one predictor: Retain the one with stronger theoretical justification or better measurement properties.

  3. Create a composite: Combine hip and ankle into a single “lower-limb flexion” variable via averaging z-scores.

  4. Increase sample size: More data can stabilize estimates (but won’t fix fundamental redundancy).

Compare: In our sprint example, VIF = 1.26 for both predictors — well within acceptable range. The \(r = .452\) between VO₂max and strength is moderate — not a problem.

Check Question

A researcher observes VIF = 8.5 for body mass and VIF = 9.2 for BMI in a regression model predicting sprint performance. What is the problem, and what should the researcher do?
Click to reveal answer

Answer: Moderate-to-severe multicollinearity. Body mass and BMI are nearly redundant — BMI is calculated directly from body mass (BMI = mass / height²). Their correlation is extremely high, making VIF approach 10.

Solution: Remove one predictor. Since BMI already incorporates mass, keep whichever variable is more theoretically relevant to the research question. Including both inflates standard errors and makes coefficient interpretation unreliable.

Variable Selection: Theory-Driven vs. Stepwise

We emphasize OLS regression with predictors chosen through theory and domain knowledge.

Theory-Driven Selection (Recommended):

  • Select predictors based on prior research, domain expertise, and theoretical frameworks
  • Produces interpretable, replicable models
  • Protects against overfitting
  • Use Method: Enter in SPSS (simultaneous entry)

Stepwise Methods (Forward, Backward, Mixed):

  • Software selects variables based on arbitrary \(p\)-value thresholds
  • Capitalizes on sample-specific noise
  • Inflated Type I error rates
  • Results often fail to replicate across samples

Why we avoid Stepwise Regression

Stepwise regression selects predictors that fit noise rather than true patterns. In a sample of 50 athletes, stepwise might select leg length as the best predictor, but in a new sample, it might select arm length — neither replicable nor interpretable[2,3].

Use stepwise methods only for exploratory analysis, and validate in independent samples.

Other Methods

For deeper coverage of all-subsets regression, AIC/BIC criteria, and cross-validation, please consult SMS Chapter 12: Variable Selection Methods.

Making Predictions

Using our regression equation:

\[\hat{y} = 5.752 - 0.025 \times \text{VO}_2\text{max} - 0.012 \times \text{Strength}\]

Example Prediction:

Predict 20-m sprint time for an athlete with VO₂max = 45 mL·kg⁻¹·min⁻¹ and Strength = 80 kg:

\[\hat{y} = 5.752 - 0.025(45) - 0.012(80)\] \[= 5.752 - 1.125 - 0.960 = \mathbf{3.67 \text{ s}}\]

Extrapolation

Never predict outside the observed range of predictor values. A model built on athletes with VO₂max 30–55 and Strength 55–100 should not be used to predict for an athlete with VO₂max = 70 or Strength = 30 — the linear relationship may not hold beyond the observed range[1].
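One way to respect this warning in practice is to build the range check into the prediction itself. A hypothetical sketch — the coefficients and observed ranges are the ones quoted in this chapter, but the function and dictionary names are invented for illustration:

```python
# Prediction with a built-in extrapolation guard (names are hypothetical).

OBSERVED_RANGES = {"vo2max": (30, 55), "strength": (55, 100)}

def predict_sprint(vo2max: float, strength: float) -> float:
    """yhat = 5.752 - 0.025 * VO2max - 0.012 * Strength (seconds)."""
    for name, value in (("vo2max", vo2max), ("strength", strength)):
        lo, hi = OBSERVED_RANGES[name]
        if not lo <= value <= hi:
            raise ValueError(f"{name}={value} is outside the observed range {lo}-{hi}")
    return 5.752 - 0.025 * vo2max - 0.012 * strength

print(round(predict_sprint(45, 80), 2))  # 3.67 -- the worked example's answer
# predict_sprint(70, 80) raises ValueError: VO2max = 70 was never observed
```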

SPSS Predictions

In SPSS, save predicted values via Analyze → Regression → Linear → Save → Unstandardized Predicted Values. SPSS adds a new column (PRE_1) with model-predicted values for each case. See the SPSS Tutorial: Making Predictions.

Residual Plots and Assumption Diagnostics

Always examine residual plots before trusting regression results[1,2].

Reading the ZRESID vs. ZPRED Plot:

| Pattern | Diagnosis |
|---|---|
| 1. Random scatter | ✓ Assumptions met: errors are random (linearity) with constant spread (homoscedasticity). |
| 2. Funnel shape | Heteroscedasticity: spread of errors changes, violating constant variance. |
| 3. Curved pattern | Nonlinearity: the linear model missed a curved relationship. |
| 4. Outliers | Influential points: extreme values that might distort the model. |
Figure 5: Visual guide to common residual plot patterns

SPSS Residual Plots

SPSS generates these plots automatically when you request *ZRESID on the Y-axis and *ZPRED on the X-axis under Plots. See the SPSS Tutorial: Checking Assumptions.

Reporting Results in APA Style

Report results comprehensively and be transparent about the model's limitations[2].

“A multiple linear regression was conducted to examine whether aerobic capacity (VO₂max) and lower-body strength jointly predicted 20-meter sprint time in collegiate athletes (N = 60, pre-training). The overall model was statistically significant, \(F(2, 57) = 31.64\), \(p < .001\), \(R^2 = .526\), adjusted \(R^2 = .510\), indicating that the two predictors together explained 52.6% of the variance in sprint time. Both predictors made significant unique contributions: VO₂max (\(b = -0.025\), 95% CI \([-0.036, -0.015]\), \(\beta = -.549\), \(p < .001\)) and lower-body strength (\(b = -0.012\), 95% CI \([-0.018, -0.006]\), \(\beta = -.434\), \(p = .001\)). No multicollinearity concerns were identified (VIF = 1.26 for both predictors).”

Always Include:

  • Omnibus \(F\)-test with both degrees of freedom
  • Both \(R^2\) and Adjusted \(R^2\) (preferred as primary summary)
  • Unstandardized (\(b\)) and standardized (\(\beta\)) coefficients per variable
  • 95% Confidence Intervals for each \(b\)
  • VIF values to document multicollinearity was checked
  • Use \(p < .001\) when SPSS displays .000

SPSS Formatting

See the SPSS Tutorial: Reporting results in APA style for the full APA write-up and an optional regression table format.

Common Misconceptions

Misconception 1

❌ incorrect: “\(b = -0.025\) for VO₂max means VO₂max decreases sprint time by 0.025 s.”

✅ correct: Each slope in multiple regression is a partial coefficient — it represents the unique effect of that predictor after removing the influence of all other predictors in the model. Always include the qualifier: “controlling for strength” or “holding strength constant.”

Misconception 2

❌ incorrect: “The VO₂max coefficient of \(b = -0.025\) in the multiple model and \(b = -0.033\) in the bivariate model are contradictory.”

✅ correct: They are measuring different things. The bivariate \(b\) captures both direct and indirect (shared with strength) effects. The multiple regression \(b\) isolates the unique effect. The change is expected and normal.

Misconception 3

❌ incorrect: “I can compare unstandardized coefficients to see which predictor is more important: \(|-0.025| > |-0.012|\), so VO₂max matters more.”

✅ correct: Unstandardized coefficients are in different units (mL·kg⁻¹·min⁻¹ vs. kg). Use standardized \(\beta\) to compare relative importance: \(|-.549| > |-.434|\), confirming VO₂max has the stronger unique association.

Misconception 4

❌ incorrect: “\(R^2 = .526\), so we fully understand sprint performance.”

✅ correct: \(R^2 = .526\) means the model explains about half the variance. The other ~47% is due to predictors not included (technique, motivation, muscle fiber composition, etc.). Report \(R^2\) as a measure of model fit, not causal completeness.

Limitations and Cautions

1. Correlation ≠ Causation

Multiple regression identifies associations, not causal relationships. Even after controlling for confounders, omitted variables and reverse causation can bias interpretations[1].

2. Overfitting

Models with many predictors fit sample noise, producing inflated \(R^2\) and poor generalization. Mitigation: use adjusted \(R^2\), cross-validation, and theory-driven selection[2].

3. Sample Size

Rule of thumb: 10–20 observations per predictor. Smaller samples yield unstable, untrustworthy models.

4. Extrapolation

Predictions outside the range of observed predictors are unreliable. A model built on athletes aged 18–25 should not predict performance in 60-year-olds.

5. Model Validation

Ideally, validate your model on an independent sample or via cross-validation. Shrinkage in \(R^2\) from training to test data indicates overfitting.

Responsible Use

Ensure adequate sample size, theory-driven selection, rigorous assumption checking, and transparent reporting of diagnostics and limitations. For advanced validation strategies, see SMS Chapter 12: Model Validation.

Workflow Summary

Use this sequence when building a multiple regression model[1,2]:

| Step | Action | Tool/Check |
|---|---|---|
| 1 | State the research question | Define outcome (\(Y\)) and candidate predictors (\(X_1, \ldots, X_k\)) |
| 2 | Select predictors based on theory | Prior research, domain knowledge, parsimony |
| 3 | Screen data | Scatterplot matrix, correlation matrix, outlier checks |
| 4 | Check assumptions | Linearity, independence, homoscedasticity, normality, no multicollinearity (VIF) |
| 5 | Fit the model (OLS, Method: Enter) | \(\hat{Y} = b_0 + b_1 X_1 + \cdots + b_k X_k\) |
| 6 | Evaluate model fit | \(R^2\), Adjusted \(R^2\), \(F\)-test |
| 7 | Interpret coefficients | \(b\) for practical effects, \(\beta\) for relative importance, CIs for precision |
| 8 | Report transparently (APA) | Model summary, coefficients, VIF, limitations |

Important

The goal is not just numbers — it is understanding the joint influence of multiple factors and communicating findings honestly, including their limitations.

Key Takeaways

  1. Multiple Regression allows multiple predictors in a single model, identifying the unique variance contributed by each factor while holding others constant.
  2. Adjusted \(R^2\) is crucial because it accounts for the number of predictors, helping prevent overfitting.
  3. Partial and Semipartial Correlations disentangle shared from unique variance; squared semipartial = \(\Delta R^2\).
  4. Multicollinearity inflates standard errors and produces unstable results. Always check VIF — values > 10 are severe.
  5. Use theory-driven OLS selection (Method: Enter). Avoid stepwise methods for confirmatory research.
  6. Unstandardized coefficients (\(b\)) report real-world impacts; Standardized coefficients (\(\beta\)) compare relative strengths across different scales.
  7. Always check assumptions (linearity, independence, homoscedasticity, normality, no multicollinearity) before trusting results.
  8. Correlation does not imply causation, even with multiple predictors controlled.

Important

Multiple regression is powerful — but responsible use requires knowing its limits, checking assumptions, and reporting transparently.

Practice Questions

  1. Why must we report Adjusted \(R^2\) instead of standard \(R^2\) when using multiple predictors?
  2. A multiple regression evaluates strength and power against jump height with a VIF of 14.2 for both. What problem does this indicate and what actions will you take?
  3. Describe what “holding constant” means in the context of interpreting a regression coefficient in multiple regression.
  4. If a variable adds \(0.15\) to the model's \(R^2\), does this value equal the square of the variable's partial correlation or of its semipartial (part) correlation?
  5. In the sprint example, VO₂max’s coefficient changed from \(b = -0.033\) (bivariate) to \(b = -0.025\) (multiple regression). Explain why.
  6. A researcher reports \(F(3, 46) = 12.5, p < .001\) but does not report individual predictor tests. What information is missing?
  7. When would you suspect multicollinearity, and what would you check first?
  8. What does a funnel-shaped residual plot indicate, and how might you address it?

Exit Ticket: Multiple Regression Activity

Your instructor will provide you with a link to the activity in Canvas

References

1. Moore, D. S., McCabe, G. P., & Craig, B. A. (2021). Introduction to the practice of statistics (10th ed.). W. H. Freeman and Company.
2. Field, A. (2013). Discovering statistics using IBM SPSS statistics. Sage.
3. Weir, J. P., & Vincent, W. J. (2021). Statistics in kinesiology (5th ed.). Human Kinetics.
4. Furtado, O., Jr. (2026). Statistics for movement science: A hands-on guide with SPSS (1st ed.). https://drfurtado.github.io/sms/