KIN 610: Quantitative Methods in Kinesiology

Chapter 7: The Normal Distribution

Ovande Furtado Jr., PhD.

Professor, Cal State Northridge

2026-02-04

FYI

This presentation is based on the following books. Unless otherwise specified, all references come from these sources.

Main sources:

  • Moore, D. S., Notz, W. I., & Fligner, M. (2021). The basic practice of statistics (9th ed.). W.H. Freeman.
  • Field, A. (2018). Discovering statistics using IBM SPSS statistics (5th ed.). SAGE Publications.
  • Furtado, O., Jr. (2026). Statistics for movement science: A hands-on guide with SPSS (1st ed.). https://drfurtado.github.io/sms

ClassShare App

You may be asked in class to go to the ClassShare App to answer questions.

SPSS Tutorial

Intro Question

  • If you imagine measuring something like height, where do you think most of the values would fall relative to the mean (average)? Near it, far from it, or evenly spread out?
Click to reveal answer

Answer: Most values would cluster around the mean, with fewer values as you move further away.
  • The normal distribution (or bell curve) is a fundamental concept in statistics that describes how data are distributed around a central value. In this chapter, we’ll explore the properties of the normal distribution, how to assess whether our data follow this pattern, and why it matters for statistical analysis in movement science.

Interactive Demonstration: Sampling from a Normal Distribution

Try generating samples from a normal distribution to see how the histogram changes with different sample sizes and parameters. This demonstrates the concept that most values cluster around the mean.

Interactive Demo: Click here to open the interactive normal distribution sampling demo

Note: The interactive demo opens in a new tab/window. It starts with kinesiology-relevant defaults (μ = 50 cm, σ = 10 cm, like jump height) but allows you to adjust all parameters to see how the histogram changes.

Learning Objectives

By the end of this chapter, you should be able to:

  • Describe the properties of the normal distribution and its role in statistical inference
  • Use z-scores to compute probabilities and percentiles for normal distributions
  • Interpret skewness and kurtosis as measures of distributional shape
  • Compute and interpret z-skew and z-kurtosis to assess statistical significance
  • Assess normality using visual methods (histograms, Q-Q plots) and formal tests
  • Recognize common patterns of non-normality in Movement Science data
  • Make informed decisions about when departures from normality are consequential
  • Integrate multiple lines of evidence (visual + formal) when conflicts arise

Symbols

| Symbol | Name | Pronunciation | Definition |
|---|---|---|---|
| \(\mu\) | Population mean | "myoo" | Center of the distribution |
| \(\sigma\) | Population standard deviation | "sig-ma" | Spread of the distribution |
| \(z\) | Z-score | "zee" | \((x - \mu) / \sigma\) |
| \(\Phi(z)\) | Cumulative probability | "phi of z" | Area under the curve to the left of z |
| \(P(X \leq x)\) | Probability | "probability of X less than or equal to x" | Probability that X is less than or equal to x |
| Skewness | Skewness | "skew-ness" | Measure of asymmetry; 0 for symmetric |
| Kurtosis | Kurtosis | "kur-toh-sis" | Measure of tail weight; 0 for normal (excess kurtosis) |
| \(z_{\text{skew}}\) | Z-score for skewness | "zee skew" | Skewness / SE of skewness |
| \(z_{\text{kurt}}\) | Z-score for kurtosis | "zee kurt" | Kurtosis / SE of kurtosis |
| \(Q_1\) | First quartile | "kyoo one" | 25th percentile |
| \(Q_3\) | Third quartile | "kyoo three" | 75th percentile |

Introduction: The Bell Curve

The normal distribution (also called the bell curve or Gaussian distribution) is a continuous probability distribution that is central to statistical theory and practice[1].

  • Many inferential procedures (t-tests, ANOVA, regression) assume normality of errors or sampling distributions
  • Provides a mathematical model connecting z-scores to probabilities
    • It is the basis for the Central Limit Theorem, which justifies using normal-based methods for large samples regardless of the underlying population distribution
  • Serves as a reference model for interpreting performance and establishing normative ranges[2]
    • Example: Vertical jump heights in a homogeneous athletic population often approximate normality, allowing coaches to identify typical vs. exceptional performance.
Figure 1: Central Limit Theorem demonstration: Sampling distributions of means from a right-skewed population

Important

The normal distribution is a theoretical ideal, not a universal law of nature. Real movement data often deviate from normality in meaningful ways[3,4].

Key Terms

Understanding terminology is essential for normality assessment[1,7]:

  • Normal distribution: Symmetric, bell-shaped probability distribution defined by mean (μ) and SD (σ)
  • Standard normal distribution: Normal distribution with μ = 0 and σ = 1 (z-distribution)
  • 68-95-99.7 rule: ~68% within μ ± σ, ~95% within μ ± 2σ, ~99.7% within μ ± 3σ
  • Skewness: Measure of asymmetry (0 = symmetric, positive = right tail, negative = left tail)
  • Kurtosis: Measure of tail weight (0 = normal-like, positive = heavy tails, negative = light tails)
  • Q-Q plot: Quantile-quantile plot comparing observed data to theoretical normal distribution
  • Shapiro-Wilk test: Formal statistical test of normality (H₀: data are normal)

Properties of the Normal Distribution

The normal distribution has five defining characteristics[1]:

  1. Symmetry: Perfectly symmetric around μ; mean = median = mode
  2. Unimodal: Single peak at the mean
  3. Asymptotic tails: Tails extend to ±∞, approaching but never touching zero
  4. Total area = 1: Represents 100% probability
  5. 68-95-99.7 rule: Empirical rule for quick probability estimates
Figure 3: Standard normal distribution showing the 68-95-99.7 rule
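The 68-95-99.7 rule can be checked directly with Python's standard library. This is a sketch for illustration only (the course software is SPSS); `statistics.NormalDist` is the stdlib normal model:

```python
from statistics import NormalDist

# Standard normal distribution (mu = 0, sigma = 1)
z = NormalDist(mu=0, sigma=1)

# Area within +/- 1, 2, and 3 SDs of the mean
within_1 = z.cdf(1) - z.cdf(-1)   # ~0.6827
within_2 = z.cdf(2) - z.cdf(-2)   # ~0.9545
within_3 = z.cdf(3) - z.cdf(-3)   # ~0.9973

print(f"{within_1:.4f}, {within_2:.4f}, {within_3:.4f}")
```

The three printed areas reproduce the empirical rule without any table lookup.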

Check Question

Check your understanding: What is the most distinctive property of the normal distribution?
Click to reveal answer

Answer: The normal distribution is perfectly symmetric (mean = median = mode), creating a balanced bell-shaped curve.

Symmetry of the Normal Distribution

Z-Scores and Probability

Standard normal distribution (μ = 0, σ = 1) allows us to use a single z-table for any normal distribution[1]. See Equation 1 for the standard z-score calculation:

\[ z = \frac{x - \mu}{\sigma} \]

Equation 1: Standard z-score formula

Example: Vertical jump heights: μ = 45 cm, σ = 7 cm. What proportion jump higher than 52 cm?

Step 1: \(z = \frac{52 - 45}{7} = 1.00\)

Step 2: Cumulative probability for z = 1.00 is 0.8413 (84.13% below)

Step 3: Upper tail: \(P(X > 52) = 1 - 0.8413 = 0.1587\) (15.87% jump higher)

How did I get the percentile? From the cumulative probability table (CPT), which gives the area under the standard normal curve to the left of each z-score.
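The three steps above can be sketched in Python, with the stdlib `statistics.NormalDist` standing in for a printed z-table:

```python
from statistics import NormalDist

mu, sigma = 45, 7      # vertical jump: mean 45 cm, SD 7 cm
x = 52                 # score of interest

# Step 1: z-score
z = (x - mu) / sigma               # (52 - 45) / 7 = 1.00

# Step 2: cumulative probability (area to the left of z)
p_below = NormalDist().cdf(z)      # ~0.8413

# Step 3: upper tail
p_above = 1 - p_below              # ~0.1587

print(f"z = {z:.2f}, P(X > 52) = {p_above:.4f}")
```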

Real-World Context: Professional Basketball

A study of 53 professional basketball players reported that Spanish League (LEB) players had a mean Countermovement Jump (CMJ) of 41.17 cm during the pre-season (1st assessment).

“The vertical jump is considered a fundamental skill in basketball… [data] allow coaches to compare their players’ performance with high-level athletes.” — Read Abstract

Assuming a hypothetical SD = 5.0 cm (consistent with similar cohorts), a 52 cm jump would have a z-score of: \[ z = \frac{52 - 41.17}{5.0} = 2.17 \] This exceeds 98.5% of the professional cohort!

Check Question

Check your understanding: If a score has z = -1.5, \(\bar{x}\) = 52 cm, and \(\sigma\) = 7 cm, approximately what percentile is this?
Click to reveal answer

Answer: A z-score of -1.5 means the score is 1.5 standard deviations below the mean (\(x = 52 - 1.5(7) = 41.5\) cm).

Calculation:
Percentile = Normal CPT at z=-1.5 = 0.0668 or 6.68%

Meaning: The score is at the 7th percentile, meaning only ~6.7% of scores are lower (and 93.3% are higher).

Interpretation: This is 1.5 SD below average - a relatively low performance compared to the group.

Skewness: Measuring Asymmetry

Skewness quantifies the degree of asymmetry in a distribution[7,8].

Interpretation:

  • Skewness ≈ 0: Symmetric (normal-like)
  • Skewness > 0: Right-skewed (positive skew); mean > median, long right tail
  • Skewness < 0: Left-skewed (negative skew); mean < median, long left tail

Rules of thumb[8]:

| Skewness range | Interpretation |
|---|---|
| \|Skewness\| < 0.5 | Approximately symmetric |
| 0.5 ≤ \|Skewness\| < 1.0 | Moderately skewed |
| \|Skewness\| ≥ 1.0 | Highly skewed |

Movement Science patterns

Reaction times, sway area, and EMG amplitude are systematically right-skewed due to physiological lower bounds and occasional large values[2,6].

Real-World Context

Reaction times in elite sprinters are typically positively (right) skewed.

  • Why? There is a biological “floor” (and IAAF rule) at 0.100 s—no legitimate reaction can be faster.
  • However, there is no upper ceiling; a sprinter might react in 0.200 s or slower due to hesitation, creating a long “tail” of slower times to the right.
  • Data: A study of 1,319 World Championship sprinters found a mean reaction time of 0.166 s for men, but the distribution leans towards slower outliers. — Read Abstract

Check Question

Check your understanding: What does positive skewness indicate?
Click to reveal answer

Answer: Positive skewness indicates a long right tail where the mean is pulled higher than the median by extreme values.

Z-Skew: Statistical Significance

Z-score for skewness (z-skew) tests whether observed skewness is statistically different from zero[7,9].

Formula:

\[ z_{\text{skew}} = \frac{\text{Skewness}}{SE_{\text{skew}}} \]

Equation 2: Z-score for skewness formula

Decision rules:

  • \(|z_{\text{skew}}| < 1.96\): Not significant at α = 0.05 (approximately symmetric)
  • \(|z_{\text{skew}}| \geq 1.96\): Significant at α = 0.05 (statistically asymmetric)
  • \(|z_{\text{skew}}| \geq 2.58\): Highly significant at α = 0.01

To make it simple!

If z-skew is between -2.0 and 2.0, the distribution is approximately symmetric.

Example 1 (not significant):

  • Skewness = 0.45, SE = 0.31 → \(z_{\text{skew}} = 0.45 / 0.31 = 1.45\)
  • Since \(|1.45| < 1.96\), skewness is not significant

Example 2 (highly significant):

  • Skewness = 1.85, SE = 0.31 → \(z_{\text{skew}} = 1.85 / 0.31 = 5.97\)
  • Since \(|5.97| \geq 2.58\), skewness is highly significant (p < .01)
  • Emphasize: Z-skew provides a formal threshold rather than subjective judgment.
  • Caveat: With large samples, trivial skewness can become significant; always check magnitude too[5].
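The two examples can be reproduced with a small Python sketch. The `sqrt(6/n)` standard error is a common large-sample approximation (an assumption here, not SPSS output; SPSS reports an exact small-sample SE, so its values differ slightly):

```python
import math

def z_skew(skewness, se_skew):
    """Z-score for skewness: skewness divided by its standard error."""
    return skewness / se_skew

def se_skew_approx(n):
    """Rough large-sample approximation of the SE of skewness: sqrt(6/n).
    SPSS uses an exact small-sample formula, so values differ slightly."""
    return math.sqrt(6 / n)

# Example 1: not significant (|z| < 1.96)
print(round(z_skew(0.45, 0.31), 2))   # 1.45

# Example 2: highly significant (|z| >= 2.58)
print(round(z_skew(1.85, 0.31), 2))   # 5.97
```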

Check Question

Check your understanding: If skewness = 1.45 and SE = 0.31, what is z-skew and is it significant?
Click to reveal answer

Answer: Formula: z-skew = skewness / SE_skew

Calculation:
z-skew = 1.45 / 0.31 = 4.68

Decision: |4.68| ≥ 2.58, so the skewness is highly significant at α = 0.01 (p < .01).

Interpretation: This is not a trivial departure - it’s a strong, systematic skew that cannot be attributed to sampling variation. A distribution this skewed is clearly not normal!

Kurtosis: Tail Weight

Kurtosis quantifies the “tailedness” or extremity of a distribution relative to the normal distribution[10,11].

Types:

  • Mesokurtic (k ≈ 0): Normal tails
  • Leptokurtic (k > 0): Heavy tails, sharp peak
  • Platykurtic (k < 0): Light tails, flat peak

Rules of thumb:

| Kurtosis | Interpretation |
|---|---|
| \|k\| < 1.0 | Normal-like (acceptable) |
| 1.0 ≤ \|k\| < 2.0 | Moderate departure |
| \|k\| ≥ 2.0 | Severe departure |
Figure 4: Kurtosis Shapes - LOOK AT THE EDGES!

Important

Modern interpretation emphasizes tail weight rather than “peakedness”[10].

  • Heavy tails (Leptokurtic): Look for bars extending far left/right (outliers are common).
  • Light tails (Platykurtic): Look for bars stopping abruptly (outliers are rare/impossible).

Check Question

Check your understanding: What do heavy tails indicate?
Click to reveal answer

Answer: Heavy tails indicate more extreme outliers than expected under a normal distribution.

Figure 5: Example of leptokurtic (heavy-tailed) distribution

Z-Kurtosis: Statistical Significance

Z-score for kurtosis (z-kurtosis) tests whether observed kurtosis differs significantly from zero[7,9].

Formula:

\[ z_{\text{kurt}} = \frac{\text{Kurtosis}}{SE_{\text{kurt}}} \]

Equation 3: Z-score for kurtosis formula

Decision rules:

  • \(|z_{\text{kurt}}| < 1.96\): Not significant at α = 0.05 (normal-like tails)
  • \(|z_{\text{kurt}}| \geq 1.96\): Significant at α = 0.05 (significant tail departure)
  • \(|z_{\text{kurt}}| \geq 2.58\): Highly significant at α = 0.01

To make it simple!

If z-kurt is between -2.0 and 2.0, the distribution is approximately normal.

Combined interpretation:

| Condition | Decision |
|---|---|
| Both \|z-skew\| and \|z-kurt\| < 1.96 | Approximately normal |
| Either \|z\| ≥ 1.96 | Significant departure |
| Both \|z\| ≥ 2.58 | Severe non-normality |
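The combined decision rules can be expressed as a small helper function. This is an illustrative sketch; the name `shape_decision` and its arguments are ours, encoding the thresholds in the table above:

```python
def shape_decision(z_skew, z_kurt, sig_crit=1.96, severe_crit=2.58):
    """Combine z-skew and z-kurtosis using the chapter's decision rules."""
    if abs(z_skew) >= severe_crit and abs(z_kurt) >= severe_crit:
        return "Severe non-normality"
    if abs(z_skew) >= sig_crit or abs(z_kurt) >= sig_crit:
        return "Significant departure"
    return "Approximately normal"

print(shape_decision(0.8, 1.1))   # Approximately normal
print(shape_decision(2.3, 0.5))   # Significant departure
print(shape_decision(3.1, 4.0))   # Severe non-normality
```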

Visual Assessment: Histograms

Histograms display the frequency distribution of data by grouping values into bins, revealing the overall shape and patterns that numerical summaries alone cannot capture[7]. They allow you to quickly identify:

  • Symmetry or skewness: Is the distribution balanced or does it lean to one side?
  • Modality: Are there single or multiple peaks in the data?
  • Outliers: Are there unusual values far from the main cluster?
  • Spread: How much variability exists in the data?

Histograms provide an intuitive visual complement to statistics like mean, median, skewness, and kurtosis—helping you see what the numbers are telling you.

Figure 6: Symmetric, right-skewed, and left-skewed distributions

Visual Assessment: Q-Q Plots

Q-Q plots (quantile-quantile plots) compare observed data quantiles to expected normal quantiles[1,7].

How to interpret:

  • Points close to the diagonal line: Approximately normal
  • Curve bowing up at both ends (convex S-curve): Right-skewed
  • Curve bowing down at both ends (concave, inverted S-curve): Left-skewed
  • Points below the line at the lower end and above it at the upper end: Heavy tails (leptokurtic)
  • Points above the line at the lower end and below it at the upper end: Light tails (platykurtic)
```r
set.seed(123)
par(mfrow = c(1, 3)) # 3 plots side-by-side

# 1. Normal
norm_data <- rnorm(200)
qqnorm(norm_data, main = "Normal", pch = 19, col = "gray50")
qqline(norm_data, col = "red", lwd = 2)

# 2. Right-Skewed (Positive Skew)
# Points curve UP at both ends (convex / U-shape) relative to line
right_skew <- rexp(200, rate = 1)
qqnorm(right_skew, main = "Right-Skewed", pch = 19, col = "gray50")
qqline(right_skew, col = "red", lwd = 2)

# 3. Left-Skewed (Negative Skew)
# Points curve DOWN at both ends (concave / inverted U) relative to line
left_skew <- 100 - rexp(200, rate = 1)
qqnorm(left_skew, main = "Left-Skewed", pch = 19, col = "gray50")
qqline(left_skew, col = "red", lwd = 2)

par(mfrow = c(1, 1)) # Reset layout
```
Figure 7: Q-Q Plots: Normal vs. Skewed Distributions

Gold standard

Q-Q plots are the most informative visual tool for assessing normality because they show how data deviate from the normal model across the entire distribution[7].

Check Question

Check your understanding: What does an S-shaped curve on a Q-Q plot indicate?
Click to reveal answer

Answer: An S-shaped curve indicates right-skewed data (positive skewness).

Figure 8: Q-Q plot showing S-shaped curve for right-skewed data (reaction times)

Formal Tests: Shapiro-Wilk

Shapiro-Wilk test is the most powerful normality test for small to moderate samples[13,14].

Hypotheses:

  • H₀: Data are normally distributed
  • H₁: Data are not normally distributed

Decision rule:

  • p < 0.05: Reject H₀ → Data are NOT normal (evidence of non-normality)
  • p ≥ 0.05: Fail to reject H₀ → Data are approximately normal (no evidence of departure)

Simple interpretation

  • p < 0.05 = Data depart significantly from normality
  • p ≥ 0.05 = Data are consistent with normality

Example:

  • Sprint times: W = 0.981, p = .448 → Data are approximately normal
  • Reaction times: W = 0.905, p = .001 → Data are NOT normal (significant departure)

Critical limitation

Do not rely solely on p-values! Sample size strongly affects test results[5,9]:

  • Large samples (n > 100): Tests detect trivial departures
  • Small samples (n < 30): Tests lack power to detect real departures

So, always combine formal tests with visual assessment (Q-Q plots, histograms) to make informed decisions about normality.
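Outside SPSS, the same test is available as `scipy.stats.shapiro`. A sketch with simulated data, assuming `numpy` and `scipy` are installed (the exact W and p values depend on the random seed):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Approximately normal sample (e.g., sprint times in seconds)
normal_sample = rng.normal(loc=5.0, scale=0.3, size=50)
w1, p1 = stats.shapiro(normal_sample)

# Clearly right-skewed sample (e.g., reaction times)
skewed_sample = rng.exponential(scale=1.0, size=200)
w2, p2 = stats.shapiro(skewed_sample)

print(f"Normal sample: W = {w1:.3f}, p = {p1:.3f}")   # p well above .05
print(f"Skewed sample: W = {w2:.3f}, p = {p2:.3g}")   # p far below .05
```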

The Sample Size Paradox

Why visual and formal methods often conflict:

Large Samples (n > 100)

  • Formal tests become hypersensitive
  • Detect trivial departures (statistically significant but practically irrelevant)
  • Q-Q plot shows near-perfect normality, but p < 0.05

Decision: Trust visual assessment

Small Samples (n < 30)

  • Formal tests have low power
  • Fail to detect real departures (statistically non-significant but practically important)
  • Q-Q plot shows clear departure, but p > 0.05

Decision: Trust visual assessment

Important

Modern statistical practice prioritizes visual assessment with formal tests serving as supplementary evidence[5,7,12].

Key message: Due to sample size effects, formal tests work best with moderate samples (n = 30-100) but are problematic with very small or very large samples. Always start with visual assessment (Q-Q plots, histograms) regardless of sample size.

Check Question

Check your understanding: When do visual and formal normality tests most often conflict?
Click to reveal answer

Answer: They most often conflict due to:

  • sample size effects - large samples make formal tests too sensitive, small samples make them underpowered.


Decision Framework: Integrating Evidence

When visual and formal methods conflict, follow this hierarchical approach[5,7]:

Step 1: Prioritize visual assessment (reveals nature and magnitude)

Step 2: Consider sample size when interpreting formal tests

  • n < 30: Weight visual heavily (tests underpowered)
  • 30 ≤ n < 100: Balance visual and formal (tests most informative)
  • n ≥ 100: Weight visual heavily (tests hypersensitive)

Step 3: Evaluate practical vs. statistical significance

  • Are z-skew and z-kurt within ±2 (acceptable) or beyond ±2 (significant)?
  • Do Q-Q plots show systematic departure or minor waviness?

Step 4: Apply convergence rule

  • All agree (visual, z-scores, formal test) → Proceed with high confidence
  • Visual shows normality BUT p < .05 → Likely trivial departure (large n) — use parametric
  • Visual shows departure BUT p ≥ .05 → Likely real problem (small n) — use transformation/nonparametric
  • Mixed/borderline evidence → Use robust methods (Welch’s t-test, bootstrap)
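Steps 1-4 can be sketched as a decision helper. This is illustrative only; the function name, inputs, and labels are ours, encoding the convergence rule above:

```python
def normality_decision(visual_ok, z_skew, z_kurt, p_value, n):
    """Integrate visual assessment, shape z-scores, and a formal test
    following the chapter's convergence rule (illustrative helper)."""
    shape_ok = abs(z_skew) < 2 and abs(z_kurt) < 2
    test_ok = p_value >= 0.05

    if visual_ok and shape_ok and test_ok:
        return "Parametric (all evidence converges)"
    if visual_ok and shape_ok and not test_ok and n >= 100:
        return "Parametric (trivial departure flagged by large n)"
    if not visual_ok and not test_ok:
        return "Transform or nonparametric"
    if not visual_ok and test_ok and n < 30:
        return "Transform or nonparametric (test underpowered)"
    return "Robust methods (e.g., Welch's t-test, bootstrap)"

# Large sample, visual normal, significant p -> parametric
print(normality_decision(True, 0.8, 1.1, 0.018, 150))
# Small sample, visual departure, non-significant p -> transform
print(normality_decision(False, 2.8, 0.4, 0.062, 22))
```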

Convergence Rule: Practical Examples

Apply the convergence rule to resolve conflicts between methods:

Scenario 1: Large Sample (n = 150)

  • Q-Q plot: Points hug the line closely
  • z-skew = 0.8, z-kurt = 1.1 (both in ±2)
  • Shapiro-Wilk: W = 0.975, p = .018

Decision: Use parametric

  • Visual shows near-perfect normality
  • z-scores show acceptable shape
  • Significant p-value likely due to large n detecting trivial departure

Scenario 2: Small Sample (n = 22)

  • Q-Q plot: Clear S-curve (right-skewed)
  • z-skew = 2.8 (beyond ±2)
  • Shapiro-Wilk: W = 0.918, p = .062

Decision: Transform or use nonparametric

  • Visual clearly shows departure
  • z-skew confirms significant skewness
  • Non-significant p-value due to low power (small n)
  • Don’t let p > .05 mislead you!

Key insight

When methods conflict, trust the pattern across multiple indicators rather than relying on a single test. Large samples reveal trivial issues; small samples hide real problems.

Check Question

Check your understanding: What should you prioritize when assessing normality: visual assessment or formal tests?
Click to reveal answer

Answer: Prioritize visual assessment (Q-Q plots, histograms) because they reveal the nature and magnitude of departures. Formal tests serve as supplementary evidence, but are highly influenced by sample size.

Decision Table for Conflicts

| Visual Assessment | Formal Test | Sample Size | Recommended Action |
|---|---|---|---|
| Approximately normal | p < 0.05 | n < 30 | Use parametric (test underpowered) |
| Approximately normal | p < 0.05 | n ≥ 100 | Use parametric (trivial departure) |
| Clear departure | p > 0.05 | n < 30 | Transform/nonparametric (test underpowered) |
| Clear departure | p > 0.05 | n ≥ 100 | Investigate data quality |
| Mild departure | p < 0.05 | Any | Use robust methods (Welch's t) |
| Severe departure | p < 0.05 | Any | Transform/nonparametric |

Practical checklist

When reviewing SPSS output, systematically check:

✓ Sample size, ✓ Visual (Q-Q + histogram), ✓ Magnitude (z-skew, z-kurt), ✓ Formal test (Shapiro-Wilk), ✓ Integrated decision

Common Non-Normal Patterns

Movement Science data often show systematic departures from normality[3,4]:

  1. Right-skewed: Reaction time, sway area, EMG amplitude (physiological lower bounds)[6]
  2. Ceiling/floor effects: Function scores, pain scales (clustering at boundaries)
  3. Bimodal: Mixed groups (trained vs. untrained) — analyze separately[15]
  4. Heavy-tailed: Outlier-prone measures (strength under fatigue, motivation artifacts)[12]

Real example

A researcher collects reaction times: strong right skew (skew = 1.8), Shapiro-Wilk p = .002, Q-Q plot shows clear upward curvature.

Options:

  1. Log transform → confirm normality on log scale → proceed with parametric methods
  2. Report median/IQR instead of mean/SD
  3. Use nonparametric tests (Mann-Whitney U, Kruskal-Wallis)
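Option 1 (the log transform) can be demonstrated with a stdlib-only sketch. The skewness function below is the simple moment-based estimator, and the simulated reaction times are our assumption for illustration (SPSS reports the adjusted Fisher-Pearson skewness, so its values differ slightly):

```python
import math
import random

def sample_skewness(values):
    """Simple moment-based skewness: m3 / m2^(3/2).
    (SPSS uses the adjusted Fisher-Pearson version; values differ slightly.)"""
    n = len(values)
    mean = sum(values) / n
    m2 = sum((v - mean) ** 2 for v in values) / n
    m3 = sum((v - mean) ** 3 for v in values) / n
    return m3 / m2 ** 1.5

random.seed(1)
# Simulated right-skewed reaction times (log-normal, in seconds)
raw = [random.lognormvariate(math.log(0.25), 0.6) for _ in range(500)]
logged = [math.log(v) for v in raw]

print(f"Skewness raw:    {sample_skewness(raw):.2f}")     # strongly positive
print(f"Skewness logged: {sample_skewness(logged):.2f}")  # near zero
```

On the log scale the distribution is close to symmetric, so parametric methods become defensible.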

When Normality Matters Most

High priority: Check normality carefully

  • Small samples (n < 30): Parametric methods rely heavily on distributional assumptions[1]
  • Hypothesis tests with p-values near cutoffs (p ≈ 0.05): Violations could tip decisions[9]
  • Variables known to be non-normal: Reaction time, sway area, EMG amplitude[6]

Lower priority: Normality less critical

  • Large samples (n > 100): Central Limit Theorem makes parametric tests robust[5]
  • Robust methods: Welch’s t-test, bootstrapping, rank-based tests tolerate departures[12,16]
  • Descriptive summaries: Report median/IQR regardless of normality[7]

Key principle: context over rules

There is no universal “how normal is normal enough.” The answer depends on sample size, purpose (inference vs. description), magnitude of departure, and robustness of method[5,16].

What to Do When Data Are Not Normal

When departures are consequential, consider these principled options[4,12]:

  1. Use robust or nonparametric methods:

    • Median/IQR instead of mean/SD
    • Mann-Whitney U or Kruskal-Wallis instead of t-tests/ANOVA
    • Bootstrap or permutation tests
  2. Transform the variable:

    • Log transformation for right-skewed data (reaction time, sway area)[6]
    • Square root or Box-Cox for count data
  3. Accept departure and proceed with caution:

    • Many procedures are robust to moderate departures (n > 30, balanced designs)[5]
    • Use Welch’s t-test when variances differ[16]
  4. Separate subgroups: If bimodal, analyze groups separately[15]

  5. Acknowledge and report: Describe distributional shape and justify your approach[7]

Worked Example: Complete Assessment

Scenario: Sprint times from 40 participants

Step 1: Visualize (histogram + Q-Q plot)

  • Histogram: Reasonably symmetric, no extreme outliers
  • Q-Q plot: Points close to diagonal with minor waviness

Step 2: Compute shape measures

  • Skewness = 0.15 (negligible)
  • Kurtosis = −0.22 (close to normal)
  • z-skew = 0.15 / 0.26 = 0.58 (not significant)
  • z-kurt = −0.22 / 0.51 = −0.43 (not significant)

Step 3: Run formal test

  • Shapiro-Wilk: W = 0.981, p = .448 (do not reject normality)

Step 4: Integrated decision

  • All evidence converges: approximately normal
  • Conclusion: Proceed with parametric methods (t-tests, ANOVA) confidently

Summary: Key Takeaways

  1. Normal distribution: Symmetric, bell-shaped; defined by μ and σ; foundational for inference
  2. 68-95-99.7 rule: Quick probability estimates without software
  3. Skewness & kurtosis: Quantify shape; use z-scores for significance testing
  4. Visual assessment is primary: Q-Q plots and histograms reveal nature and magnitude of departures
  5. Formal tests are supplementary: Sample size strongly affects results (hypersensitive with large n, underpowered with small n)
  6. Integration over exclusion: When conflicts arise, combine all evidence using the decision framework
  7. Context matters: Sample size, purpose, magnitude, and robustness of methods determine whether departures are consequential
  8. Multiple options exist: Transformation, robust methods, nonparametric tests when normality fails

Important

The goal is not to worship normality or avoid it reflexively, but to treat it as one useful model among many, applicable when data and context support it[12,15].

Practice Questions

  1. What is the most distinctive property of the normal distribution?
  2. If z = 1.5, approximately what percentile is this?
  3. What does positive skewness indicate? Give a Movement Science example.
  4. When do visual and formal normality assessments typically conflict, and why?
  5. What is the decision rule for z-skew at α = 0.05?
  6. If Q-Q plot shows approximate normality but Shapiro-Wilk p = .02 with n = 150, what should you do?
  7. Name three common non-normal patterns in Movement Science data.
  8. When is normality assessment most critical (high priority)?

References

1. Moore, D. S., McCabe, G. P., & Craig, B. A. (2021). Introduction to the practice of statistics (10th ed.). W. H. Freeman and Company.
2. Vincent, W. J. (1999). Statistics in kinesiology. Human Kinetics.
3. Micceri, T. (1989). The unicorn, the normal curve, and other improbable creatures. Psychological Bulletin, 105(1), 156.
4. Blanca, M. J., Alarcón, R., Arnau, J., Bono, R., & Bendayan, R. (2013). Non-normal data: Is ANOVA still a valid option? Psicothema, 25(2), 221–226.
5. Lumley, T., Diehr, P., Emerson, S., & Chen, L. (2002). The importance of the normality assumption in large public health data sets. Annual Review of Public Health, 23(1), 151–169.
6. Limpert, E., Stahel, W. A., & Abbt, M. (2001). Log-normal distributions across the sciences: Keys and clues. BioScience, 51(5), 341–352. https://doi.org/10.1641/0006-3568(2001)051[0341:LNDATS]2.0.CO;2
7. Field, A. (2013). Discovering statistics using IBM SPSS statistics. Sage.
8. Bulmer, M. G. (1979). Principles of statistics. Courier Corporation.
9. Ghasemi, A., & Zahediasl, S. (2012). Normality tests for statistical analysis: A guide for non-statisticians. International Journal of Endocrinology and Metabolism, 10(2), 486.
10. Westfall, P. H., & Young, S. S. (2014). Resampling-based multiple testing: Examples and methods for p-value adjustment. John Wiley & Sons.
11. Joanes, D. N., & Gill, C. A. (1998). Comparing measures of sample skewness and kurtosis. The Statistician, 47(1), 183–189.
12. Wilcox, R. R. (2017). Introduction to robust estimation and hypothesis testing (4th ed.). Academic Press.
13. Shapiro, S. S., & Wilk, M. B. (1965). An analysis of variance test for normality (complete samples). Biometrika, 52(3-4), 591–611.
14. Razali, N. M., & Wah, Y. B. (2011). Power comparisons of Shapiro-Wilk, Kolmogorov-Smirnov, Lilliefors and Anderson-Darling tests. Journal of Statistical Modeling and Analytics, 2(1), 21–33.
15. Tukey, J. W. (1977). Exploratory data analysis. Addison-Wesley.
16. Delacré, M., Lakens, D., & Leys, C. (2021). Why psychologists should by default use Welch's t-test instead of Student's t-test. International Review of Social Psychology, 34(1).
17. Furtado, O., Jr. (2026). Statistics for movement science: A hands-on guide with SPSS (1st ed.). https://drfurtado.github.io/sms/