Directional Hypotheses: One-Tailed vs. Two-Tailed Tests

A comprehensive, evidence-based guide to formulating directional hypotheses (H₁: μ₁ > μ₂), the practical and theoretical distinctions between one-tailed and two-tailed tests, and why two-tailed tests are the standard in scientific research unless strong a priori directional predictions exist. Illustrated with R visualizations and examples from kinesiology and the humanities.

Hypothesis testing
One-tailed tests
Two-tailed tests
Research methodology
Effect size
Author: Ovande Furtado
Affiliation: Cal State Northridge
Published: March 4, 2026

1 Introduction

One of the most consequential decisions a researcher makes during study design is whether to use a one-tailed or a two-tailed hypothesis test. The decision is not merely technical—it reflects the researcher’s prior knowledge, theoretical commitments, and willingness to accept the risk of missing effects in the unexpected direction (Bland & Altman, 1994; Weir & Vincent, 2021).

In kinesiology, this matters enormously. Suppose a researcher tests whether high-intensity interval training (HIIT) improves VO₂max more than moderate-intensity continuous training (MICT). If a one-tailed test is chosen—predicting HIIT is better—and HIIT actually turns out to be worse, the one-tailed framework cannot detect that harmful outcome at the chosen α level. Two-tailed tests preserve the researcher’s ability to detect effects in either direction (Bland & Altman, 1994; Rosner, 2015).

This post develops the mathematical and methodological foundations for both approaches, evaluates when each is scientifically defensible, and provides R-based demonstrations using kinesiology and humanities examples.

2 Hypothesis Formulation

2.1 The General Structure

Every classical hypothesis test involves two competing statements about a population parameter (Gravetter et al., 2021; Weir & Vincent, 2021):

  • Null hypothesis (\(H_0\)): The default claim of no difference, no association, or no effect.
  • Alternative hypothesis (\(H_1\)): The researcher’s prediction that a difference, association, or effect exists.

The alternative hypothesis determines whether the test is one-tailed or two-tailed, and in which direction.

2.2 Non-Directional Hypotheses → Two-Tailed Tests

A non-directional alternative hypothesis specifies that a difference exists but makes no claim about its direction:

\[ H_0: \mu_1 = \mu_2 \qquad H_1: \mu_1 \neq \mu_2 \]

The \(\neq\) sign means we are interested in detecting an effect regardless of whether \(\mu_1 > \mu_2\) or \(\mu_1 < \mu_2\). The critical region for rejecting \(H_0\) is therefore split equally between both tails of the null distribution, allocating \(\alpha/2\) to each tail (Weir & Vincent, 2021).

Kinesiology example. A researcher wants to know whether the grip strength (\(\mu\), in kg) of competitive rock climbers differs from that of competitive swimmers. No prior evidence firmly establishes the direction:

\[ H_0: \mu_{\text{climbers}} = \mu_{\text{swimmers}} \qquad H_1: \mu_{\text{climbers}} \neq \mu_{\text{swimmers}} \]

Humanities example. A sociolinguist tests whether the mean sentence length in 19th-century British novels differs from that in 19th-century American novels. Neither direction is theoretically mandated:

\[ H_0: \mu_{\text{British}} = \mu_{\text{American}} \qquad H_1: \mu_{\text{British}} \neq \mu_{\text{American}} \]

2.3 Directional Hypotheses → One-Tailed Tests

A directional alternative hypothesis specifies both the existence and the direction of an effect:

\[ H_1: \mu_1 > \mu_2 \qquad \text{(right-tailed)} \]

or

\[ H_1: \mu_1 < \mu_2 \qquad \text{(left-tailed)} \]

The entire \(\alpha\) is concentrated in one tail, placing the critical value closer to the mean of the null distribution and—for effects in the predicted direction—making rejection easier to achieve (Cohen, 1988).

Kinesiology example. Based on a meta-analysis showing resistance training consistently increases bone mineral density, a researcher predicts that a 16-week resistance training program will increase femoral neck BMD in postmenopausal women:

\[ H_0: \mu_{\text{post}} \leq \mu_{\text{pre}} \qquad H_1: \mu_{\text{post}} > \mu_{\text{pre}} \]

Humanities example. A cognitive psychologist grounded in dual-coding theory predicts that illustrated textbooks will improve recall scores compared to text-only versions:

\[ H_0: \mu_{\text{illustrated}} \leq \mu_{\text{text-only}} \qquad H_1: \mu_{\text{illustrated}} > \mu_{\text{text-only}} \]
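In R, these three alternative forms map directly onto the `alternative` argument of `t.test()`. A minimal sketch with simulated recall scores (the data and group names are illustrative, not from the study described above):

```r
set.seed(7)
illustrated <- rnorm(25, mean = 75, sd = 8)   # hypothetical recall scores
text_only   <- rnorm(25, mean = 70, sd = 8)

p_two <- t.test(illustrated, text_only, alternative = "two.sided")$p.value  # H1: mu1 != mu2
p_gt  <- t.test(illustrated, text_only, alternative = "greater")$p.value    # H1: mu1 >  mu2
p_lt  <- t.test(illustrated, text_only, alternative = "less")$p.value       # H1: mu1 <  mu2

# The two directional p-values sum to 1, and the smaller of them
# is exactly half the two-tailed p-value:
c(sum = p_gt + p_lt, half_check = 2 * min(p_gt, p_lt) - p_two)
```

The halving relationship (smaller directional p-value = two-tailed p-value / 2) holds for any symmetric test statistic, which is precisely what makes post-hoc switching between tails tempting; Section 4.2 returns to this.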

3 The Mathematics of One- vs. Two-Tailed Tests

3.1 Critical Values and the α Allocation

For a t-test at \(\alpha = 0.05\) with \(df = 29\) (e.g., a one-sample or paired design with \(n = 30\)):

Code
df <- 29
alpha <- 0.05

t_two <- qt(1 - alpha/2, df = df)   # two-tailed critical value
t_one <- qt(1 - alpha,   df = df)   # one-tailed (right) critical value

writeLines(c(
  paste0("Two-tailed critical value (±): ", round(t_two, 3)),
  paste0("One-tailed critical value    : ", round(t_one, 3)),
  paste0("Difference                   : ", round(t_two - t_one, 3),
         " (two-tailed is more stringent)")
))
## Two-tailed critical value (±): 2.045
## One-tailed critical value    : 1.699
## Difference                   : 0.346 (two-tailed is more stringent)

The one-tailed critical value (≈1.699) is smaller in absolute terms than the two-tailed value (≈2.045), so for an effect in the predicted direction a one-tailed test requires a less extreme observed t-statistic to reject \(H_0\). This is the power advantage of one-tailed tests.
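To see what this stringency gap means in practice, consider a hypothetical observed statistic of t = 1.90 at the same df = 29. It falls between the two critical values, so the verdict depends entirely on which test was specified in advance:

```r
df    <- 29
t_obs <- 1.90   # hypothetical observed statistic, in the predicted direction

p_one <- pt(t_obs, df, lower.tail = FALSE)           # right-tailed p-value
p_two <- 2 * pt(abs(t_obs), df, lower.tail = FALSE)  # two-tailed p-value

round(c(one_tailed = p_one, two_tailed = p_two), 4)
# one-tailed p < 0.05 (reject H0); two-tailed p > 0.05 (retain H0)
```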

3.2 Rejection Regions Visualized

Code
t_seq <- seq(-4, 4, length.out = 600)
yt    <- dt(t_seq, df = 29)

par(mfrow = c(2, 1), mar = c(3.5, 4, 2.5, 1))

# ---- One-tailed (right) ----
plot(t_seq, yt, type = "l", lwd = 2, col = "navy",
     main = expression(paste("One-Tailed (Right): ", alpha, " = 0.05")),
     xlab = "", ylab = "Density", ylim = c(0, 0.41))
t_nr <- t_seq[t_seq <= t_one]
polygon(c(t_nr, rev(t_nr)), c(dt(t_nr, df), rep(0, length(t_nr))),
        col = "lightgray", border = NA)
t_rj <- t_seq[t_seq >= t_one]
polygon(c(t_rj, rev(t_rj)), c(dt(t_rj, df), rep(0, length(t_rj))),
        col = rgb(1, 0, 0, 0.5), border = NA)
abline(v = t_one, lty = 2, col = "steelblue", lwd = 1.8)
text(t_one + 0.15, 0.27, paste0("t* = ", round(t_one, 2)),
     col = "steelblue", cex = 0.75, adj = 0)
text(3.3, 0.04, expression(alpha), col = "red", cex = 0.9)
text(-1.5, 0.18, expression(paste("Retain ", H[0])), col = "gray40", cex = 0.8)
text(3.0, 0.15, expression(paste("Reject ", H[0])), col = "red", cex = 0.8)

# ---- Two-tailed ----
plot(t_seq, yt, type = "l", lwd = 2, col = "navy",
     main = expression(paste("Two-Tailed: ", alpha, " = 0.05")),
     xlab = "t-statistic", ylab = "Density", ylim = c(0, 0.41))
t_nr2 <- t_seq[t_seq >= -t_two & t_seq <= t_two]
polygon(c(t_nr2, rev(t_nr2)), c(dt(t_nr2, df), rep(0, length(t_nr2))),
        col = "lightgray", border = NA)
t_L <- t_seq[t_seq <= -t_two]
polygon(c(t_L, rev(t_L)), c(dt(t_L, df), rep(0, length(t_L))),
        col = rgb(1, 0, 0, 0.5), border = NA)
t_R <- t_seq[t_seq >= t_two]
polygon(c(t_R, rev(t_R)), c(dt(t_R, df), rep(0, length(t_R))),
        col = rgb(1, 0, 0, 0.5), border = NA)
abline(v = -t_two, lty = 2, col = "steelblue", lwd = 1.8)
abline(v =  t_two, lty = 2, col = "steelblue", lwd = 1.8)
text(-t_two - 0.15, 0.27, paste0("t* = ", round(-t_two, 2)),
     col = "steelblue", cex = 0.75, adj = 1)
text( t_two + 0.15, 0.27, paste0("t* = ", round(t_two, 2)),
     col = "steelblue", cex = 0.75, adj = 0)
text(-3.3, 0.04, expression(alpha/2), col = "red", cex = 0.9)
text( 3.3, 0.04, expression(alpha/2), col = "red", cex = 0.9)
text(0, 0.18, expression(paste("Retain ", H[0])), col = "gray40", cex = 0.8)

par(mfrow = c(1,1))
Figure 1: Rejection regions (shaded red) and non-rejection regions (shaded gray) for a right-tailed (one-tailed) test (top) and a two-tailed test (bottom) at alpha = 0.05, df = 29. Blue dashed vertical lines mark the critical values. The one-tailed test places the entire rejection region (all of alpha) in the right tail.

3.3 The Power Trade-Off

The power advantage of a one-tailed test exists only in the predicted direction. In the opposite direction, the one-tailed test has essentially zero power to detect even very large effects (Bland & Altman, 1994; Cohen, 1988). The table below makes this explicit for \(n = 25\) per group at \(\alpha = 0.05\).

Code
compute_power <- function(d, n, alpha, tails = 2) {
  df  <- 2 * (n - 1)
  ncp <- d * sqrt(n / 2)
  if (tails == 2) {
    crit  <- qt(1 - alpha/2, df)
    power <- pt(-crit, df, ncp) + 1 - pt(crit, df, ncp)
  } else {
    crit  <- qt(1 - alpha, df)
    power <- 1 - pt(crit, df, ncp)
  }
  round(power, 3)
}

d_values <- c(-0.8, -0.5, -0.2, 0, 0.2, 0.5, 0.8)
n_group  <- 25

pow_two <- sapply(d_values, compute_power, n = n_group, alpha = 0.05, tails = 2)
pow_one <- sapply(d_values, compute_power, n = n_group, alpha = 0.05, tails = 1)

df_pow <- data.frame(
  "Cohen's d"           = d_values,
  "Direction"           = c("Opposite","Opposite","Opposite","None",
                             "Predicted","Predicted","Predicted"),
  "Power (Two-Tailed)"  = pow_two,
  "Power (One-Tailed)"  = pow_one,
  check.names = FALSE
)
print(df_pow, row.names = FALSE)
##  Cohen's d Direction Power (Two-Tailed) Power (One-Tailed)
##       -0.8  Opposite              0.791              0.000
##       -0.5  Opposite              0.410              0.000
##       -0.2  Opposite              0.107              0.010
##        0.0      None              0.050              0.050
##        0.2 Predicted              0.107              0.172
##        0.5 Predicted              0.410              0.539
##        0.8 Predicted              0.791              0.874

Key observation: When the true effect is in the predicted direction (\(d > 0\)), the one-tailed test has noticeably higher power. But when the true effect is in the opposite direction (\(d < 0\)), the one-tailed test has near-zero power—it will almost never detect that the intervention harms the outcome. In kinesiology and health sciences, detecting harm is just as important as detecting benefit (Bland & Altman, 1994).
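For effects in the predicted direction, the `compute_power()` values above can be cross-checked against base R's built-in `power.t.test()` (which, unlike the custom function, does not report opposite-direction power):

```r
# Cross-check for d = 0.5, n = 25 per group
pow_one_check <- power.t.test(n = 25, delta = 0.5, sd = 1, sig.level = 0.05,
                              type = "two.sample",
                              alternative = "one.sided")$power
pow_two_check <- power.t.test(n = 25, delta = 0.5, sd = 1, sig.level = 0.05,
                              type = "two.sample",
                              alternative = "two.sided")$power

round(c(one_sided = pow_one_check, two_sided = pow_two_check), 3)
# should agree with the 0.539 and 0.410 entries in the table above
# (power.t.test's default two-sided formula ignores the negligible
#  opposite-tail contribution)
```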

Code
d_cont    <- seq(-1.5, 1.5, length.out = 300)
pow_two_c <- sapply(d_cont, compute_power, n = 25, alpha = 0.05, tails = 2)
pow_one_c <- sapply(d_cont, compute_power, n = 25, alpha = 0.05, tails = 1)

plot(d_cont, pow_two_c, type = "l", lwd = 2, col = "firebrick",
     xlab = "True Effect Size (Cohen's d)",
     ylab = "Statistical Power",
     main = bquote("Power Curves: One-Tailed vs. Two-Tailed (n = 25/group," ~ alpha ~ "= 0.05)"),
     ylim = c(0, 1))
lines(d_cont, pow_one_c, lwd = 2, col = "steelblue")
abline(h = 0.80, lty = 2, col = "gray50")
abline(v = 0,    lty = 3, col = "black")
text(0.05, 0.83, "80% power", col = "gray40", cex = 0.8, adj = 0)
text(1.25, 0.70, "Two-tailed", col = "firebrick", cex = 0.85)
text(1.25, 0.92, "One-tailed", col = "steelblue", cex = 0.85)
text(-1.2, 0.14, "One-tailed\n(no power here)", col = "steelblue", cex = 0.75)
Figure 2: Power as a function of true effect size (Cohen's d) for one-tailed (blue) and two-tailed (red) tests at alpha = 0.05, n = 25 per group. Positive d values lie in the predicted direction; negative d values represent opposite-direction effects, where the one-tailed test has essentially no power.

4 Why Two-Tailed Tests Are the Scientific Default

4.1 The Conservative Rationale

Despite the power advantage in a specific direction, two-tailed tests are the recommended default in kinesiology, psychology, medicine, and behavioral sciences unless a very specific set of conditions is satisfied (Bland & Altman, 1994; Rosner, 2015; Weir & Vincent, 2021). The reasoning is twofold:

  1. Detecting harm matters. In intervention research, an effect in the “wrong” direction is often clinically or practically consequential. A training protocol that reduces jump height is important to know about, even if the researcher only predicted improvement.

  2. Conservatism prevents false discoveries. Two-tailed tests are more conservative—they require a more extreme test statistic to achieve significance—which reduces the rate of spurious findings in the literature.

4.2 The p-Hacking Problem

A well-documented methodological abuse involves conducting a two-tailed analysis, observing \(p = 0.07\) (not significant), then switching to a one-tailed analysis to obtain \(p = 0.035\) (declared significant)—without any pre-specified directional justification (Bland & Altman, 1994). This practice:

  • Doubles the effective Type I error rate from 5% to 10%.
  • Constitutes post-hoc hypothesis formulation, a violation of the frequentist framework where hypotheses must be stated before data inspection.
  • Has been documented as a driver of non-replicable findings in sports science and psychology (Cumming, 2014).
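The doubling claim is easy to verify: under a true null, choosing whichever tail matches the observed direction rejects about 10% of the time, not 5%. A small sketch (sample size, seed, and number of simulations are arbitrary choices):

```r
set.seed(123)
p_directional <- replicate(10000, {
  g1 <- rnorm(20); g2 <- rnorm(20)   # H0 true: identical populations
  p  <- t.test(g1, g2, alternative = "greater")$p.value
  min(p, 1 - p)   # p-value of the one-tailed test matching the observed direction
})
mean(p_directional < 0.05)   # approximately 0.10 -- double the nominal alpha
```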
Warning

Pre-registration as a safeguard. Publicly registering the hypothesis direction and statistical test before data collection (e.g., on the Open Science Framework) prevents post-hoc switching between one- and two-tailed tests and protects the validity of the reported α level. Many kinesiology journals now encourage or require pre-registration (Thomas et al., 2015).

The simulation below illustrates how switching to a one-tailed test inflates the observed Type I error rate when the null hypothesis is true.

Code
set.seed(2024)
n_sims <- 3000
n_grp  <- 30

p_two_vec <- numeric(n_sims)
p_one_vec <- numeric(n_sims)

for (i in seq_len(n_sims)) {
  g1 <- rnorm(n_grp, 0, 1)
  g2 <- rnorm(n_grp, 0, 1)
  p_two_vec[i] <- t.test(g1, g2, alternative = "two.sided")$p.value
  p_one_vec[i] <- t.test(g1, g2, alternative = "greater")$p.value
}

type1_two    <- mean(p_two_vec < 0.05)
marginal     <- p_two_vec >= 0.05 & p_two_vec < 0.10
type1_switch <- mean(p_one_vec[marginal] < 0.05)

writeLines(c(
  paste0("Type I error — genuine two-tailed       : ", round(type1_two, 3)),
  paste0("Switch candidate studies (p_two 0.05–0.10): ", sum(marginal)),
  paste0("Type I error after switching            : ", round(type1_switch, 3),
         " — substantially inflated!")
))
## Type I error — genuine two-tailed       : 0.042
## Switch candidate studies (p_two 0.05–0.10): 148
## Type I error after switching            : 0.473 — substantially inflated!
Code
plot_col <- ifelse(marginal, "firebrick", rgb(0.2, 0.4, 0.8, 0.25))
plot(p_two_vec, p_one_vec,
     pch = 19, cex = 0.35, col = plot_col,
     xlab = "p-value (two-tailed)",
     ylab = "p-value (one-tailed)",
     main = "Post-hoc Switching Inflates Type I Error\n(H0 True in All Simulations)")
abline(h = 0.05, lty = 2, col = "black")
abline(v = 0.05, lty = 3, col = "gray50")
abline(v = 0.10, lty = 3, col = "gray50")
legend("topright",
       legend = c("p_two >= 0.10",
                  "p_two 0.05-0.10 (switch candidates)",
                  "alpha = 0.05"),
       col  = c(rgb(0.2,0.4,0.8,0.6), "firebrick", "black"),
       pch  = c(19, 19, NA),
       lty  = c(NA, NA, 2),
       cex  = 0.72, bty = "n")
Figure 3: Simulation demonstrating Type I error inflation when researchers switch from a two-tailed to a one-tailed test post-hoc. Each dot represents one simulated study (n = 30/group) drawn from populations with identical means (H0 true). Red dots: studies where p_two-tailed fell between 0.05 and 0.10 – marginal results a researcher might re-analyze as one-tailed. The horizontal dashed line marks alpha = 0.05.

Reading the plot: Among simulated studies where \(p_{\text{two}} \in [0.05, 0.10)\) (red dots), the proportion with \(p_{\text{one}} < 0.05\) is substantially above 5%, confirming the Type I error inflation. All of these studies were drawn from populations with no true effect (\(H_0\) is true).

5 When Is a One-Tailed Test Justified?

Methodologists converge on two necessary criteria that must both be met (Bland & Altman, 1994; Rosner, 2015; Weir & Vincent, 2021):

Criterion 1 — A priori directional justification. The directional prediction must be grounded in substantial prior empirical or theoretical evidence established before any data are examined. A researcher who reads a related study after data collection and then formulates a directional hypothesis is engaging in post-hoc rationalization.

Criterion 2 — Irrelevance of the opposite direction. The researcher must be prepared to argue that an effect in the opposite direction—even a large one—would have no theoretical, clinical, or practical significance. In most kinesiology and health science contexts, this criterion is extremely difficult to satisfy.

The decision flowchart below synthesizes current methodological guidance:

Table 1: Decision guide for choosing between one-tailed and two-tailed tests

| Step | Question | If YES | If NO |
|------|----------|--------|-------|
| 1 | Strong prior literature predicts a specific direction? | Go to Step 2 | Use two-tailed |
| 2 | Directional prediction registered before data collection? | Go to Step 3 | Use two-tailed |
| 3 | Effect in opposite direction would be completely meaningless? | One-tailed is defensible | Use two-tailed |
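The three steps in Table 1 form a strict conjunction: failing any one of them defaults to a two-tailed test. As a sketch, the logic can be encoded in a small helper function (the function and argument names are illustrative, not from any package):

```r
# Hypothetical helper encoding the Table 1 decision logic
choose_tails <- function(strong_prior_direction,
                         preregistered_before_data,
                         opposite_direction_meaningless) {
  if (strong_prior_direction &&
      preregistered_before_data &&
      opposite_direction_meaningless) {
    "one-tailed (defensible)"
  } else {
    "two-tailed"
  }
}

choose_tails(TRUE, TRUE, FALSE)  # opposite direction still matters -> "two-tailed"
choose_tails(TRUE, TRUE, TRUE)   # all three criteria met -> "one-tailed (defensible)"
```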

6 A Worked Numerical Example

A researcher hypothesizes—based on a published meta-analysis of 12 studies—that eccentric-overload training produces greater quadriceps hypertrophy than traditional resistance training. She pre-registers a one-tailed test.

Her study (\(n = 20\) per group) yields:

Code
set.seed(99)
eccentric <- rnorm(20, mean = 3.8, sd = 2.0)
trad      <- rnorm(20, mean = 2.5, sd = 2.1)

t_two_res <- t.test(eccentric, trad, alternative = "two.sided")
t_one_res <- t.test(eccentric, trad, alternative = "greater")

pooled_sd <- sqrt((var(eccentric) + var(trad)) / 2)
d_obs     <- (mean(eccentric) - mean(trad)) / pooled_sd

writeLines(c(
  paste0("Mean change \u2014 Eccentric : ", round(mean(eccentric), 2), " cm\u00b2"),
  paste0("Mean change \u2014 Traditional: ", round(mean(trad), 2), " cm\u00b2"),
  paste0("Observed Cohen\u2019s d       : ", round(d_obs, 2)),
  "",
  "--- Two-tailed test ---",
  paste0("  t(", round(t_two_res$parameter, 1), ") = ", round(t_two_res$statistic, 3)),
  paste0("  p = ", round(t_two_res$p.value, 4)),
  paste0("  95% CI: [", round(t_two_res$conf.int[1], 2), ", ",
         round(t_two_res$conf.int[2], 2), "] cm\u00b2"),
  "",
  "--- One-tailed test (eccentric > traditional) ---",
  paste0("  t(", round(t_one_res$parameter, 1), ") = ", round(t_one_res$statistic, 3)),
  paste0("  p = ", round(t_one_res$p.value, 4)),
  paste0("  Lower 95% bound: ", round(t_one_res$conf.int[1], 2), " cm\u00b2")
))
## Mean change — Eccentric : 3.1 cm²
## Mean change — Traditional: 2.59 cm²
## Observed Cohen’s d       : 0.25
## 
## --- Two-tailed test ---
##   t(37.6) = 0.787
##   p = 0.4361
##   95% CI: [-0.8, 1.82] cm²
## 
## --- One-tailed test (eccentric > traditional) ---
##   t(37.6) = 0.787
##   p = 0.2181
##   Lower 95% bound: -0.58 cm²
Code
df_plot <- data.frame(
  Group  = rep(c("Eccentric-Overload", "Traditional"), each = 20),
  Change = c(eccentric, trad)
)
sum_df <- data.frame(
  Group = c("Eccentric-Overload", "Traditional"),
  Mean  = c(mean(eccentric), mean(trad)),
  SD    = c(sd(eccentric),   sd(trad))
)

# Base R version for compatibility
group_cols <- c("Eccentric-Overload" = "steelblue", "Traditional" = "firebrick")
plot(as.numeric(factor(df_plot$Group)), df_plot$Change,
     col  = group_cols[df_plot$Group],
     pch  = 19, cex = 0.9,
     xaxt = "n",
     xlab = "", ylab = expression("Quad CSA Change (cm"^2*")"),
     main = "Quadriceps Hypertrophy: Eccentric vs. Traditional Training",
     xlim = c(0.5, 2.5))
axis(1, at = 1:2, labels = c("Eccentric-Overload", "Traditional"))
for (i in 1:2) {
  gn   <- levels(factor(df_plot$Group))[i]
  m    <- sum_df$Mean[sum_df$Group == gn]
  s    <- sum_df$SD[sum_df$Group == gn]
  segments(i, m - s, i, m + s, lwd = 2, col = group_cols[gn])
  points(i, m, pch = 18, cex = 2, col = group_cols[gn])
}
Figure 4: Quadriceps CSA change (cm^2) for the eccentric-overload and traditional resistance training groups. Points = individual values; diamond = group mean; error bars = +/-1 SD.

Interpretation. The one-tailed test was pre-registered on the basis of a meta-analysis of 12 prior studies, satisfying the a priori justification and pre-registration criteria. Even with this more liberal test, the observed effect is small (\(d \approx 0.25\)) and not statistically significant (\(p \approx 0.22\)). Results should be reported with the effect size (\(d\)), the confidence interval, and the pre-specified power, as recommended by current reporting standards (Cumming, 2014; Lakens, 2013).
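In line with that reporting recommendation, a confidence interval for d can be sketched with a percentile bootstrap in base R. This regenerates the same simulated data as above; the 2,000-resample count and the bootstrap seed are arbitrary choices:

```r
set.seed(99)                                  # same data as the worked example
eccentric <- rnorm(20, mean = 3.8, sd = 2.0)
trad      <- rnorm(20, mean = 2.5, sd = 2.1)

cohens_d <- function(x, y) (mean(x) - mean(y)) / sqrt((var(x) + var(y)) / 2)

set.seed(1)
boot_d <- replicate(2000, cohens_d(sample(eccentric, replace = TRUE),
                                   sample(trad,      replace = TRUE)))

round(c(d = cohens_d(eccentric, trad),
        quantile(boot_d, c(0.025, 0.975))), 2)
# the interval spans zero, consistent with the non-significant t-tests above
```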

7 Summary and Decision Guide

Table 2: Comparison of two-tailed and one-tailed tests

| Feature | Two-Tailed Test | One-Tailed Test |
|---------|-----------------|-----------------|
| Alternative hypothesis | \(\mu_1 \neq \mu_2\) | \(\mu_1 > \mu_2\) or \(\mu_1 < \mu_2\) |
| α allocation | \(\alpha/2\) in each tail | \(\alpha\) in one tail |
| Critical value | Larger (more stringent) | Smaller |
| Power — predicted direction | Lower | Higher |
| Power — opposite direction | Yes, can detect | Near zero |
| Type I risk if switched post-hoc | — | Doubles effective α |
| Recommended default | ✓ In most studies | Only with strong a priori justification |
| Pre-registration | Recommended | Mandatory |

Four core principles:

  1. State your hypothesis—including directionality—before examining data (Bland & Altman, 1994; Rosner, 2015).
  2. Use a two-tailed test unless you can honestly argue the opposite direction is meaningless and strong prior evidence justifies the directional prediction (Weir & Vincent, 2021).
  3. Always report effect sizes and confidence intervals alongside the p-value (Lakens, 2013; Sullivan & Feinn, 2012).
  4. Consider pre-registering your analysis plan to prevent post-hoc hypothesis reformulation (Cumming, 2014; Thomas et al., 2015).

References

Bland, J. M., & Altman, D. G. (1994). Statistics notes: One and two sided tests of significance. BMJ, 309(6949), 248. https://doi.org/10.1136/bmj.309.6949.248
Cohen, J. (1988). Statistical power analysis for the behavioral sciences. L. Erlbaum Associates.
Cumming, G. (2014). The new statistics: Why and how. Psychological Science, 25(1), 7–29. https://doi.org/10.1177/0956797613504966
Gravetter, F. J., Wallnau, L. B., & Forzano, L.-A. B. (2021). Statistics for the behavioral sciences (10th ed.). Cengage Learning.
Lakens, D. (2013). Calculating and reporting effect sizes to facilitate cumulative science: A practical primer for t-tests and ANOVAs. Frontiers in Psychology, 4, 863. https://doi.org/10.3389/fpsyg.2013.00863
Rosner, B. (2015). Fundamentals of biostatistics (8th ed.). Cengage Learning.
Sullivan, G. M., & Feinn, R. (2012). Using effect size—or why the p value is not enough. Journal of Graduate Medical Education, 4(3), 279–282. https://doi.org/10.4300/JGME-D-12-00156.1
Thomas, J. R., Nelson, J. K., & Silverman, S. J. (2015). Research methods in physical activity (7th ed.). Human Kinetics.
Weir, J. P., & Vincent, W. J. (2021). Statistics in kinesiology (5th ed.). Human Kinetics.

Reuse

Citation

BibTeX citation:
@misc{furtado2026,
  author = {Furtado, Ovande},
  title = {Directional {Hypotheses:} {One-Tailed} Vs. {Two-Tailed}
    {Tests}},
  date = {2026-03-04},
  url = {https://drfurtado.github.io/randomstats/posts/03042026-directional-hypotheses/},
  langid = {en}
}
For attribution, please cite this work as:
Furtado, O. (2026, March 4). Directional Hypotheses: One-Tailed vs. Two-Tailed Tests. RandomStats. https://drfurtado.github.io/randomstats/posts/03042026-directional-hypotheses/