
Examples in this article were generated with R 4.0.5 by the package PowerTOST.1

More examples are given in the respective vignettes.2 3 See also the README on GitHub for an overview and the online manual4 for details and a collection of other articles.

  • The right-hand badges give the respective section’s ‘level’.
  1. Basics about sample size methodology – requiring no or only limited statistical expertise.
  2. These sections are the most important ones. They are – hopefully – easily comprehensible even for novices.
  3. A somewhat higher knowledge of statistics and/or R is required. May be skipped or reserved for a later reading.
Abbreviation  Meaning
ABE           Average Bioequivalence
ABEL          Average Bioequivalence with Expanding Limits
CVw           Within-subject Coefficient of Variation
CVwT, CVwR    Within-subject Coefficient of Variation of the Test and Reference treatment
HVD(P)        Highly Variable Drug (Product)
RSABE         Reference-Scaled Average Bioequivalence
SABE          Scaled Average Bioequivalence

Introduction

    

What are the differences between ABE, RSABE, and ABEL in terms of power and sample sizes?

For background about power and sample size estimations see the respective articles (ABE, RSABE, and ABEL). See also the article about power analysis.

    

Definitions:

  • A Highly Variable Drug (HVD) shows a within-subject Coefficient of Variation (CVw) of ≥30% if administered as a solution in a replicate design. The high variability is an intrinsic property of the drug (absorption/permeation, clearance).
Agencies are generally only interested in CVwR.
  • A Highly Variable Drug Product (HVDP) shows a CVw of ≥30% in a replicate design.5

The concept of Scaled Average Bioequivalence (SABE) for HVD(P)s is based on the following considerations:

  • HVD(P)s are safe and efficacious despite their high variability because:
    • They have a wide therapeutic index (i.e., a flat dose-response curve). Consequently, even substantial changes in concentrations have only a limited impact on the effect.
      If they had a narrow therapeutic index, adverse effects (due to high concentrations) and lacking effects (due to low concentrations) would have been observed in Phase III and, consequently, the originator’s product would not have been approved in the first place.
    • Once approved, the product has a documented safety / efficacy record in phase IV and in clinical practice – despite its high variability.
      If problems were evident, the product would have been taken off the market.
  • Given that, the conventional ‘clinically relevant difference’ Δ of 20% (leading to the limits of 80.00 – 125.00% in Average Bioequivalence) is overly conservative and thus requires large sample sizes.
  • Hence, a more relaxed Δ of > 20% was proposed. A natural approach is to scale (expand / widen) the limits based on the within-subject variability of the reference product σwR, as sketched below.
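
As a minimal sketch of the underlying arithmetic (ignoring all regulatory conditions discussed below): the limits are \(\small{\exp(\pm k\cdot s_\textrm{wR})}\), with \(\small{k=0.760}\) for ABEL and \(\small{k=\log(1.25)/0.25\approx0.8926}\) for the ‘implied’ limits of RSABE. The CVwR below is an arbitrary example.

library(PowerTOST) # CV2se() converts a CV to the standard deviation swR
CVwR <- 0.45       # arbitrary example
swR  <- CV2se(CVwR)
round(100 * exp(c(-1, +1) * 0.760 * swR), 2)            # ABEL (expanded limits)
round(100 * exp(c(-1, +1) * log(1.25) / 0.25 * swR), 2) # RSABE (‘implied’ limits)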

Reference-Scaled Average Bioequivalence (RSABE) is preferred by the FDA and by China’s CDE. Average Bioequivalence with Expanding Limits (ABEL) is another variant of SABE (preferred in all other jurisdictions).

In order to apply these methods, the following conditions have to be fulfilled:

  • The study has to be performed in a replicate design (i.e., at least the reference product has to be administered twice).
  • The observed within-subject variability of the reference has to be high (in RSABE swR ≥0.294 and in ABEL CVwR >30%).
  • ABEL only:
    • A clinical justification has to be provided that the expanded limits will not impact safety / efficacy.
    • Except in Chile and Brazil, it has to be demonstrated that the high variability of the reference is not caused by ‘outliers’.
    • There is an ‘upper cap’ of scaling (uc = 50%, except for Health Canada, where uc ≈ 57.382%).

In all methods a point estimate constraint is employed: even if the confidence interval lies within the expanded limits, the PE has to lie within 80.00 – 125.00% for the study to pass.
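
The switching conditions and the upper caps are easy to inspect with scABEL(), which returns the (expanded) limits for a given CVwR; the CV values below are arbitrary examples, and se2CV() translates the switching condition swR 0.294 into a CV.

library(PowerTOST)
CVwR <- c(0.25, 0.30, 0.45, 0.50, 0.60) # arbitrary examples
data.frame(CVwR = CVwR,
           EMA  = round(100 * t(sapply(CVwR, scABEL, regulator = "EMA")), 2),
           HC   = round(100 * t(sapply(CVwR, scABEL, regulator = "HC")), 2))
se2CV(0.294) # CVwR corresponding to swR = 0.294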


Sample size

    

The idea behind reference-scaling is to avoid the extreme sample sizes required for ABE and to preserve power independent of the CV.

Let’s explore some examples. I assumed a CV of 0.45, a T/R-ratio of 0.90, and targeted ≥80% power for this combination in some common replicate designs.

library(PowerTOST) # attach it to run the examples

Note that sample sizes are integers and follow a step function because the functions return balanced sequences.

# sample sizes for ABE, ABEL, and RSABE in three common replicate designs
CV      <- 0.45
theta0  <- 0.90
target  <- 0.80
designs <- c("2x2x4", "2x2x3", "2x3x3")
method  <- c("ABE", "ABEL", "RSABE")
res     <- data.frame(design = rep(designs, each = length(method)),
                      method = method, n = NA)
for (i in 1:nrow(res)) {
  if (res$method[i] == "ABE") {
    res[i, 3] <- sampleN.TOST(CV = CV, theta0 = theta0,
                              design = res$design[i],
                              targetpower = target,
                              print = FALSE)[7]
  }
  if (res$method[i] == "ABEL") {
    res[i, 3] <- sampleN.scABEL(CV = CV, theta0 = theta0,
                                design = res$design[i],
                                targetpower = target,
                                print = FALSE,
                                details = FALSE)[8]
  }
  if (res$method[i] == "RSABE") {
    res[i, 3] <- sampleN.RSABE(CV = CV, theta0 = theta0,
                               design = res$design[i],
                               targetpower = target,
                               print = FALSE,
                               details = FALSE)[8]
  }
}
print(res, row.names = FALSE)
R>  design method   n
R>   2x2x4    ABE  84
R>   2x2x4   ABEL  28
R>   2x2x4  RSABE  24
R>   2x2x3    ABE 124
R>   2x2x3   ABEL  42
R>   2x2x3  RSABE  36
R>   2x3x3    ABE 126
R>   2x3x3   ABEL  39
R>   2x3x3  RSABE  33
    
CV      <- 0.45
theta0  <- seq(0.95, 0.85, -0.001)
methods <- c("ABE", "ABEL", "RSABE")
clr     <- c("red", "magenta", "blue")
ylab    <- paste0("sample size (CV = ", 100*CV, "%)")
#################
design <- "2x2x4"
res1   <- data.frame(theta0 = theta0,
                     method = rep(methods, each =length(theta0)),
                     n = NA)
for (i in 1:nrow(res1)) {
  if (res1$method[i] == "ABE") {
    res1$n[i] <- sampleN.TOST(CV = CV, theta0 = res1$theta0[i],
                              design = design,
                              print = FALSE)[["Sample size"]]
  }
  if (res1$method[i] == "ABEL") {
    res1$n[i] <- sampleN.scABEL(CV = CV, theta0 = res1$theta0[i],
                                design = design, print = FALSE,
                                details = FALSE)[["Sample size"]]
  }
  if (res1$method[i] == "RSABE") {
    res1$n[i] <- sampleN.RSABE(CV = CV, theta0 = res1$theta0[i],
                               design = design, print = FALSE,
                               details = FALSE)[["Sample size"]]
  }
}
dev.new(width = 4.5, height = 4.5, record = TRUE)
op <- par(no.readonly = TRUE)
par(lend = 2, ljoin = 1, mar = c(4, 3.3, 0.1, 0.2), cex.axis = 0.9)
plot(theta0, res1$n[res1$method == "ABE"], type = "n", axes = FALSE,
     ylim = c(12, max(res1$n)), xlab = expression(theta[0]),
     log = "xy", ylab = "")
abline(v = seq(0.85, 0.95, 0.025), lty = 3, col = "lightgrey")
abline(v = 0.90, lty = 2)
abline(h = axTicks(2, log = TRUE), lty = 3, col = "lightgrey")
axis(1, at = seq(0.85, 0.95, 0.025))
axis(2, las = 1)
mtext(ylab, 2, line = 2.4)
legend("bottomleft", legend = methods, inset = 0.02, lwd = 2, cex = 0.9,
       col = clr, box.lty = 0, bg = "white", title = "\u226580% power")
lines(theta0, res1$n[res1$method == "ABE"],
      type = "S", lwd = 2, col = clr[1])
lines(theta0, res1$n[res1$method == "ABEL"],
      type = "S", lwd = 2, col = clr[2])
lines(theta0, res1$n[res1$method == "RSABE"],
      type = "S", lwd = 2, col = clr[3])
box()
#################
design <- "2x2x3"
res2   <- data.frame(theta0 = theta0,
                     method = rep(methods, each =length(theta0)),
                     n = NA)
for (i in 1:nrow(res2)) {
  if (res2$method[i] == "ABE") {
    res2$n[i] <- sampleN.TOST(CV = CV, theta0 = res2$theta0[i],
                              design = design,
                              print = FALSE)[["Sample size"]]
  }
  if (res2$method[i] == "ABEL") {
    res2$n[i] <- sampleN.scABEL(CV = CV, theta0 = res2$theta0[i],
                                design = design, print = FALSE,
                                details = FALSE)[["Sample size"]]
  }
  if (res2$method[i] == "RSABE") {
    res2$n[i] <- sampleN.RSABE(CV = CV, theta0 = res2$theta0[i],
                               design = design, print = FALSE,
                               details = FALSE)[["Sample size"]]
  }
}
plot(theta0, res2$n[res2$method == "ABE"], type = "n", axes = FALSE,
     ylim = c(12, max(res2$n)), xlab = expression(theta[0]),
     log = "xy", ylab = "")
abline(v = seq(0.85, 0.95, 0.025), lty = 3, col = "lightgrey")
abline(v = 0.90, lty = 2)
abline(h = axTicks(2, log = TRUE), lty = 3, col = "lightgrey")
axis(1, at = seq(0.85, 0.95, 0.025))
axis(2, las = 1)
mtext(ylab, 2, line = 2.4)
legend("bottomleft", legend = methods, inset = 0.02, lwd = 2, cex = 0.9,
       col = clr, box.lty = 0, bg = "white", title = "\u226580% power")
lines(theta0, res2$n[res2$method == "ABE"],
      type = "S", lwd = 2, col = clr[1])
lines(theta0, res2$n[res2$method == "ABEL"],
      type = "S", lwd = 2, col = clr[2])
lines(theta0, res2$n[res2$method == "RSABE"],
      type = "S", lwd = 2, col = clr[3])
box()
#################
design <- "2x3x3"
res3   <- data.frame(theta0 = theta0,
                     method = rep(methods, each =length(theta0)),
                     n = NA)
for (i in 1:nrow(res3)) {
  if (res3$method[i] == "ABE") {
    res3$n[i] <- sampleN.TOST(CV = CV, theta0 = res3$theta0[i],
                              design = design,
                              print = FALSE)[["Sample size"]]
  }
  if (res3$method[i] == "ABEL") {
    res3$n[i] <- sampleN.scABEL(CV = CV, theta0 = res3$theta0[i],
                                design = design, print = FALSE,
                                details = FALSE)[["Sample size"]]
  }
  if (res3$method[i] == "RSABE") {
    res3$n[i] <- sampleN.RSABE(CV = CV, theta0 = res3$theta0[i],
                               design = design, print = FALSE,
                               details = FALSE)[["Sample size"]]
  }
}
plot(theta0, res3$n[res3$method == "ABE"], type = "n", axes = FALSE,
     ylim = c(12, max(res3$n)), xlab = expression(theta[0]),
     log = "xy", ylab = "")
abline(v = seq(0.85, 0.95, 0.025), lty = 3, col = "lightgrey")
abline(v = 0.90, lty = 2)
abline(h = axTicks(2, log = TRUE), lty = 3, col = "lightgrey")
axis(1, at = seq(0.85, 0.95, 0.025))
axis(2, las = 1)
mtext(ylab, 2, line = 2.4)
legend("bottomleft", legend = methods, inset = 0.02, lwd = 2, cex = 0.9,
       col = clr, box.lty = 0, bg = "white", title = "\u226580% power")
lines(theta0, res3$n[res3$method == "ABE"],
      type = "S", lwd = 2, col = clr[1])
lines(theta0, res3$n[res3$method == "ABEL"],
      type = "S", lwd = 2, col = clr[2])
lines(theta0, res3$n[res3$method == "RSABE"],
      type = "S", lwd = 2, col = clr[3])
box()
par(op)

Fig. 1 4-period full replicate design.

    

It’s obvious that we need substantially smaller sample sizes for the reference-scaling methods than for ABE. The sample-size curves of the scaling methods are also not as steep, which means that even if our assumption about the T/R-ratio were wrong, power (and hence, the sample size) would be affected to a lesser degree.

Nevertheless, one should not be overly optimistic about the T/R-ratio. For HVD(P)s a T/R-ratio ‘better’ than 0.90 should be avoided.6
NB, that’s the reason why in sampleN.scABEL() and sampleN.RSABE() the default is theta0 = 0.90. If scaling is not acceptable (e.g., AUC for the EMA), I strongly recommend specifying theta0 = 0.90 in sampleN.TOST() because its default is 0.95.
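
A small demonstration with the CV used above (both calls print their results):

sampleN.scABEL(CV = 0.45, design = "2x2x4", details = FALSE) # theta0 defaults to 0.90
sampleN.TOST(CV = 0.45, theta0 = 0.90, design = "2x2x4")     # default would be 0.95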

Fig. 2 3-period full replicate design.

    

Since power depends on the number of treatments, roughly 50% more subjects are required than in the 4-period full replicate design.

Fig. 3 Partial replicate design.

    

Sample sizes are similar to those in the 3-period full replicate design because both have the same degrees of freedom. However, the step size is wider (3 sequences instead of 2).
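
A quick look at the step pattern over an arbitrary grid of CVs (note that the returned sample sizes are multiples of the number of sequences):

CVs <- seq(0.40, 0.50, 0.02)   # arbitrary grid
n2  <- sapply(CVs, function(x) # 3-period full replicate: 2 sequences
         sampleN.scABEL(CV = x, theta0 = 0.90, design = "2x2x3",
                        print = FALSE, details = FALSE)[["Sample size"]])
n3  <- sapply(CVs, function(x) # partial replicate: 3 sequences
         sampleN.scABEL(CV = x, theta0 = 0.90, design = "2x3x3",
                        print = FALSE, details = FALSE)[["Sample size"]])
data.frame(CV = CVs, n.2x2x3 = n2, n.2x3x3 = n3)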


Power

    

Let’s change the point of view. As above, I assumed a CV of 0.45, a T/R-ratio of 0.90, and targeted ≥80% power for this combination. This time I explored how a CV different from my assumption affects power with the estimated sample size.

Additionally I assessed ‘pure’ SABE, i.e., the EMA’s conditions (switching at CVwR 30%, regulatory constant \(\small{k=0.760}\)) but without an upper cap of scaling and without the PE-constraint.
    
CV      <- 0.45
theta0  <- 0.90
target  <- 0.80
designs <- c("2x2x4", "2x2x3", "2x3x3")
method  <- c("ABE", "ABEL", "RSABE", "SABE")
# Pure SABE (only for comparison)
# No upper cap of scaling, no PE constraint
pure    <- reg_const("USER",
                     r_const  = 0.760,
                     CVswitch = 0.30,
                     CVcap    = Inf)
pure$pe_constr <- FALSE
res     <- data.frame(design = rep(designs, each = length(method)),
                      method = method, n = NA, power = NA)
for (i in 1:nrow(res)) {
  if (res$method[i] == "ABE") {
    res[i, 3:4] <- sampleN.TOST(CV = CV, theta0 = theta0,
                              design = res$design[i],
                              targetpower = target,
                              print = FALSE)[7:8]
  }
  if (res$method[i] == "ABEL") {
    res[i, 3:4] <- sampleN.scABEL(CV = CV, theta0 = theta0,
                                design = res$design[i],
                                targetpower = target,
                                print = FALSE,
                                details = FALSE)[8:9]
  }
  if (res$method[i] == "RSABE") {
    res[i, 3:4] <- sampleN.RSABE(CV = CV, theta0 = theta0,
                               design = res$design[i],
                               targetpower = target,
                               print = FALSE,
                               details = FALSE)[8:9]
  }
  if (res$method[i] == "SABE") {
    res[i, 3:4] <- sampleN.scABEL(CV = CV, theta0 = theta0,
                                design = res$design[i],
                                targetpower = target,
                                regulator = pure,
                                print = FALSE,
                                details = FALSE)[8:9]
  }
}
res[, 4] <- signif(res[, 4], 5)
print(res, row.names = FALSE)
R>  design method   n   power
R>   2x2x4    ABE  84 0.80569
R>   2x2x4   ABEL  28 0.81116
R>   2x2x4  RSABE  24 0.82450
R>   2x2x4   SABE  28 0.81884
R>   2x2x3    ABE 124 0.80012
R>   2x2x3   ABEL  42 0.80017
R>   2x2x3  RSABE  36 0.81147
R>   2x2x3   SABE  42 0.80961
R>   2x3x3    ABE 126 0.80570
R>   2x3x3   ABEL  39 0.80588
R>   2x3x3  RSABE  33 0.82802
R>   2x3x3   SABE  39 0.81386
# Cave: very long runtime
CV.fix  <- 0.45
CV      <- seq(0.35, 0.55, length.out = 201)
theta0  <- 0.90
methods <- c("ABE", "ABEL", "RSABE", "SABE")
clr     <- c("red", "magenta", "blue", "#00800080")
# Pure SABE (only for comparison)
# No upper cap of scaling, no PE constraint
pure    <- reg_const("USER",
                     r_const  = 0.760,
                     CVswitch = 0.30,
                     CVcap    = Inf)
pure$pe_constr <- FALSE
#################
design  <- "2x2x4"
res1    <- data.frame(CV = CV,
                      method = rep(methods, each =length(CV)),
                      power = NA)
n.ABE   <- sampleN.TOST(CV = CV.fix, theta0 = theta0,
                       design = design,
                       print = FALSE)[["Sample size"]]
n.RSABE <- sampleN.RSABE(CV = CV.fix, theta0 = theta0,
                        design = design, print = FALSE,
                        details = FALSE)[["Sample size"]]
n.ABEL  <- sampleN.scABEL(CV = CV.fix, theta0 = theta0,
                         design = design, print = FALSE,
                         details = FALSE)[["Sample size"]]
n.SABE  <- sampleN.scABEL(CV = CV.fix, theta0 = theta0,
                         design = design, print = FALSE,
                         regulator = pure,
                         details = FALSE)[["Sample size"]]
for (i in 1:nrow(res1)) {
  if (res1$method[i] == "ABE") {
    res1$power[i] <- power.TOST(CV = res1$CV[i], theta0 = theta0,
                                n = n.ABE, design = design)
  }
  if (res1$method[i] == "ABEL") {
    res1$power[i] <- power.scABEL(CV = res1$CV[i], theta0 = theta0,
                                  n = n.ABEL, design = design, nsims = 1e6)
  }
  if (res1$method[i] == "RSABE") {
    res1$power[i] <- power.RSABE(CV = res1$CV[i], theta0 = theta0,
                                 n = n.RSABE, design = design, nsims = 1e6)
  }
  if (res1$method[i] == "SABE") { # use the sample size estimated for 'pure' SABE
    res1$power[i] <- power.scABEL(CV = res1$CV[i], theta0 = theta0,
                                  n = n.SABE, design = design,
                                  regulator = pure, nsims = 1e6)
  }
}
dev.new(width = 4.5, height = 4.5, record = TRUE)
op <- par(no.readonly = TRUE)
par(mar = c(4, 3.3, 0.1, 0.1), cex.axis = 0.9)
plot(CV, res1$power[res1$method == "ABE"], type = "n", axes = FALSE,
     ylim = c(0.65, 1), xlab = "CV", ylab = "")
abline(v = seq(0.35, 0.55, 0.05), lty = 3, col = "lightgrey")
abline(v = 0.45, lty = 2)
abline(h = axTicks(2, log = FALSE), lty = 3, col = "lightgrey")
axis(1, at = seq(0.35, 0.55, 0.05))
axis(2, las = 1)
mtext("power", 2, line = 2.6)
legend("topright", legend = methods, inset = 0.02, lwd = 2, cex = 0.9,
       col = clr, box.lty = 0, bg = "white", title = "n for CV = 45%")
lines(CV, res1$power[res1$method == "ABE"], lwd = 2, col = clr[1])
lines(CV, res1$power[res1$method == "ABEL"], lwd = 2, col = clr[2])
lines(CV, res1$power[res1$method == "RSABE"], lwd = 2, col = clr[3])
lines(CV, res1$power[res1$method == "SABE"], lwd = 2, col = clr[4])
box()
#################
design  <- "2x2x3"
res2    <- data.frame(CV = CV,
                      method = rep(methods, each =length(CV)),
                      power = NA)
n.ABE   <- sampleN.TOST(CV = CV.fix, theta0 = theta0,
                       design = design,
                       print = FALSE)[["Sample size"]]
n.RSABE <- sampleN.RSABE(CV = CV.fix, theta0 = theta0,
                        design = design, print = FALSE,
                        details = FALSE)[["Sample size"]]
n.ABEL  <- sampleN.scABEL(CV = CV.fix, theta0 = theta0,
                         design = design, print = FALSE,
                         details = FALSE)[["Sample size"]]
n.SABE  <- sampleN.scABEL(CV = CV.fix, theta0 = theta0,
                         design = design, print = FALSE,
                         regulator = pure,
                         details = FALSE)[["Sample size"]]
for (i in 1:nrow(res2)) {
  if (res2$method[i] == "ABE") {
    res2$power[i] <- power.TOST(CV = res2$CV[i], theta0 = theta0,
                                n = n.ABE, design = design)
  }
  if (res2$method[i] == "ABEL") {
    res2$power[i] <- power.scABEL(CV = res2$CV[i], theta0 = theta0,
                                  n = n.ABEL, design = design, nsims = 1e6)
  }
  if (res2$method[i] == "RSABE") {
    res2$power[i] <- power.RSABE(CV = res2$CV[i], theta0 = theta0,
                                 n = n.RSABE, design = design, nsims = 1e6)
  }
  if (res2$method[i] == "SABE") { # use the sample size estimated for 'pure' SABE
    res2$power[i] <- power.scABEL(CV = res2$CV[i], theta0 = theta0,
                                  n = n.SABE, design = design,
                                  regulator = pure, nsims = 1e6)
  }
}
plot(CV, res2$power[res2$method == "ABE"], type = "n", axes = FALSE,
     ylim = c(0.65, 1), xlab = "CV", ylab = "")
abline(v = seq(0.35, 0.55, 0.05), lty = 3, col = "lightgrey")
abline(v = 0.45, lty = 2)
abline(h = axTicks(2, log = FALSE), lty = 3, col = "lightgrey")
axis(1, at = seq(0.35, 0.55, 0.05))
axis(2, las = 1)
mtext("power", 2, line = 2.6)
legend("topright", legend = methods, inset = 0.02, lwd = 2, cex = 0.9,
       col = clr, box.lty = 0, bg = "white", title = "n for CV = 45%")
lines(CV, res2$power[res2$method == "ABE"], lwd = 2, col = clr[1])
lines(CV, res2$power[res2$method == "ABEL"], lwd = 2, col = clr[2])
lines(CV, res2$power[res2$method == "RSABE"], lwd = 2, col = clr[3])
lines(CV, res2$power[res2$method == "SABE"], lwd = 2, col = clr[4])
box()
#################
design  <- "2x3x3"
res3    <- data.frame(CV = CV,
                      method = rep(methods, each =length(CV)),
                      power = NA)
n.ABE   <- sampleN.TOST(CV = CV.fix, theta0 = theta0,
                       design = design,
                       print = FALSE)[["Sample size"]]
n.RSABE <- sampleN.RSABE(CV = CV.fix, theta0 = theta0,
                        design = design, print = FALSE,
                        details = FALSE)[["Sample size"]]
n.ABEL  <- sampleN.scABEL(CV = CV.fix, theta0 = theta0,
                         design = design, print = FALSE,
                         details = FALSE)[["Sample size"]]
n.SABE  <- sampleN.scABEL(CV = CV.fix, theta0 = theta0,
                         design = design, print = FALSE,
                         regulator = pure,
                         details = FALSE)[["Sample size"]]
for (i in 1:nrow(res3)) {
  if (res3$method[i] == "ABE") {
    res3$power[i] <- power.TOST(CV = res3$CV[i], theta0 = theta0,
                                n = n.ABE, design = design)
  }
  if (res3$method[i] == "ABEL") {
    res3$power[i] <- power.scABEL(CV = res3$CV[i], theta0 = theta0,
                                  n = n.ABEL, design = design, nsims = 1e6)
  }
  if (res3$method[i] == "RSABE") {
    res3$power[i] <- power.RSABE(CV = res3$CV[i], theta0 = theta0,
                                 n = n.RSABE, design = design, nsims = 1e6)
  }
  if (res3$method[i] == "SABE") { # use the sample size estimated for 'pure' SABE
    res3$power[i] <- power.scABEL(CV = res3$CV[i], theta0 = theta0,
                                  n = n.SABE, design = design,
                                  regulator = pure, nsims = 1e6)
  }
}
plot(CV, res3$power[res3$method == "ABE"], type = "n", axes = FALSE,
     ylim = c(0.65, 1), xlab = "CV", ylab = "")
abline(v = seq(0.35, 0.55, 0.05), lty = 3, col = "lightgrey")
abline(v = 0.45, lty = 2)
abline(h = axTicks(2, log = FALSE), lty = 3, col = "lightgrey")
axis(1, at = seq(0.35, 0.55, 0.05))
axis(2, las = 1)
mtext("power", 2, line = 2.6)
legend("topright", legend = methods, inset = 0.02, lwd = 2, cex = 0.9,
       col = clr, box.lty = 0, bg = "white", title = "n for CV = 45%")
lines(CV, res3$power[res3$method == "ABE"], lwd = 2, col = clr[1])
lines(CV, res3$power[res3$method == "ABEL"], lwd = 2, col = clr[2])
lines(CV, res3$power[res3$method == "RSABE"], lwd = 2, col = clr[3])
lines(CV, res3$power[res3$method == "SABE"], lwd = 2, col = clr[4])
box()
par(op)

Fig. 4 4-period full replicate design.

Fig. 5 3-period full replicate design.

Fig. 6 Partial replicate design.

    

As expected, power of ABE is extremely dependent on the CV. Not surprising, because the acceptance limits are fixed at 80.00 – 125.00%.

As stated above, ideally reference-scaling should preserve power independent of the CV. If that were the case, power would be a line parallel to the x-axis. However, the methods implemented by the authorities are frameworks in which certain conditions have to be observed. Therefore, beyond a maximum at a CV of around 50%, power starts to decrease because the PE-constraint becomes increasingly important and – for ABEL – the upper cap of scaling sets in.

On the other hand, ‘pure’ SABE shows how ABEL would behave without these constraints.

    

Let’s dive deeper into the matter. As above but a wider range of CV values (0.3 – 1).

Fig. 7 4-period full replicate design.

Here we see a clear difference between RSABE and ABEL. Although the PE-constraint has to be observed in both, no upper cap of scaling is imposed in the former and hence, power is affected to a minor degree only.
On the contrary, due to the upper cap of scaling in the latter, it behaves similarly to ABE with fixed limits of 69.84 – 143.19%.

Consequently, if the CV turns out to be substantially larger than assumed, power in ABEL may be compromised.
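
Those fixed limits are simply the expansion at the upper cap (CVwR 50%, regulatory constant 0.760):

round(100 * exp(c(-1, +1) * 0.760 * CV2se(0.50)), 2) # expansion at the EMA’s cap
round(100 * scABEL(CV = 0.55), 2) # any CVwR above the cap gives the same limits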

Note also the huge gap between ABEL and ‘pure’ SABE. Whilst the PE-constraint is statistically not justified, it was introduced in all jurisdictions ‘for political reasons’.

  1. There is no scientific basis or rationale for the point estimate recommendations
  2. There is no belief that addition of the point estimate criteria will improve the safety of approved generic drugs
  3. The point estimate recommendations are only “political” to give greater assurance to clinicians and patients who are not familiar (don’t understand) the statistics of highly variable drugs
Leslie Benet. 2006.7


Pros and Cons

From a statistical perspective, replicate designs are preferable to the 2×2×2 crossover design. If we observe discordant8 outliers in the latter, we cannot distinguish between lack of compliance (the subject didn’t take the drug), a product failure, and a subject-by-formulation interaction (the subject belongs to a subpopulation).9

A member of the EMA’s PKWP once told me that he would like to see all studies performed in a replicate design – regardless of whether the drug / drug product is highly variable or not. One of the rare cases where we were of the same opinion…10

We always design studies for the worst-case combination, i.e., based on the PK metric requiring the largest sample size. In jurisdictions accepting reference-scaling only for Cmax (e.g., by ABEL) the sample size is driven by AUC.

metrics <- c("Cmax", "AUCt", "AUCinf")
alpha   <- 0.05
CV      <- c(0.45, 0.34, 0.36)
theta0  <- rep(0.90, 3)
theta1  <- 0.80
theta2  <- 1 / theta1
target  <- 0.80
design  <- "2x2x4"
plan    <- data.frame(metric = metrics,
                      method = c("ABEL", "ABE", "ABE"),
                      CV = CV, theta0 = theta0,
                      L = 100*theta1, U = 100*theta2,
                      n = NA, power = NA)
for (i in 1:nrow(plan)) {
  if (plan$method[i] == "ABEL") {
    plan[i, 5:6] <- round(100*scABEL(CV = CV[i]), 2)
    plan[i, 7:8] <- signif(
                      sampleN.scABEL(alpha = alpha,
                                     CV = CV[i],
                                     theta0 = theta0[i],
                                     theta1 = theta1,
                                     theta2 = theta2,
                                     targetpower = target,
                                     design = design,
                                     details = FALSE,
                                     print = FALSE)[8:9], 4)
  } else {
    plan[i, 7:8] <- signif(
                      sampleN.TOST(alpha = alpha,
                                   CV = CV[i],
                                   theta0 = theta0[i],
                                   theta1 = theta1,
                                   theta2 = theta2,
                                   targetpower = target,
                                   design = design,
                                   print = FALSE)[7:8], 4)
  }
}
txt <- paste0("Sample size based on ",
              plan$metric[plan$n == max(plan$n)], ".\n")
print(plan, row.names = FALSE); cat(txt)
R>  metric method   CV theta0     L      U  n  power
R>    Cmax   ABEL 0.45    0.9 72.15 138.59 28 0.8112
R>    AUCt    ABE 0.34    0.9 80.00 125.00 50 0.8055
R>  AUCinf    ABE 0.36    0.9 80.00 125.00 56 0.8077
R> Sample size based on AUCinf.
If the study is performed with 56 subjects and all assumed values are realized, post hoc power will be 0.9666 for Cmax. I have seen deficiency letters by regulatory assessors asking for a
»justification of too high power for Cmax«.
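
This can be reproduced by plugging the final sample size into the power function (simulation-based, so the last digits may wobble):

# post hoc power of Cmax (ABEL) if the study is performed with 56 subjects
power.scABEL(CV = 0.45, theta0 = 0.90, n = 56, design = "2x2x4", nsims = 1e6)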

See also the article about post hoc power.

As shown in the article about Average Bioequivalence with Expanding Limits, we get an incentive (a smaller sample size) if \(\small{CV_\textrm{wT}<CV_\textrm{wR}}\). However, this does not help if reference-scaling is not acceptable (say, for AUC) because the conventional model for ABE assumes equal variances.

theta0        <- 0.90
design        <- "2x2x4"
CVw           <- 0.36 # AUC - no reference-scaling
# variance-ratio 0.80: T lower than R
CV            <- signif(CVp2CV(CV = CVw, ratio = 0.80), 5)
# 'switch off' all scaling conditions of ABEL
reg           <- reg_const("USER", r_const = 0.76,
                           CVswitch = Inf, CVcap = Inf)
reg$pe_constr <- FALSE
res           <- data.frame(variance = c("homoscedastic", "heteroscedastic"),
                            CVwT = c(CVw, CV[1]), CVwR = c(CVw, CV[2]),
                            CVw = rep(CVw, 2), n = NA)
res$n[1]      <- sampleN.TOST(CV = CVw, theta0 = theta0,
                              design = design,
                              print = FALSE)[["Sample size"]]
res$n[2]      <- sampleN.scABEL(CV = CV, theta0 = theta0,
                                design = design, regulator = reg,
                                details = FALSE,
                                print = FALSE)[["Sample size"]]
print(res, row.names = FALSE)
R>         variance    CVwT    CVwR  CVw  n
R>    homoscedastic 0.36000 0.36000 0.36 56
R>  heteroscedastic 0.33824 0.38079 0.36 56

Although we know that the test has a lower CV than the reference, this information is ignored and the (pooled) within-subject CV is used.

For ABE the costs of a replicate design are similar to those of the 2×2×2 crossover design. Power depends on the number of treatments – more administrations are compensated for by the lower sample size. If the sample size of a 2×2×2 crossover design is \(\small{n}\), then the sample size of a 4-period replicate design is \(\small{^1/_2\,n}\) and that of a 3-period replicate design \(\small{^3/_4\,n}\).
We have the same number of samples to analyze, and study costs are driven to a good part by bioanalytics.11 We save costs due to fewer pre-/post-study exams but have to pay a higher subject remuneration (more hospitalizations). If applicable (depending on the drug): increased costs for in-study safety and/or PD measurements.
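
A rough check of these ratios with the assumptions used throughout (the exact ratios depend on rounding up to balanced sequences):

designs <- c("2x2x2", "2x2x3", "2x2x4")
n       <- sapply(designs, function(x)
             sampleN.TOST(CV = 0.45, theta0 = 0.90, targetpower = 0.80,
                          design = x, print = FALSE)[["Sample size"]])
data.frame(design = designs, n = n, ratio = signif(n / n["2x2x2"], 3),
           row.names = NULL)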

Pros

  • Statistically sound. Estimation of CVwR (and, in full replicate studies, of CVwT) is possible. More information never hurts.
  • Mandatory for RSABE and ABEL. Smaller sample sizes for RSABE and ABEL than for ABE.
  • ‘Outliers’ can be better assessed than in the 2×2×2 crossover design.
    • In ABE for the EMA this will be rather difficult (exclusion of subjects based on statistics and/or PK grounds alone is not acceptable).
    • For ABEL assessment of outliers (of the reference treatment only) is part of the recommended procedure.12 13
    • Since extreme values are a natural property of HVD(P)s, assessment of outliers is not recommended by the FDA and China’s CDE for RSABE.

Cons

  • For ABE a larger adjustment of the sample size for the anticipated dropout-rate is required than in a 2×2×2 crossover design due to three or four periods instead of two.14
  • The elephant in the room: potential inflation of the Type I Error (patient’s risk) in RSABE (if CVwR < 30%) and in ABEL (if ~25% < CVwR < ~42%). This issue will be covered in another article; a quick check is sketched below.
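
A quick, simulation-based way to inspect the empiric Type I Error of ABEL is to simulate at the border of the null hypothesis, i.e., with theta0 set to the expanded upper limit; CVwR = 0.35 and n = 34 are arbitrary examples.

CVwR <- 0.35                 # within the critical region
U    <- scABEL(CV = CVwR)[2] # expanded upper limit (EMA)
# empiric Type I Error; values > 0.05 indicate an inflated patient’s risk
power.scABEL(CV = CVwR, theta0 = U, n = 34, design = "2x2x4", nsims = 1e6)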


Post Scriptum

Regularly I’m asked whether it is possible to use an adaptive Two-Stage Design for RSABE or ABEL.

Whereas for ABE it is possible in principle (though nothing has been published so far), for SABE the answer is no. Contrary to ABE (where power and the Type I Error can be calculated by analytical methods), in SABE we have to rely on simulations. We would have to find a suitable adjusted \(\small{\alpha}\) and demonstrate beforehand that the patient’s risk is controlled.
With the implemented regulatory frameworks the power / sample-size estimation requires \(\small{10^{5}}\) simulations to obtain a stable result (see here and there). Since the convergence of the empiric Type I Error is poor, we need \(\small{10^{6}}\) simulations. Combining that with a reasonably narrow grid of possible stage 1 sample sizes / CVwR-combinations, we end up with \(\small{10^{13}-10^{14}}\) simulations. I don’t see how that can be done in the near future, unless one has access to a massively parallel supercomputer. I made a quick estimation for my fast workstation: ~60 years running 24/7…

As we have seen, SABE is not sensitive to the CV. Hence, the main advantage of TSDs over fixed-sample designs in ABE is simply not relevant. Fully adaptive methods for the 2×2×2 crossover also allow adjusting for the point estimate observed in the first stage.15 Here that is not possible. If you are concerned about the T/R-ratio, perform a (reasonably large!)16 pilot study and – even if the T/R-ratio looks promising – plan for a ‘worse’ one, since it is not stable between studies.


License

CC BY 4.0 Helmut Schütz 2021
1st version April 22, 2021.
Rendered 2021-05-04 12:29:22 CEST by rmarkdown in 0.39 seconds.

Footnotes and References


  1. Labes D, Schütz H, Lang B. PowerTOST: Power and Sample Size for (Bio)Equivalence Studies. 2021-01-18. CRAN.↩︎

  2. Schütz H. Average Bioequivalence. 2021-01-18. CRAN.↩︎

  3. Schütz H. Reference-Scaled Average Bioequivalence. 2020-12-23. CRAN.↩︎

  4. Labes D, Schütz H, Lang B. Package ‘PowerTOST’. January 18, 2021. CRAN.↩︎

  5. Some gastro-resistant formulations of diclofenac are HVDPs, practically all topical formulations are HVDPs, whereas diclofenac itself is not an HVD (CVw of a solution ~8%).↩︎

  6. Tóthfalusi L, Endrényi L. Sample Sizes for Designing Bioequivalence Studies for Highly Variable Drugs. J Pharm Pharm Sci. 2012; 15(1): 73–84. doi:10.18433/j3z88f. Open Access.↩︎

  7. Benet L. Why Highly Variable Drugs are Safer. Presentation at the FDA Advisory Committee for Pharmaceutical Science. Rockville. 06 October, 2006. Webarchive.↩︎

  8. The T/R-ratio in a particular subject differs from other subjects showing a ‘normal’ response. A concordant outlier will show deviant responses for both T and R. That’s not relevant in crossover designs.↩︎

  9. FDA. Center for Drug Evaluation and Research. Rockville. January 2001. Guidance for Industry. Statistical Approaches to Establishing Bioequivalence. download.↩︎

  10. If the study was planned for ABE, fails due to lacking power (CV higher than assumed and CVwR >30%), and reference-scaling would be acceptable (no safety/efficacy issues with the expanded limits), one has already estimates of CVwT and CVwR and is able to design the next study properly.↩︎

  11. In case of a “poor” bioanalytical method requiring a large sample volume: since the total blood sampling volume is generally limited to that of a blood donation, one may opt for a 3-period full replicate design or has to measure the HCT prior to administration in later periods and – for safety reasons – exclude subjects if the HCT is too high.↩︎

  12. The CVwR has to be recalculated after exclusion of the outlier(s), leading to less expansion of the limits. However, the outlying subject(s) have to be kept in the data set for calculating the 90% confidence interval.
    Note that this contradicts the principle »The data from all treated subjects should be treated equally« stated in the guideline.↩︎

  13. Although Brazil’s ANVISA and Chile’s ANAMED apply ABEL, assessment of outliers is not recommended by either agency.↩︎

  14. Also two or three washouts instead of one. Once I faced a case where during a washout a volunteer was bitten by a dog. Since he had to visit a hospital to get his wound stitched, it was rated according to the protocol as a – not drug-related – SAE and we had to exclude him from the study. Shit happens.↩︎

  15. Maurer W, Jones B, Chen Y. Controlling the type 1 error rate in two-stage sequential designs when testing for average bioequivalence. Stat Med. 2018; 37(10): 1587–1607. doi:10.1002/sim.7614.↩︎

  16. I know one big generic player’s rule for pilot studies of HVD(P)s: The minimum sample size is 24 in a 4-period fully replicate design. I have seen pilot studies with 80 subjects.↩︎