
The script was run on a Xeon E3-1245v3 @ 3.40GHz (1 of 4 cores) 16GB RAM with R 4.2.0 on Windows 7 build 7601, Service Pack 1, Universal C Runtime 10.0.10240.16390.

  • The right-hand badges give the respective section’s ‘level’.

  1. Basics about Noncompartmental Analysis (NCA) – requiring no or only limited expertise in Pharmacokinetics (PK).

  2. These sections are the most important ones. They are – hopefully – easily comprehensible even for novices.

  3. A higher knowledge of NCA and/or R is required. Sorry, there is no free lunch.

  4. An advanced knowledge of NCA and/or R is required. In particular, not recommended for beginners.

  5. If you are not a neRd or NCA aficionado, skipping is recommended. Suggested for experts but might be confusing for others.

Introduction


How should we extrapolate the AUC to infinity?


A good question, because two variants are implemented in software: one based on the observed (\(\small{C_\textrm{last}}\)) and one on the predicted (\(\small{\widehat{C}_\textrm{last}}\)) last concentration \[\small{AUC_{0-\infty}=AUC_{{0-}\textrm{tlast}} + C_\textrm{last}/\widehat{\lambda}_\textrm{z}}\tag{1}\] \[\small{AUC_{0-\infty}=AUC_{{0-}\textrm{tlast}} + \widehat{C}_\textrm{last}/\widehat{\lambda}_\textrm{z}}\tag{2}\] where \(\small{AUC_{{0-}\textrm{tlast}}}\) is calculated by a trapezoidal rule and \(\small{\widehat{\lambda}_\textrm{z}}\) is the apparent elimination rate constant estimated by semilogarithmic regression.
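
To see the difference in action, a minimal sketch in base R (times, concentrations, the choice of the last four points for the fit, and the plain linear rule are made up for brevity):

# minimal sketch of (1) vs (2) with made-up data
t <- c(0, 0.5, 1, 2, 4, 6, 9, 12, 16, 24)
C <- c(0, 12.1, 18.4, 21.3, 16.9, 12.2, 7.20, 4.33, 2.01, 0.818)
AUClast   <- sum(diff(t) * (head(C, -1) + tail(C, -1)) / 2) # linear rule
m         <- lm(log(C) ~ t, subset = 7:10)                  # last four points
lambda.z  <- -coef(m)[["t"]]
Clast     <- tail(C, 1)                                     # observed
Clast.hat <- exp(coef(m)[[1]] - lambda.z * tail(t, 1))      # predicted
c(AUCinf.obs  = AUClast + Clast / lambda.z,     # eq. (1)
  AUCinf.pred = AUClast + Clast.hat / lambda.z) # eq. (2)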

However, this might be a somewhat academic question in most cases.

When the time course is measured until plasma concentration becomes 5% of its maximum, the relative cutoff errors in AUC, MRT, and VRT are smaller than about 5%, 10%, and 40%, respectively […]. If the time course is measured until plasma concentration becomes 1% of its maximum, the relative cutoff errors in AUC, MRT, and VRT are smaller than about 1%, 2%, and 10%, respectively.
Yamaoka et al. (1978)1


Past and Present


In the early days of Noncompartmental Analysis \(\small{(1)}\) was common.

Already 42 years ago \(\small{(2)}\) was occasionally used (in a project2 sponsored by the FDA), and it was recommended for the first time 31 years ago.3 4 5 6 For a while it was mandatory for manuscripts submitted to the ‘Bioequivalence Section’ of the International Journal of Clinical Pharmacology, Therapy and Toxicology (edited by one of the pioneers of bioequivalence, Henning Blume).
If you are interested in more recent references: there is one by an author of the FDA7 and – not surprisingly – several more.8 9 10


Makes sense. \(\small{C_\textrm{last}}\) might be close to the Lower Limit of Quantification (LLOQ) and hence has the lowest accuracy and highest variability of the entire profile. Basing the extrapolation on it sets a fox to keep the geese: its intrinsic high variability propagates into the extrapolated area. In contrast, \(\small{\widehat{C}_\textrm{last}}\) takes the information of all preceding (higher, and therefore more accurate and precise) concentrations into account and thus is in general more reliable.
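
A rough first-order error propagation (treating \(\small{AUC_{{0-}\textrm{tlast}}}\) and \(\small{\widehat{\lambda}_\textrm{z}}\) as fixed – a simplification for illustration only) shows how much of the variability of \(\small{C_\textrm{last}}\) reaches \(\small{AUC_{0-\infty}}\): \[\small{CV\left(AUC_{0-\infty}\right)\approx CV\left(C_\textrm{last}\right)\cdot\frac{C_\textrm{last}/\widehat{\lambda}_\textrm{z}}{AUC_{0-\infty}}}\] With, say, a 40% CV of \(\small{C_\textrm{last}}\) and ~5% of the area extrapolated, only about 2% is added (in quadrature) to the CV of \(\small{AUC_{{0-}\textrm{tlast}}}\) – consistent with the simulation below.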

Little is mentioned in guidelines. AFAIK, the FDA,11 China’s CDE, Brazil’s ANVISA, and the WHO12 recommend \(\small{(1)}\), whereas Health Canada13 recommends \(\small{(2)}\). Nothing is stated by the EMA.14 15

I have been using solely \(\small{(2)}\) since the early 1990s and never received a single deficiency letter in this respect (no, not even from the FDA). Of course, the method should be outlined in the protocol. My standard for immediate release formulations:

Extrapolation from \(\small{t_\textrm{last}}\) to infinite time will be performed by fitting terminal concentration values (at least three) to the monoexponential model \(\small{\widehat{C}(t)=\widehat{C}_0\cdot\exp(-\widehat{\lambda}_\textrm{z}\cdot t)}\) by means of unweighted semilogarithmic regression. The starting point for the estimation will default to \(\small{\geq 2\times t_\textrm{max}}\)16 and will be adjusted if deemed necessary (i.e., visual inspection of fits will be performed). Increasing concentrations in the late phase of the profile will be excluded from the estimation of \(\small{\widehat{\lambda}_\textrm{z}}\). To avoid under- or over-estimation of the extrapolated area the estimated last concentration \(\small{\widehat{C}_\textrm{last}}\) (instead of the last measured concentration \(\small{C_\textrm{last}}\)) will be used in extrapolations.
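
Translated into base R, a minimal sketch of that standard (the data.frame prof with columns t and C and the helper name est.tail are hypothetical; exclusion of increasing late values and the mandatory visual inspection are not shown):

est.tail <- function(prof, tmax) {
  # unweighted semilogarithmic regression of the terminal phase
  tail.df <- subset(prof, t >= 2 * tmax & !is.na(C)) # default start: 2 x tmax
  stopifnot(nrow(tail.df) >= 3)                      # at least three values
  m         <- lm(log(C) ~ t, data = tail.df)
  lambda.z  <- -coef(m)[["t"]]
  # predicted (not observed) last concentration for the extrapolation
  Clast.hat <- exp(coef(m)[[1]] - lambda.z * max(tail.df$t))
  list(lambda.z = lambda.z, Clast.hat = Clast.hat)
}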

(Image: Dr. David Bowman inside HAL 9000)

NB, visual inspection of fits is mandatory. Any algorithm might fail terribly on ‘flat’ profiles (controlled release) or on profiles of multiphasic release formulations.

Our brain is an excellent pattern recognition machine. Estimation of \(\small{\widehat{\lambda}_\textrm{z}}\) will be elaborated in another article.


Software

Both methods are implemented in numerous pieces of software. Most commonly used terminology:

Term                Metric
AUClast             \(\small{AUC_{{0-}\textrm{tlast}}}\)
AUCINF_obs          \(\small{AUC_{0-\infty}}\) based on \(\small{(1)}\)
AUCINF_pred         \(\small{AUC_{0-\infty}}\) based on \(\small{(2)}\)
Clast_pred          \(\small{\widehat{C}_\textrm{last}}\)
Lambda_z            \(\small{\widehat{\lambda}_\textrm{z}}\)
Lambda_z_intercept  \(\small{\widehat{C}_0}\)

Already in 1993 both methods were implemented in TopFit 2.0.17 They are available in Phoenix WinNonlin18 (see the User’s Guide or the online manual for details), PKanalix,19 the R packages PKNCA,20 21 ncappc,22 qpNCA,23 and ubiquity,24 25 26 as well as in the Julia library Pumas.27 28
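
For instance, with the R package PKNCA both variants can be requested side by side. A minimal sketch, assuming hypothetical data.frames d.conc and d.dose and the interval parameter names auclast, aucinf.obs, and aucinf.pred (verify against the package’s documentation):

library(PKNCA)
# d.conc: columns subject, time, conc; d.dose: columns subject, time, dose
conc.obj <- PKNCAconc(d.conc, conc ~ time | subject)
dose.obj <- PKNCAdose(d.dose, dose ~ time | subject)
ivals    <- data.frame(start = 0, end = Inf,
                       auclast     = TRUE,  # AUC(0-tlast)
                       aucinf.obs  = TRUE,  # eq. (1)
                       aucinf.pred = TRUE)  # eq. (2)
summary(pk.nca(PKNCAdata(conc.obj, dose.obj, intervals = ivals)))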


Simulation


One-compartment model, first order absorption and elimination, no lag time.
PK-parameters: \(\small{D=200}\), \(\small{f=0.80}\), \(\small{V=3}\), absorption half life 45 minutes, elimination half life five hours.
The error distribution was log-normal, with the error increasing as concentrations decrease. The LLOQ was set to 4% of the model’s \(\small{C_\textrm{max}}\). Simulated concentrations below the LLOQ prior to \(\small{t_\textrm{max}}\) were set to zero, and later ones to NA. \(\small{\widehat{\lambda}_\textrm{z}}\) was estimated by an automatic algorithm maximizing \(\small{R_\textrm{adj}^2}\), which is also the default in most other software.
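
The rationale for the error model in the script below: a log-normal variable with parameters \(\small{\mu,\sigma}\) has expectation \(\small{\exp(\mu+\sigma^2/2)}\); subtracting half the log-variance from \(\small{\log C_0}\) therefore keeps the expected concentration at the model’s value. A stripped-down sketch (the helper name sim.conc is made up):

sim.conc <- function(C0, CV) {
  # log-normal error such that E[C] = C0
  varlog <- log(CV^2 + 1)                 # sigma^2 on the log scale
  rlnorm(1, meanlog = log(C0) - 0.5 * varlog, sdlog = sqrt(varlog))
}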


Sorry, 268 LOC.

Function       Purpose
one.comp()     Calculates concentrations, the exact (i.e., the model’s) \(\small{C_\textrm{max}}\), \(\small{t_\textrm{max}}\), \(\small{AUC_{{0-}\textrm{tlast}}}\), \(\small{AUC_{0-\infty}}\), and LLOQ for a given vector of sampling times, PK-parameters, and LLOQ specified as a fraction of the model’s \(\small{C_\textrm{max}}\).
micro2macro()  Converts the model’s PK-parameters (micro constants \(\small{f}\), \(\small{D}\), \(\small{V}\), \(\small{k_{01}}\), and \(\small{k_{10}}\)) to macro (hybrid) constants \(\small{A}\), \(\small{B}\), \(\small{\alpha}\), and \(\small{\beta}\).
round.up()     Rounds x up to the next multiple of y.
sampling()     Calculates a vector of ‘optimal’ sampling times: equally spaced up to \(\small{t_\textrm{max}}\) and then following a geometric progression to \(\small{t_\textrm{last}}\).
est.elim()     Estimates the apparent elimination by semilogarithmic regression. Returns \(\small{R_\textrm{adj}^2}\), \(\small{\widehat{C}_0}\), \(\small{\widehat{\lambda}_\textrm{z}}\), start- and end-times, and the number of values used. If estimation fails (slope ≥ 0), returns NA for all.
calc.AUC()     Calculates \(\small{AUC_{{0-}\textrm{tlast}}}\) by a trapezoidal rule. Implemented are linlog (default) and linear.
AUC.extr()     Extrapolates the \(\small{AUC}\). Returns \(\small{AUC_{{0-}\textrm{tlast}}}\) (for comparison), \(\small{AUC_{0-\infty}}\) (observed and predicted), \(\small{C_\textrm{last}}\), and \(\small{\widehat{C}_\textrm{last}}\).
sum.simple()   Nonparametric summary (i.e., removes the arithmetic mean from summary()).
geom.stat()    Calculates the geometric mean and CV.
sim.auc <- function(D, f, V, t12.a, t12.e, n1, n2, tlast,
                    mins, CV0, LLOQ.f, rule, nsims,
                    setseed = TRUE, show.fits = FALSE, progress = FALSE) {
  one.comp <- function(f, D, V, k01, k10, t, LLOQ.f) {
    # one-compartment model, first order absorption
    # and elimination, no lagtime
    x <- micro2macro(f, D, V, k01, k10) # get hybrid (macro) constants
    if (!isTRUE(all.equal(k01, k10))) { # common: k01 != k10
      C    <- f * D * k01 / (V * (k01 - k10)) *
              (exp(-k10 * t) - exp(-k01 * t))
      # values based on the model
      tmax <- log(k01 / k10) / (k01 - k10)
      Cmax <- f * D * k01 / (V * (k01 - k10)) *
              (exp(-k10 * tmax) - exp(-k01 * tmax))
      AUC  <- f * D / V / k10
      AUCt <- (x$C[["A"]] - x$C[["A"]] * exp(-x$E[["alpha"]] * tlast)) /
              x$E[["alpha"]] +
              (x$C[["B"]] - x$C[["B"]] * exp(-x$E[["beta"]] * tlast)) /
              x$E[["beta"]]
    } else {                           # degenerate case: k01 == k10
      k    <- k10
      C    <- f * D / V * k * t * exp(-k * t)
      tmax <- 1 / k
      Cmax <- f * D / V * k * tmax * exp(-k * tmax)
      AUC  <- f * D / V / k
      AUCt <- f * D / V * (1 - exp(-k * tlast) * (1 + k * tlast)) / k
      # closed form of the integral of f*D/V * k * t * exp(-k * t) over 0-tlast
    }
    LLOQ <- Cmax * LLOQ.f
    C[C <= LLOQ] <- NA                # set values below the LLOQ to NA
    C[which(is.na(C[t < tmax]))] <- 0 # set NAs prior tmax to zero
    res <- list(C = C, Cmax = Cmax, tmax = tmax, AUC = AUC, AUCt = AUCt,
                LLOQ = LLOQ)
    return(res)
  }

  micro2macro <- function(f, D, V, k01, k10) {
    # Convert parameters (micro constants) to macro (hybrid) constants
    # Coefficients (C) and exponents (E)
    C     <- f * D * k01 / (V * (k01 - k10))
    C     <- setNames(c(-C, +C), c("A", "B"))
    E     <- setNames(c(k01, k10), c("alpha", "beta"))
    macro <- list(C = C, E = E)
    return(macro)
  }

  round.up <- function(x, y) {
    # round x up to the next multiple of y
    return(y * (x %/% y + as.logical(x %% y)))
  }

  sampling <- function(f, D, V, k01, k10, n1, n2, tlast, mins, LLOQ.f) {
    # <= tmax: equally spaced
    # >= tmax: geometric progression
    # rounded to 'mins' minutes
    tmax <- one.comp(f, D, V, k01, k10, t = tlast, LLOQ.f)$tmax
    t    <- seq(0, tmax, length.out = n1)
    for (i in (n1+1):(n1+n2-1)) {
      t[i] <- t[i-1] * (tlast / tmax)^(1 / (n2 - 1))
    }
    t <- round.up(t * 60, mins) / 60
    t[length(t)] <- tlast
    return(t)
  }

  est.elim <- function(t, C) {
    # estimate lambda.z by "maximum adjusted R-squared approach"
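    # (start with the last three concentrations, then step backwards one
    #  point at a time; stop when R2adj decreases or gains < 0.0001)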
    data   <- data.frame(t = t, C = C)
    Cmax   <- max(data$C, na.rm = TRUE)
    tmax   <- data$t[which.max(data$C)]    # robust also with embedded NAs
    data   <- data[data$t > tmax, ]        # discard tmax and earlier
    data   <- data[complete.cases(data), ] # discard NAs
    lz.end <- tail(data$t, 1)
    # start with the last three concentrations
    x      <- tail(data, 3)
    r2     <- a <- b <- numeric()
    m      <- lm(log(C) ~ t, data = x)
    a[1]   <- coef(m)[[1]]
    b[1]   <- coef(m)[[2]]
    r2[1]  <- summary(m)$adj.r.squared
    # work backwards
    i      <- 1
    for (j in 4:nrow(data)) {
      i     <- i + 1
      x     <- tail(data, j)
      m     <- lm(log(C) ~ t, data = x)
      a[i]  <- coef(m)[[1]]
      b[i]  <- coef(m)[[2]]
      r2[i] <- summary(m)$adj.r.squared
      # don't proceed if no improvement
      if (r2[i] < r2[i-1] | abs(r2[i] - r2[i-1]) < 0.0001) break
    }
    # location of the largest adjusted R2
    loc <- which(r2 == max(r2, na.rm = TRUE))
    if (b[loc] >= 0 || r2[loc] <= 0) { # not meaningful
      R2adj <- intcpt <- lambda.z <- lz.start <- lz.end <- lz.n <- NA
    } else {
      R2adj    <- r2[loc]
      intcpt   <- a[loc]
      lambda.z <- -b[loc]
      lz.start <- x$t[2]
      lz.n     <- nrow(x) - 1
    }
    res <- data.frame(R2adj = R2adj, intcpt = intcpt, lambda.z = lambda.z,
                      lz.start = lz.start, lz.end = lz.end, lz.n = lz.n)
    return(res)
  }

  calc.AUC <- function(t, C, rule = "linlog",
                       digits = 5, details = FALSE) {
    x <- data.frame(t = t, C = C, pAUC = 0)
    x <- x[with(x, order(t)), ]
    x <- x[complete.cases(x), ]
    for (i in 1:(nrow(x) - 1)) {
      if (rule == "linlog") { # linear-up / log-down
        if (x$C[i+1]  < x$C[i]) {
          x$pAUC[i+1] <- (x$t[i+1] - x$t[i]) * (x$C[i+1] - x$C[i]) /
                         log(x$C[i+1] / x$C[i])
        } else {
          x$pAUC[i+1] <- 0.5 * (x$t[i+1] - x$t[i]) *
                         (x$C[i+1] + x$C[i])
        }
      } else {               # linear
        x$pAUC[i+1] <- 0.5 * (x$t[i+1] - x$t[i]) *
                       (x$C[i+1] + x$C[i])
      }
    }
    x$AUC <- cumsum(x$pAUC)
    x     <- x[with(x, order(t)), ] # sort by time
    if (details) {                  # entire data.frame
      res <- round(x, digits)
    } else {                        # only tlast and AUClast
      res <- setNames(as.numeric(round(tail(x[, c(1, 4)], 1), digits)),
                      c("tlast", "AUClast"))
    }
    if (rule == "linlog") {
      attr(res, "trapezoidal rule") <- "linear-up/log-down"
    } else {
      attr(res, "trapezoidal rule") <- "linear"
    }
    return(res)
  }

  AUC.extr <- function(t, C, intcpt, lambda.z) {
    x           <- calc.AUC(t, C, rule = rule, details = TRUE)
    AUClast     <- tail(x$AUC, 1)
    Clast_obs   <- tail(x$C, 1)
    AUCinf_obs  <- AUClast + tail(x$C, 1) / lambda.z
    Clast_pred  <- exp(intcpt - lambda.z * tail(x$t, 1))
    AUCinf_pred <-  AUClast + Clast_pred / lambda.z
    res <- list(AUClast = AUClast, Clast_obs = Clast_obs,
                AUCinf_obs = AUCinf_obs, Clast_pred = Clast_pred,
                AUCinf_pred = AUCinf_pred)
    return(res)
  }

  sum.simple <- function(x, digits = 4) {
    # nonparametric summary: remove mean but keep NAs
    res <- summary(x)
    if (nrow(res) == 6) {
      res <- res[c(1:3, 5:6), ]
    } else {
      res <- res[c(1:3, 5:7), ]
    }
    return(res)
  }

  geom.stat <- function(x) {
    stats <- as.data.frame(matrix(nrow = 2, ncol = ncol(x),
                                  dimnames = list(c("geom. mean",
                                                    "geom. CV (%)"),
                                                    names(x))))
    for (i in 1:ncol(stats)) {
      stats[1, i] <- exp(mean(log(x[, i]), na.rm = TRUE))
      stats[2, i] <- 100 * sqrt(exp(sd(log(x[, i]), na.rm = TRUE)^2) - 1)
    }
    return(stats)
  }

  k01    <- log(2) / t12.a   # absorption rate constant
  k10    <- log(2) / t12.e   # elimination rate constant
  # generate sampling schedule
  t      <- sampling(f, D, V, k01, k10, n1, n2, tlast, mins, LLOQ.f)
  x      <- one.comp(f, D, V, k01, k10, t, LLOQ.f)
  C0     <- x$C              # theoretical profile (based on model)
  Cmax   <- x$Cmax
  tmax   <- x$tmax
  LLOQ   <- x$LLOQ
  CV     <- CV0 - C0 * 0.006 # variability increases with decreasing C
  varlog <- log(CV^2 + 1)
  sdlog  <- sqrt(varlog)
  aggr1  <- data.frame(R2adj = NA_real_, intcpt = NA_real_,
                       lambda.z = NA_real_, RE = NA_real_, t.half = NA_real_,
                       lz.start = NA_real_, lz.end = NA_real_,
                       lz.n = NA_integer_)
  aggr2  <- data.frame(AUClast = NA_real_, Clast_obs = NA_real_,
                       AUCinf_obs = NA_real_, Clast_pred = NA_real_,
                       AUCinf_pred = NA_real_)
  aggr3  <- data.frame(matrix(NA, nrow = nsims, ncol = length(t)))
  C      <- numeric()
  if (progress) pb <- txtProgressBar(0, 1, 0, char = "\u2588",
                                     width = NA, style = 3)
  if (setseed) set.seed(123456)
  for (i in 1:nsims) {
    for (j in 1:length(C0)) {
      if (is.na(C0[j])) {
        C[j] <- NA # otherwise, rlnorm() fails
      } else {
        C[j] <- rlnorm(1, meanlog = log(C0[j]) - 0.5 * varlog[j],
                       sdlog = sdlog[j])
      }
    }
    # C < LLOQ set to NA, NAs prior to tmax set to zero
    Cmax.tmp <- max(C, na.rm = TRUE)
    tmax.tmp <- t[which.max(C)] # robust also with embedded NAs
    C[C <= LLOQ] <- NA
    C[which(is.na(C[t < tmax.tmp]))] <- 0
    aggr3[i, ] <- C
    aggr1[i, c(1:3, 6:8)] <- est.elim(t, C)[1, ]
    if (!is.na(aggr1$lambda.z[i])) { # only if successful fit
      aggr1$t.half[i] <- log(2) / aggr1$lambda.z[i]
      aggr1$RE[i]     <- 100 * (aggr1$lambda.z[i] - k10) / k10
      aggr2[i, ]      <- as.numeric(AUC.extr(t, C,
                                             aggr1$intcpt[i],
                                             aggr1$lambda.z[i]))
    }
    if (progress) setTxtProgressBar(pb, i / nsims)
  }
  if (progress) close(pb)
  method <- "trapezoidal rule"
  ifelse (rule == "linlog",
    method <- paste("linear-up / log-down", method),
    method <- paste("linear", method))
  NAs <- data.frame(time = t,
                    NAs = sapply(aggr3, function(x) sum(is.na(x))))
  cat("Results from the model (without error)",
      "\n  AUClast", sprintf("%.3f", x$AUCt),
      "\n  AUCinf ", sprintf("%.3f", x$AUC),
      paste0("\n\n", prettyNum(nsims, digits = 0, big.mark = ","),
             " simulated profiles, ", method, "\n"))
  print(NAs, row.names = FALSE)
  failed <- sum(is.na(aggr1$lambda.z)) # count failed fits directly
  if (failed > 0) cat("In", failed,
                      "profile(s) estimation of the apparent",
                      "elimination failed (nonnegative slope).\n\n")
  print(signif(geom.stat(aggr2), 4))
  if (show.fits) {
    cat("\n")
    print(sum.simple(aggr1))
  }
}

D         <- 200      # dose
f         <- 0.8      # fraction absorbed (BA)
V         <- 3        # volume of distribution
t12.a     <- 0.75     # absorption half life
t12.e     <- 5        # elimination half life
n1        <- 4        # number of samples <= tmax
n2        <- 9        # number of samples >= tmax (n = n1 + n2 - 1)
tlast     <- 24       # last sampling time point (h)
mins      <- 15       # rounding of sampling times (minutes)
CV0       <- 0.40     # CV at low concentrations
LLOQ.f    <- 0.04     # fraction of theoretical Cmax
rule      <- "linlog" # if you insist: "linear"
nsims     <- 2.5e3L   # number of simulations
show.fits <- FALSE    # TRUE for nonparametric summary of fits
progress  <- FALSE    # TRUE for a progress bar
# NB: name the optional arguments (given positionally, show.fits and
#     progress would be matched to setseed and show.fits instead)
sim.auc(D, f, V, t12.a, t12.e, n1, n2, tlast, mins, CV0, LLOQ.f,
        rule, nsims, show.fits = show.fits, progress = progress)
# Results from the model (without error) 
#   AUClast 368.471 
#   AUCinf  384.719 
# 
# 2,500 simulated profiles, linear-up / log-down trapezoidal rule
#   time NAs
#   0.00   0
#   1.00   0
#   1.75   0
#   2.50   0
#   3.25   0
#   4.50   0
#   5.75   0
#   7.75   0
#  10.25   0
#  13.75   0
#  18.25   1
#  24.00 447
# In 1 profile(s) estimation of the apparent elimination failed (nonnegative slope).
# 
#              AUClast Clast_obs AUCinf_obs Clast_pred AUCinf_pred
# geom. mean   356.500     2.679    380.400      2.549     379.000
# geom. CV (%)   8.307    40.830      8.614     38.960       8.578

Even the linear-up/log-down rule underestimates \(\small{AUC_{{0-}\textrm{tlast}}}\) because the profile is concave up to the inflection point at \(\small{2\times t_\textrm{max}}\). There is nothing we can do about it (see another article).
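
Why \(\small{2\times t_\textrm{max}}\)? For this model the location of the inflection point follows from setting the second derivative to zero – a one-line derivation: \[\small{\frac{\textrm{d}^2C}{\textrm{d}t^2}=0\;\Rightarrow\;k_{10}^2\,e^{-k_{10}t}=k_{01}^2\,e^{-k_{01}t}\;\Rightarrow\;t_\textrm{infl}=\frac{2\,\log(k_{01}/k_{10})}{k_{01}-k_{10}}=2\times t_\textrm{max}}\] Up to \(\small{t_\textrm{infl}}\) the chords of the trapezoidal rule lie below the concave curve, hence the underestimation.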


Note that the variability of \(\small{C_\textrm{last}}\) is larger than that of \(\small{\widehat{C}_\textrm{last}}\). As expected, it partly propagates into the extrapolated \(\small{AUC\textrm{s}}\), where the variability based on \(\small{(2)}\) is slightly smaller than the one based on \(\small{(1)}\).

In either case the variability of \(\small{AUC_{0-\infty}}\) is larger than that of \(\small{AUC_{{0-}\textrm{tlast}}}\). Midha et al.29 noted already in 1993 for immediate release formulations that – once absorption is essentially complete – point estimates are stable and only variability increases with time. This might be relevant in jurisdictions applying Average Bioequivalence with Expanding Limits (ABEL) where \(\small{AUC_{0-\infty}}\) is a primary PK metric because the late part of the profile represents absorption (e.g., controlled release formulations for the EMA, where reference-scaling is only acceptable for \(\small{C_\textrm{max}}\) and the sample size depends on \(\small{AUC}\); see this article for an example).


Licenses

CC BY 4.0 Helmut Schütz 2022
R GPL 3.0, pandoc GPL 2.0.
1st version March 06, 2022.

Footnotes and References


  1. Yamaoka K, Nakagawa T, Uno T. Statistical Moments in Pharmacokinetics. J Pharmacokin Biopharm. 1978; 6(6): 547–58. doi:10.1007/bf01062109.↩︎

  2. Upton RA, Sansom L, Guentert TW, Powell JR, Thiercellin J-F, Shah VP, Coates PE, Riegelman S. Evaluation of the Absorption from 15 Commercial Theophylline Products Indicating Deficiencies in Currently Applied Bioavailability Criteria. J Pharmacokin Biopharm. 1980; 8(3): 229–42. doi:10.1007/BF01059644.↩︎

  3. Schulz H-U, Steinijans VW. Striving for standards in bioequivalence assessment: a review. Int J Clin Pharm Ther Toxicol. 1991; 29(8): 293–8. PMID:1743802.↩︎

  4. Purves RD. Bias and Variance of Extrapolated Tails for Area-Under-the-Curve (AUC) and Area-Under-the-Moment-Curve (AUMC). J Pharmacokin Biopharm. 1992; 20(5): 501–10. doi:10.1007/BF01061468.↩︎

  5. Sauter R, Steinijans VW, Diletti E, Böhm E, Schulz H-U. Presentation of results from bioequivalence studies. Int J Clin Pharm Ther Toxicol. 1992; 30(7): 233–56. PMID:1506127.↩︎

  6. Wagner JG. Pharmacokinetics for the Pharmaceutical Scientist. Lancaster, Basel: Technomic; 1993. p. 88.↩︎

  7. Abdallah HY. An area correction method to reduce intrasubject variability in bioequivalence studies. J Pharm Pharmaceut Sci. 1998; 1(2): 60–5. Open Access.↩︎

  8. Hauschke D, Steinijans V, Pigeot I. Bioequivalence Studies in Drug Development. Chichester: Wiley; 2007. p. 131.↩︎

  9. Derendorf H, Gramatté T, Schäfer HG, Staab A. Pharmakokinetik kompakt. Grundlagen und Praxisrelevanz. Stuttgart: Wissenschaftliche Verlagsanstalt; 3. Auflage 2011. p. 127. [German]↩︎

  10. Fisher D, Kramer W, Burmeister Getz E. Evaluation of a Scenario in Which Estimates of Bioequivalence Are Biased and a Proposed Solution: tlast (Common). J Clin Pharm. 2016; 56(7): 794–800. doi:10.1002/jcph.663. Open Access.↩︎

  11. FDA, CDER. Draft Guidance. Bioequivalence Studies With Pharmacokinetic Endpoints for Drugs Submitted Under an ANDA. Rockville. August 2021. Download.↩︎

  12. WHO. Technical Report Series, No. 992. Annex 6. Multisource (generic) pharmaceutical products: guidelines on registration requirements to establish interchangeability. Section 7.4.7 Parameters to be assessed. Geneva. 2017. Online.↩︎

  13. Health Canada. Guidance Document: Conduct and Analysis of Comparative Bioavailability Studies. Appendix 1. Cat:H13‐9/6‐2018E‐PDF. Ottawa. 2018/06/08. Online.↩︎

  14. EMA, CHMP. Guideline on the Investigation of Bioequivalence. CPMP/EWP/QWP/1401/98 Rev. 1/ Corr **. London. 20 January 2010. Online.↩︎

  15. EMA, CHMP. Guideline on the pharmacokinetic and clinical evaluation of modified release dosage forms. EMA/CHMP/EWP/280/96 Rev1. London. 20 November 2014. Online.↩︎

  16. Scheerans C, Derendorf H, Kloft C. Proposal for a Standardised Identification of the Mono-Exponential Terminal Phase for Orally Administered Drugs. Biopharm Drug Dispos. 2008; 29(3): 145–57. doi:10.1002/bdd.596.↩︎

  17. Heinzel G, Woloszczak R, Thomann R. TopFit 2.0. Pharmacokinetic and Pharmacodynamic Data Analysis System for the PC. Stuttgart: Springer; 1993.↩︎

  18. Certara USA, Inc. Princeton, NJ. 2022. Phoenix WinNonlin. Online.↩︎

  19. LIXOFT, Antony, France. 2021. PKanalix Documentation. NCA parameters. Online.↩︎

  20. Denney W, Duvvuri S, Buckeridge C. Simple, Automatic Noncompartmental Analysis: The PKNCA R Package. J Pharmacokinet Pharmacodyn. 2015; 42(1): 11–107, S65. doi:10.1007/s10928-015-9432-2.↩︎

  21. Denney B, Buckeridge C, Duvvuri S. PKNCA. Perform Pharmacokinetic Non-Com­part­mental Analysis. Package version 0.9.5. 2021-10-29. CRAN.↩︎

  22. Acharya C, Hooker AC, Turkyilmaz GY, Jonsson S, Karlsson MO. ncappc: NCA Calculations and Population Model Diagnosis. Package version 0.3.0. 2018-08-24. CRAN.↩︎

  23. Huisman J, Jolling K, Mehta K, Bergsma T. qpNCA: Noncompartmental Pharmacokinetic Analysis by qPharmetra. Package version 1.1.6. 2021-08-16. CRAN.↩︎

  24. Harrold JM, Abraham AK. Ubiquity: a framework for physiological/mechanism-based pharmacokinetic/pharmacodynamic model development and deployment. J Pharmacokinet Pharmacodyn. 2014; 41(2): 141–51. doi:10.1007/s10928-014-9352-6.↩︎

  25. Harrold J. ubiquity: PKPD, PBPK, and Systems Pharmacology Modeling Tools. Package version 2.0.0. 2021-09-03. CRAN.↩︎

  26. Harrold J. Noncompartmental Analysis. 2021-09-03. CRAN.↩︎

  27. Rackauckas C, Ma Y, Noack A, Dixit V, Kofod Mogensen P, Byrne S, Maddhashiya S, Bayoán Santiago Calderón J, Nyberg J, Gobburu JVS, Ivaturi V. Accelerated Predictive Healthcare Analytics with Pumas, a High Performance Pharmaceutical Modeling and Simulation Platform. Nov 30, 2020. doi:10.1101/2020.11.28.402297. Preprint bioRxiv 2020.11.28.402297.↩︎

  28. Pumas-AI, Inc. 2022. The NCA Functions. Online.↩︎

  29. Midha KK, Ormsby ED, Hubbard JW, McKay G, Hawes EM, Gavalas L, McGilveray IJ. Logarithmic Transformation in Bioequivalence: Application with Two Formulations of Perphenazine. J Pharm Sci. 1993; 82(2): 138–44. doi:10.1002/jps.2600820205.↩︎