
The example in this article was generated with R 4.1.0 using the package truncnorm.1


Introduction

Which trapezoidal rule should we use?

[…] the mean of AUC of the generic had to be within 20% of the mean AUC of the approved product. At first this was determined by using serum versus time plots on specially weighted paper, cutting the plot out and then weighing each separately.
Jerome Philip Skelly. 2010.2
I suppose it is tempting, if the only tool you have is a hammer, to treat everything as if it were a nail.

Apart from the few drugs subject to capacity-limited elimination (like C2H5OH), distribution and/or elimination follows exponential decrease(s). Hence, for the vast majority of drugs the linear trapezoidal rule leads to a positive bias in all sections of the profile where concentrations decrease.
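Why? An exponential decline is convex, so the chord connecting two successive concentrations lies above the curve. A quick numerical check (rate constant and sampling times are made up for illustration):

C     <- function(t) 10 * exp(-0.2 * t)   # mono-exponential decline
exact <- integrate(C, 2, 6)$value         # true area: 18.456
lin   <- 0.5 * (6 - 2) * (C(2) + C(6))    # chord (linear trapezoid): 19.430, i.e., +5.3%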

It is customary in biopharmaceutics to use the trapezoidal method to calculate areas under the concentration-time curve. The popularity of this method may be attributed to its simplicity both in concept and execution. However, in cases where […] there are long intervals between data points, large algorithm errors are known to occur.
Yeh and Kwan, 1978.3


That’s not what I call state of the art.

Already in 1981 I coded the linear-up/log-down trapezoidal rule for the TI-59.


Al Gore Rhythms

Linear Trapezoidal

For moonshine.

Data points are connected by straight lines and the AUC calculated as the sum of trapezoids.

Fig. 1 Trapezoids.

We see that for \(\small{b=a}\) the trapezoid becomes a rectangle, where \(\small{A=h\times a}\). The anchor point is the arithmetic mean of \(\small{a}\) and \(\small{b}\). \[AUC_i^{i+1}=0.5(t_{i+1}-t_i)(C_{i+1}+C_i)\tag{1}\]
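A minimal sketch of \(\small{(1)}\) in R, assuming complete vectors without missing values (the function name auc.lin is made up):

auc.lin <- function(t, C) {
  # sum of trapezoids over all sampling intervals
  sum(0.5 * diff(t) * (C[-1] + C[-length(C)]))
}
auc.lin(t = c(0, 1, 2, 4), C = c(0, 4, 3, 1)) # 9.5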

Log Trapezoidal

For a fast bolus.

All concentrations are \(\small{\log_{e}}\)-transformed and the linear trapezoidal applied. At the end the AUC is back-transformed.

Fig. 2 Convex in linear scale.

The anchor point is the geometric mean of \(\small{a}\) and \(\small{b}\). The log trapezoidal does not work if \(\small{a}\) or \(\small{b}\) is zero or if they are equal. In such a case software should fall back to the linear trapezoidal. \[AUC_i^{i+1}=(t_{i+1}-t_i)\frac{C_{i+1}\,-\,C_i}{\log_{e}(C_{i+1}/C_i)}\tag{2}\]

The concentration \(\small{C_0}\) has to be imputed. This is done either automatically by the software (extrapolated from the first two \(\small{\log_{e}}\)-transformed concentrations) or has to be provided by the user.
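A minimal sketch of this automatic imputation, assuming a log-linear decline through the first two quantifiable concentrations (the function name is made up):

C0.imp <- function(t, C) {
  # slope of the loge-transformed concentrations of the first two samples
  slope <- (log(C[2]) - log(C[1])) / (t[2] - t[1])
  exp(log(C[1]) - slope * t[1]) # back-extrapolated to t = 0
}
C0.imp(t = c(0.25, 0.5), C = c(8.187, 6.703)) # ~10; data follow 10*exp(-0.8*t)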

Lin-up / log-down Trapezoidal

For an infusion or an extravascular administration.

Straightforward: if the second of two successive concentrations is equal or higher, the linear trapezoidal is applied, and the log trapezoidal otherwise. \[AUC_i^{i+1}=\begin{cases} 0.5(t_{i+1}-t_i)(C_{i+1}+C_i) & \text{if $C_{i+1}\geq C_i$},\\ (t_{i+1}-t_i)\frac{C_{i+1}\,-\,C_i}{\log_{e}(C_{i+1}/C_i)} & \text{otherwise}. \end{cases}\tag{3}\]
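A vectorized sketch of \(\small{(3)}\), assuming strictly positive concentrations (the function name is made up):

auc.linupdown <- function(t, C) {
  dt   <- diff(t)
  C1   <- C[-length(C)]                  # leading concentrations
  C2   <- C[-1]                          # trailing concentrations
  lin  <- 0.5 * dt * (C1 + C2)           # Eq. 1
  logd <- dt * (C2 - C1) / log(C2 / C1)  # Eq. 2
  sum(ifelse(C2 >= C1, lin, logd))       # linear up, log down
}
auc.linupdown(t = c(0, 1, 2, 4), C = c(1, 4, 3, 1)) # 9.617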


Data Pretreatment

Don’t import data blindly. Quite often some data are given with a nonnumeric code, e.g., BQL for values below the LLOQ, NR for not reportable ones (say, due to an interfering peak in chromatography), or are simply missing. This might lead to trouble, especially if you have to deal with lag times.

Let’s explore an example:
time concentration
   0.00       BQL
   0.25       BQL
   0.50       BQL
   0.75       BQL
   1.00 2.386209
   1.25 3.971026
   1.50 5.072666
   2.00 6.461897
   2.50 5.964697
   3.00 6.154877
   3.50 5.742396
   4.00 4.670705
   4.25 5.749097
   4.50 6.376336
   4.75 6.473850
   5.00 6.518927
   5.25 7.197600
   5.50 7.171507
   6.00 6.881594
   6.50 7.345398
   7.00 7.135752
   7.75 5.797113
   9.00 4.520387
  12.00 3.011526
  16.00 1.534004
  24.00 0.369226

If you import these data and press the button,4 you will likely get an AUC of 77.26564 by the linear trapezoidal and 75.64961 by the linlog trapezoidal. But is this correct? Hey, the lag time shows up as zero! My trusty Phoenix/WinNonlin dropped all BQLs, imputed 0 at time zero, and informs me:
  Note - the concentration at dose time was added for extrapolation purposes.

What does that mean? It calculated the first partial AUC as \(\small{(1-0)\times 2.386209/2=1.193105}\). That’s numerically correct but still false. Well, the software is right but its master, the wetware, isn’t.

Fig. 3 Imputations unveiled.

Of course, that’s nonsense. All ‘hidden’ imputed concentrations are above the LLOQ; the one at 0.75 h is already more than five times the LLOQ.
However, help is on the way: apply a set of ‘BQL Rules’ (see the User’s Guide or the online help for details). A screenshot of mine:

Fig. 4 BQL Rule Set.

Then we get what we expect. The lag time is reported as 0.75 h, the AUC as 76.37082 by the linear trapezoidal and as 74.75478 by the linlog trapezoidal. All is good because the first partial AUC is now \(\small{(1-0.75)\times 2.386209/2=0.2982761}\).
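If your software lacks such a rule set, the same pretreatment is easily done by hand. A minimal sketch in R, assuming the nonnumeric codes were imported as character strings:

dta <- data.frame(time = c(0, 0.25, 0.5, 0.75, 1),
                  conc = c("BQL", "BQL", "BQL", "BQL", "2.386209"))
dta$conc <- suppressWarnings(as.numeric(dta$conc))   # "BQL" -> NA
tfirst   <- dta$time[!is.na(dta$conc)][1]            # 1 h
tlag     <- dta$time[which(dta$time == tfirst) - 1]  # 0.75 h
0.5 * (tfirst - tlag) * dta$conc[dta$time == tfirst] # 0.2982761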


Simulations

I picked a difficult one. A biphasic release formulation, one compartment model, PK parameters: \(\small{D=100}\), dose fractions of the IR and DR components \(\small{\left\{0.6,0.4\right\}}\), fraction absorbed \(\small{f=0.70}\), volume of distribution \(\small{V=5}\), absorption rate constants of the components \(\small{k_{\,01.\textrm{IR}}=1.4\,\textrm{h}^{-1}}\), \(\small{k_{\,01.\textrm{DR}}=0.7\,\textrm{h}^{-1}}\), lag times of the components \(\small{tlag_{\,01.\textrm{IR}}=0.25\,\textrm{h}}\), \(\small{tlag_{\,01.\textrm{DR}}=4\,\textrm{h}}\), elimination rate constant \(\small{k_{\,10}=0.18\,\textrm{h}^{-1}}\).
The sampling schedule was 0, 0.25, 0.5, 0.75, 1, 1.25, 1.5, 2, 2.5, 3, 3.5, 4, 4.25, 4.5, 4.75, 5, 5.25, 5.5, 6, 6.5, 7, 7.75, 9, 12, 16, and 24 hours in order to get reliable estimates of both peaks.
Error distributions were uniform for \(\small{f}\) (0.5–1), log-normal for \(\small{V}\) (CV 50%), \(\small{k_{\,01}}\) (CV 25%), and \(\small{k_{\,10}}\) (CV 40%), and truncated normal for \(\small{tlag_{\,01.\textrm{IR}}}\) (limits 5–45 min) and \(\small{tlag_{\,01.\textrm{DR}}}\) (lower limit 3 h). Analytical error was log-normal with a CV of 5% of the simulated concentration.

I simulated 1,000 profiles; the LLOQ was set to 5% of the theoretical Cmax, i.e., simulated concentrations below the LLOQ were forced to NA. The AUC was calculated by both methods and compared to the true one (without analytical error).

Cave: 210 LOC.
All results agreed with Certara’s Phoenix/WinNonlin.

library(truncnorm)
trap.lin <- function(x, y, first.x, last.x) {
  # linear trapezoidal rule
  pAUC.lin <- 0
  for (i in seq(which(x == first.x), which(x == last.x))) {
    if (x[i] == first.x) { # start triangle
      pAUC.lin <- c(pAUC.lin, 0.5*(x[i]-x[i-1])*y[i])
    } else {
      pAUC.lin <- c(pAUC.lin, 0.5*(x[i]-x[i-1])*(y[i]+y[i-1]))
    }
  }
  AUC.lin <- cumsum(pAUC.lin)
  return(tail(AUC.lin, 1))
}
trap.linlog <- function(x, y, first.x, last.x) {
  # lin-up/log-down trapezoidal rule
  pAUC.linlog <- 0
  for (i in seq(which(x == first.x), which(x == last.x))) {
    if (x[i] == first.x) {  # start triangle
      pAUC.linlog <- c(pAUC.linlog, 0.5*(x[i]-x[i-1])*y[i])
    } else {
      if (y[i] >= y[i-1]) { # linear up
        pAUC.linlog <- c(pAUC.linlog, 0.5*(x[i]-x[i-1])*(y[i]+y[i-1]))
      } else {              # log down
        pAUC.linlog <- c(pAUC.linlog,
                         (x[i]-x[i-1])*(y[i]-y[i-1])/
                         (log(y[i]/y[i-1])))
      }
    }
  }
  AUC.linlog <- cumsum(pAUC.linlog)
  return(tail(AUC.linlog, 1))
}
set.seed(123456)
n       <- 1000     # no of profiles
t       <- c(0,0.25,0.5,0.75,1,1.25,1.5,2,2.5,3,3.5,
             4,4.25,4.5,4.75,5,5.25,5.5,6,6.5,7,7.75,9,12,16,24)
D       <- 100
D.f     <- c(0.6, 0.4) # dose fractions of components
# Index ".d" denotes default values, ".c" CV
F.d     <- 0.7      # fraction absorbed (BA)
F.l     <- 0.5      # lower limit
F.u     <- 1        # upper limit
V.d     <- 5        # volume of distribution
V.c     <- 0.50     # CV 50%, lognormal
k01.1.d <- 1.4      # absorption rate constant (1st component)
k01.2.d <- 0.7      # absorption rate constant (2nd component)
k01.c   <- 0.25     # CV 25%, lognormal
k10.d   <- 0.18     # elimination rate constant
k10.c   <- 0.40     # CV 40%, lognormal
tlag1.d <- 0.25     # 1st lag time
tlag1.l <- 5/60     # lower truncation 1
tlag1.u <- 0.75     # upper truncation 1
tlag2.d <- 4        # 2nd lag time
tlag2.l <- 3        # lower truncation 2
tlag.c  <- 0.5      # CV 50%, truncated normal
AErr    <- 0.05     # analytical error CV 5%, lognormal
LLOQ.f  <- 0.05     # fraction of theoretical Cmax

Ct <- function(x, D, D.f, F = F.d, V = V.d,
               k01.1 = k01.1.d, k01.2 = k01.2.d, k10 = k10.d,
               tlag1 = tlag1.d, tlag2 = tlag2.d) {
  # use the arguments (not the global defaults) in the body,
  # so that the individual parameters actually take effect
  C1 <- F*D*D.f[1]/V*k01.1/(k01.1-k10)*
        (exp(-k10*(x-tlag1))-exp(-k01.1*(x-tlag1)))
  C2 <- F*D*D.f[2]/V*k01.2/(k01.2-k10)*
        (exp(-k10*(x-tlag2))-exp(-k01.2*(x-tlag2)))
  C1[C1 < 0] <- 0
  C2[C2 < 0] <- 0
  return(C1 + C2)
}
data  <- data.frame(subject = rep(1:n, each = length(t)), t = t, C = NA)
# individual PK parameters
F     <- runif(n = n, min = F.l, max = F.u)
V     <- rlnorm(n = n, meanlog = log(V.d)-0.5*log(V.c^2+1),
                sdlog = sqrt(log(V.c^2+1)))
k01.1 <- rlnorm(n = n, meanlog = log(k01.1.d)-0.5*log(k01.c^2+1),
                sdlog = sqrt(log(k01.c^2+1)))
k01.2 <- rlnorm(n = n, meanlog = log(k01.2.d)-0.5*log(k01.c^2+1),
                sdlog = sqrt(log(k01.c^2+1)))
k10   <- rlnorm(n = n, meanlog = log(k10.d)-0.5*log(k10.c^2+1),
                sdlog = sqrt(log(k10.c^2+1)))
tlag1 <- rtruncnorm(n = n, a = tlag1.l, b = tlag1.u,
                    mean = tlag1.d, sd = tlag.c)
tlag2 <- rtruncnorm(n = n, a = tlag2.l, b = Inf,
                    mean = tlag2.d, sd = tlag.c)
x     <- seq(min(t), max(t), length.out=2500)
# theoretical profile
C.th  <- Ct(x = x, D = D, D.f = D.f, F = F.d, V = V.d,
            k01.1 = k01.1.d, k01.2 = k01.2.d, k10 = k10.d,
            tlag1 = tlag1.d, tlag2 = tlag2.d)
LLOQ  <- LLOQ.f * max(C.th)
# individual profiles
for (i in 1:n) {
  C <- Ct(x = t, D = D, D.f = D.f, F = F[i], V = V[i],
          k01.1 = k01.1[i], k01.2 = k01.2[i], k10 = k10[i],
          tlag1 = tlag1[i], tlag2 = tlag2[i])
  for (j in 1:length(t)) { # analytical error (multiplicative)
    if (j == 1) {
      AErr1 <- rnorm(n = 1, mean = 0,
                     sd = abs(C[j]*AErr))
    } else {
      AErr1 <- c(AErr1, rnorm(n = 1, mean = 0,
                              sd = abs(C[j]*AErr)))
    }
  }
  C <- C + AErr1     # add analytical error
  C[C < LLOQ] <- NA  # assign NAs to Cs below LLOQ
  data$C[data$subject == i] <- C
}
res <- data.frame(subject = 1:n, tlag = NA, tlast = NA, Clast = NA,
                  th = NA, lin = NA, linlog = NA)
# NCA
for (i in 1:n) {
  tmp.t        <- data$t[data$subject == i]
  tmp.C        <- data$C[data$subject == i]
  tfirst       <- t[!is.na(tmp.C)][1]
  tlast        <- rev(tmp.t[!is.na(tmp.C)])[1]
  res$tlag[i]  <- tmp.t[which(tmp.t == tfirst)-1] # time point before tfirst
  res$tlast[i] <- tlast
  res$Clast[i] <- tmp.C[tmp.t == tlast]
  res$lin[i]   <- trap.lin(tmp.t, tmp.C, tfirst, tlast)
  res$linlog[i] <- trap.linlog(tmp.t, tmp.C, tfirst, tlast)
  compl.t <- x[x >= tfirst & x <= tlast]
  C <- Ct(x = compl.t, D = D, D.f = D.f, F = F[i], V = V[i],
          k01.1 = k01.1[i], k01.2 = k01.2[i], k10 = k10[i],
          tlag1 = tlag1[i], tlag2 = tlag2[i])
  res$th[i] <- 0.5*sum(diff(compl.t)*(C[-1]+C[-length(C)]))
}
res$RE.lin    <- 100 * (res$lin - res$th) / res$th
res$RE.linlog <- 100 * (res$linlog - res$th) / res$th
# print(res[, c(1:4, 6:7)], row.names = FALSE) # many lines
summary(res[, 8:9])
wt <- wilcox.test(res$RE.lin, res$RE.linlog, conf.int = TRUE)

# spaghetti plot
C <- Ct(x = x, D = D, D.f = D.f, F = F.d, V = V.d,
        k01.1 = k01.1.d, k01.2 = k01.2.d, k10 = k10.d,
        tlag1 = tlag1.d, tlag2 = tlag2.d)
C[C == 0] <- NA
log.plot <- FALSE
if (log.plot) {
  log.y <- "y"
  ylim <- c(LLOQ*0.9, 1.2*max(data$C, na.rm = TRUE))
} else {
  log.y <- ""
  ylim <- c(0, 1.04*max(data$C, na.rm = TRUE))
}
dev.new(width = 4.5, height = 4.5, record = TRUE)
op <- par(no.readonly = TRUE)
par(mar = c(4, 2.9, 0.2, 0.1), cex.axis = 0.9,
    xaxs = "r", yaxs = "i")
plot(x, C, type="n", axes = FALSE, ylim = ylim, log = log.y,
     xlab = "time", ylab = "")
mtext("concentration", 2, line = 2)
axis(1, at = c(seq(0, 6, 2), c(9, 12, 16, 24)))
axis(2, las = 1)
abline(h = LLOQ, col = "red")
rug(t, ticksize = -0.015)
for (i in 1:n) {
  tmp.C  <- data$C[data$subject == i]
  tfirst <- t[!is.na(tmp.C)][1]      # 1st C >= LLOQ
  tlast  <- rev(t[!is.na(tmp.C)])[1] # last C >= LLOQ
  for (j in 1:(length(t)-1)) {
    if (!is.na(tmp.C[j]) & !is.na(tmp.C[j+1])) {
      if (tmp.C[j+1] >= tmp.C[j]) {  # up (linear)
        lines(x = c(t[j], t[j+1]),
              y = c(tmp.C[j], tmp.C[j+1]), col = "#0000FF80") 
      } else {                       # down (linlog)
        tk <- NULL
        Ck <- NULL
        for (k in seq(t[j], t[j+1], length.out = 100)) {
          tk <- c(tk, k)
          Ck <- c(Ck, exp(log(tmp.C[j])+abs((k-t[j])/(t[j+1]-t[j]))*
                         (log(tmp.C[j+1])-log(tmp.C[j]))))
        }
        lines(tk, Ck, col = "#FF00FF80")
      }
    }
  }
}
lines(x, C, type = "l", lwd = 3, col = "#00000080")
leg <- c(expression(C[t[italic(i)]]>=C[t[italic(i)-1]]*" (linear up)"),
         expression(C[t[italic(i)]]<C[t[italic(i)-1]]*" (log down)"),
         "theoretical profile", "LLOQ")
legend("topright", box.lty = 0, y.intersp = 1.2, lwd = c(1, 1, 3, 1),
       col = c("#0000FF80", "#FF00FF80", "#00000080", "red"),
       legend = leg, seg.len = 2.5)
box()

# box plots
ylim <- 1.04*c(-1, +1)*max(abs(range(res[, 8:9])))
op <- par(no.readonly = TRUE)
par(mar = c(0, 2.9, 0, 0), cex.axis = 0.9)
plot(c(0.5, 2.5), c(0, 0), type = "n", axes = FALSE,
     ylim = ylim, xlab = "", ylab = "")
abline(h=0, lty=3)
boxplot(res$RE.lin, res$RE.linlog, boxwex = 0.35, notch = TRUE,
        notch.frac = 0.6, ylim = ylim, names = "", ylab = "",
        main = "", axes = FALSE, col = "bisque", outcol = "red",
        add = TRUE)
axis(2, las = 1)
mtext("linear", at = 1, 1, line = -1.1)
mtext("lin-up / log-down", at = 2, 1, line = -1.1)
mtext("relative error (%)", 2, line = 2)
wilc.diff <- paste0("\u2206 ", sprintf("%+.3f", wt$estimate),
                    "\n95% CI ",
                    sprintf("(%+.3f", wt$conf.int[1]), ", ",
                    sprintf("%+.3f)", wt$conf.int[2]))
text(x = 1.5, y = min(res[, 8:9]), labels = wilc.diff, cex = 0.9)
par(op)

Fig. 5 Linear scale.

Fig. 6 Semilog scale.

Lag times5 were 0.5 h in 230 profiles, 0.75 h in 407, 1 h in 337, and 1.25 h in 26. tlast was at 16 hours in 493 profiles and at 24 hours in 507.

Beyond 16 hours the profiles may give a false impression: any kind of average plot would be positively biased. This will be elaborated in another article.

Fig. 7 Relative errors.

Comparing both rules to the theoretical AUC of profiles without analytical error clearly shows the positive bias of the linear trapezoidal and that it performs significantly worse than the lin-up/log-down trapezoidal.
Better methods to estimate the absorption phase(s) have been proposed6 but are currently not implemented in any software.

There is no good reason to stick to the linear trapezoidal rule any more. The lin-up/log-down trapezoidal rule has been implemented for decades (‼) in commonly used software. In Phoenix/WinNonlin it’s just one click in the NCA Options (see the online help for details). Don’t tell me that it’s complicated.

Fig. 8 NCA setup in Phoenix/WinNonlin.

In the R-package bear7 it is the default. THX to Yung-jin!

Fig. 9 AUC calculation in bear.

method = 'lin up/log down' is also the default of the function pk.calc.auxc() in the R-package PKNCA.8 9

    

Interlude

Since the method of Purves is not implemented in any software, we have to make do with what we have. Although the lin-up/log-down trapezoidal performs pretty well for decreasing concentrations, there is still a negative bias in the increasing sections of the profile.

An example of a simple one-compartment model with rich (16 time points) and sparse (9 time points) sampling:

Fig. 10 Rich (blue) and sparse (red) sampling.
Shaded area is the model.
Early part of the profile in the inset.

one.comp <- function(f, D, V, k01, k10, t, t.last) { # no lagtime
  if (!isTRUE(all.equal(k01, k10))) { # common: k01 ≠ k10
    C    <- f*D*k01/(V*(k01-k10))*(exp(-k10*t)-exp(-k01*t))
    tmax <- log(k01/k10)/(k01-k10)
    Cmax <- f*D*k01/(V*(k01-k10))*(exp(-k10*tmax)-exp(-k01*tmax))
    AUC  <- f*D/V/k10
    x    <- micro.macro(f, D, V, k01, k10)
    AUCt <- (x$C[["A"]]-x$C[["A"]]*exp(-x$E[["alpha"]]*t.last))/x$E[["alpha"]]+
            (x$C[["B"]]-x$C[["B"]]*exp(-x$E[["beta"]]*t.last))/x$E[["beta"]]
  } else {                            # special case k01 == k10
    k    <- k10
    C    <- f*D/V*k*t*exp(-k*t)
    tmax <- 1/k
    Cmax <- f*D/V*k*tmax*exp(-k*tmax)
    AUC  <- f*D/V/k
    AUCt <- f*D/V*(1/k-exp(-k*t.last)*(t.last+1/k)) # closed form
  }
  res <- list(C = C, tmax = tmax, Cmax = Cmax, AUC = AUC, AUCt = AUCt)
  return(res)
}
micro.macro <- function(f, D, V, k01, k10) {
  # Coefficients (C) and exponents (E)
  C <- f*D*k01/(V*(k01-k10))
  C <- setNames(c(-C, +C), c("A", "B"))
  E <- setNames(c(k01, k10), c("alpha", "beta"))
  macro <- list(C = C, E = E)
  return(macro)
}
NCA <- function(t, C, t.first, t.last, lambda.z) {
  pAUC.linlog <- 0
  for (i in seq(which(t == t.first), which(t == t.last))) {
    if (t[i] == t.first) {  # leading triangle (empty if t.first is t[1])
      pAUC.linlog <- c(pAUC.linlog, 0.5*(t[i]-t[i-1])*C[i])
    } else {
      if (C[i] >= C[i-1]) { # linear up
        pAUC.linlog <- c(pAUC.linlog, 0.5*(t[i]-t[i-1])*(C[i]+C[i-1]))
      } else {              # log down
        pAUC.linlog <- c(pAUC.linlog,
                         (t[i]-t[i-1])*(C[i]-C[i-1])/
                         (log(C[i]/C[i-1])))
      }
    }
  }
  AUC.linlog <- tail(cumsum(pAUC.linlog), 1)
  AUC.inf    <- AUC.linlog + tail(C, 1)/lambda.z
  Cmax       <- max(C)
  tmax       <- t[C == Cmax]
  res        <- list(AUCt = AUC.linlog, AUCinf = AUC.inf,
                     Cmax = Cmax, tmax = tmax)
  return(res)
}
lz <- function(t, C) {
  m <- lm(log(C) ~ t)
  return(as.numeric(-coef(m)[2]))
}
f           <- 2/3
D           <- 400
V           <- 3.49
t12.01      <- 1
t12.10      <- 6
k01         <- log(2)/t12.01
k10         <- log(2)/t12.10
t           <- seq(0, 24, length.out = 501)
rich.smpl   <- c(0, 0.25, 0.5, 1, 1.5, 2, 2.5, 3, 3.5, 4, 5, 6, 9, 12, 16, 24)
sparse.smpl <- c(0, 0.5, 1.5, 2.5, 3.5, 5, 9, 12, 24)
model       <- one.comp(f, D, V, k01, k10, t, 24)
rich        <- one.comp(f, D, V, k01, k10, rich.smpl, 24)
sparse      <- one.comp(f, D, V, k01, k10, sparse.smpl, 24)
rich.NCA    <- NCA(rich.smpl, rich$C, 0, 24,
                   lz(tail(rich.smpl, 3), tail(rich$C, 3)))
sparse.NCA  <- NCA(sparse.smpl, sparse$C, 0, 24,
                   lz(tail(sparse.smpl, 3), tail(sparse$C, 3)))
result      <- data.frame(method = c("exact (from model) ",
                                     "NCA rich sampling  ",
                                     "NCA sparse sampling"),
                          AUCt = sprintf("%.2f",
                                         c(model$AUCt,
                                           rich.NCA$AUCt,
                                           sparse.NCA$AUCt)),
                          AUCinf = sprintf("%.2f",
                                           c(model$AUC,
                                             rich.NCA$AUCinf,
                                             sparse.NCA$AUCinf)),
                          Cmax = sprintf("%.3f",
                                         c(model$Cmax,
                                           rich.NCA$Cmax,
                                           sparse.NCA$Cmax)),
                          tmax = sprintf("%.3f",
                                         c(model$tmax,
                                           rich.NCA$tmax,
                                           sparse.NCA$tmax)))
bias        <- data.frame(sampling = c("rich    ", "sparse  "),
                          AUCt = c(sprintf("%+.3f%%",
                                   100*(rich.NCA$AUCt-model$AUCt)/model$AUCt),
                                   sprintf("%+.3f%%",
                                   100*(sparse.NCA$AUCt-model$AUCt)/model$AUCt)),
                          AUCinf = c(sprintf("%+.3f%%",
                                     100*(rich.NCA$AUCinf-model$AUC)/model$AUC),
                                     sprintf("%+.3f%%",
                                     100*(sparse.NCA$AUCinf-model$AUC)/model$AUC)),
                          Cmax = c(sprintf("%+.3f%%",
                                   100*(rich.NCA$Cmax-model$Cmax)/model$Cmax),
                                   sprintf("%+.3f%%",
                                   100*(sparse.NCA$Cmax-model$Cmax)/model$Cmax)),
                          tmax = c(sprintf("%+.3f", rich.NCA$tmax-model$tmax),
                                   sprintf("%+.3f", sparse.NCA$tmax-model$tmax)))
txt         <- "Bias (NCA results compared to the model)\n"
xlim        <- range(t)
ylim        <- range(model$C)
dev.new(width = 4.5, height = 4.5)
op          <- par(no.readonly = TRUE)
par(mar = c(3.4, 3.1, 0, 0.1), cex.axis = 0.9)
plot(t, model$C, type = "n", ylim = c(0, model$Cmax),
     xlab = "", ylab = "", axes = FALSE)
mtext(expression(italic(t)*" (h)"), 1, line = 2.5)
mtext(expression(italic(C)*" (m/V)"), 2, line = 2)
grid(nx = NA, ny = NULL)
abline(v = seq(0, 24, 4), lty = 3, col = "lightgrey")
polygon(c(t, rev(t)), c(rep(0, length(model$C)), rev(model$C)),
        border = "#87CEFA80", col = "#87CEFA80")
for (i in 1:(length(rich.smpl)-1)) {
  if (!is.na(rich$C[i]) & !is.na(rich$C[i+1])) {
    if (rich$C[i+1] >= rich$C[i]) {     # up (linear)
      lines(x = c(rich.smpl[i], rich.smpl[i+1]),
            y = c(rich$C[i], rich$C[i+1]),
            col = "#0000FF80", lwd = 2)
    } else {                            # down (linlog)
      tk <- NULL
      Ck <- NULL
      for (k in seq(rich.smpl[i], rich.smpl[i+1], length.out = 201)) {
        tk <- c(tk, k)
        Ck <- c(Ck, exp(log(rich$C[i])+abs((k-rich.smpl[i])/
                       (rich.smpl[i+1]-rich.smpl[i]))*
                       (log(rich$C[i+1])-log(rich$C[i]))))
      }
      lines(tk, Ck, col = "#0000FF80", lwd = 2)
    }
  }
}
for (i in 1:(length(sparse.smpl)-1)) {
  if (!is.na(sparse$C[i]) & !is.na(sparse$C[i+1])) {
    if (sparse$C[i+1] >= sparse$C[i]) { # up (linear)
      lines(x = c(sparse.smpl[i], sparse.smpl[i+1]),
            y = c(sparse$C[i], sparse$C[i+1]),
            col = "#FF000080", lwd = 2)
    } else {                           # down (linlog)
      tk <- NULL
      Ck <- NULL
      for (k in seq(sparse.smpl[i], sparse.smpl[i+1], length.out = 201)) {
        tk <- c(tk, k)
        Ck <- c(Ck, exp(log(sparse$C[i])+abs((k-sparse.smpl[i])/
                       (sparse.smpl[i+1]-sparse.smpl[i]))*
                       (log(sparse$C[i+1])-log(sparse$C[i]))))
      }
      lines(tk, Ck, col = "#FF000080", lwd = 2)
    }
  }
}
axis(1, seq(0, 24, 4))
axis(2, las = 1)
box()
plotdim     <- par("plt") # inset plot
xleft       <- plotdim[2] - (plotdim[2] - plotdim[1]) * 0.55
xright      <- plotdim[2]
ybottom     <- plotdim[4] - (plotdim[4] - plotdim[3]) * 0.4
ytop        <- plotdim[4]
par(fig = c(xleft, xright, ybottom, ytop), mar = rep(0, 4),
    cex.axis = 0.8, new = TRUE)
plot(t[t <= model$tmax], model$C[t <= model$tmax],
     type = "n", ylim = c(0, model$Cmax),
     xlab = "", ylab = "", axes = FALSE)
rect(0, 0, t[t <= model$tmax], model$Cmax, col = "white", border = NA)
grid(nx = NA, ny = NULL)
abline(v = 0:3, lty = 3, col = "lightgrey")
polygon(c(t[t <= model$tmax], rev(t[t <= model$tmax])),
        c(rep(0, length(model$C[t <= model$tmax])),
          rev(model$C[t <= model$tmax])),
        border = "#87CEFA80", col = "#87CEFA80")
for (i in 1:(length(rich.smpl)-1)) {
  if (!is.na(rich$C[i]) & !is.na(rich$C[i+1])) {
    if (rich$C[i+1] >= rich$C[i]) {     # up (linear)
      lines(x = c(rich.smpl[i], rich.smpl[i+1]),
            y = c(rich$C[i], rich$C[i+1]),
            col = "#0000FF80", lwd = 2)
    } else {                            # down (linlog)
      tk <- NULL
      Ck <- NULL
      for (k in seq(rich.smpl[i], rich.smpl[i+1], length.out = 201)) {
        tk <- c(tk, k)
        Ck <- c(Ck, exp(log(rich$C[i])+abs((k-rich.smpl[i])/
                       (rich.smpl[i+1]-rich.smpl[i]))*
                       (log(rich$C[i+1])-log(rich$C[i]))))
      }
      lines(tk, Ck, col = "#0000FF80", lwd = 2)
    }
  }
}
for (i in 1:(length(sparse.smpl)-1)) {
  if (!is.na(sparse$C[i]) & !is.na(sparse$C[i+1])) {
    if (sparse$C[i+1] >= sparse$C[i]) { # up (linear)
      lines(x = c(sparse.smpl[i], sparse.smpl[i+1]),
            y = c(sparse$C[i], sparse$C[i+1]),
            col = "#FF000080", lwd = 2)
    } else {                           # down (linlog)
      tk <- NULL
      Ck <- NULL
      for (k in seq(sparse.smpl[i], sparse.smpl[i+1], length.out = 201)) {
        tk <- c(tk, k)
        Ck <- c(Ck, exp(log(sparse$C[i])+abs((k-sparse.smpl[i])/
                       (sparse.smpl[i+1]-sparse.smpl[i]))*
                       (log(sparse$C[i+1])-log(sparse$C[i]))))
      }
      lines(tk, Ck, col = "#FF000080", lwd = 2)
    }
  }
}
axis(1, 0:3)
box()
par(op)
print(result, row.names = FALSE); cat(txt); print(bias, row.names = FALSE)
#               method   AUCt AUCinf   Cmax  tmax
#  exact (from model)  611.80 661.41 53.397 3.102
#  NCA rich sampling   610.25 659.89 53.374 3.000
#  NCA sparse sampling 606.28 656.01 53.092 3.500
# Bias (NCA results compared to the model)
#  sampling    AUCt  AUCinf    Cmax   tmax
#  rich     -0.253% -0.229% -0.043% -0.102
#  sparse   -0.902% -0.816% -0.571% +0.398

Since in the early part (up to the inflection point at \(\small{2\times t_\textrm{max}}\)) the profile is concave, the AUC is underestimated. However, the bias decreases if we extrapolate to infinity.
Naturally, we fare better with rich sampling.
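The direction of the bias is easy to verify: on a concave rising segment the chord lies below the curve. A quick check with a made-up concave function:

C     <- function(t) 1 - exp(-t)   # concave and rising
exact <- integrate(C, 0, 1)$value  # true area: 0.3679
lin   <- 0.5 * 1 * (C(0) + C(1))   # chord: 0.3161, i.e., biased low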


Missing Values

The positive bias of the linear trapezoidal rule gets especially nasty if values are missing. Here I removed the 16 h value of the first simulated biphasic profile.

# you have to run the simulation script before
t.full <- data$t[data$subject == 1]
C.full <- data$C[data$subject == 1]
C.miss <- C.full[-which(t.full == 16)]
t.miss <- t.full[-which(t.full == 16)]
tfirst <- t[!is.na(C.full)][1]
tlast  <- rev(t.full[!is.na(C.full)])[1]
comp   <- data.frame(dataset = c("full", "16 h missing"),
                     lin = NA, linlog = NA)
comp$lin[1]    <- trap.lin(t.full, C.full, tfirst, tlast)
comp$linlog[1] <- trap.linlog(t.full, C.full, tfirst, tlast)
comp$lin[2]    <- trap.lin(t.miss, C.miss, tfirst, tlast)
comp$linlog[2] <- trap.linlog(t.miss, C.miss, tfirst, tlast)
print(comp, row.names = FALSE)
dev.new(width = 4.5, height = 4.5)
op <- par(no.readonly = TRUE)
par(mar = c(4, 2.9, 0.2, 0.1), cex.axis = 0.9,
    xaxs = "r", yaxs = "i")
plot(t.miss, C.miss, type="n", axes = FALSE,
     ylim = c(0, 1.04*max(data$C, na.rm = TRUE)),
     xlab = "time", ylab = "")
mtext("concentration", 2, line = 2)
axis(1, at = c(seq(0, 6, 2), c(9, 12, 16, 24)))
axis(2, las = 1)
abline(h = LLOQ, col = "red")
rug(t, ticksize = -0.015)
points(16, C.full[which(t.full == 16)], pch = 19,
       cex = 1.25, col = "red") # the ‘missing’ one
# linear
lines(t.miss, C.miss, col = "#FF00FF80", lwd = 2)
# lin-up / log-down
for (i in 1:(length(t.miss)-1)) {
  if (!is.na(C.miss[i]) & !is.na(C.miss[i+1])) {
    if (C.miss[i+1] >= C.miss[i]) { # up (linear)
      lines(x = c(t.miss[i], t.miss[i+1]),
            y = c(C.miss[i], C.miss[i+1]),
            col = "#0000FF80", lwd = 2)
    } else {                        # down (linlog)
      tk <- NULL
      Ck <- NULL
      for (k in seq(t.miss[i], t.miss[i+1], length.out = 200)) {
        tk <- c(tk, k)
        Ck <- c(Ck, exp(log(C.miss[i])+abs((k-t.miss[i])/
                       (t.miss[i+1]-t.miss[i]))*
                       (log(C.miss[i+1])-log(C.miss[i]))))
      }
      lines(tk, Ck, col = "#0000FF80", lwd = 2)
    }
  }
}
legend("topright", box.lty = 0, y.intersp = 1.2, lwd = c(2, 2, 1),
       col = c("#FF00FF80", "#0000FF80", "red"), seg.len = 2.5,
       legend = c("linear", "lin-up / log-down", "LLOQ"))
box()
par(op)
#       dataset      lin   linlog
#          full 77.42499 75.71357
#  16 h missing 82.97551 76.56601

Fig. 11 First simulated profile (the red dot marks the ‘missing’ value).

Connecting the 12 and 24 hour data by a straight line, as the linear trapezoidal does, backfires (AUC0–t +7.2%). The lin-up/log-down is much less affected (AUC0–t +1.1%).

Hence, if the linear trapezoidal is used in a comparative BA study, missing values bias the T/R-ratio.

However, as seen in Fig. 1 and Fig. 2, the more similar two concentrations (\(\small{a}\) and \(\small{b}\) in the figures), the more similar the areas obtained by the linear and the log trapezoidal. Or the other way ’round: the narrower the sampling interval, the smaller the difference between the two methods. Hence, if the linear trapezoidal is used, missings in a part of the profile with rich sampling hurt less than in a part with wide intervals.
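A small numerical illustration of this point (made-up concentrations declining exponentially from 4 to 1 over 2 h, with an optional intermediate sample at 1 h):

lin.wide  <- 0.5 * 2 * (4 + 1)                           # 5.000
lin.split <- 0.5 * (4 + 2) + 0.5 * (2 + 1)               # 4.500
log.wide  <- 2 * (1 - 4) / log(1 / 4)                    # 4.328
log.split <- (2 - 4) / log(2 / 4) + (1 - 2) / log(1 / 2) # 4.328

Halving the interval moves the linear result from 5.000 to 4.500, much closer to the log-trapezoidal 4.328, which is exact here since the decline is exponential.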


Recommendation

Always state in the protocol / report which trapezoidal rule you will use / used.10 It helps assessors11 verify your results.

Nowadays the linear trapezoidal rule is of historical interest only and should be thrown into the pharmacometric waste bin.

If your SOPs still call for the linear trapezoidal rule, it’s high time for an update.

Abandon calculating the AUC by the linear trapezoidal rule. Your results will inevitably be biased and will lead to trouble if values are missing.


Post Scriptum

It’s okay to talk about the linear trapezoidal rule at the beginning of the pharmacology curriculum, but only in a historical context.12 Heck, Yeh and Kwan wrote about the bias of the linear trapezoidal already 43 years ago.

[…] the log trapezoidal method can be used advantageously in combination with a second method, such as the linear trapezoidal rule, to yield optimal estimates.
Gibaldi and Perrier, 1982.13

We have been living in the 21st century for a good while now. 😉 Nowadays, the lin-up/log-down trapezoidal rule should be elaborated.14 15

Defining the lag time in NCA as the time point before the time of the first measurable concentration is a convenient consensus. It’s quite obvious that the true one lies somewhere between this time point and the next. A more realistic approach was suggested16 but is not implemented in any software. Before you think about a home-brewed solution: it might be judged by regulatory assessors as borderline modeling – which it isn’t.

I used up to eight significant digits in the example above only to allow accurate recalculation. Our bioanalytical methods are never that precise (anything beyond four significant digits belongs to the realm of science fiction). In the past I experienced questions from assessors when data in full precision (15 digits) were used in calculations but the results were not reproducible from the ‘nice’ rounded numbers given in the analytical and statistical report. This issue will be elaborated in another article.


License

CC BY 4.0 Helmut Schütz 2021
1st version April 03, 2021.
Rendered 2021-06-25 11:44:47 CEST by rmarkdown in 0.27 seconds.

Footnotes and References


  1. Mersmann O, Trautmann H, Steuer D, Bornkamp B. truncnorm: Truncated Normal Distribution. 2018-02-27. CRAN.↩︎

  2. Skelly JP. A History of Biopharmaceutics in the Food and Drug Administration 1968–1993. AAPS J. 2010; 12(1): 44–50. doi:10.1208/s12248-009-9154-8. PMC Free Full Text.↩︎

  3. Yeh KC, Kwan KC. A Comparison of Numerical Integrating Algorithms by Trapezoidal, Lagrange, and Spline Approximation. J Pharmacokin Biopharm. 1978; 6(1): 79–98. doi:10.1007/BF01066064.↩︎

  4. »You press the button, we do the rest.« was an advertising slogan coined by George Eastman, the founder of Kodak, in 1888.↩︎

  5. In NCA defined as the time point before the time of the first measurable concentration.↩︎

  6. Purves RD. Optimum Numerical Integration Methods for Estimation of Area-Under-the-Curve (AUC) and Area-under-the-Moment-Curve (AUMC). J Pharmacokin Biopharm. 1992; 20(3): 211–26. doi:10.1007/bf01062525.↩︎

  7. Lee H-y, Lee Y-j. bear: the data analysis tool for average bioequivalence (ABE) and bioavailability (BA). v2.8.9. 2020-09-21. homepage.↩︎

  8. Denney W, Duvvuri S, Buckeridge C. Simple, Automatic Noncompartmental Analysis: The PKNCA R Package. J Pharmacokin Pharmacodyn. 2015; 42(1): 11–107, S65. doi:10.1007/s10928-015-9432-2.↩︎

  9. Denney B, Buckeridge C, Duvvuri S. PKNCA. Perform Pharmacokinetic Non-Compartmental Analysis. version 0.9.4. 2020-06-01. CRAN.↩︎

  10. WHO. Technical Report Series, No. 992. Annex 6. Multisource (generic) pharmaceutical products: guidelines on registration requirements to establish interchangeability. Section 7.4.7 Parameters to be assessed. Geneva. 2017. download.↩︎

  11. … and consultants. For me it is regularly a kind of scavenger hunt.↩︎

  12. Dost FH. Der Blutspiegel. Konzentrationsabläufe in der Kreislaufflüssigkeit. Leipzig: VEB Thieme; 1953. German.↩︎

  13. Gibaldi M, Perrier D. Pharmacokinetics. Appendix D. Estimation of Areas. New York: Marcel Dekker; 1982.↩︎

  14. Pabst G. Area under the concentration-time curve. In: Cawello W. (ed) Parameters for Compartment-free Pharmacokinetics. Aachen: Shaker Verlag; 2003. p. 65–80.↩︎

  15. Gabrielsson J, Weiner D. Pharmacokinetic and Pharmacodynamic Data Analysis. Stockholm: Apotekarsocieteten; 5th ed. 2016. p. 141–58.↩︎

  16. Csizmadia F, Endrényi L. Model-Independent Estimation of Lag Times with First-Order Absorption and Disposition. J Pharm Sci. 1998; 87(5): 608–12. doi:10.1021/js9703333.↩︎