The script was run on a Xeon E3-1245v3 @ 3.40GHz (1/4 cores) 16 GB RAM with R 4.4.0 on Windows 7 build 7601, Service Pack 1, Universal C Runtime 10.0.10240.16390.
How should we extrapolate the AUC to infinity?
Good question because two variants are implemented in software, namely based on the observed (\(\small{C_\textrm{last}}\)) and the predicted (\(\small{\widehat{C}_\textrm{last}}\)) last concentration: \[AUC_{0-\infty}=AUC_{{0-}\textrm{tlast}} + C_\textrm{last}/\widehat{\lambda}_\textrm{z}\tag{1}\] \[AUC_{0-\infty}=AUC_{{0-}\textrm{tlast}} + \widehat{C}_\textrm{last}/\widehat{\lambda}_\textrm{z}\tag{2}\] where \(\small{AUC_{{0-}\textrm{tlast}}}\) is calculated by a trapezoidal rule, \(\small{\widehat{\lambda}_\textrm{z}}\) and \(\small{\widehat{C}_0}\) are the apparent elimination rate constant and the concentration of a virtual intravenous dose at time zero estimated by semilogarithmic regression. Then \[\eqalign{ \widehat{C}_\textrm{last}&=\widehat{C}_0\cdot\exp(-\widehat{\lambda}_\textrm{z} \cdot \textrm{t}_\textrm{last})\\ \small{\textsf{for}}\;\widehat{\lambda}_\textrm{z}&>0}\tag{3}\]
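The arithmetic behind \(\small{(1)}\)–\(\small{(3)}\) is simple once \(\small{AUC_{{0-}\textrm{tlast}}}\), \(\small{C_\textrm{last}}\), and the terminal fit are at hand. A minimal sketch in R – all names are illustrative assumptions, not a fixed API; the terminal fit is assumed to come from an NCA step like the one further down:

```r
# Illustrative only: both extrapolation variants from eqs. (1)-(3).
# AUC.tlast, Clast, and tlast come from the observed profile; intcpt (the
# intercept of the semilogarithmic regression, i.e., the log of C0-hat) and
# lambda.z come from the terminal fit. Names are assumptions.
AUC.inf <- function(AUC.tlast, Clast, tlast, intcpt, lambda.z) {
  Clast.pred <- exp(intcpt - lambda.z * tlast)      # eq. (3)
  c(obs  = AUC.tlast + Clast / lambda.z,            # eq. (1)
    pred = AUC.tlast + Clast.pred / lambda.z)       # eq. (2)
}
```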
However, this might be a somewhat academic question in most cases.
“When the time course is measured until plasma concentration becomes 5% of its maximum, the relative cutoff errors in AUC, MRT, and VRT are smaller than about 5%, 10%, and 40%, respectively […]. If the time course is measured until plasma concentration becomes 1% of its maximum, the relative cutoff errors in AUC, MRT, and VRT are smaller than about 1%, 2%, and 10%, respectively.
In the early days of Noncompartmental Analysis \(\small{(1)}\) was commonly used. Already 44 years ago \(\small{(2)}\) was occasionally used (e.g., in a project2 sponsored by the FDA) and it was recommended for the first time 33 years ago.3 4 5 6 For a while it was mandatory for manuscripts submitted to the ‘Bioequivalence Section’ of the International Journal of Clinical Pharmacology, Therapy and Toxicology (edited by one of the pioneers of bioequivalence, Henning Blume). Other references exist as well: one by an author of the FDA7 and more recent ones.8 9 10 11
Makes sense. \(\small{C_\textrm{last}}\) might be close to the Lower Limit of Quantification (LLOQ) and hence has the lowest accuracy and highest variability of the entire profile. Basing the extrapolation on it sets a fox to keep the geese: the intrinsically high variability of \(\small{C_\textrm{last}}\) propagates into the extrapolated area. \(\small{\widehat{C}_\textrm{last}}\), on the other hand, takes the information of preceding (higher and therefore more accurate and precise) concentrations into account and thus is in general more reliable.
“In general, we recommend use of the predicted rather than the observed last concentration when computing the extrapolated area.
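The effect is easy to see in a toy example (independent of the full simulation below): give the terminal samples a constant multiplicative error and compare the spread of the two tail areas. All values here are arbitrary and chosen only for illustration.

```r
# Toy illustration: four terminal samples with ~25% log-normal error;
# compare the CV of Clast / lambda.z (observed) with the CV of
# Clast.pred / lambda.z (predicted). Values are arbitrary.
set.seed(42)
t        <- c(12, 16, 20, 24)                     # terminal sampling times (h)
lambda.z <- log(2) / 5                            # true value: t1/2 = 5 h
C.true   <- 20 * exp(-lambda.z * t)               # error-free concentrations
extr     <- replicate(1000, {
  C  <- C.true * exp(rnorm(length(t), sd = 0.25)) # multiplicative error
  m  <- lm(log(C) ~ t)                            # unweighted semilog regression
  lz <- -coef(m)[[2]]
  c(obs  = tail(C, 1) / lz,                       # tail based on observed Clast
    pred = exp(coef(m)[[1]] - lz * tail(t, 1)) / lz) # based on predicted Clast
})
round(100 * apply(extr, 1, sd) / rowMeans(extr), 1)  # CV (%) of both tails
```

The observed tail inherits the full error of the last sample, whereas the predicted one is smoothed by the regression over all terminal points.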
Little is mentioned in guidelines. According to the WHO, the method has to be specified in the protocol.12 The FDA,13 China’s CDE, the ANVISA,14 and the WHO15 recommend \(\small{(1)}\), whereas Health Canada16 recommends \(\small{(2)}\). Nothing is stated by the EMA17 18 and the ICH.19
I have been using solely \(\small{(2)}\) since the early 1990s and have never received a single deficiency letter in this respect (none from the FDA either). Of course, the method should be outlined in the protocol.16 Referring only to an SOP is not a good idea because it is not accessible to regulatory assessors in the first place. My standard wording for drugs in IR formulations following monoexponential elimination:
“Extrapolation from \(\small{\textrm{t}_\textrm{last}}\) to infinite time will be performed by fitting – at least three – terminal concentrations to the monoexponential model \(\small{\widehat{C}_\textrm{t}=\widehat{C}_0\cdot\exp(-\widehat{\lambda}_\textrm{z}\cdot \textrm{t})}\) by means of unweighted semilogarithmic regression. The starting point for the estimation will default to \(\small{\geq 2\times \textrm{t}_\textrm{max}}\).20 The selected time points will be adjusted if deemed necessary, i.e., a visual inspection of fits will be performed.3 5 8 20 Increasing concentrations in the late phase of the profile will be excluded from the estimation of \(\small{\widehat{\lambda}_\textrm{z}}\). To avoid under- or over-estimation of the extrapolated area, the estimated last concentration \(\small{\widehat{C}_\textrm{last}}\) will be used in extrapolations.3 5 8 11
If PK is expected to be multiphasic, i.e., with distribution phase(s) and elimination, the second sentence is:
“The intersection of the last two phase lines in the (semilogarithmic) profile post \(\small{C_\textrm{max}}\) is used as a ‘visual marker’ for the beginning of the monoexponential terminal phase.20
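For the monoexponential case the first wording translates into a few lines of code. This is only a sketch of the fixed \(\small{\geq 2\times \textrm{t}_\textrm{max}}\) rule, assuming vectors t and C; the automatic start-point selection by maximizing \(\small{R_\textrm{adj}^2}\) is implemented in est.elim() of the simulation script below.

```r
# Sketch only: unweighted semilogarithmic regression of all concentrations
# sampled at >= 2 x tmax (at least three). Vectors t and C are assumed;
# exclusion of increasing late concentrations and visual inspection of the
# fit are deliberately omitted here.
terminal.fit <- function(t, C) {
  tmax <- t[which.max(C)]
  keep <- which(t >= 2 * tmax & !is.na(C) & C > 0)
  if (length(keep) < 3) return(NA)           # too few points for a fit
  m    <- lm(log(C[keep]) ~ t[keep])         # unweighted semilog regression
  list(lambda.z   = -coef(m)[[2]],           # apparent elimination rate constant
       C0.hat     = exp(coef(m)[[1]]),       # back-extrapolated intercept
       Clast.pred = exp(tail(fitted(m), 1))) # predicted last concentration
}
```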
Algorithms for the estimation of \(\small{\widehat{\lambda}_\textrm{z}}\) will be elaborated in another article.
It must be mentioned that any algorithm might fail on ‘flat’ profiles (controlled release formulations with flip-flop PK) or on profiles of multiphasic release formulations.
Nota bene, visual inspection of fits is mandatory.3 5 8 20 It was shown in simulations of two-compartment models to outperform standard automatic methods.21 Our brain is an excellent pattern recognition machine.
One software vendor rightly notes:
“Using this methodology, Phoenix will almost always compute an estimate for Lambda Z. It is the user’s responsibility to evaluate the appropriateness of the estimated value.
Both methods are implemented in numerous pieces of software. Most commonly used terminology:
| Term | Metric |
|---|---|
| `AUClast` | \(\small{AUC_{{0-}\textrm{tlast}}}\) |
| `AUCINF_obs` | \(\small{AUC_{0-\infty}}\) based on \(\small{(1)}\) |
| `AUCINF_pred` | \(\small{AUC_{0-\infty}}\) based on \(\small{(2)}\) |
| `Clast_pred` | \(\small{\widehat{C}_\textrm{last}}\) |
| `Lambda_z` | \(\small{\widehat{\lambda}_\textrm{z}}\) |
| `Lambda_z_intercept` | \(\small{\widehat{C}_0}\) |
Already in 1993 both methods were implemented in TopFit 2.0.23 They are available in Phoenix WinNonlin24 (see the User’s Guide or the online manual25 for details), PKanalix,26 the R packages PKNCA,27 28 ncappc,29 qpNCA,30 and ubiquity,31 32 33 as well as in the Julia library Pumas.34 35
One-compartment model, first order absorption and elimination, no lag time. PK-parameters: \(\small{D=200}\), \(\small{f=0.80}\), \(\small{V=3}\), absorption half life 45 minutes, elimination half life five hours. The error distribution was log-normal, where the error increased with decreasing concentrations. The LLOQ was set to 4% of the model’s \(\small{C_\textrm{max}}\). Simulated concentrations below the LLOQ prior to \(\small{t_\textrm{max}}\) were set to zero, and later ones to `NA`. The automatic algorithm for the estimation of \(\small{\widehat{\lambda}_\textrm{z}}\) is based on maximizing \(\small{R_\textrm{adj}^2}\), which is the default in other software as well.

Sorry, 264 LOC.
| Function | Purpose |
|---|---|
| `one.comp()` | Calculates concentrations, the exact (i.e., the model’s) \(\small{C_\textrm{max}}\), \(\small{t_\textrm{max}}\), \(\small{AUC_{0-\textrm{tlast}}}\), \(\small{AUC_{0-\infty}}\), and the LLOQ for a given vector of sampling times, PK-parameters, and the LLOQ specified as a fraction of the model’s \(\small{C_\textrm{max}}\). |
| `micro2macro()` | Converts the model’s PK-parameters (micro constants \(\small{f}\), \(\small{D}\), \(\small{V}\), \(\small{k_{01}}\), and \(\small{k_{10}}\)) to macro (hybrid) constants \(\small{A}\), \(\small{B}\), \(\small{\alpha}\), and \(\small{\beta}\). Note that in a model without lag time \(\small{\left| A \right| = B}\). |
| `round.up()` | Rounds `x` up to the next multiple of `y`. |
| `sampling()` | Calculates a vector of ‘optimal’ sampling times: equally spaced up to \(\small{t_\textrm{max}}\) and then following a geometric progression to \(\small{t_\textrm{last}}\). |
| `est.elim()` | Estimates the apparent elimination by semilogarithmic regression. Returns \(\small{R_\textrm{adj}^2}\), \(\small{\widehat{C}_0}\), \(\small{\widehat{\lambda}_\textrm{z}}\), start- and end-times, and the number of values used. If estimation fails (slope ≥ 0), returns `NA` for all. |
| `calc.AUC()` | Calculates \(\small{AUC_{{0-}\textrm{tlast}}}\) by a trapezoidal rule. Implemented are `linlog` (default) and `linear`. See this article for details. |
| `AUC.extr()` | Extrapolates the \(\small{AUC}\). Returns \(\small{AUC_{{0-}\textrm{tlast}}}\) (for comparison), \(\small{AUC_{0-\infty}}\) (observed and predicted), \(\small{C_\textrm{last}}\), and \(\small{\widehat{C}_\textrm{last}}\). |
| `sum.simple()` | Nonparametric summary (i.e., removes the arithmetic mean from `summary()`). |
| `geom.stat()` | Calculates the geometric mean and CV. |
```r
sim.auc <- function(D, f, V, t12.a, t12.e, n1, n2, tlast,
                    mins, CV0, LLOQ.f, rule, nsims, setseed = TRUE,
                    show.fits = FALSE, progress = FALSE) {
  one.comp <- function(f, D, V, k01, k10, t, LLOQ.f) {
    # one-compartment model, first order absorption
    # and elimination, no lagtime
    x <- micro2macro(f, D, V, k01, k10)  # get hybrid (macro) constants
    if (!isTRUE(all.equal(k01, k10))) {  # common: k01 != k10
      C    <- f * D * k01 / (V * (k01 - k10)) *
              (exp(-k10 * t) - exp(-k01 * t))
      # values based on the model
      tmax <- log(k01 / k10) / (k01 - k10)
      Cmax <- f * D * k01 / (V * (k01 - k10)) *
              (exp(-k10 * tmax) - exp(-k01 * tmax))
      AUC  <- f * D / V / k10
      AUCt <- (x$C[["A"]] - x$C[["A"]] * exp(-x$E[["alpha"]] * tlast)) /
              x$E[["alpha"]] +
              (x$C[["B"]] - x$C[["B"]] * exp(-x$E[["beta"]] * tlast)) /
              x$E[["beta"]]
    } else {                             # flip-flop
      k    <- k10
      C    <- f * D / V * k * t * exp(-k * t)
      tmax <- 1 / k
      Cmax <- f * D / V * k * tmax * exp(-k * tmax)
      AUC  <- f * D / V / k
      AUCt <- NA                         # sorry, no idea
    }
    LLOQ <- Cmax * LLOQ.f
    C[C <= LLOQ] <- NA                   # set values below the LLOQ to NA
    C[which(is.na(C[t < tmax]))] <- 0    # set NAs prior tmax to zero
    res  <- list(C = C, Cmax = Cmax, tmax = tmax, AUC = AUC, AUCt = AUCt,
                 LLOQ = LLOQ)
    return(res)
  }
  micro2macro <- function(f, D, V, k01, k10) {
    # Convert parameters (micro constants) to macro (hybrid) constants
    # Coefficients (C) and exponents (E)
    C     <- f * D * k01 / (V * (k01 - k10))
    C     <- setNames(c(-C, +C), c("A", "B"))
    E     <- setNames(c(k01, k10), c("alpha", "beta"))
    macro <- list(C = C, E = E)
    return(macro)
  }
  round.up <- function(x, y) {
    # round x up to the next multiple of y
    return(y * (x %/% y + as.logical(x %% y)))
  }
  sampling <- function(f, D, V, k01, k10, n1, n2, tlast, mins, LLOQ.f) {
    # <= tmax: equally spaced
    # >= tmax: geometric progression
    # rounded to 'mins' minutes
    tmax <- one.comp(f, D, V, k01, k10, t = tlast, LLOQ.f)$tmax
    t    <- seq(0, tmax, length.out = n1)
    for (i in (n1+1):(n1+n2-1)) {
      t[i] <- t[i-1] * (tlast / tmax)^(1 / (n2 - 1))
    }
    t            <- round.up(t * 60, mins) / 60
    t[length(t)] <- tlast
    return(t)
  }
  est.elim <- function(t, C) {
    # estimate lambda.z by "maximum adjusted R-squared approach"
    data   <- data.frame(t = t, C = C)
    Cmax   <- max(data$C, na.rm = TRUE)
    tmax   <- data$t[data$C[!is.na(data$C)] == Cmax]
    data   <- data[data$t > tmax, ]        # discard tmax and earlier
    data   <- data[complete.cases(data), ] # discard NAs
    lz.end <- tail(data$t, 1)
    # start with the last three concentrations
    x     <- tail(data, 3)
    r2    <- a <- b <- numeric()
    m     <- lm(log(C) ~ t, data = x)
    a[1]  <- coef(m)[[1]]
    b[1]  <- coef(m)[[2]]
    r2[1] <- summary(m)$adj.r.squared
    # work backwards towards tmax
    i <- 1
    for (j in 4:nrow(data)) {
      i     <- i + 1
      x     <- tail(data, j)
      m     <- lm(log(C) ~ t, data = x)
      a[i]  <- coef(m)[[1]]
      b[i]  <- coef(m)[[2]]
      r2[i] <- summary(m)$adj.r.squared
      # don't proceed if no improvement
      if (r2[i] < r2[i-1] | abs(r2[i] - r2[i-1]) < 0.0001) break
    }
    # location of the largest adjusted R2
    loc <- which(r2 == max(r2, na.rm = TRUE))[1]
    if (b[loc] >= 0 || r2[loc] <= 0) {     # not meaningful
      R2adj <- intcpt <- lambda.z <- lz.start <- lz.end <- lz.n <- NA
    } else {
      R2adj    <- r2[loc]
      intcpt   <- a[loc]
      lambda.z <- -b[loc]
      lz.start <- x$t[2]
      lz.n     <- nrow(x)
    }
    res <- data.frame(R2adj = R2adj, intcpt = intcpt, lambda.z = lambda.z,
                      lz.start = lz.start, lz.end = lz.end, lz.n = lz.n)
    return(res)
  }
  calc.AUC <- function(t, C, rule = "linlog",
                       digits = 5, details = FALSE) {
    x <- data.frame(t = t, C = C, pAUC = 0)
    x <- x[with(x, order(t)), ]
    x <- x[complete.cases(x), ]
    for (i in 1:(nrow(x) - 1)) {
      if (rule == "linlog") {       # linear-up / log-down
        if (x$C[i+1] < x$C[i]) {
          x$pAUC[i+1] <- (x$t[i+1] - x$t[i]) * (x$C[i+1] - x$C[i]) /
                         log(x$C[i+1] / x$C[i])
        } else {
          x$pAUC[i+1] <- 0.5 * (x$t[i+1] - x$t[i]) *
                         (x$C[i+1] + x$C[i])
        }
      } else {                      # linear
        x$pAUC[i+1] <- 0.5 * (x$t[i+1] - x$t[i]) *
                       (x$C[i+1] + x$C[i])
      }
    }
    x$AUC <- cumsum(x$pAUC)
    x     <- x[with(x, order(t)), ] # sort by time
    if (details) {                  # entire data.frame
      res <- round(x, digits)
    } else {                        # only tlast and AUClast
      res <- setNames(as.numeric(round(tail(x[, c(1, 4)], 1), digits)),
                      c("tlast", "AUClast"))
    }
    if (rule == "linlog") {         # cosmetics ;-)
      attr(res, "trapezoidal rule") <- "linear-up/log-down"
    } else {
      attr(res, "trapezoidal rule") <- "linear"
    }
    return(res)
  }
  AUC.extr <- function(t, C, intcpt, lambda.z) {
    x           <- calc.AUC(t, C, rule = rule, details = TRUE)
    AUClast     <- tail(x$AUC, 1)
    Clast_obs   <- tail(x$C, 1)
    AUCinf_obs  <- AUClast + tail(x$C, 1) / lambda.z
    Clast_pred  <- exp(intcpt - lambda.z * tail(x$t, 1))
    AUCinf_pred <- AUClast + Clast_pred / lambda.z
    res         <- list(AUClast = AUClast, Clast_obs = Clast_obs,
                        AUCinf_obs = AUCinf_obs, Clast_pred = Clast_pred,
                        AUCinf_pred = AUCinf_pred)
    return(res)
  }
  sum.simple <- function(x, digits = 4) {
    # nonparametric summary: remove arithmetic means
    res <- summary(x)[-4, ]
    return(res)
  }
  geom.stat <- function(x) {
    stats <- as.data.frame(matrix(nrow = 2, ncol = ncol(x),
                                  dimnames = list(c("geom. mean",
                                                    "geom. CV (%)"),
                                                  names(x))))
    for (i in 1:ncol(stats)) {
      stats[1, i] <- exp(mean(log(x[, i]), na.rm = TRUE))
      stats[2, i] <- 100 * sqrt(exp(sd(log(x[, i]), na.rm = TRUE)^2) - 1)
    }
    return(stats)
  }
  k01 <- log(2) / t12.a # absorption rate constant
  k10 <- log(2) / t12.e # elimination rate constant
  # generate sampling schedule
  t      <- sampling(f, D, V, k01, k10, n1, n2, tlast, mins, LLOQ.f)
  x      <- one.comp(f, D, V, k01, k10, t, LLOQ.f)
  C0     <- x$C              # theoretical profile (based on model)
  Cmax   <- x$Cmax
  tmax   <- x$tmax
  LLOQ   <- x$LLOQ
  CV     <- CV0 - C0 * 0.006 # variability increases with decreasing C
  varlog <- log(CV^2 + 1)
  sdlog  <- sqrt(varlog)
  aggr1  <- data.frame(R2adj = NA_real_, intcpt = NA_real_,
                       lambda.z = NA_real_, RE = NA_real_, t.half = NA_real_,
                       lz.start = NA_real_, lz.end = NA_real_,
                       lz.n = NA_integer_)
  aggr2  <- data.frame(AUClast = NA_real_, Clast_obs = NA_real_,
                       AUCinf_obs = NA_real_, Clast_pred = NA_real_,
                       AUCinf_pred = NA_real_)
  aggr3  <- data.frame(matrix(NA, nrow = nsims, ncol = length(t)))
  C      <- numeric()
  if (progress) pb <- txtProgressBar(0, 1, 0, width = NA, style = 3)
  if (setseed) set.seed(123456)
  for (i in 1:nsims) {
    for (j in 1:length(C0)) {
      if (is.na(C0[j])) {
        C[j] <- NA         # otherwise, rlnorm() fails
      } else {
        C[j] <- rlnorm(1, meanlog = log(C0[j]) - 0.5 * varlog[j],
                       sdlog = sdlog[j])
      }
    }
    # C < LLOQ set to NA, NAs prior to tmax set to zero
    Cmax.tmp <- max(C, na.rm = TRUE)
    tmax.tmp <- t[C[!is.na(C)] == Cmax.tmp]
    C[C <= LLOQ] <- NA
    C[which(is.na(C[t < tmax.tmp]))] <- 0
    aggr3[i, ] <- C
    aggr1[i, c(1:3, 6:8)] <- est.elim(t, C)[1, ]
    if (!is.na(aggr1$lambda.z[i])) { # only if successful fit
      aggr1$t.half[i] <- log(2) / aggr1$lambda.z[i]
      aggr1$RE[i]     <- 100 * (aggr1$lambda.z[i] - k10) / k10
      aggr2[i, ]      <- as.numeric(AUC.extr(t, C,
                                             aggr1$intcpt[i],
                                             aggr1$lambda.z[i]))
    }
    if (progress) setTxtProgressBar(pb, i / nsims)
  }
  if (progress) close(pb)
  method <- "trapezoidal rule"
  ifelse (rule == "linlog",
          method <- paste("linear-up / log-down", method),
          method <- paste("linear", method))
  NAs <- data.frame(time = t,
                    NAs = sapply(aggr3, function(x) sum(length(which(is.na(x))))))
  cat("Results from the model (without error)",
      "\n AUClast", sprintf("%.3f", x$AUCt),
      "\n AUCinf ", sprintf("%.3f", x$AUC),
      paste0("\n\n", prettyNum(nsims, big.mark = ","),
             " simulated profiles, ", method, "\n"))
  print(NAs, row.names = FALSE)
  if (nrow(sum.simple(aggr1)) == 6) { # there is a NA row
    failed <- as.integer(gsub("[^[:digit:]]", "", sum.simple(aggr1)[6, 1]))
    cat("In", failed, "profile(s) automatic estimation of the",
        "\napparent elimination failed (nonnegative slope).\n\n")
  }
  print(signif(geom.stat(aggr2), 4))
  if (show.fits) {
    cat("\n")
    print(sum.simple(aggr1))
  }
}
```
```r
D         <- 200      # dose
f         <- 0.8      # fraction absorbed (BA)
V         <- 3        # volume of distribution
t12.a     <- 0.75     # absorption half life
t12.e     <- 5        # elimination half life
n1        <- 4        # number of samples <= tmax
n2        <- 9        # number of samples >= tmax (n = n1 + n2 - 1)
tlast     <- 24       # last sampling time point (h)
mins      <- 15       # rounding of sampling times (minutes)
CV0       <- 0.40     # CV at low concentrations
LLOQ.f    <- 0.04     # fraction of theoretical Cmax
rule      <- "linlog" # if you insist: "linear"
nsims     <- 2.5e3L   # number of simulations
setseed   <- TRUE     # for reproducibility (default)
show.fits <- FALSE    # TRUE for nonparametric summary of fits
progress  <- FALSE    # TRUE for a progress bar
sim.auc(D, f, V, t12.a, t12.e, n1, n2, tlast, mins, CV0, LLOQ.f,
        rule, nsims, setseed, show.fits, progress)
```
# Results from the model (without error)
# AUClast 368.471
# AUCinf 384.719
#
# 2,500 simulated profiles, linear-up / log-down trapezoidal rule
# time NAs
# 0.00 0
# 1.00 0
# 1.75 0
# 2.50 0
# 3.25 0
# 4.50 0
# 5.75 0
# 7.75 0
# 10.25 0
# 13.75 0
# 18.25 2
# 24.00 531
# In 6 profile(s) automatic estimation of the
# apparent elimination failed (nonnegative slope).
#
# AUClast Clast_obs AUCinf_obs Clast_pred AUCinf_pred
# geom. mean 356.700 2.769 382.000 2.645 380.800
# geom. CV (%) 8.316 41.520 9.641 39.570 9.609
Even the linear-up/log-down trapezoidal rule underestimates \(\small{AUC_{{0-}\textrm{tlast}}}\) because the profile is concave up to the inflection point at \(\small{2\times t_\textrm{max}}\). There is nothing we can do about it (see another article).
Note that the variability of \(\small{C_\textrm{last}}\) is larger than that of \(\small{\widehat{C}_\textrm{last}}\). As expected, it propagates partly into the extrapolated \(\small{AUC\textrm{s}}\), where the variability based on \(\small{(1)}\) is – granted, only slightly – larger than the one based on \(\small{(2)}\).
In either case the variability of \(\small{AUC_{0-\infty}}\) is larger than the one of \(\small{AUC_{{0-}\textrm{tlast}}}\). For immediate release formulations point estimates of \(\small{\textrm{p}AUC_{0-\textrm{t}\,\geq\,\small{2\times t_\textrm{max}}}}\) are stable; only the variability of the respective partial areas increases with time.36
“Once absorption is over, formulation differences no longer apply.
However, this might be relevant in jurisdictions applying Average Bioequivalence with Expanding Limits (ABEL) where \(\small{AUC_{0-\infty}}\) is a primary pharmacokinetic metric, because the late part of the profile represents absorption (e.g., controlled release formulations for the EMA, where reference-scaling is acceptable only for \(\small{C_\textrm{max}}\) and the sample size depends on \(\small{AUC}\); see this article for an example).
\(\small{\widehat{\lambda}_\textrm{z}}\) obtained by an automatic algorithm should be inspected for its plausibility.
\(\small{AUC_{0-\infty}}\) based on either \(\small{(1)}\) or \(\small{(2)}\) can be used. The planned method must be unambiguously stated in the protocol.
From a theoretical point of view – and as demonstrated in the Simulation above – \(\small{AUC_{0-\infty}}\) calculated by \(\small{(2)}\) is expected to result in slightly lower variability and is therefore recommended.2 3 4 5 6 7 8 9 10 11 16 However, in the common case that the sample size is based on \(\small{C_\textrm{max}}\),37 this advantage over \(\small{(1)}\) is likely not relevant.
Helmut Schütz 2024. Licenses: R GPL 3.0, klippy MIT, pandoc GPL 2.0. 1st version March 6, 2022. Rendered May 12, 2024 01:05 CEST by rmarkdown via pandoc in 0.18 seconds.
Yamaoka K, Nakagawa T, Uno T. Statistical Moments in Pharmacokinetics. J Pharmacokin Biopharm. 1978; 6(6): 547–58. doi:10.1007/bf01062109.↩︎
Upton RA, Sansom L, Guentert TW, Powell JR, Thiercellin J-F, Shah VP, Coates PE, Riegelman S. Evaluation of the Absorption from 15 Commercial Theophylline Products Indicating Deficiencies in Currently Applied Bioavailability Criteria. J Pharmacokin Biopharm. 1980; 8(3): 229–42. doi:10.1007/BF01059644.↩︎
Schulz H-U, Steinijans, VW. Striving for standards in bioequivalence assessment: a review. Int J Clin Pharm Ther Toxicol. 1991; 29(8): 293–8. PMID:1743802.↩︎
Purves RD. Bias and Variance of Extrapolated Tails for Area-Under-the-Curve (AUC) and Area-Under-the-Moment-Curve (AUMC). J Pharmacokin Biopharm. 1992; 20(5): 501–10. doi:10.1007/BF01061468.↩︎
Sauter R, Steinijans VW, Diletti E, Böhm E, Schulz H-U. Presentation of results from bioequivalence studies. Int J Clin Pharm Ther Toxicol. 1992; 30(7): 233–56. PMID:1506127.↩︎
Wagner JG. Pharmacokinetics for the Pharmaceutical Scientist. Lancaster, Basel: Technomic; 1993. p. 88.↩︎
Abdallah HY. An area correction method to reduce intrasubject variability in bioequivalence studies. J Pharm Pharmaceut Sci. 1998; 1(2): 60–5. Open Access.↩︎
Hauschke D, Steinjans V, Pigeot I. Bioequivalence Studies in Drug Development. Chichester: Wiley; 2007. p. 131.↩︎
Derendorf H, Gramatté T, Schäfer HG, Staab A. Pharmakokinetik kompakt. Grundlagen und Praxisrelevanz. Stuttgart: Wissenschaftliche Verlagsanstalt; 3. Auflage 2011. p. 127. [German]↩︎
Fisher D, Kramer W, Burmeister Getz E. Evaluation of a Scenario in Which Estimates of Bioequivalence Are Biased and a Proposed Solution: tlast (Common). J Clin Pharm. 2016; 56(7): 794–800. doi:10.1002/jcph.663. Open Access.↩︎
Gabrielsson J, Weiner D. Pharmacokinetic and Pharmacodynamic Data Analysis. Stockholm: Apotekarsocieteten; 5th ed. 2016. p. 147.↩︎
WHO. Technical Report Series, No. 996. Annex 9. Guidance for organizations performing in vivo bioequivalence studies. Geneva. 2016. Online.↩︎
FDA, CDER. Draft Guidance. Bioequivalence Studies With Pharmacokinetic Endpoints for Drugs Submitted Under an ANDA. Silver Spring. August 2021. Download.↩︎
ANVISA. Resolução - RDC Nº 742. Dispõe sobre os critérios para a condução de estudos de biodisponibilidade relativa / bioequivalência (BD/BE) e estudos farmacocinéticos. Brasilia. August 10, 2022. Effective July 3, 2023. Online.↩︎
WHO. Technical Report Series, No. 1003. Annex 6. Multisource (generic) pharmaceutical products: guidelines on registration requirements to establish interchangeability. Section 7.4.7 Parameters to be assessed. Geneva. 2017. Online.↩︎
Health Canada. Guidance Document: Conduct and Analysis of Comparative Bioavailability Studies. Appendix 1. Ottawa. 2018/06/08. Online.↩︎
EMA, CHMP. Guideline on the Investigation of Bioequivalence. London. 20 January 2010. Online.↩︎
EMA, CHMP. Guideline on the pharmacokinetic and clinical evaluation of modified release dosage forms. London. 20 November 2014. Online.↩︎
ICH. Bioequivalence for Immediate Release Solid Oral Dosage Forms. M13A. Draft version. 20 December 2022. Online.↩︎
Scheerans C, Derendorf H, Kloft C. Proposal for a Standardised Identification of the Mono-Exponential Terminal Phase for Orally Administered Drugs. Biopharm Drug Dispos. 2008; 29(3): 145–57. doi:10.1002/bdd.596.↩︎
Noe DA. Performance characteristics of the adjusted r2 algorithm for determining the start of the terminal disposition phase and comparison with a simple r2 algorithm and a visual inspection method. Pharmaceut Stat. 2019; 1–13. doi:10.1002/pst.1979.↩︎
Certara USA, Inc. Princeton, NJ. 7/9/20. Lambda Z or Slope Estimation settings. Online.↩︎
Heinzel G, Woloszczak R, Thomann R. TopFit 2.0. Pharmacokinetic and Pharmacodynamic Data Analysis System for the PC. Stuttgart: Springer; 1993.↩︎
Certara USA, Inc. Princeton, NJ. 2022. Phoenix WinNonlin. Online.↩︎
Certara USA, Inc. Princeton, NJ. 7/9/20. NCA parameter formulas. Online.↩︎
LIXOFT, Antony, France. 2021. PKanalix Documentation. NCA parameters. Online.↩︎
Denney W, Duvvuri S, Buckeridge C. Simple, Automatic Noncompartmental Analysis: The PKNCA R Package. J Pharmacokinet Pharmacodyn. 2015; 42(1): 11–107, S65. doi:10.1007/s10928-015-9432-2.↩︎
Denney B, Buckeridge C, Duvvuri S. PKNCA. Perform Pharmacokinetic Non-Compartmental Analysis. Package version 0.10.2. 2023-04-29. CRAN.↩︎
Acharya C, Hooker AC, Turkyilmaz GY, Jonsson S, Karlsson MO. ncappc: NCA Calculations and Population Model Diagnosis. Package version 0.3.0. 2018-08-24. CRAN.↩︎
Huisman J, Jolling K, Mehta K, Bergsma T. qpNCA: Noncompartmental Pharmacokinetic Analysis by qPharmetra. Package version 1.1.6. 2021-08-16. CRAN.↩︎
Harrold JM, Abraham AK. Ubiquity: a framework for physiological/mechanism-based pharmacokinetic / pharmacodynamic model development and deployment. J Pharmacokinet Pharmacodyn. 2014; 41(2), 141–51. doi:10.1007/s10928-014-9352-6.↩︎
Harrold J. ubiquity: PKPD, PBPK, and Systems Pharmacology Modeling Tools. Package version 2.0.3. 2024-03-08. CRAN.↩︎
Rackauckas C, Ma Y, Noack A, Dixit V, Kofod Mogensen P, Byrne S, Maddhashiya S, Bayoán Santiago Calderón J, Nyberg J, Gobburu JVS, Ivaturi V. Accelerated Predictive Healthcare Analytics with Pumas, a High Performance Pharmaceutical Modeling and Simulation Platform. Nov 30, 2020. doi:10.1101/2020.11.28.402297. Preprint: bioRxiv 2020.11.28.40229.↩︎
Midha KK, Hubbard JW, Rawson MJ. Retrospective evaluation of relative extent of absorption by the use of partial areas under plasma concentration versus time curves in bioequivalence studies on conventional release products. Eur J Pharm Sci. 1996; 4(6): 381–4. doi:10.1016/0928-0987(95)00166-2.↩︎
Very rarely the variability of \(\small{AUC}\) is larger than the one of \(\small{C_\textrm{max}}\) and hence, drives the sample size.↩︎