#' ---
#' title: "An example to illustrate boundary re-calculations during the trial"
#' author: "Marcel Wolbers, Gernot Wassmer, and Friedrich Pahlke"
#' date: "Last change: `r format(Sys.time(), '%d %B, %Y')`"
#' number: 6
#' header-includes:
#' - \usepackage{fancyhdr}
#' - \pagestyle{fancy}
#' - \setlength{\headheight}{23pt}
#' - \fancyfoot[C]{www.rpact.com}
#' - \fancyfoot[L]{\thepage}
#' output:
#'   rmarkdown::html_document:
#'     highlight: pygments
#'     number_sections: yes
#'     self_contained: yes
#'     toc: yes
#'     toc_depth: 3
#'     toc_float: yes
#'     css: style.css
#'     includes:
#'       before_body: header.html
#'       after_body: footer2.html
#' ---
#'
#'
#'
#' This R markdown file provides an example of updating the group-sequential boundaries during a trial when using an $\alpha$-spending function approach based on observed information rates.
#'
## ---- include=TRUE, echo=TRUE--------------------------------------------
# Load rpact
library(rpact)
packageVersion("rpact") # version should be 2.0.1 or later
#'
#'
#' Group-sequential designs based on $\alpha$-spending functions protect the type I error rate exactly even if the pre-planned interim schedule is not adhered to. However, this requires re-calculation of the group-sequential boundaries at each interim analysis based on the actually observed information fractions. Unless deviations from the planned information fractions are substantial, the re-calculated boundaries are quite similar to the pre-planned boundaries and the re-calculation will affect the actual test decision only on rare occasions.
#'
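#' To illustrate how little the boundaries typically change under a small deviation, the following sketch (with illustrative information fractions, not taken from the example below) derives the O'Brien & Fleming type $\alpha$-spending boundaries once for the planned and once for slightly deviating information fractions:
#'
## ---- include=TRUE, echo=TRUE--------------------------------------------
# Planned information fractions (illustrative values)
plannedDesign <- getDesignGroupSequential(sided = 1, alpha = 0.025,
    informationRates = c(0.5, 0.75, 1), typeOfDesign = "asOF")
# Slightly deviating observed information fractions
observedDesign <- getDesignGroupSequential(sided = 1, alpha = 0.025,
    informationRates = c(0.53, 0.74, 1), typeOfDesign = "asOF")
# Critical values on the Z-value scale change only slightly
round(plannedDesign$criticalValues, 3)
round(observedDesign$criticalValues, 3)
#'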
#' Importantly, it is not allowed that the timing of future interim analyses is "motivated" by results from earlier interim analyses, as this could inflate the type I error rate. Deviations from the planned information fractions should thus only occur for operational reasons (as it is difficult to hit the planned number of events exactly in a real trial) or due to external evidence.
#'
#' The general principles for these boundary re-calculations are as follows (see also Wassmer & Brannath, 2016, p. 78f):
#'
#' - Updates at interim analyses prior to the final analysis:
#'     + Information fractions are updated according to the actually observed information fraction at the interim analysis relative to the **planned** maximum information.
#'     + The planned $\alpha$-spending function is then applied to these updated information fractions.
#' - Updates at the final analysis in case the observed information at the final analysis is larger ("over-running") or smaller ("under-running") than the planned maximum information:
#'     + Information fractions are updated according to the actually observed information fractions at all interim analyses relative to the **observed** maximum information. $\Rightarrow$ The information fraction at the final analysis is reset to 1, but the information fractions for earlier interim analyses are also changed.
#'     + The originally planned $\alpha$-spending function cannot be applied to these updated information fractions because this would modify the critical boundaries of earlier interim analyses, which is clearly not allowed. Instead, one uses the $\alpha$ that has actually been spent at earlier interim analyses and spends all remaining $\alpha$ at the final analysis.
#'
#' This general principle can be implemented via a user-defined $\alpha$-spending function and is illustrated for an example trial with a survival endpoint below.
#'
#'
#' # Original trial design
#'
#' The original trial design for this example is based on a standard O'Brien & Fleming type $\alpha$-spending function with planned efficacy interim analyses after 50% and 75% of information as specified below.
#'
## ---- include=TRUE, echo=TRUE--------------------------------------------
# Initial design
design <- getDesignGroupSequential(sided = 1, alpha = 0.025, beta = 0.2,
    informationRates = c(0.5, 0.75, 1), typeOfDesign = "asOF")
# Initial sample size calculation
sampleSizeResult <- getSampleSizeSurvival(
    design = design,
    lambda2 = log(2) / 60, hazardRatio = 0.75,
    dropoutRate1 = 0.025, dropoutRate2 = 0.025, dropoutTime = 12,
    accrualTime = 0, accrualIntensity = 30,
    maxNumberOfSubjects = 1000)
# Summarize design
simpleBoundarySummary <- function(result) {
    parameters <- list(
        "Stage" = c(1:result$.design$kMax),
        "Information rate" = round(t(result$.design$informationRates), 2),
        "Number of events" = round(t(result$eventsPerStage[, 1]), 1),
        "Analysis time under H1" = round(t(result$analysisTime[, 1]), 1),
        "Cumulative alpha spent" = round(result$.design$alphaSpent, 4),
        "Stage levels" = round(result$.design$stageLevels, 4),
        "Efficacy boundary (Z-value scale)" = round(result$.design$criticalValues, 3),
        "Efficacy boundary (treatment effect scale)" =
            round(t(result$criticalValuesEffectScale[, 1]), 3))
    names(parameters) <- paste0(format(names(parameters)), " ")
    for (paramName in names(parameters)) {
        values <- parameters[[paramName]]
        values <- format(c(" ", values))[2:(length(values) + 1)]
        cat(paramName, values, "\n")
    }
}
simpleBoundarySummary(sampleSizeResult)
#'
#' # Boundary update at the first interim analysis
#'
#' Assume that the first interim analysis was conducted after 205 rather than the planned 194 events.
#'
#' The updated design is calculated as per the code below. Note that for the calculation of boundary values on the treatment effect scale, we use the function `getPowerSurvival` with the updated design rather than the function `getSampleSizeSurvival` as we are only updating the boundary, not the sample size or the maximum number of events.
#'
## ---- include=TRUE, echo=TRUE--------------------------------------------
# Update design using observed information fraction at first interim.
# Information fraction of later interim analyses is not changed.
designUpdate1 <- getDesignGroupSequential(sided = 1, alpha = 0.025, beta = 0.2,
    informationRates = c(205/387, 0.75, 1), typeOfDesign = "asOF")
# Recalculate the power to get boundary values on the effect scale
# (Use original maxNumberOfEvents and sample size)
powerUpdate1 <- getPowerSurvival(
    design = designUpdate1,
    lambda2 = log(2) / 60, hazardRatio = 0.75,
    dropoutRate1 = 0.025, dropoutRate2 = 0.025, dropoutTime = 12,
    accrualTime = 0, accrualIntensity = 30,
    maxNumberOfSubjects = 1000, maxNumberOfEvents = 387, directionUpper = FALSE)
powerUpdate1
#'
#' The updated information rates and corresponding boundaries as per the output above are summarized as follows:
## ---- include=TRUE, echo=FALSE-------------------------------------------
simpleBoundarySummary(powerUpdate1)
#'
#' # Boundary update at the second interim analysis
#' Assume that the efficacy boundary was not crossed at the first interim analysis and that the trial continued to the second interim analysis, which was conducted after 285 rather than the planned 291 events.
#'
#' The updated design is calculated in the same way as for the first interim analysis as per the code below. The idea is to use the cumulative $\alpha$ spent at the first stage and an updated cumulative $\alpha$ spent for the second stage. For the second stage, this can be obtained with the original O'Brien & Fleming $\alpha$-spending function:
#'
## ---- include=TRUE, echo=TRUE--------------------------------------------
# Update design using observed information fraction at first and second interim.
designUpdate2 <- getDesignGroupSequential(sided = 1, alpha = 0.025, beta = 0.2,
    informationRates = c(205/387, 285/387, 1), typeOfDesign = "asOF")
# Recalculate power to get boundary values on effect scale
# (Use original maxNumberOfEvents and sample size)
powerUpdate2 <- getPowerSurvival(
    design = designUpdate2,
    lambda2 = log(2) / 60, hazardRatio = 0.75,
    dropoutRate1 = 0.025, dropoutRate2 = 0.025, dropoutTime = 12,
    accrualTime = 0, accrualIntensity = 30,
    maxNumberOfSubjects = 1000, maxNumberOfEvents = 387, directionUpper = FALSE)
powerUpdate2
#'
#' # Boundary update at the final analysis
#' Assume that the efficacy boundary was also not crossed at the second interim analysis and that the trial continued to the final analysis, which was conducted after 393 rather than the planned 387 events.
#'
#' The updated design is calculated as per the code below. The idea here is to use the cumulative $\alpha$ spent at the first *and* the second stage and to spend all remaining $\alpha$ at the final analysis. Because the information fractions are now re-scaled to the observed maximum information, an updated correlation structure has to be used and the original O'Brien & Fleming $\alpha$-spending function cannot be used anymore. Instead, the $\alpha$-spending function needs to be user-defined as follows:
#'
## ---- include=TRUE, echo=TRUE--------------------------------------------
# Update boundary with information fractions as per actually observed event numbers
# !! use a user-defined alpha-spending function and spend alpha according to the
# cumulative alpha actually spent up to the second interim analysis
designUpdate3 <- getDesignGroupSequential(sided = 1, alpha = 0.025, beta = 0.2,
    informationRates = c(205, 285, 393)/393,
    typeOfDesign = "asUser",
    userAlphaSpending = designUpdate2$alphaSpent)
# Recalculate power to get boundary values on effect scale
# (Use planned sample size and **observed** maxNumberOfEvents)
powerUpdate3 <- getPowerSurvival(
    design = designUpdate3,
    lambda2 = log(2) / 60, hazardRatio = 0.75,
    dropoutRate1 = 0.025, dropoutRate2 = 0.025, dropoutTime = 12,
    accrualTime = 0, accrualIntensity = 30,
    maxNumberOfSubjects = 1000, maxNumberOfEvents = 393, directionUpper = FALSE)
powerUpdate3
#'
#' # Overview of all boundary updates
#' For easier comparison, all discussed boundary updates are summarized below. Note that each update only affects boundaries for the current or later analyses, i.e., earlier boundaries are never retrospectively modified.
#'
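#' As a check on this principle, the earlier-stage critical values can be compared directly across the design objects from the previous sections: the cumulative $\alpha$ spent at earlier stages is carried over and the correlation of the test statistics depends only on ratios of information fractions, so the earlier boundaries agree (up to numerical accuracy):
#'
## ---- include=TRUE, echo=TRUE--------------------------------------------
round(designUpdate1$criticalValues, 3)
round(designUpdate2$criticalValues, 3) # first-stage boundary unchanged
round(designUpdate3$criticalValues, 3) # first two boundaries unchanged
#'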
#' ## Original design
#'
## ---- include=TRUE, echo=FALSE-------------------------------------------
simpleBoundarySummary(sampleSizeResult)
#'
#' ## Updated boundaries at the first interim analysis
#'
## ---- include=TRUE, echo=FALSE-------------------------------------------
simpleBoundarySummary(powerUpdate1)
#'
#' ## Updated boundaries at the second interim analysis
#'
## ---- include=TRUE, echo=FALSE-------------------------------------------
simpleBoundarySummary(powerUpdate2)
#'
#' ## Updated boundaries at the final analysis
#'
## ---- include=TRUE, echo=FALSE-------------------------------------------
simpleBoundarySummary(powerUpdate3)
#'
#' ***
#'
#' System: rpact `r packageVersion("rpact")`, `r R.version.string`, platform: `r R.version$platform`
#'
## ---- include=TRUE, echo=FALSE, results='asis'---------------------------
print(citation("rpact"), bibtex = FALSE)
#'
#'