#'
#' ---
#' title: "Analysis of a multi-arm design with a binary endpoint"
#' author: "Gernot Wassmer and Friedrich Pahlke"
#' date: "Last change: `r format(Sys.time(), '%d %B, %Y')`"
#' number: 21
#' header-includes:
#' - \usepackage{fancyhdr}
#' - \pagestyle{fancy}
#' - \setlength{\headheight}{23pt}
#' - \fancyfoot[C]{www.rpact.com}
#' - \fancyfoot[L]{\thepage}
#' output:
#' rmarkdown::html_document:
#' highlight: pygments
#' number_sections: yes
#' self_contained: yes
#' toc: yes
#' toc_depth: 3
#' toc_float: yes
#' css: style.css
#' includes:
#' before_body: header.html
#' after_body: footer.html
#' ---
#'
#'
#'
#' # Summary {-}
#'
#' This [R Markdown](https://rmarkdown.rstudio.com) document shows how to analyse and interpret multi-arm designs for testing proportions with [rpact](https://cran.r-project.org/package=rpact).
#'
#'
#' # Introduction
#'
#' This vignette provides examples of how to analyse a trial with multiple arms and a binary endpoint. It shows how to calculate the conditional power at a given stage and to select/deselect treatment arms. For designs with multiple arms, rpact enables the analysis using the **closed combination testing principle**. For a description of the methodology please refer to Part III of the book ["Group Sequential and Confirmatory Adaptive Designs in Clinical Trials"](http://monograph.wassmer.brannath.rpact.net/) by Gernot Wassmer & Werner Brannath.
#'
#' Suppose the trial was conducted as a multi-arm multi-stage trial with three active treatment arms and a control arm when the trial started. At the interim stages, it should be possible to de-select treatment arms if the treatment effect is too small to show significance, assuming a reasonable sample size, at the end of the trial. This should hold true even if a certain sample size increase was taken into account. The endpoint is a failure rate, and it is intended to test each active arm against control, i.e., to test the hypotheses
#' $$ H_{0i}:\pi_{\text{arm }i} = \pi_\text{control} \qquad\text{against} \qquad H_{1i}:\pi_{\text{arm }i} < \pi_\text{control}\;, \quad i = 1,2,3\,,$$
#' in the many-to-one comparisons setting. That is, it is intended to show that the failure rate is smaller in the active arms than in control, so the power is directed towards negative values of $\pi_{\text{arm }i} - \pi_\text{control}$.
#'
#' # Create the design
#'
#' **First, load the rpact package**
#'
## ---- include = TRUE, echo = TRUE-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
library(rpact)
packageVersion("rpact") # version should be 3.0.0 or later
#'
## ---- include = TRUE, echo = FALSE, results = 'hide'----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
setLogLevel("DISABLED")
#'
## ---- include = TRUE, echo = FALSE----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
options("rpact.summary.output.size" = "large") # small, medium, large
#'
#' In rpact, we first have to select the combination test with the corresponding
#' stopping boundaries to be used in the closed testing procedure. We choose a
#' design with critical values within the Wang & Tsiatis $\Delta$-class of
#' boundaries with $\Delta = 0.25$. Planning two interim stages and a final stage,
#' assuming equally sized stages, the design is defined through
#'
## ---- include = TRUE, echo = TRUE-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
designIN <- getDesignInverseNormal(kMax = 3, alpha = 0.025,
typeOfDesign = "WT", deltaWT = 0.25)
summary(designIN)
#'
#' This definition fixes the weights of the combination test, which are the
#' same over the three stages. This is a reasonable choice even though the
#' amount of information is not necessarily the same across the stages
#' (see [Wassmer, 2010](https://doi.org/10.1080/10543406.2011.551336)).
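#'
#' The mechanics of the inverse normal combination test can be illustrated
#' with a few lines of base R. This is a simplified sketch with made-up
#' stage-wise $p$-values, not rpact's implementation:
#'
#' ```r
#' # Inverse normal combination of stage-wise one-sided p-values:
#' # transform each p-value to a z-score and combine with pre-fixed
#' # weights; equal weights correspond to equally sized stages.
#' inverseNormalStatistic <- function(p, w = rep(1 / sqrt(length(p)), length(p))) {
#'     sum(w * qnorm(1 - p)) / sqrt(sum(w^2))
#' }
#'
#' # Illustrative stage-wise p-values only (not from the trial data)
#' inverseNormalStatistic(c(0.01, 0.20, 0.35))
#' ```
#'
#' The combined statistic is compared with the group sequential critical
#' values of the chosen Wang & Tsiatis design at each stage.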
#'
#' # Analysis
#'
#' ## First stage
#'
#' In each treatment arm and the control arm, subjects were randomized such
#' that around 40 subjects per arm would be observed.
#' Assume that the following actual sample sizes and failures in the control and
#' the three experimental treatment arms were obtained for the first stage of the
#' trial:
#'
#' | arm | n | failures |
#' | ----- | ----- | ----- |
#' | active 1 | 42 | 7 |
#' | active 2 | 39 | 8 |
#' | active 3 | 38 | 14 |
#' | control | 41 | 18 |
#'
#' These data are defined as an rpact data set with the function `getDataset()`
#' for the later use in `getAnalysisResults()` through
#'
## ---- include = TRUE, echo = TRUE-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
dataRates <- getDataset(
events1 = 7,
events2 = 8,
events3 = 14,
events4 = 18,
sampleSizes1 = 42,
sampleSizes2 = 39,
sampleSizes3 = 38,
sampleSizes4 = 41
)
#'
#' That is, you can use the `getDataset()` function in the usual way and
#' simply extend it to the multiple treatment arm situation. Note that the
#' arm with the highest index **always refers to the control group**. For
#' the control group, specifically, it is **mandatory to enter values for
#' all stages**. As we will see below, it is possible to omit the
#' information for de-selected active arms.
#'
#' Using
#'
## ---- include = TRUE, echo = TRUE, eval = FALSE---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
## results <- getAnalysisResults(design = designIN, dataInput = dataRates,
## directionUpper = FALSE)
## summary(results)
#'
#' one obtains the test results for the first stage of this trial
#' (note the `directionUpper = FALSE` specification that yields small
#' $p$-values for negative test statistics):
#'
## ---- include = TRUE, echo = FALSE----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
results <- getAnalysisResults(design = designIN,
dataInput = dataRates, directionUpper = FALSE)
summary(results)
#'
#' First of all, at the first interim no hypothesis can be rejected with
#' the closed combination test. This is seen from the
#' `test action: reject (i)` variable. It is remarkable, however, that the
#' $p$-value for the comparison of treatment arm 1 against control
#' (p = `r round(results$.stageResults$separatePValues[1, 1], 4)`)
#' is quite small, and even the $p$-value for the global intersection
#' hypothesis (p(1, 2, 3)
#' = `r round(results$.closedTestResults$adjustedStageWisePValues[1,1],4)`)
#' is not too far from showing significance. It is important to know that,
#' by default, the **Dunnett many-to-one comparison test** for binary data
#' is used as the test for the intersection hypotheses, and the
#' **approximate pairwise score test** (which is the signed square root of
#' the $\chi^2$ test) is used for the calculation of the separate
#' $p$-values. Note that in this presentation the intersection tests for
#' the whole closed system of hypotheses are provided such that the closed
#' test can be completely reproduced.
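#'
#' The separate $p$-values can be approximated by hand. The following
#' base-R sketch computes the signed square root of the (pooled) $\chi^2$
#' test for treatment arm 1 versus control using the stage 1 data from the
#' table above; rpact's internal score test may differ in numerical
#' details:
#'
#' ```r
#' # Approximate score z statistic: signed square root of the
#' # chi-square test for two proportions (pooled standard error)
#' scoreZ <- function(xTreat, nTreat, xControl, nControl) {
#'     pTreat <- xTreat / nTreat
#'     pControl <- xControl / nControl
#'     pPooled <- (xTreat + xControl) / (nTreat + nControl)
#'     (pTreat - pControl) /
#'         sqrt(pPooled * (1 - pPooled) * (1 / nTreat + 1 / nControl))
#' }
#'
#' z <- scoreZ(7, 42, 18, 41)  # arm 1 vs control, stage 1
#' pnorm(z)                    # one-sided p-value
#' ```
#'
#' Since `directionUpper = FALSE`, negative values of `z` support the
#' alternative, so the one-sided $p$-value is the lower tail probability.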
#'
#' The repeated $p$-values (`r round(results$repeatedPValues[1,1], 4)`,
#' `r round(results$repeatedPValues[2,1], 4)`, and
#' `r round(results$repeatedPValues[3,1], 4)`, respectively) precisely
#' correspond with the test decisions, meaning that a repeated $p$-value is
#' smaller than or equal to the overall significance level (0.025) if and
#' only if the corresponding hypothesis can be rejected at the considered
#' stage. This direct correspondence is not generally true for the repeated
#' confidence intervals (i.e., they can contain the value zero although the
#' null hypothesis can be rejected), but it is true for the situation at
#' hand. The repeated confidence intervals can be displayed with the
#' `plot(results, type = 2)` command:
#'
## ---- include = TRUE, echo = TRUE, warning = FALSE------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
plot(results, type = 2)
#'
#' For assessing the conditional power, the sample size for the remaining
#' stages needs to be specified. We assume that around 80 subjects will be
#' obtained **per considered comparison** (i.e., for both treatment arms
#' together) and **per stage**. Use `?getAnalysisResults` to obtain the
#' information about how to specify the parameter `nPlanned`. Assuming 80
#' subjects per stage, you have to re-run the analysis
#' (`options("rpact.summary.output.size" = "small")` reduces the output of
#' the summary)
#'
#'
## ---- include = TRUE, echo = TRUE, eval = FALSE---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
## options("rpact.summary.output.size" = "small")
## results <- getAnalysisResults(design = designIN, dataInput = dataRates,
## directionUpper = FALSE, nPlanned = c(80,80))
## summary(results)
#'
#' to obtain
#'
## ---- include = TRUE, echo = FALSE----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
options("rpact.summary.output.size" = "small") # small, medium, large
results <- getAnalysisResults(design = designIN, dataInput = dataRates, directionUpper = FALSE, nPlanned = c(80,80))
summary(results)
#'
#' The `Conditional power (i)` variable shows very high power (especially
#' for the final stage) for treatment arms 1 and 2, but not for arm 3. Note
#' that the conditional power is calculated under the assumption that the
#' **observed rates are the true rates**. This can be changed by setting
#' `piControl` and/or `piTreatments` to the desired values (`piTreatments`
#' can even be a vector), e.g.,
#'
## ---- include = TRUE, echo = TRUE-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
results <- getAnalysisResults(design = designIN, dataInput = dataRates,
directionUpper = FALSE, nPlanned = c(80,80),
piTreatments = c(0.2,0.2,0.3), piControl = 0.4)
summary(results)
#'
#' Note that the title of the summary describes the situation under which the
#' conditional power calculation is performed.
#'
## ---- include = TRUE, echo = TRUE-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
plot(results, type = 1, piTreatmentRange = c(0,0.5), legendPosition = 3)
#'
#' Altogether, based on the results of the first interim, the decision was
#' taken to drop treatment arm 3 and to recruit a further 40 patients in
#' each of treatment arms 1 and 2 (and in the control group).
#'
#' ## Second stage
#'
#' Also for the second stage, in each of the remaining treatment arms and
#' the control arm, subjects were randomized such that around 40 subjects
#' per arm would be observed. Assume the following failures and actual
#' sample sizes in the control and the two remaining active arms:
#'
#' | arm | n | failures |
#' | ----- | ----- | ----- |
#' | active 1 | 37 | 9 |
#' | active 2 | 41 | 13 |
#' | active 3 | | |
#' | control | 42 | 19 |
#'
#' With `getDataset()`, these data for the second stage are appended to the first
#' stage data as follows:
#'
## ---- include = TRUE, echo = TRUE-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
dataRates <- getDataset(events1 = c(7, 9),
events2 = c(8, 13),
events3 = c(14, NA),
events4 = c(18, 19),
sampleSizes1 = c(42, 37),
sampleSizes2 = c(39, 41),
sampleSizes3 = c(38, NA),
sampleSizes4 = c(41, 42)
)
#'
#' and the stage 2 results are obtained, as before, with `getAnalysisResults()`:
#'
## ---- include = TRUE, echo = FALSE----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
options("rpact.summary.output.size" = "large") # small, medium, large
results <- getAnalysisResults(design = designIN, dataInput = dataRates,
directionUpper = FALSE)
summary(results)
#'
#' Treatment arm 1 is significantly better than control, see
#' `Test action: reject (1)`; this is reflected in both the
#' `Repeated $p$-value (1)` and the `Repeated confidence interval (1)`,
#' which excludes 0. For treatment arm 2, however, significance could not
#' be shown, although both the global intersection hypothesis and the
#' single hypothesis referring to treatment arm 2 can be rejected with the
#' corresponding combination test. The reason for non-significance is the
#' overall adjusted test statistic for testing $H_{02}\cap H_{03}$, which
#' is `r round(results$.closedTestResults$overallAdjustedTestStatistics[4,2],3)` <
#' 2.305.
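#'
#' The Dunnett-type intersection test used above can be sketched in base
#' R: the one-sided adjusted $p$-value for the maximum of $m$ many-to-one
#' $z$ statistics is obtained by integrating over the shared control arm.
#' This sketch assumes equal allocation (pairwise correlation $1/2$) and
#' is an illustration only, not rpact's implementation:
#'
#' ```r
#' # One-sided Dunnett adjusted p-value: P(max of m many-to-one z
#' # statistics >= zMax) under H0, pairwise correlation 1/2, computed
#' # by conditioning on the common control arm
#' dunnettPValue <- function(zMax, m) {
#'     integrand <- function(u) dnorm(u) * pnorm(sqrt(2) * zMax + u)^m
#'     1 - integrate(integrand, -Inf, Inf)$value
#' }
#'
#' dunnettPValue(2.0, m = 2)  # adjusted p-value for two comparisons
#' ```
#'
#' For $m = 1$ this reduces to the unadjusted one-sided $p$-value; for
#' $m > 1$ it lies between the unadjusted $p$-value and its Bonferroni
#' bound.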
#'
#' In order to show significance also for treatment arm 2, one might
#' calculate the conditional power if the sample size were reduced to 20
#' subjects per considered arm (treatment arm 2 and control). Re-running
#' the analysis with `nPlanned = 40` yields
#'
## ---- include = TRUE, echo = FALSE----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
options("rpact.summary.output.size" = "small") # small, medium, large
results <- getAnalysisResults(design = designIN, dataInput = dataRates,
directionUpper = FALSE, nPlanned = 40)
summary(results)
#' showing that the conditional power might drop to around 80% if the
#' sample size were decreased. However, as shown in the following graph,
#'
## ---- include = TRUE, echo = FALSE----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
plot(results, type = 1, piTreatmentRange = c(0,0.5), legendPosition = 3)
#'
## ---- include = TRUE, echo = FALSE----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
results <- getAnalysisResults(design = designIN, dataInput = dataRates,
directionUpper = FALSE, nPlanned = 40,
piTreatments = 0.2)
#'
#' this is predominantly due to the relatively large observed overall
#' failure rate in stage 2. Assuming a failure rate of (say) 20% yields a
#' conditional power of
#' `r round(100*results$conditionalPower[2,3],1)`%, which is obtained from
#'
## ---- include = TRUE, echo = TRUE, eval = FALSE---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
## results <- getAnalysisResults(design = designIN, dataInput = dataRates,
## directionUpper = FALSE, nPlanned = 40,
## piTreatments = 0.2)
## round(100*results$conditionalPower[2,3],1)
#'
#' Therefore, it might be reasonable to drop treatment arm 1 (for which
#' significance was already shown) and compare treatment arm 2 only against
#' control in the final stage.
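#'
#' The conditional power reported at the second interim can be sketched
#' from first principles. With equal weights, the final inverse normal
#' statistic is $(z_1 + z_2 + Z_3)/\sqrt{3}$, so rejection at the final
#' stage requires $Z_3 \geq \sqrt{3}\,u_3 - z_1 - z_2$, where $u_3$ is the
#' final critical value. The base-R sketch below recomputes the stage-wise
#' $z$ values for arm 2 versus control from the tables above and uses an
#' approximate final boundary $u_3 \approx 2.08$ and an assumed control
#' rate of 45%; both values are assumptions for illustration (the design
#' summary prints the exact boundary), and the sketch ignores the closed
#' testing structure, so it will not reproduce rpact's numbers exactly:
#'
#' ```r
#' # Pooled z statistic (signed root of the chi-square test), oriented
#' # so that positive values favor the treatment (fewer failures)
#' zStage <- function(xTreat, nTreat, xControl, nControl) {
#'     pTreat <- xTreat / nTreat
#'     pControl <- xControl / nControl
#'     pPooled <- (xTreat + xControl) / (nTreat + nControl)
#'     (pControl - pTreat) /
#'         sqrt(pPooled * (1 - pPooled) * (1 / nTreat + 1 / nControl))
#' }
#'
#' # Conditional power with exactly one stage remaining (kMax = 3,
#' # equal inverse normal weights, unpooled variance for the drift)
#' conditionalPowerFinal <- function(z1, z2, u3, piTreat, piControl, nPerArm) {
#'     zRequired <- sqrt(3) * u3 - z1 - z2  # final-stage z needed to reject
#'     drift <- (piControl - piTreat) /     # expected final-stage z
#'         sqrt(piTreat * (1 - piTreat) / nPerArm +
#'              piControl * (1 - piControl) / nPerArm)
#'     1 - pnorm(zRequired - drift)
#' }
#'
#' z1 <- zStage(8, 39, 18, 41)   # arm 2 vs control, stage 1
#' z2 <- zStage(13, 41, 19, 42)  # arm 2 vs control, stage 2
#' # nPlanned = 40 in total, i.e., 20 per arm
#' conditionalPowerFinal(z1, z2, u3 = 2.08, piTreat = 0.2,
#'     piControl = 0.45, nPerArm = 20)
#' ```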
#'
#' ## Final stage
#'
#' Assume the following sample sizes and failures for the final stage where only
#' (additional) active arm 2 and control data were obtained.
#'
#' | arm | n | failures |
#' | ----- | ----- | ----- |
#' | active 1 | | |
#' | active 2 | 18 | 7 |
#' | active 3 | | |
#' | control | 19 | 11 |
#'
#' These data for the final stage are entered as follows:
#'
## ---- include = TRUE, echo = TRUE-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
dataRates <- getDataset(events1 = c(7, 9, NA),
events2 = c(8, 13, 7),
events3 = c(14, NA, NA),
events4 = c(18, 19, 11),
sampleSizes1 = c(42, 37, NA),
sampleSizes2 = c(39, 41, 18),
sampleSizes3 = c(38, NA, NA),
sampleSizes4 = c(41, 42, 19)
)
#' and
## ---- include = TRUE, echo = TRUE, eval = FALSE---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
## results <- getAnalysisResults(design = designIN, dataInput = dataRates,
## directionUpper = FALSE)
## summary(results)
#'
#' provides the results (significance can now also be shown for treatment arm 2):
#'
## ---- include = TRUE, echo = FALSE----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
results <- getAnalysisResults(design = designIN, dataInput = dataRates,
directionUpper = FALSE)
summary(results)
#'
#' Summarizing the results, `plot(results, type = 2, legendPosition = 4)` produces
#' a plot of the sequence of repeated confidence intervals over the stages:
#'
## ---- include = TRUE, echo = FALSE----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
plot(results, type = 2, legendPosition = 4)
#'
#' # Closing remarks
#'
#' This example describes a range of design modifications, namely selecting
#' treatment arms and performing sample size recalculation at both interim
#' stages. It is important to recognize that neither the type of adaptation
#' nor the adaptation rule was pre-specified. Despite this, the closed
#' combination test controls the experimentwise error rate in the strong
#' sense. To utilize the whole repertoire of possible adaptations, one
#' might also use the `conditional rejection probability (i)` values in
#' order to **completely redefine** the design, which includes, for
#' example, changing the number of remaining stages, changing the type of
#' intersection test, or even adding a treatment arm.
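#'
#' As a rough illustration of the conditional rejection probability for a
#' single hypothesis with one stage remaining: it is the probability,
#' computed under the null hypothesis, that the remaining data still carry
#' the final inverse normal statistic across the boundary. A base-R sketch
#' with equal weights and purely illustrative values (rpact prints the
#' exact CRPs):
#'
#' ```r
#' # Conditional rejection probability with one stage remaining:
#' # probability under H0 that the final-stage z statistic exceeds
#' # sqrt(3) * u3 - z1 - z2 (equal inverse normal weights, kMax = 3)
#' crpFinalStage <- function(z1, z2, u3) {
#'     1 - pnorm(sqrt(3) * u3 - z1 - z2)
#' }
#'
#' crpFinalStage(z1 = 2.23, z2 = 1.27, u3 = 2.08)  # illustrative values
#' ```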
#'
#' Note that in multi-arm designs no final analysis $p$-values, confidence
#' intervals, or median unbiased treatment effect estimates are calculated.
#' This is in contrast to single hypothesis adaptive designs where, using
#' the stage-wise ordering of the sample space, such calculations are
#' performed with rpact at the final stage (for example, see the vignette
#' [Analysis of a group-sequential trial with a survival
#' endpoint](https://vignettes.rpact.org/html/rpact_analysis_examples.html)).
#' Research on this topic is ongoing, and it is planned to include such
#' methods in a future release of the package.
#'
#' ***
#'
#' System: rpact `r packageVersion("rpact")`, `r R.version.string`, platform: `r R.version$platform`
#'
## ---- include = TRUE, echo = FALSE, results = 'asis'----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
printCitation()
#'
#'
#'