#' ---
#' title: "Designing group-sequential trials with a binary endpoint with rpact"
#' author: "Marcel Wolbers, Gernot Wassmer, and Friedrich Pahlke"
#' date: "Last change: `r format(Sys.time(), '%d %B, %Y')`"
#' number: 3
#' header-includes:
#' - \usepackage{fancyhdr}
#' - \pagestyle{fancy}
#' - \setlength{\headheight}{23pt}
#' - \fancyfoot[C]{www.rpact.com}
#' - \fancyfoot[L]{\thepage}
#' output:
#' rmarkdown::html_document:
#' highlight: pygments
#' number_sections: yes
#' self_contained: yes
#' toc: yes
#' toc_depth: 3
#' toc_float: yes
#' css: style.css
#' includes:
#' before_body: header.html
#' after_body: footer2.html
#' ---
#'
#'
#'
#' # Summary {-}
#'
#' This [R Markdown](https://rmarkdown.rstudio.com) document provides examples for designing trials
#' with binary endpoints using [rpact](https://cran.r-project.org/package=rpact).
#'
#' # Sample size calculation for a superiority trial with two groups without interim analyses
#'
#' The **sample size** for a trial with binary endpoints can be calculated using the
#' function `getSampleSizeRates()`. This function is fully documented in the help
#' page (`?getSampleSizeRates`). Hence, we only provide some examples below.
#'
#' First, load the library `rpact`
#'
## ---- include = TRUE, echo = TRUE---------------------------------------------
library(rpact)
packageVersion("rpact") # version should be 2.0.5 or later
#'
#' To specify the **direction** of the effect correctly, note that in rpact the
#' **index "2" in an argument name always refers to the control group, index "1" to
#' the intervention group, and treatment effects compare treatment versus control**.
#' Specifically, for binary endpoints, the probabilities of an event in the control
#' group and intervention group, respectively, are given by arguments `pi2` and `pi1`.
#' The default treatment effect is the absolute risk difference `pi1 - pi2` but the
#' relative risk scale `pi1/pi2` is also supported if the argument `riskRatio`
#' is set to `TRUE`.
#'
## ---- include = TRUE, echo = TRUE---------------------------------------------
# Example of a standard trial:
# - probability 25% in control (pi2 = 0.25) vs 40% (pi1 = 0.4) in intervention
# - one-sided test (sided = 1)
# - Type I error 0.025 (alpha = 0.025) and power 80% (beta = 0.2)
sampleSizeResult <- getSampleSizeRates(pi2 = 0.25, pi1 = 0.4,
    sided = 1, alpha = 0.025, beta = 0.2)
sampleSizeResult
#' As per the output above, the required **total sample size** is
#' `r ceiling(sampleSizeResult$nFixed)` and the critical value corresponds to a minimal
#' detectable difference (on the absolute risk difference scale, the default) of
#' approximately `r formatC(sampleSizeResult$criticalValuesEffectScale, format = "f", digits = 3)`.
#' The conversion of the critical value to the effect scale assumes that pi2 = 0.25 is
#' the observed event rate in the control group.
#'
#' A useful summary is provided with the generic `summary()` function:
#'
## ---- include = TRUE, echo = TRUE---------------------------------------------
summary(sampleSizeResult)
#'
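#' The relative risk scale can be illustrated with a short sketch. The call below
#' re-uses the rates from the example above but sets `riskRatio = TRUE`; on this
#' scale, `thetaH0 = 1` corresponds to no treatment effect:
#'
## ---- include = TRUE, echo = TRUE---------------------------------------------
# Illustration: same rates as above, effect expressed as relative risk pi1/pi2
# (thetaH0 = 1 is "no effect" on the relative risk scale)
summary(getSampleSizeRates(pi2 = 0.25, pi1 = 0.4, riskRatio = TRUE, thetaH0 = 1,
    sided = 1, alpha = 0.025, beta = 0.2))
#'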
#' You can change the randomization allocation between the treatment groups using `allocationRatioPlanned`:
#'
## ---- include = TRUE, echo = TRUE---------------------------------------------
# Example: Extension of standard trial
# - 2(intervention):1(control) randomization (allocationRatioPlanned = 2)
summary(getSampleSizeRates(pi2 = 0.25, pi1 = 0.4,
    sided = 1, alpha = 0.025, beta = 0.2,
    allocationRatioPlanned = 2))
#' Setting `allocationRatioPlanned = 0` yields the optimal allocation ratio, i.e., the
#' ratio that minimizes the overall sample size (the optimal sample size is only slightly
#' smaller than the sample size with equal allocation; practically, this has little effect):
#'
## ---- include = TRUE, echo = TRUE---------------------------------------------
# Example: Extension of standard trial
# optimum randomization ratio
summary(getSampleSizeRates(pi2 = 0.25, pi1 = 0.4,
    sided = 1, alpha = 0.025, beta = 0.2,
    allocationRatioPlanned = 0))
#'
#' **Power** can be calculated using the function `getPowerRates()`. This function
#' has the same arguments as `getSampleSizeRates()` except that the maximum total
#' sample size must be specified (`maxNumberOfSubjects`) and the Type II error
#' `beta` is no longer needed. For one-sided tests, the direction of the test is
#' also required: `directionUpper = TRUE` indicates that under the alternative,
#' the probability in the intervention group `pi1` is larger than the probability
#' in the control group `pi2`; `directionUpper = FALSE` indicates the opposite direction:
#'
## ---- include = TRUE, echo = TRUE---------------------------------------------
# Example: Calculate power for a simple trial with total sample size 304
# as in the example above in case of pi2 = 0.25 (control) and
# pi1 = 0.37 (intervention)
powerResult <- getPowerRates(pi2 = 0.25, pi1 = 0.37,
    maxNumberOfSubjects = 304, sided = 1, alpha = 0.025)
powerResult
#'
#' The calculated **power** is provided in the output as **"Overall reject"** and
#' is `r formatC(powerResult$overallReject, format = "f", digits = 3)` for the example.
#'
#' The `summary()` command produces the following output:
#'
## ---- include = TRUE, echo = TRUE---------------------------------------------
summary(powerResult)
#'
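#' To sketch the other direction, the illustrative call below assumes that a lower
#' event probability in the intervention group is beneficial and therefore sets
#' `directionUpper = FALSE` (the rates are swapped relative to the example above
#' and chosen for illustration only):
#'
## ---- include = TRUE, echo = TRUE---------------------------------------------
# Illustration: alternative is pi1 < pi2, hence directionUpper = FALSE
summary(getPowerRates(pi2 = 0.4, pi1 = 0.25, maxNumberOfSubjects = 304,
    sided = 1, alpha = 0.025, directionUpper = FALSE))
#'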
#' The functions `getPowerRates()` and `getSampleSizeRates()` can also be
#' called with a vector argument for the probability `pi1` in the intervention
#' group. This is illustrated below via a plot of power depending on this
#' probability. For examples of all available plots, see the R Markdown document
#' [How to create admirable plots with rpact](https://vignettes.rpact.org/html/rpact_plot_examples.html).
#'
## ---- include = TRUE, echo = TRUE, fig.align = "center", fig.cap = "Figure: example for an overall power plot"----
# Example: Calculate power for simple design (with sample size 304 as above)
# for probabilities in intervention ranging from 0.3 to 0.5
powerResult <- getPowerRates(pi2 = 0.25, pi1 = seq(0.3, 0.5, by = 0.01),
    maxNumberOfSubjects = 304, sided = 1, alpha = 0.025)
# one of several possible plots, this one plotting true effect size vs power
plot(powerResult, type = 7)
#'
#' # Sample size calculation for a non-inferiority trial with two groups without interim analyses
#'
#' Sample size calculation proceeds in the same fashion as for superiority trials except
#' that the roles of the null and the alternative hypothesis are reversed.
#' That is, the non-inferiority margin $\Delta$ corresponds to the treatment
#' effect under the null hypothesis (`thetaH0`) which one aims to reject.
#' Testing in non-inferiority trials is always one-sided.
#'
## ---- include = TRUE, echo = TRUE---------------------------------------------
# Example: Sample size for a non-inferiority trial
# Assume pi(control) = pi(intervention) = 0.2
# Test H0: pi1 - pi2 >= 0.1 (risk increase in intervention of at least Delta = 0.1)
# vs. H1: pi1 - pi2 < 0.1
sampleSizeNoninf <- getSampleSizeRates(pi2 = 0.2, pi1 = 0.2,
    thetaH0 = 0.1, sided = 1, alpha = 0.025, beta = 0.2)
sampleSizeNoninf
summary(sampleSizeNoninf)
#'
#' # Sample size calculation for a single arm trial without interim analyses
#'
#' The function `getSampleSizeRates()` allows setting the number of `groups`
#' (2 by default) to 1 for the design of single-arm trials.
#' The probability under the null hypothesis can be specified with the argument
#' `thetaH0` and the specific alternative hypothesis which is used for the sample
#' size calculation with the argument `pi1`. The sample size calculation can be
#' based either on a normal approximation (`normalApproximation = TRUE`, the default)
#' or on exact binomial probabilities (`normalApproximation = FALSE`).
#'
## ---- include = TRUE, echo = TRUE---------------------------------------------
# Example: Sample size for a single arm trial which tests
# H0: pi = 0.1 vs. H1: pi = 0.25
# (use conservative exact binomial calculation)
sampleSizeResults <- getSampleSizeRates(groups = 1, thetaH0 = 0.1, pi1 = 0.25,
    normalApproximation = FALSE, sided = 1, alpha = 0.025, beta = 0.2)
summary(sampleSizeResults)
#'
#' # Sample size calculation for group-sequential designs
#'
#' Sample size calculation for a group-sequential trial is performed in **two steps**:
#'
#' 1. **Define the (abstract) group-sequential design** using the function
#' `getDesignGroupSequential()`. For details regarding this step, see the R
#' Markdown document "Defining group-sequential boundaries with rpact".
#' 2. **Calculate sample size** for the binary endpoint by feeding the abstract design
#' into the function `getSampleSizeRates()`. Note that the power 1 - beta needs to be
#' defined in the design function, and not in `getSampleSizeRates()`.
#'
#' In general, rpact supports both one-sided and two-sided group-sequential designs.
#' However, if futility boundaries are specified, only one-sided tests are permitted.
#'
#' R code for a simple example is provided below:
## ---- include = TRUE, echo = TRUE---------------------------------------------
# Example: Group-sequential design with O'Brien & Fleming type alpha-spending and
# one interim at 60% information
design <- getDesignGroupSequential(sided = 1, alpha = 0.025, beta = 0.2,
    informationRates = c(0.6, 1), typeOfDesign = "asOF")
# Sample size calculation assuming event probabilities are 25% in control
# (pi2 = 0.25) vs 40% (pi1 = 0.4) in intervention
sampleSizeResultGS <- getSampleSizeRates(design, pi2 = 0.25, pi1 = 0.4)
# Standard rpact output (sample size object only, not design object)
sampleSizeResultGS
#'
#' The `summary()` command produces the following output:
#'
## ---- include = TRUE, echo = TRUE---------------------------------------------
summary(sampleSizeResultGS)
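#'
#' The futility option mentioned above can be sketched as follows: a one-sided design
#' with a non-binding futility boundary at the interim is obtained via the
#' `futilityBounds` argument of `getDesignGroupSequential()`. The boundary value 0
#' (on the z-scale) below is an arbitrary choice for illustration:
#'
## ---- include = TRUE, echo = TRUE---------------------------------------------
# Illustration: same design as above plus a stop for futility at the interim
# if the test statistic falls below 0 (z-scale); futility is non-binding by default
designFut <- getDesignGroupSequential(sided = 1, alpha = 0.025, beta = 0.2,
    informationRates = c(0.6, 1), typeOfDesign = "asOF", futilityBounds = 0)
summary(getSampleSizeRates(designFut, pi2 = 0.25, pi1 = 0.4))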
#'
#' ***
#'
#' System: rpact `r packageVersion("rpact")`, `r R.version.string`, platform: `r R.version$platform`
#'
## ---- include = TRUE, echo = FALSE, results = 'asis'--------------------------
printCitation()
#'
#'