ROeS 2025, Graz
RPACT GbR
September 16, 2025



Sample size and power can be calculated for testing:
Sample size calculation for a continuous endpoint
Sequential analysis with a maximum of 3 looks (group sequential design), one-sided overall significance level 2.5%, power 80%. The results were calculated for a two-sample t-test, H0: mu(1) - mu(2) = 0, H1: effect = 2, standard deviation = 5.
| Stage | 1 | 2 | 3 |
|---|---|---|---|
| Planned information rate | 33.3% | 66.7% | 100% |
| Cumulative alpha spent | 0.0001 | 0.0060 | 0.0250 |
| Stage levels (one-sided) | 0.0001 | 0.0060 | 0.0231 |
| Efficacy boundary (z-value scale) | 3.710 | 2.511 | 1.993 |
| Futility boundary (z-value scale) | 0 | 0 | |
| Efficacy boundary (t) | 4.690 | 2.152 | 1.384 |
| Futility boundary (t) | 0 | 0 | |
| Cumulative power | 0.0204 | 0.4371 | 0.8000 |
| Number of subjects | 69.9 | 139.9 | 209.8 |
| Expected number of subjects under H1 | 170.9 | ||
| Overall exit probability (under H0) | 0.5001 | 0.1309 | |
| Overall exit probability (under H1) | 0.0684 | 0.4202 | |
| Exit probability for efficacy (under H0) | 0.0001 | 0.0059 | |
| Exit probability for efficacy (under H1) | 0.0204 | 0.4167 | |
| Exit probability for futility (under H0) | 0.5000 | 0.1250 | |
| Exit probability for futility (under H1) | 0.0480 | 0.0035 | |
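The summary above can be reproduced with rpact along the following lines. This is a minimal sketch: an O'Brien-Fleming-type alpha-spending design with futility bounds at 0 is assumed, since the exact design call behind the table is not shown.

```r
library(rpact)

# Group sequential design with 3 equally spaced looks; an O'Brien-Fleming-type
# alpha-spending function and futility bounds at z = 0 are assumed here
design <- getDesignGroupSequential(
  kMax = 3,
  alpha = 0.025,
  beta = 0.2,
  sided = 1,
  typeOfDesign = "asOF",
  futilityBounds = c(0, 0)
)

# Sample size for the two-sample t-test: H1 effect = 2, standard deviation = 5
sampleSizeResult <- getSampleSizeMeans(
  design,
  alternative = 2,
  stDev = 5
)
summary(sampleSizeResult)
```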
Legend: (t): treatment effect scale
Perform interim and final analyses during the trial using the group sequential method or a p-value combination test (inverse normal or Fisher)
Calculate adjusted point estimates and confidence intervals (cf. Robertson et al. (2023), Robertson et al. (2025))
Perform sample size reassessment based on the observed data, using conditional power calculations (illustrated below)
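As a hedged illustration of these analysis functions (the stage 1 data below are invented for the example, not taken from the slides), an interim analysis with an inverse normal combination test, adjusted estimates and confidence intervals, and conditional power could look like this:

```r
library(rpact)

# Inverse normal combination test design with 3 looks (illustrative choice)
design <- getDesignInverseNormal(kMax = 3, alpha = 0.025, typeOfDesign = "asOF")

# Hypothetical stage 1 data for a continuous endpoint
dataset <- getDataset(
  n1      = 35,
  n2      = 35,
  means1  = 1.8,
  means2  = 0.4,
  stDevs1 = 5.1,
  stDevs2 = 4.8
)

# Adjusted estimates/confidence intervals and conditional power for the
# remaining stages (70 additional subjects planned per remaining stage)
results <- getAnalysisResults(
  design = design,
  dataInput = dataset,
  nPlanned = c(70, 70),
  thetaH1 = 2,
  assumedStDev = 5
)
summary(results)
```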
Some highlights:
Obtain operating characteristics of different designs:
getSampleSizeCounts() and getPowerCounts()
getSimulationCounts()
Sample size calculation for a count data endpoint
Fixed sample analysis, two-sided significance level 5%, power 90%. The results were calculated for a two-sample Wald-test for count data, H0: lambda(1) / lambda(2) = 1, H1: effect = 0.75, lambda(2) = 0.4, overdispersion = 0.5, fixed exposure time = 1.
| Stage | Fixed |
|---|---|
| Stage level (two-sided) | 0.0500 |
| Efficacy boundary (z-value scale) | 1.960 |
| Lower efficacy boundary (t) | 0.844 |
| Upper efficacy boundary (t) | 1.171 |
| Lambda(1) | 0.300 |
| Number of subjects | 1736.0 |
| Maximum information | 127.0 |
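A minimal sketch of how the fixed-design count data calculation above can be set up; the parameter values are taken from the description, but the exact call on the slide is not shown.

```r
library(rpact)

# Fixed sample design, two-sided significance level 5%, power 90%
designFixed <- getDesignGroupSequential(kMax = 1, alpha = 0.05, beta = 0.1, sided = 2)

# Two-sample Wald test for count data: rate ratio 0.75, control rate 0.4,
# overdispersion 0.5, fixed exposure time 1
sampleSizeCounts <- getSampleSizeCounts(
  designFixed,
  theta = 0.75,
  lambda2 = 0.4,
  overdispersion = 0.5,
  fixedExposureTime = 1
)
summary(sampleSizeCounts)
```

getPowerCounts() and getSimulationCounts() are used analogously to obtain power and simulated operating characteristics for a given sample size.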
Legend: (t): treatment effect scale
$overallReject
[1] 0.835 0.028
1.65 sec elapsed
getDesignGroupSequential(), getDesignCharacteristics(), and the corresponding getSampleSizexxx() and getPowerxxx() functions characterize a delayed response group sequential test, given certain input parameters, in terms of power, maximum sample size, and expected sample size.
getGroupSequentialProbabilities()
Given boundary sets \(\{u^0_1,\dots,u^0_{K-1}\}\), \(\{u_1,\dots,u_K\}\) and \(\{c_1,\dots,c_K\}\), a \(K\)-stage delayed response group sequential design has the following structure:


According to Hampson and Jennison (2013), the boundaries \(\{c_1, \dots, c_K\}\) with \(c_K = u_K\) are chosen such that “reversal probabilities” are balanced, to ensure type I error control.
More precisely, \(c_1,\ldots,c_{K - 1}\) are chosen as the (unique) solution of the following equation for each \(k = 1, \ldots, K-1\): \[\begin{split} &P_{H_0}(Z_1 \in (u^0_1, u_1), \dots, Z_{k-1} \in (u^0_{k-1}, u_{k-1}), Z_k \geq u_k, \tilde{Z}_k \leq c_k) \\ &= P_{H_0}(Z_1 \in (u^0_1, u_1), \dots, Z_{k-1} \in (u^0_{k-1}, u_{k-1}), Z_k \leq u^0_k, \tilde{Z}_k \geq c_k). \end{split}\]
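The getGroupSequentialProbabilities() function mentioned above computes boundary crossing probabilities for arbitrary boundary sets; exit probabilities such as those shown in the tables are built from these quantities. A minimal sketch for a standard (non-delayed) O'Brien-Fleming design, assuming a two-row decision matrix of lower and upper bounds:

```r
library(rpact)

# Classical O'Brien-Fleming design with 3 equally spaced looks
design <- getDesignGroupSequential(kMax = 3, alpha = 0.025, typeOfDesign = "OF")

# Decision matrix: first row = lower (continuation) bounds, second row =
# efficacy bounds; -6 serves as a numerical stand-in for -Infinity
decisionMatrix <- matrix(
  c(rep(-6, 3), design$criticalValues),
  nrow = 2, byrow = TRUE
)

# Stagewise boundary crossing probabilities under H0
getGroupSequentialProbabilities(
  decisionMatrix = decisionMatrix,
  informationRates = design$informationRates
)
```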
Sequential analysis with a maximum of 3 looks (delayed response group sequential design)
Kim & DeMets alpha spending design with delayed response (gammaA = 2) and Kim & DeMets beta spending (gammaB = 2), one-sided overall significance level 2.5%, power 80%, undefined endpoint, inflation factor 1.0514, ASN H1 0.9269, ASN H01 0.9329, ASN H0 0.8165.
| Stage | 1 | 2 | 3 |
|---|---|---|---|
| Planned information rate | 30% | 70% | 100% |
| Delayed information | 16% | 20% | |
| Cumulative alpha spent | 0.0022 | 0.0122 | 0.0250 |
| Cumulative beta spent | 0.0180 | 0.0980 | 0.2000 |
| Stage levels (one-sided) | 0.0022 | 0.0109 | 0.0212 |
| Upper bounds of continuation | 2.841 | 2.295 | 2.030 |
| Lower bounds of continuation (binding) | -0.508 | 1.096 | |
| Decision critical values | 1.387 | 1.820 | 2.030 |
| Reversal probabilities | <0.0001 | 0.0018 | |
| Cumulative power | 0.1026 | 0.5563 | 0.8000 |
| Futility probabilities under H1 | 0.019 | 0.083 | |
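A sketch of how such a delayed response design can be specified; the design options are inferred from the summary above, and binding futility is assumed because the lower continuation bounds are labelled as binding.

```r
library(rpact)

# Delayed response group sequential design with Kim & DeMets alpha and beta
# spending (gamma = 2), information rates 30%/70%/100%, and delayed
# information of 16% and 20% at the two interim analyses
designDelayed <- getDesignGroupSequential(
  kMax = 3,
  alpha = 0.025,
  beta = 0.2,
  sided = 1,
  informationRates = c(0.3, 0.7, 1),
  typeOfDesign = "asKD",
  gammaA = 2,
  typeBetaSpending = "bsKD",
  gammaB = 2,
  bindingFutility = TRUE,
  delayedInformation = c(0.16, 0.20)
)
summary(designDelayed)
```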
Performance scores combine different performance criteria in one value (e.g. Liu, Zhu, and Cui (2008), Wu and Cui (2012))
Evaluation perspectives: global and conditional
Performance scores can be obtained by applying getPerformanceScore() to a simulation object.
library(rpact)

# Initialize group sequential design with O'Brien-Fleming boundaries
design <- getDesignGroupSequential(
  kMax = 2,
  typeOfDesign = "OF",
  futilityBounds = 0,
  bindingFutility = TRUE
)

# Perform simulation with conditional-power-based sample size reassessment
maxNumberOfSubjects <- 200
n1 <- maxNumberOfSubjects * design$informationRates[1]
alternative <- c(0.2, 0.3, 0.4, 0.5)
design |>
  getSimulationMeans(
    alternative = alternative,
    plannedSubjects = c(n1, maxNumberOfSubjects),
    minNumberOfSubjectsPerStage = c(NA, 1),
    maxNumberOfSubjectsPerStage = c(NA, 2 * maxNumberOfSubjects),
    conditionalPower = 0.8,
    maxNumberOfIterations = 1e05,
    seed = 123
  ) |>
  # Calculate performance score
  getPerformanceScore()

Performance
Graphical user interface
Web-based usage without local installation on nearly any device
Provides an easy entry point to learn and demonstrate the usage of rpact
Starting point for your R Markdown or Quarto reports
Available online at cloud.rpact.com
Coming soon in rpact:
getFutilityBounds()

rpact code examples
Further information, installation, and usage:
All information and resources about RPACT on one dashboard page
RPACT Connect: connect.rpact.com
