
Decision Criteria

The Decision Criteria area contains 2 sections:

Decision Criteria

Decision Criteria Review

Decision Criteria specify how the results of the analyses determine what success looks like, for example whether success is based on a p-value, on the upper limit of a confidence interval, or on the joint significance of two analyses. At least one decision criterion must be specified for a project. The Decision Criteria Description field in the Decision Criteria Details section can be used to add a short description of the decision criterion if desired.

The different types of decision criteria are explained below.

At least one Single Decision Criterion must be created for a project.

To create a Single Decision Criterion, click on Single in the Decision Criteria Details section, type an identifier into the Decision Criteria Identifier field, and use the drop-down menu to select the analysis on which the decision criterion is to be based. Next, fill in the Decision Criteria Definition section. A Single Decision Criterion evaluates success by comparing an analysis metric (specified on the left-hand side of the Decision Criteria Definition) to a fixed value or another metric (specified on the right-hand side of the Decision Criteria Definition).

The available metrics depend on the selected analysis; examples include the p-value, odds ratio, difference in means, posterior probabilities, and the lower or upper limit of a confidence interval. Some metrics also require additional options, such as the sidedness of the test and the specific subgroups to compare.

If applicable, the subgroup options depend on the explanatory variables included in the analysis. For example, if an ANOVA has been requested that includes a grouping variable with 3 levels, you will be required to choose 2 of the 3 subgroups. If you are performing a linear regression with two categorical explanatory variables, one with 2 levels and one with 3 levels, there will be 6 possible combinations of categories to choose from. For metrics that have a direction, the direction is determined by the order in which the subgroups are specified in KerusCloud. For example, a difference in means is calculated as the mean of the first group minus the mean of the second group, and an odds ratio is calculated with the odds of the first group in the numerator and the odds of the second group in the denominator.
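As a hedged illustration of this ordering convention (the group labels and numbers below are hypothetical and this is not KerusCloud code), the sketch shows how the direction of a difference in means and an odds ratio follows from the order in which the two subgroups are selected:

```python
import numpy as np

# Hypothetical subgroup data; the ordering convention is the point, not the numbers.
group_a = np.array([5.1, 4.8, 5.6, 5.0])   # first selected subgroup
group_b = np.array([4.2, 4.5, 4.0, 4.3])   # second selected subgroup

# Difference in means: mean of the first group minus mean of the second group.
diff_in_means = group_a.mean() - group_b.mean()

# Odds ratio: odds of the first group in the numerator, second group in the denominator.
responders_a, n_a = 18, 30    # hypothetical responder counts / group sizes
responders_b, n_b = 12, 30
odds_a = responders_a / (n_a - responders_a)
odds_b = responders_b / (n_b - responders_b)
odds_ratio = odds_a / odds_b

print(diff_in_means, odds_ratio)
```

Swapping the order of the two subgroups would flip the sign of the difference in means and invert the odds ratio.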

For metrics such as P(Difference in Means), the Cutoff Value and Comparison Logic are used to calculate the posterior probability. For example, if the Cutoff Value is “2” and the Comparison Logic is “>”, the metric calculated is P(Difference in Means > 2).
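As a rough sketch of what such a posterior probability represents (using hypothetical posterior draws, not the engine's actual computation), a metric like P(Difference in Means > 2) can be estimated as the proportion of posterior samples of the difference in means that satisfy the cutoff and comparison logic:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical posterior draws of the difference in means from a Bayesian analysis.
posterior_diff = rng.normal(loc=2.5, scale=1.0, size=10_000)

cutoff_value = 2.0   # Cutoff Value entered in the Decision Criteria Definition
# Comparison Logic ">" gives P(Difference in Means > 2).
p_diff_gt_cutoff = np.mean(posterior_diff > cutoff_value)

print(f"P(Difference in Means > {cutoff_value}) ~ {p_diff_gt_cutoff:.3f}")
```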

For regression analysis types, whenever the covariate in a Decision Criterion is an interaction between a factor and at least one numeric variable, a value for the numeric variable can be entered as a Covariate Term Value within the Decision Criteria definition. The Covariate Term Value is used to calculate the metric value. It is potentially applicable to all regression types (including Mixed Model regressions) and to all metrics on those regressions, except for the Number in Analysis metric in mixed model regressions.
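To make the role of the Covariate Term Value concrete, here is a hedged sketch (the model form and coefficients are hypothetical, not KerusCloud's internal calculation) of how a treatment effect from a regression with a treatment-by-covariate interaction depends on the numeric value supplied for the covariate:

```python
# Hypothetical fitted coefficients from a regression of the form:
#   outcome ~ treatment + covariate + treatment:covariate
beta_treatment = 1.20       # main effect of treatment (factor)
beta_interaction = -0.15    # treatment-by-covariate interaction coefficient

def treatment_effect(covariate_term_value: float) -> float:
    """Treatment effect evaluated at a chosen value of the numeric covariate."""
    return beta_treatment + beta_interaction * covariate_term_value

# Entering a Covariate Term Value of 4 evaluates the metric at covariate = 4.
print(treatment_effect(4.0))   # 1.20 + (-0.15) * 4 = 0.60
```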

Use the Type menu on the right-hand side to indicate whether the left-hand side metric should be compared to a fixed value or to another metric. For a fixed value, select “Value” from the Type menu and enter the value in the Value box. For a metric, select “Metric” and define the metric in the same way as the left-hand side.

Use the boxes in the central column of the Decision Criteria Definition section to indicate the logic that should be used to compare the left-hand side metric to the value or metric defined on the right-hand side. Available logic options are: less than (<), less than or equal to (<=), greater than (>), greater than or equal to (>=), equal to (==) and not equal to (!=).
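Putting these pieces together, the sketch below is a minimal, hypothetical illustration (not KerusCloud's implementation) of how a Single Decision Criterion resolves to success or failure: a left-hand side metric, a comparison logic, and a right-hand side value or metric:

```python
import operator

# Mapping of the available comparison logic options to Python comparisons.
LOGIC = {
    "<": operator.lt,
    "<=": operator.le,
    ">": operator.gt,
    ">=": operator.ge,
    "==": operator.eq,
    "!=": operator.ne,
}

def evaluate_single_criterion(lhs_metric: float, logic: str, rhs: float) -> bool:
    """Return True if the criterion is met, e.g. p-value < 0.05."""
    return LOGIC[logic](lhs_metric, rhs)

# Example: success defined as p-value < 0.05 (hypothetical analysis result).
print(evaluate_single_criterion(lhs_metric=0.031, logic="<", rhs=0.05))  # True
```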

Combined Decision Criteria allow success to be defined based on the outcomes of multiple decision criteria. At least two Single Decision Criteria must be created before a Combined Decision Criterion can be created. To create a Combined Decision Criterion, click on Combined in the Decision Criteria Details section and enter an identifier into the Decision Criteria Identifier field. Use the drop-down menus to select the identifiers of the two existing decision criteria and the logic (either AND or OR) that should be used to combine their outcomes. Next to each of the decision criteria selection menus there is an “Is/Not” toggle button. Click on the “Is” button to activate the “Not” condition for that criterion; when the “Not” condition is activated, the button will change to blue, indicating that for the combined criterion to be met, the single criterion marked with “Not” must not be met. Clicking the blue “Not” button reverts the condition, resetting the button to the navy “Is” state.
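As a hedged sketch of this combination logic (illustrative only, with hypothetical parameter names), a Combined Decision Criterion takes the outcomes of two criteria, optionally negates each one as the Is/Not toggle does, and joins them with AND or OR:

```python
def evaluate_combined_criterion(outcome_1: bool, outcome_2: bool,
                                logic: str = "AND",
                                not_1: bool = False, not_2: bool = False) -> bool:
    """Combine two decision-criterion outcomes; not_* mirrors the Is/Not toggles."""
    a = (not outcome_1) if not_1 else outcome_1
    b = (not outcome_2) if not_2 else outcome_2
    return (a and b) if logic == "AND" else (a or b)

# Example: the combined criterion is met when criterion 1 is met AND criterion 2 is NOT met.
print(evaluate_combined_criterion(True, False, logic="AND", not_2=True))  # True
```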

A Multiplicity Decision Criterion can be created when at least two Single Decision Criteria with p-value metrics have been defined. To create a Multiplicity Decision Criterion, click on the Multiplicity type and enter an identifier. Then choose at least two existing decision criteria from the Decision Criteria Included menu; this menu contains all existing Single Decision Criteria with p-value metrics. In the Decision Criteria Definition section, choose a multiplicity adjustment method and logic from the drop-down menus and enter the error rate. The set of p-values for the included Single Decision Criteria will be adjusted for multiple testing using the selected method; available methods are Benjamini-Hochberg, Benjamini-Yekutieli, Bonferroni, Holm and Hochberg. Each adjusted p-value is compared to the error rate entered in the Decision Criteria Definition section. Success of the Multiplicity Decision Criterion can be defined as requiring all adjusted p-values to be less than the target error rate (AND logic) or as requiring at least one adjusted p-value to be less than the target error rate (OR logic). The adjusted p-values are only used to evaluate the success of the Multiplicity Decision Criterion; the outcomes of the included Single Decision Criteria are not affected.
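For intuition, the sketch below illustrates the general idea using standard textbook Bonferroni and Holm adjustments and the AND/OR success logic; it is a simplified, hypothetical example and is not claimed to reproduce KerusCloud's implementation of these methods:

```python
import numpy as np

def bonferroni(pvals):
    """Bonferroni adjustment: multiply each p-value by the number of tests, cap at 1."""
    p = np.asarray(pvals, dtype=float)
    return np.minimum(p * p.size, 1.0)

def holm(pvals):
    """Holm step-down adjustment in its standard textbook form."""
    p = np.asarray(pvals, dtype=float)
    order = np.argsort(p)
    m = p.size
    adj = np.empty(m)
    running_max = 0.0
    for rank, idx in enumerate(order):
        running_max = max(running_max, (m - rank) * p[idx])
        adj[idx] = min(running_max, 1.0)
    return adj

pvals = [0.012, 0.030, 0.041]   # hypothetical p-values from the included criteria
alpha = 0.05                    # target error rate

for name, adjust in (("Bonferroni", bonferroni), ("Holm", holm)):
    adjusted = adjust(pvals)
    # AND logic: all adjusted p-values below the error rate; OR logic: at least one.
    print(name, all(adjusted < alpha), any(adjusted < alpha))
```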

If a Group Sequential Design has been defined for a project, this can affect the outcome of a Single Decision Criterion. If the Efficacy Rule for the Group Sequential Design and the decision criterion's left-hand side metric are both based on the same analysis, analysis metric, metric covariates (if applicable) and subgroups (if applicable, and with the subgroups in the same order), and have identical metric options, then the analysis metric on the left-hand side of the decision criterion will be compared to the adjusted alpha value instead of the right-hand side value entered in the decision criterion. The adjusted alpha value for each interim time point is calculated from the alpha spending specified in the Group Sequential Design. If any details of the analysis and metric are not identical between the Group Sequential Design and the Single Decision Criterion, then the right-hand side value or metric entered in the decision criterion will be used to evaluate success.
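The selection logic can be summarised with a small, hypothetical sketch (the function and values below are illustrative assumptions, not KerusCloud code): when the decision criterion matches the efficacy rule, the comparison threshold at an interim look is the adjusted alpha; otherwise it is the entered right-hand side value.

```python
def threshold_for_comparison(criterion_rhs: float,
                             matches_efficacy_rule: bool,
                             adjusted_alpha_at_interim: float) -> float:
    """Pick the value the left-hand side metric is compared against at an interim look."""
    if matches_efficacy_rule:
        # Analysis, metric, covariates, subgroups (in the same order) and metric
        # options all match the Group Sequential Design efficacy rule.
        return adjusted_alpha_at_interim
    return criterion_rhs

# Hypothetical: entered threshold 0.05, adjusted alpha 0.0122 at this interim look.
print(threshold_for_comparison(0.05, matches_efficacy_rule=True,
                               adjusted_alpha_at_interim=0.0122))  # 0.0122
```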

The Decision Criteria Review section provides an overview of the number of decision criteria defined, indicates whether they are Single, Combined or Multiplicity criteria, and displays the number of Kerus Credits which will be used.

You can choose from three different speed options to run a task. After selecting a speed, the credit usage for the task will be updated, allowing you to select the option which best balances speed and cost for your needs.

Once happy with the set-up, check the number of Kerus Credits which will be used and click on the Go button to generate the results and obtain the probability of success for each of the criteria. The success of all criteria will be evaluated for all scenarios, treatment allocations, designs and sample sizes requested.