6  AutoScore for ordinal outcomes (AutoScore-Ordinal)

AutoScore-Ordinal refers to the AutoScore framework for developing point-based scoring models for ordinal outcomes. Similar to the implementation described in Chapter 4 for binary outcomes, AutoScore-Ordinal is implemented by five functions: AutoScore_rank_Ordinal(), AutoScore_parsimony_Ordinal(), AutoScore_weighting_Ordinal(), AutoScore_fine_tuning_Ordinal() and AutoScore_testing_Ordinal().

In this chapter, we demonstrate the use of AutoScore-Ordinal to develop sparse risk scores for an ordinal outcome, adjust parameters to improve interpretability, assess the performance of the final model, and map scores to predicted risks for new data. To facilitate clinical applications, the following sections demonstrate AutoScore-Ordinal in three demos: a large dataset, a small dataset, and a dataset with missing values.

Important
  • Scoring models below are based on simulated data to demonstrate AutoScore usage.
  • Variable names are intentionally masked to avoid misinterpretation and misuse.

Cite the following paper for AutoScore-Ordinal: Saffari SE, Ning Y, Xie F, Chakraborty B, Volovici V, Vaughan R, Ong MEH, Liu N. AutoScore-Ordinal: an interpretable machine learning framework for generating scoring models for ordinal outcomes. BMC Medical Research Methodology 2022.

6.1 Demo 1: large sample

In Demo 1, we demonstrate the use of AutoScore-Ordinal on a dataset with 20,000 observations using a split-sample approach (i.e., randomly dividing the full dataset into training, validation, and test sets) for model development.

Important
  • Before proceeding, follow the steps in Chapter 2 to ensure all data requirements are met.
  • Refer to Chapter 3 for how to generate simple descriptive statistics before building prediction models.

Load package and data

library(AutoScore)
data("sample_data_ordinal")
dim(sample_data_ordinal)
[1] 20000    21
head(sample_data_ordinal)
  label Age Gender Util_A Util_B Util_C    Util_D Comorb_A Comorb_B Comorb_C
1     1  63 FEMALE     P2      0   0.00 3.5933333        0        0        0
2     1  41 FEMALE     P2      0   0.96 3.6288889        0        0        0
3     1  86   MALE     P1      0   0.00 2.6502778        0        0        0
4     1  51   MALE     P2      0   0.00 4.9711111        0        0        0
5     1  23 FEMALE     P1      0   0.00 0.5352778        0        0        0
6     1  32 FEMALE     P2      0   4.13 4.4008333        0        0        0
  Comorb_D Comorb_E Lab_A Lab_B Lab_C Vital_A Vital_B Vital_C Vital_D Vital_E
1        0        0   117   3.9   136      91      19     100      70     152
2        1        0   500   3.6   114      91      16     100      70     147
3        0        0    72   4.1   136     100      18      99      65     126
4        0        0    67   5.0   122      73      17      97      46     100
5        0        0  1036   4.1   138      74      18      98      89     114
6        0        0   806   4.1   136      77      18      98      74     157
  Vital_F
1    25.7
2    22.6
3    25.7
4    24.9
5    25.7
6    25.3
check_data_ordinal(sample_data_ordinal)
Data type check passed. 
No NA in data. 

Prepare training, validation, and test datasets

  • Option 1: Prepare three separate datasets to train, validate, and test models (a loading sketch is shown after the code below).
  • Option 2: Use the demo code below to randomly split your dataset into training, validation, and test datasets (70%, 10%, and 20%, respectively), optionally stratified by outcome categories (strat_by_label = TRUE) to ensure all categories are well represented in all three datasets.
set.seed(4)
out_split <- split_data(data = sample_data_ordinal, ratio = c(0.7, 0.1, 0.2), 
                        strat_by_label = TRUE)
train_set <- out_split$train_set
validation_set <- out_split$validation_set
test_set <- out_split$test_set
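
For Option 1, a minimal sketch of loading three separately prepared datasets is shown below; the file names are hypothetical placeholders and not files shipped with the package.

# Hypothetical file names; replace with your own pre-prepared datasets
train_set <- read.csv("my_train_set.csv")
validation_set <- read.csv("my_validation_set.csv")
test_set <- read.csv("my_test_set.csv")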

6.1.1 STEP(i): generate variable ranking list

AutoScore-Ordinal Module 1

  • Variables are ranked by random forest for multiclass classification.
  • ntree: Number of trees in the random forest algorithm (Default: 100).
ranking <- AutoScore_rank_Ordinal(train_set = train_set, ntree = 100)
The ranking based on variable importance was shown below for each variable: 
   Util_D     Lab_A   Vital_F   Vital_A       Age   Vital_E   Vital_D     Lab_B 
413.60631 379.51127 378.30195 372.84319 372.68880 364.51371 339.60643 296.86038 
    Lab_C    Util_C    Util_B   Vital_C   Vital_B  Comorb_A    Util_A    Gender 
279.47643 244.28653 201.34337 186.47331 168.45639 115.28191  98.78811  51.88705 
 Comorb_B  Comorb_D  Comorb_C  Comorb_E 
 41.11154  32.31979  17.64803  11.87098 

6.1.2 STEP(ii): select model with parsimony plot

AutoScore-Ordinal Modules 2+3+4

  • n_min: Minimum number of selected variables (Default: 1).
  • n_max: Maximum number of selected variables (Default: 20).
  • categorize: Methods for categorizing continuous variables. Options include "quantile" or "kmeans" (Default: "quantile").
  • quantiles: Predefined quantiles to convert continuous variables to categorical ones (Default: c(0, 0.05, 0.2, 0.8, 0.95, 1)). Available if categorize = "quantile".
  • max_cluster: The maximum number of clusters (Default: 5). Available if categorize = "kmeans".
  • max_score: Maximum total score (Default: 100).
  • auc_lim_min: Minimum y-axis limit in the parsimony plot (Default: 0.5).
  • auc_lim_max: Maximum y-axis limit in the parsimony plot (Default: "adaptive").
  • link: Link function for the ordinal regression, which affects predictive performance. Options include "logit" (proportional odds model), "cloglog" (proportional hazards model), and "probit" (Default: "logit").
Important

Use the same link parameter throughout descriptive analysis and model building steps.
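
For context, the link determines the form of the ordinal regression that AutoScore-Ordinal fits internally; for example, link = "logit" corresponds to a proportional odds model. As a rough, standalone illustration of that model form only (this is not the AutoScore-Ordinal internal code, and the two predictors are chosen arbitrarily):

# Illustration only: a proportional odds (logit-link) ordinal regression
library(MASS)
fit_po <- polr(ordered(label) ~ Age + Gender, data = train_set, method = "logistic")
summary(fit_po)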

link <- "logit"
mAUC <- AutoScore_parsimony_Ordinal(
  train_set = train_set, validation_set = validation_set, 
  rank = ranking, link = link, max_score = 100, n_min = 1, n_max = 20,
  categorize = "quantile", quantiles = c(0, 0.05, 0.2, 0.8, 0.95, 1), 
  auc_lim_min = 0, auc_lim_max = "adaptive"
)
Select 1 variables:  Mean area under the curve: 0.4555607 
Select 2 variables:  Mean area under the curve: 0.5110174 
Select 3 variables:  Mean area under the curve: 0.5780548 
Select 4 variables:  Mean area under the curve: 0.5912554 
Select 5 variables:  Mean area under the curve: 0.6685143 
Select 6 variables:  Mean area under the curve: 0.672106 
Select 7 variables:  Mean area under the curve: 0.6690071 
Select 8 variables:  Mean area under the curve: 0.6710102 
Select 9 variables:  Mean area under the curve: 0.6706072 
Select 10 variables:  Mean area under the curve: 0.6721932 
Select 11 variables:  Mean area under the curve: 0.7003498 
Select 12 variables:  Mean area under the curve: 0.6995013 
Select 13 variables:  Mean area under the curve: 0.6994186 
Select 14 variables:  Mean area under the curve: 0.7476355 
Select 15 variables:  Mean area under the curve: 0.7489346 
Select 16 variables:  Mean area under the curve: 0.7448716 
Select 17 variables:  Mean area under the curve: 0.744752 
Select 18 variables:  Mean area under the curve: 0.744752 
Select 19 variables:  Mean area under the curve: 0.745261 
Select 20 variables:  Mean area under the curve: 0.7472124 

Note
  • Users can use mAUC for further analysis or export it as a CSV file for plotting in other software (a plotting sketch is shown after the examples below).
write.csv(data.frame(mAUC), file = "mAUC.csv")
  • Determine the optimal number of variables (num_var) based on the parsimony plot obtained in STEP(ii).
  • The final list of variables is the first num_var variables in the ranked list ranking obtained in STEP(i).
  • Optional: Users can adjust the final list of included variables, final_variables, based on clinical preference and knowledge.
# Example 1: Top 5 variables are selected
num_var <- 5
final_variables <- names(ranking[1:num_var])

# Example 2: Top 14 variables are selected
num_var <- 14
final_variables <- names(ranking[1:num_var])

# Example 3: Top 5 variables, the 11th and 14th variable are selected
final_variables <- names(ranking[c(1:5, 11, 14)])
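
As mentioned in the note above, the exported mAUC values can also be plotted directly in R to help locate where performance plateaus. A minimal sketch, assuming mAUC is the numeric vector of mean AUC values returned above (one value per model size):

# Plot mean AUC against the number of top-ranked variables
plot(seq_along(mAUC), mAUC, type = "b",
     xlab = "Number of variables", ylab = "Mean AUC (mAUC)")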

6.1.3 STEP(iii): generate initial scores with final variables

Re-run AutoScore-Ordinal Modules 2+3

  • Generate cut_vec with current cutoffs of continuous variables, which can be fine-tuned in STEP(iv).
  • Performance of the resulting scores is evaluated using the mean AUC across dichotomous classifications (mAUC), i.e., the ordinal outcome is dichotomized at each threshold between adjacent categories, an AUC is computed for each dichotomized outcome, and these AUCs are averaged. The 95% CI is computed using bootstrap (Default: n_boot = 100 bootstrap samples). Setting n_boot = 1 disables the bootstrap and reports mAUC without a CI.
  • Run time increases with larger n_boot value. Code below uses n_boot = 10 for demonstration.
cut_vec <- AutoScore_weighting_Ordinal(
  train_set = train_set, validation_set = validation_set, 
  final_variables = final_variables, link = link, max_score = 100,
  categorize = "quantile", quantiles = c(0, 0.05, 0.2, 0.8, 0.95, 1), 
  n_boot = 10
)
****Included Variables: 
  variable_name
1        Util_D
2         Lab_A
3       Vital_F
4       Vital_A
5           Age
6        Util_B
7      Comorb_A
****Initial Scores: 


========  ============  =====
variable  interval      point
========  ============  =====
Util_D    <0.652          7  
          [0.652,1.32)    7  
          [1.32,3.93)     2  
          [3.93,5.93)     1  
          >=5.93          0  
                             
Lab_A     <46             6  
          [46,61)         0  
          [61,134)        1  
          [134,584)       8  
          >=584           6  
                             
Vital_F   <16.7           8  
          [16.7,20.5)     4  
          [20.5,25.4)     0  
          [25.4,28.1)     1  
          >=28.1          4  
                             
Vital_A   <58             2  
          [58,68)         0  
          [68,97)         3  
          [97,113)        6  
          >=113          13  
                             
Age       <27             0  
          [27,46)         4  
          [46,78)        14  
          [78,87)        18  
          >=87           21  
                             
Util_B    <1              0  
          [1,4)          10  
          >=4            21  
                             
Comorb_A  0               0  
          1              22  
========  ============  =====
***Performance (based on validation set):
mAUC: 0.7402     95% CI: 0.7157-0.7593 (from 10 bootstrap samples)
***The cutoffs of each variables generated by the AutoScore-Ordinal are saved in cut_vec. You can decide whether to revise or fine-tune them 

6.1.4 STEP(iv): fine-tune initial score from STEP(iii)

AutoScore-Ordinal Module 5 & Re-run AutoScore-Ordinal Modules 2+3

  • Revise cut_vec with domain knowledge to update the scoring table (AutoScore-Ordinal Module 5).
  • Re-run AutoScore-Ordinal Modules 2+3 to generate the updated scores.
  • Users can choose any cutoff values and/or any number of categories, but it is suggested to choose values close to the automatically determined ones.
## For example, we have current cutoffs of continuous variable: Age 
## ==============  ===========  =====
## variable        interval     point
## ==============  ===========  =====
## Age                 <27          0  
##                     [27,46)      4  
##                     [46,78)     14  
##                     [78,87)     18 
##                     >=87        21 
  • Current cutoffs: c(27, 46, 78, 87). We can fine-tune them as follows:
# Example 1: rounding to a nice number
cut_vec$Age <- c(25, 45, 75, 85)

# Example 2: changing cutoffs according to clinical knowledge or preference 
cut_vec$Age <- c(25, 50, 75, 85)

# Example 3: combining categories
cut_vec$Age <- c(45, 75, 85)
  • mAUC and 95% bootstrap CI (Default: n_boot = 100 bootstrap samples) are reported after fine-tuning.
  • Run time increases with larger n_boot value. Code below uses n_boot = 10 for demonstration.
cut_vec$Util_D <- c(2 / 3, 4 / 3, 4, 6)
cut_vec$Vital_F <- c(17, 20, 25, 28)
cut_vec$Vital_A <- c(60, 70, 95, 115)
cut_vec$Lab_A <- c(45, 60, 135, 595)
cut_vec$Age <- c(25, 45, 75, 85)
scoring_table <- AutoScore_fine_tuning_Ordinal(
  train_set = train_set, validation_set = validation_set,
  final_variables = final_variables, link = link, cut_vec = cut_vec,
  max_score = 100, n_boot = 10
)
***Fine-tuned Scores: 


========  ============  =====
variable  interval      point
========  ============  =====
Util_D    <0.667          7  
          [0.667,1.33)    7  
          [1.33,4)        2  
          [4,6)           1  
          >=6             0  
                             
Lab_A     <45             7  
          [45,60)         0  
          [60,135)        1  
          [135,595)       8  
          >=595           6  
                             
Vital_F   <17             8  
          [17,20)         4  
          [20,25)         0  
          [25,28)         0  
          >=28            4  
                             
Vital_A   <60             0  
          [60,70)         0  
          [70,95)         2  
          [95,115)        5  
          >=115          13  
                             
Age       <25             0  
          [25,45)         4  
          [45,75)        14  
          [75,85)        19  
          >=85           22  
                             
Util_B    <1              0  
          [1,4)          10  
          >=4            21  
                             
Comorb_A  0               0  
          1              22  
========  ============  =====
***Performance (based on Validation Set, after fine-tuning):
mAUC: 0.7466     95% CI: 0.7369-0.7633 (from 10 bootstrap samples)

6.1.5 STEP(v): evaluate final risk scores on test dataset

AutoScore-Ordinal Module 6

  • mAUC and generalised c-index are reported for the test set, with 95% bootstrap CI (Default: n_boot = 100 bootstrap samples).
  • Run time increases with larger n_boot value. Code below uses n_boot = 10 for demonstration.
pred_score <- AutoScore_testing_Ordinal(
  test_set = test_set, link = link, final_variables = final_variables, 
  cut_vec = cut_vec, scoring_table = scoring_table, 
  with_label = TRUE, n_boot = 10
)
***Performance using AutoScore-Ordinal (based on unseen test Set):
mAUC: 0.7552     95% CI: 0.7585-0.7776 (from 10 bootstrap samples)
Generalised c-index: 0.7267      95% CI: 0.7229-0.7378 (from 10 bootstrap samples)
head(pred_score)
  pred_score Label
1         40     1
2         35     1
3         29     1
4         26     1
5         33     1
6         22     1
  • Users can compute mAUC and generalised c-index (with 95% bootstrap CI) for previously saved pred_score.
print_performance_ordinal(
  label = pred_score$Label, score = pred_score$pred_score, 
  n_boot = 10, report_cindex = TRUE
)
mAUC: 0.7552     95% CI: 0.7400-0.7632 (from 10 bootstrap samples)
Generalised c-index: 0.7267      95% CI: 0.7155-0.7360 (from 10 bootstrap samples)
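
To make the mAUC definition concrete, the sketch below (using the pROC package) dichotomizes the ordinal label at each threshold between adjacent categories, computes an AUC for each split, and averages them. This is not the AutoScore-Ordinal implementation and may differ slightly (e.g., no bootstrap is involved here):

library(pROC)
# Sketch: mean AUC over the J-1 cumulative dichotomizations of an ordinal outcome
mauc_manual <- function(label, score) {
  label <- as.integer(as.factor(label))   # map categories to 1..J
  cutpoints <- sort(unique(label))[-1]    # J-1 thresholds between adjacent categories
  aucs <- sapply(cutpoints, function(k) {
    r <- roc(response = as.integer(label >= k), predictor = score,
             levels = c(0, 1), direction = "<", quiet = TRUE)
    as.numeric(auc(r))
  })
  mean(aucs)
}
mauc_manual(label = pred_score$Label, score = pred_score$pred_score)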

6.1.6 Map score to risk

  • The interactive figure below maps scores to predicted risks.
  • point_size: Size of points indicating all attainable scores (Default: 0.5).
plot_predicted_risk(pred_score = pred_score, max_score = 100, 
                    final_variables = final_variables, link = link,
                    scoring_table = scoring_table, point_size = 1)
  • Given the proportion of subjects for each score value (see figure above), select reasonable score breaks (Default: 5, 10, 15, …, 70) to report the average predicted risk within each score interval, which can be used to predict risk for a new subject.
  • When selecting score breaks, avoid creating score intervals with too few observations.
conversion_table_ordinal(pred_score = pred_score, link = link,
                         score_breaks = seq(from = 5, to = 70, by = 5), 
                         digits = 4)
========  ===========================  ===========================  ===========================
Score     Predicted risk, category 1   Predicted risk, category 2   Predicted risk, category 3
========  ===========================  ===========================  ===========================
[0,5]     0.9742                       0.0192                       0.0065
(5,10]    0.9617                       0.0285                       0.0098
(10,15]   0.9454                       0.0404                       0.0142
(15,20]   0.9226                       0.0569                       0.0205
(20,25]   0.8914                       0.0791                       0.0295
(25,30]   0.8497                       0.1081                       0.0423
(30,35]   0.7956                       0.1441                       0.0602
(35,40]   0.7284                       0.1864                       0.0851
(40,45]   0.6489                       0.2321                       0.1190
(45,50]   0.5602                       0.2758                       0.1640
(50,55]   0.4675                       0.3109                       0.2216
(55,60]   0.3769                       0.3307                       0.2924
(60,65]   0.2942                       0.3309                       0.3749
(65,70]   0.2231                       0.3116                       0.4653
(70,100]  0.0840                       0.1718                       0.7442
========  ===========================  ===========================  ===========================
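
For example, a new subject whose total score is 37 falls in the (35,40] interval above, corresponding to average predicted risks of about 0.73, 0.19 and 0.09 for categories 1, 2 and 3, respectively. A minimal lookup sketch (the score value 37 is hypothetical):

# Locate the score interval of a hypothetical new subject
new_score <- 37
cut(new_score, breaks = c(0, seq(from = 5, to = 70, by = 5), 100),
    include.lowest = TRUE)
# falls in (35,40]; read off the predicted risks for that row of the table above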
Note
  • Users can use pred_score for further analysis or export it as a CSV file for use in other software (e.g., to generate a calibration curve; a related sketch is shown after the code below).
write.csv(pred_score, file = "pred_score.csv")
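
As an example of such further analysis, the sketch below (not part of the AutoScore package) tabulates the observed proportion of each outcome category within score intervals, which can be compared against the predicted risks in the conversion table above as a rough calibration-style check:

# Observed proportion of each outcome category within score intervals
score_group <- cut(pred_score$pred_score,
                   breaks = c(0, seq(from = 5, to = 70, by = 5), 100),
                   include.lowest = TRUE)
round(prop.table(table(score_group, pred_score$Label), margin = 1), 4)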

6.2 Demo 2: small sample

In Demo 2, we demonstrate the use of AutoScore-Ordinal on a smaller dataset where there are not enough samples to form separate training and validation datasets. Thus, cross-validation is employed to generate the parsimony plot.

Load small dataset with 5000 samples

data("sample_data_ordinal_small")

Prepare training and test datasets

  • Option 1: Prepare two separate datasets to train and test models.
  • Option 2: Use the demo code below to randomly split your dataset into training and test datasets (70% and 30%, respectively). For cross-validation, set the validation ratio to 0, which makes train_set identical to validation_set; cross-validation is then implemented in STEP(ii) by AutoScore_parsimony_Ordinal().
set.seed(4)
out_split <- split_data(data = sample_data_ordinal_small, ratio = c(0.7, 0, 0.3), 
                        cross_validation = TRUE, strat_by_label = TRUE)
train_set <- out_split$train_set
validation_set <- out_split$validation_set
test_set <- out_split$test_set

6.2.1 STEP(i): generate variable ranking list

AutoScore-Ordinal Module 1

  • Variables are ranked by random forest for multiclass classification.
  • ntree: Number of trees in the random forest algorithm (Default: 100).
ranking <- AutoScore_rank_Ordinal(train_set = train_set, ntree = 100)
The ranking based on variable importance was shown below for each variable: 
      Age     Lab_A    Util_D   Vital_A   Vital_F   Vital_E   Vital_D     Lab_B 
98.791839 98.073161 97.021662 93.282141 92.469166 91.677271 82.051893 72.462999 
    Lab_C    Util_B    Util_C   Vital_C   Vital_B    Util_A  Comorb_A    Gender 
70.507041 57.778464 57.270171 45.468903 43.633031 26.120379 25.414958 13.405373 
 Comorb_B  Comorb_D  Comorb_C  Comorb_E 
 9.595971  7.320633  4.477519  2.993603 

6.2.2 STEP(ii): select the best model with parsimony plot

AutoScore-Ordinal Modules 2+3+4

  • n_min: Minimum number of selected variables (Default: 1).
  • n_max: Maximum number of selected variables (Default: 20).
  • categorize: Methods for categorizing continuous variables. Options include "quantile" or "kmeans" (Default: "quantile").
  • quantiles: Predefined quantiles to convert continuous variables to categorical ones (Default: c(0, 0.05, 0.2, 0.8, 0.95, 1)). Available if categorize = "quantile".
  • max_cluster: The maximum number of clusters (Default: 5). Available if categorize = "kmeans".
  • max_score: Maximum total score (Default: 100).
  • auc_lim_min: Minimum y-axis limit in the parsimony plot (Default: 0.5).
  • auc_lim_max: Maximum y-axis limit in the parsimony plot (Default: "adaptive").
  • cross_validation: TRUE if cross-validation is needed, especially for small datasets.
  • fold: The number of folds used in cross validation (Default: 10). Available if cross_validation = TRUE.
  • do_trace: If set to TRUE, all results based on each fold of cross-validation would be printed out and plotted (Default: FALSE). Available if cross_validation = TRUE.
  • link: Link function for the ordinal regression, which affects predictive performance. Options include "logit" (proportional odds model), "cloglog" (proportional hazards model), and "probit" (Default: "logit").
Important

Use the same link parameter throughout descriptive analysis and model building steps.

link <- "logit"
mAUC <- AutoScore_parsimony_Ordinal(
  train_set = train_set, validation_set = validation_set, link = link,
  rank = ranking, max_score = 100, n_min = 1, n_max = 20,
  categorize = "quantile", quantiles = c(0, 0.05, 0.2, 0.8, 0.95, 1), 
  auc_lim_min = 0, auc_lim_max = "adaptive",
  cross_validation = TRUE, fold = 10, do_trace = FALSE
)
***list of fianl Mean AUC values through cross validation are shown below 
   auc_set.sum
1    0.4373127
2    0.5706558
3    0.6276256
4    0.6361478
5    0.6398421
6    0.6279290
7    0.6349191
8    0.6341125
9    0.6298176
10   0.6928897
11   0.6937431
12   0.6811859
13   0.6803831
14   0.6877744
15   0.7371436
16   0.7366716
17   0.7362270
18   0.7368480
19   0.7362830
20   0.7373470

Note
  • Users can use mAUC for further analysis or export it as a CSV file for plotting in other software.
write.csv(data.frame(mAUC), file = "mAUC.csv")
  • Determine the optimal number of variables (num_var) based on the parsimony plot obtained in STEP(ii).
  • The final list of variables is the first num_var variables in the ranked list ranking obtained in STEP(i).
  • Optional: Users can adjust the final list of included variables, final_variables, based on clinical preference and knowledge.
# Example 1: Top 6 variables are selected
num_var <- 6
final_variables <- names(ranking[1:num_var])

# Example 2: Top 14 variables are selected
num_var <- 14
final_variables <- names(ranking[1:num_var])

# Example 3: Top 3 variables, the 10th and 15th variable are selected
final_variables <- names(ranking[c(1:3, 10, 15)])

6.2.3 STEP(iii): generate initial scores with final variables

Re-run AutoScore-Ordinal Modules 2+3

  • Generate cut_vec with current cutoffs of continuous variables, which can be fine-tuned in STEP(iv).
  • Performance of resulting scores is evaluated using the mean AUC across dichotomous classifications (mAUC), with 95% CI computed using bootstrap (Default: n_boot = 100 bootstrap samples). Setting n_boot = 1 disables bootstrap and reports mAUC without CI.
cut_vec <- AutoScore_weighting_Ordinal(
  train_set = train_set, validation_set = validation_set, 
  final_variables = final_variables, link = link, max_score = 100,
  categorize = "quantile", quantiles = c(0, 0.05, 0.2, 0.8, 0.95, 1), 
  n_boot = 10
)
****Included Variables: 
  variable_name
1           Age
2         Lab_A
3        Util_D
4        Util_B
5      Comorb_A
****Initial Scores: 


========  ===========  =====
variable  interval     point
========  ===========  =====
Age       <27            0  
          [27,46)       12  
          [46,78)       21  
          [78,87)       29  
          >=87          33  
                            
Lab_A     <46           13  
          [46,61)        0  
          [61,136)       2  
          [136,608)     10  
          >=608         12  
                            
Util_D    <0.64          6  
          [0.64,1.3)     8  
          [1.3,3.83)     4  
          [3.83,5.74)    4  
          >=5.74         0  
                            
Util_B    <1             0  
          [1,4)         10  
          >=4           23  
                            
Comorb_A  0              0  
          1             23  
========  ===========  =====
***Performance (based on validation set):
mAUC: 0.7440     95% CI: 0.7229-0.7539 (from 10 bootstrap samples)
***The cutoffs of each variables generated by the AutoScore-Ordinal are saved in cut_vec. You can decide whether to revise or fine-tune them 

6.2.4 STEP(iv): fine-tune initial score from STEP(iii)

AutoScore-Ordinal Module 5 & Re-run AutoScore-Ordinal Modules 2+3

  • Revise cut_vec with domain knowledge to update the scoring table (AutoScore-Ordinal Module 5).
  • Re-run AutoScore-Ordinal Modules 2+3 to generate the updated scores.
  • Users can choose any cutoff values and/or any number of categories, but it is suggested to choose values close to the automatically determined ones.
## For example, we have current cutoffs of continuous variable: Age 
## ==============  ===========  =====
## variable        interval     point
## ==============  ===========  =====
## Age                 <27          0  
##                     [27,46)      4  
##                     [46,78)     14  
##                     [78,87)     18 
##                     >=87        21 
  • Current cutoffs: c(27, 46, 78, 87). We can fine-tune them as follows:
# Example 1: rounding to a nice number
cut_vec$Age <- c(25, 45, 75, 85)

# Example 2: changing cutoffs according to clinical knowledge or preference 
cut_vec$Age <- c(25, 50, 75, 85)

# Example 3: combining categories
cut_vec$Age <- c(45, 75, 85)
  • mAUC and 95% bootstrap CI (Default: n_boot = 100 bootstrap samples) are reported after fine-tuning.
  • Run time increases with larger n_boot value. Code below uses n_boot = 10 for demonstration.
cut_vec$Util_D <- c(2 / 3, 4 / 3, 4, 6)
cut_vec$Lab_A <- c(45, 60, 135, 595)
cut_vec$Age <- c(25, 45, 75, 85)
cut_vec$Vital_A <- c(60, 70, 95, 115)
scoring_table <- AutoScore_fine_tuning_Ordinal(
  train_set = train_set, validation_set = validation_set, link = link,
  final_variables = final_variables, cut_vec = cut_vec, max_score = 100, 
  n_boot = 10
)
***Fine-tuned Scores: 


========  ============  =====
variable  interval      point
========  ============  =====
Age       <25             0  
          [25,45)         9  
          [45,75)        20  
          [75,85)        26  
          >=85           30  
                             
Lab_A     <45            14  
          [45,60)         0  
          [60,135)        2  
          [135,595)       9  
          >=595          11  
                             
Util_D    <0.667          6  
          [0.667,1.33)    9  
          [1.33,4)        5  
          [4,6)           6  
          >=6             0  
                             
Util_B    <1              0  
          [1,4)          11  
          >=4            24  
                             
Comorb_A  0               0  
          1              23  
========  ============  =====
***Performance (based on Validation Set, after fine-tuning):
mAUC: 0.7450     95% CI: 0.7341-0.7645 (from 10 bootstrap samples)

6.2.5 STEP(v): evaluate final risk scores on test dataset

AutoScore-Ordinal Module 6

  • mAUC and generalised c-index are reported for the test set, with 95% bootstrap CI (Default: n_boot = 100 bootstrap samples).
  • Run time increases with larger n_boot value. Code below uses n_boot = 10 for demonstration.
pred_score <- AutoScore_testing_Ordinal(
  test_set = test_set, link = link, final_variables = final_variables, 
  cut_vec = cut_vec, scoring_table = scoring_table, 
  with_label = TRUE, n_boot = 10
)
***Performance using AutoScore-Ordinal (based on unseen test Set):
mAUC: 0.7425     95% CI: 0.7281-0.7679 (from 10 bootstrap samples)
Generalised c-index: 0.6998      95% CI: 0.6907-0.7225 (from 10 bootstrap samples)
head(pred_score)
  pred_score Label
1         25     1
2         37     1
3         61     1
4         36     1
5         15     1
6         19     1

6.2.6 Map score to risk

The procedure for mapping scores to predicted risks is the same as in Demo 1 (6.1.6).

6.3 Demo 3: data with missing values

In Demo 3, we demonstrate the application of AutoScore-Ordinal to data with missing values in two variables (Vital_A and Vital_B).

data("sample_data_ordinal_missing")
check_data_ordinal(sample_data_ordinal_missing)
Data type check passed. 

WARNING: NA detected in data: -----
        Variable name No. missing %missing
Vital_A       Vital_A        4000       20
Vital_B       Vital_B       12000       60
SUGGESTED ACTION:
 * Consider imputation and supply AutoScore with complete data.
 * Alternatively, AutoScore can handle missing values as a separate 'Unknown' category, IF:
    - you believe the missingness in your dataset is informative, AND
    - missing is prevalent enough that you prefer to preserve them as NA rather than removing or doing imputation, AND
    - missing is not too prevalent, which may make results unstable.

AutoScore can automatically treat the missingness as a new category named Unknown. The following steps are the same as those in Demo 1 (6.1).
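
Alternatively, if you prefer to supply complete data (the first suggested action above), a simple option is to impute the two affected variables before model development. The sketch below uses median imputation purely for illustration; more principled approaches (e.g., multiple imputation) may be preferable in practice:

# Sketch: simple median imputation for the two variables with missing values
imputed_data <- sample_data_ordinal_missing
for (v in c("Vital_A", "Vital_B")) {
  med <- median(imputed_data[[v]], na.rm = TRUE)
  imputed_data[[v]][is.na(imputed_data[[v]])] <- med
}
check_data_ordinal(imputed_data)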

Important
  • A high missing rate may make the variable ranking less reliable; thus, caution is needed when selecting variables using the parsimony plot.
set.seed(4)
out_split <- split_data(data = sample_data_ordinal_missing, ratio = c(0.7, 0.1, 0.2), 
                        strat_by_label = TRUE)
train_set <- out_split$train_set
validation_set <- out_split$validation_set
test_set <- out_split$test_set

link <- "logit"
ranking <- AutoScore_rank_Ordinal(train_set = train_set, ntree = 100)
The ranking based on variable importance was shown below for each variable: 
   Util_D     Lab_A   Vital_F       Age   Vital_E   Vital_A   Vital_D     Lab_B 
425.24001 400.43374 392.73640 386.44000 381.80802 360.50478 355.61072 315.06078 
    Lab_C    Util_C    Util_B   Vital_C  Comorb_A    Util_A    Gender  Comorb_B 
290.87695 254.94879 207.48694 196.58751 115.91414 101.57517  55.84234  43.89240 
 Comorb_D  Comorb_C  Comorb_E   Vital_B 
 33.75145  17.30841  11.98427   1.16588 

mAUC <- AutoScore_parsimony_Ordinal(
  train_set = train_set, validation_set = validation_set, link = link,
  rank = ranking, max_score = 100, n_min = 1, n_max = 20,
  categorize = "quantile", quantiles = c(0, 0.05, 0.2, 0.8, 0.95, 1), 
  auc_lim_min = 0
)
Select 1 variables:  Mean area under the curve: 0.4555607 
Select 2 variables:  Mean area under the curve: 0.5110174 
Select 3 variables:  Mean area under the curve: 0.5780548 
Select 4 variables:  Mean area under the curve: 0.6524352 
Select 5 variables:  Mean area under the curve: 0.6579466 
Select 6 variables:  Mean area under the curve: 0.675316 
Select 7 variables:  Mean area under the curve: 0.6714668 
Select 8 variables:  Mean area under the curve: 0.6731644 
Select 9 variables:  Mean area under the curve: 0.6709335 
Select 10 variables:  Mean area under the curve: 0.671352 
Select 11 variables:  Mean area under the curve: 0.6993675 
Select 12 variables:  Mean area under the curve: 0.7013214 
Select 13 variables:  Mean area under the curve: 0.7473641 
Select 14 variables:  Mean area under the curve: 0.7504558 
Select 15 variables:  Mean area under the curve: 0.7501569 
Select 16 variables:  Mean area under the curve: 0.7495503 
Select 17 variables:  Mean area under the curve: 0.7478535 
Select 18 variables:  Mean area under the curve: 0.7470084 
Select 19 variables:  Mean area under the curve: 0.7468875 
Select 20 variables:  Mean area under the curve: 0.7444887 

Note
  • The Unknown category indicating missingness will be displayed in the final scoring table.
final_variables <- names(ranking[c(1:6)])
cut_vec <- AutoScore_weighting_Ordinal(
  train_set = train_set, validation_set = validation_set, 
  final_variables = final_variables, link = link, max_score = 100,
  categorize = "quantile", quantiles = c(0, 0.05, 0.2, 0.8, 0.95, 1), 
  n_boot = 10
)
****Included Variables: 
  variable_name
1        Util_D
2         Lab_A
3       Vital_F
4           Age
5       Vital_E
6       Vital_A
****Initial Scores: 


========  ============  =====
variable  interval      point
========  ============  =====
Util_D    <0.652         11  
          [0.652,1.32)   10  
          [1.32,3.93)     3  
          [3.93,5.93)     2  
          >=5.93          0  
                             
Lab_A     <46             9  
          [46,61)         0  
          [61,134)        1  
          [134,584)      11  
          >=584           7  
                             
Vital_F   <16.7          12  
          [16.7,20.5)     4  
          [20.5,25.4)     0  
          [25.4,28.1)     1  
          >=28.1          5  
                             
Age       <27             0  
          [27,46)         4  
          [46,78)        19  
          [78,87)        25  
          >=87           30  
                             
Vital_E   <99            15  
          [99,112)       10  
          [112,153)       5  
          [153,179)       3  
          >=179           0  
                             
Vital_A   <57             6  
          [57,66)         0  
          [66,99)         5  
          [99,115)        9  
          >=116          21  
          Unknown         6  
========  ============  =====
***Performance (based on validation set):
mAUC: 0.6753     95% CI: 0.6360-0.7193 (from 10 bootstrap samples)
***The cutoffs of each variables generated by the AutoScore-Ordinal are saved in cut_vec. You can decide whether to revise or fine-tune them