* using log directory 'd:/Rcompile/CRANpkg/local/4.5/fairmodels.Rcheck'
* using R version 4.5.1 (2025-06-13 ucrt)
* using platform: x86_64-w64-mingw32
* R was compiled by
    gcc.exe (GCC) 14.2.0
    GNU Fortran (GCC) 14.2.0
* running under: Windows Server 2022 x64 (build 20348)
* using session charset: UTF-8
* checking for file 'fairmodels/DESCRIPTION' ... OK
* checking extension type ... Package
* this is package 'fairmodels' version '1.2.1'
* package encoding: UTF-8
* checking package namespace information ... OK
* checking package dependencies ... OK
* checking if this is a source package ... OK
* checking if there is a namespace ... OK
* checking for hidden files and directories ... OK
* checking for portable file names ... OK
* checking whether package 'fairmodels' can be installed ... OK
* checking installed package size ... OK
* checking package directory ... OK
* checking 'build' directory ... OK
* checking DESCRIPTION meta-information ... OK
* checking top-level files ... OK
* checking for left-over files ... OK
* checking index information ... OK
* checking package subdirectories ... OK
* checking code files for non-ASCII characters ... OK
* checking R files for syntax errors ... OK
* checking whether the package can be loaded ... [2s] OK
* checking whether the package can be loaded with stated dependencies ... [1s] OK
* checking whether the package can be unloaded cleanly ... [1s] OK
* checking whether the namespace can be loaded with stated dependencies ... [1s] OK
* checking whether the namespace can be unloaded cleanly ... [2s] OK
* checking loading without being on the library search path ... [1s] OK
* checking use of S3 registration ... OK
* checking dependencies in R code ... OK
* checking S3 generic/method consistency ... OK
* checking replacement functions ... OK
* checking foreign function calls ... OK
* checking R code for possible problems ... [8s] OK
* checking Rd files ... [1s] NOTE
checkRd: (-1) choose_metric.Rd:35: Lost braces in \itemize; \value handles \item{}{} directly
checkRd: (-1) choose_metric.Rd:36: Lost braces in \itemize; \value handles \item{}{} directly
checkRd: (-1) choose_metric.Rd:37: Lost braces in \itemize; \value handles \item{}{} directly
checkRd: (-1) confusion_matrx.Rd:20: Lost braces in \itemize; \value handles \item{}{} directly
checkRd: (-1) confusion_matrx.Rd:21: Lost braces in \itemize; \value handles \item{}{} directly
checkRd: (-1) confusion_matrx.Rd:22: Lost braces in \itemize; \value handles \item{}{} directly
checkRd: (-1) confusion_matrx.Rd:23: Lost braces in \itemize; \value handles \item{}{} directly
checkRd: (-1) disparate_impact_remover.Rd:28: Lost braces
    28 | pigeonholing. The number of pigeonholes is fixed and equal to min{101, unique(a)}, where a is vector with values for subgroup. So if some subgroup is not numerous and
       |                                                                  ^
checkRd: (-1) fairness_check.Rd:47: Lost braces in \itemize; \value handles \item{}{} directly
checkRd: (-1) fairness_check.Rd:48: Lost braces in \itemize; \value handles \item{}{} directly
checkRd: (-1) fairness_check.Rd:49: Lost braces in \itemize; \value handles \item{}{} directly
checkRd: (-1) fairness_check.Rd:50: Lost braces in \itemize; \value handles \item{}{} directly
checkRd: (-1) fairness_check.Rd:51: Lost braces in \itemize; \value handles \item{}{} directly
checkRd: (-1) fairness_check.Rd:52: Lost braces in \itemize; \value handles \item{}{} directly
checkRd: (-1) fairness_check.Rd:53: Lost braces in \itemize; \value handles \item{}{} directly
checkRd: (-1) fairness_check.Rd:54: Lost braces in \itemize; \value handles \item{}{} directly
checkRd: (-1) fairness_check.Rd:55: Lost braces in \itemize; \value handles \item{}{} directly
checkRd: (-1) fairness_check.Rd:56: Lost braces in \itemize; \value handles \item{}{} directly
checkRd: (-1) fairness_check.Rd:57: Lost braces in \itemize; \value handles \item{}{} directly
checkRd: (-1) fairness_check.Rd:58: Lost braces in \itemize; \value handles \item{}{} directly
checkRd: (-1) fairness_check.Rd:61: Lost braces in \itemize; \value handles \item{}{} directly
checkRd: (-1) fairness_check.Rd:62: Lost braces in \itemize; \value handles \item{}{} directly
checkRd: (-1) fairness_check.Rd:63: Lost braces in \itemize; \value handles \item{}{} directly
checkRd: (-1) fairness_check.Rd:64: Lost braces in \itemize; \value handles \item{}{} directly
checkRd: (-1) fairness_check.Rd:65: Lost braces in \itemize; \value handles \item{}{} directly
checkRd: (-1) fairness_check.Rd:66: Lost braces in \itemize; \value handles \item{}{} directly
checkRd: (-1) fairness_heatmap.Rd:12: Lost braces
    12 | \item{scale}{logical, if code{TRUE} metrics will be scaled to mean 0 and sd 1. Default \code{FALSE}}
       |                              ^
checkRd: (-1) fairness_heatmap.Rd:19: Lost braces in \itemize; \value handles \item{}{} directly
checkRd: (-1) fairness_heatmap.Rd:20: Lost braces in \itemize; \value handles \item{}{} directly
checkRd: (-1) fairness_heatmap.Rd:21: Lost braces in \itemize; \value handles \item{}{} directly
checkRd: (-1) fairness_heatmap.Rd:22: Lost braces in \itemize; \value handles \item{}{} directly
checkRd: (-1) fairness_pca.Rd:18: Lost braces in \itemize; \value handles \item{}{} directly
checkRd: (-1) fairness_pca.Rd:19: Lost braces in \itemize; \value handles \item{}{} directly
checkRd: (-1) fairness_pca.Rd:20: Lost braces in \itemize; \value handles \item{}{} directly
checkRd: (-1) fairness_pca.Rd:21: Lost braces in \itemize; \value handles \item{}{} directly
checkRd: (-1) fairness_pca.Rd:22: Lost braces in \itemize; \value handles \item{}{} directly
checkRd: (-1) fairness_radar.Rd:18: Lost braces in \itemize; \value handles \item{}{} directly
checkRd: (-1) fairness_radar.Rd:19: Lost braces in \itemize; \value handles \item{}{} directly
checkRd: (-1) group_matrices.Rd:25: Lost braces in \itemize; \value handles \item{}{} directly
checkRd: (-1) group_matrices.Rd:26: Lost braces in \itemize; \value handles \item{}{} directly
checkRd: (-1) group_matrices.Rd:27: Lost braces in \itemize; \value handles \item{}{} directly
checkRd: (-1) group_matrices.Rd:28: Lost braces in \itemize; \value handles \item{}{} directly
checkRd: (-1) group_metric.Rd:30: Lost braces in \itemize; \value handles \item{}{} directly
checkRd: (-1) group_metric.Rd:31: Lost braces in \itemize; \value handles \item{}{} directly
checkRd: (-1) group_metric.Rd:32: Lost braces in \itemize; \value handles \item{}{} directly
checkRd: (-1) group_metric.Rd:33: Lost braces in \itemize; \value handles \item{}{} directly
checkRd: (-1) metric_scores.Rd:18: Lost braces in \itemize; \value handles \item{}{} directly
checkRd: (-1) metric_scores.Rd:19: Lost braces in \itemize; \value handles \item{}{} directly
checkRd: (-1) performance_and_fairness.Rd:20: Lost braces in \itemize; \value handles \item{}{} directly
checkRd: (-1) performance_and_fairness.Rd:21: Lost braces in \itemize; \value handles \item{}{} directly
checkRd: (-1) performance_and_fairness.Rd:22: Lost braces in \itemize; \value handles \item{}{} directly
checkRd: (-1) performance_and_fairness.Rd:23: Lost braces in \itemize; \value handles \item{}{} directly
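The "Lost braces" notes above are all mechanical Rd-markup issues (severity -1, a NOTE, not an ERROR). Two are stray braces in running text: in Rd a literal brace must be escaped as \{ and \}, and code{TRUE} is simply missing the leading backslash of \code{TRUE}. The rest flag \value sections that wrap their entries in \itemize{}, even though \value accepts \item{}{} entries directly. A sketch of the three fixes (the \value field names below are placeholders, not the package's actual ones):

    % disparate_impact_remover.Rd:28 -- escape the literal braces:
    pigeonholing. The number of pigeonholes is fixed and equal to
    min\{101, unique(a)\}, where a is vector with values for subgroup.

    % fairness_heatmap.Rd:12 -- restore the missing backslash:
    \item{scale}{logical, if \code{TRUE} metrics will be scaled to mean 0
    and sd 1. Default \code{FALSE}}

    % choose_metric.Rd and similar -- drop \itemize inside \value:
    \value{
    \item{some_field}{description of the first returned component}
    \item{another_field}{description of the second returned component}
    }

If the package generates these Rd files from roxygen2 comments (an assumption; typical for DALEX-family packages), the edits belong in the #' blocks, not the Rd files themselves.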
* checking Rd metadata ... OK
* checking Rd cross-references ... OK
* checking for missing documentation entries ... OK
* checking for code/documentation mismatches ... OK
* checking Rd \usage sections ... OK
* checking Rd contents ... OK
* checking for unstated dependencies in examples ... OK
* checking contents of 'data' directory ... OK
* checking data for non-ASCII characters ... [0s] OK
* checking LazyData ... OK
* checking data for ASCII and uncompressed saves ... OK
* checking installed files from 'inst/doc' ... OK
* checking files in 'vignettes' ... OK
* checking examples ... [10s] ERROR
Running examples in 'fairmodels-Ex.R' failed
The error most likely occurred in:

> ### Name: fairness_heatmap
> ### Title: Fairness heatmap
> ### Aliases: fairness_heatmap
> 
> ### ** Examples
> 
> 
> data("german")
> 
> y_numeric <- as.numeric(german$Risk) - 1
> 
> lm_model <- glm(Risk ~ .,
+   data = german,
+   family = binomial(link = "logit")
+ )
> 
> rf_model <- ranger::ranger(Risk ~ .,
+   data = german,
+   probability = TRUE,
+   num.trees = 200,
+   num.threads = 1
+ )
> 
> explainer_lm <- DALEX::explain(lm_model, data = german[, -1], y = y_numeric)
Preparation of a new explainer is initiated
  -> model label : lm ( default )
  -> data : 1000 rows 9 cols
  -> target variable : 1000 values
  -> predict function : yhat.glm will be used ( default )
  -> predicted values : No value for predict function target column. ( default )
  -> model_info : package stats , ver. 4.5.1 , task classification ( default )
  -> predicted values : numerical, min = 0.1369187 , mean = 0.7 , max = 0.9832426
  -> residual function : difference between y and yhat ( default )
  -> residuals : numerical, min = -0.9572803 , mean = 6.648002e-17 , max = 0.8283475
  A new explainer has been created!
> explainer_rf <- DALEX::explain(rf_model, data = german[, -1], y = y_numeric)
Preparation of a new explainer is initiated
  -> model label : ranger ( default )
  -> data : 1000 rows 9 cols
  -> target variable : 1000 values
  -> predict function : yhat.ranger will be used ( default )
  -> predicted values : No value for predict function target column. ( default )
  -> model_info : package ranger , ver. 0.17.0 , task classification ( default )
  -> predicted values : numerical, min = 0.07287302 , mean = 0.6989152 , max = 0.9974848
  -> residual function : difference between y and yhat ( default )
  -> residuals : numerical, min = -0.7219256 , mean = 0.001084826 , max = 0.6142332
  A new explainer has been created!
> 
> fobject <- fairness_check(explainer_lm, explainer_rf,
+   protected = german$Sex,
+   privileged = "male"
+ )
Creating fairness classification object
-> Privileged subgroup : character ( Ok )
-> Protected variable : factor ( Ok )
-> Cutoff values for explainers : 0.5 ( for all subgroups )
-> Fairness objects : 0 objects
-> Checking explainers : 2 in total ( compatible )
-> Metric calculation : 10/13 metrics calculated for all models ( 3 NA created )
 Fairness object created succesfully 
> 
> # same explainers with different cutoffs for female
> fobject <- fairness_check(explainer_lm, explainer_rf, fobject,
+   protected = german$Sex,
+   privileged = "male",
+   cutoff = list(female = 0.4),
+   label = c("lm_2", "rf_2")
+ )
Creating fairness classification object
-> Privileged subgroup : character ( Ok )
-> Protected variable : factor ( Ok )
-> Cutoff values for explainers : female: 0.4, male: 0.5
-> Fairness objects : 1 object ( compatible )
-> Checking explainers : 4 in total ( compatible )
-> Metric calculation : 10/13 metrics calculated for all models ( 3 NA created )
 Fairness object created succesfully 
> 
> 
> fh <- fairness_heatmap(fobject)
> 
> plot(fh)
Error in rep(yes, length.out = len) : 
  attempt to replicate an object of type 'object'
Calls: plot -> plot.fairness_heatmap -> ifelse
Execution halted
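This examples ERROR, two of the three test failures below, and the vignette ERROR all share one root cause: plot.fairness_heatmap() routes a non-vector value through base::ifelse(). ifelse() recycles its yes/no arguments via rep(yes, length.out = len), and rep() cannot replicate objects of internal type 'object' -- which is what ggplot objects became once ggplot2 moved to S7 (that trigger is an inference; the log does not print the installed ggplot2 version). A minimal sketch of the failure mode and the conventional fix, with a hypothetical flip_coords flag standing in for whatever condition the package actually branches on:

    library(ggplot2)

    p1 <- ggplot(mtcars, aes(wt, mpg)) + geom_point()
    p2 <- p1 + coord_flip()
    flip_coords <- TRUE

    # Reproduces the error on S7-based ggplot2: ifelse() internally calls
    # rep(yes, length.out = len), which cannot replicate type 'object'.
    # p <- ifelse(flip_coords, p2, p1)

    # ifelse() is for element-wise selection over vectors; a scalar branch
    # returning an arbitrary object needs plain if/else:
    p <- if (flip_coords) p2 else p1
    plot(p)

The if/else form is also the right choice on older ggplot2: ifelse() on an S3 ggplot (a list) silently returned just the first list element, not the plot.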
* checking for unstated dependencies in 'tests' ... OK
* checking tests ... [30s] ERROR
  Running 'testthat.R' [30s]
Running the tests in 'tests/testthat.R' failed.
Complete output:
> library(testthat)
> library(fairmodels)
> 
> 
> test_check("fairmodels")
Welcome to DALEX (version: 2.5.2).
Find examples and detailed introduction at: http://ema.drwhy.ai/
Additional features will be available after installation of: ggpubr.
Use 'install_dependencies()' to get all suggested dependencies
Loaded gbm 2.2.2
This version of gbm is no longer under development. Consider transitioning to gbm3, https://github.com/gbm-developers/gbm3
Preparation of a new explainer is initiated
  -> model label : ranger ( default )
  -> data : 6172 rows 7 cols
  -> target variable : 6172 values
  -> predict function : yhat.ranger will be used ( default )
  -> predicted values : No value for predict function target column. ( default )
  -> model_info : package ranger , ver. 0.17.0 , task classification ( default )
  -> predicted values : numerical, min = 0.1622415 , mean = 0.5451887 , max = 0.866639
  -> residual function : difference between y and yhat ( default )
  -> residuals : numerical, min = -0.848226 , mean = -0.0003085905 , max = 0.7835325
  A new explainer has been created!
Preparation of a new explainer is initiated
  -> model label : lm ( default )
  -> data : 6172 rows 7 cols
  -> target variable : 6172 values
  -> predict function : yhat.glm will be used ( default )
  -> predicted values : No value for predict function target column. ( default )
  -> model_info : package stats , ver. 4.5.1 , task classification ( default )
  -> predicted values : numerical, min = 0.004522979 , mean = 0.5448801 , max = 0.8855426
  -> residual function : difference between y and yhat ( default )
  -> residuals : numerical, min = -0.8822826 , mean = -5.053611e-13 , max = 0.9767658
  A new explainer has been created!
Creating fairness classification object
-> Privileged subgroup : character ( Ok )
-> Protected variable : factor ( Ok )
-> Cutoff values for explainers : 0.5 ( for all subgroups )
-> Fairness objects : 0 objects
-> Checking explainers : 2 in total ( compatible )
-> Metric calculation : 13/13 metrics calculated for all models
 Fairness object created succesfully 
Creating fairness classification object
-> Privileged subgroup : character ( Ok )
-> Protected variable : factor ( Ok )
-> Cutoff values for explainers : 0.5 ( for all subgroups )
-> Fairness objects : 0 objects
-> Checking explainers : 2 in total ( compatible )
-> Metric calculation : 13/13 metrics calculated for all models
 Fairness object created succesfully 
Creating fairness classification object
-> Privileged subgroup : character ( Ok )
-> Protected variable : factor ( changed from numeric )
-> Cutoff values for explainers : 0.5 ( for all subgroups )
-> Fairness objects : 0 objects
-> Checking explainers : 2 in total ( compatible )
-> Metric calculation : 13/13 metrics calculated for all models
 Fairness object created succesfully 
Creating fairness classification object
-> Privileged subgroup : character ( Ok )
-> Protected variable : factor ( Ok )
-> Cutoff values for explainers :
Creating fairness classification object
-> Privileged subgroup : character ( Ok )
-> Protected variable : factor ( Ok )
-> Cutoff values for explainers :
Creating fairness classification object
-> Privileged subgroup : character ( Ok )
-> Protected variable : factor ( Ok )
-> Cutoff values for explainers :
Creating fairness classification object
-> Privileged subgroup : character ( Ok )
-> Protected variable : factor ( Ok )
-> Cutoff values for explainers :
Creating fairness classification object
-> Privileged subgroup : character ( Ok )
-> Protected variable : factor ( Ok )
-> Cutoff values for explainers :
Creating fairness classification object
-> Privileged subgroup : character ( Ok )
-> Protected variable : factor ( Ok )
-> Cutoff values for explainers :
Creating fairness classification object
-> Privileged subgroup : character ( from first fairness object )
-> Protected variable : factor ( from first fairness object )
-> Cutoff values for explainers : 0.5 ( for all subgroups )
-> Fairness objects : 2 objects ( compatible )
-> Checking explainers : 4 in total ( compatible )
Creating fairness classification object
-> Privileged subgroup : character ( from first fairness object )
-> Protected variable : factor ( from first fairness object )
-> Cutoff values for explainers : 0.5 ( for all subgroups )
-> Fairness objects : 1 object ( compatible )
-> Checking explainers : 3 in total ( model type not supported )
Creating fairness classification object
-> Privileged subgroup : character ( from first fairness object )
-> Protected variable : factor ( from first fairness object )
-> Cutoff values for explainers : 0.5 ( for all subgroups )
-> Fairness objects : 2 objects ( not compatible )
Creating fairness classification object
-> Privileged subgroup : character ( Ok )
-> Protected variable : factor ( from first fairness object )
-> Cutoff values for explainers : 0.5 ( for all subgroups )
-> Fairness objects : 2 objects ( not compatible )
Creating fairness classification object
-> Privileged subgroup : character ( Ok )
-> Protected variable : factor ( Ok )
-> Cutoff values for explainers : 0.5 ( for all subgroups )
-> Fairness objects : 2 objects ( not compatible )
Preparation of a new explainer is initiated
  -> model label : lm ( default )
  -> data : 1000 rows 3 cols
  -> target variable : 1000 values
  -> predict function : yhat.lm will be used ( default )
  -> predicted values : No value for predict function target column. ( default )
  -> model_info : package stats , ver. 4.5.1 , task regression ( default )
  -> predicted values : numerical, min = -325.7554 , mean = 753.651 , max = 1654.856
  -> residual function : difference between y and yhat ( default )
  -> residuals : numerical, min = -338.0257 , mean = -9.237739e-13 , max = 353.6267
  A new explainer has been created!
Creating fairness regression object
-> Privileged subgroup : character ( Ok )
-> Protected variable : factor ( changed from character )
-> Fairness objects : 0 objects
-> Checking explainers : 1 in total ( compatible )
-> Metric calculation : 3/3 metrics calculated for all models
 Fairness regression object created succesfully 
Creating fairness regression object
-> Privileged subgroup : character ( Ok )
-> Protected variable : factor ( changed from character )
-> Fairness objects : 0 objects
-> Checking explainers : 1 in total ( compatible )
-> Metric calculation : 3/3 metrics calculated for all models
 Fairness regression object created succesfully 
Creating fairness regression object
-> Privileged subgroup : character ( Ok )
-> Protected variable : factor ( changed from character )
-> Fairness objects : 0 objects
-> Checking explainers : 1 in total ( compatible )
-> Metric calculation : 3/3 metrics calculated for all models
 Fairness regression object created succesfully 
Creating fairness regression object
-> Privileged subgroup : character ( Ok )
-> Protected variable : factor ( changed from character )
-> Fairness objects : 1 object ( compatible )
-> Checking explainers : 2 in total ( compatible )
Creating fairness regression object
-> Privileged subgroup : character ( Ok )
-> Protected variable : factor ( changed from numeric )
Creating fairness regression object
-> Privileged subgroup : character ( Ok )
-> Protected variable : factor ( changed from character )
Creating fairness regression object
-> Privileged subgroup : character ( from first fairness object )
-> Protected variable : factor ( from first fairness object )
Creating fairness regression object
-> Privileged subgroup : character ( from first fairness object )
-> Protected variable : factor ( from first fairness object )
Creating fairness regression object
-> Privileged subgroup : character ( from first fairness object )
-> Protected variable : factor ( changed from character )
-> Fairness objects : 1 object ( compatible )
-> Checking explainers : 2 in total ( model type not supported )
Preparation of a new explainer is initiated
  -> model label : lm ( default )
  -> data : 1000 rows 3 cols
  -> target variable : 1000 values
  -> predict function : yhat.lm will be used ( default )
  -> predicted values : No value for predict function target column. ( default )
  -> model_info : package stats , ver. 4.5.1 , task regression ( default )
  -> predicted values : numerical, min = -325.7554 , mean = 753.651 , max = 1654.856
  -> residual function : difference between y and yhat ( default )
  -> residuals : numerical, min = -338.0257 , mean = -9.237739e-13 , max = 353.6267
  A new explainer has been created!
Creating fairness regression object
-> Privileged subgroup : character ( Ok )
-> Protected variable : factor ( changed from character )
-> Fairness objects : 0 objects
-> Checking explainers : 1 in total ( compatible )
-> Metric calculation : 3/3 metrics calculated for all models
 Fairness regression object created succesfully 
Creating fairness regression object
-> Privileged subgroup : character ( from first fairness object )
-> Protected variable : factor ( from first fairness object )
-> Fairness objects : 1 object ( compatible )
-> Checking explainers : 2 in total ( compatible )
-> Metric calculation : 3/3 metrics calculated for all models
 Fairness regression object created succesfully 
Creating fairness classification object
-> Privileged subgroup : character ( Ok )
-> Protected variable : factor ( Ok )
-> Cutoff values for explainers : 0.5 ( for all subgroups )
-> Fairness objects : 0 objects
-> Checking explainers : 1 in total ( compatible )
-> Metric calculation : 11/13 metrics calculated for all models ( 2 NA created )
 Fairness object created succesfully 
Creating fairness classification object
-> Privileged subgroup : character ( Ok )
-> Protected variable : factor ( Ok )
Creating fairness classification object
-> Privileged subgroup : character ( Ok )
-> Protected variable : factor ( Ok )
-> Cutoff values for explainers : 0.5 ( for all subgroups )
-> Fairness objects : 0 objects
-> Checking explainers : 1 in total ( not compatible )
Creating fairness classification object
-> Privileged subgroup : character ( Ok )
-> Protected variable : factor ( Ok )
-> Cutoff values for explainers : 0.5 ( for all subgroups )
-> Fairness objects : 0 objects
-> Checking explainers : 1 in total ( compatible )
-> Metric calculation : 11/13 metrics calculated for all models ( 2 NA created )
 Fairness object created succesfully 
Creating fairness classification object
-> Privileged subgroup : character ( Ok )
-> Protected variable : factor ( Ok )
-> Cutoff values for explainers : 0.5 ( for all subgroups )
-> Fairness objects : 1 object ( compatible )
-> Checking explainers : 2 in total ( compatible )
Creating fairness classification object
-> Privileged subgroup : character ( Ok )
-> Protected variable : factor ( Ok )
-> Cutoff values for explainers : 0.5 ( for all subgroups )
-> Fairness objects : 0 objects
-> Checking explainers : 1 in total ( compatible )
-> Metric calculation : 11/13 metrics calculated for all models ( 2 NA created )
 Fairness object created succesfully 
Creating fairness classification object
-> Privileged subgroup : character ( Ok )
-> Protected variable : factor ( Ok )
-> Cutoff values for explainers : 0.5 ( for all subgroups )
-> Fairness objects : 2 objects ( not compatible )
Creating fairness classification object
-> Privileged subgroup : character ( Ok )
-> Protected variable : factor ( Ok )
-> Cutoff values for explainers : 0.5 ( for all subgroups )
-> Fairness objects : 0 objects
-> Checking explainers : 2 in total ( y not equal )
Creating fairness classification object
-> Privileged subgroup : character ( Ok )
-> Protected variable : factor ( Ok )
-> Cutoff values for explainers :
Creating fairness classification object
-> Privileged subgroup : character ( Ok )
-> Protected variable : factor ( Ok )
-> Cutoff values for explainers :
Creating fairness classification object
-> Privileged subgroup : character ( Ok )
-> Protected variable : factor ( Ok )
-> Cutoff values for explainers : 0.5 ( for all subgroups )
-> Fairness objects : 0 objects
-> Checking explainers : 1 in total ( compatible )
-> Metric calculation : 11/13 metrics calculated for all models ( 2 NA created )
 Fairness object created succesfully 
Creating fairness classification object
-> Privileged subgroup : character ( from first fairness object )
-> Protected variable : factor ( from first fairness object )
-> Cutoff values for explainers : 0.5 ( for all subgroups )
-> Fairness objects : 2 objects ( compatible )
-> Checking explainers : 2 in total ( compatible )
-> Metric calculation : 13/13 metrics calculated for all models
 Fairness object created succesfully 
Performace metric not given, setting deafult ( accuracy )
Performace metric not given, setting deafult ( accuracy )
Performace metric not given, setting deafult ( accuracy )
Fairness Metric not given, setting deafult ( TPR )
Fairness Metric not given, setting deafult ( TPR )
Fairness Metric not given, setting deafult ( TPR )
Creating fairness classification object
-> Privileged subgroup : character ( Ok )
-> Protected variable : factor ( Ok )
-> Cutoff values for explainers : 0.5 ( for all subgroups )
-> Fairness objects : 0 objects
-> Checking explainers : 2 in total ( compatible )
-> Metric calculation : 6/13 metrics calculated for all models ( 7 NA created )
 Fairness object created succesfully 
Fairness Metric not given, setting deafult ( TPR )
Performace metric not given, setting deafult ( accuracy )
Creating object with:
Fairness metric: TPR
Performance metric: accuracy
Creating object with:
Fairness metric: FPR
Performance metric: f1
Fairness data top rows for FPR
             group      score model
1 African_American 0.35204756    lm
2            Asian 0.04347826    lm
3        Caucasian 0.16393443    lm
4         Hispanic 0.11562500    lm
5  Native_American 0.16666667    lm
6            Other 0.07762557    lm
Performance data for f1 :
1     lm 0.6039853
2 ranger 0.6443375
Fairness Metric is NULL, setting deafult parity loss metric ( TPR )
Performace metric is NULL, setting deafult ( accuracy )
Creating object with:
Fairness metric: TPR
Performance metric: accuracy
Performace metric is NULL, setting deafult ( accuracy )
Creating object with:
Fairness metric: non_existing
Performance metric: accuracy
Fairness Metric is NULL, setting deafult parity loss metric ( TPR )
Creating object with:
Fairness metric: TPR
Performance metric: non_existing
Fairness Metric is NULL, setting deafult parity loss metric ( TPR )
Creating object with:
Fairness metric: TPR
Performance metric: auc
Fairness Metric is NULL, setting deafult parity loss metric ( TPR )
Creating object with:
Fairness metric: TPR
Performance metric: accuracy
Fairness Metric is NULL, setting deafult parity loss metric ( TPR )
Creating object with:
Fairness metric: TPR
Performance metric: precision
Fairness Metric is NULL, setting deafult parity loss metric ( TPR )
Creating object with:
Fairness metric: TPR
Performance metric: recall
Preparation of a new explainer is initiated
  -> model label : lm ( default )
  -> data : 6172 rows 7 cols
  -> target variable : 6172 values
  -> predict function : yhat.glm will be used ( default )
  -> predicted values : No value for predict function target column. ( default )
  -> model_info : package stats , ver. 4.5.1 , task classification ( default )
  -> predicted values : numerical, min = 0.1144574 , mean = 0.4551199 , max = 0.995477
  -> residual function : difference between y and yhat ( default )
  -> residuals : numerical, min = -0.9767658 , mean = 5.053909e-13 , max = 0.8822826
  A new explainer has been created!
Creating fairness classification object
-> Privileged subgroup : character ( Ok )
-> Protected variable : factor ( Ok )
-> Cutoff values for explainers : 0.5 ( for all subgroups )
-> Fairness objects : 0 objects
-> Checking explainers : 1 in total ( compatible )
-> Metric calculation : 8/13 metrics calculated for all models ( 5 NA created )
 Fairness object created succesfully 
Preparation of a new explainer is initiated
  -> model label : lm ( default )
  -> data : 1000 rows 3 cols
  -> target variable : 1000 values
  -> predict function : yhat.lm will be used ( default )
  -> predicted values : No value for predict function target column. ( default )
  -> model_info : package stats , ver. 4.5.1 , task regression ( default )
  -> predicted values : numerical, min = -119.546 , mean = 756.4906 , max = 1594.562
  -> residual function : difference between y and yhat ( default )
  -> residuals : numerical, min = -302.6659 , mean = 3.478115e-13 , max = 332.7938
  A new explainer has been created!
Creating fairness regression object
-> Privileged subgroup : character ( Ok )
-> Protected variable : factor ( changed from character )
-> Fairness objects : 0 objects
-> Checking explainers : 1 in total ( compatible )
-> Metric calculation : 3/3 metrics calculated for all models
 Fairness regression object created succesfully 
Preparation of a new explainer is initiated
  -> model label : ranger ( default )
  -> data : 1000 rows 3 cols
  -> target variable : 1000 values
  -> predict function : yhat.ranger will be used ( default )
  -> predicted values : No value for predict function target column. ( default )
  -> model_info : package ranger , ver. 0.17.0 , task regression ( default )
  -> predicted values : numerical, min = 361.6527 , mean = 756.1869 , max = 1136.792
  -> residual function : difference between y and yhat ( default )
  -> residuals : numerical, min = -669.0748 , mean = 0.3037205 , max = 630.6428
  A new explainer has been created!
Creating fairness regression object
-> Privileged subgroup : character ( from first fairness object )
-> Protected variable : factor ( from first fairness object )
-> Fairness objects : 1 object ( compatible )
-> Checking explainers : 2 in total ( compatible )
-> Metric calculation : 3/3 metrics calculated for all models
 Fairness regression object created succesfully 
Creating fairness classification object
-> Privileged subgroup : character ( Ok )
-> Protected variable : factor ( Ok )
-> Cutoff values for explainers : 0.5 ( for all subgroups )
-> Fairness objects : 0 objects
-> Checking explainers : 1 in total ( compatible )
-> Metric calculation : 13/13 metrics calculated for all models
 Fairness object created succesfully 
changing protected to factor
Preparation of a new explainer is initiated
  -> model label : lm ( default )
  -> data : 15 rows 2 cols
  -> target variable : 15 values
  -> predict function : yhat.glm will be used ( default )
  -> predicted values : No value for predict function target column. ( default )
  -> model_info : package stats , ver. 4.5.1 , task classification ( default )
  -> predicted values : numerical, min = 7.884924e-12 , mean = 0.4666667 , max = 1
  -> residual function : difference between y and yhat ( default )
  -> residuals : numerical, min = -7.884924e-12 , mean = -5.256659e-13 , max = 7.884915e-12
  A new explainer has been created!
[ FAIL 3 | WARN 2 | SKIP 0 | PASS 299 ]

══ Failed tests ════════════════════════════════════════════════════════════════
── Error ('test_heatmap.R:2:3'): Test heatmap ──────────────────────────────────
Error in `rep(yes, length.out = len)`: attempt to replicate an object of type 'object'
Backtrace:
    ▆
 1. ├─base::plot(fairness_heatmap(fobject)) at test_heatmap.R:2:3
 2. └─fairmodels:::plot.fairness_heatmap(fairness_heatmap(fobject))
 3.   └─base::ifelse(...)
── Failure ('test_plot_density.R:14:3'): Test plot_density ─────────────────────
plt$labels$x not equal to "probability".
target is NULL, current is character
── Error ('test_plot_fairmodels.R:8:3'): Test plot_fairmodels ──────────────────
Error in `rep(yes, length.out = len)`: attempt to replicate an object of type 'object'
Backtrace:
    ▆
  1. ├─base::suppressWarnings(...) at test_plot_fairmodels.R:8:3
  2. │ └─base::withCallingHandlers(...)
  3. ├─fairmodels:::expect_s3_class(...)
  4. │ ├─testthat::expect(...) at D:\RCompile\CRANpkg\local\4.5\fairmodels.Rcheck\tests\testthat\helper_objects.R:70:20
  5. │ └─base::class(object) %in% class
  6. ├─fairmodels::plot_fairmodels(fc, type = "fairness_heatmap")
  7. └─fairmodels:::plot_fairmodels.fairness_object(fc, type = "fairness_heatmap")
  8.   └─fairmodels:::plot_fairmodels.default(x, type, ...)
  9.     └─fairmodels:::plot.fairness_heatmap(fairness_heatmap(x, ...))
 10.       └─base::ifelse(...)

[ FAIL 3 | WARN 2 | SKIP 0 | PASS 299 ]
Error: Test failures
Execution halted
* checking for unstated dependencies in vignettes ... OK
* checking package vignettes ... OK
* checking re-building of vignette outputs ... [62s] ERROR
Error(s) in re-building vignettes:
--- re-building 'Advanced_tutorial.Rmd' using rmarkdown
--- finished re-building 'Advanced_tutorial.Rmd'
--- re-building 'Basic_tutorial.Rmd' using rmarkdown

Quitting from Basic_tutorial.Rmd:254-257 [unnamed-chunk-19]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Error in `rep()`:
! attempt to replicate an object of type 'object'
---
Backtrace:
    ▆
 1. ├─base::plot(fheatmap, text_size = 3)
 2. └─fairmodels:::plot.fairness_heatmap(fheatmap, text_size = 3)
 3.   └─base::ifelse(...)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Error: processing vignette 'Basic_tutorial.Rmd' failed with diagnostics:
attempt to replicate an object of type 'object'
--- failed re-building 'Basic_tutorial.Rmd'

SUMMARY: processing the following file failed:
  'Basic_tutorial.Rmd'

Error: Vignette re-building failed.
Execution halted
* checking PDF version of manual ... [23s] OK
* checking HTML version of manual ... [9s] OK
* DONE
Status: 3 ERRORs, 1 NOTE
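In summary, the 3 ERRORs reduce to two upstream ggplot2 changes rather than three independent bugs. The ifelse()-on-a-ggplot issue discussed under the examples check accounts for the test_heatmap.R and test_plot_fairmodels.R errors and the Basic_tutorial.Rmd vignette failure. The remaining failure, test_plot_density.R:14, is the separate pattern of asserting on plt$labels directly; recent ggplot2 resolves labels at build time, so the unbuilt plot reports NULL there. A hedged, version-tolerant rewrite of that assertion (assuming plt is the ggplot under test and testthat is attached; get_labs() is the accessor added in newer ggplot2 releases, hence the availability check):

    # Old assertion, which now sees NULL on the unbuilt plot:
    # expect_equal(plt$labels$x, "probability")

    x_lab <- if ("get_labs" %in% getNamespaceExports("ggplot2")) {
      ggplot2::get_labs(plt)$x                  # completed labels, newer ggplot2
    } else {
      ggplot2::ggplot_build(plt)$plot$labels$x  # fallback for older releases
    }
    expect_equal(x_lab, "probability")

The Rd "Lost braces" NOTE is independent of the ERRORs and fixable as sketched under the Rd files check.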