An overview of the package aides



1 Brief overview

aides, an R package, is a collection of functions designed to provide supplementary information and handle the intricacies of data synthesis and evidence evaluation. In evidence-based decision-making, these processes are pivotal, shaping the foundation upon which informed conclusions are drawn. Essentially a toolkit for pooled analysis of aggregated data, aides is crafted to enhance the inclusivity and depth of this decision-making approach.

Developed with core values of flexibility, ease of use, and comprehensibility, aides simplifies the often complex analysis process. This accessibility extends to both seasoned professionals and the broader public, fostering wider engagement with synthesized evidence. Such engagement matters because it empowers individuals to navigate the intricacies of data and promotes a better understanding of the evidence at hand.

Moreover, aides is committed to keeping pace with methodological advances in data synthesis and evidence evaluation, giving users access to advanced methods that further enhance the robustness and reliability of their decision-making. In the long term, the overarching goal of the package is to contribute to knowledge translation, enabling individuals to make decisions based on a comprehensive understanding of the evidence. In essence, aides guides users through the complex terrain of data synthesis and evidence evaluation, ultimately facilitating informed and impactful decision-making.

Users can access the functions in aides by loading the package with the following syntax:

library(aides)
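
If aides has not been installed yet, it can first be obtained from CRAN with the standard installation call:

install.packages("aides")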


2 Features

Briefly, aides currently consists of functions for three tasks: disparity assessment, discordance assessment, and sequential analysis. These are described in the next section.
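
Once the package is loaded, the names of all exported functions can also be listed directly in the R session (a generic base-R check rather than a feature of aides):

ls("package:aides")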


3 Functions and examples

Users can import their data and run the relevant tests or produce graphics using the functions in package aides. The package currently consists of eight functions, listed as follows:

Disparity:

PlotDistrSS() (lifecycle: stable), TestDisparity() (lifecycle: stable), and PlotDisparity() (lifecycle: stable).


Discordance:

TestDiscordance() (lifecycle: experimental).


Sequential analysis:

DoSA() (lifecycle: stable), DoOSA() (lifecycle: stable), PlotOSA() (lifecycle: stable), and PlotPower() (lifecycle: experimental).


3.1 Examples of functions:

3.1.1 Disparity:

The following steps and syntax demonstrate how users can carry out a disparity test. Figure 3.2 visualizes the test based on the excessive cases of outlier(s).

Step 1. Import data (Olkin 1995)

library(meta)
data("Olkin1995")
dataOlkin1995 <- Olkin1995


Step 2. Process data

dataOlkin1995$n <- dataOlkin1995$n.exp + dataOlkin1995$n.cont


Step 3. Check the distribution of study sizes. The function shapiro.test() offers a simple way to test whether the sample sizes are normally distributed, and visualization with accompanying statistics can be carried out with the function PlotDistrSS().

shapiro.test(dataOlkin1995$n)
PlotDistrSS(dataOlkin1995$n)

#> 
#>  Shapiro-Wilk normality test
#> 
#> data:  dataOlkin1995$n
#> W = 0.2596, p-value < 2.2e-16

Figure 3.1: An example for visualization of distribution of study sizes
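
As a minimal decision sketch (assuming the conventional 0.05 significance level), the Shapiro-Wilk result can also be checked programmatically to decide whether a robust variability method should be used later in TestDisparity():

rsltShapiro <- shapiro.test(dataOlkin1995$n)
if (rsltShapiro$p.value < 0.05) {
  # evidence against normality; a robust method (vrblty = "MAD") is preferable
  message("Study sizes deviate from normality; consider vrblty = \"MAD\".")
}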

If users would like to check normality using the Kolmogorov-Smirnov test, they can set the parameter method to "ks" in the function PlotDistrSS().

PlotDistrSS(n = n,
            data = dataOlkin1995, 
            study = author, 
            time = year,
            method = "ks")


Step 4. Test assumption of disparity in study size

TestDisparity(n = n, 
              data = dataOlkin1995, 
              study = author, 
              time = year)
#> Summary of disparities in sample size test:
#> Number of outliers = 13 (Excessive cases = 36509; P-value < 0.001)
#> Variability = 3.658 (P-value < 0.001)
#> 
#> Outlier detection method: MAD
#> Variability detection method: CV


Step 5. Illustrate disparity plot

TestDisparity(n = n,
              data = dataOlkin1995, 
              study = author, 
              time = year, 
              plot = TRUE)

Figure 3.2: An example for disparity-outlier plot


Because the study sizes are not normally distributed, as shown in Step 3 (Figure 3.1; see also the result of the Shapiro-Wilk test), a robust method is recommended for testing variability, which can be carried out with the following syntax:

rsltDisparity <- TestDisparity(n = n, 
                               data = dataOlkin1995, 
                               study = author, 
                               time = year,
                               vrblty = "MAD")
#> Summary of disparities in sample size test:
#> Number of outliers = 13 (Excessive cases = 36509; P-value < 0.001)
#> Variability = 0.951 (P-value < 0.001)
#> 
#> Outlier detection method: MAD
#> Variability detection method: MAD


Instead of Step 5 above, the following syntax is recommended for illustrating the disparity plot of variability based on the robust coefficient of variation:

PlotDisparity(rsltDisparity, 
              which = "CV", 
              szFntAxsX = 1)

Figure 3.3: An example for disparity-variability (robust) plot


3.1.2 Discordance:

The following steps and syntax demonstrate how users can carry out a discordance test. Figure 3.4 visualizes the test.

Step 1. Import data (example of the study by Fleiss 1993)

library(meta)
data("Fleiss1993bin")
dataFleiss1993bin <- Fleiss1993bin


Step 2. Process data

dataFleiss1993bin$n  <- dataFleiss1993bin$n.asp + dataFleiss1993bin$n.plac
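# standard error per study; note that this equals the usual large-sample standard error of the log risk ratio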
dataFleiss1993bin$se <- sqrt((1 / dataFleiss1993bin$d.asp) - (1 / dataFleiss1993bin$n.asp) + (1 / dataFleiss1993bin$d.plac) - (1 / dataFleiss1993bin$n.plac))


Step 3. Test assumption of discordance in study size

TestDiscordance(n = n, 
                se = se, 
                study = study,
                data = dataFleiss1993bin)
#> Summary of discordance in ranks test:
#>  Statistics (Bernoulli exact): 2
#>  P-value: 0.423
#>  Note: No significant finding in the test of discordance in study size ranks.


Step 4. Illustrate discordance plot

TestDiscordance(n = n, 
                se = se, 
                study = study, 
                data = dataFleiss1993bin, 
                plot = TRUE)

Figure 3.4: An example for discordance plot


3.1.3 Sequential analysis:

The following steps and syntax demonstrate how users can carry out a sequential analysis. Figure 3.5 shows the sequential analysis plot.

Step 1. Import data (example of the study by Fleiss 1993)

library(meta)
data("Fleiss1993bin")
dataFleiss1993bin <- Fleiss1993bin


Step 2. Do sequential analysis
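The roles of the main arguments can be read from the argument names and the printed output (an interpretation for orientation, not a substitute for the package documentation): source and time identify each study and its year, r1/n1 and r2/n2 are the event counts and sample sizes of the two groups (here aspirin and placebo), measure = "RR" pools risk ratios, PES specifies the presumed effect size, and group supplies the group labels.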

DoSA(Fleiss1993bin, 
     source = study, 
     time = year,
     r1 = d.asp, 
     n1 = n.asp, 
     r2 = d.plac, 
     n2 = n.plac, 
     measure = "RR",
     PES = 0.1,
     group = c("Aspirin", "Placebo"))
#> Summary of sequential analysis (main information)
#>  Acquired sample size: 28003
#>  Required sample size (heterogeneity adjusted): 1301
#>  Cumulative z score: -2.035
#>  Alpha-spending boundary: 0.422 and -0.422
#>  Adjusted confidence interval is not necessary to be performed.
#> 
#> Summary of sequential analysis (additional information)
#>  1. Assumed information
#>  1.1. Defined type I error: 0.05
#>  1.2. Defined type II error: 0.2
#>  1.3. Defined power: 0.8
#>  1.4. Presumed effect: 0.1
#>       (risks in group 1 and 2 were 10.44608758%, and 12.34144781% respectively; RRR = 0.181)
#>  1.5. Presumed variance: 0.101
#> 
#>  2. Meta-analysis
#>  2.1. Setting of the meta-analysis
#>  Data were pooled using inverse variance approach in random-effects model with DL method.
#>  2.2. Result of the meta-analysis 
#>  Log RR: -0.113 (95% CI: -0.222 to -0.004)
#>  
#>  3. Adjustment factor 
#>  The required information size is calculated with adjustment factor based on diversity (D-squared). Relevant parameters are listed as follows.
#>  3.1. Heterogeneity (I-squared): 39.6%
#>  3.2. Diversity (D-squared): 76%
#>  3.3. Adjustment factor: 4.103


Step 3. Visualize sequential analysis

DoSA(Fleiss1993bin, 
     source = study, 
     time = year,
     r1 = d.asp, 
     n1 = n.asp, 
     r2 = d.plac, 
     n2 = n.plac, 
     measure = "RR",
     PES = 0.1,
     group = c("Aspirin", "Placebo"),
     plot = TRUE)

Figure 3.5: An example for sequential analysis


Observed sequential analysis is recommended for pooled analyses without pre-specified parameters for sequential analysis. In this situation, Step 2 should instead use the following syntax:

DoOSA(Fleiss1993bin, 
      source = study, 
      time = year,
      r1 = d.asp, 
      n1 = n.asp, 
      r2 = d.plac, 
      n2 = n.plac, 
      measure = "RR",
      group = c("Aspirin", "Placebo"))
#> Summary of observed sequential analysis (main information)
#>  Acquired sample size: 28003
#>  Optimal sample size (heterogeneity adjusted): 36197
#>  Cumulative z score: -2.035
#>  Alpha-spending boundary: 2.228 and -2.228
#>  Adjusted confidence interval is suggested to be performed.
#> 
#> Adjusted confidence interval based on type I error 0.0129289672426074: 
#> -0.252 to 0.025
#> 
#> Summary of observed sequential analysis (additional information)
#>  1. Observed information
#>  1.1. Defined type I error: 0.05
#>  1.2. Defined type II error: 0.2
#>  1.3. Defined power: 0.8
#>  1.4. Observed effect size 0.019
#>       (risks in group 1 and 2 were 10.44608758%, and 12.34144781% respectively; RRR = 0.181)
#>  1.5. Observed variance: 0.101
#> 
#>  2. Meta-analysis
#>  2.1. Setting of the meta-analysis
#>  Data were pooled using inverse variance approach in random-effects model with DL method.
#>  2.2. Result of the meta-analysis 
#>  Log RR: -0.113 (95% CI: -0.222 to -0.004)
#>  
#>  3. Adjustment factor 
#>  The optimal information size is calculated with adjustment factor based on diversity (D-squared). Relevant parameters are listed as follows.
#>  3.1. Heterogeneity (I-squared): 39.6%
#>  3.2. Diversity (D-squared): 76%
#>  3.3. Adjustment factor: 4.103


Observed sequential analysis is illustrated using the same function (DoOSA()) with the parameter plot set to TRUE, and a plot of sequential-adjusted power can serve as an alternative graphic for observed sequential analysis. These analyses and graphics can be carried out with the following two steps.

Step 1. Conduct observed sequential analysis (example of the study by Fleiss 1993).

output <- DoOSA(Fleiss1993bin, 
                source = study,
                time = year,
                r1 = d.asp, 
                n1 = n.asp, 
                r2 = d.plac, 
                n2 = n.plac, 
                measure = "RR",
                group = c("Aspirin", "Placebo"),
                plot = TRUE)

Step 2. Visualize sequential-adjusted power

PlotPower(output)

Figure 3.6: An example for illustrating sequential-adjusted power


Figure 3.7: An example for illustrating sequential-adjusted power
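
Among the eight functions listed earlier, PlotOSA() is not demonstrated above. Assuming it accepts the object returned by DoOSA() in the same way PlotPower() does (an assumption based on its name and the pattern of the other plotting functions, not shown in this overview), a minimal sketch would be:

# hypothetical usage: re-draw the stored observed sequential analysis
PlotOSA(output)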