Brief introduction

The R package abess implements a polynomial-time algorithm for the best-subset selection problem: $\min_{\beta \in \mathbb{R}^p} \frac{1}{2n} \| y - X\beta\|_2^2, \text{ subject to } \|\beta\|_0 \leq s,$ where $$\| \cdot \|_2$$ is the $$\ell_2$$ norm, $$\|\beta\|_0=\sum_{i=1}^pI( \beta_i\neq 0)$$ is the $$\ell_0$$ norm of $$\beta$$, and the sparsity level $$s$$ is usually an unknown non-negative integer. Next, we present an example showing how to use the abess package to solve a simple problem.
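To make the objective concrete, here is a small base-R sketch (not part of abess) that solves the same problem by brute force: it enumerates every size-$$s$$ support with combn() and keeps the one with the smallest residual sum of squares. This is only feasible for tiny $$p$$, which is exactly why a polynomial-time algorithm matters.

```r
set.seed(42)
n <- 100; p <- 6; s <- 2
beta <- c(3, 0, 2, 0, 0, 0)              # true support: {1, 3}
x <- matrix(rnorm(n * p), n, p)
y <- x %*% beta + rnorm(n)

# enumerate all size-s supports and keep the best least-squares fit
supports <- combn(p, s, simplify = FALSE)
rss <- sapply(supports, function(set) {
  fit <- lm.fit(cbind(1, x[, set, drop = FALSE]), y)
  sum(fit$residuals^2)
})
best_support <- supports[[which.min(rss)]]
best_support  # with this strong signal, the true support {1, 3} is recovered
```

Note the combinatorial cost: choose(p, s) candidate models, which explodes as $$p$$ grows.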

Quick example

Fixed support size best subset selection

We generate a design matrix $$X$$ containing 300 observations, each with 1000 predictors. The response variable $$y$$ is linearly related to the first, second, and fifth predictors in $$X$$: $y = 3X_1 + 1.5X_2 + 2X_5 + \epsilon,$ where $$\epsilon$$ is a standard normal random variable.

library(abess)
synthetic_data <- generate.data(n = 300, p = 1000,
                                beta = c(3, 1.5, 0, 0, 2, rep(0, 995)))
dim(synthetic_data[["x"]])
## [1]  300 1000
head(synthetic_data[["y"]])
##           [,1]
## [1,] -4.063922
## [2,]  3.855246
## [3,] -3.041391
## [4,] -1.081257
## [5,]  4.986772
## [6,]  4.470901
dat <- cbind.data.frame("y" = synthetic_data[["y"]],
                        synthetic_data[["x"]])
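For readers without the package at hand, a rough base-R equivalent of this simulation is sketched below. The actual generate.data() implementation may differ in details such as scaling and correlation structure; this is only an illustration of the stated model.

```r
set.seed(1)
n <- 300; p <- 1000
beta <- c(3, 1.5, 0, 0, 2, rep(0, p - 5))  # same coefficients as above
x <- matrix(rnorm(n * p), nrow = n, ncol = p)
y <- x %*% beta + rnorm(n)                  # standard normal noise
dim(x)
## [1]  300 1000
```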

Then, we use the main function abess in the package to fit this dataset. By setting the argument support.size = s, the abess function conducts Algorithm 1 for best-subset selection with sparsity level s. In our example, we set support.size = 3 and run Algorithm 1 with the following command:

abess_fit <- abess(y ~ ., data = dat, support.size = 3)

The output of abess comprises the selected best model:

head(coef(abess_fit, sparse = FALSE))
##                       3
## (intercept) -0.01802179
## x1           2.96418205
## x2           1.45090693
## x3           0.00000000
## x4           0.00000000
## x5           1.90592036

The best model's support set is identical to the ground truth, and the coefficient estimates are the same as the oracle estimates given by the lm function:

lm(y ~ ., data = dat[, c(1, c(1, 2, 5) + 1)])
##
## Call:
## lm(formula = y ~ ., data = dat[, c(1, c(1, 2, 5) + 1)])
##
## Coefficients:
## (Intercept)           x1           x2           x5
##    -0.01802      2.96418      1.45091      1.90592

Adaptive best subset selection

In real-world data the true sparsity level is usually unknown, so we need to determine the most appropriate one. Algorithm 3 is designed for this scenario, and abess is capable of performing it:

abess_fit <- abess(y ~ ., data = dat)

The output of abess also comprises the selected best model:

best_size <- abess_fit[["best.size"]]
print(best_size)
## [1] 3
head(coef(abess_fit, support.size = best_size, sparse = FALSE))
##                       3
## (intercept) -0.01802179
## x1           2.96418205
## x2           1.45090693
## x3           0.00000000
## x4           0.00000000
## x5           1.90592036

The output model accurately detects the true model size, which implies that Algorithm 3 efficiently finds both the optimal sparsity level and the true effective predictors.

Real data example

Hitters Dataset

In this tutorial, we demonstrate how to use the abess package to carry out best subset selection on the Hitters dataset. We hope to use several predictors related to baseball players' performance in the previous year to predict their salary. First, let's have a look at this dataset. Besides Salary, there are 19 variables and 322 observations.

Hitters <- read.csv("Hitters.csv", header = TRUE)
head(Hitters)
##   AtBat Hits HmRun Runs RBI Walks Years CAtBat CHits CHmRun CRuns CRBI CWalks
## 1   293   66     1   30  29    14     1    293    66      1    30   29     14
## 2   315   81     7   24  38    39    14   3449   835     69   321  414    375
## 3   479  130    18   66  72    76     3   1624   457     63   224  266    263
## 4   496  141    20   65  78    37    11   5628  1575    225   828  838    354
## 5   321   87    10   39  42    30     2    396   101     12    48   46     33
## 6   594  169     4   74  51    35    11   4408  1133     19   501  336    194
##   League Division PutOuts Assists Errors Salary NewLeague
## 1      A        E     446      33     20     NA         A
## 2      N        W     632      43     10  475.0         N
## 3      A        W     880      82     14  480.0         A
## 4      N        E     200      11      3  500.0         N
## 5      N        E     805      40      4   91.5         N
## 6      A        W     282     421     25  750.0         A
dim(Hitters)
## [1] 322  20
sum(is.na(Hitters))
## [1] 59

Note that this dataset contains some missing data, so we use the na.omit() function to delete rows with incomplete information. After that, 263 observations remain.

Hitters <- na.omit(Hitters)
dim(Hitters)
## [1] 263  20
sum(is.na(Hitters))
## [1] 0

Then we change the factors into dummy variables with the model.matrix() function. Note that the abess function will automatically include the intercept.

Hitters <- model.matrix(~., Hitters)[, -1]
Hitters <- as.data.frame(Hitters)
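To see what this dummy-variable expansion does on a small scale, here is a toy example with a two-level factor (purely illustrative; the Hitters call above works the same way):

```r
# a tiny data frame with one factor and one numeric column
toy <- data.frame(League = factor(c("A", "N", "A")), Hits = c(66, 81, 130))
# expand the factor to a 0/1 dummy and drop the intercept column, as above
model.matrix(~ ., toy)[, -1]
##   LeagueN Hits
## 1       0   66
## 2       1   81
## 3       0  130
```

With treatment contrasts (R's default), the first factor level ("A") becomes the baseline and a single dummy column LeagueN is created.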

Running ABESS

The abess() function in the abess package allows you to perform best subset selection in a highly efficient way. You can call abess() with a formula just as you would with lm(), or you can specify the design matrix x and the response y directly. To record the run time, you can wrap the call in system.time().

library(abess)
abess_fit <- abess(Salary ~ ., Hitters)
abess_fit <- abess(Hitters[, -which(colnames(Hitters) == "Salary")],
                   Hitters$Salary)
class(abess_fit)
## [1] "abess"

Interpret the result

After obtaining the estimator, we can explore it further. The output of the abess() function contains the best model for each candidate support size in support.size. You can use generic functions to quickly extract information about these estimators.

# draw the estimated coefficients on all candidate support sizes
coef(abess_fit)
## 20 x 20 sparse Matrix of class "dgCMatrix"
##
## (intercept)    535.9259  274.5803864  -47.9559022  -71.4592204   13.9231044
## AtBat                 .            .            .            .            .
## Hits                  .            .    3.3008446    2.8038162    2.6757978
## HmRun                 .            .            .            .            .
## Runs                  .            .            .            .            .
## RBI                   .            .            .            .            .
## Walks                 .            .            .            .            .
## Years                 .            .            .            .            .
## CAtBat                .            .            .            .            .
## CHits                 .            .            .            .            .
## CHmRun                .            .            .            .            .
## CRuns                 .            .            .            .            .
## CRBI                  .    0.7909536    0.6898994    0.6825275    0.6817790
## CWalks                .            .            .            .            .
## LeagueN               .            .            .            .            .
## DivisionW             .            .            .            . -139.9538855
## PutOuts               .            .            .    0.2735814    0.2735002
## Assists               .            .            .            .            .
## Errors                .            .            .            .            .
## NewLeagueN            .            .            .            .            .
##
## (intercept)  25.2819915  140.7461378   85.2423652  117.1520434  197.6616396
## AtBat        -2.0349977   -1.7074618   -1.8483141   -2.0339209   -2.0803280
## Hits          8.1842739    7.5589177    7.5099111    6.8549136    6.8263359
## HmRun                 .            .            .            .            .
## Runs                  .            .            .            .            .
## RBI                   .            .            .            .            .
## Walks         3.9059431            .    3.5679937    6.4406642    5.9761215
## Years                 .  -23.1434527            .            .  -15.9414459
## CAtBat                .            .            .            .            .
## CHits                 .            .            .            .            .
## CHmRun                .            .            .            .            .
## CRuns                 .    0.4992573    0.1950761    0.7045391    0.8143029
## CRBI          0.6417565    0.5199978    0.4568198    0.5273238    0.6000624
## CWalks                .            .            .   -0.8066062   -0.7503761
## LeagueN               .            .            .            .            .
## DivisionW             .            . -120.1298286 -123.7798366 -123.4936780
## PutOuts       0.2645828    0.2971180    0.2753986    0.2753892    0.2702288
## Assists               .            .            .            .            .
## Errors                .            .            .            .            .
## NewLeagueN            .            .            .            .            .
##
## (intercept) 206.5672285  159.3284463  135.5194919  142.9090129  144.6793182
## AtBat        -2.2556858   -2.0803658   -2.0563475   -2.0120568   -2.0883279
## Hits          7.0378766    7.6025575    7.5064072    7.3751935    7.6436454
## HmRun                 .            .            .            .    2.3406524
## Runs                  .   -2.1086040   -1.7965622   -1.7130320   -2.3580478
## RBI                   .            .            .            .            .
## Walks         6.2793246    6.2748850    6.0619776    5.9906173    6.1794713
## Years       -16.7414858            .            .            .            .
## CAtBat                .   -0.1468464   -0.1524448   -0.1527096   -0.1488074
## CHits                 .            .            .            .            .
## CHmRun                .            .            .            .            .
## CRuns         0.8132079    1.5349917    1.5589219    1.5535444    1.5931621
## CRBI          0.6508515    0.7665071    0.7775813    0.7850103    0.7170767
## CWalks       -0.7882990   -0.8442911   -0.8350722   -0.8404419   -0.8565844
## LeagueN               .            .   39.0865444   41.9165343   44.2352269
## DivisionW  -123.2261893 -114.0032832 -112.6442519 -112.3809790 -112.8079905
## PutOuts       0.2824819    0.2904255    0.2842332    0.2896964    0.2876182
## Assists       0.1872292    0.2518043    0.2434442    0.3312276    0.3677311
## Errors                .            .            .   -2.8685826   -3.1271251
## NewLeagueN            .            .            .            .            .
##
## (intercept) 163.3275824  148.5836248  148.4333315  148.2187229  163.1035878
## AtBat        -2.1085651   -1.9709735   -1.9509056   -1.9508802   -1.9798729
## Hits          7.6501026    7.4357720    7.3914057    7.4394969    7.5007675
## HmRun         2.3654025    4.1120447    4.0827974    4.3449102    4.3308829
## Runs         -2.3535049   -2.2236760   -2.2396702   -2.3312443   -2.3762100
## RBI                   .   -0.9880849   -0.9940157   -1.0669853   -1.0449620
## Walks         6.1730276    6.1712241    6.1970575    6.2195518    6.2312863
## Years        -4.2321550            .            .            .   -3.4890543
## CAtBat       -0.1341737   -0.1892782   -0.1913281   -0.1887222   -0.1713405
## CHits                 .    0.1915402    0.2067327    0.1635828    0.1339910
## CHmRun                .            .            .   -0.1516681   -0.1728611
## CRuns         1.5426322    1.4377384    1.4249652    1.4715814    1.4543049
## CRBI          0.7144063    0.7444649    0.7414741    0.8020666    0.8077088
## CWalks       -0.8446970   -0.8035045   -0.8037607   -0.8124337   -0.8115709
## LeagueN      42.2835360   42.9252666   64.1928201   63.7503250   62.5994230
## DivisionW  -113.9853363 -115.8457351 -116.0617552 -116.0404251 -116.8492456
## PutOuts       0.2859836    0.2831001    0.2830325    0.2827381    0.2818925
## Assists       0.3643305    0.3785769    0.3773191    0.3755259    0.3710692
## Errors       -3.2379385   -3.2309336   -3.3199889   -3.2940025   -3.3607605
## NewLeagueN            .            .  -24.8892236  -24.3988530  -24.7623251

# get the deviance of the estimated model on all candidate support sizes
deviance(abess_fit)
##  [1] 202734.27 137565.32 116526.84 111214.06 106353.05 103309.59 104199.43
##  [8]  99179.50  95662.49  94654.62  94081.77  92868.83  92521.80  92354.17
## [15]  92200.23  92154.67  92088.89  92051.13  92047.79  92017.87

# print the fitted model
print(abess_fit)
## Call:
## abess.default(x = Hitters[, -which(colnames(Hitters) == "Salary")],
##     y = Hitters$Salary)
##
##    support.size       dev      GIC
## 1             0 202734.27 3213.768
## 2             1 137565.32 3116.836
## 3             2 116526.84 3078.241
## 4             3 111214.06 3071.026
## 5             4 106353.05 3064.330
## 6             5 103309.59 3061.752
## 7             6 104199.43 3069.066
## 8             7  99179.50 3061.138
## 9             8  95662.49 3056.700
## 10            9  94654.62 3058.972
## 11           10  94081.77 3062.434
## 12           11  92868.83 3064.079
## 13           12  92521.80 3068.152
## 14           13  92354.17 3072.733
## 15           14  92200.23 3077.352
## 16           15  92154.67 3082.280
## 17           16  92088.89 3087.150
## 18           17  92051.13 3092.101
## 19           18  92047.79 3097.149
## 20           19  92017.87 3102.121
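As a consistency check on the table above, the GIC column appears to follow the form $$n \log(\text{dev}) + s \log(p) \log(\log n)$$ with $$n = 263$$ and $$p = 19$$. This formula is our reading of the printed values, not an official statement of the package's internals; a short base-R sketch reproduces the column from the deviance values:

```r
n <- 263; p <- 19
dev <- c(202734.27, 137565.32, 116526.84, 111214.06, 106353.05,
         103309.59, 104199.43,  99179.50,  95662.49,  94654.62,
          94081.77,  92868.83,  92521.80,  92354.17,  92200.23,
          92154.67,  92088.89,  92051.13,  92047.79,  92017.87)
s <- 0:19
gic <- n * log(dev) + s * log(p) * log(log(n))
round(gic[1:6], 3)   # matches the first rows of the table (up to rounding)
which.min(gic) - 1   # support size minimizing GIC: 8
```

The minimum lands at support size 8, consistent with the table.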

Prediction is allowed for all the estimated models. Just call the predict.abess() function with support.size set to the model size you are interested in. If support.size is not provided, prediction will be made on the model with the best tuning value.

hitters_pred <- predict(abess_fit,
                        newx = Hitters[, -which(colnames(Hitters) == "Salary")],
                        support.size = c(3, 4))
head(hitters_pred)
##           3         4
## 2 611.11976  545.8175
## 3 715.34087  643.8563
## 4 950.55323 1017.2414
## 5 424.10211  498.2470
## 6 708.86493  632.3839
## 7  59.21692  139.8497

The plot.abess() function helps to visualize how the model changes with the support size. There are four types of graph you can generate, including coef for the coefficient values, l2norm for the L2-norm of the coefficients, dev for the deviance, and tune for the tuning value. The default is coef.

plot(abess_fit, label = TRUE)

The graph shows that, beginning from the densest model, the 15th variable (Division, a factor with levels E and W indicating the player's division at the end of 1986) remains in the active set until the support size reaches 3.

We can also generate a graph about the tuning value. Remember that we used the default GIC to tune the support size.

plot(abess_fit, type = "tune")

The tuning value reaches its lowest point at a support size of 8, and we might choose the estimated model with support size 8 as our final model. In fact, the tuning values of the different model sizes are provided in tune.value of the abess object. You can get the best model size through the following call.

extract(abess_fit)[["support.size"]]
## [1] 8

To extract any model from the abess object, we can call the extract() function with a given support.size. If support.size is not provided, the model with the best tuning value will be returned. As an illustration, here we extract the model with support size equal to 6.

best.model <- extract(abess_fit, support.size = 6)
str(best.model)
## List of 7
##  $ beta        :Formal class 'dgCMatrix' [package "Matrix"] with 6 slots
##   .. ..@ i       : int [1:6] 0 1 6 10 11 15
##   .. ..@ p       : int [1:2] 0 6
##   .. ..@ Dim     : int [1:2] 19 1
##   .. ..@ Dimnames:List of 2
##   .. .. ..$ : chr [1:19] "AtBat" "Hits" "HmRun" "Runs" ...
##   .. .. ..$ : chr "6"
##   .. ..@ x       : num [1:6] -1.707 7.559 -23.143 0.499 0.52 ...
##   .. ..@ factors : list()
##  $ intercept   : num 141
##  $ support.size: num 6
##  $ support.vars: chr [1:6] "AtBat" "Hits" "Years" "CRuns" ...
##  $ support.beta: num [1:6] -1.707 7.559 -23.143 0.499 0.52 ...
##  $ dev         : num 104199
##  $ tune.value  : num 3069

The returned object is a list containing the basic information about the estimated model.

Advanced features

Feature screening for ultra-high-dimensional datasets

The crime dataset consists of 18 variables about crime from the 1995 FBI UCR (e.g., per capita arson crimes and per capita violent crimes), community information in the U.S. (e.g., the percent of the population considered urban), socio-economic data from the 1990 census (e.g., the median family income), and law enforcement data from the 1990 law enforcement management and administrative statistics survey (e.g., the per capita number of police officers). It is natural to ask whether the crime level in a community can be modeled by the basic community information and the socio-economic and law enforcement status of the community. Here, without loss of generality, per capita violent crimes is chosen as the response variable, and 102 numerical variables as well as their pairwise interactions are considered as predictors. The pre-processed dataset for statistical modeling has 500 observations and 5253 predictors, and the code for pre-processing is openly shared. The pre-processed dataset can be freely downloaded by running:

working_directory <- getwd()
if (file.exists("crime.rda")) {
  load("crime.rda")
} else {
  crime_data_url <- "https://github.com/abess-team/abess/raw/master/R-package/data-raw/crime.rda"
  download.file(crime_data_url, "crime.rda")
  load(file.path(working_directory, "crime.rda"))
}

As mentioned before, this dataset comprises 5000+ features, far more than the number of observations:

dim(crime)
## [1]  500 5254

Thus, it is better to first perform feature screening, which is also supported by the abess function. Suppose we are interested in retaining the 1000 variables with the largest marginal utility; then we can run:

abess_fit <- abess(y ~ ., data = crime, screening.num = 1000)
str(abess_fit)
## List of 14
##  $ beta          :Formal class 'dgCMatrix' [package "Matrix"] with 6 slots
##   .. ..@ i       : int [1:528] 442 442 3248 442 3248 4900 442 1005 3248 4899 ...
##   .. ..@ p       : int [1:34] 0 0 1 3 6 10 15 21 28 36 ...
##   .. ..@ Dim     : int [1:2] 5253 33
##   .. ..@ Dimnames:List of 2
##   .. .. ..$ : chr [1:5253] "pop" "perHoush" "pctBlack" "pctWhite" ...
##   .. .. ..$ : chr [1:33] "0" "1" "2" "3" ...
##   .. ..@ x       : num [1:528] -0.251 -0.143 6.335 -0.153 4.802 ...
##   .. ..@ factors : list()
##  $ intercept     : num [1:33] 599 2115 1257 1337 1000 ...
##  $ dev           : num [1:33] 381863 172847 156992 151887 146356 ...
##  $ tune.value    : num [1:33] 6426 6043 6007 6003 5997 ...
##  $ nobs          : int 500
##  $ nvars         : int 5253
##  $ family        : chr "gaussian"
##  $ tune.path     : chr "sequence"
##  $ tune.type     : chr "GIC"
##  $ support.size  : int [1:33] 0 1 2 3 4 5 6 7 8 9 ...
##  $ edf           : num [1:33] 0 1 2 3 4 5 6 7 8 9 ...
##  $ best.size     : int 9
##  $ screening.vars: chr [1:1000] "pctBlack" "pctWhite" "medIncome" "pctWdiv" ...
##  $ call          : language abess.formula(formula = y ~ ., data = crime, screening.num = 1000)
##  - attr(*, "class")= chr "abess"

The returned object of abess includes the features selected by screening. We exhibit six of them:

head(abess_fit[["screening.vars"]])
## [1] "pctBlack"     "pctWhite"     "medIncome"    "pctWdiv"      "pctPubAsst"
## [6] "medFamIncome"

Then, by the generic extract function, we can obtain the best model detected by the ABESS algorithm and inspect the variables in it:

best_model <- extract(abess_fit)
str(best_model)
## List of 7
##  $ beta        :Formal class 'dgCMatrix' [package "Matrix"] with 6 slots
##   .. ..@ i       : int [1:9] 303 368 1005 1178 1745 1759 2378 3701 3950
##   .. ..@ p       : int [1:2] 0 9
##   .. ..@ Dim     : int [1:2] 5253 1
##   .. ..@ Dimnames:List of 2
##   .. .. ..$ : chr [1:5253] "pop" "perHoush" "pctBlack" "pctWhite" ...
##   .. .. ..$ : chr "9"
##   .. ..@ x       : num [1:9] 0.227547 0.517409 1.323939 0.201385 0.000313 ...
##   .. ..@ factors : list()
##  $ intercept   : num 424
##  $ support.size: int 9
##  $ support.vars: chr [1:9] "pctBlack:pctWhite" "pctBlack:pctPopDenseHous" "pct65up:pctFemDivorc" "pctUrban:pctUnemploy" ...
##  $ support.beta: num [1:9] 0.227547 0.517409 1.323939 0.201385 0.000313 ...
##  $ dev         : num 120282
##  $ tune.value  : num 5962
best_vars <- best_model[["support.vars"]]
best_vars
## [1] "pctBlack:pctWhite"
## [2] "pctBlack:pctPopDenseHous"
## [3] "pct65up:pctFemDivorc"
## [4] "pctUrban:pctUnemploy"
## [5] "pctPubAsst:ownHousMed"
## [6] "pctPubAsst:pctBornStateResid"
## [7] "otherPerCap:pctHousWOphone"
## [8] "pctKids:medOwnCostPctWO"
## [9] "pctKidsBornNevrMarr:pctVacantBoarded"

More features

The abess package provides many more features, such as logistic regression and group selection. Please read the other articles on our website for more details.
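As a pointer, here is a hedged sketch of how those two features are typically invoked. The argument names family and group.index follow the abess documentation as we understand it; consult the package reference for the authoritative interface. The data construction below is purely illustrative, and the abess calls are guarded so the sketch runs even without the package installed.

```r
set.seed(2)
n <- 200; p <- 20
x <- matrix(rnorm(n * p), n, p)
# binary response for logistic regression
prob <- 1 / (1 + exp(-(2 * x[, 1] - 1.5 * x[, 2])))
y_binary <- rbinom(n, size = 1, prob = prob)
# every two adjacent predictors form a group (10 groups in total)
group <- rep(1:10, each = 2)

if (requireNamespace("abess", quietly = TRUE)) {
  # best subset selection for logistic regression
  fit_logistic <- abess::abess(x, y_binary, family = "binomial")
  # best group subset selection on a (toy) gaussian response
  fit_group <- abess::abess(x, rnorm(n), group.index = group)
}
```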