The SuperML R package is designed to unify the model training process in R, much like in Python. People generally spend a lot of time searching for packages and figuring out the syntax for training machine learning models in R; this is especially apparent among users who frequently switch between R and Python. This package provides a Python scikit-learn-style interface (fit, predict) to train models faster.
In addition to building machine learning models, there are handy functionalities for feature engineering.
This ambitious package is my ongoing effort to help the R community build ML models more easily and quickly in R.
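Every trainer in superml follows the same two-step pattern. The snippet below is a minimal, illustrative sketch of that flow; train_data, test_data and "target" are placeholder names, and concrete, runnable examples follow later in this tutorial:

lf <- LMTrainer$new(family = "gaussian")   # instantiate a trainer
lf$fit(X = train_data, y = "target")       # fit, with y naming the target column
preds <- lf$predict(df = test_data)        # predict on new data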
You can install the latest CRAN version using (recommended):
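# latest release from CRAN
install.packages("superml")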
You can install the development version directly from GitHub using:
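# a sketch using devtools; the repository path below is assumed
devtools::install_github("saraswatmukul/superml")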
For machine learning, superml builds on existing R packages. Hence, installing the package does not install all of its dependencies. Instead, while training a model, superml will automatically install a required package if it’s not found. If you’d rather install all dependencies at once, you can simply do:
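# pulls in all dependencies up front
install.packages("superml", dependencies = TRUE)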
superml uses existing R packages under the hood to build machine learning models. In this tutorial, we’ll use the data.table package for all data manipulation tasks.
We’ll quickly prepare the data set so it’s ready to be served for model training.
load("../data/reg_train.rda")
# if the above doesn't work, you can try: load("reg_train.rda")
library(data.table)
library(caret)
#> Loading required package: lattice
#> Loading required package: ggplot2
library(superml)
library(Metrics)
#>
#> Attaching package: 'Metrics'
#> The following objects are masked from 'package:caret':
#>
#> precision, recall
head(reg_train)
#> Id MSSubClass MSZoning LotFrontage LotArea Street Alley LotShape LandContour
#> 1: 1 60 RL 65 8450 Pave <NA> Reg Lvl
#> 2: 2 20 RL 80 9600 Pave <NA> Reg Lvl
#> 3: 3 60 RL 68 11250 Pave <NA> IR1 Lvl
#> 4: 4 70 RL 60 9550 Pave <NA> IR1 Lvl
#> 5: 5 60 RL 84 14260 Pave <NA> IR1 Lvl
#> 6: 6 50 RL 85 14115 Pave <NA> IR1 Lvl
#> Utilities LotConfig LandSlope Neighborhood Condition1 Condition2 BldgType
#> 1: AllPub Inside Gtl CollgCr Norm Norm 1Fam
#> 2: AllPub FR2 Gtl Veenker Feedr Norm 1Fam
#> 3: AllPub Inside Gtl CollgCr Norm Norm 1Fam
#> 4: AllPub Corner Gtl Crawfor Norm Norm 1Fam
#> 5: AllPub FR2 Gtl NoRidge Norm Norm 1Fam
#> 6: AllPub Inside Gtl Mitchel Norm Norm 1Fam
#> HouseStyle OverallQual OverallCond YearBuilt YearRemodAdd RoofStyle RoofMatl
#> 1: 2Story 7 5 2003 2003 Gable CompShg
#> 2: 1Story 6 8 1976 1976 Gable CompShg
#> 3: 2Story 7 5 2001 2002 Gable CompShg
#> 4: 2Story 7 5 1915 1970 Gable CompShg
#> 5: 2Story 8 5 2000 2000 Gable CompShg
#> 6: 1.5Fin 5 5 1993 1995 Gable CompShg
#> Exterior1st Exterior2nd MasVnrType MasVnrArea ExterQual ExterCond Foundation
#> 1: VinylSd VinylSd BrkFace 196 Gd TA PConc
#> 2: MetalSd MetalSd None 0 TA TA CBlock
#> 3: VinylSd VinylSd BrkFace 162 Gd TA PConc
#> 4: Wd Sdng Wd Shng None 0 TA TA BrkTil
#> 5: VinylSd VinylSd BrkFace 350 Gd TA PConc
#> 6: VinylSd VinylSd None 0 TA TA Wood
#> BsmtQual BsmtCond BsmtExposure BsmtFinType1 BsmtFinSF1 BsmtFinType2
#> 1: Gd TA No GLQ 706 Unf
#> 2: Gd TA Gd ALQ 978 Unf
#> 3: Gd TA Mn GLQ 486 Unf
#> 4: TA Gd No ALQ 216 Unf
#> 5: Gd TA Av GLQ 655 Unf
#> 6: Gd TA No GLQ 732 Unf
#> BsmtFinSF2 BsmtUnfSF TotalBsmtSF Heating HeatingQC CentralAir Electrical
#> 1: 0 150 856 GasA Ex Y SBrkr
#> 2: 0 284 1262 GasA Ex Y SBrkr
#> 3: 0 434 920 GasA Ex Y SBrkr
#> 4: 0 540 756 GasA Gd Y SBrkr
#> 5: 0 490 1145 GasA Ex Y SBrkr
#> 6: 0 64 796 GasA Ex Y SBrkr
#> 1stFlrSF 2ndFlrSF LowQualFinSF GrLivArea BsmtFullBath BsmtHalfBath FullBath
#> 1: 856 854 0 1710 1 0 2
#> 2: 1262 0 0 1262 0 1 2
#> 3: 920 866 0 1786 1 0 2
#> 4: 961 756 0 1717 1 0 1
#> 5: 1145 1053 0 2198 1 0 2
#> 6: 796 566 0 1362 1 0 1
#> HalfBath BedroomAbvGr KitchenAbvGr KitchenQual TotRmsAbvGrd Functional
#> 1: 1 3 1 Gd 8 Typ
#> 2: 0 3 1 TA 6 Typ
#> 3: 1 3 1 Gd 6 Typ
#> 4: 0 3 1 Gd 7 Typ
#> 5: 1 4 1 Gd 9 Typ
#> 6: 1 1 1 TA 5 Typ
#> Fireplaces FireplaceQu GarageType GarageYrBlt GarageFinish GarageCars
#> 1: 0 <NA> Attchd 2003 RFn 2
#> 2: 1 TA Attchd 1976 RFn 2
#> 3: 1 TA Attchd 2001 RFn 2
#> 4: 1 Gd Detchd 1998 Unf 3
#> 5: 1 TA Attchd 2000 RFn 3
#> 6: 0 <NA> Attchd 1993 Unf 2
#> GarageArea GarageQual GarageCond PavedDrive WoodDeckSF OpenPorchSF
#> 1: 548 TA TA Y 0 61
#> 2: 460 TA TA Y 298 0
#> 3: 608 TA TA Y 0 42
#> 4: 642 TA TA Y 0 35
#> 5: 836 TA TA Y 192 84
#> 6: 480 TA TA Y 40 30
#> EnclosedPorch 3SsnPorch ScreenPorch PoolArea PoolQC Fence MiscFeature
#> 1: 0 0 0 0 <NA> <NA> <NA>
#> 2: 0 0 0 0 <NA> <NA> <NA>
#> 3: 0 0 0 0 <NA> <NA> <NA>
#> 4: 272 0 0 0 <NA> <NA> <NA>
#> 5: 0 0 0 0 <NA> <NA> <NA>
#> 6: 0 320 0 0 <NA> MnPrv Shed
#> MiscVal MoSold YrSold SaleType SaleCondition SalePrice
#> 1: 0 2 2008 WD Normal 208500
#> 2: 0 5 2007 WD Normal 181500
#> 3: 0 9 2008 WD Normal 223500
#> 4: 0 2 2006 WD Abnorml 140000
#> 5: 0 12 2008 WD Normal 250000
#> 6: 700 10 2009 WD Normal 143000
split <- createDataPartition(y = reg_train$SalePrice, p = 0.7)
xtrain <- reg_train[split$Resample1]
xtest <- reg_train[!split$Resample1]
# remove features with 90% or more missing values
# we will also remove the Id column because it doesn't contain
# any useful information
na_cols <- colSums(is.na(xtrain)) / nrow(xtrain)
na_cols <- names(na_cols[which(na_cols > 0.9)])
xtrain[, c(na_cols, "Id") := NULL]
xtest[, c(na_cols, "Id") := NULL]
# encode categorical variables
cat_cols <- names(xtrain)[sapply(xtrain, is.character)]
for(c in cat_cols){
lbl <- LabelEncoder$new()
lbl$fit(c(xtrain[[c]], xtest[[c]]))
xtrain[[c]] <- lbl$transform(xtrain[[c]])
xtest[[c]] <- lbl$transform(xtest[[c]])
}
#> The data contains NA values. Imputing NA with 'NA'
#> (message repeated for each encoded column)
# remove noisy columns
noise <- c('GrLivArea','TotalBsmtSF')
xtrain[, c(noise) := NULL]
xtest[, c(noise) := NULL]
# fill missing value with -1
xtrain[is.na(xtrain)] <- -1
xtest[is.na(xtest)] <- -1
KNN Regression
knn <- KNNTrainer$new(k = 2, prob = TRUE, type = 'reg')
knn$fit(train = xtrain, test = xtest, y = 'SalePrice')
probs <- knn$predict(type = 'prob')
labels <- knn$predict(type='raw')
rmse(actual = xtest$SalePrice, predicted=labels)
#> [1] 51106.66
SVM Regression
svm <- SVMTrainer$new()
svm$fit(xtrain, 'SalePrice')
pred <- svm$predict(xtest)
rmse(actual = xtest$SalePrice, predicted = pred)
Simple Regression
lf <- LMTrainer$new(family="gaussian")
lf$fit(X = xtrain, y = "SalePrice")
summary(lf$model)
#>
#> Call:
#> stats::glm(formula = f, family = self$family, data = X, weights = self$weights)
#>
#> Deviance Residuals:
#> Min 1Q Median 3Q Max
#> -312721 -14153 -156 12115 244735
#>
#> Coefficients: (1 not defined because of singularities)
#> Estimate Std. Error t value Pr(>|t|)
#> (Intercept) -8.869e+05 1.547e+06 -0.573 0.566473
#> MSSubClass -1.067e+02 5.380e+01 -1.983 0.047704 *
#> MSZoning -6.443e+02 1.405e+03 -0.459 0.646618
#> LotFrontage -4.261e+01 3.312e+01 -1.287 0.198554
#> LotArea 5.286e-01 1.198e-01 4.412 1.14e-05 ***
#> Street -4.959e+04 1.965e+04 -2.524 0.011768 *
#> LotShape -5.552e+02 1.982e+03 -0.280 0.779415
#> LandContour 1.862e+03 1.809e+03 1.029 0.303795
#> Utilities NA NA NA NA
#> LotConfig 1.063e+03 1.050e+03 1.013 0.311359
#> LandSlope 7.696e+03 4.620e+03 1.666 0.096130 .
#> Neighborhood -7.277e+02 1.766e+02 -4.120 4.12e-05 ***
#> Condition1 -1.831e+03 8.433e+02 -2.171 0.030203 *
#> Condition2 8.107e+02 5.482e+03 0.148 0.882473
#> BldgType -1.563e+03 1.998e+03 -0.782 0.434228
#> HouseStyle 5.478e+02 8.798e+02 0.623 0.533653
#> OverallQual 1.450e+04 1.357e+03 10.689 < 2e-16 ***
#> OverallCond 4.953e+03 1.229e+03 4.030 6.03e-05 ***
#> YearBuilt 4.486e+02 8.102e+01 5.537 3.98e-08 ***
#> YearRemodAdd 8.126e+01 7.812e+01 1.040 0.298532
#> RoofStyle 6.332e+03 2.012e+03 3.147 0.001699 **
#> RoofMatl -2.530e+04 3.552e+03 -7.124 2.08e-12 ***
#> Exterior1st -2.487e+02 5.735e+02 -0.434 0.664661
#> Exterior2nd 3.682e+02 5.392e+02 0.683 0.494803
#> MasVnrType 4.476e+03 1.558e+03 2.872 0.004166 **
#> MasVnrArea 3.590e+01 6.962e+00 5.156 3.07e-07 ***
#> ExterQual 4.947e+03 2.381e+03 2.078 0.037969 *
#> ExterCond 2.716e+02 1.697e+03 0.160 0.872881
#> Foundation -2.724e+03 2.024e+03 -1.345 0.178800
#> BsmtQual 6.312e+03 1.439e+03 4.387 1.28e-05 ***
#> BsmtCond -3.555e+03 1.851e+03 -1.921 0.055091 .
#> BsmtExposure 9.860e+02 8.425e+02 1.170 0.242165
#> BsmtFinType1 -1.270e+03 7.624e+02 -1.666 0.095961 .
#> BsmtFinSF1 9.079e+00 5.589e+00 1.624 0.104604
#> BsmtFinType2 9.813e+02 1.284e+03 0.764 0.445024
#> BsmtFinSF2 -9.117e-01 1.107e+01 -0.082 0.934395
#> BsmtUnfSF 4.139e+00 5.277e+00 0.784 0.433040
#> Heating -6.717e+02 3.602e+03 -0.186 0.852124
#> HeatingQC -1.244e+03 1.400e+03 -0.889 0.374457
#> CentralAir 3.564e+03 5.220e+03 0.683 0.494967
#> Electrical 3.257e+03 1.994e+03 1.634 0.102676
#> `1stFlrSF` 5.827e+01 7.050e+00 8.264 4.70e-16 ***
#> `2ndFlrSF` 5.097e+01 5.866e+00 8.689 < 2e-16 ***
#> LowQualFinSF 5.024e+01 2.183e+01 2.301 0.021616 *
#> BsmtFullBath 1.043e+04 2.875e+03 3.629 0.000300 ***
#> BsmtHalfBath 7.068e+03 4.481e+03 1.577 0.115049
#> FullBath 5.511e+03 3.078e+03 1.791 0.073680 .
#> HalfBath 9.987e+02 2.931e+03 0.341 0.733366
#> BedroomAbvGr -9.788e+03 1.892e+03 -5.173 2.81e-07 ***
#> KitchenAbvGr -2.529e+04 5.869e+03 -4.309 1.81e-05 ***
#> KitchenQual 5.420e+03 1.793e+03 3.023 0.002567 **
#> TotRmsAbvGrd 4.951e+03 1.370e+03 3.613 0.000319 ***
#> Functional -6.199e+03 1.471e+03 -4.213 2.76e-05 ***
#> Fireplaces -3.105e+03 2.511e+03 -1.237 0.216538
#> FireplaceQu 4.681e+03 1.322e+03 3.542 0.000417 ***
#> GarageType 1.116e+03 1.224e+03 0.912 0.362168
#> GarageYrBlt -1.392e+00 5.058e+00 -0.275 0.783201
#> GarageFinish 2.139e+03 1.433e+03 1.493 0.135761
#> GarageCars 1.367e+04 3.397e+03 4.023 6.20e-05 ***
#> GarageArea -2.927e+00 1.150e+01 -0.254 0.799235
#> GarageQual 7.193e+03 3.239e+03 2.221 0.026584 *
#> GarageCond -5.070e+03 3.096e+03 -1.638 0.101799
#> PavedDrive -5.657e+02 3.208e+03 -0.176 0.860090
#> WoodDeckSF 2.764e+01 9.006e+00 3.069 0.002206 **
#> OpenPorchSF 2.246e+01 1.742e+01 1.289 0.197590
#> EnclosedPorch 1.691e+01 1.832e+01 0.923 0.356189
#> `3SsnPorch` 4.264e+01 3.189e+01 1.337 0.181589
#> ScreenPorch 7.476e+01 1.836e+01 4.072 5.04e-05 ***
#> PoolArea -1.420e+02 2.800e+01 -5.072 4.72e-07 ***
#> Fence -2.151e+03 1.208e+03 -1.781 0.075281 .
#> MiscVal 4.080e-01 1.928e+00 0.212 0.832457
#> MoSold -3.455e+01 3.606e+02 -0.096 0.923676
#> YrSold -1.068e+02 7.714e+02 -0.138 0.889971
#> SaleType 1.711e+03 1.123e+03 1.523 0.128123
#> SaleCondition 1.265e+03 1.417e+03 0.893 0.372115
#> ---
#> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
#>
#> (Dispersion parameter for gaussian family taken to be 924586571)
#>
#> Null deviance: 6.5130e+12 on 1023 degrees of freedom
#> Residual deviance: 8.7836e+11 on 950 degrees of freedom
#> AIC: 24120
#>
#> Number of Fisher Scoring iterations: 2
predictions <- lf$predict(df = xtest)
#> Warning in predict.lm(object, newdata, se.fit, scale = 1, type = if (type == :
#> prediction from a rank-deficient fit may be misleading
rmse(actual = xtest$SalePrice, predicted = predictions)
#> [1] 39199.97
Lasso Regression
lf <- LMTrainer$new(family = "gaussian", alpha = 1, lambda = 1000)
lf$fit(X = xtrain, y = "SalePrice")
predictions <- lf$predict(df = xtest)
rmse(actual = xtest$SalePrice, predicted = predictions)
#> [1] 42505.31
Ridge Regression
lf <- LMTrainer$new(family = "gaussian", alpha=0)
lf$fit(X = xtrain, y = "SalePrice")
predictions <- lf$predict(df = xtest)
rmse(actual = xtest$SalePrice, predicted = predictions)
#> [1] 42443.87
Linear Regression with CV
lf <- LMTrainer$new(family = "gaussian")
lf$cv_model(X = xtrain, y = 'SalePrice', nfolds = 5, parallel = FALSE)
predictions <- lf$cv_predict(df = xtest)
coefs <- lf$get_importance()
rmse(actual = xtest$SalePrice, predicted = predictions)
Random Forest
rf <- RFTrainer$new(n_estimators = 500, classification = 0)
rf$fit(X = xtrain, y = "SalePrice")
pred <- rf$predict(df = xtest)
rf$get_importance()
#> tmp.order.tmp..decreasing...TRUE..
#> OverallQual 851141093154
#> GarageCars 543432846752
#> GarageArea 516876871803
#> 1stFlrSF 455215579840
#> YearBuilt 353643428887
#> GarageYrBlt 270776166946
#> TotRmsAbvGrd 245291633779
#> FullBath 237648654877
#> BsmtFinSF1 228928144052
#> ExterQual 217239164448
#> 2ndFlrSF 200955475376
#> YearRemodAdd 185111643401
#> LotArea 183604542285
#> FireplaceQu 135943402607
#> MasVnrArea 133234231707
#> KitchenQual 132688819478
#> Fireplaces 131545119503
#> BsmtQual 106127952582
#> Foundation 91183755051
#> LotFrontage 86638860362
#> OpenPorchSF 78913428084
#> BsmtUnfSF 74068580543
#> WoodDeckSF 67589665696
#> BsmtFinType1 66763485071
#> HeatingQC 55961299176
#> Neighborhood 55202869365
#> BedroomAbvGr 44462438646
#> GarageType 44077677491
#> MoSold 40583559479
#> Exterior2nd 39556400578
#> MSSubClass 36117441873
#> OverallCond 32373395419
#> HalfBath 31499472769
#> GarageFinish 30757218169
#> RoofStyle 27971841689
#> Exterior1st 27891891648
#> HouseStyle 26139092758
#> BsmtFullBath 25741188958
#> SaleType 23237072645
#> LotShape 22866062392
#> SaleCondition 21972141745
#> MasVnrType 19581436553
#> YrSold 19498533543
#> LandContour 18776176249
#> BsmtExposure 18506415951
#> MSZoning 17924121768
#> RoofMatl 17518630037
#> LotConfig 13829433621
#> BsmtHalfBath 13598617391
#> LandSlope 13585472731
#> ScreenPorch 13573416011
#> BldgType 12944754332
#> GarageQual 11350048833
#> EnclosedPorch 10301069753
#> CentralAir 10263123802
#> Condition1 8673929149
#> BsmtCond 7316006842
#> GarageCond 7049152113
#> KitchenAbvGr 6879979650
#> BsmtFinSF2 6101127998
#> BsmtFinType2 5763938629
#> ExterCond 5686339958
#> Functional 5206294563
#> LowQualFinSF 4957845417
#> Fence 4656313900
#> PavedDrive 3454428095
#> Heating 3213052335
#> 3SsnPorch 2706887579
#> Electrical 2176546975
#> MiscVal 1742846900
#> PoolArea 1584610825
#> Condition2 442990801
#> Street 218735977
#> Utilities 0
rmse(actual = xtest$SalePrice, predicted = pred)
#> [1] 31742.83
XGBoost
xgb <- XGBTrainer$new(objective = "reg:linear",
                      n_estimators = 500,
                      eval_metric = "rmse",
                      maximize = FALSE,
                      learning_rate = 0.1,
                      max_depth = 6)
xgb$fit(X = xtrain, y = "SalePrice", valid = xtest)
#> converting the data into xgboost format..
#> starting with training...
#> [1] train-rmse:179146.078125 val-rmse:178301.421875
#> Multiple eval metrics are present. Will use val_rmse for early stopping.
#> Will train until val_rmse hasn't improved in 50 rounds.
#>
#> [51] train-rmse:8893.837891 val-rmse:31831.351562
#> [101] train-rmse:4995.479004 val-rmse:31214.177734
#> [151] train-rmse:3042.429688 val-rmse:30995.785156
#> [201] train-rmse:2068.007568 val-rmse:30937.535156
#> [251] train-rmse:1397.169556 val-rmse:30913.900391
#> [301] train-rmse:969.507324 val-rmse:30913.380859
#> Stopping. Best iteration:
#> [278] train-rmse:1171.304321 val-rmse:30902.488281
pred <- xgb$predict(xtest)
rmse(actual = xtest$SalePrice, predicted = pred)
#> [1] 30902.49
Grid Search
xgb <- XGBTrainer$new(objective = "reg:linear")
gst <- GridSearchCV$new(trainer = xgb,
                        parameters = list(n_estimators = c(10,50),
                                          max_depth = c(5,2)),
                        n_folds = 3,
                        scoring = c('accuracy','auc'))
gst$fit(xtrain, "SalePrice")
#> [1] "entering grid search"
#> [1] "In total, 4 models will be trained"
#> converting the data into xgboost format..
#> starting with training...
#> [1] train-rmse:139727.328125
#> Will train until train_rmse hasn't improved in 50 rounds.
#>
#> [10] train-rmse:15665.415039
#> converting the data into xgboost format..
#> starting with training...
#> [1] train-rmse:143033.781250
#> Will train until train_rmse hasn't improved in 50 rounds.
#>
#> [10] train-rmse:15812.692383
#> converting the data into xgboost format..
#> starting with training...
#> [1] train-rmse:143991.125000
#> Will train until train_rmse hasn't improved in 50 rounds.
#>
#> [10] train-rmse:16764.001953
#> converting the data into xgboost format..
#> starting with training...
#> [1] train-rmse:139727.328125
#> Will train until train_rmse hasn't improved in 50 rounds.
#>
#> [50] train-rmse:3843.248779
#> converting the data into xgboost format..
#> starting with training...
#> [1] train-rmse:143033.781250
#> Will train until train_rmse hasn't improved in 50 rounds.
#>
#> [50] train-rmse:3932.498535
#> converting the data into xgboost format..
#> starting with training...
#> [1] train-rmse:143991.125000
#> Will train until train_rmse hasn't improved in 50 rounds.
#>
#> [50] train-rmse:4439.479980
#> converting the data into xgboost format..
#> starting with training...
#> [1] train-rmse:140423.640625
#> Will train until train_rmse hasn't improved in 50 rounds.
#>
#> [10] train-rmse:29755.812500
#> converting the data into xgboost format..
#> starting with training...
#> [1] train-rmse:143791.296875
#> Will train until train_rmse hasn't improved in 50 rounds.
#>
#> [10] train-rmse:28921.111328
#> converting the data into xgboost format..
#> starting with training...
#> [1] train-rmse:144911.234375
#> Will train until train_rmse hasn't improved in 50 rounds.
#>
#> [10] train-rmse:32092.230469
#> converting the data into xgboost format..
#> starting with training...
#> [1] train-rmse:140423.640625
#> Will train until train_rmse hasn't improved in 50 rounds.
#>
#> [50] train-rmse:17377.691406
#> converting the data into xgboost format..
#> starting with training...
#> [1] train-rmse:143791.296875
#> Will train until train_rmse hasn't improved in 50 rounds.
#>
#> [50] train-rmse:16180.614258
#> converting the data into xgboost format..
#> starting with training...
#> [1] train-rmse:144911.234375
#> Will train until train_rmse hasn't improved in 50 rounds.
#>
#> [50] train-rmse:18942.558594
gst$best_iteration()
#> $n_estimators
#> [1] 10
#>
#> $max_depth
#> [1] 5
#>
#> $accuracy_avg
#> [1] 0
#>
#> $accuracy_sd
#> [1] 0
#>
#> $auc_avg
#> [1] NaN
#>
#> $auc_sd
#> [1] NA
Note that accuracy and auc are classification metrics: on a continuous target such as SalePrice they evaluate to 0 or NaN, and they are used here only to demonstrate the grid search API. The same caveat applies to the random search below.
Random Search
rf <- RFTrainer$new()
rst <- RandomSearchCV$new(trainer = rf,
                          parameters = list(n_estimators = c(5,10),
                                            max_depth = c(5,2)),
                          n_folds = 3,
                          scoring = c('accuracy','auc'),
                          n_iter = 3)
rst$fit(xtrain, "SalePrice")
#> [1] "In total, 3 models will be trained"
rst$best_iteration()
#> $n_estimators
#> [1] 10
#>
#> $max_depth
#> [1] 2
#>
#> $accuracy_avg
#> [1] 0.00683697
#>
#> $accuracy_sd
#> [1] 0.001698197
#>
#> $auc_avg
#> [1] NaN
#>
#> $auc_sd
#> [1] NA
Here, we will solve a simple binary classification problem: predicting which passengers survived the sinking of the Titanic. The idea is to demonstrate how to use this package to solve classification problems.
Data Preparation
# load class
load('../data/cla_train.rda')
# if the above doesn't work, you can try: load("cla_train.rda")
head(cla_train)
#> PassengerId Survived Pclass
#> 1: 1 0 3
#> 2: 2 1 1
#> 3: 3 1 3
#> 4: 4 1 1
#> 5: 5 0 3
#> 6: 6 0 3
#> Name Sex Age SibSp Parch
#> 1: Braund, Mr. Owen Harris male 22 1 0
#> 2: Cumings, Mrs. John Bradley (Florence Briggs Thayer) female 38 1 0
#> 3: Heikkinen, Miss. Laina female 26 0 0
#> 4: Futrelle, Mrs. Jacques Heath (Lily May Peel) female 35 1 0
#> 5: Allen, Mr. William Henry male 35 0 0
#> 6: Moran, Mr. James male NA 0 0
#> Ticket Fare Cabin Embarked
#> 1: A/5 21171 7.2500 S
#> 2: PC 17599 71.2833 C85 C
#> 3: STON/O2. 3101282 7.9250 S
#> 4: 113803 53.1000 C123 S
#> 5: 373450 8.0500 S
#> 6: 330877 8.4583 Q
# split the data
split <- createDataPartition(y = cla_train$Survived, p = 0.7)
xtrain <- cla_train[split$Resample1]
xtest <- cla_train[!split$Resample1]
# encode categorical variables - shorter way
for(c in c('Embarked','Sex','Cabin')){
lbl <- LabelEncoder$new()
lbl$fit(c(xtrain[[c]], xtest[[c]]))
xtrain[[c]] <- lbl$transform(xtrain[[c]])
xtest[[c]] <- lbl$transform(xtest[[c]])
}
#> The data contains blank values. Imputing them with 'NA'
#> (message repeated for each encoded column)
# impute missing values
xtrain[, Age := replace(Age, is.na(Age), median(Age, na.rm = TRUE))]
xtest[, Age := replace(Age, is.na(Age), median(Age, na.rm = TRUE))]
# drop these features
to_drop <- c('PassengerId','Ticket','Name')
xtrain <- xtrain[, -c(to_drop), with = FALSE]
xtest <- xtest[, -c(to_drop), with = FALSE]
Now, our data is ready to be served for model training. Let’s do it.
KNN Classification
knn <- KNNTrainer$new(k = 2, prob = TRUE, type = 'class')
knn$fit(train = xtrain, test = xtest, y = 'Survived')
probs <- knn$predict(type = 'prob')
labels <- knn$predict(type='raw')
auc(actual = xtest$Survived, predicted=labels)
#> [1] 0.6385027
Naive Bayes Classification
nb <- NBTrainer$new()
nb$fit(xtrain, 'Survived')
pred <- nb$predict(xtest)
#> Warning: predict.naive_bayes(): More features in the newdata are provided as
#> there are probability tables in the object. Calculation is performed based on
#> features to be found in the tables.
auc(actual = xtest$Survived, predicted=pred)
#> [1] 0.7771836
SVM Classification
# predicts labels
svm <- SVMTrainer$new()
svm$fit(xtrain, 'Survived')
pred <- svm$predict(xtest)
auc(actual = xtest$Survived, predicted=pred)
Logistic Regression
lf <- LMTrainer$new(family="binomial")
lf$fit(X = xtrain, y = "Survived")
summary(lf$model)
#>
#> Call:
#> stats::glm(formula = f, family = self$family, data = X, weights = self$weights)
#>
#> Deviance Residuals:
#> Min 1Q Median 3Q Max
#> -2.6102 -0.6018 -0.4367 0.7038 2.4493
#>
#> Coefficients:
#> Estimate Std. Error z value Pr(>|z|)
#> (Intercept) 1.830070 0.616894 2.967 0.00301 **
#> Pclass -0.980785 0.192493 -5.095 3.48e-07 ***
#> Sex 2.508241 0.230374 10.888 < 2e-16 ***
#> Age -0.041034 0.009309 -4.408 1.04e-05 ***
#> SibSp -0.235520 0.117715 -2.001 0.04542 *
#> Parch -0.098742 0.137791 -0.717 0.47361
#> Fare 0.001281 0.002842 0.451 0.65230
#> Cabin 0.008408 0.004786 1.757 0.07899 .
#> Embarked 0.248088 0.166616 1.489 0.13649
#> ---
#> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
#>
#> (Dispersion parameter for binomial family taken to be 1)
#>
#> Null deviance: 831.52 on 623 degrees of freedom
#> Residual deviance: 564.76 on 615 degrees of freedom
#> AIC: 582.76
#>
#> Number of Fisher Scoring iterations: 5
predictions <- lf$predict(df = xtest)
auc(actual = xtest$Survived, predicted = predictions)
#> [1] 0.8832145
Lasso Logistic Regression
lf <- LMTrainer$new(family="binomial", alpha=1)
lf$cv_model(X = xtrain, y = "Survived", nfolds = 5, parallel = FALSE)
pred <- lf$cv_predict(df = xtest)
auc(actual = xtest$Survived, predicted = pred)
Ridge Logistic Regression
lf <- LMTrainer$new(family="binomial", alpha=0)
lf$cv_model(X = xtrain, y = "Survived", nfolds = 5, parallel = FALSE)
pred <- lf$cv_predict(df = xtest)
auc(actual = xtest$Survived, predicted = pred)
Random Forest
rf <- RFTrainer$new(n_estimators = 500, classification = 1, max_features = 3)
rf$fit(X = xtrain, y = "Survived")
pred <- rf$predict(df = xtest)
rf$get_importance()
#> tmp.order.tmp..decreasing...TRUE..
#> Sex 67.80128
#> Fare 57.97193
#> Age 48.37045
#> Pclass 24.64915
#> Cabin 21.45972
#> SibSp 13.51637
#> Parch 10.45743
#> Embarked 10.23844
auc(actual = xtest$Survived, predicted = pred)
#> [1] 0.7976827
XGBoost
xgb <- XGBTrainer$new(objective = "binary:logistic",
                      n_estimators = 500,
                      eval_metric = "auc",
                      maximize = TRUE,
                      learning_rate = 0.1,
                      max_depth = 6)
xgb$fit(X = xtrain, y = "Survived", valid = xtest)
#> converting the data into xgboost format..
#> starting with training...
#> [1] train-auc:0.886258 val-auc:0.879085
#> Multiple eval metrics are present. Will use val_auc for early stopping.
#> Will train until val_auc hasn't improved in 50 rounds.
#>
#> [51] train-auc:0.972938 val-auc:0.866370
#> Stopping. Best iteration:
#> [1] train-auc:0.886258 val-auc:0.879085
pred <- xgb$predict(xtest)
auc(actual = xtest$Survived, predicted = pred)
#> [1] 0.879085
Grid Search
xgb <- XGBTrainer$new(objective = "binary:logistic")
gst <- GridSearchCV$new(trainer = xgb,
                        parameters = list(n_estimators = c(10,50),
                                          max_depth = c(5,2)),
                        n_folds = 3,
                        scoring = c('accuracy','auc'))
gst$fit(xtrain, "Survived")
#> [1] "entering grid search"
#> [1] "In total, 4 models will be trained"
#> converting the data into xgboost format..
#> starting with training...
#> [1] train-error:0.144231
#> Will train until train_error hasn't improved in 50 rounds.
#>
#> [10] train-error:0.108173
#> converting the data into xgboost format..
#> starting with training...
#> [1] train-error:0.134615
#> Will train until train_error hasn't improved in 50 rounds.
#>
#> [10] train-error:0.112981
#> converting the data into xgboost format..
#> starting with training...
#> [1] train-error:0.115385
#> Will train until train_error hasn't improved in 50 rounds.
#>
#> [10] train-error:0.084135
#> converting the data into xgboost format..
#> starting with training...
#> [1] train-error:0.144231
#> Will train until train_error hasn't improved in 50 rounds.
#>
#> [50] train-error:0.045673
#> converting the data into xgboost format..
#> starting with training...
#> [1] train-error:0.134615
#> Will train until train_error hasn't improved in 50 rounds.
#>
#> [50] train-error:0.045673
#> converting the data into xgboost format..
#> starting with training...
#> [1] train-error:0.115385
#> Will train until train_error hasn't improved in 50 rounds.
#>
#> [50] train-error:0.038462
#> converting the data into xgboost format..
#> starting with training...
#> [1] train-error:0.211538
#> Will train until train_error hasn't improved in 50 rounds.
#>
#> [10] train-error:0.158654
#> converting the data into xgboost format..
#> starting with training...
#> [1] train-error:0.201923
#> Will train until train_error hasn't improved in 50 rounds.
#>
#> [10] train-error:0.168269
#> converting the data into xgboost format..
#> starting with training...
#> [1] train-error:0.206731
#> Will train until train_error hasn't improved in 50 rounds.
#>
#> [10] train-error:0.141827
#> converting the data into xgboost format..
#> starting with training...
#> [1] train-error:0.211538
#> Will train until train_error hasn't improved in 50 rounds.
#>
#> [50] train-error:0.127404
#> converting the data into xgboost format..
#> starting with training...
#> [1] train-error:0.201923
#> Will train until train_error hasn't improved in 50 rounds.
#>
#> [50] train-error:0.132212
#> converting the data into xgboost format..
#> starting with training...
#> [1] train-error:0.206731
#> Will train until train_error hasn't improved in 50 rounds.
#>
#> [50] train-error:0.108173
gst$best_iteration()
#> $n_estimators
#> [1] 10
#>
#> $max_depth
#> [1] 5
#>
#> $accuracy_avg
#> [1] 0
#>
#> $accuracy_sd
#> [1] 0
#>
#> $auc_avg
#> [1] 0.8619512
#>
#> $auc_sd
#> [1] 0.02280628
Random Search
rf <- RFTrainer$new()
rst <- RandomSearchCV$new(trainer = rf,
                          parameters = list(n_estimators = c(10,50),
                                            max_depth = c(5,2)),
                          n_folds = 3,
                          scoring = c('accuracy','auc'),
                          n_iter = 3)
rst$fit(xtrain, "Survived")
#> [1] "In total, 3 models will be trained"
rst$best_iteration()
#> $n_estimators
#> [1] 50
#>
#> $max_depth
#> [1] 5
#>
#> $accuracy_avg
#> [1] 0.7964744
#>
#> $accuracy_sd
#> [1] 0.03090914
#>
#> $auc_avg
#> [1] 0.7729436
#>
#> $auc_sd
#> [1] 0.04283084
Let’s create a new feature based on the target variable using target encoding, then test a model. Target encoding replaces each level of a categorical variable with a smoothed mean of the target for that level (typically a weighted blend of the level mean and the global mean, so rare levels shrink toward the overall average).
# add target encoding features
xtrain[, feat_01 := smoothMean(train_df = xtrain,
                               test_df = xtest,
                               colname = "Embarked",
                               target = "Survived")$train[[2]]]
xtest[, feat_01 := smoothMean(train_df = xtrain,
                              test_df = xtest,
                              colname = "Embarked",
                              target = "Survived")$test[[2]]]
# train a random forest
rf <- RFTrainer$new(n_estimators = 500, classification = 1, max_features = 4)
rf$fit(X = xtrain, y = "Survived")
pred <- rf$predict(df = xtest)
rf$get_importance()
#> tmp.order.tmp..decreasing...TRUE..
#> Sex 69.787235
#> Fare 60.832089
#> Age 52.982604
#> Pclass 24.419818
#> Cabin 21.419274
#> SibSp 13.112177
#> Parch 10.175269
#> feat_01 6.675399
#> Embarked 6.450819
auc(actual = xtest$Survived, predicted = pred)
#> [1] 0.8018717