Introduction to quickSentiment

— 1. SETUP: LOAD LIBRARIES —


library(doParallel)
## Warning: package 'doParallel' was built under R version 4.4.3
## Loading required package: foreach
## Loading required package: iterators
## Loading required package: parallel
# CRAN limits the number of cores used during package checks
cores <- min(2, parallel::detectCores())
registerDoParallel(cores = cores)
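
With the backend registered, any foreach loop using %dopar% will run on the two workers. A quick sanity check of the backend (not part of the package, just an illustration):

```r
library(doParallel)            # also loads foreach
registerDoParallel(cores = 2)

# Each iteration runs on a worker; .combine = c flattens the results.
res <- foreach(i = 1:4, .combine = c) %dopar% i^2
res
## [1]  1  4  9 16
```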

— 2. LOAD AND PREPARE TRAINING DATA —

# Look for the file in the installed package first
csv_path <- system.file("extdata", "tweets.csv", package = "quickSentiment")

# Fallback for when you are building the package locally
if (csv_path == "") {
  csv_path <- "../inst/extdata/tweets.csv"
}
tweets <- read.csv(csv_path)
set.seed(123)
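
Calling set.seed(123) fixes the random number stream so that the train/test split performed later inside pipeline() is reproducible. A minimal base-R illustration of why this matters:

```r
set.seed(123)
a <- sample(10, 5)   # pretend this selects test-set rows
set.seed(123)
b <- sample(10, 5)   # same seed, so the same "split" again
identical(a, b)
## [1] TRUE
```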

— 3. PREPROCESS THE TEXT —


Use the pre_process() function from our package to clean the raw text. This step is done outside the main pipeline so that the same cleaned text can be reused for multiple models or analyses later.

tweets$cleaned_text <- pre_process(tweets$Tweet)
## quickSentiment: Retaining negation words (e.g., 'not', 'no', 'never') to preserve sentiment polarity. To apply the strict stopword list instead, set `retain_negations = FALSE`. View qs_negations for more
tweets$sentiment <- ifelse(tweets$Avg > 0, "P", "N")  # an Avg of exactly 0 is labelled "N"
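
The message above refers to the package's qs_negations list. The idea behind retaining negations can be sketched in base R with a hypothetical mini stopword list (the package's actual lists differ):

```r
# Hypothetical strict stopword list and the negation words to keep
stops     <- c("the", "is", "a", "not", "no", "never")
negations <- c("not", "no", "never")

# retain_negations = TRUE behaviour: drop stopwords EXCEPT negations
keep <- setdiff(stops, negations)

tokens <- c("the", "movie", "is", "not", "good")
tokens[!tokens %in% keep]
## [1] "movie" "not"   "good"
```

Dropping "not" here would flip the apparent polarity of "not good", which is why the negation words are kept by default.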

— 4. RUN THE MAIN TRAINING PIPELINE —


This is the core of the package. We call the main pipeline() function to handle the train/test split, vectorization, model training, and evaluation.

result <- pipeline(
  # --- Define the vectorization method ---
  # Options: "bow" (raw counts), "tf" (term frequency), "tfidf", "binary"
  vect_method = "tf",
  
  # --- Define the model to train ---
  # Options: "logit", "rf", "xgb", "nb"
  model_name = "rf",
  
  # --- Specify the data and column names ---
  text_vector = tweets$cleaned_text,      # The column with our preprocessed text
  sentiment_vector = tweets$sentiment,    # The column with the target variable
  
  # --- Set vectorization options ---
  # Use n_gram = 2 for unigrams + bigrams, or 1 for just unigrams
  n_gram = 1,
  parallel = cores
)
## --- Running Pipeline: TERM_FREQUENCY + RANDOM_FOREST ---
## Data split: 944 training elements, 237 test elements.
## Vectorizing with TERM_FREQUENCY (ngram=1)...
##   - Fitting BoW model (term_frequency) on training data...
##   - Applying BoW transformation (term_frequency) to new data...
## 
## --- Training Random Forest Model (ranger) ---
## --- Random Forest complete. Returning results. ---
## 
## ======================================================
##  PIPELINE COMPLETE: TERM_FREQUENCY + RANDOM_FOREST
##  Model AUC: 0.690
##  Recommended ROC Threshold: 0.279
## ======================================================
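
For intuition, vect_method = "tf" weights each term by its count divided by the document's length. A toy base-R sketch of that weighting, assuming simple row-normalized counts (the package's vectorizer is more elaborate):

```r
docs  <- list(c("good", "movie", "good"), c("bad", "movie"))
vocab <- sort(unique(unlist(docs)))        # "bad" "good" "movie"

# One row per document: term frequency = count / document length
tf <- t(vapply(docs, function(d) {
  counts <- table(factor(d, levels = vocab))
  as.numeric(counts) / length(d)
}, numeric(length(vocab))))
colnames(tf) <- vocab
tf
```

In the first document "good" appears twice in three tokens, so its weight is 2/3; unlike raw counts ("bow"), this keeps long and short documents on a comparable scale.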


— 5. PREDICTION ON NEW, UNSEEN DATA —


The training is complete. The 'result' object now contains our trained model and all the necessary "artifacts" for prediction.

predicted_tweets <- predict_sentiment(
  pipeline_object = result,
  tweets$cleaned_text
)
## --- Preparing new data for prediction ---
##   - Applying BoW transformation (term_frequency) to new data...
## Using optimized threshold: 0.279
## --- Making Predictions ---
## --- Prediction Complete ---
head(predicted_tweets)
##   predicted_class    prob_N    prob_P
## 1               P 0.4664830 0.5335170
## 2               P 0.3152195 0.6847805
## 3               P 0.3464905 0.6535095
## 4               P 0.3411345 0.6588655
## 5               P 0.3740126 0.6259874
## 6               P 0.2542101 0.7457899
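
The predicted_class column comes from comparing prob_P against the stored threshold (0.279 above) rather than the default 0.5. Roughly, with the exact comparison direction inside predict_sentiment being an assumption:

```r
threshold <- 0.279
prob_P <- c(0.534, 0.685, 0.150)           # example P-class probabilities
predicted <- ifelse(prob_P >= threshold, "P", "N")
predicted
## [1] "P" "P" "N"
```

A threshold below 0.5 like this one classifies more cases as "P", trading some precision for higher recall on the positive class.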