This document provides worked examples of Kalman filtering using the ‘fkf.SP’ function of the ‘FKF.SP’ package. The ‘fkf’ function of the package ‘FKF’ (Fast Kalman Filter) is a well-established implementation of the Kalman filter algorithm that is designed to maximize the computational efficiency of the filtering process. The ‘fkf.SP’ function builds upon the ‘fkf’ function by taking the additional assumption that the disturbances of the measurement equation are independent (i.e. the measurement error covariance matrix is diagonal). This allows filtering to be performed through sequential processing (i.e. a univariate treatment of the multivariate series), increasing computational efficiency in the general case. This vignette provides four worked examples, comparing the computational efficiency of the ‘fkf’ and ‘fkf.SP’ functions for maximum likelihood estimation (MLE). The first three examples were first presented within the associated vignette of the ‘FKF’ package, with the fourth being unique to this vignette. As well as the decrease in processing time achieved by the ‘fkf.SP’ function, this vignette presents and explains the difference in the log-likelihood values returned by the ‘fkf’ and ‘fkf.SP’ functions when there are missing observations (i.e. NA’s are present within argument ‘yt’).
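To illustrate the idea of sequential processing, the sketch below shows a single measurement-update step performed one observation at a time under a diagonal measurement error covariance. This is an illustration only, not the internal implementation of ‘fkf.SP’; the helper ‘sp_update’ and its arguments are hypothetical.

## Illustrative sketch of one sequential processing measurement update
## (hypothetical helper; not the internal implementation of 'fkf.SP').
## a: m x 1 state estimate; P: m x m state covariance; y: d observations;
## Zt: d x m measurement matrix; GGt_diag: the d diagonal variances.
sp_update <- function(a, P, y, Zt, GGt_diag) {
  for (i in seq_along(y)) {
    if (is.na(y[i])) next                # missing observations are skipped
    z <- Zt[i, , drop = FALSE]           # 1 x m measurement row
    v <- as.numeric(y[i] - z %*% a)      # scalar prediction error
    F_i <- as.numeric(z %*% P %*% t(z)) + GGt_diag[i] # scalar variance
    K <- (P %*% t(z)) / F_i              # m x 1 Kalman gain
    a <- a + K * v                       # univariate update of the state
    P <- P - K %*% z %*% P               # and of its covariance
  }
  return(list(at = a, Pt = P))
}
## Each update inverts a scalar rather than a d x d matrix, which is the
## source of the computational gain, e.g.:
## sp_update(a = matrix(0), P = matrix(10), y = c(1.2, NA),
##           Zt = matrix(1, 2, 1), GGt_diag = c(0.5, 0.5))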
## The packages 'FKF', 'stats' and 'NFCP' are required for this vignette:
library(FKF.SP)
library(FKF)
library(stats)
library(NFCP)
Autoregressive moving average (ARMA) models can be estimated through Kalman filtering. See also help(makeARIMA) and help(KalmanRun).
Step 1 - Sample from an ARMA(2, 1) process through the ‘stats’ package to generate observations:
# Set constants:
## Length of series
n <- 10000

## AR parameters
AR <- c(ar1 = 0.6, ar2 = 0.2, ma1 = -0.2, sigma = sqrt(0.2))

# Generate observations:
set.seed(1)
a <- stats::arima.sim(model = list(ar = AR[c("ar1", "ar2")], ma = AR["ma1"]), n = n,
                      innov = rnorm(n) * AR["sigma"])
Step 2 - Create a state space representation of the four ARMA parameters:
arma21ss <- function(ar1, ar2, ma1, sigma) {
  Tt <- matrix(c(ar1, ar2, 1, 0), ncol = 2)
  Zt <- matrix(c(1, 0), ncol = 2)
  ct <- matrix(0)
  dt <- matrix(0, nrow = 2)
  GGt <- matrix(0)
  H <- matrix(c(1, ma1), nrow = 2) * sigma
  HHt <- H %*% t(H)
  a0 <- c(0, 0)
  ## Diffuse assumption
  P0 <- matrix(1e6, nrow = 2, ncol = 2)
  return(list(a0 = a0, P0 = P0, ct = ct, dt = dt, Zt = Zt, Tt = Tt, GGt = GGt,
              HHt = HHt))
}
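For reference, the matrices defined above correspond to the following state space form of the ARMA(2, 1) process (a standard representation; ‘HHt’ \(= HH'\) with \(H = (1, ma_1)'\sigma\), and the measurement equation is noiseless, ‘GGt’ \(= 0\)):

\[\alpha_{t+1} = \begin{bmatrix} ar_1 & 1 \\ ar_2 & 0 \end{bmatrix} \alpha_t + \begin{bmatrix} 1 \\ ma_1 \end{bmatrix} \sigma \varepsilon_t, \qquad y_t = \begin{bmatrix} 1 & 0 \end{bmatrix} \alpha_t, \qquad \varepsilon_t \sim N(0, 1)\]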
Parameter estimation is performed through MLE, which involves optimizing the log-likelihood returned by the Kalman filter using the ‘optim’ function.
# The objective function passed to 'optim'
objective <- function(theta, yt, SP) {
  param <- arma21ss(theta["ar1"], theta["ar2"], theta["ma1"], theta["sigma"])
  # Kalman filtering through the 'fkf.SP' function:
  if(SP){
    ans <- - fkf.SP(a0 = param$a0, P0 = param$P0, dt = param$dt, ct = param$ct,
                    Tt = param$Tt, Zt = param$Zt, HHt = param$HHt, GGt = param$GGt,
                    yt = yt)
  # Kalman filtering through the 'fkf' function:
  } else {
    ans <- - fkf(a0 = param$a0, P0 = param$P0, dt = param$dt, ct = param$ct, Tt = param$Tt,
                 Zt = param$Zt, HHt = param$HHt, GGt = param$GGt, yt = yt)$logLik
  }
  ## 'optim' minimizes functions by default, so the negative log-likelihood is returned
  return(ans)
}
Step 3 - Estimate parameters through MLE:
# This test estimates parameters through 'optim'.
# Please run the complete chunk for a fair comparison:

# Initial values:
theta <- c(ar = c(0, 0), ma1 = 0, sigma = 1)

### MLE through the 'fkf' function:
start <- Sys.time()
set.seed(1)
FKF_estimation <- optim(theta, objective, yt = rbind(a), hessian = TRUE, SP = F)
FKF_runtime <- Sys.time() - start

### MLE through the 'fkf.SP' function:
start <- Sys.time()
set.seed(1)
FKF.SP_estimation <- optim(theta, objective, yt = rbind(a), hessian = TRUE, SP = T)
FKF.SP_runtime <- Sys.time() - start
The MLE process has returned identical estimated parameters for both functions:
print(rbind(FKF.SP = FKF.SP_estimation$par, FKF = FKF_estimation$par))
#> ar1 ar2 ma1 sigma
#> FKF.SP 0.5534615 0.2276404 -0.1413417 0.4525427
#> FKF 0.5534615 0.2276404 -0.1413417 0.4525427
Both functions also required an identical number of function calls:
print(c(FKF.SP = FKF.SP_estimation$counts[1], FKF = FKF_estimation$counts[1]))
#> FKF.SP.function FKF.function
#> 265 265
Utilizing sequential processing, however, has decreased processing time:
print(c(FKF.SP = FKF.SP_runtime, FKF = FKF_runtime))
#> Time differences in secs
#> FKF.SP FKF
#> 1.657572 2.714080
The vignette of ‘FKF’ shows how to filter the series with the estimated parameter values and develop plots for analysis purposes. ‘fkf.SP’ is designed for efficient parameter estimation rather than for filtering a series under estimated parameters.
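As a brief sketch of that final filtering step, the series can be filtered under the parameters estimated above (‘fkf’ returns the filtered states within element ‘att’ of its output list):

## Sketch: filter the simulated series under the estimated parameters
est <- FKF_estimation$par
sp <- arma21ss(est["ar1"], est["ar2"], est["ma1"], est["sigma"])
filtered <- fkf(a0 = sp$a0, P0 = sp$P0, dt = sp$dt, ct = sp$ct,
                Tt = sp$Tt, Zt = sp$Zt, HHt = sp$HHt, GGt = sp$GGt,
                yt = rbind(a))
## e.g. overlay the filtered first state on the simulated series:
## plot(a, type = "l"); lines(filtered$att[1, ], col = "blue")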
This example compares the computational time of the ‘fkf.SP’ and ‘fkf’ functions applied to the Nile dataset. It also shows the difference in the log-likelihood values returned by the two functions when NA’s are present within the observations.
## Transition equation:
## alpha[t+1] = alpha[t] + eta[t], eta[t] ~ N(0, HHt)
## Measurement equation:
## y[t] = alpha[t] + eps[t], eps[t] ~ N(0, GGt)

## Complete Nile Data - no NA's
y_complete <- y_incomplete <- Nile
## Incomplete Nile Data - two NA's are present:
y_incomplete[c(3, 10)] <- NA

## Set constant parameters:
dt <- ct <- matrix(0)
Zt <- Tt <- matrix(1)
a0 <- y_incomplete[1] # Estimation of the first year flow
P0 <- matrix(100)     # Variance of 'a0'
## Parameter estimation - maximum likelihood estimation:
Nile_MLE <- function(yt, SP){
  ## Initial estimates of the unknown parameters:
  GGt <- HHt <- var(yt, na.rm = TRUE) * .5
  set.seed(1)
  # Kalman filtering through the 'fkf.SP' function:
  if(SP){
    return(optim(c(HHt = HHt, GGt = GGt),
                 fn = function(par, ...)
                   -fkf.SP(HHt = matrix(par[1]), GGt = matrix(par[2]), ...),
                 yt = rbind(yt), a0 = a0, P0 = P0, dt = dt, ct = ct,
                 Zt = Zt, Tt = Tt))
  # Kalman filtering through the 'fkf' function:
  } else {
    return(optim(c(HHt = HHt, GGt = GGt),
                 fn = function(par, ...)
                   -fkf(HHt = matrix(par[1]), GGt = matrix(par[2]), ...)$logLik,
                 yt = rbind(yt), a0 = a0, P0 = P0, dt = dt, ct = ct,
                 Zt = Zt, Tt = Tt))
  }
}
Performing parameter estimation using the complete data, the ‘fkf’ and ‘fkf.SP’ functions return identical results:
fkf.SP_MLE_complete <- Nile_MLE(y_complete, SP = T)
fkf_MLE_complete <- Nile_MLE(y_complete, SP = F)
‘fkf.SP’:
print(fkf.SP_MLE_complete[1:3])
#> $par
#> HHt GGt
#> 1300.777 15247.773
#>
#> $value
#> [1] 637.626
#>
#> $counts
#> function gradient
#> 57 NA
‘fkf’:
print(fkf_MLE_complete[1:3])
#> $par
#> HHt GGt
#> 1300.777 15247.773
#>
#> $value
#> [1] 637.626
#>
#> $counts
#> function gradient
#> 57 NA
Performing parameter estimation using incomplete data returns identical estimated parameters, but different log-likelihood values:
fkf.SP_MLE_incomplete <- Nile_MLE(y_incomplete, SP = T)
fkf_MLE_incomplete <- Nile_MLE(y_incomplete, SP = F)
‘fkf.SP’:
print(fkf.SP_MLE_incomplete[1:3])
#> $par
#> HHt GGt
#> 1385.066 15124.131
#>
#> $value
#> [1] 625.1676
#>
#> $counts
#> function gradient
#> 53 NA
‘fkf’:
print(fkf_MLE_incomplete[1:3])
#> $par
#> HHt GGt
#> 1385.066 15124.131
#>
#> $value
#> [1] 627.0055
#>
#> $counts
#> function gradient
#> 53 NA
The difference in log-likelihood values is equal to 1.8378771. This corresponds to:
# Number of NA values:
NA_values <- length(which(is.na(y_incomplete)))

print(0.5 * NA_values * log(2 * pi))
#> [1] 1.837877
The log-likelihood score of the Kalman filter is given by:
\[-\frac{1}{2}\left(n \times d \times \log(2\pi)\right) - \frac{1}{2}\sum_{t=1}^{n}\left(\log|F_t| + v_t'F_t^{-1}v_t\right)\]
where \(n\) is the number of discrete time steps (i.e. the number of columns of object ‘yt’) and \(d\) is the number of observations at each time point (i.e. the number of rows of object ‘yt’). \(v_t\) and \(F_t\) are the prediction error and its covariance matrix at time \(t\), respectively. The ‘fkf’ function instantiates its log-likelihood score by calculating \(-0.5 \times n \times d \times \log(2\pi)\). When there are missing observations, however, \(d\) should instead become \(d_t\), where \(d_t \leq d \; \forall \; t\). With the two missing observations in this example, the instantiated term should instead be \(-0.5\left((n \times d) - 2\right) \times \log(2\pi)\), explaining the difference in log-likelihood scores: the ‘fkf’ function instantiates the log-likelihood contribution of two observations that are not actually observed, whereas ‘fkf.SP’ does not.
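This can be verified directly; the following sketch evaluates both filters at the same parameter values on the incomplete series, so their log-likelihoods should differ by exactly this constant:

## Evaluate both filters at identical parameters on the incomplete data:
pars <- fkf_MLE_incomplete$par
ll_fkf <- fkf(a0, P0, dt, ct, Tt, Zt, HHt = matrix(pars[1]),
              GGt = matrix(pars[2]), yt = rbind(y_incomplete))$logLik
ll_fkf.SP <- fkf.SP(a0, P0, dt, ct, Tt, Zt, HHt = matrix(pars[1]),
                    GGt = matrix(pars[2]), yt = rbind(y_incomplete))
print(ll_fkf.SP - ll_fkf) # 0.5 * 2 * log(2 * pi) = 1.837877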
# This test uses the estimated parameters of the complete data.
# Please run the complete chunk for a fair comparison:

# 'fkf'
set.seed(1)
start <- Sys.time()
for(i in 1:1e4) fkf(a0, P0, dt, ct, Tt, Zt, HHt = matrix(fkf_MLE_complete$par[1]),
                    GGt = matrix(fkf_MLE_complete$par[2]), yt = rbind(y_complete))
FKF_runtime <- Sys.time() - start

# 'fkf.SP'
set.seed(1)
start <- Sys.time()
for(i in 1:1e4) fkf.SP(a0, P0, dt, ct, Tt, Zt, HHt = matrix(fkf.SP_MLE_complete$par[1]),
                       GGt = matrix(fkf.SP_MLE_complete$par[2]), yt = rbind(y_complete))
fkf.SP_runtime <- Sys.time() - start
print(c(FKF.SP = fkf.SP_runtime, FKF = FKF_runtime))
#> Time differences in secs
#> FKF.SP FKF
#> 1.322105 2.282939
Utilizing sequential processing has decreased processing time.
# This test estimates parameters 10 times through 'optim'.
# Please run the complete chunk for a fair comparison:

## Transition equation:
## alpha[t+1] = alpha[t] + eta[t], eta[t] ~ N(0, HHt)
## Measurement equation:
## y[t] = alpha[t] + eps[t], eps[t] ~ N(0, GGt)

## Tree ring widths in dimensionless units
y <- treering

## Set constant parameters:
dt <- ct <- matrix(0)
Zt <- Tt <- matrix(1)
a0 <- y[1]        # Estimation of the first width
P0 <- matrix(100) # Variance of 'a0'

## Time comparison - estimate parameters 10 times:
### MLE through the 'fkf' function:
start <- Sys.time()
set.seed(1)
for(i in 1:10) fit_fkf <- optim(c(HHt = var(y, na.rm = TRUE) * .5,
                                  GGt = var(y, na.rm = TRUE) * .5),
                                fn = function(par, ...)
                                  -fkf(HHt = array(par[1], c(1,1,1)),
                                       GGt = array(par[2], c(1,1,1)), ...)$logLik,
                                yt = rbind(y), a0 = a0, P0 = P0, dt = dt, ct = ct,
                                Zt = Zt, Tt = Tt)
run_time_FKF <- Sys.time() - start

### MLE through the 'fkf.SP' function:
start <- Sys.time()
set.seed(1)
for(i in 1:10) fit_fkf.SP <- optim(c(HHt = var(y, na.rm = TRUE) * .5,
                                     GGt = var(y, na.rm = TRUE) * .5),
                                   fn = function(par, ...)
                                     -fkf.SP(HHt = array(par[1], c(1,1,1)),
                                             GGt = matrix(par[2]), ...),
                                   yt = rbind(y), a0 = a0, P0 = P0, dt = dt, ct = ct,
                                   Zt = Zt, Tt = Tt)
run_time_FKF.SP <- Sys.time() - start
print(c(fkf.SP = run_time_FKF.SP, fkf = run_time_FKF))
#> Time differences in secs
#> fkf.SP fkf
#> 2.480488 3.988826
## Filter tree ring data with estimated parameters using 'fkf':
fkf.obj <- fkf(a0, P0, dt, ct, Tt, Zt, HHt = array(fit_fkf$par[1], c(1,1,1)),
               GGt = array(fit_fkf$par[2], c(1,1,1)), yt = rbind(y))
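As in the ‘FKF’ vignette, plots can then be developed from this object; for example, a brief sketch overlaying the filtered local level (element ‘att’) on the observed series:

## Sketch: overlay the filtered level on the tree ring series
plot(y, col = "grey", main = "Tree ring widths")
lines(ts(fkf.obj$att[1, ], start = start(y), frequency = frequency(y)),
      col = "blue")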
Utilizing sequential processing has decreased processing time.
The Kalman filter can be used to fit stochastic models to time-series data of quoted futures contract prices of commodities. The following example estimates the parameters of a random walk (i.e. geometric Brownian motion) model for crude oil through MLE. Quoted futures contracts are available in the ‘NFCP’ package. See the ‘NFCP’ documentation for more details on fitting commodity pricing models to term structure data.
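Reading off the matrices defined in Step 1 below, the transition and measurement equations of this model are:

\[x_{t+1} = x_t + \left(\alpha - \frac{1}{2}\sigma^2\right)\Delta t + \eta_t, \qquad \eta_t \sim N(0, \sigma^2 \Delta t)\]
\[y_{i,t} = x_t + \alpha^{rn} \tau_{i,t} + \epsilon_{i,t}, \qquad \epsilon_{i,t} \sim N(0, ME_1^2)\]

where \(x_t\) is the log spot price, \(y_{i,t}\) is the quoted log price of futures contract \(i\) at time \(t\), \(\tau_{i,t}\) is its time to maturity, and \(\alpha\) and \(\alpha^{rn}\) are the real and risk-neutral drift parameters, respectively.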
Step 1 - Develop the objective function:
yt <- t(log(NFCP::SS_oil$contracts)) # Quoted log futures prices
delta_t <- NFCP::SS_oil$dt           # Discrete time step
## Time to maturity of quoted futures contracts:
TTM <- t(NFCP::SS_oil$contract_maturities)

a0 <- yt[1, 1]    # Initial estimate
P0 <- matrix(100) # Variance of 'a0'
## GBM Function
gbm_mle <- function(theta, SP){

  ct <- theta["alpha_rn"] * TTM
  dt <- (theta["alpha"] - 0.5 * theta["sigma"]^2) * delta_t
  Zt <- matrix(1, nrow(yt))
  HHt <- matrix(theta["sigma"]^2 * delta_t)
  Tt <- matrix(1)

  ## 'fkf.SP' requires a vector of the diagonal elements of the variances of the measurement errors
  if(SP){
    GGt <- rep(theta["ME_1"]^2, nrow(yt))
  } else {
  ## 'fkf' instead requires a full matrix of the variances of the measurement errors
    GGt <- diag(theta["ME_1"]^2, nrow(yt))
  }

  ## 'fkf.SP' returns only the log-likelihood value, whilst 'fkf' returns a list of filtered values
  if(SP){
    logLik <- - fkf.SP(a0 = a0, P0 = P0, dt = dt, ct = ct, Tt = Tt, Zt = Zt,
                       HHt = HHt, GGt = GGt, yt = yt)
  } else {
    logLik <- - fkf(a0 = a0, P0 = P0, dt = dt, ct = ct, Tt = Tt, Zt = Zt,
                    HHt = HHt, GGt = GGt, yt = yt)$logLik
  }
  return(logLik)
}
Step 2 - Perform MLE:
# This test estimates parameters through 'optim'.
# Please run the complete chunk for a fair comparison:

# Initial estimates:
gbm_par <- c(alpha = 0, alpha_rn = 0.01, sigma = 0.1, ME_1 = 0.05)

### MLE through the 'fkf.SP' function:
set.seed(1)
start <- Sys.time()
fkf.SP.gbm <- optim(par = gbm_par, fn = gbm_mle, SP = T)
fkf.SP_runtime <- Sys.time() - start

### MLE through the 'fkf' function:
set.seed(1)
start <- Sys.time()
fkf.gbm <- optim(par = gbm_par, fn = gbm_mle, SP = F)
fkf_runtime <- Sys.time() - start
The presence of a large number of NA’s in the observation matrix (i.e. object ‘yt’) has resulted in significantly different maximized log-likelihood values between the two functions (see the Nile example above for more details):
print(rbind(FKF.SP = - fkf.SP.gbm$value, FKF = - fkf.gbm$value))
#> [,1]
#> FKF.SP 10221.345
#> FKF -4778.489
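Consistent with the Nile example, this difference reflects the \(0.5 \times \log(2\pi)\) constant instantiated for each missing observation. A quick sketch (the match is approximate here, as the estimated parameters of the two fits differ slightly):

## Number of NA's within the observation matrix:
NA_count <- sum(is.na(yt))
## Expected (approximate) difference in maximized log-likelihoods:
print(0.5 * NA_count * log(2 * pi))
print((- fkf.SP.gbm$value) - (- fkf.gbm$value))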
Regardless, the MLE process has returned nearly identical estimated parameters for both functions:
print(rbind(FKF.SP = fkf.SP.gbm$par, FKF = fkf.gbm$par))
#> alpha alpha_rn sigma ME_1
#> FKF.SP -0.02283278 0.001236720 0.2070780 0.03721549
#> FKF -0.02277886 0.001234892 0.2071917 0.03721757
Both functions also required a nearly identical number of function calls:
print(c(FKF.SP = fkf.SP.gbm$counts[1], FKF = fkf.gbm$counts[1]))
#> FKF.SP.function FKF.function
#> 145 153
A sequential processing approach, however, has significantly decreased processing time:
print(c(FKF.SP = fkf.SP_runtime, FKF = fkf_runtime))
#> Time differences in secs
#> FKF.SP FKF
#> 0.298454 2.604848
Sequential processing is a significantly faster Kalman filtering approach for this particular example due to: the large number of observations at each time point; the assumption that the variances of the disturbances are independent; the large number of NA’s observed as contracts expire or become available; and the significantly reduced dimensionality of argument ‘GGt’.
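To make the scale of this example concrete, a quick sketch of the dimensions involved (using the objects defined above):

## Dimensions driving the speed difference in this example:
print(nrow(yt))        # observations (contracts) per time point
print(ncol(yt))        # discrete time points
print(mean(is.na(yt))) # proportion of missing observations
## 'fkf' handles an nrow(yt) x nrow(yt) 'GGt' at each step, whereas
## 'fkf.SP' requires only its nrow(yt) diagonal elements.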