Progress
y <- unlist(read.csv('return.csv', header = F)) # log difference in returns (still to verify)
x <- unlist(read.csv('x1.csv', header = F)) # not yet sure what these data are
y <- as.matrix(y)
x <- as.matrix(x)
T <- length(y)
plot(1:T, y, type = 'l', xlab = 'time', ylab = 'log diff return')
plot(x, type = 'l', main = 'still need to figure this out')
The eventual interest lies in the behavior of the per-period conditional variance of cumulative returns, \[\Sigma_{r}(k) \equiv \frac{1}{k} Var(r_{t, t+k} \mid \mathcal{I}_{t}),\] where \(r_{t, t+k} = \sum_{i = 1}^{k} r_{t+i}\).
A typical model is \[ z_{t} = \begin{bmatrix} r_{t} - \mathbb{E}[r]\\x_{t} - \mathbb{E}[x] \end{bmatrix} = \begin{bmatrix} 0 & \phi_{1, 2}\\ 0 & \phi_{2, 2} \end{bmatrix} \begin{bmatrix} r_{t-1} - \mathbb{E}[r]\\x_{t-1} - \mathbb{E}[x] \end{bmatrix} + \nu_{t} \text{ , } \nu_{t} \sim \mathcal{N} (0, \Sigma).\] Iterating the recursion forward and collecting the independent shocks, the conditional variance of the cumulative state is \[ Var(z_{t + 1} + z_{t + 2} + \dots + z_{t + k} \mid \mathcal{I}_{t}) = \Sigma + (I + \Phi) \Sigma (I + \Phi)' + \dots + (I + \Phi + \dots + \Phi^{k-1}) \Sigma (I + \Phi + \dots + \Phi^{k-1})'.\]
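The variance formula above can be computed recursively: at step \(j\) the shock \(\nu_{t+j}\) enters the cumulative state with coefficient \(I + \Phi + \dots + \Phi^{j-1}\). A minimal sketch, using hypothetical values for \(\Phi\) and \(\Sigma\) (the data-based estimates come later):

```r
## k-period variance of the cumulated VAR(1) state.
## Each shock nu_{t+j} enters the sum with loading A_j = I + Phi + ... + Phi^{j-1}.
var_k <- function(Phi, Sigma, k) {
  p <- nrow(Phi)
  A <- diag(p)            # running loading A_j, starts at I
  Phi_j <- diag(p)        # running power Phi^j
  V <- matrix(0, p, p)
  for (j in 1:k) {
    V <- V + A %*% Sigma %*% t(A)
    Phi_j <- Phi_j %*% Phi
    A <- A + Phi_j
  }
  V
}

## Hypothetical parameters in the restricted form (first column of Phi is zero);
## the negative covariance generates mean reversion.
Phi <- matrix(c(0, 0, 0.1, 0.9), 2, 2)
Sigma <- matrix(c(1, -0.5, -0.5, 1), 2, 2)
var_k(Phi, Sigma, 1)   # equals Sigma for k = 1
```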
Retrieving the necessary matrix coefficients, we get the expression of interest: \[ \sigma_{r}^{2}(k) = \sigma_{1, 1}^{2} + 2 \phi_{1, 2} \sigma_{1, 2} \psi_{1}(k) + \phi_{1, 2}^{2} \sigma_{2, 2}^{2} \psi_{2}(k),\] where both \(\psi_{1}(k)\) and \(\psi_{2}(k)\) are functions of \(\phi_{2, 2}\) alone. The variance decomposes as follows: \(\sigma_{1, 1}^{2}\) is the i.i.d. component, \(2 \phi_{1, 2} \sigma_{1, 2} \psi_{1}(k)\) is the mean-reversion component (negative when \(\sigma_{1, 2}\) is negative), and \(\phi_{1, 2}^{2} \sigma_{2, 2}^{2} \psi_{2}(k)\) captures uncertainty about future predictors.
Now we run this toy VAR. For those curious, the restricted VAR can also be estimated by hand.
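Because the first column of \(\Phi\) is restricted to zero, each equation has only lagged \(x\) as a regressor, so equation-by-equation OLS on the demeaned series recovers \(\phi_{1,2}\) and \(\phi_{2,2}\). A sketch with simulated data standing in for the CSV series (with the real data, substitute the demeaned y and x):

```r
## Hand-rolled OLS for the restricted VAR; simulated data, true values below.
set.seed(1)
n <- 500
phi12_true <- 0.1; phi22_true <- 0.9
x <- as.numeric(arima.sim(list(ar = phi22_true), n + 1))  # AR(1) predictor
r <- phi12_true * x[1:n] + rnorm(n)                       # return equation
x_lag <- x[1:n]; x_lead <- x[2:(n + 1)]

phi12_hat <- sum(x_lag * r) / sum(x_lag^2)        # OLS of r_t on x_{t-1}
phi22_hat <- sum(x_lag * x_lead) / sum(x_lag^2)   # OLS of x_t on x_{t-1}

## Residual covariance matrix estimates Sigma
resid <- cbind(r - phi12_hat * x_lag, x_lead - phi22_hat * x_lag)
Sigma_hat <- crossprod(resid) / (n - 1)
```

The single-regressor OLS formula \(\hat\beta = \sum x_{t-1} y_t / \sum x_{t-1}^2\) is all that is needed here; with an unrestricted VAR one would instead regress each variable on both lags.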