Introduction to Time Series Analysis - 04
This note is for course MATH 545 at McGill University.
Lecture 13 - Lecture 15
Durbin-Levinson Algorithm
The Durbin-Levinson algorithm computes the coefficients of the best linear predictor recursively, stepping from the predictor based on $n-1$ observations to the one based on $n$.
Assume $\{X_t\}$ is stationary with $E(X_t)=0$.
Algorithm
At step $n$ of the algorithm we compute the coefficients $\phi_{n1},\cdots,\phi_{nn}$; the goal is to predict $X_{n+1}$. The best linear predictor at step $n$ has the form
$$P_nX_{n+1}=\sum_{j=1}^n\phi_{nj}X_{n-j+1}$$
$$\phi_{nn}=\nu_{n-1}^{-1}\Big[\gamma(n)-\sum_{j=1}^{n-1}\phi_{n-1,j}\,\gamma(n-j)\Big]$$
$$\begin{bmatrix}\phi_{n1}\\ \vdots \\ \phi_{n,n-1}\end{bmatrix}=\begin{bmatrix}\phi_{n-1,1}\\ \vdots \\ \phi_{n-1,n-1}\end{bmatrix}-\phi_{nn}\begin{bmatrix}\phi_{n-1,n-1}\\ \vdots \\ \phi_{n-1,1}\end{bmatrix}$$
and
$$\nu_n=\nu_{n-1}\big[1-\phi_{nn}^2\big]$$
with initial values $\phi_{11}=\frac{\gamma(1)}{\gamma(0)}=\rho(1)$ and $\nu_0=\gamma(0)$.
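As a concrete illustration, here is a minimal Python sketch of the recursion above. The function name, the `acvf` array convention (`acvf[h]` holding $\gamma(h)$), and the AR(1) usage example are my own choices for illustration, not part of the course material.

```python
import numpy as np

def durbin_levinson(acvf, p):
    """Durbin-Levinson recursion for steps n = 1, ..., p (p >= 1).

    acvf: array with acvf[h] = gamma(h) for h = 0, ..., p.
    Returns (phi, nu): phi[n-1] is the vector (phi_{n1}, ..., phi_{nn})
    and nu[n] is the one-step prediction MSE nu_n.
    """
    acvf = np.asarray(acvf, dtype=float)
    nu = np.zeros(p + 1)
    nu[0] = acvf[0]                              # nu_0 = gamma(0)
    phi = [np.array([acvf[1] / acvf[0]])]        # phi_11 = rho(1)
    nu[1] = nu[0] * (1 - phi[0][0] ** 2)
    for n in range(2, p + 1):
        prev = phi[-1]                           # (phi_{n-1,1}, ..., phi_{n-1,n-1})
        # phi_nn = [gamma(n) - sum_{j=1}^{n-1} phi_{n-1,j} gamma(n-j)] / nu_{n-1}
        phi_nn = (acvf[n] - prev @ acvf[n - 1:0:-1]) / nu[n - 1]
        # remaining coefficients: phi_{n-1} - phi_nn * (phi_{n-1} reversed)
        phi.append(np.append(prev - phi_nn * prev[::-1], phi_nn))
        nu[n] = nu[n - 1] * (1 - phi_nn ** 2)    # nu_n = nu_{n-1}(1 - phi_nn^2)
    return phi, nu

# hypothetical AR(1) example: gamma(h) = 0.6^h (gamma(0) normalized to 1)
acvf = 0.6 ** np.arange(6)
coeffs, mse = durbin_levinson(acvf, 5)
print(coeffs[-1])   # approx (0.6, 0, 0, 0, 0)
```

For an AR(1) autocovariance this returns $\phi_{11}=\rho(1)$ and $\phi_{nn}\approx 0$ for $n\ge 2$, matching the fact that the partial autocorrelation of an AR($p$) process vanishes beyond lag $p$.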
Now let $R_n=\frac{1}{\gamma(0)}\Gamma_n$, $\underset{\sim}{\rho_{1:n}}=(\rho(1),\cdots,\rho(n))$, and $\underset{\sim}{\rho_{n:1}}=\underset{\sim}{\rho_{1:n}^R}=(\rho(n),\cdots,\rho(1))$. (Note: the superscript $R$ here means 'reverse'.)
If $\Gamma_n\underset{\sim}{\phi_n}=\underset{\sim}{\gamma_{1:n}}$, then dividing by $\gamma(0)$, we have $R_n\underset{\sim}{\phi_n}=\underset{\sim}{\rho_{1:n}}$.
We prove this by induction on $n$.

Base case: $R_1\phi_{11}=\rho(1)$, since $R_1=1$ and $\phi_{11}=\rho(1)$.

Inductive step: assume $R_k\underset{\sim}{\phi_k}=\underset{\sim}{\rho_{1:k}}$ holds; we need to show $R_{k+1}\underset{\sim}{\phi_{k+1}}=\underset{\sim}{\rho_{1:(k+1)}}$.
\begin{align} R_{k+1}\underset{\sim}{\phi_{k+1}} &= \begin{bmatrix} R_k & \underset{\sim}{\rho_{k:1}} \\ \underset{\sim}{\rho_{k:1}^T} & 1 \end{bmatrix} \begin{bmatrix} \underset{\sim}{\phi_{k}} - \phi_{k+1,k+1}\underset{\sim}{\phi_{k}^R} \\ \phi_{k+1, k+1} \end{bmatrix}\\ &= \begin{bmatrix} R_k(\underset{\sim}{\phi_{k}} - \phi_{k+1,k+1}\underset{\sim}{\phi_{k}^R}) + \underset{\sim}{\rho_{k:1}}\phi_{k+1,k+1} \\ \underset{\sim}{\rho_{k:1}^T}\underset{\sim}{\phi_{k}} - \underset{\sim}{\rho_{k:1}^T}\underset{\sim}{\phi_{k}^R}\phi_{k+1, k+1} + \phi_{k+1, k+1} \end{bmatrix} \\ &= \begin{bmatrix} R_k(\underset{\sim}{\phi_{k}} - \phi_{k+1,k+1}\underset{\sim}{\phi_{k}^R}) + \underset{\sim}{\rho_{k:1}}\phi_{k+1,k+1} \\ \underset{\sim}{\rho_{k:1}^T}\underset{\sim}{\phi_{k}} + \phi_{k+1, k+1} (1-\underset{\sim}{\rho_{k:1}^T}\underset{\sim}{\phi_{k}^R}) \end{bmatrix} \\ &= \begin{bmatrix} \underset{\sim}{\rho_{1:k}} + \phi_{k+1,k+1}(\underset{\sim}{\rho_{k:1}} - R_k\underset{\sim}{\phi_k^R}) \\ ? \end{bmatrix} \\ &= \begin{bmatrix} \underset{\sim}{\rho_{1:k}} \\ ? \end{bmatrix}\end{align}
The last two steps use the induction hypothesis $R_k\underset{\sim}{\phi_k}=\underset{\sim}{\rho_{1:k}}$, together with $R_k\underset{\sim}{\phi_k^R}=(R_k\underset{\sim}{\phi_k})^R=\underset{\sim}{\rho_{k:1}}$, which holds because $R_k$ is a symmetric Toeplitz matrix (reversing the input vector reverses the output); hence the term multiplying $\phi_{k+1,k+1}$ in the first block vanishes.
To calculate ?, we know that:
\begin{align} \nu_k &= E((X_{k+1} - P_kX_{k+1})^2) \\ &= \gamma(0) - \underset{\sim}{\phi_k^T}\underset{\sim}{\gamma_{1:k}} \\ &=\gamma(0)(1-\underset{\sim}{\phi_k^T}\underset{\sim}{\rho_{1:k}}) \end{align}
So that
$$\phi_{k+1,k+1}=\frac{\gamma(k+1)-\underset{\sim}{\gamma_{k:1}^T}\underset{\sim}{\phi_k}}{\nu_k}=\frac{\gamma(k+1)-\underset{\sim}{\gamma_{k:1}^T}\underset{\sim}{\phi_k}}{\gamma(0)(1-\underset{\sim}{\phi_k^T}\underset{\sim}{\rho_{1:k}})}$$
Therefore, we can calculate ? by:
\begin{align} ? &= \underset{\sim}{\rho_{k:1}^T}\underset{\sim}{\phi_{k}} + \frac{(\gamma(k+1)-\underset{\sim}{\gamma_{k:1}^T}\underset{\sim}{\phi_k})(1-\underset{\sim}{\rho_{k:1}^T}\underset{\sim}{\phi_{k}^R})}{\gamma(0)(1-\underset{\sim}{\phi_k^T}\underset{\sim}{\rho_{1:k}})} \\ &= \underset{\sim}{\rho_{k:1}^T}\underset{\sim}{\phi_{k}} + \frac{\gamma(k+1)}{\gamma(0)} - \underset{\sim}{\rho_{k:1}^T}\underset{\sim}{\phi_{k}} \\ &= \rho(k+1) \end{align}
Here the second equality uses $\underset{\sim}{\rho_{k:1}^T}\underset{\sim}{\phi_k^R}=\underset{\sim}{\phi_k^T}\underset{\sim}{\rho_{1:k}}$ (reversing both vectors preserves the inner product), so the factor $(1-\underset{\sim}{\rho_{k:1}^T}\underset{\sim}{\phi_k^R})$ cancels the denominator, and $\underset{\sim}{\gamma_{k:1}^T}\underset{\sim}{\phi_k}/\gamma(0)=\underset{\sim}{\rho_{k:1}^T}\underset{\sim}{\phi_k}$.
This completes the proof.
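The identity $\Gamma_n\underset{\sim}{\phi_n}=\underset{\sim}{\gamma_{1:n}}$ just proved can also be checked numerically by comparing the recursion with a direct linear solve; a small sketch, reusing the hypothetical `durbin_levinson` function and AR(1) autocovariance from above:

```python
import numpy as np
from scipy.linalg import toeplitz

n = 5
acvf = 0.6 ** np.arange(n + 1)               # hypothetical AR(1) acvf again
Gamma = toeplitz(acvf[:n])                   # Gamma_n has entries gamma(|i-j|)
phi_direct = np.linalg.solve(Gamma, acvf[1:n + 1])   # solve Gamma_n phi = gamma_{1:n}
coeffs, mse = durbin_levinson(acvf, n)
assert np.allclose(coeffs[-1], phi_direct)   # recursion agrees with direct solve
```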
Revisiting last lecture's formula for $\nu_n$
\begin{align} \nu_n &= E\Big(\big(X_{n+1} - \sum_{j=1}^n\phi_{nj}X_{n-j+1}\big)^2\Big) \\ &= \gamma(0) - \underset{\sim}{\phi_n^T}\underset{\sim}{\gamma_{1:n}} \\ &=\gamma(0) - \underset{\sim}{\phi_{n-1}^T}\underset{\sim}{\gamma_{1:n-1}} + \underbrace{\phi_{nn}\underset{\sim}{\phi_{n-1}^{RT}}\underset{\sim}{\gamma_{1:n-1}} - \phi_{nn}\gamma(n)}_{\text{from the recursion for }\underset{\sim}{\phi_n}} \\ &=\gamma(0) - \underset{\sim}{\phi_{n-1}^T}\underset{\sim}{\gamma_{1:n-1}} - \phi_{nn}(\gamma(n) - \underset{\sim}{\phi_{n-1}^{RT}}\underset{\sim}{\gamma_{1:n-1}}) \\ &=\nu_{n-1} - \phi_{nn}(\phi_{nn}\nu_{n-1}) \\ &=\nu_{n-1}(1-\phi_{nn}^2) \end{align}
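Continuing the numerical check above, the recursive $\nu_n$ agrees with the direct formula $\nu_n=\gamma(0)-\underset{\sim}{\phi_n^T}\underset{\sim}{\gamma_{1:n}}$:

```python
# mse[n] came from the recursion nu_n = nu_{n-1}(1 - phi_nn^2);
# compare with the direct expression gamma(0) - phi_n^T gamma_{1:n}
assert np.isclose(mse[n], acvf[0] - coeffs[-1] @ acvf[1:n + 1])
```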
Innovations Algorithm
Define the best linear predictor of $X_n$ as
$$\hat{X}_n=\begin{cases}0, & n=1\\ P_{n-1}X_n, & n=2,3,\cdots\end{cases}$$
and the corresponding one-step MSE
$$V_n=E[(X_{n+1}-P_nX_{n+1})^2]$$
Define the one-step innovations (prediction errors) in the following way:
$$U_n=X_n-\hat{X}_n,\qquad \underset{\sim}{U_n}=(U_1,\cdots,U_n)^T,\qquad \underset{\sim}{X_n}=(X_1,\cdots,X_n)^T$$
so that $\underset{\sim}{U_n}=A_n\underset{\sim}{X_n}$, where
$$A_n=\begin{bmatrix}1 & 0 & 0 & 0 & \cdots \\ a_{11} & 1 & 0 & 0 & \cdots \\ a_{22} & a_{21} & 1 & 0 & \cdots \\ a_{33} & a_{32} & a_{31} & 1 & \cdots \\ \vdots & \vdots & \vdots & \vdots & \ddots \end{bmatrix}$$
Then we have
$$U_1=X_1 \quad (\text{since } \hat{X}_1=0)$$
$$U_2=X_2+a_{11}X_1$$
$$U_3=X_3+a_{21}X_2+a_{22}X_1$$
Then $A_n$ is nonsingular with inverse $C_n=A_n^{-1}$, where
$$C_n=\begin{bmatrix}1 & 0 & 0 & 0 & \cdots \\ \theta_{11} & 1 & 0 & 0 & \cdots \\ \theta_{22} & \theta_{21} & 1 & 0 & \cdots \\ \theta_{33} & \theta_{32} & \theta_{31} & 1 & \cdots \\ \vdots & \vdots & \vdots & \vdots & \ddots \end{bmatrix}$$
Up to now we have $\underset{\sim}{U_n}=A_n\underset{\sim}{X_n}$, $C_n\underset{\sim}{U_n}=\underset{\sim}{X_n}$, and $\underset{\sim}{\hat{X}_n}=\underset{\sim}{X_n}-\underset{\sim}{U_n}$, so we can calculate $\underset{\sim}{\hat{X}_n}$ by
\begin{align}\underset{\sim}{\hat{X}_n} &= C_n\underset{\sim}{U_n} - \underset{\sim}{U_n} \\ &= (C_n - I_n)\underset{\sim}{U_n} \\ &= (C_n - I_n)(\underset{\sim}{X_n} - \underset{\sim}{\hat{X}_n}) \\ &= H_n(\underset{\sim}{X_n} - \underset{\sim}{\hat{X}_n}) \end{align}
where
$$H_n=C_n-I_n=\begin{bmatrix}0 & 0 & 0 & 0 & \cdots \\ \theta_{11} & 0 & 0 & 0 & \cdots \\ \theta_{22} & \theta_{21} & 0 & 0 & \cdots \\ \theta_{33} & \theta_{32} & \theta_{31} & 0 & \cdots \\ \vdots & \vdots & \vdots & \vdots & \ddots \end{bmatrix}$$
Then, reading off row by row from the beginning:
$$\hat{X}_1=(0,\cdots,0)(\underset{\sim}{X_n}-\underset{\sim}{\hat{X}_n})=0$$
$$\hat{X}_2=\theta_{11}(X_1-\hat{X}_1)=\theta_{11}X_1$$
$$\hat{X}_3=\theta_{22}(X_1-\hat{X}_1)+\theta_{21}(X_2-\hat{X}_2)$$
and in general
$$\hat{X}_{n+1}=\begin{cases}0, & n=0\\ \sum_{j=1}^{n}\theta_{nj}(X_{n+1-j}-\hat{X}_{n+1-j}), & n=1,2,\cdots\end{cases}$$
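Given the $\theta_{nj}$ (computed by the recursion in the next subsection), this prediction recursion is straightforward to implement; a minimal sketch, with a function name and indexing conventions of my own choosing:

```python
import numpy as np

def one_step_predictions(x, theta):
    """Compute X^hat_1, ..., X^hat_{len(x)+1} via the innovations recursion.

    x: observed values (x[i] = X_{i+1}); theta: dict with
    theta[n][j-1] = theta_{nj}, as produced by the recursion below.
    """
    xhat = np.zeros(len(x) + 1)          # xhat[n] holds X^hat_{n+1}; X^hat_1 = 0
    for n in range(1, len(x) + 1):
        # X^hat_{n+1} = sum_{j=1}^n theta_{nj} (X_{n+1-j} - X^hat_{n+1-j})
        xhat[n] = sum(theta[n][j - 1] * (x[n - j] - xhat[n - j])
                      for j in range(1, n + 1))
    return xhat
```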
Coefficients
$$V_0=K(1,1)$$
$$\theta_{n,n-k}=V_k^{-1}\Big(K(n+1,k+1)-\sum_{j=0}^{k-1}\theta_{k,k-j}\,\theta_{n,n-j}\,V_j\Big)\quad\text{for } 0\le k\le n-1$$
$$V_n=K(n+1,n+1)-\sum_{j=0}^{n-1}\theta_{n,n-j}^2V_j$$
where $K(i,j)=\mathrm{Cov}(X_i,X_j)$.
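A minimal Python sketch of this coefficient recursion, assuming `K` is supplied as a function with the 1-based indices used in the text (for a stationary series, `K = lambda i, j: acvf[abs(i - j)]`); the names and storage layout are my own:

```python
import numpy as np

def innovations(K, n_max):
    """Innovations algorithm: theta[n][j-1] = theta_{nj}, V[n] = V_n."""
    theta = {}
    V = np.zeros(n_max + 1)
    V[0] = K(1, 1)                       # V_0 = K(1,1)
    for n in range(1, n_max + 1):
        th = np.zeros(n)                 # th[j-1] will hold theta_{n,j}
        for k in range(n):               # compute theta_{n,n-k} for k = 0..n-1
            # sum_{j=0}^{k-1} theta_{k,k-j} theta_{n,n-j} V_j  (empty when k = 0)
            s = sum(theta[k][k - 1 - j] * th[n - 1 - j] * V[j] for j in range(k))
            th[n - 1 - k] = (K(n + 1, k + 1) - s) / V[k]
        theta[n] = th
        # V_n = K(n+1,n+1) - sum_{j=0}^{n-1} theta_{n,n-j}^2 V_j
        V[n] = K(n + 1, n + 1) - sum(th[n - 1 - j] ** 2 * V[j] for j in range(n))
    return theta, V
```

For example, with the hypothetical AR(1) autocovariance used earlier, `theta, V = innovations(lambda i, j: 0.6 ** abs(i - j), 10)` produces coefficients that can be fed to `one_step_predictions` above.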