
Generalized Least Squares

In [1]: import statsmodels.api as sm

In [2]: data = sm.datasets.longley.load()

In [3]: data.exog = sm.add_constant(data.exog)

The Longley dataset is a time series dataset. Let’s assume that the data are heteroskedastic and that we know the nature of the heteroskedasticity. We can then define sigma and use it to construct a GLS model.

First, we will obtain the residuals from an OLS fit:

In [4]: ols_resid = sm.OLS(data.endog, data.exog).fit().resid

Assume that the error terms follow an AR(1) process with a trend, resid[i] = beta_0 + rho*resid[i-1] + e[i], where e ~ N(0, some_sigma**2), and that rho is simply the correlation of the residuals. A consistent estimator for rho is obtained by regressing the residuals on the lagged residuals:

In [5]: resid_fit = sm.OLS(ols_resid[1:], sm.add_constant(ols_resid[:-1])).fit()

In [6]: print(resid_fit.tvalues[1])
-1.43902298397

In [7]: print(resid_fit.pvalues[1])
0.173784447888

While we don’t have strong evidence that the errors follow an AR(1) process, we continue.

In [8]: rho = resid_fit.params[1]
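
Since rho is meant to capture the lag-1 correlation of the residuals, one rough sanity check is to compare the estimated slope with the lag-1 sample autocorrelation; the two should be close, though not identical, because the regression above also fits an intercept. A minimal sketch:

import numpy as np

# Lag-1 sample autocorrelation of the OLS residuals; this should be in the
# same ballpark as the rho estimated from the regression above.
lag1_corr = np.corrcoef(ols_resid[:-1], ols_resid[1:])[0, 1]
print(rho, lag1_corr)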

As we know, an AR(1) process implies that near-neighbors have a stronger relation, so we can impose this structure using a Toeplitz matrix:

In [9]: from scipy.linalg import toeplitz

In [10]: toeplitz(range(5))
Out[10]: 
array([[0, 1, 2, 3, 4],
       [1, 0, 1, 2, 3],
       [2, 1, 0, 1, 2],
       [3, 2, 1, 0, 1],
       [4, 3, 2, 1, 0]])

In [11]: order = toeplitz(range(len(ols_resid)))

Our error covariance structure is then rho**order, which defines an autocorrelation structure:

In [12]: sigma = rho**order
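
To see what this structure looks like, here is a small illustration with a hypothetical rho of 0.5 (the actual rho above is whatever was estimated from the residuals): each entry of the matrix is rho raised to the distance |i - j| between observations, so the assumed correlation decays geometrically.

import numpy as np
from scipy.linalg import toeplitz

# Hypothetical rho used only to illustrate the shape of sigma:
# entry (i, j) is toy_rho**abs(i - j), so correlation decays with distance.
toy_rho = 0.5
toy_sigma = toy_rho ** toeplitz(range(5))
print(toy_sigma)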

In [13]: gls_model = sm.GLS(data.endog, data.exog, sigma=sigma)

In [14]: gls_results = gls_model.fit()
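
For intuition, the GLS fit above is mathematically equivalent to the textbook estimator beta = (X' sigma^-1 X)^-1 X' sigma^-1 y; statsmodels actually solves an equivalent whitened least-squares problem rather than inverting sigma directly. A rough sketch of that formula, for illustration only:

import numpy as np

# Textbook GLS estimator, shown only for intuition; inverting sigma directly
# is numerically less stable than the whitening approach statsmodels uses.
X = np.asarray(data.exog)
y = np.asarray(data.endog)
sigma_inv = np.linalg.inv(sigma)
beta_by_hand = np.linalg.solve(X.T.dot(sigma_inv).dot(X), X.T.dot(sigma_inv).dot(y))
print(beta_by_hand)
print(gls_results.params)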

Of course, the exact rho in this instance is not known, so it might make more sense to use feasible GLS, which currently only has experimental support.

We can use the GLSAR model with one lag to get a similar result:

In [15]: glsar_model = sm.GLSAR(data.endog, data.exog, 1)

In [16]: glsar_results = glsar_model.iterative_fit(1)

Comparing the GLS and GLSAR results, we see that there are some small differences in the parameter estimates and the resulting standard errors of the parameter estimates. This might be due to numerical differences in the algorithms, e.g. the treatment of initial conditions, given the small number of observations in the Longley dataset.

In [17]: print(gls_results.params)
[-3797854.9015      -12.7656       -0.038        -2.1869       -1.1518
       -0.0681     1993.9529]

In [18]: print(glsar_results.params)
[-3467960.6325       34.5568       -0.0343       -1.9621       -1.002
       -0.0978     1823.1829]

In [19]: print(gls_results.bse)
[ 670688.6993      69.4308       0.0262       0.3824       0.1653
       0.1764     342.6346]

In [20]: print(glsar_results.bse)
[ 871584.0517      84.7337       0.0328       0.4805       0.2114
       0.2248     445.8287]
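
Since iterative_fit with a single iteration only updates rho once, one simple feasible-GLS style refinement is to let GLSAR iterate a few more times, re-estimating rho from the updated residuals at each step. A sketch (the maxiter value here is an arbitrary choice):

# Let GLSAR alternate between estimating rho and refitting the model several
# times instead of just once; maxiter=6 is arbitrary.
glsar_iter_results = sm.GLSAR(data.endog, data.exog, 1).iterative_fit(maxiter=6)
print(glsar_iter_results.params)
print(glsar_iter_results.bse)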
