Asymptotic Behavior for Partial Autocorrelation Functions of Fractional ARIMA Processes
1. Introduction. Let {X_n : n ∈ Z} be a real, zero-mean, weakly stationary
process, which we shall simply call a stationary process. We write γ(·) for the
autocovariance function of {X_n}:

    γ(n) := E[X_n X_0],   n ∈ Z.
The partial autocorrelation α(k) of {X_n} is the correlation coefficient between
X_0 and X_k after eliminating the linear regression on X_1, ..., X_{k-1} [see (4.2)
for the precise definition]. One can calculate the value of α(k) easily, at least
numerically, from the values of γ(0), γ(1), ..., γ(k) via, for example, the
Durbin-Levinson algorithm [cf. Brockwell and Davis (1991), Sections 3.4 and 5.2].
The partial autocorrelation function α(·) thus obtained is a real sequence of
modulus less than or equal to 1 which is free from restrictions such as nonnegative
definiteness [see Ramsey (1974)], unlike the autocovariance function. By virtue of
their flexibility, partial autocorrelation functions play a significant role in
time series analysis.
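As an illustration of the computation mentioned above, the Durbin-Levinson recursion can be sketched as follows. This is a generic textbook implementation, not code from the paper; the function name and the AR(1) test values are our own choices.

```python
def durbin_levinson(g):
    """Partial autocorrelations alpha(1), ..., alpha(k) from
    autocovariances g = [gamma(0), gamma(1), ..., gamma(k)]."""
    k = len(g) - 1
    phi = [0.0] * (k + 1)   # phi[j] holds phi_{n,j} for the current order n
    alpha = []               # alpha(n) = phi_{n,n}
    v = g[0]                 # one-step prediction error variance v_{n-1}
    for n in range(1, k + 1):
        a = (g[n] - sum(phi[j] * g[n - j] for j in range(1, n))) / v
        prev = phi[:]
        for j in range(1, n):
            phi[j] = prev[j] - a * prev[n - j]   # phi_{n,j} update
        phi[n] = a
        v *= 1.0 - a * a
        alpha.append(a)
    return alpha

# AR(1) with coefficient 0.5: gamma(k) proportional to 0.5**k, so
# alpha(1) = 0.5 and alpha(k) = 0 for k >= 2.
print(durbin_levinson([1.0, 0.5, 0.25, 0.125]))  # → [0.5, 0.0, 0.0]
```

The recursion needs only γ(0), ..., γ(k) and O(k²) arithmetic operations, which is what makes α(k) easy to evaluate numerically.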
The definition of α(k) says that it is a kind of "pure" correlation coefficient
between X_0 and X_k. Thus we think that the partial autocorrelation function α(·)
closely reflects the dependence structure of {X_n}. However, in what concrete
sense does it do so? More specifically, what does α(n) look like for n large,
especially when {X_n} is a long-memory process [cf. Brockwell and Davis (1991),
Section 13.2]? We dealt with this specific problem in Inoue (2000) and showed that,
under appropriate conditions, there exists a simple asymptotic formula for α(·).
However, the main results of Inoue (2000) do not cover an important class of
long-memory processes, namely the fractional ARIMA (autoregressive integrated
moving-average) model. This model was independently introduced by Granger
and Joyeux (1980) and Hosking (1981) and has been widely used as a parametric
model describing long-memory processes. The purpose of this paper is to extend
the asymptotic formula to fractional ARIMA processes.
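For the simplest fractional ARIMA model, FARIMA(0, d, 0) with |d| < 1/2, both the autocovariance function and the partial autocorrelation function are known in closed form: Hosking (1981) gives γ(k) = σ² Γ(1−2d) Γ(k+d) / (Γ(d) Γ(1−d) Γ(k+1−d)) and, exactly, α(k) = d/(k−d). The sketch below (our own code, assuming these classical formulas) checks the exact formula numerically by feeding the autocovariances through the Durbin-Levinson recursion.

```python
from math import gamma

def farima_autocov(d, kmax, sigma2=1.0):
    """Autocovariances gamma(0), ..., gamma(kmax) of FARIMA(0, d, 0),
    |d| < 1/2 (Hosking, 1981)."""
    c = sigma2 * gamma(1 - 2 * d) / (gamma(d) * gamma(1 - d))
    return [c * gamma(k + d) / gamma(k + 1 - d) for k in range(kmax + 1)]

def pacf(g):
    """Durbin-Levinson: alpha(1), ..., alpha(k) from gamma(0), ..., gamma(k)."""
    k = len(g) - 1
    phi, alpha, v = [0.0] * (k + 1), [], g[0]
    for n in range(1, k + 1):
        a = (g[n] - sum(phi[j] * g[n - j] for j in range(1, n))) / v
        prev = phi[:]
        for j in range(1, n):
            phi[j] = prev[j] - a * prev[n - j]
        phi[n] = a
        v *= 1.0 - a * a
        alpha.append(a)
    return alpha

d = 0.3
al = pacf(farima_autocov(d, 20))
# Hosking (1981): alpha(k) = d / (k - d) exactly for FARIMA(0, d, 0),
# e.g. alpha(1) = 0.3 / 0.7; the numerical values should agree.
exact = [d / (k - d) for k in range(1, 21)]
```

In particular α(n) ≈ d/n for large n in this special case, which illustrates the kind of slow, power-law decay of the partial autocorrelations that the general asymptotic formula addresses.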