Prediction optimization

The set of predictor coefficients may be fixed for all images (global prediction), may vary from image to image (local prediction), or may even vary within an image (adaptive prediction). If only pixels from the current scan line are used in forming the prediction, the predictor is referred to as one-dimensional (1-D). If pixels from previous lines are included, the predictor is two-dimensional (2-D). Quantitatively, 2-D prediction results in SNR improvements of around 3 dB as compared to 1-D prediction, but the subjective quality improvement is even more substantial than this number would suggest. This is mainly due to the elimination of the jaggedness around nonhorizontal edges. The disadvantage of 2-D prediction is that it requires the buffering of the previous line. The number of pixels employed in the prediction is called the order of the predictor. In general, a higher-order predictor outperforms a lower-order one, but studies performed on television images and radiographs have demonstrated that there is only a marginal gain beyond a third-order predictor.
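As a concrete illustration of the 1-D versus 2-D distinction, the sketch below compares the mean-squared prediction error of a first-order 1-D predictor (previous pixel on the same scan line) with that of a third-order 2-D predictor (left, upper-left, and upper neighbors A, B, C). The synthetic test image and all function names are illustrative assumptions, not taken from the text.

```python
import numpy as np

# Sketch: compare a 1-D predictor (left neighbor only) with a 2-D predictor
# (A - B + C) on a smooth synthetic image.  Illustrative assumptions only.

def mse_1d(img):
    """Mean-squared error of the 1-D predictor x_hat = A (left neighbor)."""
    err = img[1:, 1:] - img[1:, :-1]
    return float(np.mean(err ** 2))

def mse_2d(img):
    """Mean-squared error of the 2-D predictor x_hat = A - B + C."""
    a = img[1:, :-1]      # left neighbor (A)
    b = img[:-1, :-1]     # upper-left neighbor (B)
    c = img[:-1, 1:]      # upper neighbor (C)
    err = img[1:, 1:] - (a - b + c)
    return float(np.mean(err ** 2))

# Smooth synthetic test image (a Gaussian bump).
y, x = np.mgrid[0:64, 0:64]
img = 100.0 * np.exp(-((x - 32) ** 2 + (y - 32) ** 2) / 200.0)

e1, e2 = mse_1d(img), mse_2d(img)
print("1-D predictor MSE:", round(e1, 4))
print("2-D predictor MSE:", round(e2, 4))
print("SNR gain of 2-D over 1-D: %.1f dB" % (10 * np.log10(e1 / e2)))
```

On a smooth image the planar-extrapolation predictor A - B + C tracks both horizontal and vertical intensity changes, which is the source of the 2-D gain described above; the exact numbers here depend entirely on the chosen test image.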

We now address the problem of finding the optimum local (single-image) predictor coefficients. A widely used criterion is the minimization of the mean-squared prediction error. Under this criterion, the best linear estimate of $x_m$ from the previously encoded pixels $x_0, x_1, \ldots, x_{m-1}$, namely $\hat{x}_m = \sum_{i=0}^{m-1} \alpha_i x_i$, is the one whose coefficients minimize the expected value of the squared prediction error; i.e., they minimize

$$E\left\{ \left( x_m - \hat{x}_m \right)^2 \right\} = E\left\{ \left( x_m - \sum_{i=0}^{m-1} \alpha_i x_i \right)^2 \right\}. \qquad (9.3)$$

This is realized by making the prediction error orthogonal to all available data, and the m optimal coefficients can thus be found by solving the following set of linear equations:

$$E\left\{ \left( x_m - \sum_{j=0}^{m-1} \alpha_j x_j \right) x_i \right\} = 0, \qquad i = 0, 1, \ldots, m-1. \qquad (9.4)$$

Expanding this set of equations results in terms involving the image auto-correlation values. Assuming that the image is a 2-D stationary random field, the auto-correlation value $R_{k,l}$ is defined as

$$R_{k,l} = E\left\{ x(i,j)\, x(i+k,\, j+l) \right\}, \qquad (9.5)$$

where $x(i,j)$ is the pixel value at location $(i,j)$. The need to compute auto-correlation values for each image makes local prediction impractical for many real-time applications. Furthermore, the performance gain achieved by a local predictor over a global predictor (one that is fixed for all images) is typically only a few percent. Thus, global prediction is a more attractive choice for most applications.
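To make the local-prediction procedure concrete, the following sketch estimates the autocorrelation values of Eq. (9.5) by spatial averaging over a single mean-removed image and then solves the normal equations implied by Eq. (9.4) for a fourth-order causal neighborhood (A, B, C, D). The neighbor offsets, the synthetic test image, and the function names are illustrative assumptions.

```python
import numpy as np

# Sketch of local (per-image) predictor design via the orthogonality
# conditions of Eq. (9.4), with R_{k,l} of Eq. (9.5) estimated from the
# mean-removed image under a stationarity assumption.

def autocorr(img, k, l):
    """Estimate R_{k,l} = E{ x(i,j) x(i+k, j+l) } by spatial averaging."""
    h, w = img.shape
    a = img[max(0, -k):h - max(0, k), max(0, -l):w - max(0, l)]
    b = img[max(0, k):h - max(0, -k), max(0, l):w - max(0, -l)]
    return float(np.mean(a * b))

def optimal_coefficients(img, offsets):
    """Solve the normal equations for the given causal neighbor offsets.

    offsets are (row, col) displacements relative to the pixel being
    predicted, e.g. A=(0,-1), B=(-1,-1), C=(-1,0), D=(-1,1).
    """
    x = img.astype(float) - img.mean()          # remove the image mean
    n = len(offsets)
    R_mat = np.empty((n, n))
    r_vec = np.empty(n)
    for i, (ki, li) in enumerate(offsets):
        r_vec[i] = autocorr(x, -ki, -li)        # correlation with target pixel
        for j, (kj, lj) in enumerate(offsets):
            R_mat[i, j] = autocorr(x, kj - ki, lj - li)
    return np.linalg.solve(R_mat, r_vec)

# Example usage on a correlated synthetic image (illustrative only).
rng = np.random.default_rng(0)
img = rng.normal(size=(128, 128))
for _ in range(4):                              # crude smoothing => correlation
    img = (img + np.roll(img, 1, axis=0) + np.roll(img, 1, axis=1)) / 3.0

alphas = optimal_coefficients(img, [(0, -1), (-1, -1), (-1, 0), (-1, 1)])
print("alpha_A, alpha_B, alpha_C, alpha_D =", np.round(alphas, 3))
```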

 

The selection of a robust set of global predictor coefficients for typical imagery can be approached in a number of ways. One method is to assume a simple image model and then solve the corresponding set of equations given in Eq. (9.4). A Markov model with a separable auto-correlation function has been widely used for typical imagery:

$$R_{k,l} = \mu^2 + \sigma^2\, \rho_v^{|k|}\, \rho_h^{|l|}, \qquad (9.6)$$

where $\mu$ and $\sigma^2$ are the mean and the variance of the image, $k$ and $l$ denote the vertical and horizontal displacements, and $\rho_v$ and $\rho_h$ denote the vertical and horizontal correlation coefficients, respectively. For most imagery, $\rho_v$ and $\rho_h$ are typically greater than 0.9. As an example, consider a fourth-order predictor with predictor coefficients $\alpha_A$, $\alpha_B$, $\alpha_C$, and $\alpha_D$, corresponding to the neighboring pixels A, B, C, and D in Fig. 7.1. Also assume that the image mean has been subtracted from every pixel value, so that $\mu = 0$. Solving Eq. (9.4) with the autocorrelation model of Eq. (9.6) then yields

$$\alpha_A = \rho_h, \qquad \alpha_B = -\rho_v \rho_h, \qquad \alpha_C = \rho_v, \qquad \alpha_D = 0, \qquad (9.7)$$

and the resulting predictor is

$$\hat{x} = \rho_h A - \rho_v \rho_h B + \rho_v C. \qquad (9.8)$$

It is interesting to note that this optimal fourth-order predictor has only three nonzero coefficients rather than four. This is because pixel D contributes no additional information beyond that already provided by pixels A through C for the particular image model given in Eq. (9.6).
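This can be checked numerically. The sketch below builds the autocorrelation values of the separable model in Eq. (9.6) (with $\mu = 0$, $\sigma^2 = 1$, and illustrative values $\rho_v = 0.95$, $\rho_h = 0.93$), forms the normal equations of Eq. (9.4) for the neighbors A, B, C, and D, and recovers the coefficients of Eq. (9.7), including $\alpha_D = 0$.

```python
import numpy as np

# Numerical check of Eq. (9.7) under the separable model of Eq. (9.6)
# with mu = 0, sigma^2 = 1 and illustrative correlation coefficients.

rho_v, rho_h = 0.95, 0.93

def R(k, l):
    """Separable Markov autocorrelation, Eq. (9.6) with mu = 0, sigma^2 = 1."""
    return rho_v ** abs(k) * rho_h ** abs(l)

# Neighbor offsets relative to the predicted pixel: A, B, C, D.
offsets = [(0, -1), (-1, -1), (-1, 0), (-1, 1)]
R_mat = np.array([[R(kj - ki, lj - li) for (kj, lj) in offsets]
                  for (ki, li) in offsets])
r_vec = np.array([R(-ki, -li) for (ki, li) in offsets])
alphas = np.linalg.solve(R_mat, r_vec)

print(np.round(alphas, 4))                       # ~ [ 0.93  -0.8835  0.95  0. ]
print([rho_h, -rho_v * rho_h, rho_v, 0.0])       # closed form from Eq. (9.7)
```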

In computing the optimal predictor, the image mean was assumed to be zero. In general, the mean value of the image is not known a priori, and computing it results in a two-pass algorithm that is clearly undesirable in a practical implementation. The problem with encoding a nonzero-mean image is that the prediction estimate becomes biased. To see this, consider the expected value of the prediction error, i.e., the expected value of the differential image:

$$E\{x - \hat{x}\} = E\left\{ x - \sum_i \alpha_i x_i \right\} = \mu \left( 1 - \sum_i \alpha_i \right). \qquad (9.9)$$

When $\mu$ is not zero, the expected value of the differential image will vary from image to image. The obvious solution is to require that the prediction coefficients sum to one; i.e.,

$$\sum_i \alpha_i = 1, \qquad (9.10)$$

which forces the expected value of the differential image to zero regardless of the image mean. A potential disadvantage of having the coefficients sum to one is that any channel errors are propagated throughout the remainder of the reconstructed image; that is, the reconstruction filter is unstable. Usually, the sum of the coefficients is made slightly less than one (leaky prediction) to reduce the effects of channel errors; a short sketch of this error-decay behavior follows the examples below. Using Eq. (9.7) as a guide and adjusting the coefficients to approximately satisfy the condition in Eq. (9.10), one can obtain a robust set of predictor coefficients that provides satisfactory performance for a wide variety of images. The following are some examples of typical predictors:

$$\hat{x} = 0.97A \qquad \text{(1st-order, 1-D predictor)} \qquad (9.11)$$
$$\hat{x} = 0.50A + 0.50C \qquad \text{(2nd-order, 2-D predictor)} \qquad (9.12)$$
$$\hat{x} = 0.90A - 0.81B + 0.90C \qquad \text{(3rd-order, 2-D predictor)} \qquad (9.13)$$
$$\hat{x} = 0.75A - 0.50B + 0.75C \qquad \text{(3rd-order, 2-D predictor)} \qquad (9.14)$$
$$\hat{x} = A - B + C \qquad \text{(3rd-order, 2-D predictor)} \qquad (9.15)$$
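The following 1-D sketch illustrates the error-decay behavior of leaky prediction mentioned above. It assumes a first-order predictor as in Eq. (9.11), a constant test signal, and a single corrupted difference sample; with the leaky coefficient $\alpha = 0.97$ the channel error decays as $\alpha^k$, whereas with $\alpha = 1$ it persists through the remainder of the reconstruction.

```python
import numpy as np

# Sketch (illustrative assumptions): a first-order predictor in one
# dimension.  The decoder reconstructs x_hat[n] = alpha * x_hat[n-1] + e[n];
# a corrupted difference sample decays as alpha**k when alpha < 1, but
# persists indefinitely when alpha = 1.

def reconstruct(diff, alpha, x0=0.0):
    out = np.empty(len(diff))
    prev = x0
    for n, e in enumerate(diff):
        prev = alpha * prev + e        # decoder prediction loop
        out[n] = prev
    return out

signal = np.full(256, 100.0)
for alpha in (1.0, 0.97):
    # Encoder: difference signal for this predictor.
    diff = signal - alpha * np.concatenate(([0.0], signal[:-1]))
    diff[64] += 50.0                   # simulated channel error
    recon = reconstruct(diff, alpha)
    print(f"alpha={alpha}: residual error at n=128 is {recon[128] - 100:.3f}")
```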

 

 

Reference:

Digital Image Compression Techniques

Majid Rabbani and Paul W. Jones