Adaptive DPCM

A major limitation of the DPCM system considered so far is that both the predictor and the quantizer remain fixed over the entire image. DPCM schemes can be made adaptive in the predictor, the quantizer, or both. Adaptive prediction usually reduces the prediction error prior to quantization; for the same bit rate, the reduced dynamic range of the quantizer input signal results in less quantization error and better reconstructed image quality. Adaptive quantization, on the other hand, aims at reducing the quantization error directly by varying the decision and reconstruction levels according to the local image statistics. We now review several adaptive DPCM (ADPCM) prediction and quantization schemes to provide more insight into the implementation of these concepts.


Adaptive prediction

Nonadaptive predictors generally perform poorly at edges, where abrupt changes in pixel values occur. An adaptive scheme has been proposed to improve the prediction in such regions by switching among a set of predictors according to the most likely direction of the edge. In particular, the prediction is chosen to be one of the previous pixel values A, B, C, or D, as in Fig. 7.1. In determining the edge direction, the reconstructed values of the neighboring pixels are used so that the decision is causal. Improved perceived image quality and SNR gains of approximately 4 dB over a third-order fixed predictor were reported for this method.
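
As a concrete illustration, a minimal sketch of such a switched predictor is given below (in Python). The neighbor layout assumed here (A = left, B = upper-left, C = upper, D = upper-right) and the gradient-based switching rule are plausible assumptions made for the sketch; the exact layout of Fig. 7.1 and the rule of the original scheme may differ.

    def edge_adaptive_prediction(rec, i, j):
        # Neighbors are taken from the *reconstructed* image so that the
        # receiver can repeat the decision without any overhead bits.
        A = float(rec[i][j - 1])      # left         (assumed layout)
        B = float(rec[i - 1][j - 1])  # upper-left
        C = float(rec[i - 1][j])      # upper
        D = float(rec[i - 1][j + 1])  # upper-right
        # Directional gradients: the smallest indicates the direction in
        # which the image is locally smooth, i.e., along the edge.
        g_h = abs(C - B)   # change along the previous row
        g_v = abs(A - B)   # change down the previous column
        g_d = abs(C - D)   # change toward the upper-right
        if g_h <= g_v and g_h <= g_d:
            return A       # locally smooth horizontally: predict from left
        elif g_v <= g_d:
            return C       # locally smooth vertically: predict from above
        else:
            return D       # diagonal edge: predict from upper-right
        # (B itself could be selected with a second diagonal test,
        # omitted here for brevity.)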


In another causal technique, the prediction value of a conventional 2-D predictor is multiplied by an adaptive coefficient k to generate a new prediction. The value of the coefficient is based on the previous quantizer reconstruction level. For example, if the previous reconstruction level was the positive maximum, there is a high probability of slope overload, and thus k is chosen to be greater than one to accommodate the larger positive difference. With this relatively simple adaptive technique, the slope response of DPCM systems operating at 3 bits/pixel or less can be improved.
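
A sketch of the gain adaptation might look as follows; the numerical values of k and the symmetric treatment of the negative maximum are assumptions, since the text gives no numbers.

    def adapted_prediction(conventional_pred, prev_recon, max_recon):
        # Choose k from the quantizer reconstruction level used for the
        # previous pixel.  The values of k below are assumptions.
        if prev_recon >= max_recon:       # positive maximum: probable
            k = 1.25                      # positive slope overload, k > 1
        elif prev_recon <= -max_recon:    # negative maximum: probable
            k = 0.8                       # negative slope overload, k < 1
        else:
            k = 1.0                       # no overload suspected
        return k * conventional_pred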

Adaptive quantization

In the adaptive prediction technique just described, the prediction was scaled based on the previous reconstruction level. Using the same idea, an adaptive quantizer scheme can also be developed. In this approach, the quantizer levels for a given pixel are found by scaling the levels used for the previous pixel by some factor. This factor depends on the reconstruction level used for the previous pixel, so no overhead information is required. It was reported that proportionately faster step size increases were needed as compared to step size decreases. It was also found that with an increasing number of quantizer levels, the performance improvement of this approach over a fixed quantizer became less pronounced. The scheme is particularly effective with one-bit DPCM, also known as delta modulation (DM).
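
This rule is in the spirit of a one-word-memory (Jayant-type) adaptive quantizer. Below is a sketch for a mid-rise quantizer with 2^b levels (b >= 2); the multiplier table is an assumption, chosen so that the step size grows faster than it shrinks, in line with the reported behavior. (The one-bit DM case usually adapts on runs of equal output bits instead.)

    import numpy as np

    def adaptive_quantize(errors, b=2, step0=4.0, step_min=0.5, step_max=128.0):
        half = 2 ** b // 2
        # One multiplier per level magnitude: inner levels shrink the step
        # slowly, outer levels grow it quickly (values are assumptions).
        mult = np.linspace(0.9, 2.0, half)
        step = step0
        indices, recon = [], []
        for e in errors:
            # Mid-rise quantization: level index in [-half, half - 1].
            idx = int(np.clip(np.floor(e / step), -half, half - 1))
            indices.append(idx)
            recon.append((idx + 0.5) * step)
            # Rescale the step by a factor that depends only on the level
            # just used, so the receiver can track it without overhead.
            rank = idx if idx >= 0 else -idx - 1    # level magnitude
            step = float(np.clip(step * mult[rank], step_min, step_max))
        return indices, recon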


A more sophisticated approach to adaptive quantization is to make use of the visual masking effects of the HVS. It is well known that the luminance sensitivity of the HVS decreases in picture areas of high-contrast detail. In these areas, large quantization errors can be masked by the HVS. A procedure was outlined for designing quantizers with the minimum number of output levels, subject to the constraint that the largest magnitude of the quantization error resulting from an arbitrary input is less than the visibility threshold. Using these types of quantizers, adaptivity can be introduced by considering the degree of noise masking possible around the current pixel (based on surrounding image detail) and then switching among a number of quantizers. The detail or activity in a neighborhood around the current pixel can be defined in a number of different ways; for example, as the weighted average of several vertical and horizontal gradients. If a noncausal neighborhood is used in determining the activity, overhead information must be transmitted to inform the receiver of the quantizer selection.
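
One possible realization of the activity measure and the resulting quantizer switch is sketched below; the 3 x 3 window, the weights, and the threshold values are all illustrative assumptions.

    import numpy as np

    def activity(img, i, j, w_h=0.5, w_v=0.5):
        # Weighted average of horizontal and vertical gradient magnitudes
        # in a 3 x 3 window around (i, j).  The window is noncausal, so
        # the quantizer choice must be signaled to the receiver.
        win = np.asarray(img, dtype=float)[i - 1:i + 2, j - 1:j + 2]
        g_h = np.mean(np.abs(np.diff(win, axis=1)))   # horizontal detail
        g_v = np.mean(np.abs(np.diff(win, axis=0)))   # vertical detail
        return w_h * g_h + w_v * g_v

    def pick_quantizer(act, thresholds=(2.0, 8.0, 20.0)):
        # Higher activity -> coarser quantizer, since larger errors are
        # masked in busy areas.  Threshold values are assumptions.
        return int(np.searchsorted(thresholds, act))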


In another noncausal method, an estimate of the number of bits required to quantize the differential signal is made for each pixel. This estimate is based on the previously reconstructed differential values and can therefore be tracked by the receiver. For each pixel, one bit of overhead information is transmitted denoting the validity of the estimate. For example, a '0' implies that the estimated number of bits was sufficient for encoding the differential signal and is followed by the information needed to identify the selected quantizer level. A '1' indicates that more bits are required than estimated and is followed by a '0' for each skipped quantization level until it is terminated by a '1' at the desired quantization level. The overhead information, if left uncompressed, adds at least 1 bit/pixel to the overall bit rate of the system. Fortunately, the entropy of the overhead signal is small, and it can be efficiently compressed using adaptive arithmetic coding techniques.
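
Under one consistent reading of this rule (quantizer levels indexed by increasing magnitude, the estimate expressed in bits), the per-pixel bit pattern might be formed as follows; the exact bit layout of the original method may differ.

    def encode_pixel(level, est_bits):
        # '0' prefix: the estimate was sufficient; the level index is
        # sent using the estimated number of bits.
        if level < 2 ** est_bits:
            return "0" + format(level, "0{}b".format(est_bits))
        # '1' prefix: the estimate was too small; one '0' per skipped
        # level beyond the estimated range, terminated by a '1' at the
        # desired level (a unary escape code).
        return "1" + "0" * (level - 2 ** est_bits) + "1"

For example, with a 3-bit estimate, level 5 would be sent as '0' followed by '101', whereas level 10 would be escaped as '1', two '0's for the skipped levels, and a terminating '1'.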


A third scheme using noncausal adaptation is based on the observation that the distribution of the differential signal e_m is generally a function of the neighboring (past and future) pixel values. Nonadaptive quantizers assume that e_m has a Laplacian pdf with a variance equal to the global variance of the differential image. However, for a given set of neighboring values, the actual distribution of e_m may differ substantially from that assumption. For example, the variance of e_m in flat regions is much smaller than the global variance, whereas the variance in highly textured areas may be larger than the global value. Also, near contours or high-contrast edges, the distribution may not even be symmetric. As a result, instead of using a single quantizer, the system switches among a set of quantizers designed to accommodate the varying local statistics. In a practical system, to reduce overhead bits and computational complexity, the selection of a given quantizer can be made for a block of the image rather than for each individual pixel. The following steps summarize the action of the encoder (a code sketch follows the list):


1. Partition each scan line into blocks of k pixels.

2. Encode the block using each of the m available quantizers.

3. Measure the distortion resulting from each quantizer.

4. Select the quantizer with minimum distortion.

5. Transmit log2 m bits of overhead information per k-pixel block to identify the quantizer to the receiver.

6. Transmit the encoded signal for the block.
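
A minimal sketch of this block-switched encoder is given below. Each quantizer is modeled as a callable returning level indices and reconstruction values; that interface, and the choice of MSE as the distortion measure, are implementation assumptions. A complete DPCM encoder would also rerun the prediction feedback loop for each trial quantizer, which is omitted here for clarity.

    import math
    import numpy as np

    def encode_scan_line(diff_line, quantizers, k=10):
        m = len(quantizers)
        overhead_bits = math.ceil(math.log2(m))   # log2 m bits per block
        selections, encoded = [], []
        for start in range(0, len(diff_line), k):        # step 1
            block = np.asarray(diff_line[start:start + k], dtype=float)
            best_q, best_err, best_idx = 0, float("inf"), None
            for qi, quantize in enumerate(quantizers):
                idx, rec = quantize(block)                 # step 2
                err = float(np.mean((block - rec) ** 2))   # step 3: MSE
                if err < best_err:                         # step 4
                    best_q, best_err, best_idx = qi, err, idx
            selections.append(best_q)   # step 5: overhead_bits per block
            encoded.append(best_idx)    # step 6: the quantized block
        return selections, encoded, overhead_bits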


A block diagram of the encoder is shown in Fig. 9.4. It is evident from the above description that several parameters need to be selected in the design and implementation of the switched quantizer scheme, namely, the length of the image block, the number of quantizers, the structure of each individual quantizer, and the distortion measure.


A larger block size implies a smaller overhead penalty, but it also reduces the advantage gained from the adaptivity. In our example, we found that k = 10 was a good compromise.


In general, choosing the number of quantizers is a trade-off between improving the reconstructed image quality and keeping the overhead bits at an acceptable level. With a fixed-length code, the value of m is restricted to a power of two, but with entropy coders such as an arithmetic coder, m can have any value. In our example, we used four quantizers and employed fixed-length codes to encode the overhead information.


Ideally, for a given m, it is desirable to design the quantizers so that the overall quantization distortion is minimized. Due to the complexity of this problem, the quantizer design has usually been performed in an ad hoc manner. In some references, the quantizers are symmetric and are scaled versions of the Lloyd-Max quantizer for a Laplacian pdf with a variance equal to the global variance of the differential image. In other references, it was argued that nonsymmetric quantizers can encode edges more effectively; this is particularly true for small block sizes. Also, the number of output quantizer levels does not have to be fixed and can be allowed to vary among the quantizers. Varying the number of output levels generally results in superior image quality, but it also gives rise to a variable output rate, which may not be desirable in certain applications. Furthermore, in such a case, the selection of the optimum quantizer becomes a complicated task, as a certain quantizer may result in higher distortion but also a lower bit rate. In our example, we used symmetric quantizers with eight reconstruction levels, all scaled versions of the global Lloyd-Max quantizer.
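
The quantizer family can then be generated as scaled copies of a single symmetric base quantizer, as sketched below. The base reconstruction levels would come from the Lloyd-Max design for the global Laplacian model; the scale factors used here are illustrative assumptions. The inner quantize function also matches the callable interface assumed in the earlier block-encoder sketch.

    import numpy as np

    def make_quantizers(base_recon, scales=(0.5, 1.0, 2.0, 4.0)):
        # Each quantizer is a scaled copy of the same symmetric 8-level
        # base quantizer; the scale factors are assumptions.
        def make(levels):
            def quantize(block):
                block = np.asarray(block, dtype=float)
                # Map every sample to the nearest reconstruction level.
                idx = np.argmin(np.abs(block[:, None] - levels[None, :]),
                                axis=1)
                return idx, levels[idx]
            return quantize
        return [make(np.asarray(base_recon, dtype=float) * s) for s in scales]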


The distortion measure used in selecting the quantizer for each block should ideally be based on visual criteria. Implementing such a measure requires a good knowledge of the HVS and is computationally intensive. As a result, simpler distortion measures such as the MSE are commonly used. An alternative is the sum of absolute errors, which has an advantage in hardware implementations, as the absolute value operation requires less circuitry than the squaring operation. We have found that both distortion measures work well in practice for this application.
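
Both criteria drop directly into the distortion step of the block-encoder sketch; note that the sum of absolute errors replaces the squaring by an absolute value, which is precisely its hardware advantage.

    import numpy as np

    def mse(block, rec):
        # Mean squared error: one multiplication per sample.
        return float(np.mean((np.asarray(block) - rec) ** 2))

    def sae(block, rec):
        # Sum of absolute errors: avoids the multiplier entirely.
        return float(np.sum(np.abs(np.asarray(block) - rec)))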

Reference:

Majid Rabbani and Paul W. Jones, Digital Image Compression Techniques, SPIE Optical Engineering Press, 1991.