Chapter 9 Texture

9.1 Introduction
formal approach or precise definition of texture does not exist
texture discrimination techniques are for the most part ad hoc
principal characteristic of texture: repetition of basic pattern
texture: nonlocal property, characteristic of region larger than its size
=====/usr/local/vision/images/t_bark.im=====
=====/usr/local/vision/images/t_clouds.im=====
=====/usr/local/vision/images/t_fur_r.im=====
=====/usr/local/vision/images/t_pebbles.im=====
=====/usr/local/vision/images/t_straw.im=====
=====/usr/local/vision/images/t_water_r.im=====
visual cliff by William Vandivert, 1960
=====V. S. Nalwa, A Guided Tour of Computer Vision, Fig. 6.1=====
(a) water (b) beach pebbles (c) raffia weave (d) brick wall
=====V. S. Nalwa, A Guided Tour of Computer Vision, Fig. 6.2=====
texture: repeating patterns of local variations in image intensity which are
too fine to be distinguished as separate objects
at the observed resolution
for humans, texture is the abstraction of certain statistical homogeneities from a portion of the visual field that contains a quantity of information grossly in excess of the observer's perceptual capacity




texture analysis issues

  1. pattern recognition:
    given textured region, determine the class the region belongs to
  2. generative model:
    given textured region, determine a description or model for it
  3. texture segmentation:
    given image with many textured areas, determine boundaries
(a) multispectral image (b) printout of machine classification results
=====Gonzalez, Digital Image Processing, Fig. 9.13=====




texture: fineness, coarseness, contrast, directionality, roughness, regularity
statistical texture-feature approaches use




Given a generative model and the values of its parameters, one can synthesize homogeneous image texture samples associated with the model and the given value of its parameters.
image texture analysis: verification and estimation
verification: verify given image texture sample consistent with model
estimation: estimate values of model parameters based on observed sample
examples of model-based techniques




texel: texture element: basic textural unit of some textural primitives
fineness, coarseness, contrast, etc.: qualitatively evaluated image texture properties

For any textural surface there exists a scale at which, when the surface is examined, it appears smooth and textureless (e.g., viewed from a very large distance).
Then, as resolution increases, the surface appears first as a fine texture and then as a coarse one; for multiple-scale textural surfaces the cycle of smooth, fine, and coarse may repeat.
texture: cannot be analyzed without frame of reference on scale, resolution
texture: scale-dependent phenomenon
=====Fig. 9.1=====




to characterize texture

=====Oldie 33:20=====




9.2 Gray Level Co-Occurrence
first-order gray-level statistics: statistics of single pixels,
e.g. histogram, mean, median, variance
second-order gray-level statistics: the combined statistics of the gray levels
of pairs of pixels in which each two pixels in a pair have a
fixed relative position, e.g. co-occurrence
gray level spatial dependence: characterize texture by co-occurrence
The gray level co-occurrence can be specified in a matrix of relative frequencies $P_{ij}$ with which two neighboring pixels separated by distance $d$ occur on the image, one with gray level $i$ and the other with gray level $j$.
co-occurrence matrix: symmetric, function of angle and distance between pixels
=====Fig. 9.2=====




probability of horizontal, $d$ pixels apart:

\begin{displaymath}
P(i,j,d, 0^\circ)=\char93 \{[(k,l),(m,n)]\vert k-m=0,\vert l-n\vert=d, I(k,l)=i, I(m,n)=j\}
\end{displaymath}

probability of $45^\circ$, $d$ pixels apart:

\begin{displaymath}
P(i,j,d, 45^\circ)=\char93 \{[(k,l),(m,n)]\vert[(k-m=d,l-n=-d) {\rm\ or \ } (k-m=-d,
l-n=d)], I(k,l)=i, I(m,n)=j\}
\end{displaymath}

probability of vertical, $d$ pixels apart:

\begin{displaymath}
P(i,j,d, 90^\circ)=\char93 \{[(k,l),(m,n)]\vert\vert k-m\vert=d,l-n=0, I(k,l)=i, I(m,n)=j\}
\end{displaymath}

probability of $135^\circ$, $d$ pixels apart:

\begin{displaymath}
P(i,j,d,135^\circ)=\char93 \{[(k,l),(m,n)]\vert[(k-m=d,l-n=d) {\rm\ or \ } (k-m=-d,
l-n=-d)], I(k,l)=i, I(m,n)=j\}
\end{displaymath}

matrix symmetric: $P(i,j,d,a)=P(j,i,d,a)$
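The horizontal ($0^\circ$) case above can be sketched in a few lines of Python. This is a minimal illustration, not a full implementation: the tiny two-level test image and the distance $d=1$ are assumptions; counting each pair in both directions makes the matrix symmetric, as noted above.

```python
# Gray level co-occurrence matrix for the horizontal (0 degree) case.
# Counting each pair in both directions yields a symmetric matrix.

def glcm_0deg(image, levels, d=1):
    """Count pairs of pixels d apart on the same row, |l - n| = d."""
    P = [[0] * levels for _ in range(levels)]
    for row in image:
        for l in range(len(row) - d):
            i, j = row[l], row[l + d]
            P[i][j] += 1   # pair (k, l), (k, l + d)
            P[j][i] += 1   # the reversed pair
    return P

img = [[0, 0, 1],
       [0, 1, 1],
       [1, 1, 1]]
P = glcm_0deg(img, levels=2, d=1)
```

Normalizing `P` by the total number of counted pairs turns the frequencies into the relative frequencies $P_{ij}$ used in the feature formulas.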
=====Fig. 9.3=====
entropy highest: when all entries in $P_{ij}$ are equal
entropy highest: image where no preferred gray-level pairs exist
features calculated from the co-occurrence matrix
=====Fig. 9.4=====
gray-level co-occurrence matrix: widely used in remote sensing applications
for land-use classification




9.2.1 Generalized Gray Level Spatial Dependence Models for Texture
simple generalization: consider more than two pixels at a time




9.3 Strong Texture Measures and Generalized Co-Occurrence
generalized co-occurrence: consider co-occurrence between texture primitives
texture primitive: connected set of pixels characterized by attribute list
simplest texture primitive: pixel with gray level attribute
more complicated primitive: connected set of pixels homogeneous in level
primitive: characterized by size, elongation, orientation, average intensity




9.3.1 Spatial Relationships
primitives constructed: list of primitives, center coordinates, attributes
generalized co-occurrence matrix

\begin{displaymath}
P(t_1, t_2) = \frac{\char93 \{(q_1,q_2)\in S \vert f(q_1)=t_1 {\rm\ and \ } f(q_2)=t_2\}}{\char93 S}
\end{displaymath}

$S \subseteq Q \times Q$: binary relation satisfying spatial relationship
$Q$: set of all primitives on the image
$T$: set of primitive properties
$t_1, t_2$: properties which primitives have
$f$: function assigning to each primitive in $Q$ a property of $T$




9.4 Autocorrelation Function and Texture
texture: relates to spatial size of gray level primitives on image
gray level primitives of larger size: coarser texture
gray level primitives of smaller size: finer texture
autocorrelation function: describes the size of gray level primitives
autocorrelation function

\begin{displaymath}
p(x,y)=\frac{\frac{1}{(L_x-\vert x\vert)(L_y-\vert y\vert)}\int_0^{L_x-\vert x\vert}\int_0^{L_y-\vert y\vert} I(u,v)\,I(u+x,v+y)\,du\,dv}
{\frac{1}{L_xL_y}\int_0^{L_x}\int_0^{L_y} I^2(u,v)\,du\,dv} \ \ \ \ \
\vert x\vert<L_x, \vert y\vert < L_y
\end{displaymath}

autocorrelation function $p[k,l]$ for $N\times N$ image:

\begin{displaymath}
p[k,l]=\frac{\frac{1}{(N-k)(N-l)}\sum_{i=1}^{N-k}\sum_{j=1}^{N-l} f[i,j]\,f[i+k,j+l]}
{\frac{1}{N^2}\sum_{i=1}^N \sum_{j=1}^N f^2[i,j]} \ \ \ \ \
0\leq k,l \leq N-1
\end{displaymath}

gray level primitives on image relatively large: texture coarse:
autocorrelation drops off slowly with distance
gray level primitives on image relatively small: texture fine:
autocorrelation drops off quickly with distance
autocorrelation function: measure of periodicity and scale of texture
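The discrete form of $p[k,l]$ translates directly into code. A minimal pure-Python sketch (zero-based indices instead of the one-based indices in the formula; the small test image is an assumption):

```python
# Normalized autocorrelation p[k, l] for an N x N image.
# At zero shift the numerator equals the denominator, so p[0, 0] = 1.

def autocorrelation(f, k, l):
    N = len(f)
    num = sum(f[i][j] * f[i + k][j + l]
              for i in range(N - k) for j in range(N - l))
    num /= (N - k) * (N - l)
    den = sum(f[i][j] ** 2 for i in range(N) for j in range(N)) / (N * N)
    return num / den

f = [[1, 2],
     [3, 4]]
p00 = autocorrelation(f, 0, 0)   # 1.0 at zero shift
p10 = autocorrelation(f, 1, 0)   # < 1: correlation drops with distance
```

How quickly `p[k, l]` falls off as `k`, `l` grow is exactly the coarseness cue described above.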




9.5 Digital Transform Methods and Texture
In the digital transform method of texture analysis, the digital image is typically divided into a set of nonoverlapping small square subimages
Fourier transform: uses the complex sinusoid basic set




9.6 Texture Energy
In the textural energy approach, the image is first convolved with a variety of kernels. Then each convolved image is processed with a nonlinear operator to determine the total textural energy in each pixel's neighborhood.
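The two-step scheme (convolve, then aggregate with a nonlinear operator) can be sketched in the spirit of Laws' masks. Everything concrete here is an assumption for illustration: the single $3\times 3$ mask (outer product of $L3=[1,2,1]$ and $E3=[-1,0,1]$), the sum-of-absolute-values energy operator, the $3\times 3$ neighborhood, and the step-edge test image.

```python
# Textural energy sketch: one Laws-style 3x3 kernel, then sum of
# absolute responses over each pixel's 3x3 neighborhood (borders left 0).

def conv3(img, k):
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            out[r][c] = sum(img[r + i][c + j] * k[i + 1][j + 1]
                            for i in (-1, 0, 1) for j in (-1, 0, 1))
    return out

def energy(resp):
    h, w = len(resp), len(resp[0])
    out = [[0.0] * w for _ in range(h)]
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            out[r][c] = sum(abs(resp[r + i][c + j])
                            for i in (-1, 0, 1) for j in (-1, 0, 1))
    return out

L3, E3 = [1, 2, 1], [-1, 0, 1]
L3E3 = [[a * b for b in E3] for a in L3]  # smooth vertically, edge horizontally
img = [[0, 0, 1, 1]] * 4                  # a vertical step edge
e = energy(conv3(img, L3E3))
```

A full texture-energy feature set would repeat this for a bank of kernels and use the per-kernel energies as the feature vector at each pixel.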




9.7 Textural Edgeness
autocorrelation, digital transforms: reference texture to spatial frequency
texture: in terms of edgeness per unit area
small neighborhoods: to detect microedges
large neighborhoods: to detect macroedges
=====Fig. 9.6=====
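Edgeness per unit area can be sketched as the fraction of pixels in a neighborhood whose gradient magnitude exceeds a threshold; varying the neighborhood radius moves between the micro- and macroedge cases above. The forward-difference gradient, the threshold, and the test image are assumptions.

```python
# Edgeness per unit area: fraction of neighborhood pixels whose
# forward-difference gradient magnitude exceeds a threshold.

def edgeness(img, r0, c0, radius, thresh):
    count, area = 0, 0
    for r in range(r0 - radius, r0 + radius + 1):
        for c in range(c0 - radius, c0 + radius + 1):
            if 0 <= r < len(img) - 1 and 0 <= c < len(img[0]) - 1:
                gr = img[r + 1][c] - img[r][c]   # vertical difference
                gc = img[r][c + 1] - img[r][c]   # horizontal difference
                area += 1
                if gr * gr + gc * gc > thresh * thresh:
                    count += 1
    return count / area

img = [[0, 0, 5, 5]] * 4            # one vertical step edge
micro = edgeness(img, 1, 1, radius=1, thresh=2)
```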




9.8 Vector Dispersion
In the vector dispersion technique, the image texture is divided into mutually exclusive neighborhoods, and a sloped plane fit to the gray levels is performed for each neighborhood.
=====Fig. 9.7=====




9.9 Relative Extrema Density
a texture measure: number of extrema per unit area
in any row of pixels, a pixel $i$ with gray level $g(i)$ is a relative minimum if

\begin{displaymath}
g(i) \leq g(i+1) {\rm\ and \ } g(i) \leq g(i-1)
\end{displaymath}

pixel $i$ is a relative maximum if

\begin{displaymath}
g(i) \geq g(i+1) {\rm\ and \ } g(i) \geq g(i-1)
\end{displaymath}

constant gray level run of pixels: minimum and maximum simultaneously
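Counting extrema by the definitions above is straightforward; a sketch in pure Python (the two test rows are assumptions, chosen so a busy texture scores high and a monotone one scores zero):

```python
# Relative extrema per unit area, scanning each row independently.
# With the <= / >= definitions, a pixel in a constant run counts as
# both a minimum and a maximum; here such a pixel is counted once.

def row_extrema(g):
    count = 0
    for i in range(1, len(g) - 1):
        is_max = g[i] >= g[i + 1] and g[i] >= g[i - 1]
        is_min = g[i] <= g[i + 1] and g[i] <= g[i - 1]
        if is_max or is_min:
            count += 1
    return count

def extrema_density(img):
    total = sum(row_extrema(row) for row in img)
    return total / (len(img) * len(img[0]))

busy   = [[0, 9, 0, 9, 0]] * 2   # fine, busy texture: many extrema
smooth = [[0, 1, 2, 3, 4]] * 2   # strictly monotone rows: no interior extrema
```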




9.10 Mathematical Morphology
granularity of the binary image $F$

\begin{displaymath}
G(d)= 1-\frac{\char93 F \mbox{\Large$\circ$}H_d}{\char93 F} = \frac{\char93 F - \char93 F \mbox{\Large$\circ$}H_d}{\char93 F}
\end{displaymath}

$\char93 F$: number of elements in $F$
$H_d$: disk structuring element of diameter $d$
$G(d)$ measures the properties of grain pixels that cannot be contained in some translated structuring element of size $d$ that is entirely contained in the grain and contains the given pixel
$G(d)$ measures the proportion of pixels participating in grains of a size smaller than $d$




scale-$k$ volume of the blanket around a gray level intensity surface $I$

\begin{displaymath}
V(k) = \sum_{(r,c)} (I\oplus_k H)(r,c) - (I \ominus_k H)(r,c)
\end{displaymath}

$\oplus_k$: $k$-fold dilation with the structuring element
$\ominus_k$: $k$-fold erosion with the structuring element




9.11 Autoregression Models
autoregression models for texture synthesis:
next gray level value $a_{N+1}$: linear combination of synthesized and noise values

\begin{displaymath}
a_{N+1}=\sum_{k=0}^{K-1} \alpha_k a_{N-k} + \sum_{l=0}^{L-1} \beta_l b_{N-l}
\end{displaymath}

$a_k$: given starting sequence
$b_k$: randomly generated noise image
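The synthesis equation above can be run directly. A minimal sketch: the coefficients, the seed sequence, and the deterministic all-zero "noise" (used so the AR part is isolated and the output is predictable) are all assumptions.

```python
# One-dimensional autoregressive texture synthesis: each new value is a
# linear combination of the K previous outputs and L previous noise samples.

def ar_synthesize(a, b, alpha, beta, n):
    """Extend sequence a by n values using coefficients alpha (on past
    synthesized values) and beta (on past noise samples b)."""
    a = list(a)
    K, L = len(alpha), len(beta)
    for N in range(len(a) - 1, len(a) - 1 + n):
        nxt = sum(alpha[k] * a[N - k] for k in range(K)) \
            + sum(beta[l] * b[N - l] for l in range(L))
        a.append(nxt)                 # this is a_{N+1}
    return a

seed  = [1.0, 1.0]
noise = [0.0] * 20                    # zero noise isolates the AR part
out = ar_synthesize(seed, noise, alpha=[0.5, 0.5], beta=[1.0], n=3)
```

With genuine random noise in `b` the same loop produces the textured sequences of Fig. 9.8.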
=====Fig. 9.8=====
two-dimensional autoregressive model: better performance on general textures

\begin{displaymath}
a(i,j)=\sum_{(k,l)\in N(i,j)} \alpha(i-k,l-j)a(k,l)
+\sum_{(k,l)\in N(i,j)} \beta(i-k,l-j) b(k,l)
\end{displaymath}

=====Fig. 9.9=====
The power of the autoregression linear estimator approach is that it is easy to use the estimator in a mode that synthesizes textures from any initially given linear estimator.
In this sense the autoregressive approach is sufficient to capture everything about a texture.
Its weakness is that the textures it can characterize are likely to consist mostly of microtextures.




9.12 Discrete Markov Random Fields
The Markov random field model for texture assumes that the texture field is stochastic and stationary and satisfies a conditional independence assumption.
when the distributions are Gaussian, the discrete Gauss-Markov field can be written as an equation in which each pixel's value is a linear combination of the values in its neighborhood plus a correlated noise term

\begin{displaymath}
I(r,c)=\sum_{(i,j) \in N(0,0)} I(r-i, c-j)h(i,j)+u(r,c)
\end{displaymath}

function $h(i,j)$: coefficients of the linear combination
model parameters $h(i,j)$: computed from the texture image with a least-squares method
$u(r,c)$: jointly distributed, possibly correlated Gaussian random variables
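For the simplest possible causal neighborhood, the least-squares estimate has a closed scalar form, which makes the general idea concrete. Everything here is an assumption for illustration: the one-neighbor model $I(r,c)\approx h\,I(r,c-1)+u(r,c)$ and the noise-free test image.

```python
# Least-squares estimate of a single Gauss-Markov coefficient h for the
# neighborhood {(0,1)} (left neighbor only): minimize
# sum (I(r,c) - h * I(r,c-1))^2, giving the scalar normal equation below.

def estimate_h(img):
    num = sum(img[r][c] * img[r][c - 1]
              for r in range(len(img)) for c in range(1, len(img[0])))
    den = sum(img[r][c - 1] ** 2
              for r in range(len(img)) for c in range(1, len(img[0])))
    return num / den

img = [[1, 2, 4, 8],
       [1, 2, 4, 8]]   # each pixel is exactly twice its left neighbor
h = estimate_h(img)
```

With a larger neighborhood the same least-squares idea yields a linear system (the normal equations) in the unknown coefficients $h(i,j)$.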




9.13 Random Mosaic Models
random mosaic models: constructed in two steps.
first step: provides a means of tessellating a plane into cells
second step: assigns a property value to each cell




9.14 Structural Approaches to Texture Models
pure structural models: primitives in regular repetitive spatial arrangements
to describe texture: describe primitives and placement rules
Escher-inspired pattern with two of several possible candidate texels
=====V. S. Nalwa, A Guided Tour of Computer Vision, Fig. 6.5=====
(a) simple texture formed by repeated placement of disks (b) corrupted
=====R. Jain, Machine Vision, Fig. 7.3=====




9.15 Texture Segmentation
The constraint is that each region has a homogeneous texture, such as that arising from a frontal view, and that each pair of adjacent regions is differently textured




9.16 Synthetic Texture Image Generation
fractals: shapes that exhibit recursive self-similarity
Every fractal can be recursively subdivided into smaller nonoverlapping shapes, each of which is a scaled-down version of the whole, either in a deterministic sense, or in a statistical sense
=====V. S. Nalwa, A Guided Tour of Computer Vision, Fig. 6.4=====




9.17 Shape from Texture
image texture gradients on oblique photography can be used to estimate surface orientation of the observed three-dimensional object
assumption: no depth change and no texture changes within the observed area
assumption: no subtextures
results of integrated approach to identification of image texels
=====V. S. Nalwa, A Guided Tour of Computer Vision, Fig. 6.9=====
image textural variation as cue to three-dimensional shape
=====V. S. Nalwa, A Guided Tour of Computer Vision, Fig. 6.6=====
shape of golf ball recovered from the projective distortion of circles
=====V. S. Nalwa, A Guided Tour of Computer Vision, Fig. 6.11=====




unknown plane on which the textural surface is observed:

\begin{displaymath}
Ax+By+Cz+D=0
\end{displaymath}

where $A^2+B^2+C^2=1$
from perspective projection, 3D point $(x,y,z)$ with projection $(u,v)$

\begin{displaymath}
u = f \frac{x}{z} \ \ \ \ \ \ v=f \frac{y}{z}
\end{displaymath}

$f$: distance from the image plane to the center of projection

\begin{displaymath}
x = \frac{uz}{f} \ \ \ \ \ \ y = \frac{vz}{f}
\end{displaymath}

solving for $z$

\begin{displaymath}
z = \frac{-Df}{Au+Bv+Cf}
\end{displaymath}
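The expression for $z$ follows by substituting $x = uz/f$ and $y = vz/f$ into the plane equation and collecting terms:

```latex
\begin{displaymath}
A\frac{uz}{f}+B\frac{vz}{f}+Cz+D=0
\;\Longrightarrow\;
\frac{z}{f}\,(Au+Bv+Cf)=-D
\;\Longrightarrow\;
z=\frac{-Df}{Au+Bv+Cf}
\end{displaymath}
```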

to continue the derivation: use similar-triangle geometry
=====Fig. 9.11=====




3D implications of discontinuities in image characteristics of spatial texture
=====V. S. Nalwa, A Guided Tour of Computer Vision, Fig. 6.14=====




9.18 Summary
texture: in terms of primitives and spatial relationships
qualitatively: shape from texture can work
quantitatively: techniques generally not dependable



2001-09-19