Computer Vision (II)

Course Number: 526 U1200
Credits: 3
Time: Tuesday 6, 7, 8 (2:10PM-5:00PM)
Classroom: New CSIE Classroom 309
Classification: Elective for junior, senior, and graduate students
Prerequisite: Computer Vision (I) or Digital Image Processing
Instructor: Chiou-Shann Fuh
Office: New Computer Science and Information Engineering 327
Phone: 23625336 ext. 327, 23630231 ext. 3232 ext. 327
Office Hours: Thursday 3PM-5PM
Objective: To learn computer and robot vision through extensive
course projects.




Textbook: R. M. Haralick and L. G. Shapiro, Computer and Robot Vision,
Vol. II, Addison Wesley, Reading, MA, 1993.
Reference: B. K. P. Horn, Robot Vision, MIT Press, Cambridge, MA, 1986.
Reference: L. G. Shapiro and G. C. Stockman, Computer Vision,
Prentice-Hall, Upper Saddle River, NJ, 2001.
Reference: R. Jain, R. Kasturi, and B. G. Schunck, Machine Vision,
McGraw-Hill, New York, 1995.
Projects: assigned weekly in the first few weeks (20%) and a term project (30%)
Examinations: one midterm (20%) and one final (30%)




Content:
This is the second semester of a fast-paced course covering robot and computer vision. The first semester covers low-level vision, mostly without reference to three dimensions. This semester covers higher-level techniques:
12. Illumination
13. Perspective Projective Geometry
14. Analytic Photogrammetry
15. Motion and Surface Structure from Time Varying Image Sequences
16. Image Matching
17. The Consistent-Labeling Problem
18. Object Models and Matching
19. Knowledge-Based Vision
20. Accuracy
21. Glossary of Computer Vision Terms




Bibliography

Chapter 12 Illumination

12.1 Introduction
two key questions in understanding 3D image formation

  1. what determines where a point on the object will appear in the image?
    answer: geometric perspective projection model
  2. what determines how bright the image of a surface on the object will be?
    answer: radiometry, general illumination models, diffuse and specular reflection
photographs illustrating how shading dramatically affects perception of shape
shading: variation in brightness of a surface, --caption--
=====Nalwa, A Guided Tour of Computer Vision, Fig. 5.1=====




reflection of light bouncing off a surface patch: basic reflection phenomenon
=====Figure 12.1=====
image intensity $I$: proportional to scene radiance
scene radiance depends on

  1. the amount of light that falls on a surface
  2. the fraction of the incident light that is reflected
  3. the geometry of light reflection, i.e.
    viewing direction and illumination directions
image intensity:

\begin{displaymath}
I = g J^i f_r C S + b
\end{displaymath}

$J^i$: incident radiance
$f_r$: bidirectional reflectance function
$C$: lens collection
$S$: sensor responsivity
$g$: sensor gain
$b$: sensor offset
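a minimal numeric check of this linear sensor model (all values hypothetical, chosen only to illustrate the product):

  # Hypothetical values, for illustration only.
  J_i = 10.0    # incident radiance J^i
  f_r = 0.25    # bidirectional reflectance f_r (1/sr)
  C   = 0.01    # lens collection
  S   = 0.5     # sensor responsivity
  g   = 100.0   # sensor gain
  b   = 5.0     # sensor offset
  I   = g * J_i * f_r * C * S + b
  print(I)      # 6.25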
=====Kodak CCD sensor=====




The appearance of a surface depends greatly on the lighting conditions.
We have to understand how images are formed if we are to recover information about the surface from one or more images.
three views of the surface of Mars taken by Viking Lander I
=====Horn, Robot Vision, Figure 10.1=====
=====Garfield 17:18=====




12.2 Radiometry
Radiometry is the measurement of the flow and transfer of radiant energy in terms of both the power emitted from or incident upon an area and the power radiated within a small solid angle about a given direction.
irradiance: the amount of light falling on a surface
irradiance: power per unit area of radiant energy falling on a surface
irradiance: measured in units of watts per square meter
radiance: the amount of light emitted from a surface
radiance: power per unit foreshortened area emitted into a unit solid angle
radiance: measured in units of watts per square meter per steradian
radiant intensity of a point illumination source: power per steradian
radiant intensity: measured in units of watts per steradian
radiant intensity: may be a function of polar and azimuth angles




geometry of incident and reflected elementary beams
=====Figure 12.2=====
$z$-axis: along the normal to the surface element $dA$ at 0
polar angle: measured from the $z$-axis (pointing north)
azimuth angle: measured from $x$-axis (pointing east)




The solid angle subtended by a surface patch is defined by the cone whose vertex is at the point of radiation and whose axis is the line segment going from the point of radiation to the center of the surface patch.
size of solid angle: area intercepted by the cone on a unit radius sphere
centered at the point of radiation
solid angle: measured in steradians
total solid angle about a point in space: $4 \pi$ steradians
determination of the solid angle $\Omega$ subtended by a small surface patch
=====Figure 12.3=====

\begin{displaymath}
\Omega = \frac{A \cos \theta}{d^2}
\end{displaymath}

$A$: surface area
$d$: distance from surface area to point of radiation ($d^2 \gg A$)
$\theta$: angle the surface normal makes w.r.t. the cone axis
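worked example (numbers chosen for illustration): a patch of area $A = 10^{-4}\ {\rm m}^2$ at distance $d = 1$ m, tilted $\theta = 60^\circ$ from the cone axis, subtends

\begin{displaymath}
\Omega = \frac{A \cos \theta}{d^2}
= \frac{10^{-4} \cos 60^\circ}{1^2} = 5 \times 10^{-5} \ {\rm sr}
\end{displaymath}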




surface irradiance (${\rm w/m}^2$):

\begin{displaymath}
\frac{I_\circ A \cos \theta_\circ /d^2}{A}
= \frac{I_\circ \cos \theta_\circ}{d^2}
\end{displaymath}

$A$: area of surface patch
$I_\circ$ (w/sr): constant radiant intensity of point illumination source
law of inverse squares: irradiance varies inversely as the square of the distance
from the illuminated surface to the source
caption of Figure 12.4
=====Figure 12.4=====




infinitesimal slice on annulus on sphere of radius $r$, polar angle $\theta$, azimuth $\phi$
=====Horn, Robot Vision, Figure 10.8=====
slice subtends solid angle $d \Omega$; since $\cos 0^\circ = 1$, $d = r$, and
$A= (r \sin \theta \, d \phi) \cdot (r \, d \theta)$,

\begin{displaymath}
d \Omega = \sin \theta \, d \theta \, d \phi
\end{displaymath}
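consistency check: integrating $d \Omega$ over the hemisphere above the surface gives half the total solid angle about a point

\begin{displaymath}
\int_0^{2 \pi} \int_0^{\pi/2} \sin \theta \, d \theta \, d \phi = 2 \pi \ {\rm sr}
\end{displaymath}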




appearance of object depends greatly on attitude in space relative to viewer
not only does outline vary, but brightness pattern within silhouette changes
=====Horn, Robot Vision, Figure 10.2=====




12.2.1 Bidirectional Reflectance Function
The bidirectional reflectance distribution function $f_r$ is the fraction of incident light emitted in one direction when the surface is illuminated from another direction.
light source and sensor positions in spherical coordinates
=====Figure 12.2=====
$\theta$: polar angle between surface normal and lens center
$\phi$: azimuth angle of the sensor
$e$: subscript for the emitted (viewing) direction
$i$: subscript for the incident (illumination) direction
$J^i$: irradiance of the incident light at the illuminated surface
$J^r$: radiance of the reflected light
$f_r$: ratio of the scene radiance to the scene irradiance
differential reflectance model:

\begin{displaymath}
d J^r(\theta_e, \phi_e, \theta_i, \phi_i)= d J^i(\theta_i, \phi_i)
f_r(\theta_i, \phi_i, \theta_e, \phi_e)
\end{displaymath}

The differential emitted radiance (w/m$^2$-sr) in the direction $(\theta_e, \phi_e)$ due to the incident differential irradiance in the direction $(\theta_i, \phi_i)$ is equal to the incident differential irradiance $d J^i(\theta_i, \phi_i)$ (w/m$^2$) times the bidirectional reflectance distribution function $f_r(\theta_i, \phi_i, \theta_e, \phi_e)$ (1/sr).




For many surfaces the dependence of $f_r$ on the azimuth angles $\phi_i$ and $\phi_e$ is only a dependence on their difference.

\begin{displaymath}
f_r(\theta_i, \phi_i, \theta_e, \phi_e)=f_r(\theta_i, \theta_e; \phi_e-\phi_i)
\end{displaymath}

except surfaces with oriented microstructure,
e.g. the mineral tiger's eye and the iridescent feathers of some birds




An ideal Lambertian surface is one that appears equally bright from all viewing directions and reflects all incident light, absorbing none.
Lambertian surface: perfectly diffusing surface, with matte appearance
reflectivity $r$: unitless fraction called reflectance factor
white blotting paper: $r=0.8$
white writing paper: $r=0.68$
white ceilings or yellow paper: $r=0.6$
dark brown paper: $r=0.13$
dark velvet: $r=0.004$
bidirectional reflectance distribution function for Lambertian surface

\begin{displaymath}
f_r(\theta_i, \phi_i, \theta_e, \phi_e) = \frac{r}{\pi}
\end{displaymath}
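the factor $1/\pi$ makes the energy balance: integrating the emitted radiance over the hemisphere, with the $\cos \theta_e$ foreshortening, returns exactly the fraction $r$ of the incident light

\begin{displaymath}
\int_0^{2 \pi} \int_0^{\pi/2} \frac{r}{\pi} \cos \theta_e \sin \theta_e
\, d \theta_e \, d \phi_e = \frac{r}{\pi} \cdot \pi = r
\end{displaymath}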

shading as a cue to surface shape, --caption--
=====Nalwa, A Guided Tour of Computer Vision, Fig. 5.2=====
boundary shape as a factor in 3D interpretation by humans, --caption--
=====Nalwa, A Guided Tour of Computer Vision, Fig. 5.3=====
Lambertian sphere illuminated in direction -[1 0.5 -1], --caption--
=====Nalwa, A Guided Tour of Computer Vision, Fig. 5.6=====
reflectance map: plot of brightness as function of surface orientation
reflectance map of a Lambertian surface (direction [1 0.5 -1])
=====Nalwa, A Guided Tour of Computer Vision, Fig. 5.7=====
reflectance map of a Lambertian surface (direction -[0 0 -1])
=====Nalwa, A Guided Tour of Computer Vision, Fig. 5.8=====
differential relationship for emitted radiance for Lambertian surface

\begin{displaymath}
d J^r(\theta_e, \phi_e) = \frac{r d J^i}{\pi} \ \ \ {\rm w/m}^2 {\rm -sr}
\end{displaymath}

Lambertian surface: consistent brightness no matter what viewing direction
power radiated into a fixed solid angle: same in any direction
=====Example 12.1=====
=====joke=====




12.2.2 Photometry
photometry: study of radiant light energy resulting in physical sensation
brightness: attribute of sensation by which an observer is aware of differences
in observed radiant energy
radiometry $\leftrightarrow$ radiant energy, photometry $\leftrightarrow$ luminous energy
radiometry $\leftrightarrow$ power, photometry $\leftrightarrow$ luminous flux
lumen: unit of luminous flux
luminous intensity: luminous flux leaving point source per unit solid angle
luminous intensity: has units of lumens per steradian
candela: one lumen per steradian
illuminance: luminous flux per unit area incident upon a surface
illuminance: in units of lumens per square meter
one lux: one lumen per square meter
foot-candle: one lumen per square foot
one foot = 0.3048 meter

\begin{displaymath}
1 {\rm foot-candle} = \frac{1}{(0.3048)^2} {\rm lux} = 10.76 {\rm lux}
\end{displaymath}

luminance: luminous flux per unit solid angle per unit of projected area
luminance: in units of lumens per square meter per steradian




12.2.3 Torrance-Sparrow Model
$J_s^r$: specular reflection from mirrorlike surface facets
$J_d^r$: diffuse reflection from Lambertian surface facets
$J^r$: reflected light from roughened surface
$J_s^r$: dependent on the viewpoint, whereas $J_d^r$ is not
consider surfaces: $f_r(\theta_i, \phi_i, \theta_e, \phi_e) =
f_r(\theta_i, \theta_e; \phi_e - \phi_i)$, let $\phi = \phi_e-\phi_i$
light source and sensor positions specified by pair of angles --caption--
=====Figure 12.5=====
$\vec{N}$: unit surface normal
$\vec{L}$: unit positional vector of the light source
$\vec{V}$: unit positional vector of the sensor

\begin{displaymath}
\cos \theta_i = \vec{N} \cdot \vec{L}
\end{displaymath}


\begin{displaymath}
\cos \theta_e = \vec{N} \cdot \vec{V}
\end{displaymath}




Torrance-Sparrow model:

\begin{displaymath}
J^r(\theta_i; \theta_e,\phi;\lambda)=s J_s^r(\theta_i;\theta_e,\phi;\lambda)+
(1-s)J_d^r(\theta_i; \lambda)
\end{displaymath}

$s$ ($0 \leq s \leq 1$): proportion of specular reflection depending on surface
$\lambda$: wavelength of light
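a minimal Python sketch of this mixing equation; the Lambertian term stands in for $J_d^r$, and toy_lobe is a hypothetical placeholder for the actual Torrance-Sparrow specular term (which models mirrorlike microfacets and is not reproduced here):

  import numpy as np

  def toy_lobe(N, L, V, sharpness=20.0):
      # Hypothetical specular stand-in: mirror direction of L about N,
      # raised to a power for a narrow lobe. NOT the Torrance-Sparrow term.
      R = 2.0 * np.dot(N, L) * N - L
      return max(np.dot(R, V), 0.0) ** sharpness

  def reflected_radiance(N, L, V, s, r, specular=toy_lobe):
      # J^r = s J_s^r + (1 - s) J_d^r, with unit vectors N, L, V.
      J_d = (r / np.pi) * max(np.dot(N, L), 0.0)   # Lambertian diffuse part
      J_s = specular(N, L, V)                      # placeholder specular part
      return s * J_s + (1.0 - s) * J_d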




12.2.4 Lens Collection
lens collection: portion of reflected light coming through lens to film
$f$: distance between the image plane and the lens
$r_1$: distance between the object and the lens
$r_2$: distance between the lens and the image of the object
$a$: diameter of the lens
$\alpha$: angle between the ray from the object patch to the lens center and the optical axis
=====Figure 12.6=====




irradiance incident on differential area $d a_2$ coming from differential area $d a_1$, having radiance $d J^i$, and passing through a lens having aperture area $A = \pi a^2 / 4$:
$A \cos \alpha$: foreshortened area of aperture stop seen by $d a_1$
$r_1=s_1/\cos \alpha$: distance from $d a_1$ to the aperture, where $s_1$ is the axial distance from $d a_1$ to the lens
solid angle $\Omega$ subtended by aperture stop as seen from $d a_1$:

\begin{displaymath}
\Omega = \frac{A \cos \alpha}{r_1^2}=\frac{A \cos^3 \alpha}{s_1^2}
\end{displaymath}

differential radiant power $d \Phi$ passing through aperture due to $d a_1$

\begin{displaymath}
d \Phi = d J^i \Omega da_1 \cos \alpha
\end{displaymath}

radiant power passing through aperture from $d a_1$

\begin{displaymath}
d \Phi = \frac{d J^i A \cos^4 \alpha d a_1}{s_1^2}
\end{displaymath}

irradiance incident to $d a_2$: (radiant power reaching $d a_2$ is $d \Phi$)

\begin{displaymath}
d J^r = \frac{d \Phi}{d a_2} = \frac{d J^i A \cos^4 \alpha d a_1}{s_1^2 d a_2}
\end{displaymath}

assume $s_1 \gg s_2$; then $s_2 \approx f$, and the lens magnification is $s_1/s_2$
hence $d a_1/d a_2=(s_1/s_2)^2=s_1^2/f^2$; therefore

\begin{displaymath}
d J^r = \frac{d J^i A \cos^4 \alpha}{s_1^2}\frac{s_1^2}{f^2}
= \frac{d J^i A \cos^4 \alpha}{f^2}
\end{displaymath}

since $A = \pi a^2 / 4$

\begin{displaymath}
d J^r = \frac{\pi d J^i \cos^4 \alpha}{4}\left(\frac{a}{f}\right)^2
\end{displaymath}

then the lens collection $C$ is given by

\begin{displaymath}
C = \frac{dJ^r}{dJ^i}=\frac{\pi}{4}\left(\frac{a}{f}\right)^2 \cos^4 \alpha
\end{displaymath}
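a one-line Python sketch of the collection factor, assuming $a$ and $f$ in the same units and $\alpha$ in radians:

  import math

  def lens_collection(a, f, alpha):
      # C = (pi/4) (a/f)^2 cos^4(alpha)
      return (math.pi / 4.0) * (a / f) ** 2 * math.cos(alpha) ** 4

  # e.g. an f/2 lens (a/f = 0.5) on the optical axis: C = pi/16, about 0.196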




12.2.5 Image Intensity
The image intensity gray level $I$ associated with some small area of the image plane can then be represented as the integral of all light collected at the given pixel position coming from the observed surface patch, modified by sensor gain $g$ and bias $b$.

\begin{displaymath}
I(\theta, \phi)=g \int_\lambda C S(\lambda) J^r(\theta, \phi; \lambda)\Omega^2
r^2 d \lambda +b
\end{displaymath}

$\lambda$: light wavelength
$S(\lambda)$: sensor responsivity to light at wavelength $\lambda$
$J^r(\theta, \phi; \lambda)$: radiance of observed surface patch (watts/m$^2$-sr-m)
$\Omega$: solid angle subtended by the viewing cone of camera for the pixel
$r$: distance to the observed patch
$J^r(\theta, \phi; \lambda)\,\Omega\,(\Omega r^2)$: power received at the pixel position (radiance $\times$ solid angle $\times$ observed patch area $\Omega r^2$)




12.3 Photometric Stereo
In photometric stereo there is one camera but $K$ light sources having known intensities $i_1,...,i_K$ and incident vectors $v_1,...,v_K$ to a given surface patch.
In photometric stereo the camera sees the surface patch $K$ times, once with each light source activated and the others deactivated.
$n$: surface normal vector of the surface patch having Lambertian reflectance
$r$: reflectivity of the Lambertian surface reflectance
$f_1,...,f_K$: observed gray levels produced
by the model of Lambertian reflectance

\begin{displaymath}
f_k = g r i_k v_k \cdot n + b, \ \ \ k = 1,...,K
\end{displaymath}

$g$: sensor gain
$b$: sensor offset




if camera has been photometrically calibrated, $g, b$ known
let $f_k^*=\frac{f_k - b}{g i_k}$ and

\begin{displaymath}
f^*= \left (
\begin{array}{c}
f_1^* \\
\vdots \\
f_K^*
\end{array}\right ), \ \ \
V = \left (
\begin{array}{c}
v_1' \\
\vdots \\
v_K'
\end{array}\right )
\end{displaymath}

in matrix form

\begin{displaymath}
f^* = rVn
\end{displaymath}

if surface normal $n$ known, then least-squares solution for reflectivity $r$:

\begin{displaymath}
r = \frac{{f^*}' Vn}{(Vn)'(Vn)}
\end{displaymath}

if $K=3$, a solution for unit surface normal $n$:

\begin{displaymath}
n = \frac{V^{-1}f^*}{\vert\vert V^{-1}f^*\vert\vert}
\end{displaymath}

if $K>3$, a least-squares solution:

\begin{displaymath}
n = \frac{(V'V)^{-1} V'f^*}{\vert\vert(V'V)^{-1} V'f^*\vert\vert}
\end{displaymath}
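a minimal numpy sketch of these formulas for a photometrically calibrated camera ($g$, $b$ known); it recovers $r$ as the norm of the least-squares solution of $f^* = rVn$, which is equivalent since $n$ has unit length (for $K=3$ with invertible $V$ this reduces to the exact solution above):

  import numpy as np

  def photometric_stereo(f, g, b, i, V):
      # f: (K,) gray levels, i: (K,) source intensities,
      # V: (K,3) rows are unit incident directions v_k'.
      f_star = (f - b) / (g * i)                       # f_k^* = (f_k - b)/(g i_k)
      rn, *_ = np.linalg.lstsq(V, f_star, rcond=None)  # solve f^* = r V n
      r = np.linalg.norm(rn)                           # reflectivity
      n = rn / r                                       # unit surface normal
      return n, r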




if $g, b$ unknown, camera must be calibrated as follows:
geometric setup with known incident angle of light source to surface normal
surfaces of known reflectivities illuminated by known intensity light source
$i_k$: known intensity of light source for $k$th trial
$v_k$: known incident direction of light source for $k$th trial
$n$: known unit length surface normal vector
$r_k$: known reflectivity of surface illuminated for $k$th trial
$y_k$: observed value from the camera
let $x_k = i_k r_k v_k \cdot n $ then unknown gain $g$ and offset $b$ satisfy

\begin{displaymath}
\left (
\begin{array}{cc}
x_1 & 1 \\
x_2 & 1 \\
\vdots & \vdots \\
x_K & 1
\end{array} \right )
\left (
\begin{array}{c}
g \\
b
\end{array} \right )
= \left (
\begin{array}{c}
y_1 \\
y_2 \\
\vdots \\
y_K
\end{array} \right )
\end{displaymath}

this leads to the least-squares solution for $(g,b)$

\begin{displaymath}
\left (
\begin{array}{c}
g \\
b
\end{array}\right ) = \left (
\begin{array}{cc}
\sum_{k=1}^K x_k^2 & \sum_{k=1}^K x_k \\
\sum_{k=1}^K x_k & K
\end{array}\right )^{-1}
\left (
\begin{array}{c}
\sum_{k=1}^K x_k y_k \\
\sum_{k=1}^K y_k
\end{array}\right )
\end{displaymath}
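the same calibration in numpy form, letting least squares solve the overdetermined system directly:

  import numpy as np

  def calibrate(x, y):
      # Fit y_k = g x_k + b, where x_k = i_k r_k v_k . n is known.
      A = np.column_stack([x, np.ones_like(x)])       # rows (x_k, 1)
      (g, b), *_ = np.linalg.lstsq(A, y, rcond=None)
      return g, b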

=====Oldie 33:29=====




12.4 Shape from Shading
nonplanar Lambertian surfaces of constant reflectance factor: appear shaded
this shading: secondary clue to shape of the observed surface
shape from shading: recovers shape of Lambertian surface from image shading
$(a,b,c)$: unit vector of distant point light source direction
assume surface viewed by distant camera so perspective projection
approximated by orthographic projection
surface point position $(x,y,z)$: projected to image position $(x,y)$
$z=g(x,y)$: surface expression
unit vector normal to the surface at $(x,y)$:

\begin{displaymath}
\frac{1}{\sqrt{\left(\frac{\partial g}{\partial x}\right)^2
+ \left(\frac{\partial g}{\partial y}\right)^2 + 1}}
\left (
\begin{array}{c}
\frac{\partial g}{\partial x} \\
\frac{\partial g}{\partial y} \\
-1
\end{array}\right )
\end{displaymath}

gray level at $(x,y)$, within a multiplicative constant:

\begin{displaymath}
I(x,y) = \frac{a p(x, y) + b q(x, y) -c}{\sqrt{p^2(x,y)+q^2(x,y)+1}}
\end{displaymath}

where $p = (\partial g/\partial x) $ and $q = (\partial g/\partial y)$
$R(p,q)$: reflectance map

\begin{displaymath}
R(p,q) = \frac{a p + b q -c}{\sqrt{p^2+q^2+1}}
\end{displaymath}
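a short Python sketch tabulating $R(p,q)$ on a gradient-space grid for a unit light direction $(a,b,c)$; clipping negative values (self-shadowed orientations) to zero is an added convention, not part of the formula above:

  import numpy as np

  def reflectance_map(a, b, c, lim=2.0, n=101):
      # R(p,q) = (a p + b q - c) / sqrt(p^2 + q^2 + 1) on an n x n grid.
      p, q = np.meshgrid(np.linspace(-lim, lim, n), np.linspace(-lim, lim, n))
      R = (a * p + b * q - c) / np.sqrt(p ** 2 + q ** 2 + 1.0)
      return np.clip(R, 0.0, None)    # shadowed orientations set to 0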

$\lambda$: penalty constant
relaxation method: minimizing original error and a smoothness term
criterion function to be minimized by choice of $p,q$

\begin{displaymath}
\epsilon^2 = \sum_r \sum_c \{I(r,c)-R[p(r,c),q(r,c)]\}^2
+ \lambda \{[p(r+1,c)-p(r,c)]^2 + [p(r,c+1)-p(r,c)]^2
+ [q(r+1,c)-q(r,c)]^2 + [q(r,c+1)-q(r,c)]^2\}
\end{displaymath}

two orthographic shaded views of the same surface, --caption--
=====Horn, Robot Vision, Fig. 10.19=====
a block diagram of Dent de Morcles region in southwestern Switzerland
=====Horn, Robot Vision, Fig. 10.18=====
shape-from-shading method applied to recovery of nose shape
=====Horn, Robot Vision, Fig. 11.7=====




uniform brightness for planar surfaces, since $p(x,y), q(x,y)$ are constant
surfaces with curvature: surfaces with $p(x,y), q(x,y)$ not constant
brightness thus carries information about $p(x,y), q(x,y)$
$p(x,y), q(x,y)$ provide information about surface height $g(x,y)$
first-order Taylor expansion for $g$:

\begin{displaymath}
g(x+1,y)=g(x,y)+\frac{\partial g}{\partial x} \Rightarrow
g(x,y) = g(x+1, y) -p(x,y)
\end{displaymath}


\begin{displaymath}
g(x,y+1)=g(x,y)+\frac{\partial g}{\partial y} \Rightarrow
g(x,y)=g(x,y+1)-q(x,y)
\end{displaymath}

with boundary conditions on $g(x,y)$, we can solve for the unknown surface height and
partial derivatives $p(x,y), q(x,y)$
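a naive Python sketch of this integration, taking $g = 0$ at the origin and ignoring the integrability (curl) errors that practical methods must handle; array row index is $y$ and column index is $x$:

  import numpy as np

  def integrate_height(p, q):
      # g(x+1,y) = g(x,y) + p(x,y); g(x,y+1) = g(x,y) + q(x,y); g(0,0) = 0.
      rows, cols = p.shape
      g = np.zeros((rows, cols))
      g[1:, 0] = np.cumsum(q[:-1, 0])                      # down the first column
      g[:, 1:] = g[:, :1] + np.cumsum(p[:, :-1], axis=1)   # then along each row
      return g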




12.4.1 Shape from Focus
possible to recover shape from the defocus (blur) profile of object edges
basic idea: cameras do not have infinite depth of field
The degree to which edges may be defocused is related to how far the 3D edge is away from the depths at which the edges are sharply in focus.




12.5 Polarization
illumination source characterized by four factors

  1. directionality: relative to surface normal in bidirectional reflectance
  2. intensity: energy coming out from source
  3. spectral distribution: function of wavelength $\lambda$
  4. polarization: time varying vibration of light energy in certain direction
linearly polarized: changes direction by $180^\circ$ every half period
circularly polarized: phase angle difference of $90^\circ$, thus $\cos \omega t+i \sin \omega t$
elliptically polarized: phase angle difference of $90^\circ$ and different amplitudes, thus $a\cos \omega t+i b \sin \omega t$




Mathematical Meaning of Polarization
polarization of light mathematically described by using wave theory




Usefulness of Polarization in Machine Vision
At Brewster's angle, the parallel polarized light is totally transmitted and the perpendicularly polarized light is partially transmitted and partially reflected.
This effect can be used to remove the specular reflections from the window or metal surfaces by looking through them at Brewster's angle.




12.5.1 Representation of Light Using the Coherency Matrix
natural light: completely unpolarized




12.5.2 Representation of Light Intensity
S-pol: component polarized perpendicularly to the incidence plane
P-pol: component polarized parallel to the incidence plane




12.6 Fresnel Equation
Fresnel law of reflectance, --caption--
=====Figure 12.11=====




12.7 Reflection of Polarized Light
ergodic light: time average of the light equivalent to its ensemble average




12.8 A New Bidirectional Reflectance Function
bright-field: rays reflected back into objective illuminate entire surface
dark-field: reflects back into objective only rays from surface irregularities
=====Elliott, Integrated Circuit Fabrication Technology, Fig. 5.4=====




12.9 Image Intensity
image intensity can be written in terms of:




12.10 Related Work
reflectance models: have been used in computer graphics and image analysis
=====joke=====




Project due Mar. 12:
Use correlation to do image matching
find $(dx,dy)$ to minimize

\begin{displaymath}
\sum_{(x,y) \in R} \vert PIX(ima,x,y)-PIX(imb,x+dx, y+dy)\vert
\end{displaymath}
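a brute-force Python sketch of this search (loading of the .im images is omitted; ima and imb are assumed to be 2D numpy grayscale arrays, and region and max_shift are illustrative parameters):

  import numpy as np

  def match_offset(ima, imb, region, max_shift=16):
      # Find (dx, dy) minimizing sum |ima(x,y) - imb(x+dx, y+dy)| over region.
      x0, y0, x1, y1 = region                       # window in ima
      patch = ima[y0:y1, x0:x1].astype(np.int64)
      best, best_sad = None, np.inf
      for dy in range(-max_shift, max_shift + 1):
          for dx in range(-max_shift, max_shift + 1):
              if y0 + dy < 0 or x0 + dx < 0:
                  continue                          # shifted window off image
              cand = imb[y0 + dy:y1 + dy, x0 + dx:x1 + dx]
              if cand.shape != patch.shape:
                  continue
              sad = np.abs(patch - cand.astype(np.int64)).sum()
              if sad < best_sad:
                  best, best_sad = (dx, dy), sad
      return best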

=====trucka.im=====
=====truckb.im=====
=====toy.trancd.block.ps=====
pageview toy.trancd.block.ps
Adobe Systems Inc., Postscript Language Tutorial and Cookbook,
Addison Wesley, Reading, MA, 1985.
=====lpr1l toy.trancd.block.ps=====




2002-02-26