view. The reflected light depends strongly on the light source. There are four major
lighting methods used in vision systems:
1. back lighting, which is well suited to edge and boundary detection applications,
2. camera-mounted lighting, which is uniformly directed onto the field of view and used
in surface inspection applications,
3. oblique lighting, which is used in surface gloss inspection applications,
4. co-axial lighting, which is used to inspect relatively small features, such as threads
in holes on small parts.
The image at each individual pixel is sampled by an analog to digital converter (ADC).
The smallest resolution the ADC can have is 1 bit; that is, the image at the pixel would be
considered either white or black. This is called a binary image. If the ADC has 2
bits per pixel, then the image at each pixel can be represented as one of 2^2 = 4 different
levels of gray or color. Similarly, an 8-bit sampling of the pixel signal results in 2^8 = 256 different
levels of gray (a gray scale image) or colors. As the sampling resolution of pixel
data increases, the gray scale or color resolution of the vision system increases. In gray scale
cameras, each pixel has one CCD element whose analog voltage output is proportional to
the gray scale level. In color sensors, each pixel has three different CCD elements for the
three main colors (red, green, blue). By combining the three main colors in different
ratios, different colors are obtained.
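As a rough illustration of this quantization, the sketch below maps an analog pixel voltage to an n-bit gray level; the 0-5 V sensor range and the function name are assumptions for illustration, not values from a specific camera.

    def quantize_pixel(voltage, v_max=5.0, bits=8):
        # Map an analog pixel voltage (0..v_max) to an n-bit gray level.
        # The 0-5 V range is an assumed example; real ADC ranges vary.
        levels = 2 ** bits                     # e.g., 2^8 = 256 gray levels
        code = round(voltage / v_max * (levels - 1))
        return max(0, min(levels - 1, code))   # clamp to the valid range

    # 1-bit sampling yields a binary (black/white) image:
    print(quantize_pixel(3.2, bits=1))   # -> 1 (white)
    # 8-bit sampling yields one of 256 gray levels:
    print(quantize_pixel(3.2, bits=8))   # -> 163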
Unlike a digital camera used to take pictures where the images are viewed later on,
the images acquired by a computer vision system must be processed at periodic intervals
in an automation environment. For instance, a robotic controller needs to know whether a
part has a defect before the part moves out of the robot's reach on a conveyor. The available
processing time is on the order of milliseconds, and even shorter in some applications,
such as visual servoing. Therefore, the amount of processing necessary to evaluate an
image should be minimized.
Let us consider the sequence of events involved in image acquisition and processing (a minimal sketch of the cycle follows the list).
1. A control signal initiates the exposure of the sensor head array (camera) for a period
of time called the exposure time. During this time, each element of the sensor array collects
the reflected light and generates an output voltage. This time depends on the available
external light and on camera settings such as the aperture.
2. Then the image in the sensor array is locked and converted to a digital signal (A to D
conversion).
3. The digital data is transferred from the sensor head to the signal processing computer.
4. Image processing software evaluates the data and extracts measurement information.
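A minimal sketch of this acquisition-and-processing cycle is given below; the camera object and its method names (start_exposure, read_frame) are hypothetical placeholders, since the actual interface depends on the camera and frame-grabber vendor.

    import time

    def vision_cycle(camera, process_image, exposure_time_s=0.010):
        # 1. Initiate exposure of the sensor array for the exposure time.
        camera.start_exposure(exposure_time_s)   # hypothetical API call
        time.sleep(exposure_time_s)
        # 2-3. Lock the image, A/D convert it, and transfer it to the computer.
        frame = camera.read_frame()              # hypothetical API call
        # 4. Evaluate the data and extract measurement information.
        return process_image(frame)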
Notice that as the number of pixels in the camera increases, the computational load, and
hence the processing time, increase, since the A/D conversion, data transfer, and processing
all grow with the number of pixels and with the resolution of each pixel (i.e., 4-bit,
8-bit, 12-bit, 16-bit). The typical frame update rate in commercial two-dimensional vision
systems is at least 30 frames/s, while line-scan cameras can easily reach frame update rates
around 1000 frames/s.
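To put these numbers in perspective, the short calculation below estimates the raw data rate for an assumed 640 x 480, 8-bit gray scale camera at 30 frames/s; the resolution is an example value, not a figure from the text.

    width, height = 640, 480        # assumed sensor resolution (example)
    bits_per_pixel = 8              # 8-bit gray scale, 256 levels
    frames_per_s = 30               # typical 2D vision system update rate

    bytes_per_frame = width * height * bits_per_pixel // 8
    data_rate = bytes_per_frame * frames_per_s
    print(f"{bytes_per_frame} bytes/frame, {data_rate / 1e6:.2f} MB/s")
    # -> 307200 bytes/frame, 9.22 MB/s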
The effectiveness of a vision system is largely determined by its software capabilities.
That is, what kind of information it can extract from the image, how reliably it can extract
it, and how fast. Standard image processing software functions include the following
capabilities.
1. Thresholding an image: once an image is acquired in digital form, a threshold value of
color or gray scale can be selected, and all pixel values below that value (white value)