Images can be processed by optical, photographic, and electronic means, but image processing using digital computers is the most common method because digital methods are fast, flexible, and precise. In the future, electro-optical and some analog image-processing methods may become common. This article focuses on the use of digital computer methods.
In a typical digital image processing system, the source of the image is usually visible light reflected from or transmitted through various objects in a scene. Optics gather and focus this light onto a sensor, which produces an electronic signal in response to the received light. Images can also be formed using other sources of radiation, such as infrared or ultraviolet light, X-rays, radar, or sonar. Images can also be synthesized from spatial data by other means, including scanning and computed tomography.
The sensor signal is "digitized": converted to an array of numerical values, each value representing the light intensity of a small area of the scene. The digitized values are called picture elements, or "pixels," and are stored in computer memory as a digital image. Because the range and number of pixel values are limited, the digital image is only an approximation of the light intensity in the scene.
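The digitization step can be sketched in a few lines. This is a minimal illustration, not a real sensor interface: the 0.0-1.0 intensity range and the sample readings are invented for the example.

```python
# Sketch: quantizing continuous sensor intensities into 8-bit pixel values.
# The readings and the 0.0-1.0 intensity range are illustrative assumptions.

def quantize(intensity, levels=256):
    """Map a light intensity in [0.0, 1.0] to an integer pixel value."""
    value = int(intensity * (levels - 1) + 0.5)   # round to the nearest level
    return max(0, min(levels - 1, value))         # clamp out-of-range readings

readings = [0.0, 0.25, 0.5013, 0.75, 1.0]        # hypothetical sensor samples
pixels = [quantize(r) for r in readings]          # [0, 64, 128, 191, 255]
```

Note that 0.5013 and 0.5 quantize to the same pixel value: the rounding is exactly the approximation error the text describes.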
A computer processes the digital image to achieve the desired result. Often special-purpose image-processing computers are used to increase the speed of the processing operations. The sequence of processing operations is called an image-processing algorithm. The processed result could be displayed, recorded, used to control a manufacturing operation, used to provide measurements on the image, or sent over a communication channel to a remote location.
Some of the equipment used in image processing is also used in computer graphics and scientific visualization. Graphics and image processing are often combined in the preparation of printed material.
IMAGE ENHANCEMENT AND RESTORATION
Image enhancement improves the quality of images, often for human viewing. Removing blur and noise, increasing contrast, and revealing details are examples of enhancement operations. The original image might, for example, have areas of very low and very high intensity that mask details; an adaptive enhancement algorithm can reveal these details. Adaptive algorithms adjust their operation based on the image information (pixels) being processed. In this case the mean intensity, contrast, and sharpness could be changed based on the pixel-intensity statistics in various areas of the image.
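A contrast stretch is one of the simplest enhancement operations of this kind. The sketch below linearly remaps an image's actual intensity range onto the full 0-255 display range; the tiny "image" is an invented example.

```python
# Sketch of a contrast-stretch enhancement: linearly remap the image's
# actual intensity range onto the full 0-255 display range.
# The 2x2 low-contrast "image" below is made up for illustration.

def stretch_contrast(image):
    lo = min(min(row) for row in image)
    hi = max(max(row) for row in image)
    if hi == lo:                        # flat image: nothing to stretch
        return [row[:] for row in image]
    scale = 255 / (hi - lo)
    return [[round((p - lo) * scale) for p in row] for row in image]

dim = [[100, 110], [120, 150]]          # values crowded into a narrow band
bright = stretch_contrast(dim)          # [[0, 51], [102, 255]]
```

An adaptive version would apply this same remapping separately to local regions, using each region's own statistics rather than the global minimum and maximum.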
Another enhancement technique assigns colors to pixel intensities, and thus makes small intensity differences more obvious to the human eye. Color could be used, for example, to highlight details in an X-ray image. Image-enhancement operations are often used in image-processing algorithms, and are used in some digital television sets to improve the visual quality of the received picture.
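The intensity-to-color assignment can be as simple as a lookup over gray-level bands. The band edges and colors below are arbitrary choices for illustration, not a standard palette.

```python
# Sketch of pseudocolor enhancement: assign a color to each gray-level band
# so that small intensity differences become visible to the eye.
# Band edges and (R, G, B) colors are arbitrary illustrative choices.

def pseudocolor(pixel):
    if pixel < 64:
        return (0, 0, 255)       # darkest values -> blue
    if pixel < 128:
        return (0, 255, 0)       # low-mid values -> green
    if pixel < 192:
        return (255, 255, 0)     # high-mid values -> yellow
    return (255, 0, 0)           # brightest values -> red

row = [30, 100, 160, 220]        # a row of gray pixels
colored = [pseudocolor(p) for p in row]
```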
Image restoration improves image quality by using information beyond that contained in the digital image itself. This information might describe how the image of the scene was formed and what degradations (noise, defocusing, geometric distortions, and so on) occurred in forming or transmitting the image. Movement might blur an image, for example. Photographs of John F. Kennedy's assassination, for instance, were processed with sophisticated image-restoration and enhancement algorithms to try to determine the details of the crime. Images from spacecraft and satellites are restored and enhanced to reduce the effects of motion, optics, angle of view, noise, and other distortions.
IMAGE ANALYSIS AND RECOGNITION
Image analysis extracts quantitative information from an image. A high-contrast image of some electronic parts might be made, for example, with each part labeled with a unique color, so that the position of each part can be found by examining pixels of that color. The part positions might be used to guide a robot in picking up the parts. Other measurements include the area of each part, its outline shape, and its orientation. Images are also analyzed for statistical information, such as the distribution of pixel-intensity values. Image analysis often replaces or assists human vision in inspection and machine-vision tasks, where it can make precise and rapid measurements on images that are difficult for human vision.
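The part-location step can be sketched directly: scan the labeled image for pixels of one color and report their centroid. The image and the color labels are invented for the example.

```python
# Sketch of the part-location analysis described above: each part is assumed
# to carry a unique color label, so its position is found by scanning for
# pixels of that color. The image and labels are invented for illustration.

def find_part(image, color):
    """Return the centroid (row, col) of all pixels matching `color`."""
    hits = [(r, c) for r, row in enumerate(image)
                   for c, pix in enumerate(row) if pix == color]
    if not hits:
        return None
    return (sum(r for r, _ in hits) / len(hits),
            sum(c for _, c in hits) / len(hits))

RED, BLUE, BG = 1, 2, 0                 # hypothetical color labels
image = [[BG, RED, RED, BG],
         [BG, RED, RED, BLUE],
         [BG, BG,  BG,  BLUE]]
red_center = find_part(image, RED)      # (0.5, 1.5): guides the robot arm
```

Counting the hits instead of averaging them would give the part's area, another of the measurements mentioned above.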
Image-recognition algorithms attempt to find and identify parts or objects within an image automatically. One recognition method compares template images of the objects with every area of a sample image. If a template matches some area of the sample image, the image might contain the corresponding object. Unfortunately, the match is usually imperfect due to image noise, object variation, object rotation, changes in lighting, and other factors, so statistical methods are used to decide whether a match is valid. Typical recognition tasks are rarely this simple, and often require finding and recognizing objects in cluttered and degraded images.
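The template-comparison idea can be sketched with a summed-absolute-difference score. Both images are invented; a real matcher would use a statistical score (such as normalized correlation) to tolerate the noise, rotation, and lighting changes mentioned above.

```python
# Sketch of template matching: slide a small template over every position
# in a sample image and score each position by the summed absolute pixel
# difference. Both images are made up for illustration.

def best_match(image, template):
    th, tw = len(template), len(template[0])
    best = None
    for r in range(len(image) - th + 1):
        for c in range(len(image[0]) - tw + 1):
            score = sum(abs(image[r + i][c + j] - template[i][j])
                        for i in range(th) for j in range(tw))
            if best is None or score < best[0]:
                best = (score, r, c)
    return best                          # (difference score, row, col)

image = [[0, 0, 0, 0],
         [0, 9, 8, 0],
         [0, 7, 9, 0]]
template = [[9, 8],
            [7, 9]]
score, row, col = best_match(image, template)   # perfect match at (1, 1)
```

In practice a threshold on the score decides whether the best position is a valid match or just the least-bad mismatch.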
Often the recognition can be made more reliable by using "feature detectors" or "matched filters" to amplify or find specific image features that contain unique or important information about the objects. The resulting features, measurements, or images are examined for patterns that match the various objects. This examination might use pattern-analysis methods to reliably recognize the objects in the image.
IMAGE COMPRESSION
Image compression reduces the amount of information required to store or transmit a digital image. Compression is called "lossless" when the original digital image can be exactly reconstructed from the compressed image. It is called "lossy" when information is lost and the original image can be only approximately reconstructed. Because pixel values are often similar to (correlated with) adjacent pixel values, an image can be compressed by removing these correlations.
Compression is used when image storage is expensive or a large number of images must be stored, and when the image must be transmitted over a limited or expensive communication channel. For example, hospitals generate thousands of images (X-rays, CAT scans, sonograms, and so on), and the digital storage required for these images can be dramatically reduced by compression. In this case, lossless compression might be used to ensure that no clinically important details are lost from the image.
IMAGE EDITING
Image processing is used to edit images, perhaps for use in a magazine. Image editing uses many of the methods from image enhancement and restoration, such as removing image blur (or adding blur) and changing the location of pixels. For example, an element in an image might be "cut" out, reduced in size, and inserted ("pasted") into another image. The edges of the inserted image differ in intensity from the background and might be noticed. To remove this unwanted edge, the inserted image is smoothly blended (blurred) into the background by averaging pixel-intensity values across the edges.
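The blending step amounts to averaging pixel values across the seam. The sketch below averages two abutting rows of pixels; in practice the averaging weight would vary gradually over a band several pixels wide. The pixel rows are invented.

```python
# Sketch of the seam-blending step described above: average pixel values
# across the boundary between a pasted region and its background so the
# edge becomes less visible. The two abutting pixel rows are invented.

def blend_seam(background, pasted):
    """Cross-fade two equal-length pixel rows that meet at a seam."""
    return [round((b + p) / 2) for b, p in zip(background, pasted)]

background_edge = [200, 200, 200]       # bright background at the seam
pasted_edge = [100, 120, 140]           # darker pasted element at the seam
blended = blend_seam(background_edge, pasted_edge)   # [150, 160, 170]
```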
Color digital images are composed of three images, so each pixel might have red, green, and blue intensity values. Items in the image can be selected and their color modified by changing the balance of these values. Image editing and compression are used in document image processing, where documents such as text, photographs, and drawings are converted to digital images. Once digitized, these images are easy to edit, store, and transmit and can efficiently replace paper in many applications.
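Changing the balance of the red, green, and blue values can be sketched as an independent gain on each component. The pixel values and gain factors are invented for the example.

```python
# Sketch of a color-balance edit: scale the red, green, and blue values of
# a pixel independently. Pixel values and gain factors are invented.

def rebalance(pixel, r_gain, g_gain, b_gain):
    def clamp(v):
        return max(0, min(255, round(v)))   # keep results in the 8-bit range
    r, g, b = pixel
    return (clamp(r * r_gain), clamp(g * g_gain), clamp(b * b_gain))

# Boost red and cut blue to "warm" a neutral gray pixel.
warmer = rebalance((100, 100, 100), 1.2, 1.0, 0.8)   # (120, 100, 80)
```

Applied only to the pixels of a selected item, this is the color-modification operation the paragraph describes.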