Automatic Analysis of Fuel Spray Images

Amira M. Badreldin
Instrumentation Department, General Motors Research Laboratories, Warren, Michigan 48090-9055, U.S.A.

Analysis of fuel spray droplets is being conducted at General Motors Research Laboratories to provide real-time information on in-focus droplet sizes and numerical density. This paper introduces a fast and efficient technique for automatic analysis of fuel spray images. The preprocessing stage consists of a global thresholding of the log-edge of the image. The thresholded image is then used as a reference to detect objects in the gray level image, and the recognition of in- and out-of-focus droplets is achieved through a 3-level tree classifier.

Keywords: Thresholding, Framing, Projections, Syntactic analysis, Tree classifier.


Dr. Badreldin was born in Alexandria, Egypt, in 1952. She received her B.Sc. degree in Electrical and Computer Engineering from the University of Alexandria, Alexandria, Egypt, in 1975; her M.Sc. degree in Systems Design from the University of Waterloo, Waterloo, Ontario, Canada, in 1980; and her Ph.D. degree in Electrical Engineering from the University of Windsor, Windsor, Ontario, Canada, in 1985. During her studies, she held the NSERC (Natural Sciences and Engineering Research Council of Canada) Scholarship, the OGS (Ontario Graduate Scholarship), a University of Windsor scholarship, and Alexandria University Distinction awards. During the summer of 1985, she was a research associate in the Department of Electrical Engineering, University of Windsor. In October 1985, she joined the Department of Instrumentation, General Motors Research Laboratories, Warren, Michigan, where she is currently a senior research engineer. Her major research interests are knowledge-based image and scene analysis, character recognition, pattern recognition, and information processing. She is the author or co-author of over 30 technical reports and papers published in refereed journals and conference proceedings. She is a member of the American Association for Artificial Intelligence (AAAI), the Pattern Recognition Society, Sigma Xi, and the Knowledge Engineering Coordinating Committee (KECC) at GM.

Computers in Industry 9 (1987) 107-113
North-Holland

1. Introduction

Fuel spray combustion and vaporization have been of great interest to General Motors for many years. Fuel spray images are generated using pulsed laser light (10 ns pulses) to freeze droplet motion in the spray sample volume under study. The images are then stored on a magnetic videodisc recorder for later analysis to provide information on "in-focus" droplet sizes and numerical density [1-4]. Meaningful results can only be achieved by analyzing a large number of images. Manual data analysis requires a large amount of human labor and introduces errors due to variations in the standards for out-of-focus rejection. Automatic data analysis allows a larger number of images to be tested, provides consistency in evaluating image quality, and reduces manpower requirements significantly.

Preprocessing, feature extraction, and classification are the three major steps in automatic image analysis. A number of techniques have been developed to analyze fuel spray images [5]. Morphological operations and Gaussian and differential filtering are used to enhance the quality of the original images. Segmentation is achieved through histogram analysis. Features computed from the radial intensity profile and radial standard deviation are used to classify objects into in- and out-of-focus classes. Similar work is also reported by others [6-9].

This research introduces a new technique for automatic analysis of fuel spray images. Each recorded image is digitized to 8-bit gray level resolution and stored as a matrix of 512 × 512 pixels in the Vicom Digital Image Processor System. The preprocessing stage consists of a global thresholding of the log-edge of the image. The thresholded image is then used as a reference to detect objects in the gray level image, and the recognition of in- and out-of-focus droplets is achieved through a 3-level tree classifier. The following sections discuss in detail the different stages in the automatic analysis of fuel spray images.

2. Preprocessing

The procedure for preprocessing fuel spray images is shown in Fig. 1. The poor contrast of the images is first improved by rescaling each pixel using a linear point scaling function. Shading due to uneven illumination is then reduced by a logarithmic transformation of the image. Next, the Sobel magnitude edge enhancement operator is used to accentuate the edges. Finally, global thresholding, followed by a smoothing operation, is used to produce a binary image. This image serves as a plan to guide the search for objects in the full resolution image. Figs. 2a and 2b illustrate the original image and the corresponding segmented image, respectively.

Fig. 1. Preprocessing procedure (contrast enhancement, logarithmic transformation, Sobel edge enhancement, thresholding, and smoothing).

Fig. 2. (a) Original image, (b) segmented image.
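The chain described above maps onto a few array operations. The following is a minimal sketch in Python (NumPy/SciPy), assuming the digitized 512 × 512, 8-bit image is already loaded as an array; the global threshold value and the 3 × 3 median filter are illustrative stand-ins, since the paper does not specify the threshold or the exact smoothing operation used.

```python
# Sketch of the preprocessing chain: contrast stretch, log transform,
# Sobel magnitude edge enhancement, global thresholding, smoothing.
import numpy as np
from scipy import ndimage

def preprocess(image, threshold=40.0):
    img = image.astype(np.float64)

    # Contrast enhancement: linear point scaling to the full 0-255 range.
    img = (img - img.min()) / max(img.max() - img.min(), 1e-6) * 255.0

    # Logarithmic transformation to reduce shading from uneven illumination.
    img = 255.0 * np.log1p(img) / np.log1p(255.0)

    # Sobel magnitude edge enhancement.
    gx = ndimage.sobel(img, axis=1)
    gy = ndimage.sobel(img, axis=0)
    edges = np.hypot(gx, gy)

    # Global thresholding of the log-edge image gives the binary "plan" image.
    binary = edges > threshold

    # Smoothing of the binary image (3x3 median filter as an illustrative choice).
    binary = ndimage.median_filter(binary.astype(np.uint8), size=3).astype(bool)
    return binary
```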

3. Recognition Algorithm

The first step in the recognition algorithm is the detection of candidate objects. Isolation of objects is achieved through a border tracking algorithm [10,11].

The thresholded image is scanned and closed boundaries are produced around each object. The coordinates of the border elements are stored in an array for later use. Isolated pixels, elongated objects with a length-to-width ratio greater than 2, and objects with a diameter of less than 5 pixels are removed. Candidate objects are enclosed within a rectangular frame with dimensions equal to twice those of the object, as illustrated by Fig. 3. The area covered by a window is then used as a map to extract features from the gray level image. The goal of feature extraction and selection is to find features that are effective in discriminating between pattern classes.
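As an illustration of this step, the sketch below detects candidate objects with connected-component labeling, a stand-in for the border-tracking algorithm of [10,11], applies the elongation and size filters described above, and cuts a frame of twice the object's dimensions out of the gray level image. The function name, the labeling approach, and the clipping at the image border are assumptions made for the sketch, not details from the paper.

```python
# Candidate-object detection, filtering, and framing (illustrative sketch).
import numpy as np
from scipy import ndimage

def candidate_frames(binary, gray):
    labels, count = ndimage.label(binary)
    frames = []
    for sl in ndimage.find_objects(labels):
        h = sl[0].stop - sl[0].start          # object height in pixels
        w = sl[1].stop - sl[1].start          # object width in pixels
        long_side, short_side = max(h, w), min(h, w)
        # Reject isolated pixels, elongated objects (length/width > 2),
        # and objects smaller than 5 pixels across.
        if short_side <= 1 or long_side / short_side > 2 or long_side < 5:
            continue
        # Enclose the object in a frame of twice its size, clipped to the image.
        cy = (sl[0].start + sl[0].stop) // 2
        cx = (sl[1].start + sl[1].stop) // 2
        top, bottom = max(cy - h, 0), min(cy + h, gray.shape[0])
        left, right = max(cx - w, 0), min(cx + w, gray.shape[1])
        frames.append(gray[top:bottom, left:right])
    return frames
```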


A droplet is "in-focus" if its image is dark, without well defined diffraction rings around the edge of the droplet, and without a light spot in the center. These features are used in the recognition of in- and out-of-focus droplets through a 3-level tree classifier. The following subsections discuss in detail the feature extraction and classification of fuel spray droplets.

3.1 Feature Extraction

The first step in describing the object is to compute, locally, the percentage of relative difference in intensity between the object and its surrounding background. This percentage is given by

$$\frac{Ob - Ba}{Ba} \times 100,$$

where $Ob$ is the average intensity value of the object and $Ba$ is the average intensity value of the background surrounding the object within the frame.

Fig. 3. Framing the candidate objects. (a) Framing objects in the thresholded image. (b) Corresponding framed objects in the gray level image.

The second feature describing the object is the presence or absence of diffraction rings around the edge of a droplet. This is detected through the analysis of the horizontal and vertical projections of the gray level window $W$ enclosing the object. The X and Y projections are defined as

$$X(i) = \sum_{j=1}^{n} W(i, j)/n, \qquad i = 1, \ldots, m,$$

and

$$Y(j) = \sum_{i=1}^{m} W(i, j)/m, \qquad j = 1, \ldots, n,$$

where $m$ and $n$ are the dimensions of the window. Projections can be valuable in detecting and locating objects in a picture. As an illustration, if the picture contains a relatively large, relatively compact object that is darker than its background, then the X and Y projections of the picture will have plateaus at the approximate X and Y positions of the object, respectively. Thus, examination of these projections gives a good indication that such an object is present in the picture, and where it is located. In order to increase the sensitivity to ridges or valleys across the object and at the same time decrease the noise due to random granularities, the projections were computed as a percentage of the relative difference between the projection and the background surrounding the object:

$$X(i) = \frac{\sum_{j=1}^{n} W(i, j)/n - Ba}{Ba} \times 100, \qquad i = 1, \ldots, m,$$

and

$$Y(j) = \frac{\sum_{i=1}^{m} W(i, j)/m - Ba}{Ba} \times 100, \qquad j = 1, \ldots, n.$$

Digitization noise and small insignificant variations make it necessary to smooth the projection before it is used for the detection of significant concavities. This is done by a simple triangular weighting, computed as


$$X_{\mathrm{new}}(i) = [X(i-1) + 2X(i) + X(i+1)]/4,$$
$$Y_{\mathrm{new}}(j) = [Y(j-1) + 2Y(j) + Y(j+1)]/4,$$

for $i = 1, \ldots, m$ and $j = 1, \ldots, n$. Figs. 4a and 4b illustrate the X and Y projections for in- and out-of-focus droplets, respectively. A sharp change in intensity will give rise to a peak in the output of the first derivative of the projection. The first derivatives of the X and Y projections are computed as

$$DX(i) = X(i) - X(i-1), \qquad i = 2, \ldots, m,$$

and

$$DY(j) = Y(j) - Y(j-1), \qquad j = 2, \ldots, n.$$

The first derivatives are also smoothed in the same way as the projections. This repeated smoothing operation is necessary to reduce the noise effect, since taking the derivative is a noise-sensitive process. Figs. 5a and 5b illustrate the first derivatives of the horizontal and vertical projections shown in Figs. 4a and 4b, respectively. The smoothed derivatives are then encoded as strings of concatenated primitives. These primitives are defined as: +1, corresponding to a positive peak; −1, corresponding to a negative peak; and 0, corresponding to no peak. Therefore, the strings generated from the horizontal and vertical derivatives of Fig. 5a are

string(X) = 0 1 0 −1 0; and
string(Y) = 0 1 0 −1 0.

Fig. 4. Projections of droplets. (a) Projections of in-focus droplets. (b) Projections of out-of-focus droplets.

Fig. 5. Derivatives of droplets. (a) First derivatives of the projections of in-focus droplets. (b) First derivatives of the projections of out-of-focus droplets.


Similarly, the strings generated for Fig. 5b are

string(X) = 0 1 0 1 0 −1 0; and
string(Y) = 0 1 0 1 0 −1 0.
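The projection feature described above reduces to a few lines of array code. The sketch below, assuming NumPy, a gray level window W, and its local background average Ba, computes the relative-difference projections, applies the triangular smoothing before and after differencing, and encodes the smoothed derivative into the +1 / −1 / 0 primitives. The peak-detection threshold and the collapsing of repeated symbols into states are assumptions about details the paper does not spell out.

```python
# Projection features: relative-difference projections, triangular smoothing,
# first derivatives, and encoding of the peaks as +1 / -1 / 0 primitives.
import numpy as np

def relative_projections(window, background):
    # X(i), Y(j) as percentage difference between row/column means and Ba.
    x = (window.mean(axis=1) - background) / background * 100.0
    y = (window.mean(axis=0) - background) / background * 100.0
    return x, y

def triangular_smooth(p):
    # p_new(i) = [p(i-1) + 2 p(i) + p(i+1)] / 4, with edge values repeated.
    padded = np.pad(p, 1, mode='edge')
    return (padded[:-2] + 2.0 * padded[1:-1] + padded[2:]) / 4.0

def encode_peaks(projection, peak_threshold=5.0):
    # Smooth, differentiate, smooth again (derivatives are noise-sensitive).
    d = triangular_smooth(np.diff(triangular_smooth(projection)))
    symbols = []
    for value in d:
        if value > peak_threshold:
            symbols.append(+1)        # positive peak
        elif value < -peak_threshold:
            symbols.append(-1)        # negative peak
        else:
            symbols.append(0)         # no peak
    # Collapse runs of identical symbols into single states, e.g. 0 1 0 -1 0.
    return [s for k, s in enumerate(symbols) if k == 0 or s != symbols[k - 1]]
```

For an in-focus droplet, the returned state list would read [0, 1, 0, -1, 0], matching string(X) and string(Y) of Fig. 5a.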

The third feature describing the object is the presence or absence of a central bright spot on the droplet. In order to detect the central spot, the gray level frame is first converted into a binary frame. The scheme for determining the local background threshold is described below (a sketch of this step follows the list):
• Enhance the edges using the Sobel magnitude operator;
• Compute the average value 'Ba' of the background surrounding the object;
• Subtract the background value 'Ba' from each pixel in the enhanced window;
• Evaluate the new average background value surrounding the object, and use it as a local threshold value to obtain a binary frame.
Fig. 6 illustrates the effect of using local thresholding in the detection of the central bright spot.
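The following is a minimal sketch of the four-step local-threshold scheme, assuming a gray level frame and a boolean mask marking the object pixels within it (hypothetical names); the Sobel magnitude is computed from SciPy's directional Sobel filters.

```python
# Local-threshold scheme for exposing a central bright spot inside a frame.
import numpy as np
from scipy import ndimage

def local_binary_frame(frame, object_mask):
    # Enhance the edges with the Sobel magnitude operator.
    gx = ndimage.sobel(frame.astype(np.float64), axis=1)
    gy = ndimage.sobel(frame.astype(np.float64), axis=0)
    enhanced = np.hypot(gx, gy)

    # Average background value Ba in the frame, outside the object.
    ba = frame[~object_mask].mean()

    # Subtract Ba from every pixel of the enhanced window.
    shifted = enhanced - ba

    # Re-evaluate the background average on the shifted window and use it
    # as the local threshold to obtain the binary frame.
    local_threshold = shifted[~object_mask].mean()
    return shifted > local_threshold
```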

3.2 Classification

The classification of droplets into in- and out-of-focus classes is achieved through a 3-level tree classifier, as shown in Fig. 7. In the first level, the decision is based on the relative difference in intensity between the object and the surrounding background. If the difference is less than a prespecified threshold value T, the image is considered faint, and the object is classified as out-of-focus. If the difference is greater than T, then the second stage of the tree is initiated.

In level 2, the strings generated from the derivatives of the X and Y projections are analyzed. An in-focus droplet will generate strings with the sequence '0 1 0 −1 0' in both the X and Y directions, as shown in Fig. 5a. An out-of-focus droplet with well defined rings surrounding its edge will generate strings with a larger number of states, as illustrated by Fig. 5b. Therefore, if the number of states in both the X and Y strings exceeds 5, the droplet is classified as out-of-focus; otherwise the third stage of the classifier is invoked.

In level 3, the local threshold is computed and all the borders inside the window are extracted. The relationships between the contours are found by a search procedure. For example, the possible relationships between two adjacent contours A and B are: A contains B; B contains A; or neither. The search procedure handles each of these conditions as it arises, so that all contours are correctly represented at the completion of the search. The number of concentric contours is then counted. An in-focus droplet will have only two concentric contours, and the shape of both contours is circular, as shown in Fig. 6b. However, a trap state may arise, as shown in Fig. 8. This case is avoided by computing a shape number for each of the two concentric contours. This number is given by

$$P^2/A,$$

where $P$ is the perimeter of the contour and $A$ is the area enclosed by the contour. This number is $\geq 4\pi$ for any shape. The circle is the most compact figure; that is, the shape number corresponding to a circle is equal to $4\pi$. Therefore, if the shape numbers computed for the two concentric contours do not exceed $(4\pi + eps)$, where eps is added to allow for quantization and round-off errors, then the droplet is classified as "in-focus". (A sketch of this three-level decision procedure is given after Table 1.)

Fig. 6. Local thresholding. (a) Original image. (b) Local thresholding (zooming 2:1).

Fig. 7. Flowchart of the tree classifier: compute the intensity level of the object with respect to its surrounding background, encode the first derivatives of the projections, apply the local threshold, and assign the in-focus or out-of-focus class at each level.

Fig. 8. Trap state.

Table 1. Output of a Testing File

Img#   X-Coord.   Y-Coord.   Area   Shape no.   % To back
 7        53         363      116     14.49        24
 7       174         320      151     11.68        23
 7       187         364      173     13.31        27
 8         0           0        0      0.00         0
 9       193         465       87     14.08        22
 9       215          46      151     14.01        29
10       291         465       95     29.56        23
11         0           0        0      0.00         0
12        72         125      124     12.26        28
12       252         449      111     22.52        22
13       105         228      178     12.41        31
14         0           0        0      0.00         0
15       234         465      107     16.48        30
16         0           0        0      0.00         0
17       302         365      181     11.69        24
17       319         300      112     12.89        23
17       423         252      226     15.92        27
18       302         135      288     21.12        30
19       286         295      142     13.02        26
19       318          72      144     17.36        19
20        96          41       97     13.36        23
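To make the control flow concrete, the sketch below strings the three levels together. The threshold T, the tolerance eps, and the representation of each extracted contour as a (perimeter, area) pair are illustrative assumptions; the contour extraction itself (border tracking within the locally thresholded frame) is not reproduced here.

```python
# Sketch of the 3-level tree classifier described in Section 3.2.
import math

def classify_droplet(rel_intensity, x_states, y_states, contours,
                     T=15.0, eps=1.5):
    # Level 1: a faint object (low relative intensity) is out-of-focus.
    if rel_intensity < T:
        return "out-of-focus"

    # Level 2: diffraction rings produce derivative strings with more than
    # five states; an in-focus droplet gives 0 1 0 -1 0 in both directions.
    if len(x_states) > 5 and len(y_states) > 5:
        return "out-of-focus"

    # Level 3: an in-focus droplet shows exactly two concentric contours,
    # each nearly circular, i.e. shape number P**2 / A close to 4*pi.
    if len(contours) != 2:
        return "out-of-focus"
    for perimeter, area in contours:   # each contour as (P, A), area > 0
        if perimeter ** 2 / area > 4.0 * math.pi + eps:
            return "out-of-focus"
    return "in-focus"
```

Because most out-of-focus objects are rejected at level 1 or level 2, the comparatively expensive contour analysis runs only on promising candidates, which is the behaviour the decision tree is designed to exploit.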

4. Test Results

4.1 Test 1

The algorithm was first tested using 400 images containing approximately 8000 candidate objects. Table 1 illustrates a sample of the output of a testing file. This table displays the image track number as stored on the videodisc, the position of each in-focus droplet, its area, the shape number (which gives an indication of the roundness of the object), and its relative intensity value with respect to the surrounding background. For example, the image stored in track #10, as shown in Table 1, contains one in-focus droplet located at coordinates x = 291 and y = 465; its area is 95 pixels, its shape number is 29.56 (the droplet is elongated), and its relative intensity value with respect to its surrounding background is 23%. Image #11 contains no in-focus droplets, image #12 contains two in-focus droplets, and so on.

The same images were also analyzed manually. A comparison of sizing the droplets manually vs. automatically is given in Fig. 9. The size distribution totalled 120 in-focus droplets for the manual measurement. The automatic algorithm was able to identify 112 in-focus droplets; the percentage of correct classification was therefore greater than 93%.

Fig. 9. Comparison of manual vs. automatic measurements (test 1): droplet-size distributions vs. droplet diameter (μm).

Fig. 10. Comparison of manual vs. automatic measurements (test 2): droplet-size distributions vs. droplet diameter (μm).

4.2 Test 2

In Test 2, 170 images of very low quality were tested. Out of the 100 in-focus droplets included in the images, 87 were correctly identified. Most of the misclassification errors were due to the low video level at which the images were recorded, and some were caused by severe background noise. Fig. 10 illustrates the comparison of sizing the droplets manually vs. automatically.

5. Conclusions

A fast and efficient technique for automatic analysis of fuel spray images has been developed. The algorithm was tested using 400 images including approximately 8000 candidate objects, and the percentage of correct classification was greater than 93%. The algorithm was also tested using 170 images of very low quality, and a recognition rate of 87% was achieved. Most of the errors were due to the low video recording level and synchronization problems, and some were caused by severe background noise. The preprocessing time for each image was 10 seconds, and the classification was achieved in 35 to 50 seconds, depending on the number of candidate objects included in the test image. The decision tree offered a way of directing the overall strategy of computation and permitted rapid progress to a final decision using a minimum of computing resources. Moreover, although the technique is designed to handle round in-focus droplets and droplets with a length-to-width ratio of less than 2, it can be modified to handle a wider range of droplet shapes, which could be useful for future applications.

Acknowledgement

I wish to thank Bruce Peters of the Fluid Mechanics Department for running the engine experiments to collect the sample images.

References

[1] J.M. Tishkoff: "Measurement of Particle Size and Velocity in a Fuel Spray", Second International Conference on Liquid Atomization and Spray Systems, #10-1, June 20, 1982.
[2] J.M. Tishkoff, D.C. Hammond, A.R. Chraplyvy: "Diagnostic Measurements of Fuel Spray Dispersion", J. Fluid Engineering, Vol. 104, September 1982.
[3] B.D. Peters: "Laser-Video Imaging and Measurement of Fuel Droplets in a Spark-Ignition Engine", presented at the Conference on Combustion in Engineering, IME, April 11, 1983.
[4] C. Ramshaw: "A Technique for Drop-Size Measurement by Direct Photography and Electronic Image Size Analysis", J. Institute of Fuel, July 1968.
[5] L.M. Oberdier: "An Instrumentation System to Automate the Analysis of Fuel Spray Images Using Computer Vision", ASTM Symposium on Liquid Particle Size Measurement Techniques, June 1983.
[6] C.S. Ow, R.I. Crane: "A Simple Off-Line Automatic Image Analysis System with Application to Drop Sizing in Two-Phase Flows", Int. J. Heat and Fluid Flow, Vol. 2.
[7] M.C. Toner, M.J. Dix, H. Sawistowski: "A Television-Microprocessor System for High Speed Image Analysis", J. Phys. E: Sci. Instrum., Vol. 11, 1978.
[8] C.S. Ow, R.I. Crane: "Pattern Recognition Procedures for a Television Minicomputer Spray Droplet Sizing System", J. Inst. of Energy, September 1981.
[9] R. Fleeter, R. Toaz, V. Sarohia: "Application of Digital Image Analysis Techniques to Antimisting Fuel Spray Characterization", ASME Paper 82-WA/HT-23.
[10] A. Chottera and M. Shridhar: "Feature Extraction of Manufactured Parts in the Presence of Spurious Surface Reflections", Can. Elec. Eng. J., Vol. 7, No. 4, pp. 29-33, 1982.
[11] A. Rosenfeld and A.C. Kak: Digital Picture Processing, Academic Press, New York, 1976.