
Fast path planning in virtual colonoscopy

Jeongjin Lee a, Gyehyun Kim b, Ho Lee b, Byeong-Seok Shin c,∗, Yeong Gil Shin b

a Department of Radiology, Asan Medical Center, University of Ulsan College of Medicine, Seoul 138-736, Republic of Korea
b School of Computer Science and Engineering, Seoul National University, Seoul 151-742, Republic of Korea
c Department of Computer Science and Engineering, Inha University, Inchon 402-751, Republic of Korea

Received 27 February 2008; accepted 5 July 2008

Abstract

We propose a fast path planning algorithm using multi-resolution path tree propagation and the farthest visible point. Initial path points are robustly generated by propagating a path tree that connects all internal voxels locally most distant from the colon boundary. A multi-resolution scheme is adopted to increase computational efficiency. Control points representing the navigational path are successively selected from the initial path points by using the farthest visible point. The position of each initial path point in the down-sampled volume is accurately adjusted in the original volume. Using the farthest visible point, the number of control points is adaptively changed according to the curvature of the colon shape, so that more control points are assigned to highly curved regions. Furthermore, a smoothing step is unnecessary since our method generates a set of control points to be interpolated with cubic spline interpolation. We applied our method to 10 computed tomography datasets. Experimental results showed that the path was generated much faster than with conventional methods without sacrificing accuracy or clinical efficiency. The average processing time was approximately 1 s when down-sampling by a factor of 2, 3, or 4. We conclude that our method is useful in diagnosing colon cancer using virtual colonoscopy.
© 2008 Elsevier Ltd. All rights reserved.

Keywords: Path planning; Path tree propagation; Farthest visible point; Control point; Virtual colonoscopy

1. Introduction

Colon cancer is one of the leading causes of cancer deaths, yet it can be cured by proper treatment. Effective treatment of colon cancer depends on the early detection of colonic polyps, which demands periodic examination of the colon. Colon examinations usually use invasive methods, such as optical endoscopy or radiological barium enemas. In optical endoscopy, a probe is inserted into the colon and the inner surface of the colon is examined by manipulating a small camera at the tip of the probe. Controlling the probe requires skill and precision, and examination of an entire colon takes a long time. In addition, insertion and manipulation of the probe is uncomfortable for the patient. Adverse effects include infection and bleeding [1,2]. In barium enema examinations, contrast material containing barium is injected into the colon and the colon wall is

∗ Corresponding author. Tel.: +82 32 860 7452.

E-mail address: [email protected] (B.-S. Shin).
doi:10.1016/j.compbiomed.2008.07.002

examined using X-ray radiographs taken from different angles; the patient therefore experiences discomfort. The polyp detection sensitivity of barium enema radiography is not as high as that of optical endoscopy [3]. Virtual colonoscopy can overcome the difficulties of these conventional methods by increasing the sensitivity and specificity of the examination and reducing patient discomfort [3–6]. Virtual colonoscopy is a computerized polyp detection method that avoids the introduction of probes or contrast materials into the colon. With the recent introduction of multi-detector computed tomography (CT) scanners capable of generating 16 high-resolution images in 0.5 s, CT processing speed has increased, and polyp detection sensitivity has been enhanced [7]. While optical endoscopy only examines the colon along the direction of the camera’s movement, a virtual camera can move in any direction to examine the colon’s interior, thereby improving the efficiency of diagnosis [8]. In current virtual colonoscopy, the camera path is determined before the examination. Manually determining the path through the colon is difficult since the

J. Lee et al. / Computers in Biology and Medicine 38 (2008) 1012 – 1023

colon’s length may be over 1.5 m and its path is tortuous. Manual path determination requires finding the center position of each two-dimensional (2D) CT image of the colon and interpolating between images for virtual navigation. Determining the center position of each colon section is time consuming, which suggests the need for an automated path-finding scheme. Previous investigations into automatic path generation were based on topological thinning [3,9,10], Dijkstra’s shortest path algorithm [5,11–13], or a level set method [14,15]. A topological thinning algorithm can be used to successively eliminate the outermost layer of a segmented colon until only the centerline voxels remain. Hong et al. [3] used the peel-onion technique [16] to determine a colon centerline. At each step in this technique, the outermost layer is peeled off until there is only one layer of grid points left. Paik et al. [9] proposed a thinning method based on Euclidean distance mapping. By iteratively correcting a path toward its medial axis, the necessity of evaluating point criteria during thinning can be eliminated. Sadleir et al. [10] proposed optimized three-dimensional (3D) topological thinning to reduce the computational burden associated with the thinning process by using lookup tables. However, generating each lookup table took approximately 1.5 h. While a path defined by topological thinning methods is geometrically accurate, path generation time is excessively long. Using a 3D distance map generated in the preprocessing step, a path connecting the points most distant from the colon wall can be found using Dijkstra’s shortest path algorithm [17]. Hong et al. [5] used the difference between the global maximum distance value and each point’s distance from the colon wall to calculate a single-source shortest path, while Chen et al.
[11] proposed a multistage skeletonization method, which determined the global minimum weight path in a connected graph weighted by the distance from the start point. He et al. [12] created a list of all local maximum voxels by Euclidean distance transformation from the boundary. A shortest path algorithm then connected the point sets. Bitter et al. [13] introduced a penalty cost for coming too close to the object boundary during path generation by adding more edges and vertices. In all path generation approaches based on Dijkstra’s shortest path algorithm, unless edges are weighted by the distance from the boundary, the generated path cuts corners and travels along boundary voxels on the inside of sharp turns. Using a level set based method, Hassouna et al. [14] proposed a robust centerline extraction method and introduced a new speed function for level sets. Uitert et al. [15] generated subvoxel-precise centerlines using a two-step fast marching method. However, calculating the propagated level set was computationally expensive. In another approach, Kang et al. [18] determined view positions and view directions to minimize blind areas during navigation. However, this algorithm was not effective at minimizing the blind area between haustral folds, although the visible areas in curved regions were increased. Kwon et al. [19] used image-space information generated during rendering time to determine camera position and direction. This approach does not require preprocessing or extra storage, but it converges to local minima in complex regions.
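The corner-cutting behavior noted above can be made concrete with a small sketch. The following toy 2D example (the corridor geometry and cost function are invented for illustration and are not from any of the cited methods) runs Dijkstra's shortest path with a step cost that penalizes closeness to the boundary, so the cheapest path follows the corridor center instead of hugging the wall:

```python
import heapq

# Toy 5x11 corridor; the boundary is everything outside the grid.
ROWS, COLS = 5, 11

def dist_to_wall(r, c):
    # distance (in cells) to the nearest cell outside the corridor
    return min(r, ROWS - 1 - r, c, COLS - 1 - c) + 1

def centered_path(start, goal):
    """Dijkstra where stepping into a cell costs more the closer that
    cell is to the wall, which keeps the optimal path centered."""
    max_d = max(dist_to_wall(r, c) for r in range(ROWS) for c in range(COLS))
    pq = [(0.0, start)]
    parent = {start: None}
    cost = {start: 0.0}
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            break
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < ROWS and 0 <= nc < COLS:
                # step cost grows as the destination nears the wall
                nd = d + 1.0 + (max_d - dist_to_wall(nr, nc))
                if nd < cost.get((nr, nc), float("inf")):
                    cost[(nr, nc)] = nd
                    parent[(nr, nc)] = (r, c)
                    heapq.heappush(pq, (nd, (nr, nc)))
    node, path = goal, []
    while node is not None:
        path.append(node)
        node = parent[node]
    return path[::-1]

path = centered_path((2, 0), (2, 10))  # stays on the center row
```

Without the distance-based penalty (a uniform step cost of 1), any monotone staircase through the grid would be equally cheap, which is exactly how corner cutting arises.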


Most of the existing path-generation methods are computationally expensive, since they require preliminary data structures and the 3D positions of all path points need to be calculated. In this paper, we propose a fast path planning algorithm using multi-resolution path tree propagation and the farthest visible point. Initial path generation using path tree propagation guarantees the robustness of our algorithm, while the multi-resolution scheme increases computational efficiency. During path generation, control points are successively selected from the initial path points by using the farthest visible point. The number of control points is adaptively changed according to colon shape, i.e., more control points are assigned in highly curved regions than in straight regions. A small number of control points represents the whole navigation path, properly aligned along the centerline of the colon. A smoothing step is unnecessary since our method generates a set of control points to be interpolated by a cubic spline. Therefore, the virtual camera can move smoothly, rather than following a zigzag line generated by connecting local point maxima. In Section 2 of this paper, we describe the fast path planning algorithm using multi-resolution path tree propagation and the farthest visible point. In Section 3, experimental results are presented to show how our method efficiently generates a navigational path. We conclude the paper with a brief discussion of the results in Section 4.

2. Methods

Virtual colonoscopy is generally performed in three steps. First, a set of 2D CT images of the patient’s abdomen is acquired (Fig. 1(A)). Second, the colon images are reconstructed to provide a 3D segmented configuration of the colon (Fig. 1(B)). Third, the virtual colon is examined by moving a virtual camera along the navigation path (Fig. 1(C)) [3–8].
After performing virtual navigation in both forward and reverse directions, areas not visible during navigation are automatically checked and displayed. Fig. 2 shows the path generation scheme proposed in this paper. First, the colon is identified using a 3D seeded region growing method [21]. The segmented region of the colon is down-sampled by a factor of n, and a Euclidean distance map [22] is calculated for the down-sampled volume. Then, a path tree is propagated to connect all internal voxels that are locally most distant from the colon boundary. In this step, a multi-resolution scheme is adopted to increase the speed of computation, while generation of the path tree for all segmented voxels ensures the computational robustness of our method. Finally, control points for the refined path are successively selected by using the farthest visible point. The control points are selected in the original volume to generate an accurate path, and are then interpolated by a cubic spline to improve navigational comfort.

2.1. Initial path generation using path tree propagation in the down-sampled volume

Before path generation, the colon is segmented by a 3D seeded region growing method [21], where the seed point is


Fig. 1. Steps in a virtual colonoscopy [20]: (A) CT images; (B) segmented colon; (C) virtual camera route along the navigation path.

Fig. 2. Flowchart of proposed method.
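As a concrete illustration of the preprocessing stages in Fig. 2, the sketch below (a tiny invented volume; numpy is used for brevity; the "any-voxel-inside" down-sampling rule and the brute-force anisotropic distance computation are our reading of the pipeline, not the paper's code) down-samples a binary segmentation by a factor n and builds a Euclidean distance map whose z axis is scaled by the slice-interval-to-pixel-spacing ratio:

```python
import numpy as np

def downsample(mask, n):
    """Down-sample a binary segmentation by factor n; a coarse voxel is
    'inside' if any of its n*n*n fine voxels is inside, which preserves
    connectedness of thin regions (dimensions assumed divisible by n)."""
    z, y, x = mask.shape
    blocks = mask.reshape(z // n, n, y // n, n, x // n, n)
    return blocks.any(axis=(1, 3, 5))

def distance_map(mask, z_scale):
    """Brute-force anisotropic distance from each inside voxel to the
    nearest outside voxel; z coordinates are scaled by z_scale
    (slice interval / pixel spacing). Fine for this demo-sized volume."""
    inside = np.argwhere(mask)
    outside = np.argwhere(~mask).astype(float)
    outside[:, 0] *= z_scale              # account for the slice interval
    dmap = np.zeros(mask.shape)
    for z, y, x in inside:
        p = np.array([z * z_scale, y, x], dtype=float)
        dmap[z, y, x] = np.sqrt(((outside - p) ** 2).sum(axis=1)).min()
    return dmap

mask = np.zeros((4, 8, 8), dtype=bool)
mask[:, 2:6, 2:6] = True                  # a small tube-like region
coarse = downsample(mask, 2)              # shape (2, 4, 4)
dmap = distance_map(mask, z_scale=1.0)
```

In production one would replace the brute-force loop with a linear-time distance transform (e.g. the Saito–Toriwaki algorithm cited as [22]); the anisotropic scaling idea is the same.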

automatically determined using the method of Iordanescu et al. [23]. The threshold range for region growing extends from a lower bound of −1024 Hounsfield units (HU) to an upper bound between −750 and −500 HU. The bounding region of the segmented colon is processed in the following steps. The segmented region is down-sampled by a factor of n, with the optimal value of n determined experimentally to maximize computational efficiency without sacrificing accuracy. To preserve connectedness, we regard a voxel in the down-sampled volume as a segmented voxel when one or more voxels among the n × n × n voxels in the original volume are segmented. The Euclidean distance map is calculated from the segmented,

down-sampled colon boundary. During generation of the distance map, the x-, y-, and z-axes scales are taken into account. Voxel length in the x and y directions corresponds to pixel spacing, while voxel length in the z direction corresponds to slice interval. The scaling factor in the z direction is equal to the ratio of slice interval to pixel spacing. In the down-sampled volume, the path tree for the initial path generation is propagated by connecting all inside voxels that are locally most distant from the boundary. First, the starting point (node) is determined using the method proposed by Iordanescu et al. [23]. A priority table is used to accelerate the


Fig. 3. Priority table data structure (per node: weight, cumulative distance from the start point, pointer to the parent node, and x, y, z coordinates).

subsequent propagation procedure. The priority table’s entry index corresponds to the node’s weight, which is its distance from the colon wall (Fig. 3). Each index has a list of nodes having the same weight. The weight of a node P in the 3D graph that connects all voxels inside the colon is

Weight(P) = DistanceMap(P),   (1)

where DistanceMap stores the Euclidean distance from a specific point to the colon boundary. A node with a large weight value has a high priority to be propagated. The data for each node include the weight value, the cumulative distance from the starting point, a pointer to the parent node, and the node’s coordinates (Fig. 3). When a new voxel is visited during path propagation, it is inserted as a new node, according to its weight, into the corresponding index. Fig. 4(A) shows an overview of the path tree propagation method. From the starting point (S), each neighborhood point that has never been visited before (points 1–4) is inserted into the priority table (Fig. 4(B)). Then the path tree is expanded from the node having the maximum weight (i.e., node index 3 in Fig. 4(B)). By repeating this procedure, the path tree is propagated along the colon (Fig. 4(C)). The cumulative distance from the start point is measured using the 10–14–17 metric to minimize error. The different x-, y-, and z-axis scales are accounted for by applying a scaling factor equal to the ratio of slice interval to pixel spacing in the z direction. The time complexity of determining the maximum-weight node in the priority table is O(1), achieved by keeping track of the maximum index among entries having at least one visited node. Therefore, the time complexity of building the whole tree is O(N), where N is the total number of segmented voxels, since each voxel is visited once. The end point is the voxel connected to the start point that has the largest cumulative distance from it. From the end point, the path is efficiently traced back by following the parent-node pointer stored in each node.

2.2. Final path generation using the farthest visible point in the original volume

The initial path goes along the points locally most distant from the colon boundary in the down-sampled volume. Since


the initial path is composed of points spaced one voxel apart, shaking and twisting of the virtual camera can occur during navigation due to the complexity of the colon shape. Furthermore, the initial path in the down-sampled volume needs to be accurately positioned in the original volume. To solve these problems, we propose an accurate positioning method between the down-sampled volume and the original volume and an efficient path refinement method using the farthest visible point. Using these methods, control points, which are interpolated for the navigational path, are successively selected as follows. The procedure specifies the starting position and the direction of a reference ray. The starting point, P0, is the current point on the path under consideration. The position of the starting point in the original volume is calculated from the initial path points in the down-sampled volume, where the original position of an initial point, down-sampled by a factor of n, can be one of n × n × n points. When the position of a down-sampled point is (x, y, z), the original position lies in a box whose diagonal runs from (nx, ny, nz) to (nx + n − 1, ny + n − 1, nz + n − 1). Among these n × n × n points, the exact position of the starting point is the point most distant from the boundary. The distance from the boundary is approximated by expanding a virtual sphere around the point. This approach is computationally efficient because boundary distance information is only required for a small set of points, not for the whole segmented volume. A virtual sphere is located at a possible position, Pcandidate, of a down-sampled point, and the sphere’s radius is successively expanded by 1 voxel until the contact point between the expanding sphere and the colon wall is found. The distance between Pcandidate and the contact point is the distance from the boundary. A pre-calculated look-up table can be used to accelerate this procedure.
The pre-calculated look-up table contains relative coordinates from the center of the sphere in order of increasing distance from the center point. The surface point Psphere can be obtained by adding the relative coordinates in the offset table (OT) to the coordinates of Pcandidate as follows:

Psphere = Pcandidate + (OT_x(i), OT_y(i), OT_z(i)).   (2)

The OT is constructed by storing vertices in order, as shown in Fig. 5(B). This table is used to accelerate the expansion of a 2D disk on the x–y plane; the same approach is used for the 3D virtual sphere. The distance D from the sphere’s center (0, 0, 0) to each integer vertex is calculated and stored in the OT. Increasing the value of D is equivalent to expanding the virtual sphere. For example, the cross section of a sphere on the x–y plane is shown in Fig. 5(A). The distance between each integer vertex (i, j) and the origin (0, 0) is calculated and rounded off. Vertices having the same D value are then lined up in the 2D space. The same procedure is carried out for the four sphere quadrants. The direction of the reference ray is the difference vector between the next and current path points in the original volume. First, we find the farthest visible point with respect to the starting point along the sample ray R. The vector representation of R is

R = P0 + l · Rd(θ, φ),   (3)


Fig. 4. Path tree propagation: (A) overview; (B) propagation from the starting point; (C) intermediate state of propagation.
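The propagation illustrated in Fig. 4 can be sketched in a few lines. The following toy 2D version (the grid, weights, and unit step length are invented for the example; the paper works in 3D with the 10–14–17 metric) buckets nodes by integer weight, pops the maximum-weight bucket in O(1) by tracking the top index, and records parent pointers so the path can be traced back from the farthest node:

```python
def propagate(dist_map, start):
    """Bucket-based path-tree propagation on a 2D integer distance map:
    expand the unvisited neighbor of the highest-weight node first,
    then trace the path back from the node with the largest cumulative
    distance (unit step length used for simplicity)."""
    max_w = max(max(row) for row in dist_map)
    buckets = [[] for _ in range(max_w + 1)]   # index = node weight
    top = dist_map[start[0]][start[1]]
    buckets[top].append(start)
    parent = {start: None}
    cum = {start: 0.0}
    while top >= 0:
        if not buckets[top]:
            top -= 1                           # keep the top index current
            continue
        r, c = buckets[top].pop()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < len(dist_map) and 0 <= nc < len(dist_map[0])
                    and dist_map[nr][nc] > 0 and (nr, nc) not in parent):
                parent[(nr, nc)] = (r, c)      # each voxel visited once
                cum[(nr, nc)] = cum[(r, c)] + 1.0
                w = dist_map[nr][nc]
                buckets[w].append((nr, nc))
                top = max(top, w)
    end = max(cum, key=cum.get)                # largest cumulative distance
    path, node = [], end
    while node is not None:                    # trace back via parents
        path.append(node)
        node = parent[node]
    return path[::-1]

dist_map = [                                   # 0 = wall, >0 = inside weight
    [0, 0, 0, 0, 0, 0, 0, 0, 0],
    [0, 1, 1, 1, 1, 1, 1, 1, 0],
    [0, 1, 2, 2, 2, 2, 2, 1, 0],
    [0, 1, 1, 1, 1, 1, 1, 1, 0],
    [0, 0, 0, 0, 0, 0, 0, 0, 0],
]
path = propagate(dist_map, (2, 1))
```

Because the top bucket index only decreases between pushes and each node is inserted exactly once, the whole tree is built in O(N), matching the complexity argument in the text.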


Fig. 5. Offset table generation: (A) listing points having the same D value (illustration for the first sphere quadrant only); (B) an example of a 2D offset table.
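The offset-table construction of Fig. 5 can be sketched directly. This 2D version (the function names, the maximum radius, and the toy wall predicate are our own; the 3D sphere case is analogous) buckets every integer offset by its rounded distance D, so expanding the virtual disk by one voxel amounts to reading the next bucket:

```python
import math
from collections import defaultdict

def build_offset_table(max_radius):
    """2D offset table: D -> list of integer offsets (i, j) whose rounded
    distance from the origin is D (all four quadrants)."""
    table = defaultdict(list)
    for i in range(-max_radius, max_radius + 1):
        for j in range(-max_radius, max_radius + 1):
            d = round(math.hypot(i, j))
            if 0 < d <= max_radius:
                table[d].append((i, j))
    return table

def distance_to_wall(center, is_wall, ot):
    """Expand the disk bucket by bucket until it first touches the wall;
    the bucket index D is the approximate distance from the boundary."""
    for d in sorted(ot):
        for di, dj in ot[d]:
            if is_wall((center[0] + di, center[1] + dj)):
                return d
    return None

ot = build_offset_table(3)
# toy wall: everything with x >= 2 is boundary, so the distance is 2
d = distance_to_wall((0, 0), lambda p: p[0] >= 2, ot)
```

The look-up table is built once; each query then touches only the offsets actually visited before the wall is hit, which is why distance information is cheap for a small set of candidate points.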

Table 1. Experimental datasets

Subject   Image size   Slice number   Pixel spacing (mm)   Slice interval (mm)
1         512 × 512    501            0.65 × 0.65          1.0
2         512 × 512    472            0.61 × 0.61          1.0
3         512 × 512    482            0.68 × 0.68          1.0
4         512 × 512    458            0.66 × 0.66          1.0
5         512 × 512    466            0.59 × 0.59          1.0
6         512 × 512    518            0.71 × 0.71          1.0
7         512 × 512    474            0.68 × 0.68          1.0
8         512 × 512    546            0.71 × 0.71          1.0
9         512 × 512    444            0.63 × 0.63          1.0
10        512 × 512    466            0.60 × 0.60          1.0

Fig. 6. Final path generation using the farthest visible point.

where Rd is the direction of ray R and l is the propagated length of R. We also advance R around the reference ray Rd0(θ0, φ0) in the following range:

θ0 − k1 ≤ θ ≤ θ0 + k1,  φ0 − k1 ≤ φ ≤ φ0 + k1,   (4)

where the parameter k1 represents the field of view. As R is advanced around the reference ray, the intersection point between R and the colon wall is determined. The intersection point farthest from the starting point is the farthest visible point, Pmax (Fig. 6). This position is where the viewer can see the farthest from viewpoint P0 without being blocked by the colon wall. As previously stated, the parameter k1 from Eq. (4) represents the field of view. When k1 is large, the field of view becomes broader since visibility is determined by the larger range of angular view directions; however, the final path generation time is prolonged. When k1 is small, the field of view becomes narrower since visibility is determined by a smaller range of

angular view directions, but the final path generation time is reduced. The optimal value of k1 is determined empirically in order to increase computational efficiency without losing accuracy. The candidate control point, Pcandidate, on the line between P0 and Pmax is then determined using

Pcandidate(k2) = P0 + k2 · (Pmax − P0),  0 < k2 < 1.   (5)

Increasing k2 from 0 to 1, we find the maximum possible value of k2 satisfying the following condition:

DistanceMap(Pcandidate(k2)) ≥ Threshold.   (6)

The final control point is Pcandidate with the value of k2 found using Eqs. (5) and (6). The parameter Threshold is set to a sufficiently large value to prevent Pcandidate from being too close to the colon boundary (Fig. 6). Control points are sparsely located in straight regions because the


Fig. 7. The final path generated by proposed method using different down-sampling factors: (A) n = 1; (B) n = 2; (C) n = 3; (D) n = 4.

distance between Pmax and P0 is long, while more control points are located in rapidly turning zones because the distance between Pmax and P0 is short.
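The control-point selection of Eqs. (3)–(6) can be sketched in 2D. In the toy example below (the corridor geometry, step sizes, and parameter values are invented for illustration; the paper sweeps a 3D cone of rays), rays are swept within ±k1 of the reference direction, the farthest wall intersection becomes Pmax, and the control point is pushed as far as possible toward Pmax while keeping at least Threshold distance from the wall:

```python
import math

def inside(p):                      # toy "colon": an axis-aligned corridor
    return 0.0 <= p[0] <= 20.0 and 0.0 <= p[1] <= 4.0

def dist_to_wall(p):
    return min(p[0], 20.0 - p[0], p[1], 4.0 - p[1])

def farthest_visible_point(p0, ref_angle, k1_deg=45, step=0.1):
    """Sweep rays within +/- k1 of the reference direction (Eq. (4)) and
    march each ray until it leaves the lumen; return the farthest hit."""
    best, best_len = p0, 0.0
    for a_deg in range(-k1_deg, k1_deg + 1, 5):
        a = ref_angle + math.radians(a_deg)
        l = 0.0
        while inside((p0[0] + (l + step) * math.cos(a),
                      p0[1] + (l + step) * math.sin(a))):
            l += step
        if l > best_len:
            best_len = l
            best = (p0[0] + l * math.cos(a), p0[1] + l * math.sin(a))
    return best

def next_control_point(p0, p_max, threshold=1.0):
    """Largest k2 on the P0->Pmax segment with DistanceMap >= threshold
    (Eqs. (5) and (6))."""
    best = p0
    for i in range(1, 100):
        k2 = i / 100.0
        cand = (p0[0] + k2 * (p_max[0] - p0[0]),
                p0[1] + k2 * (p_max[1] - p0[1]))
        if dist_to_wall(cand) >= threshold:
            best = cand
    return best

p0 = (1.0, 2.0)
p_max = farthest_visible_point(p0, ref_angle=0.0)
cp = next_control_point(p0, p_max)
```

In a straight corridor Pmax lies far ahead, so a single control point covers a long stretch; in a bend the wall blocks the view early, Pmax is close, and control points cluster, which is the adaptive density described above.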

From each new control point, we repeat the above procedure until we reach the end point. After the control point for a generated path is selected, the cubic spline [24] is interpolated


using the control points selected to represent the final navigation path for virtual colonoscopy. The virtual camera’s view direction is determined by calculating the tangential vector at each point. To minimize rotation between consecutive images, the virtual camera’s up-vector is determined as follows:

û_i = (û_{i−1} − (û_{i−1} · d_i)d_i) / ‖û_{i−1} − (û_{i−1} · d_i)d_i‖,   (7)

where û_i is the up-vector in the i-th frame and d_i is the view direction in the i-th frame [18].
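The up-vector recurrence of Eq. (7) is a straightforward projection. The sketch below (numpy; finite-difference tangents stand in for the spline derivative, and the sample path is invented) removes from the previous up-vector its component along the current view direction and re-normalizes, which minimizes frame-to-frame camera roll:

```python
import numpy as np

def camera_frames(points, up0=(0.0, 0.0, 1.0)):
    """Per-frame (view direction, up-vector) pairs along a path.
    Assumes consecutive points are distinct and the path never runs
    exactly parallel to the initial up-vector."""
    pts = np.asarray(points, dtype=float)
    up = np.asarray(up0, dtype=float)
    frames = []
    for i in range(len(pts) - 1):
        d = pts[i + 1] - pts[i]
        d /= np.linalg.norm(d)            # view direction d_i
        up = up - np.dot(up, d) * d       # project out the d_i component
        up /= np.linalg.norm(up)          # Eq. (7): re-normalize
        frames.append((d, up.copy()))
    return frames

# gentle turn in the x-y plane: the up-vector should stay at +z
path = [(0, 0, 0), (1, 0, 0), (2, 0.2, 0), (3, 0.6, 0)]
frames = camera_frames(path)
```

Seeding each frame's up-vector from the previous one, rather than recomputing it from a fixed world axis, is what prevents sudden rolls when the view direction swings through a bend.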

3. Experimental results and discussion

We tested our method on 10 clinical datasets (Table 1). The data were obtained from a Siemens Sensation 16-channel multi-detector row CT scanner and analyzed using an Intel Pentium IV PC with a 3.4 GHz CPU and 2.0 GB of main memory. The performance of our method was evaluated by visual inspection, accuracy, and clinical and computational efficiency. The optimal parameter value, k1 = 45°, as determined by the experimental results, increased computational efficiency without loss of accuracy. Fig. 7 shows an automatically generated path using this parameter value for four different down-sampling factors, n. Green dots represent control points within the path. All paths were located in the center of the colon cross-section in both highly and slightly curved regions. Furthermore, the control points were adaptively distributed along the colon centerline and modeled the shape of the colon accurately. As expected, the number of control points changed, with more control points established in highly curved regions. Fig. 8 shows the path generated by the thinning method [3]. That path followed a zigzag line, whereas the path created by our method did not exhibit this erratic route, since our virtual camera moved on a smooth interpolated route rather than on an exact local-maxima path. Fig. 9 shows images of a virtual colonoscopy along a path generated by our method. Fig. 9(A) and (B) show 10 sequentially rendered frames along the path in straight and curved regions, respectively. In both regions, the path is located at the center of the colon cross-section. We were able to identify a tumor during virtual colonoscopy (far right image, Fig. 9(C)). After detection of a colonic polyp, its diameter should be measured. Polyps less than 5 mm in diameter can be regarded as harmless, whereas polyps more than 8 mm in diameter are regarded as harmful and require follow-up colonoscopy [25].
Other polyps, 6–7 mm in diameter, require regular follow-up virtual colonoscopy. We evaluated the accuracy of our method by comparing the centeredness of our path with that of the path generated by the thinning method, which is reported to be the most geometrically accurate algorithm [3]. The centeredness of each point in both path types was measured using the Euclidean distance of each point from the wall [22]. We defined the centeredness error of each point P in our path by measuring the error between point

Fig. 8. The final path generated by thinning method [3].

P and the nearest point P′ on the thinning path using:

Centeredness Error(P) (%) = (DistanceMap(P′) − DistanceMap(P)) / DistanceMap(P′) × 100.   (8)

The average centeredness errors for all path points in the 10 datasets were calculated. The average centeredness error of our method according to the down-sampling factor, n, was 1.9% (n = 1), 2.3% (n = 2), 2.4% (n = 3), and 3.0% (n = 4). We also compared the average number of total path points using the thinning method with the number of control points generated by our method in all 10 datasets. The average number of total path points in the 10 datasets, using the thinning method, was 8306, while the average number of control points according to n was 81 (n = 1), 79 (n = 2), 68 (n = 3), and 62 (n = 4). Thus, the number of control points that were interpolated to generate our final navigation paths was less than 1% of the average total path points generated by the thinning method. In other words, more than 99% of the total path points in our method were interpolated, which improved user comfort during navigation. To demonstrate the clinical efficiency of our method, we measured the average invisible surface area and number of


Fig. 9. Virtual colonoscopy images: (A) 10 sequentially rendered frames along a path in a straight region; (B) 10 sequentially rendered frames in a curved region; (C) various cases, far right frame shows a portion of a polyp.

invisible objects in the 10 datasets after both forward and backward (combined) navigation with a 45◦ camera field of view (Table 2). The number of invisible objects was based on objects whose diameter was larger than 5 mm [25]. Following a combined camera navigation of the 10 datasets, the average ratio of the invisible surface areas to the total colon surface area was less than 3.5% and the average number of invisible objects was less than 7.6. These levels are acceptable for clinical applications.

Table 3 shows the average processing time for each step in our method. The average total processing time was approximately 1 s (using n = 2, 3, or 4). The processing time for the final path generation step decreased only slightly with the down-sampling factor, whereas the distance map generation and path tree propagation times decreased noticeably with n. The computational efficiency of our method was compared with that of other methods, including the conventional method proposed by Wan et al. [26]. The average processing time


Table 2. Clinical efficiency of our method

Down-sampling   Invisible surface area (%)                  Number of invisible objects (diameter > 5 mm)
factor          After forward     After backward            After forward     After backward
                navigation        navigation                navigation        navigation
n = 1           14.0              2.3                       36.6              5.6
n = 2           13.8              2.5                       34.4              4.2
n = 3           14.3              2.4                       35.2              7.6
n = 4           16.1              3.5                       33.0              6.6

Table 3. Average processing time of our method (s)

Down-sampling factor   Distance map generation   Path tree propagation   Final path generation   Total
n = 1                  4.19                      2.19                    0.52                    6.90
n = 2                  0.53                      0.23                    0.50                    1.26
n = 3                  0.17                      0.08                    0.48                    0.73
n = 4                  0.06                      0.05                    0.47                    0.58


Fig. 10. Performance evaluation according to various down-sampling factors, n: (A) centering accuracy evaluation; (B) clinical efficiency evaluation by invisible surface area; (C) clinical efficiency evaluation by number of invisible objects; (D) computation time efficiency evaluation.

of the conventional method [26] was 12.94 s, while the topological thinning methods [3,9] and level set methods [14,15] took several minutes. Therefore, the 1 s computational speed of our method was markedly faster than that of all methods [3,9,14,15,26]. Fig. 10 shows the results of performance evaluation according to the down-sampling factor, n. The centeredness error

increased with n, since the number of interpolated path points automatically increases with down-sampling. The invisible surface area and the number of invisible objects also showed a tendency to increase as n grew. These measures are influenced by many factors, including the inner colon shape, so the relation between them and the down-sampling factor n is approximate rather than exact. Total processing time decreased as n grew.


However, the gain diminished as n grew because the computational burden of repositioning the n × n × n down-sampled points in the original volume increased exponentially. From these four evaluations, we conclude that the best performance of our method was achieved with a down-sampling factor of n = 2, since accuracy and clinical efficiency were similar to those with n = 1, while the 1 s total processing time was markedly faster than the 7 s required with n = 1.

4. Conclusions

In this paper, we proposed a fast path planning algorithm using multi-resolution path tree propagation and the farthest visible point. We achieved robust centerline determination with path tree propagation by connecting all internal voxels having a local maximum distance from the boundary. Adoption of the multi-resolution scheme dramatically increased computational efficiency. Control points representing the refined path were successively selected from the initial path points using the farthest visible point. The number of control points was adaptively changed to model the colon shape accurately, so that more control points were used in highly curved regions. Furthermore, a separate smoothing step was unnecessary, since our method generates a smooth interpolated path from an exact local-maximum path. The experimental results on 10 clinical datasets showed that the navigation path was generated dramatically faster than with conventional methods and that the path was located in the center of the colonic section for an effective clinical examination. The proposed method can be successfully applied to help diagnose colon cancer during virtual colonoscopy.

Conflict of interest statement

There is no conflict of interest for this paper.
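The conclusions above note that a separate smoothing step is unnecessary because the selected control points are interpolated with a cubic spline. As a small illustration only (not the authors' implementation: the paper does not specify which cubic formulation was used, and the function names `catmull_rom` and `interpolate_path` are our own), a uniform Catmull-Rom spline, one common interpolating cubic, can densify a sparse set of 3D control points into a smooth path:

```python
# Sketch of the spline step described in the conclusions: interpolate a
# smooth navigation path through a sparse set of 3D control points using
# a uniform Catmull-Rom cubic spline. This is an illustrative stand-in,
# not the authors' code; the paper only states that control points are
# interpolated with a cubic spline.

def catmull_rom(p0, p1, p2, p3, t):
    """Evaluate one Catmull-Rom segment between p1 and p2 at t in [0, 1]."""
    t2, t3 = t * t, t * t * t
    return tuple(
        0.5 * ((2 * b) + (-a + c) * t
               + (2 * a - 5 * b + 4 * c - d) * t2
               + (-a + 3 * b - 3 * c + d) * t3)
        for a, b, c, d in zip(p0, p1, p2, p3)
    )

def interpolate_path(control_points, samples_per_segment=10):
    """Return a dense, smooth path passing through every control point."""
    # Duplicate the endpoints so the spline also passes through the first
    # and last control points.
    pts = [control_points[0]] + list(control_points) + [control_points[-1]]
    path = []
    for i in range(len(pts) - 3):
        for s in range(samples_per_segment):
            path.append(catmull_rom(pts[i], pts[i + 1], pts[i + 2],
                                    pts[i + 3], s / samples_per_segment))
    path.append(control_points[-1])
    return path
```

A uniform Catmull-Rom segment passes exactly through its two middle control points, so the dense path interpolates every selected control point while remaining smooth, which is the property the conclusions rely on when omitting a separate smoothing step.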

Jeongjin Lee is a Research Professor in the Department of Radiology, University of Ulsan College of Medicine, Korea. He received his BS degree (2000) from the Department of Mechanical Engineering, and his MS (2002) and PhD (2008) degrees from the School of Computer Science and Engineering, Seoul National University, Korea. His current interests are virtual colonoscopy, computer animation, image segmentation, and registration.

Gyehyun Kim is a PhD candidate in Computer Science and Engineering at Seoul National University, Korea. He received his BS degree (2002) from the School of Electronic Engineering and MS degree (2004) from the Department of Information Communication and Engineering at Soongsil University. His current interests are medical image registration and segmentation.

J. Lee et al. / Computers in Biology and Medicine 38 (2008) 1012–1023

Ho Lee is a PhD candidate in the School of Electrical Engineering and Computer Science at Seoul National University, Korea. He received his BS degree (2000) from the School of Electronic Engineering and MS degree (2002) from the Department of Information Communication and Engineering at Soongsil University. He worked at the Research and Development Center of INFINITT as a Research Engineer. His major research interests include image registration, image reconstruction, visualization, and medical image processing.


Byeong-Seok Shin is a Professor in the School of Computer Engineering, Inha University, Korea. His current research interests include volume rendering, real-time graphics, and medical imaging. He received his BS, MS, and PhD degrees in Computer Engineering from Seoul National University, Korea.

Yeong Gil Shin is a Professor in the School of Electrical Engineering and Computer Science, Seoul National University, Korea. He received his BS degree (1982) and MS degree (1984) in Computer Science from Seoul National University, Korea, and his PhD degree (1990) in Computer Science from the University of Southern California. His current research interests include computer graphics, volume rendering, and medical imaging.