
Computers and Mathematics with Applications. Journal homepage: www.elsevier.com/locate/camwa

Medical image interpolation based on multi-resolution registration

Juelin Leng a, Guoliang Xu a,∗, Yongjie Zhang b

a State Key Laboratory of Scientific and Engineering Computing, Institute of Computational Mathematics, Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Beijing 100190, China
b Department of Mechanical Engineering, Carnegie Mellon University, Pittsburgh, PA 15213, USA

Article info

Article history: Received 14 October 2012; Received in revised form 21 March 2013; Accepted 28 April 2013

Keywords: Image interpolation; Registration-based interpolation; $L^2$-gradient flow; Multi-resolution transformation; Shape feature preservation

Abstract

We present a novel registration-based image interpolation approach in this paper. The proposed method is divided into two steps: image registration and intensity interpolation. An image registration method is developed to construct a corresponding transformation, represented by a bicubic B-spline vector-valued function, between the given images, so that the image features are well matched. To match features from coarse to fine, a multi-resolution strategy is applied, with different numbers of B-spline control points adopted at the various resolution levels. After registration, the intensity values of the in-between images are calculated by linear/cubic interpolation along the matching lines. Experimental results demonstrate that our interpolation approach is effective, robust, and capable of producing continuously deformed in-between images with clear shape features.

© 2013 Elsevier Ltd. All rights reserved.

1. Introduction

Image interpolation is a widely used operation in image processing, medical imaging, and computer graphics. In medical imaging, a slice sequence of an organ or tissue is obtained with high resolution using CT, MRI, or other modalities. However, the spacing between neighboring slices is often much larger than the pixel size (as shown in Fig. 1.1), due to limitations of the imaging devices or to time, storage, or dose constraints. Direct use of such data for 3D reconstruction often yields structure-discontinuous, step-shaped iso-surfaces and other problems, since the voxel dimensions are anisotropic. To obtain volume data with isotropic dimensions for 3D structure reconstruction, we need to interpolate several in-between slices (see [1,2]).

Many methods have been developed for solving the image interpolation problem. They can be classified into two categories (see [3]): scene-based and object-based. Scene-based methods are simple, intuitive, and easy to use, but their accuracy is usually poor. Object-based methods generally have better accuracy, but their computational complexity is higher. In a scene-based method, the pixel intensity of the in-between slices is determined directly from the pixel intensity of the given slices at the identical position. The earliest scene-based method is linear interpolation, which is simple, fast, and widely used. Later, cubic splines and other polynomials were used in medical image interpolation (see [4,5]). In recent years, the kriging method from statistics has been used for gray-value interpolation (see [6]). Basically, all these methods average the intensities of the neighboring slices without considering shape feature deformation. Hence, the resulting in-between slices show blurring effects at object boundaries.
To overcome the shortcomings of the scene-based methods, several object-based interpolation schemes have been developed (see [7–9]), in which the object information extracted from the given slices is used for guiding the interpolation

∗ Corresponding author. Tel.: +86 010 62624352.
E-mail addresses: [email protected] (J. Leng), [email protected] (G. Xu), [email protected] (Y. Zhang).

0898-1221/$ – see front matter © 2013 Elsevier Ltd. All rights reserved. http://dx.doi.org/10.1016/j.camwa.2013.04.026


J. Leng et al. / Computers and Mathematics with Applications 66 (2013) 1–18

Fig. 1.1. A sequence of MR images. The spacing between neighboring slices is much larger than the pixel size; therefore, several slices should be interpolated between them.

process. The shape-based method (see [10–13]), one type of object-based method, constructs distance transforms of the given slices and then interpolates the displacement instead of the intensity values; it therefore maintains geometric changes of objects in a better way. Because the shape-based method can be implemented efficiently and achieves reasonably good results, it has become a widely used technique. However, it cannot effectively deal with objects with holes, large offsets, or heavy invagination. To overcome these drawbacks, several modified shape-based methods using information extracted from the images have been developed (see [14,15]). In some object-based methods, flexible registration is used to align neighboring slices, and image interpolation is then carried out between the corresponding positions in each slice (see [8,16,17]). These methods, which include the fluid model (see [18]), the elastic model (see [19]), the optical flow model (see [20]), and B-spline-based free-form deformation (see [21]), became popular in the 1990s. The success of registration-based image interpolation depends on two prerequisites (see [21]): the neighboring slices contain similar anatomical features, and the registration algorithm is capable of finding the transformation which maps similar features correctly.

In this paper, we present a novel and accurate registration-based interpolation method. Two continuous maps of the image domain, represented as bicubic B-spline vector-valued functions, are constructed based on their multi-resolution representations, so that the features of the given images are matched at multiple scales, from large to small. By minimizing an energy functional, an $L^2$-gradient flow is derived. The flow is discretized by the finite element method in the spatial direction and an explicit Euler scheme in the temporal direction.
The in-between images are then obtained by interpolating intensity values along the matching lines at any sampling rate. The experimental results show that our method is robust, efficient, and yields desirable results.

The remainder of this paper is organized as follows. In Section 2, we describe the image interpolation problem and the motivation of our approach. Our registration model, interpolation strategies, and algorithm outlines are presented in Section 3. Section 4 is devoted to the computational details of image registration and intensity interpolation. Section 5 presents several experimental results. Finally, we conclude the paper with a summary in Section 6.

2. Motivations

Given a sequence of images (also called slices) $\{I_i\}_{i=0}^{N}$ of the same size $(w+1)\times(h+1)$ ($w$ and $h$ are two positive integers), the image interpolation problem is to determine a sequence of feature-preserving in-between slices $J_{i,m}$, $i = 0, \ldots, N-1$, $m = 1, 2, \ldots, M_i$, such that $\{J_{i,m}\}_{m=1}^{M_i}$ makes up a continuous transition from $I_i$ to $I_{i+1}$. Here, $M_i$ is the number of slices inserted between $I_i$ and $I_{i+1}$. For simplicity, we assume that $M_i = M$ for $i = 0, 1, \ldots, N-1$.

Note that, for the image interpolation problem, both the input and the output images are discrete. To facilitate computation, a discrete image $I = [I_{ij}]_{(w+1)\times(h+1)}$ of size $(w+1)\times(h+1)$ can be treated as a continuous function $I(\mathbf{u})$ with $\mathbf{u} = [u, v]^T \in \Omega = [0,1]^2$. Here, $I(\frac{i}{w}, \frac{j}{h}) = I_{ij}$, $i = 0, 1, \ldots, w$, $j = 0, 1, \ldots, h$, and the other function values are computed by bilinear interpolation. We define the point set $U = \{\mathbf{u}_{ij} = [\frac{i}{w}, \frac{j}{h}]^T : i = 0, 1, \ldots, w,\ j = 0, 1, \ldots, h\}$ as the pixel point set.

As mentioned above, image interpolation methods can be classified into two types: scene-based and object-based. For scene-based interpolation, the intensity values of the in-between image are determined directly from the intensity values of the given images at the same positions. Scene-based methods are simple and computationally inexpensive, but may produce significant artifacts: as shown in Fig. 2.1, in-between images obtained from linear intensity interpolation are blurred and ghosted, since direct interpolation does not take the shape and structure information into account. The registration-based method, one type of object-based method, is widely used for image interpolation. Given two images $I_0(\mathbf{u})$ and $I_1(\mathbf{u})$, registration-based interpolation always involves two steps: first, a registration method aligns one image with the other, so that the transformation (position map) between the two images can be constructed; second, based on this map, a traditional interpolation is applied along the matching lines.
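The continuous extension of a discrete image can be sketched as follows. This is an illustrative helper (the name `sample_bilinear` is not from the paper) that evaluates $I(u, v)$ on $[0,1]^2$ by bilinear interpolation of the four surrounding pixels, following the convention $I(\frac{i}{w}, \frac{j}{h}) = I_{ij}$:

```python
def sample_bilinear(img, u, v):
    """Evaluate the continuous extension I(u, v) of a discrete image.

    img[i][j] holds the pixel value at (i/w, j/h); (u, v) lies in [0, 1]^2.
    """
    w, h = len(img) - 1, len(img[0]) - 1
    x, y = u * w, v * h                          # map [0,1]^2 to pixel coordinates
    i, j = min(int(x), w - 1), min(int(y), h - 1)
    s, t = x - i, y - j                          # local coordinates in the cell
    return ((1 - s) * (1 - t) * img[i][j] + s * (1 - t) * img[i + 1][j]
            + (1 - s) * t * img[i][j + 1] + s * t * img[i + 1][j + 1])
```

At the pixel points $\mathbf{u}_{ij}$ this reproduces $I_{ij}$ exactly; elsewhere it blends the enclosing cell's corners.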


Fig. 2.1. Images obtained from linear intensity interpolation of $I_0$ and $I_1$, i.e., $I(\mathbf{u}, t) = (1-t)I_0(\mathbf{u}) + tI_1(\mathbf{u})$, $t \in [0,1]$.

Fig. 2.2. An example of the asymmetric roles played by $I_0$ and $I_1$. First row: the results of aligning $I_0(\mathbf{u})$ to $I_1(\mathbf{u})$. Second row: the results of aligning $I_1(\mathbf{u})$ to $I_0(\mathbf{u})$.

The main idea of image registration is to find a transformation $\mathbf{x}(\mathbf{u})$ which aligns one image with the other. The similarity of two images can be measured by the sum of squared intensity differences (SSD), cross correlation (CC), or mutual information (MI). The transformation $\mathbf{x}(\mathbf{u})$ can be rigid or non-rigid. A rigid transformation is simpler, with fewer parameters (translation, rotation, and scaling factors), whereas a non-rigid transformation is more flexible. The transformation $\mathbf{x}(\mathbf{u})$ is usually represented by piecewise-linear or B-spline functions. To smooth the transformation, regularization terms which minimize the variation of the partial derivatives of the transformation are applied. Hence, the energy functional for the registration model is written as

$$E(\mathbf{x}) = S\big(I_0(\mathbf{x}(\mathbf{u})), I_1(\mathbf{u})\big) + R(\mathbf{x}), \tag{2.1}$$

where the first term measures the similarity between $I_0(\mathbf{x}(\mathbf{u}))$ and $I_1(\mathbf{u})$, and the second term reflects the regularity of $\mathbf{x}(\mathbf{u})$. In the similarity term, $I_0$ is deformed by $\mathbf{x}$ while $I_1$ is unchanged. The registration model (2.1), which deforms only one image, is called the single-direction model. Note that the roles of $I_0$ and $I_1$ are asymmetric in the single-direction model.

To illustrate the asymmetry problem, we present an example in Fig. 2.2. The map $\mathbf{x}_0(\mathbf{u})$ is constructed to deform $I_0$ such that $I_0(\mathbf{x}_0(\mathbf{u})) \approx I_1(\mathbf{u})$; this process is called forward registration. Inversely, we construct another map $\mathbf{x}_1(\mathbf{u})$ satisfying $I_1(\mathbf{x}_1(\mathbf{u})) \approx I_0(\mathbf{u})$; this process is called backward registration. Note that the ring hole in $I_1$ cannot emerge when deforming $I_0$ to $I_1$, while the hole only gets smaller when deforming $I_1$ to $I_0$. The maps $\mathbf{x}_0$ and $\mathbf{x}_1$ are represented by B-spline functions and displayed using grids. The grid of a map $\mathbf{x}(\mathbf{u})$ ($\mathbf{u} = [u,v]^T \in [0,1]^2$) consists of a vertical curve family $\{\mathbf{x}(\frac{i}{20}, v)\}_{i=0}^{20}$ and a horizontal curve family $\{\mathbf{x}(u, \frac{j}{20})\}_{j=0}^{20}$. If the backward registration were the inverse process of the forward registration, then the equation $\mathbf{x}_0^{-1} = \mathbf{x}_1$, or equivalently $\mathbf{x}_0 = \mathbf{x}_1^{-1}$, would have to be satisfied. However, as shown in Fig. 2.2, the forward registration is not inverse to the backward registration, and therefore the roles of $I_0$ and $I_1$ differ in the single-direction registration model. In Fig. 2.2, image $J_{\mathrm{mid}}^{(01)}(\mathbf{u})$ is the mid-image interpolated based on the forward registration, and image $J_{\mathrm{mid}}^{(10)}(\mathbf{u})$ is the mid-image derived from the backward registration. Averaging the in-between images obtained from the forward and backward registrations is a common way to avoid the bias (see [22]). However, artifacts may still remain, since the in-between images generated by the two registrations may differ considerably. In our model, we instead use two maps to deform both $I_0$ and $I_1$.


3. Methods and algorithm outlines

Our proposed image interpolation has two steps: image registration and intensity interpolation. In this section, we present our registration model and interpolation strategies, and then outline the algorithm.

3.1. Our registration model

Given two images $I_0$ and $I_1$ of size $(w+1)\times(h+1)$, we represent them as continuous functions $I_0(\mathbf{u})$ and $I_1(\mathbf{u})$ with $\mathbf{u} = [u,v]^T \in \Omega = [0,1]^2$ using bilinear interpolation. In our registration model, we intend to find two maps

$$\mathbf{x}_k = [x_k, y_k]^T : \Omega \to \Omega, \qquad k = 0, 1,$$

satisfying the following:
(i) $\mathbf{x}_k$ are $C^2$ maps;
(ii) $x_k(0,v) = 0$, $x_k(1,v) = 1$, $y_k(u,0) = 0$, and $y_k(u,1) = 1$;
(iii) for a given $\epsilon$ ($0 < \epsilon < 1$),

$$\det[\mathbf{x}_{ku}, \mathbf{x}_{kv}] \geq \epsilon \quad \text{on } \Omega, \tag{3.1}$$

such that

$$E(\mathbf{x}_0, \mathbf{x}_1) = \int_\Omega \frac{[I_0(\mathbf{x}_0(\mathbf{u})) - I_1(\mathbf{x}_1(\mathbf{u}))]^2}{1.0 + c\,\big(I_0(\mathbf{x}_0(\mathbf{u}))^2 + I_1(\mathbf{x}_1(\mathbf{u}))^2\big)}\, d\mathbf{u} + \lambda_1 \sum_{k=0}^{1} \int_\Omega \big(\|\mathbf{x}_{ku}(\mathbf{u})\|^2 + \|\mathbf{x}_{kv}(\mathbf{u})\|^2\big)\, d\mathbf{u} + \lambda_2 \sum_{k=0}^{1} \int_\Omega \|\mathbf{x}_{ku}(\mathbf{u}) \times \mathbf{x}_{kv}(\mathbf{u})\|^2\, d\mathbf{u} \tag{3.2}$$

is minimized. Here, $\mathbf{x}_{ku}(\mathbf{u})$ and $\mathbf{x}_{kv}(\mathbf{u})$ denote the first-order partial derivatives of $\mathbf{x}_k(\mathbf{u})$ with respect to the variables $u$ and $v$, respectively. Condition (iii) is referred to as a regularity condition, which guarantees that $\mathbf{x}_k$ is an injective map. Let $V(\Omega) = \{\mathbf{x} : \Omega \to \Omega \text{ satisfying conditions (i)–(iii)}\}$; then every $\mathbf{x} \in V(\Omega)$ is a one-to-one surjective map (proved in [23]). In this paper, the maps $\mathbf{x}_k(\mathbf{u})$ ($k = 0, 1$) are represented as vector-valued bivariate cubic B-spline functions defined on $\Omega$.

The first term of the energy functional (3.2) is referred to as the fidelity term, which minimizes the error between the two deformed images. Since the registration problem is ill posed, several constraints are required to make $\mathbf{x}_k(\mathbf{u})$ ($k = 0, 1$) as regular as possible. The second and third terms smooth the transformations $\mathbf{x}_k$ ($k = 0, 1$). We call $R_1(\mathbf{x}_0, \mathbf{x}_1) = \sum_{k=0}^{1} \int_\Omega (\|\mathbf{x}_{ku}\|^2 + \|\mathbf{x}_{kv}\|^2)\, d\mathbf{u}$ the first-order regularization term, and $R_2(\mathbf{x}_0, \mathbf{x}_1) = \sum_{k=0}^{1} \int_\Omega \|\mathbf{x}_{ku} \times \mathbf{x}_{kv}\|^2\, d\mathbf{u}$ the area regularization term. Parameters $\lambda_1$ and $\lambda_2$ are two given coefficients of the regularization terms. The interpretations and analysis of our model are as follows.

The fidelity term. For the fidelity term, the two maps $\mathbf{x}_0(\mathbf{u})$ and $\mathbf{x}_1(\mathbf{u})$ are designed to deform $I_0$ and $I_1$ such that the deformed images $I_k(\mathbf{x}_k(\mathbf{u}))$ ($k = 0, 1$) are similar to each other. Compared to single-direction registration, using two maps in the fidelity term overcomes the bias between $I_0$ and $I_1$. Furthermore, the images $I_0(\mathbf{x}_0(\mathbf{u}))$ and $I_1(\mathbf{x}_1(\mathbf{u}))$ are matched much better than with only one map, since the number of degrees of freedom is doubled. To measure the similarity of the two images, one simple and computationally inexpensive metric is the sum of squared intensity differences (SSD), $\int_\Omega (I_0(\mathbf{u}) - I_1(\mathbf{u}))^2\, d\mathbf{u}$. However, the SSD tolerates mismatches in low-intensity regions more than in high-intensity regions, while in practical applications the features with low intensity are as important as those with high intensity. Thus, a modified SSD metric $\int_\Omega g(\mathbf{u})(I_0(\mathbf{u}) - I_1(\mathbf{u}))^2\, d\mathbf{u}$ is applied in our model. Here, the factor $g(\mathbf{u}) = 1/\big(1.0 + c\,(I_0(\mathbf{u})^2 + I_1(\mathbf{u})^2)\big)$, whose denominator is no less than 1.0, and $c$ is a given positive constant.
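The weighting factor of the modified SSD can be sketched directly; the helper name and the sample value of $c$ below are illustrative only:

```python
def ssd_weight(i0, i1, c=1e-4):
    """Weight g = 1/(1 + c*(I0^2 + I1^2)) of the modified SSD.

    The denominator is always >= 1, so bright regions are down-weighted
    and dark regions keep (almost) full weight.
    """
    return 1.0 / (1.0 + c * (i0 * i0 + i1 * i1))
```

For zero intensities the weight is exactly 1; for bright pixels it decays toward 0, which equalizes the influence of low- and high-intensity features.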

The first-order regularization term. For an arbitrary map $\mathbf{x}(\mathbf{u})$, $\int_0^1 \|\mathbf{x}_u(u, v_0)\|\, du$ is the arc length of the plane curve $C(u) := \mathbf{x}(u, v_0)$, while $\int_0^1 \|\mathbf{x}_v(u_0, v)\|\, dv$ is the arc length of the plane curve $C(v) := \mathbf{x}(u_0, v)$. Hence, the first-order regularization term $\int_\Omega (\|\mathbf{x}_u(\mathbf{u})\|^2 + \|\mathbf{x}_v(\mathbf{u})\|^2)\, d\mathbf{u}$ aims to make the transformation map $\mathbf{x}$ vary uniformly with respect to its variables $u$ and $v$. The regularization term is a convex function with respect to $\mathbf{x}$, and it reaches a minimum if and only if $\mathbf{x}$ is the identity map.

The area regularization term. Under the parametric form $\mathbf{x} : \Omega \to \Omega$, the area element is given by $\|\mathbf{x}_u \times \mathbf{x}_v\|\, d\mathbf{u}$. For $\Omega = [0,1]^2$, the area of $\mathbf{x} \in V(\Omega)$ is $|\Omega| = \int_\Omega \|\mathbf{x}_u \times \mathbf{x}_v\|\, d\mathbf{u} = 1$. Using the Cauchy–Schwarz inequality, we have $1 = \int_\Omega \|\mathbf{x}_u \times \mathbf{x}_v\|\, d\mathbf{u} \leq |\Omega|^{1/2} \cdot \big(\int_\Omega \|\mathbf{x}_u \times \mathbf{x}_v\|^2\, d\mathbf{u}\big)^{1/2}$, and the equality holds if and only if $\|\mathbf{x}_u \times \mathbf{x}_v\| \equiv 1$. Thus the area regularization term $\int_\Omega \|\mathbf{x}_u \times \mathbf{x}_v\|^2\, d\mathbf{u}$ constrains the transformation $\mathbf{x}$ so that the area element remains consistent. Since $\|\mathbf{x}_u \times \mathbf{x}_v\|^2 = \|\mathbf{x}_u\|^2 \|\mathbf{x}_v\|^2 - \langle \mathbf{x}_u, \mathbf{x}_v \rangle^2$, the area regularization term can also be written as $\int_\Omega \big(\|\mathbf{x}_u\|^2 \|\mathbf{x}_v\|^2 - \langle \mathbf{x}_u, \mathbf{x}_v \rangle^2\big)\, d\mathbf{u}$.
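Both regularization terms can be approximated numerically by central differences on a sampling grid; the sketch below does this for a generic map $\mathbf{x}(u,v)$ (function names are illustrative; the paper evaluates these terms on the B-spline representation instead):

```python
def regularization_terms(xmap, n=32):
    """Approximate R1 = integral of (|x_u|^2 + |x_v|^2) and
    R2 = integral of |x_u x x_v|^2 over [0,1]^2 for a map
    xmap(u, v) -> (x, y), using central differences on an n-by-n grid."""
    h = 1.0 / n
    r1 = r2 = 0.0
    for i in range(n):
        for j in range(n):
            u, v = (i + 0.5) * h, (j + 0.5) * h
            xu = [(a - b) / h for a, b in
                  zip(xmap(u + h / 2, v), xmap(u - h / 2, v))]
            xv = [(a - b) / h for a, b in
                  zip(xmap(u, v + h / 2), xmap(u, v - h / 2))]
            cross = xu[0] * xv[1] - xu[1] * xv[0]   # 2D cross product (scalar)
            r1 += (xu[0] ** 2 + xu[1] ** 2 + xv[0] ** 2 + xv[1] ** 2) * h * h
            r2 += cross ** 2 * h * h
    return r1, r2
```

For the identity map this returns $R_1 \approx 2$ and $R_2 \approx 1$, consistent with the analysis above: $R_1$ attains its minimum at the identity, and $\|\mathbf{x}_u \times \mathbf{x}_v\| \equiv 1$ there.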


Fig. 3.1. Illustration of registration-based linear interpolation. Arrows between the plane $t = 0$ and the plane $t = 1$ are several matching lines. The intensity value at $\mathbf{v}_0$ on the interpolating plane $t = t_0$ is determined by linearly interpolating $I_0(\mathbf{u}_0)$ and $I_1(\mathbf{u}_1)$, where $\mathbf{u}_0 = \tilde{\mathbf{v}}_{t_0}^{-1}(\mathbf{v}_0)$ and $\mathbf{u}_1 = \mathbf{y}_{01}(\mathbf{u}_0)$.

Selection of parameters $\lambda_1$ and $\lambda_2$. The selection of the parameters $\lambda_1$ and $\lambda_2$ depends on the deformation between the two given images. The first-order regularization term $R_1$ penalizes the flexibility of the transformations $\mathbf{x}_k$ ($k = 0, 1$). Therefore, a large $\lambda_1$ should be chosen if the deformation between the two given images is nearly rigid; in contrast, a small $\lambda_1$ is taken if there is a large distortion between $I_0$ and $I_1$. The regularization term $R_2$ constrains the area elements $\|\mathbf{x}_{ku} \times \mathbf{x}_{kv}\| = |\det[\mathbf{x}_{ku}, \mathbf{x}_{kv}]|$ for $k = 0, 1$, which prevents the maps from violating the regularity condition (3.1) during the registration process. With too small a $\lambda_2$, the registration process may be forced to stop because condition (3.1) is no longer satisfied; with too large a $\lambda_2$, the images cannot be well matched, since elastic deformation is suppressed by $R_2$. In our experience, the proposed registration model is not very sensitive to the regularization parameters; therefore, in our experiments, $\lambda_1$ and $\lambda_2$ are selected empirically according to the given images. The registration model is solved using a geometric flow-based method, and the computational details are introduced in the next section.

3.2. Our interpolation strategies

Suppose that the solution of the registration model (3.2) is $(\mathbf{x}_0(\mathbf{u}), \mathbf{x}_1(\mathbf{u}))$. Then the transformations between $I_0$ and $I_1$ can be constructed. Let the domains of definition of $I_0(\mathbf{u})$ and $I_1(\mathbf{u})$ be $\Omega_0 = [0,1]^2$ and $\Omega_1 = [0,1]^2$, respectively. From the matching relation $I_0(\mathbf{x}_0(\mathbf{u})) \approx I_1(\mathbf{x}_1(\mathbf{u}))$, the transformation from $I_0$ to $I_1$ is given by $\mathbf{y}_{01} := \mathbf{x}_1 \circ \mathbf{x}_0^{-1} : \Omega_0 \to \Omega_1$, and the transformation from $I_1$ to $I_0$ by $\mathbf{y}_{10} := \mathbf{x}_0 \circ \mathbf{x}_1^{-1} : \Omega_1 \to \Omega_0$. The image interpolation step is then performed based on these transformations. The transformations $\mathbf{x}_0$ and $\mathbf{x}_1$ are invertible, since they are one-to-one surjective maps; thus, $\mathbf{y}_{01} = \mathbf{x}_1 \circ \mathbf{x}_0^{-1}$ and $\mathbf{y}_{10} = \mathbf{x}_0 \circ \mathbf{x}_1^{-1}$ are one-to-one surjective maps as well.

3.2.1. Registration-based linear interpolation

Suppose that we are given two images $I_0$ and $I_1$. A registration-based linear interpolation method is used for inserting slices between them. Let $I : \Omega \times [0,1] \to \mathbb{R}$ be a 3D image satisfying $I(\mathbf{u}, 0) = I_0(\mathbf{u})$ and $I(\mathbf{u}, 1) = I_1(\mathbf{u})$. The intensity values of $I(\mathbf{u}, t)$ for $t \in (0,1)$ are interpolated along the matching lines connecting the slice $t = 0$ and the slice $t = 1$. Then, for each $t_0 \in (0,1)$, an in-between slice with respect to $I_0$ and $I_1$ is given by $I(\mathbf{u}, t_0)$. For any $\mathbf{u}_0 \in \Omega$, the matching line is defined as

$$L_{\mathbf{u}_0}(t) = (1-t)\,\mathbf{u}_0 + t\,\mathbf{y}_{01}(\mathbf{u}_0), \qquad t \in [0,1].$$

Then we have $L_{\mathbf{u}_0}(0) = \mathbf{u}_0$ and $L_{\mathbf{u}_0}(1) = \mathbf{y}_{01}(\mathbf{u}_0) := \mathbf{u}_1$. Therefore, the matching lines connect each point $\mathbf{u}_0$ in $I_0$ to its matching point $\mathbf{u}_1$ in $I_1$ (see Fig. 3.1). Let $\tilde{\mathbf{v}}_t(\mathbf{u}) = (1-t)\mathbf{u} + t\,\mathbf{y}_{01}(\mathbf{u})$, with fixed $t \in [0,1]$. For a fixed $(\mathbf{v}_0, t_0) \in \Omega \times [0,1]$, there is a unique matching line $L_{\mathbf{u}_0}(t)$ satisfying $L_{\mathbf{u}_0}(t_0) = \mathbf{v}_0$. The relationship between $\mathbf{v}_0$ and $\mathbf{u}_0$ is $\mathbf{u}_0 = \tilde{\mathbf{v}}_{t_0}^{-1}(\mathbf{v}_0)$. The intensity at $(\mathbf{v}_0, t_0)$ is then computed by linearly interpolating $I_0(\mathbf{u}_0)$ and $I_1(\mathbf{y}_{01}(\mathbf{u}_0))$. Thus, the 3D image is given as

$$I(\mathbf{u}, t) = (1-t)\, I_0\big(\tilde{\mathbf{v}}_t^{-1}(\mathbf{u})\big) + t\, I_1\big(\mathbf{y}_{01}(\tilde{\mathbf{v}}_t^{-1}(\mathbf{u}))\big).$$

Let $M$ be the number of slices interpolated between $I_0$ and $I_1$. Then the in-between slices are

$$J_m(\mathbf{u}) = I\Big(\mathbf{u}, \frac{m}{M+1}\Big) = \Big(1 - \frac{m}{M+1}\Big) I_0(\mathbf{u}_0) + \frac{m}{M+1}\, I_1(\mathbf{u}_1), \qquad m = 1, 2, \ldots, M, \tag{3.3}$$

where $\mathbf{u}_0 = \tilde{\mathbf{v}}_{m/(M+1)}^{-1}(\mathbf{u})$ and $\mathbf{u}_1 = \mathbf{y}_{01}(\mathbf{u}_0)$.
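The interpolation formula (3.3) can be sketched in one dimension, where a monotone matching map $y_{01}$ makes $\tilde{v}_t$ invertible by bisection. All names below are illustrative; the paper works with 2D B-spline maps rather than 1D functions:

```python
def insert_slices_1d(I0, I1, y01, M, n=11):
    """1D sketch of Eq. (3.3): insert M slices between I0 and I1 along
    matching lines. I0, I1 are functions on [0,1]; y01 is a monotone
    matching map with y01(0) = 0 and y01(1) = 1."""
    def v_t(u, t):                        # matching-line position v~_t(u)
        return (1.0 - t) * u + t * y01(u)

    def v_t_inv(v, t):                    # invert the monotone map by bisection
        lo, hi = 0.0, 1.0
        for _ in range(60):
            mid = 0.5 * (lo + hi)
            if v_t(mid, t) < v:
                lo = mid
            else:
                hi = mid
        return 0.5 * (lo + hi)

    slices = []
    for m in range(1, M + 1):
        t = m / (M + 1)
        row = []
        for i in range(n):
            u = i / (n - 1)
            u0 = v_t_inv(u, t)            # u0 = v~_t^{-1}(u)
            u1 = y01(u0)                  # matching point on I1
            row.append((1.0 - t) * I0(u0) + t * I1(u1))
        slices.append(row)
    return slices
```

With the identity matching map this degenerates to plain linear intensity interpolation, which is a useful sanity check.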


Fig. 3.2. Illustration of registration-based cubic interpolation. After registration, for any $\mathbf{p}_i$ located on $I_i$, the corresponding points $\mathbf{p}_{i-1}$, $\mathbf{p}_{i+1}$, $\mathbf{p}_{i+2}$ can be found in $I_{i-1}$, $I_{i+1}$, $I_{i+2}$, respectively. The location $\mathbf{p}_{i,m}$ on $J_{i,m}$ is computed by Catmull–Rom interpolation of $\{\mathbf{p}_j\}_{j=i-1}^{i+2}$, and the intensity at $\mathbf{p}_{i,m}$ is determined by $I_i(\mathbf{p}_i)$ and $I_{i+1}(\mathbf{p}_{i+1})$.

3.2.2. Registration-based cubic interpolation

Suppose that we are given a series of image slices $\{I_i\}_{i=0}^{N}$, and that image registration has been performed for each pair of neighboring slices $I_i$ and $I_{i+1}$. Let $\mathbf{y}_{i,i+1}$ be the map from $I_i$ to $I_{i+1}$, and $\mathbf{y}_{i+1,i}$ the map from $I_{i+1}$ to $I_i$. For a series of matched points $\{\mathbf{p}_i\}_{i=0}^{N}$, with $\mathbf{p}_i$ located on $I_i$, cubic interpolation is more appropriate than linear interpolation for constructing matching lines, since it leads to a smooth shape of the 3D volume structure. The matching line starting from $\mathbf{p}_i$ is defined as $L_{\mathbf{p}_i}(t) = \sum_{j=0}^{3} f_j(t)\, \mathbf{p}_{i-1+j}$, with $t \in [0,1]$, $L_{\mathbf{p}_i}(0) = \mathbf{p}_i$, and $L_{\mathbf{p}_i}(1) = \mathbf{p}_{i+1}$, where

$$f_0(t) = -\tfrac{1}{2}t^3 + t^2 - \tfrac{1}{2}t, \qquad f_1(t) = \tfrac{3}{2}t^3 - \tfrac{5}{2}t^2 + 1, \qquad f_2(t) = -\tfrac{3}{2}t^3 + 2t^2 + \tfrac{1}{2}t, \qquad f_3(t) = \tfrac{1}{2}t^3 - \tfrac{1}{2}t^2,$$

and

$$\mathbf{p}_{i-1} = \mathbf{y}_{i,i-1}(\mathbf{p}_i), \qquad \mathbf{p}_{i+1} = \mathbf{y}_{i,i+1}(\mathbf{p}_i), \qquad \mathbf{p}_{i+2} = \mathbf{y}_{i+1,i+2}\big(\mathbf{y}_{i,i+1}(\mathbf{p}_i)\big). \tag{3.4}$$

Here, the cubic interpolation method is chosen as the Catmull–Rom method [5]. Let $J_{i,m}(\mathbf{u})$, $m = 1, \ldots, M$, be the slices inserted between $I_i$ and $I_{i+1}$. For a point $\mathbf{p}_{i,m}$ located on $J_{i,m}$, there exists a unique $\mathbf{p}_i$ satisfying

$$\mathbf{p}_{i,m} = L_{\mathbf{p}_i}\Big(\frac{m}{M+1}\Big) = \sum_{j=0}^{3} f_j\Big(\frac{m}{M+1}\Big)\, \mathbf{p}_{i-1+j}. \tag{3.5}$$

Then the intensity at $\mathbf{p}_{i,m}$ is determined by

$$J_{i,m}(\mathbf{p}_{i,m}) = \Big(1 - \frac{m}{M+1}\Big) I_i(\mathbf{p}_i) + \frac{m}{M+1}\, I_{i+1}(\mathbf{p}_{i+1}). \tag{3.6}$$
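The Catmull–Rom blending functions and the matching-line evaluation can be sketched as follows (illustrative helper names, with 2D points as tuples):

```python
def catmull_rom_weights(t):
    """The blending functions f0..f3 of Section 3.2.2, evaluated at t."""
    return (-0.5 * t**3 + t**2 - 0.5 * t,
            1.5 * t**3 - 2.5 * t**2 + 1.0,
            -1.5 * t**3 + 2.0 * t**2 + 0.5 * t,
            0.5 * t**3 - 0.5 * t**2)

def matching_line_point(p, t):
    """Evaluate L_{p_i}(t) = sum_j f_j(t) p_{i-1+j} for four matched 2D
    points p = [p_{i-1}, p_i, p_{i+1}, p_{i+2}]."""
    f = catmull_rom_weights(t)
    return tuple(sum(fj * pj[d] for fj, pj in zip(f, p)) for d in range(2))
```

The weights sum to 1 for every $t$, and at $t = 0$ and $t = 1$ the curve reproduces $\mathbf{p}_i$ and $\mathbf{p}_{i+1}$ exactly, as required by $L_{\mathbf{p}_i}(0) = \mathbf{p}_i$, $L_{\mathbf{p}_i}(1) = \mathbf{p}_{i+1}$.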

Our registration-based cubic interpolation is illustrated in Fig. 3.2.

3.3. Algorithm outline

Given two images $I_0$ and $I_1$ of size $(w+1)\times(h+1)$, we achieve image registration using a multi-resolution-based method, so that the images are aligned from coarse to fine. During the image registration, the images $I_k$ ($k = 0, 1$) and the maps $\mathbf{x}_k$ ($k = 0, 1$) are represented in multi-resolution form. Let $L$ be the number of resolution levels, and let $(w_0, h_0), \ldots, (w_{L-1}, h_{L-1})$ be a sequence of integer pairs satisfying

$$w_l = E[\lambda_w^l\, w_0], \qquad h_l = E[\lambda_h^l\, h_0], \qquad l = 1, \ldots, L-1,$$

where $\lambda_w > 1$ and $\lambda_h > 1$ are given factors, and $E[\cdot]$ denotes taking the integer part. At the $l$th level, the images $I_k$ are converted to $I_k^{(l)}$ by a Gaussian filter, and the maps $\mathbf{x}_k(\mathbf{u})$ are represented as $(w_l, h_l)$-level bicubic B-spline functions $\mathbf{x}_k^{(l)}$. We refer to a bicubic B-spline function defined on the knots

$$\Big[0, 0, 0, 0, \frac{1}{w_l}, \frac{2}{w_l}, \ldots, \frac{w_l - 1}{w_l}, 1, 1, 1, 1\Big] \times \Big[0, 0, 0, 0, \frac{1}{h_l}, \frac{2}{h_l}, \ldots, \frac{h_l - 1}{h_l}, 1, 1, 1, 1\Big]$$

as the $(w_l, h_l)$-level bicubic B-spline function. Therefore, the $(w_l, h_l)$-level bicubic B-spline function space is spanned by the basis $N_i^{(1/w_l)}(u)\, N_j^{(1/h_l)}(v)$, for $i = 0, \ldots, w_l + 2$, $j = 0, \ldots, h_l + 2$.
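The sequence of level sizes can be generated with a one-line helper (an illustrative sketch; the function name is not from the paper):

```python
from math import floor

def resolution_levels(w0, h0, lam_w, lam_h, L):
    """Level sizes w_l = E[lam_w**l * w0], h_l = E[lam_h**l * h0]
    for l = 0, ..., L-1, where E[.] takes the integer part."""
    return [(floor(lam_w**l * w0), floor(lam_h**l * h0)) for l in range(L)]
```

Each level roughly multiplies the number of B-spline control points in each direction by the chosen factor, which is what lets coarse levels capture large deformations and fine levels capture details.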


The main algorithm of our multi-resolution registration-based image interpolation method is given in Algorithm 3.1.

Algorithm 3.1 (Image Interpolation).

I. Given $I_i$, $i = 0, 1, \ldots, N$, and the parameters $(w_0, h_0)$, $\lambda_w$, $\lambda_h$, $M$, $L$, and $c$. For $i = 0, \ldots, N-1$, image registration is applied to the neighboring slices $I_i$ and $I_{i+1}$. Without loss of generality, the registration steps for $I_0$ and $I_1$ are as follows.

1. Initialize $\mathbf{x}_0^{(0,0)}(\mathbf{u})$ and $\mathbf{x}_1^{(0,0)}(\mathbf{u})$ as identity maps, and represent them using $(w_0, h_0)$-level bicubic B-spline functions.
2. For $l = 0, \ldots, L-1$, do the following:
(a) Smooth $I_0$ and $I_1$ using the Gaussian filter $G(\mathbf{u})$ with standard deviations (see [23]) $\sigma_u^{(l)} = \big(\frac{w_l^{-2} - w^{-2}}{3}\big)^{1/2}$, $\sigma_v^{(l)} = \big(\frac{h_l^{-2} - h^{-2}}{3}\big)^{1/2}$. We then obtain the smoothed images $I_0^{(l)}$ and $I_1^{(l)}$.
(b) In the $(w_l, h_l)$-level bicubic B-spline space, compute $(\mathbf{x}_0^{(l)}, \mathbf{x}_1^{(l)})$ which minimize the energy functional
$$\tilde{E}(\mathbf{x}_0, \mathbf{x}_1) = \int_\Omega \frac{\big[I_0^{(l)}(\mathbf{x}_0(\mathbf{u})) - I_1^{(l)}(\mathbf{x}_1(\mathbf{u}))\big]^2}{1.0 + c\,\big(I_0^{(l)}(\mathbf{x}_0(\mathbf{u}))^2 + I_1^{(l)}(\mathbf{x}_1(\mathbf{u}))^2\big)}\, d\mathbf{u} + \lambda_1 \sum_{k=0}^{1} \int_\Omega \big(\|\mathbf{x}_{ku}\|^2 + \|\mathbf{x}_{kv}\|^2\big)\, d\mathbf{u} + \lambda_2 \sum_{k=0}^{1} \int_\Omega \|\mathbf{x}_{ku} \times \mathbf{x}_{kv}\|^2\, d\mathbf{u}, \tag{3.7}$$
with $\mathbf{x}_k^{(l,0)}(\mathbf{u})$ ($k = 0, 1$) as initial values.
(c) If $l \neq L-1$, convert the minimizers $\mathbf{x}_k^{(l)}(\mathbf{u})$ to the $(w_{l+1}, h_{l+1})$-level bicubic B-spline representations $\mathbf{x}_k^{(l+1,0)}(\mathbf{u})$ ($k = 0, 1$) using least-squares approximation (see Section 4.1). Set $l = l + 1$.
3. Set $\mathbf{x}_k(\mathbf{u}) = \mathbf{x}_k^{(L-1)}(\mathbf{u})$ ($k = 0, 1$), and compute $\mathbf{x}_0(\mathbf{x}_1^{-1}(\mathbf{u}))$ and $\mathbf{x}_1(\mathbf{x}_0^{-1}(\mathbf{u}))$, which are the transformations between $I_0$ and $I_1$.

II. Based on the transformations obtained from image registration, the slices $\{J_{i,m}(\mathbf{u})\}_{m=1}^{M}$ for $i = 0, \ldots, N-1$ are interpolated using the interpolation method presented in Section 3.2.

4. Computational details

In this section, we give the computational details for our registration-based interpolation method.

4.1. Initialize $\mathbf{x}_k^{(0,0)}(\mathbf{u})$

To start image registration, we need to set up the initial maps $\mathbf{x}_k^{(0,0)}$ ($k = 0, 1$). The initial maps are chosen to be the identity map: $\mathbf{x}_k^{(0,0)}(\mathbf{u}) = \mathbf{u}$. For a bicubic B-spline represented map $\mathbf{x}(\mathbf{u}) = \sum_{i=0}^{w_0+2} \sum_{j=0}^{h_0+2} \mathbf{p}_{ij}\, \varphi_{i(h_0+3)+j}(\mathbf{u})$, where $\varphi_{i(h_0+3)+j}(\mathbf{u}) = N_i^{(1/w_0)}(u)\, N_j^{(1/h_0)}(v)$ are basis functions, if $\mathbf{x}(\mathbf{u}) = \mathbf{u}$, then the control point coordinates satisfy $\mathbf{p}_{ij} = [p_i, q_j]^T$, $i = 0, \ldots, w_0+2$, $j = 0, \ldots, h_0+2$, where

$$p_i = \begin{cases} 0, & i = 0,\\ \frac{1}{3w_0}, & i = 1,\\ \frac{i-1}{w_0}, & i = 2, \ldots, w_0,\\ 1 - \frac{1}{3w_0}, & i = w_0+1,\\ 1, & i = w_0+2, \end{cases} \qquad \text{and} \qquad q_j = \begin{cases} 0, & j = 0,\\ \frac{1}{3h_0}, & j = 1,\\ \frac{j-1}{h_0}, & j = 2, \ldots, h_0,\\ 1 - \frac{1}{3h_0}, & j = h_0+1,\\ 1, & j = h_0+2. \end{cases}$$
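The coefficient sequences $p_i$ and $q_j$ above can be generated with a small helper (an illustrative sketch; the name is not from the paper). These values coincide with the Greville abscissae of the clamped cubic knot vector, which is why a B-spline with these control coefficients reproduces the identity:

```python
def identity_knot_coeffs(n):
    """Control coefficients p_0..p_{n+2} that make a clamped cubic B-spline
    on [0,1] with n uniform interior spans reproduce the identity map."""
    coeffs = [0.0, 1.0 / (3 * n)]
    coeffs += [(i - 1) / n for i in range(2, n + 1)]
    coeffs += [1.0 - 1.0 / (3 * n), 1.0]
    return coeffs
```

The 2D initial control points are then $\mathbf{p}_{ij} = [p_i, q_j]^T$, i.e., the tensor product of the two 1D sequences.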

The initial value $\mathbf{x}_k^{(l+1,0)}(\mathbf{u})$ ($k = 0, 1$) for the $(w_{l+1}, h_{l+1})$-level registration is obtained from the solution $\mathbf{x}_k^{(l)}(\mathbf{u})$ at the $(w_l, h_l)$-level. The least-squares approximation method is utilized to compute the control points $\mathbf{p}_{kij}^{(l+1,0)}$ ($i = 0, \ldots, w_{l+1}+2$, $j = 0, \ldots, h_{l+1}+2$) such that

$$\int_\Omega \big\|\mathbf{x}_k^{(l+1,0)}(\mathbf{u}) - \mathbf{x}_k^{(l)}(\mathbf{u})\big\|^2\, d\mathbf{u} = \int_\Omega \Big\|\sum_{i=0}^{w_{l+1}+2} \sum_{j=0}^{h_{l+1}+2} \mathbf{p}_{kij}^{(l+1,0)}\, \varphi_{i(h_{l+1}+3)+j}^{(l+1)}(\mathbf{u}) - \mathbf{x}_k^{(l)}(\mathbf{u})\Big\|^2\, d\mathbf{u}$$

reaches a minimum. Here, $\varphi_{i(h_{l+1}+3)+j}^{(l+1)}(\mathbf{u}) = N_i^{(1/w_{l+1})}(u)\, N_j^{(1/h_{l+1})}(v)$ are basis functions; their definitions can be found in a textbook on splines (see [24], p. 173, for instance). The computation of the least-squares approximation is standard, and is omitted because of space limitations.


4.2. Compute the minimizer of the energy functional (3.7)

To compute $\mathbf{x}_k^{(l)}(\mathbf{u})$, we minimize (3.7) using an $L^2$-gradient flow. To simplify the computation, we assume that the factor

$$g(\mathbf{u}) = \frac{1}{1.0 + c\,\big(I_0^{(l)}(\mathbf{x}_0(\mathbf{u}))^2 + I_1^{(l)}(\mathbf{x}_1(\mathbf{u}))^2\big)}$$

in the fidelity term has been computed using the $\mathbf{x}_k^{(l)}(\mathbf{u})$ obtained from the previous step. Hence, we consider the energy functional

$$\tilde{E}(\mathbf{x}_0, \mathbf{x}_1) = \int_\Omega g(\mathbf{u}) \big[I_0^{(l)}(\mathbf{x}_0(\mathbf{u})) - I_1^{(l)}(\mathbf{x}_1(\mathbf{u}))\big]^2\, d\mathbf{u} + \lambda_1 \sum_{k=0}^{1} \int_\Omega \big(\|\mathbf{x}_{ku}\|^2 + \|\mathbf{x}_{kv}\|^2\big)\, d\mathbf{u} + \lambda_2 \sum_{k=0}^{1} \int_\Omega \|\mathbf{x}_{ku} \times \mathbf{x}_{kv}\|^2\, d\mathbf{u}. \tag{4.1}$$

To minimize the energy functional (4.1), we adopt the gradient flow method, which transforms an optimization problem into an initial value problem for an ordinary differential equation. To construct the gradient flow, we need to address two main issues: the definition of a gradient and a suitable choice of inner product. For a Fréchet differentiable functional $E(\mathbf{x})$ defined for functions $\mathbf{x} : \Omega \to \mathbb{R}^m$, its gradient $\mathrm{grad}_g E(\mathbf{x})$ in a metric $g$ is defined by

$$g\big(\mathrm{grad}_g E(\mathbf{x}), \boldsymbol{\varphi}\big) = \delta\big(E(\mathbf{x}), \boldsymbol{\varphi}\big), \qquad \forall \boldsymbol{\varphi} \in C_0^\infty(\Omega; \mathbb{R}^m),$$

where $\delta(E(\mathbf{x}), \boldsymbol{\varphi}) := \frac{d E(\mathbf{x} + \varepsilon\boldsymbol{\varphi})}{d\varepsilon}\big|_{\varepsilon=0}$ is the first-order variation of the energy functional $E(\mathbf{x})$. For the same energy functional, different metrics $g$ generate different gradient flows. If the metric is chosen as the $L^2$ inner product, then $\langle \mathrm{grad}_{L^2} E(\mathbf{x}), \boldsymbol{\varphi} \rangle_{L^2} = \delta(E(\mathbf{x}), \boldsymbol{\varphi})$, and we obtain the $L^2$-gradient flow

$$\frac{\partial \mathbf{x}}{\partial t} = -\mathrm{grad}_{L^2} E(\mathbf{x}),$$

or the weak-form $L^2$-gradient flow

$$\Big\langle \frac{\partial \mathbf{x}}{\partial t}, \boldsymbol{\varphi} \Big\rangle = -\big\langle \mathrm{grad}_{L^2} E(\mathbf{x}), \boldsymbol{\varphi} \big\rangle_{L^2}, \qquad \forall \boldsymbol{\varphi} \in C_0^\infty(\Omega; \mathbb{R}^m).$$

Let

$$\mathbf{x}_k(\mathbf{u}, \varepsilon) = \mathbf{x}_k(\mathbf{u}) + \varepsilon\, \Phi_k(\mathbf{u}), \qquad \mathbf{u} \in \Omega,\ k = 0, 1,$$

where $\Phi_k \in C_0^1(\Omega; \mathbb{R}^2)$. Then the first-order variation of the energy functional (4.1) is

$$\delta\big(\tilde{E}(\mathbf{x}), \Phi\big) := \delta\big(\tilde{E}(\mathbf{x}_0, \mathbf{x}_1), (\Phi_0, \Phi_1)\big) = \frac{d}{d\varepsilon}\, \tilde{E}\big(\mathbf{x}_0(\cdot, \varepsilon), \mathbf{x}_1(\cdot, \varepsilon)\big)\Big|_{\varepsilon=0} = \sum_{k=0}^{1} \int_\Omega g(\mathbf{u}) \big[I_k^{(l)}(\mathbf{x}_k(\mathbf{u})) - I_{1-k}^{(l)}(\mathbf{x}_{1-k}(\mathbf{u}))\big]\, \nabla I_k^{(l)}(\mathbf{x}_k)^T \Phi_k\, d\mathbf{u} + \lambda_1 \sum_{k=0}^{1} \int_\Omega \big(\mathbf{x}_{ku}^T \Phi_{ku} + \mathbf{x}_{kv}^T \Phi_{kv}\big)\, d\mathbf{u} + \lambda_2 \sum_{k=0}^{1} \int_\Omega \big(\boldsymbol{\alpha}_k^T \Phi_{ku} + \boldsymbol{\beta}_k^T \Phi_{kv}\big)\, d\mathbf{u}.$$

Here, $\mathbf{x} = [\mathbf{x}_0^T, \mathbf{x}_1^T]^T$, $\Phi = [\Phi_0^T, \Phi_1^T]^T$, and

$$\boldsymbol{\alpha}_k = \|\mathbf{x}_{kv}\|^2 \mathbf{x}_{ku} - \langle \mathbf{x}_{ku}, \mathbf{x}_{kv} \rangle \mathbf{x}_{kv}, \qquad \boldsymbol{\beta}_k = \|\mathbf{x}_{ku}\|^2 \mathbf{x}_{kv} - \langle \mathbf{x}_{ku}, \mathbf{x}_{kv} \rangle \mathbf{x}_{ku}.$$

Introducing an artificial time $t \geq 0$, we derive the weak-form $L^2$-gradient flow:

$$\int_\Omega \Big(\frac{\partial \mathbf{x}}{\partial t}\Big)^T \Phi\, d\mathbf{u} = -\delta\big(\tilde{E}(\mathbf{x}), \Phi\big), \qquad \forall \Phi \in C_0^1(\Omega; \mathbb{R}^4).$$

Taking $\Phi = \mathbf{e}_i \varphi$, $\varphi \in C_0^1(\Omega)$, $i = 1, 2, 3, 4$, where $\mathbf{e}_i \in \mathbb{R}^4$ is the unit vector with its $i$th element equal to 1, we obtain the following weak-form $L^2$-gradient flows for evolving $\mathbf{x}_k$:

$$\int_\Omega \frac{\partial \mathbf{x}_k}{\partial t}\, \varphi\, d\mathbf{u} = (-1)^{k+1} \int_\Omega \frac{\big[I_0^{(l)}(\mathbf{x}_0) - I_1^{(l)}(\mathbf{x}_1)\big]\, \nabla I_k^{(l)}(\mathbf{x}_k)\, \varphi}{1.0 + c\,\big(I_0^{(l)}(\mathbf{x}_0)^2 + I_1^{(l)}(\mathbf{x}_1)^2\big)}\, d\mathbf{u} - \lambda_1 \int_\Omega \big(\mathbf{x}_{ku} \varphi_u + \mathbf{x}_{kv} \varphi_v\big)\, d\mathbf{u} - \lambda_2 \int_\Omega \big(\boldsymbol{\alpha}_k \varphi_u + \boldsymbol{\beta}_k \varphi_v\big)\, d\mathbf{u}, \qquad k = 0, 1, \tag{4.2}$$

with $\mathbf{x}_k = \mathbf{x}_k^{(l,0)}$ for $t = 0$.
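The structure of such a flow, gradient descent driven by forward Euler steps, can be illustrated on a toy one-dimensional energy with a fidelity term and a first-order smoothing term. This is a stand-in for illustration, not the paper's functional; all names are illustrative:

```python
def gradient_flow_minimize(d, lam=1.0, tau=0.1, steps=500):
    """Toy explicit L2-gradient flow: minimize
    E(x) = sum_i (x_i - d_i)^2 + lam * sum_i (x_{i+1} - x_i)^2
    by forward Euler on dx/dt = -grad E, starting from x = d."""
    x = list(d)
    n = len(x)
    for _ in range(steps):
        g = [0.0] * n
        for i in range(n):
            g[i] = 2.0 * (x[i] - d[i])              # fidelity gradient
            if i + 1 < n:
                g[i] += -2.0 * lam * (x[i + 1] - x[i])
            if i - 1 >= 0:
                g[i] += 2.0 * lam * (x[i] - x[i - 1])
        x = [xi - tau * gi for xi, gi in zip(x, g)]  # forward Euler step
    return x
```

The step size must respect the usual explicit-Euler stability bound (here $\tau = 0.1$ is safely below it); the iterates then converge to the minimizer of the convex energy, a smoothed version of the data.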


Therefore, the minimization problem is converted into a time-dependent ordinary differential equation (ODE) (4.2). In the spatial direction, we use the finite element method to discretize the ODE; a forward Euler scheme is applied for the temporal discretization. We solve Eq. (4.2) in a bicubic B-spline vector-valued function space. Let

$$\mathbf{x}_k^{(l)}(\mathbf{u}) = \sum_{i=0}^{w_l+2} \sum_{j=0}^{h_l+2} \mathbf{p}_{kij}^{(l)}\, N_i^{(1/w_l)}(u)\, N_j^{(1/h_l)}(v),$$

where $\mathbf{p}_{kij}^{(l)} \in \mathbb{R}^2$ are the corresponding two-dimensional control points, and $N_i^{(1/w_l)}(u)\, N_j^{(1/h_l)}(v)$ are the bicubic B-spline basis functions.

For ease of description, we reorder the control points and B-spline basis functions, so that the map $\mathbf{x}_k^{(l)}(\mathbf{u})$ can be represented as

$$\mathbf{x}_k^{(l)}(\mathbf{u}) = \sum_{r=0}^{n_l} \mathbf{x}_{kr}^{(l)}\, \varphi_r^{(l)}(\mathbf{u}), \tag{4.3}$$

where

$$\mathbf{x}_{kr}^{(l)} = \mathbf{p}_{kij}^{(l)}, \qquad \varphi_r^{(l)}(\mathbf{u}) = N_i^{(1/w_l)}(u)\, N_j^{(1/h_l)}(v), \qquad r = i(h_l+3)+j, \qquad n_l = (w_l+3)(h_l+3) - 1.$$

Substituting $\mathbf{x}_k^{(l)}(\mathbf{u}, t) = \sum_{r=0}^{n_l} \mathbf{x}_{kr}^{(l)}(t)\, \varphi_r^{(l)}(\mathbf{u})$ into (4.2), and then taking the test function $\varphi$ as $\varphi_i^{(l)}$, for $i = 0, \ldots, n_l$, we can discretize (4.2) as a system of ordinary differential equations (ODEs)

$$M^{(l)}\, \frac{d X_k^{(l)}(t)}{dt} = Q_k^{(l)}, \qquad k = 0, 1, \tag{4.4}$$

with the control points $X_k^{(l)}(t) = [\mathbf{x}_{k0}^{(l)}(t), \ldots, \mathbf{x}_{kn_l}^{(l)}(t)]^T \in \mathbb{R}^{(n_l+1)\times 2}$ as the unknowns. In Eq. (4.4), $M^{(l)} = [m_{ij}^{(l)}]_{i,j=0}^{n_l} \in \mathbb{R}^{(n_l+1)\times(n_l+1)}$, and $Q_k^{(l)} = [\mathbf{q}_{k0}^{(l)}, \ldots, \mathbf{q}_{kn_l}^{(l)}]^T \in \mathbb{R}^{(n_l+1)\times 2}$. Here,

$$m_{ij}^{(l)} = \int_\Omega \varphi_i^{(l)} \varphi_j^{(l)}\, d\mathbf{u},$$

$$\mathbf{q}_{ki}^{(l)} = (-1)^{k+1} \int_\Omega \frac{\big[I_0^{(l)}(\mathbf{x}_0^{(l)}) - I_1^{(l)}(\mathbf{x}_1^{(l)})\big]\, \nabla I_k^{(l)}(\mathbf{x}_k^{(l)})\, \varphi_i^{(l)}}{1 + c\,\big([I_0^{(l)}(\mathbf{x}_0^{(l)})]^2 + [I_1^{(l)}(\mathbf{x}_1^{(l)})]^2\big)}\, d\mathbf{u} - \lambda_1 \int_\Omega \big(\mathbf{x}_{ku}^{(l)} \varphi_{iu}^{(l)} + \mathbf{x}_{kv}^{(l)} \varphi_{iv}^{(l)}\big)\, d\mathbf{u} - \lambda_2 \int_\Omega \big(\boldsymbol{\alpha}_k^{(l)} \varphi_{iu}^{(l)} + \boldsymbol{\beta}_k^{(l)} \varphi_{iv}^{(l)}\big)\, d\mathbf{u},$$

where

$$\boldsymbol{\alpha}_k^{(l)} = \|\mathbf{x}_{kv}^{(l)}\|^2 \mathbf{x}_{ku}^{(l)} - \langle \mathbf{x}_{ku}^{(l)}, \mathbf{x}_{kv}^{(l)} \rangle \mathbf{x}_{kv}^{(l)}, \qquad \boldsymbol{\beta}_k^{(l)} = \|\mathbf{x}_{ku}^{(l)}\|^2 \mathbf{x}_{kv}^{(l)} - \langle \mathbf{x}_{ku}^{(l)}, \mathbf{x}_{kv}^{(l)} \rangle \mathbf{x}_{ku}^{(l)}.$$

For the temporal discretization of the ODE system (4.4), we use the following forward Euler scheme:

$$\frac{X_k^{(l,s+1)} - X_k^{(l,s)}}{\tau_s} = \big(M^{(l)}\big)^{-1} Q_k^{(l,s)}, \qquad s = 0, 1, \ldots,\ k = 0, 1, \tag{4.5}$$

where $\tau_s$ is the temporal step size and $s$ is the iteration number. The initial value $X_k^{(l,0)}$ is given by the control point coordinates of $\mathbf{x}_k^{(l,0)}(\mathbf{u})$, and then $X_k^{(l,s+1)}$ is obtained iteratively from the previous $X_k^{(l,s)}$.

The iterative algorithm for computing $\mathbf{x}_k^{(l)}(\mathbf{u})$ is given as Algorithm 4.1. Since the matrix $M^{(l)}$ is the Kronecker product of two small matrices, the inverse matrix $(M^{(l)})^{-1}$ can be pre-computed using a fast computation (see [23]). To make the solution $\mathbf{x}_k^{(l)}(\mathbf{u})$ map $\Omega$ to $\Omega$, the boundary control points of $X_k^{(l,s+1)}$ are projected to the boundary of $\Omega$ at each iteration step. The regularity condition for a map $\mathbf{x}(\mathbf{u})$ requires that $\det[\mathbf{x}_u(\mathbf{u}), \mathbf{x}_v(\mathbf{u})] \geq \epsilon_0 > 0$ for all $\mathbf{u} \in \Omega$. In our algorithm, we check the regularity condition only at the pixel point set $U = \{\mathbf{u}_{ij} = [\frac{i}{w}, \frac{j}{h}]^T : i = 0, 1, \ldots, w,\ j = 0, 1, \ldots, h\}$.
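The fast inversion rests on the Kronecker identity $(A \otimes B)^{-1} = A^{-1} \otimes B^{-1}$, which the sketch below checks on small matrices (pure-Python helpers, illustrative only; in the paper, $M^{(l)}$ factors into two univariate B-spline mass matrices):

```python
def kron(A, B):
    """Kronecker product of two dense matrices given as lists of lists."""
    return [[a * b for a in ra for b in rb] for ra in A for rb in B]

def inv2(M):
    """Inverse of a 2x2 matrix."""
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def matmul(X, Y):
    """Plain dense matrix product."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]
```

Multiplying `kron(A, B)` by `kron(inv2(A), inv2(B))` recovers the identity, so only the two small factors ever need to be inverted; this is what makes pre-computing $(M^{(l)})^{-1}$ cheap.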

Algorithm 4.1 (Compute x_k^{(l)}(u) Using an Explicit Finite Element Method).
1. Set s = 0, X_k^{(l,0)} = [x_{k0}^{(l,0)}, . . . , x_{kn_l}^{(l,0)}] as the given initial values.
2. (a) Compute Y_k^{(l,s)} := (M^{(l)})^{−1} Q_k^{(l,s)}.
   (b) Compute τ_s using the method introduced in Section 4.3.
   (c) Compute X_k^{(l,s+1)} = X_k^{(l,s)} + τ_s Y_k^{(l,s)}, and project the boundary control points of X_k^{(l,s+1)} to the boundary of Ω.
   (d) Check the regularity condition for x_k^{(l,s+1)}(u). If it is irregular, reduce τ_s to 0.618τ_s, then go back to step (c).
3. Check the termination condition ∥X_k^{(l,s+1)} − X_k^{(l,s)}∥ < ε, where ε is a given threshold. If the termination condition is satisfied or the maximum number of iterations is reached, stop the iteration; X_k^{(l,s+1)} is the required coefficient vector of x_k^{(l)}(u). Otherwise, set s = s + 1 and return to step 2.

J. Leng et al. / Computers and Mathematics with Applications 66 (2013) 1–18

Fig. 4.1. Illustration of Algorithm 4.2.

Fig. 5.1. Effect of the regularization terms. The first row: only the first-order regularization term R_1 is used; the coefficient λ_1 is selected as 100.0. The second row: only the area regularization term R_2 with coefficient λ_2 = 1000.0 is used. The third row: both R_1 and R_2 are applied to image registration, where λ_1 = 50.0 and λ_2 = 500.0. Each map (x_0, x_1, y_{10}, and y_{01}) is displayed as a mesh grid which consists of one vertical curve family and one horizontal curve family. For a map x(u) (u = [u, v]^T ∈ [0, 1]^2), the vertical curve family is chosen as {x(i/20, v)}_{i=0}^{20}, and the horizontal curve family is {x(u, j/20)}_{j=0}^{20}.
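The control flow of Algorithm 4.1 can be sketched as follows. This is a schematic, not the authors' implementation: Q_of, M_inv, project, and regular stand in for the load-vector assembly, the pre-computed inverse mass matrix, the boundary projection, and the per-pixel regularity check; the toy demo at the end replaces them with trivial choices so the loop is runnable.

```python
import numpy as np

def explicit_fem_iteration(X, Q_of, M_inv, project, regular,
                           tau0=0.5, eps=1e-6, max_iter=100):
    """Sketch of one level of Algorithm 4.1: forward-Euler updates of the
    control points X, with golden-ratio backtracking (tau <- 0.618 tau)
    whenever the updated map fails the regularity check."""
    for _ in range(max_iter):
        Y = M_inv @ Q_of(X)                  # step 2(a): Y = (M^l)^-1 Q^l
        tau = tau0                           # step 2(b): stand-in step size
        for _ in range(50):
            X_new = project(X + tau * Y)     # step 2(c): Euler step + projection
            if regular(X_new):               # step 2(d): regularity check
                break
            tau *= 0.618
        if np.linalg.norm(X_new - X) < eps:  # step 3: termination test
            return X_new
        X = X_new
    return X

# Toy demo: with M = I, Q(X) = X* - X, no projection and no regularity
# constraint, the iteration is plain gradient descent toward X*.
target = np.array([[1.0, 2.0], [3.0, 4.0]])
X_final = explicit_fem_iteration(
    np.zeros((2, 2)), Q_of=lambda X: target - X, M_inv=np.eye(2),
    project=lambda X: X, regular=lambda X: True)
```

In the actual method the regularity test evaluates det[x_u, x_v] ≥ ε_0 at every pixel point of U, and τ_0 comes from the step-size formula of Section 4.3.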

4.3. Compute the temporal step size τ_s

To achieve high computational efficiency, we seek the best temporal step size τ_s. Suppose that x_k^{(l,s)}(u) is known. We compute τ_s according to the specific characteristics of the images. Let Φ^{(l)}(u) = [φ_0^{(l)}(u), . . . , φ_{n_l}^{(l)}(u)]^T. Then, from (4.5), we have
\[
x_k^{(l,s+1)}(u) = \big(X_k^{(l,s+1)}\big)^T \Phi^{(l)} = \big(X_k^{(l,s)}\big)^T \Phi^{(l)} + \tau_s \Big(\big(M^{(l)}\big)^{-1} Q_k^{(l,s)}\Big)^T \Phi^{(l)} = x_k^{(l,s)}(u) + \tau_s \delta_k^{(l,s)}(u), \quad k = 0, 1,
\]

where the increment δ_k^{(l,s)}(u)^T := (Φ^{(l)}(u))^T (M^{(l)})^{−1} Q_k^{(l,s)}. We define the temporal step size τ_s such that
\[
E(\tau) = \int_\Omega \frac{\Big[I_0^{(l)}\big(x_0^{(l,s)} + \tau\delta_0^{(l,s)}\big) - I_1^{(l)}\big(x_1^{(l,s)} + \tau\delta_1^{(l,s)}\big)\Big]^2}{1 + c\Big(\big[I_0^{(l)}\big(x_0^{(l,s)}\big)\big]^2 + \big[I_1^{(l)}\big(x_1^{(l,s)}\big)\big]^2\Big)} \, du + \lambda_1 \sum_{k=0}^{1} \int_\Omega \Big(\big\|x_{ku}^{(l,s)} + \tau\delta_{ku}^{(l,s)}\big\|^2 + \big\|x_{kv}^{(l,s)} + \tau\delta_{kv}^{(l,s)}\big\|^2\Big) du + \lambda_2 \sum_{k=0}^{1} \int_\Omega \big\|\big(x_{ku}^{(l,s)} + \tau\delta_{ku}^{(l,s)}\big) \times \big(x_{kv}^{(l,s)} + \tau\delta_{kv}^{(l,s)}\big)\big\|^2 \, du
\]
is minimized. From E′(τ) = 0, we obtain an approximation of τ_s as
\[
\tau_s = -\frac{E'(0)}{E''(0)}.
\]

Fig. 5.2. Interpolating result of inserting three images between I_0 and I_1, where the image size is 256 × 256. The coefficients of the regularization terms are selected as λ_1 = 50.0, λ_2 = 500.0. In the implementation, the maximum number of iterations is 10 for each level, and the total execution time is 24.57 s.

Fig. 5.3. The result of our registration-based interpolation using two maps x_k, k = 0, 1. The size of the input images is 256 × 256. The coefficients of the regularization terms are selected as λ_1 = 50.0, λ_2 = 10.0. In the implementation, the maximum number of iterations is 20 for each level, and the total execution time is 55.63 s.
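The formula τ_s = −E′(0)/E″(0) is exact when E is replaced by its second-order Taylor polynomial at τ = 0. The sketch below approximates the two derivatives of a one-dimensional energy by central differences (the paper evaluates them analytically from the integrals above; the energy E here is a hypothetical stand-in):

```python
def step_size(E, h=1e-4):
    """tau_s = -E'(0)/E''(0), with the derivatives of the 1D energy E
    approximated by central finite differences at tau = 0."""
    d1 = (E(h) - E(-h)) / (2.0 * h)             # approximates E'(0)
    d2 = (E(h) - 2.0 * E(0.0) + E(-h)) / h**2   # approximates E''(0)
    return -d1 / d2

# For a quadratic energy the formula recovers the exact minimizer tau = 3.
tau = step_size(lambda t: (t - 3.0) ** 2 + 1.0)
```

For the registration energy E(τ) is not quadratic, so τ_s is only a local model of the minimizer; this is why Algorithm 4.1 still backtracks (τ ← 0.618τ) when the resulting step breaks the regularity condition.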

4.4. Compute x_0^{−1} and x_1^{−1}

Given the bicubic B-spline represented maps x_k(u) (k = 0, 1), we need to compute x_k^{−1}(u_ij) (k = 0, 1) at the pixel points u_ij ∈ U using Algorithm 4.2. Then y_{01}(u) = x_1(x_0^{−1}(u)) and y_{10}(u) = x_0(x_1^{−1}(u)) can be calculated at u ∈ U.

In Algorithm 4.2, x(u) is resampled at a refined grid Ũ = {ũ_ij = [i/W, j/H]^T : i = 0, . . . , W, j = 0, . . . , H}, and then we have x_ij := x(ũ_ij), for i = 0, . . . , W, j = 0, . . . , H. From ũ_ij = x^{−1}(x_ij), the value of x^{−1}(u) at the pixel point u = u_ij ∈ U can be resampled using a bilinear interpolation based on the barycentric coordinate. Note that the size of the refined grid is (W + 1) × (H + 1), which is computed according to the first-order derivatives of x. Even if x is seriously distorted, the numerical accuracy will still be satisfactory, because the sampling grid Ũ is sufficiently dense.
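Step 3(a) of Algorithm 4.2 requires, for a pixel inside a quadrilateral Q_ij, the coordinates [s, t]^T of the bilinear representation u = (1 − t)[(1 − s)x_ij + s x_{i+1,j}] + t[(1 − s)x_{i,j+1} + s x_{i+1,j+1}]. Since this is a 2 × 2 nonlinear system, one way to solve it is a short Newton iteration; the helper below is a hypothetical standalone sketch, not the authors' code, and assumes a non-degenerate quadrilateral:

```python
import numpy as np

def bilinear_coords(p, q00, q10, q01, q11, iters=20):
    """Invert the bilinear map of the quad with corners q00, q10, q01, q11:
    find (s, t) with (1-t)[(1-s)q00 + s q10] + t[(1-s)q01 + s q11] = p."""
    s, t = 0.5, 0.5
    for _ in range(iters):
        f = ((1 - t) * ((1 - s) * q00 + s * q10)
             + t * ((1 - s) * q01 + s * q11) - p)
        # Jacobian columns: df/ds and df/dt of the bilinear map
        dfs = (1 - t) * (q10 - q00) + t * (q11 - q01)
        dft = (1 - s) * (q01 - q00) + s * (q11 - q10)
        ds, dt = np.linalg.solve(np.column_stack([dfs, dft]), -f)
        s, t = s + ds, t + dt
    return s, t

# For the unit-square quad the bilinear map is the identity, so the
# recovered (s, t) equals the query point itself.
s, t = bilinear_coords(np.array([0.25, 0.75]),
                       np.array([0.0, 0.0]), np.array([1.0, 0.0]),
                       np.array([0.0, 1.0]), np.array([1.0, 1.0]))
```

If the converged (s, t) lies in [0, 1]^2, the same weights applied to the refined parameter points ũ give x^{−1} at that pixel, as in step 3(b) of Algorithm 4.2.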


Fig. 5.4. Results of interpolating I0 and I1 . First row: resultant images obtained by traditional linear interpolation. Second row: in-between images using our method. Third and fourth rows: other results produced by registering I0 and I1 . The size of the input images is 256 × 256. The coefficients of the regularization terms are selected as λ1 = 100.0, λ2 = 100.0. In the implementation, the maximum number of iterations is 20 for each level, and the total execution time is 46.71 s.

Algorithm 4.2 (Computation of x^{−1}(u_ij)).
1. Compute x(u) on the pixel point set U (u_ij ∈ U are the black points in Fig. 4.1(a)), and estimate W = E(max_{i,j} ∥x_u(u_ij)∥ + 0.5) · w, H = E(max_{i,j} ∥x_v(u_ij)∥ + 0.5) · h, where E(·) denotes taking the integer part.
2. Create newly refined grid points Ũ = {ũ_ij = [i/W, j/H]^T : i = 0, . . . , W, j = 0, . . . , H} (ũ_ij are shown in Fig. 4.1(b)), and compute x_ij := x(ũ_ij), for i = 0, . . . , W, j = 0, . . . , H.
3. For each quadrilateral Q_ij = [x_ij, x_{i+1,j}, x_{i+1,j+1}, x_{i,j+1}], i = 0, . . . , W − 1, j = 0, . . . , H − 1, find a minimal rectangle R_ij (see the dashed rectangle in Fig. 4.1(c)) containing Q_ij, and then do the following.
   (a) For each pixel point u_{i0,j0} ∈ U located within R_ij, find its barycentric coordinate [s, t]^T with respect to Q_ij. That is, find [s, t]^T such that
   u_{i0,j0} = (1 − t)[(1 − s)x_ij + s x_{i+1,j}] + t[(1 − s)x_{i,j+1} + s x_{i+1,j+1}].
   (b) If s, t ∈ [0, 1], then we set
   x^{−1}(u_{i0,j0}) = (1 − t)[(1 − s)ũ_ij + s ũ_{i+1,j}] + t[(1 − s)ũ_{i,j+1} + s ũ_{i+1,j+1}].

4.5. Interpolate in-between images

For image interpolation, if we are given only two images I_0 and I_1, linear interpolation is used to compute the in-between images. If we are given a series of slices {I_i}_{i=0}^{N} (N > 1), we first register each pair of neighboring slices. Then slices between


I_1 and I_{N−1} are computed by cubic interpolation, and other inserted slices (between I_0 and I_1, or between I_{N−1} and I_N) are computed by linear interpolation.

To interpolate M images between I_0 and I_1, we use the function (3.3) to compute J_m(u), m = 1, . . . , M. Since the output images are discrete, we just need to compute the intensity values at the pixel point set U = {u_ij, i = 0, . . . , w, j = 0, . . . , h}. Setting u_0 = u_ij, we can obtain intensity values of J_m(u) at u = ṽ_{m/(M+1)}(u_ij), for i = 0, . . . , w, j = 0, . . . , h. Then the intensity values of J_m(u) at the pixel point set U are resampled using bilinear interpolation.

To interpolate M slices {J_{i,m}}_{m=1}^{M} between I_i and I_{i+1}, for i = 1, . . . , N − 2, we choose the cubic interpolation method presented in Section 3.2. For each p_i ∈ U, the corresponding matching points p_{i−1}, p_{i+1} and p_{i+2} can be computed using (3.4). Then the position of p_{i,m} with respect to J_{i,m} is determined by interpolating {p_j}_{j=i−1}^{i+2} using (3.5), and the intensity value of J_{i,m}(p_{i,m}) is computed using (3.6). Finally, the intensity values of J_{i,m}(u) at the pixel point set U are resampled using bilinear interpolation.

Fig. 5.5. Results of interpolating I_0 and I_1. First row: resultant images obtained by traditional linear interpolation. Second row: in-between images using our method. Third and fourth rows: other results produced by registering I_0 and I_1. The size of the input images is 256 × 256. The coefficients of the regularization terms are selected as λ_1 = 50.0, λ_2 = 50.0. In the implementation, the maximum number of iterations is 20 for each level, and the total execution time is 47.12 s.

5. Illustrative examples

We verify the performance of our image interpolation method through several numerical experiments. Our algorithm was implemented in C++, and it ran on a PC with an Intel Core 2.83 GHz CPU; no multi-core programming was involved. In our experiments, the intensity values of the input images are within the range [0, 255]. For registering each pair of given images, we chose the multi-resolution level L = 6 and the initial level (w_0, h_0) = (8, 8), and the increasing factors for the control point number were λ_w = λ_h = 1.5. The factor c in the fidelity term of the registration model was selected as 0.001. In this section,


Fig. 5.6. First row: three consecutive image slices. Second row: the results for interpolating one slice between slice 0 and slice 2 using linear interpolation (left), optical flow-based interpolation (middle), and the proposed method (right). Third row: corresponding difference images between the interpolated image and the original image (slice 1). The image size of the input slices is 252 × 220. For the proposed method, the coefficients of the regularization terms are selected as λ1 = 50.0, λ2 = 50.0, the maximum number of iterations is 10 for each level, and the execution time is 22.7 s. The execution time of the optical flow-based method is 8.9 s.

each transformation map is displayed as a grid which consists of one vertical curve family and one horizontal curve family. For a map x(u) (u = [u, v]^T ∈ [0, 1]^2), the vertical curve family is chosen as {x(i/20, v)}_{i=0}^{20}, and the horizontal curve family is {x(u, j/20)}_{j=0}^{20}.
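This display convention amounts to sampling the two curve families of a map; a small sketch (grid_curves is a hypothetical helper, and any callable x(u, v) returning a 2D point will do):

```python
def grid_curves(x, n=20, samples=50):
    """Sample the vertical curves {x(i/n, .)} and horizontal curves
    {x(., j/n)} used to display a transformation map as a mesh grid."""
    vertical = [[x(i / n, k / samples) for k in range(samples + 1)]
                for i in range(n + 1)]
    horizontal = [[x(k / samples, j / n) for k in range(samples + 1)]
                  for j in range(n + 1)]
    return vertical, horizontal

# For the identity map the grid is the undeformed unit-square lattice.
vert, horiz = grid_curves(lambda u, v: (u, v))
```

Each of the 21 vertical and 21 horizontal polylines is then drawn over the image, which is how the deformation of x is visualized in Figs. 5.1 and 5.3 through 5.5.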

5.1. Synthetic images

The first example shows the effect of the regularization terms. We align I_0 and I_1 (as shown in Fig. 5.2) using three regularization settings. In Fig. 5.1, the first row shows the solution of the registration model involving only the first-order regularization term R_1. Since the first-order derivatives are penalized during the registration process, the mesh curves are forced to be straight. The maps in the second row are obtained by using only the regularization term R_2.


Fig. 5.7. Original medical image sequences. (a) The MR brain image sequences (181 × 217, 161 slices). (b) The CT chest image sequences (256 × 256, 69 slices).

We can observe that the grid cells of x_k (k = 0, 1) have uniform area. Hence, the two regularization terms are both good at smoothing the map, but with different effects. With both R_1 and R_2 applied to the registration model, the maps shown in the third row combine the benefits of these two regularization terms.

Fig. 5.2 shows the continuous deformation from I_0 to I_1. Compared to Fig. 2.1, the object in the in-between slices has clear outlines, since the linear interpolation is performed along the matching lines. The proposed method, which overcomes the bias problem of using a single map, aims to align the two given images using two maps x_k(u) (k = 0, 1). Fig. 5.3 shows the experimental results using our proposed model, where y_{01} and y_{10} are the transformations between I_0 and I_1. Although I_0 and I_1 are not matched exactly, the interpolated middle image J_mid(u) is indeed intermediate between I_0 and I_1.

5.2. Real images

We present two examples to show the performance of our method for interpolating real images. Figs. 5.4 and 5.5 show the results of our proposed method and the traditional linear interpolation method. The images produced by traditional linear interpolation, shown in the first rows, are so blurry that we cannot identify the features of the interpolated images. From the figures in the second rows, we can see that the in-between images J_i(u) (i = 1, 2, 3) generated by our method keep the shape features clear, and they describe the gradual evolution process from I_0(u) to I_1(u). From the last two rows of Figs. 5.4 and 5.5, we can observe that I_0 and I_1 are well matched by the maps x_k (k = 0, 1) such that I_0(x_0(u)) and I_1(x_1(u)) are almost identical. To highlight the transformations, we overlay x_k on I_k(u), and overlay the identity map on I_k(x_k(u)). Then we can see the deformation controlled by x_k.

5.3. Medical images

The performance of our proposed method is then validated on medical image sequences.
We compare the proposed method with two other interpolation methods: one is traditional linear interpolation, and the other is an optical flow-based method. The optical flow-based method applies the demons algorithm [19,25] to find the displacement between the moving scene m and the static scene s. At each iteration, the velocity field is calculated using the equation
\[
\vec{v} = \frac{(m - s)\,\nabla s}{\|\nabla s\|^2 + (m - s)^2}
\]
for every point in s, and then regularized by Gaussian smoothing. Based on the displacement mapping from one image to the other, the in-between slices can be generated. Since the roles played by the two images are different for the optical flow, we exchange the two images, and the eventual in-between slices are formed as a weighted average.

For a quantitative comparison, the interpolated slices are compared with the original slices using the mean square difference (MSD) measure. Here, the mean square difference between the interpolated slice I_int and the original slice I_ori is defined as
\[
\mathrm{MSD} = \frac{1}{(w+1)(h+1)} \sum_{i=0}^{w} \sum_{j=0}^{h} \big[ I_{\mathrm{int}}(u_{ij}) - I_{\mathrm{ori}}(u_{ij}) \big]^2,
\]
where {u_ij}_{i=0,j=0}^{w,h} are pixel points. Moreover, the difference image, represented as I_int − I_ori + 120, is calculated and displayed to evaluate the interpolation quality.

The first row of Fig. 5.6 shows three consecutive slices. Three different interpolation methods are applied for interpolating one slice between slice 0 and slice 2; the interpolated slices and the corresponding difference images are shown in the second


row and the third row, respectively. It can be observed that the linearly interpolated slice is blurred, and large differences appear compared to the original slice 1. Both the optical flow-based method and the proposed method are much better than linear interpolation, with clear feature outlines preserved. According to the difference images and the MSD error measure, we can observe that the proposed method is more accurate than optical flow-based interpolation.

We further test the performance of the three interpolation methods on two series of medical image slices: one is an MR brain sequence (see Fig. 5.7(a)), and the other is a CT chest sequence (see Fig. 5.7(b)). The original MR brain series consists of 161 slices {I_i}_{i=0}^{160} with size 181 × 217. The three interpolation methods are applied to produce three in-between slices between each pair of neighboring slices of the subsequence {I_{4i}}_{i=0}^{40}, and the interpolated slices are compared to the original slices using the MSD measure. The original CT chest series consists of 69 slices with size 256 × 256, and the slice number ranges from 0 to 68. The odd slices are removed and then interpolated from the even slices using the three interpolation methods.

Fig. 5.8. First row: three original slices for MR brain data. Second row: interpolated slice 98 using three different methods, linear interpolation (left), optical flow-based interpolation (middle), and the proposed method (right). Third row: corresponding difference images between the interpolated slices and the original slice.
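The MSD comparison above can be computed directly on the pixel grids; a minimal sketch (pure Python, with two hypothetical 2 × 2 "slices" standing in for the (w + 1) × (h + 1) images):

```python
def msd(I_int, I_ori):
    """Mean square difference over the (w+1) x (h+1) pixel grid."""
    rows, cols = len(I_int), len(I_int[0])
    total = sum((I_int[i][j] - I_ori[i][j]) ** 2
                for i in range(rows) for j in range(cols))
    return total / (rows * cols)

# Two toy 2 x 2 slices differing by 2 everywhere -> MSD = 4.
err = msd([[10, 12], [14, 16]], [[8, 10], [12, 14]])
```

The displayed difference images are formed the same way pixel-wise, as I_int − I_ori + 120, so mid-gray corresponds to zero error.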


Fig. 5.9. First row: three original slices for CT chest data. Second row: interpolated slice 63 using three different methods, linear interpolation (left), optical flow-based interpolation (middle), and the proposed method (right). Third row: corresponding difference images between the interpolated slices and the original slice.

Table 5.1
Comparison of interpolation methods.

                  MR brain                    CT chest
                  MSD      CPU time (s)       MSD      CPU time (s)
  Linear          83.1     0.06               166.9    0.05
  Optical flow    45.2     267.2              137.9    419.7
  Proposed        39.9     643.7              133.1    873.8

To compare the proposed method to the optical flow-based method and linear interpolation, the MSD error and execution time are calculated, as shown in Table 5.1. Here, the MSD value in the table is computed by averaging the MSD errors of all interpolated slices, and the CPU time is the total execution time for interpolating the whole sequence. The results indicate that the proposed method performs better than the other two methods in terms of MSD error. For registering each pair of neighboring slices, the regularization coefficients λ_1 = 50.0 and λ_2 = 50.0 are selected in our proposed algorithm, and the maximum number of iterations is 10 for each level. Compared to the optical flow method, the proposed algorithm needs about twice the running time (several minutes), because each registration iteration is expensive.


Figs. 5.8 and 5.9 illustrate comparison results for one slice of the MR brain data and the CT chest data, respectively. The slice obtained by linear interpolation shows a large difference at the outline of the deformed features. The optical flow-based method is more desirable than linear interpolation, but it does not perform as well in some small details. The proposed method introduces the least difference from the original slice among the three interpolation methods, because our registration approach matches image features more accurately.

6. Conclusions

A novel multi-resolution registration-based image interpolation approach has been presented in this paper. Two continuous and regular maps are constructed based on multi-resolution levels so that the features of the given images are matched from large to small. The in-between images are then obtained by interpolating intensity values along the matching lines at any sampling rate. There are several advantages of the proposed method. First, two maps are utilized for image registration so that the roles of the two given images are symmetric. Second, proper regularization terms are selected to smooth the maps. Third, an L2-gradient flow method is applied to solve the registration problem, together with an effective strategy for computing the step size. Fourth, in-between images can be interpolated at any sampling rate. The proposed method is compared with two other interpolation methods, and the experimental results demonstrate that the proposed method is effective and outperforms both of them.

Acknowledgments

J. Leng and G. Xu were supported by NSFC under the grants (11101401, 81173663), an NSFC key project under the grant (10990013), and the Funds for Creative Research Groups of China (grant No. 11021101). Y. Zhang was supported in part by her NSF CAREER Award OCI-1149591 and a research grant from the Winters Foundation.

References

[1] S. Bao, B. Lin, Research on interpolation methods for outline insert value weight arithmetic, Journal of Chongqing Normal University 20 (3) (2003) 29–32.
[2] R. Robb, Biomedical Imaging, Visualization, and Analysis, John Wiley & Sons, 2000.
[3] G. Grevera, J. Udupa, An objective comparison of 3D image interpolation methods, IEEE Transactions on Medical Imaging 17 (4) (1998) 642–652.
[4] G. Herman, S. Rowland, M. Yau, A comparative study of the use of linear and modified cubic spline interpolation for image reconstruction, IEEE Transactions on Nuclear Science 26 (2) (1979) 2879–2894.
[5] T. Lehmann, C. Gönner, K. Spitzer, Survey: interpolation methods in medical image processing, IEEE Transactions on Medical Imaging 18 (11) (1999) 1049–1075.
[6] M. Stytz, R. Parrott, Using kriging for 3D medical imaging, Computerized Medical Imaging and Graphics 17 (6) (1993) 421–442.
[7] A. Dhawan, L. Arata, Knowledge-based 3D analysis from 2D medical images, IEEE Engineering in Medicine and Biology Magazine 10 (4) (1991) 30–37.
[8] A. Goshtasby, D. Turner, L. Ackerman, Matching of tomographic slices for interpolation, IEEE Transactions on Medical Imaging 11 (4) (1992) 507–516.
[9] W. Higgins, C. Orlick, B. Ledell, Nonlinear filtering approach to 3D gray-scale image interpolation, IEEE Transactions on Medical Imaging 15 (4) (1996) 580–587.
[10] K. Chuang, C. Chen, L. Yuan, C. Yeh, Shape-based grey-level image interpolation, Physics in Medicine and Biology 44 (6) (1999) 1565–1577.
[11] G. Grevera, J. Udupa, Shape-based interpolation of multidimensional grey-level images, IEEE Transactions on Medical Imaging 15 (6) (1996) 881–892.
[12] G. Herman, J. Zheng, C. Bucholtz, Shape-based interpolation, IEEE Computer Graphics and Applications 12 (3) (1992) 69–79.
[13] S. Raya, J. Udupa, Shape-based interpolation of multidimensional objects, IEEE Transactions on Medical Imaging 9 (1) (1990) 32–42.
[14] V. Chatzis, I. Pitas, Interpolation of 3D binary images based on morphological skeletonization, IEEE Transactions on Medical Imaging 19 (7) (2000) 699–710.
[15] T. Lee, C. Lin, Feature-guided shape-based image interpolation, IEEE Transactions on Medical Imaging 21 (12) (2002) 1479–1489.
[16] G. Penney, J. Schnabel, D. Rueckert, M. Viergever, W. Niessen, Registration-based interpolation, IEEE Transactions on Medical Imaging 23 (7) (2004) 922–926.
[17] D. Rueckert, L.I. Sonoda, C. Hayes, D.L.G. Hill, M.O. Leach, D.J. Hawkes, Nonrigid registration using free-form deformations: application to breast MR images, IEEE Transactions on Medical Imaging 18 (8) (1999) 712–721.
[18] M. Bro-Nielsen, C. Gramkow, Fast fluid registration of medical images, in: Proceedings of the 4th International Conference on Visualization in Biomedical Computing, VBC'96, 1996, pp. 267–276.
[19] J.P. Thirion, Image matching as a diffusion process: an analogy with Maxwell's demons, Medical Image Analysis 2 (3) (1998) 243–260.
[20] J. Ehrhardt, D. Säring, H. Handels, Optical flow based interpolation of temporal image sequences, in: Proceedings of SPIE Medical Imaging 2006: Image Processing, vol. 6144, 2006, pp. 830–837.
[21] G. Penney, J. Schnabel, D. Rueckert, D. Hawkes, W. Niessen, Registration-based interpolation using a high-resolution image for guidance, in: Proceedings of Medical Image Computing and Computer-Assisted Intervention, vol. 3216, 2004, pp. 558–565.
[22] D. Frakes, L. Dasi, K. Pekkan, H. Kitajima, K. Sundareswaran, A. Yoganathan, M. Smith, A new method for registration-based medical image interpolation, IEEE Transactions on Medical Imaging 27 (3) (2008) 370–377.
[23] Y. Zheng, Z. Jing, G. Xu, Flexible multi-scale image alignment using B-spline reparametrization, in: Y. Zhang (Ed.), Image-Based Geometric Modeling and Mesh Generation, Springer, Netherlands, 2013, pp. 21–53.
[24] G. Xu, Geometric Partial Differential Equation Methods in Computational Geometry, Science Press, Beijing, China, 2008.
[25] Y. Zhang, Y. Jing, X. Liang, G. Xu, L. Dong, Dynamic lung modeling and tumor tracking using deformable image registration and geometric smoothing, Molecular & Cellular Biomechanics 9 (3) (2012) 213–226.