Cyclotorsion measurement using scleral blood vessels

Aydın Kaya, Ali Seydi Keçeli, Ahmet Burak Can, Hasan Basri Çakmak

PII: S0010-4825(17)30152-X
DOI: 10.1016/j.compbiomed.2017.05.030
Reference: CBM 2685
To appear in: Computers in Biology and Medicine
Received Date: 16 January 2017
Revised Date: 29 May 2017
Accepted Date: 29 May 2017

Please cite this article as: A. Kaya, A.S. Keçeli, A.B. Can, H.B. Çakmak, Cyclotorsion measurement using scleral blood vessels, Computers in Biology and Medicine (2017), doi: 10.1016/j.compbiomed.2017.05.030.

This is a PDF file of an unedited manuscript that has been accepted for publication. As a service to our customers we are providing this early version of the manuscript. The manuscript will undergo copyediting, typesetting, and review of the resulting proof before it is published in its final form. Please note that during the production process errors may be discovered which could affect the content, and all legal disclaimers that apply to the journal pertain.


Cyclotorsion Measurement Using Scleral Blood Vessels

Aydın Kaya1, Ali Seydi Keçeli1, Ahmet Burak Can1, Hasan Basri Çakmak2

1 Hacettepe University, Faculty of Engineering, Department of Computer Engineering, 06800, Ankara, Turkey
(aydinkaya, aliseydi, abc)@cs.hacettepe.edu.tr

2 Hitit University, Faculty of Medicine, Department of Ophthalmology, 19030, Çorum, Turkey
[email protected]

Corresponding author: Aydın Kaya
Address: Hacettepe University, Faculty of Engineering, Department of Computer Engineering, 06800, Ankara, Turkey
Email 1: [email protected]
Email 2: [email protected]
Tel: +90 312 297 75 00 / 158
Fax: +90 312 297 75 02

ABSTRACT

Background and Objectives

Measurements of the cyclotorsional movement of the eye are crucial in refractive surgery procedures. The planned surgery pattern may vary substantially during an operation because of the position and eye movements of the patient. Since these factors affect the outcome of an operation, eye registration methods are applied in order to compensate for errors. While the majority of applications are based on features of the iris, we propose a registration method which uses scleral blood vessels. Unlike previous offline techniques, the proposed method is applicable during surgery.

Methods

The sensitivity of the proposed registration method is tested on an artificial benchmark dataset involving five eye models and 46,305 instances of eye images. The cyclotorsion angles of the dataset vary between -10° and +10° at 1° intervals. Repeated-measures ANOVA and Cochran's Q tests are applied in order to determine the significance of the proposed method. Additionally, a pilot study is carried out using data obtained from a commercially available device. The real data are validated using manual marking by an expert.

Results and Conclusions

The results confirm that the proposed method produces a smaller error rate (mean = 0.44±0.41) compared to the existing method in [1] (mean = 0.64±0.58). A further conclusion is that feature extraction algorithms affect the results of the proposed method. The SIFT (mean = 0.74±0.78), SURF64 (mean = 0.56±0.46), SURF128 (mean = 0.57±0.48) and ASIFT (mean = 0.29±0.25) feature extraction algorithms were examined; the ASIFT method was the most successful of these algorithms. Scleral blood vessels are observed to be useful as a feature extraction region due to their textural properties.

Keywords: Eye registration, Feature extraction, Feature matching, Scleral blood vessel, Cyclotorsion

1. INTRODUCTION

Excimer laser keratorefractive surgery, for example photorefractive keratectomy (PRK) or laser-assisted in situ keratomileusis (LASIK), can effectively correct refractive disorders. However, a few patients complain of glare and haloes, inadequate night vision and hazy vision following surgery, even though their visual sharpness has improved [2-4]. A loss in contrast sensitivity may cause these patient complaints, despite their visual acuity being well above the targeted level. An increase in visual anomalies after surgery, which degrade the retinal image and diminish visual performance, may explain this [5-7]. Customized surgery procedures using wavefront measurements, iris registration methods, eye-tracking solutions and scanning spot laser ablation have been proposed to obtain a higher postoperative quality [8-10]. The main aim is to minimize higher-order anomalies, thus achieving improved visual performance.

One of the most important reasons for failure to attain optimum surgical results is cyclotorsion, the angular deviation of the eyes [11]. Cyclotorsion is defined as the rotation of the eye around the anteroposterior axis [12]. The human eye moves around the z-axis, in addition to the x- and y-axes, and cyclotorsion occurs with movements of the head and body. Although the degree differs depending on the individual, cyclotorsion typically ranges from +7.7 degrees of excyclotorsion to -11 degrees [13]. This interval decreases in the operative position [14]. In patients undergoing refractive surgery, the difference in the cyclotorsional change between a seated position and a supine position is reported to range between two and seven degrees [15-17]. If this ocular cyclotorsion is not compensated for, it can have a negative effect on the outcome of the refractive surgery [8]. For example, a rotation of six degrees can reduce the effect of an astigmatic correction by approximately 20% [9]. Ocular cyclotorsion or excyclotorsion may also be responsible for residual astigmatism following refractive surgery [18].

Further possible causes of incorrect alignment with the axis of astigmatism include tilting of the patient's head, unmasking of a cyclophoria, unintentional rotation of the operating microscope and distortion of the globe by a lid retractor. In addition to refractive surgery operations performed using excimer laser systems, the introduction of toric intraocular lens implantations requires eye-tracking systems with robust cyclotorsion compensation [19]. Moreover, the new generation of diagnostic technologies, such as optic coherence tomography [20, 21] and adaptive optics [22, 23], require precise eye tracking and registration. Primitive eye tracking methods fail to compensate for these problems. Eye-tracking systems generally use image processing methods with infrared (IR) light. IR light reflected from the eye is gathered using IR-sensitive cameras and evaluated to determine the optic axis [24]. The position of the limbus (iris margin) and the pupil are generally used for tracking [25]. Although the speed requirements can be met using these methods, two-dimensional tracking cannot detect all types of eye movements and may cause a loss of pattern [24]. To identify the change in cyclotorsion angle between the preoperational and operational position of a patient, the iris registration method [9] uses the natural features of the iris region. Iris registration [9], a noninvasive method for torsional alignment of a captured wavefront image of a patient's eyes during surgery, is a previously developed technique which allows images of the eye to be captured after the LASIK flap is lifted. This method accurately calculates the amount of cyclotorsional eye displacement that occurs when the patient is positioned for surgery. However, this method has some important limitations.

In this paper, we extend previous work by Kaya, Can and Cakmak [1]; we propose a new eye registration method to overcome the limitations of the current pattern registration process and address cyclotorsional eye movements in the supine position. The proposed method uses scleral blood vessels, which have been shown in studies by Kaya, Can and Cakmak [1] and Hoshino and Nakagomi [26] to be useful as a feature extraction region due to their textural properties. With the proposed registration method, we aim to address three major limitations: the high failure rate of successful image acquisition, neglect of the pupillary centroid shift, and the conversion of the passive registration process into an active one. In this method, landmarks related to conjunctival and limbal vasculature are considered, and a registration process is developed using these landmarks. We also define an experimental model and setup to measure the sensitivity of the eye registration method. This model uses a three-dimensional (3D) graphical model of the eye to produce eye images rotated through various angles. In this system, a set of images is created using ground truth values for rotation angles. The dataset contains 46,305 eye images, created using five eye models from real operating data, with cyclotorsion angles in the range -10° to +10° at 1° intervals. This dataset provides a benchmark for possible future eye registration methods, which to the best of our knowledge is currently not available. The proposed registration method is tested and analyzed using this dataset, and a pilot study is performed using real data to measure the effectiveness of the method.

2. MATERIALS

In order to study the cyclotorsion problem within the eye registration process, we create an empirical dataset for the precise calculation of the cyclotorsion angle. In previous work [1], a video dataset acquired from Ataturk Research Hospital's Ophthalmology department was used, in which the videos were captured using a Schwind Esiris® excimer laser camera, and a SONY XC-555P CCD was used to record the surgeries. These videos are in RGB format with a resolution of 704×576 pixels, and are the actual data captured from the LASIK device. The ground truth data for the cyclotorsional angle cannot be determined from the videos; it was therefore necessary to create synthetic data using computer graphics methods. Five reference images from different patients were used. Reference images from this dataset (Figure 1(b)) were used as texture, and a 3D eye model was created (Figure 1(a-c)) with a spherical reference surface using Blender™ 3D modeling software, giving eye images with ground truth values for the cyclotorsion angle. We rendered 46,305 images by rotating the eye model through x, y, and z angles from +10 to -10 degrees (at one-degree intervals). The resolution of the rendered images was 800×600 pixels in RGB format. Figure 2 displays three images rendered by the 3D eye model with various values of the cyclotorsion angle.
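The size of the rendered dataset follows directly from the sweep described above: 21 integer angles per axis over three axes give 21³ = 9,261 poses per eye model, and five models give 46,305 images. A minimal sketch of the enumeration (the function name is ours):

```python
from itertools import product

def rotation_grid(lo=-10, hi=10, step=1):
    """All (x, y, z) rotation triples swept when rendering the eye model."""
    angles = range(lo, hi + 1, step)
    return list(product(angles, repeat=3))

poses = rotation_grid()   # 21^3 = 9,261 poses per eye model
total = 5 * len(poses)    # five eye models -> 46,305 rendered images
```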


Figure 1. (a) Eye model from the camera’s viewpoint; (b) reference texture acquired from the actual dataset; (c) eye model from the world viewpoint.


Figure 2. Rendered models with different x, y, and z values. The center of the iris is corrected after image rendering. The rotation angles are (a) x:0, y:-1, z:-8; (b) x:2, y:-5, z:-3; (c) x:7, y:-8, z:-10.

3. METHOD

In this section, we describe the experimental setup and the proposed method in detail. An outline of the method is presented in Figure 3.

Figure 3. Outline of the proposed method

1. Reference Image & Pattern: Reference image coordinates are x:0, y:0, z:0. The ablation pattern is a simple cross sign centered on the reference image.
2. Determination of the Region of Interest: The region of interest (ROI) is the area surrounding the iris containing scleral blood vessels. This area is extracted using image processing techniques.
3. Feature Extraction & Matching: Interest point detection methods (SIFT, SURF, ASIFT) are used to extract features from the region of interest. The figure shows the feature points extracted from the ROI of the reference image. After processing the reference image, feature extraction and reference feature matching are applied to all subsequent images.
4. Model Fitting: After feature matching, a Hough transform is applied to feature pairs to improve the matching performance. The RANSAC method is then used for outlier detection. An affine matrix is then applied to fit the ablation model, and finally, the cyclotorsion angle is calculated.

3.1. Preprocessing

Before the feature extraction step, we apply several preprocessing steps in order to determine the salient area for the scleral blood vessels. The features of the model are obtained from the scleral blood vessel region around the iris, which must be salient. Gray-level images or the red, green and blue channels in RGB images can be used to detect feature points in this area. We examined these types of images and channels to determine which channel or image type is most useful, and found that the red channel provides low distinctiveness in blood vessels, while the blue channel has high brightness but reduced detail in the blood vessels. Therefore, we use neither the red- nor the blue-channel images. In our experiments, gray-level and green-channel images provided similar performance in terms of feature point detection. We developed the remainder of the method using the green channel information.

Figure 4. Extraction of the region of interest


In addition to the cyclotorsional movements of the human eye, the camera used to record the surgery may also have undesired movements. A registration operation must therefore be robust against these problems. These movements make it difficult to create a geometrical model from the RGB images. As a first step towards generating a model, the region of interest (ROI) must be approximately located. Artifacts such as eyelid holders and unrelated regions such as the iris must be removed from the ROI. In our experiments, we observed that selection of the ROI using the iris region boundary provided more salient interest points and reduced the error rate. The selection of the ROI also improves the processing speed by limiting the number of interest points. Figure 4 displays the output of the morphological operations applied to the reference image to determine the ROI. The iris has more intensity than the other regions, and separating this region from the edges of the image by cutting connections and cleaning small regions provides its approximate region. To determine the ROI, the green channel image is first converted to a binary image. After taking the complement of the binary image, dilation and closing operations are applied using a disk structuring element. To reduce the connection between the iris region and the image edges, an erosion operation is applied using a line structuring element. The regions connected to the image boundaries are deleted to leave only the iris region in the image. However, regions other than the iris may remain. When the largest region is selected, only the iris region remains in the image, since the other regions are smaller. Dilation and erosion operations are then performed on this image using a larger disk structuring element. Finally, the difference between the dilated and eroded images is used to obtain the ROI mask. When this mask is applied to the original image, the ROI is obtained, as illustrated in the final image in Figure 4.
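The morphological pipeline described above can be sketched with scipy.ndimage as a stand-in for the operations named in the text. The threshold and structuring-element sizes below are illustrative guesses (the paper does not specify them), a square element approximates the disk, and the function name is ours:

```python
import numpy as np
from scipy import ndimage

def scleral_roi_mask(green, thresh=100, ring=15):
    """Approximate ROI ring of scleral vessels around the iris.

    Sketch of the paper's outline: binarize the green channel, isolate
    the iris as the largest region not touching the image border, then
    take the difference between its dilation and erosion to get a ring.
    """
    fg = green < thresh                       # complement: dark iris -> foreground
    fg = ndimage.binary_closing(fg, structure=np.ones((5, 5), bool))
    labels, n = ndimage.label(fg)
    # Delete regions connected to the image boundaries.
    edge = set(labels[0]) | set(labels[-1]) | set(labels[:, 0]) | set(labels[:, -1])
    for lab in edge - {0}:
        fg[labels == lab] = False
    labels, n = ndimage.label(fg)
    if n == 0:
        return np.zeros_like(fg)
    sizes = ndimage.sum(fg, labels, index=range(1, n + 1))
    iris = labels == (int(np.argmax(sizes)) + 1)   # keep largest region: the iris
    disk = np.ones((ring, ring), bool)             # larger "disk" structuring element
    # ROI = dilated iris minus eroded iris: a ring straddling the limbus.
    return ndimage.binary_dilation(iris, disk) & ~ndimage.binary_erosion(iris, disk)
```

Applied to a synthetic image with a dark disk on a bright background, the mask covers an annulus around the disk boundary, mirroring the final image in Figure 4.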

3.2. Feature Extraction

After determining the region of interest, distinctive feature points within the ROI are extracted from the images. We apply the SIFT (Scale Invariant Feature Transform) [27], SURF (Speeded Up Robust Features) [28] and ASIFT (Affine-SIFT) [29] algorithms to the entire image, and select points of interest that fall within the ROI. This method facilitates the determination of salient feature points and eliminates edge effects; applying these algorithms directly to the ROI would cause edge effects and create artificial feature points on the edges of the ROI.

SIFT is a computer vision algorithm for image feature detection and description, and was proposed by Lowe [27]. The extracted features are distinctive, and are scale- and orientation-invariant. This method is used for image matching, image registration and object tracking, and has become the basis of many feature extraction methods such as SURF and ASIFT.

SURF is a scale- and rotation-invariant feature extraction method proposed by Bay, Ess, Tuytelaars and Van Gool [28] and is based on the SIFT method. The extraction of interest points in SURF is similar to that in SIFT; however, the use of Haar wavelet filters and integral images reduces the time complexity. This is the main advantage of SURF compared to SIFT.

TE D

ASIFT is an extended version of the SIFT method [29]. Whereas the original SIFT method covers four affine parameters, the ASIFT method covers six. ASIFT is fully affine-invariant, and has its own wrong-match elimination process; it utilizes the ORSA method, which is more reliable than RANSAC [30]. The RANSAC algorithm is a general estimation approach used for estimating selected model parameters. The ORSA (Optimized Random Sampling) algorithm is an optimized version of the RANSAC method. It uses an a-contrario approach to model the expected residual error and focuses on inliers to optimize consensus, and thus generally produces better results than RANSAC.

After extracting feature points using SIFT, SURF or ASIFT, the feature vectors of the reference image and subsequent images are matched using the second-closest-neighbor approach. The Euclidean distances between one feature point of the reference image and all feature points of the subsequent image are calculated, and the two closest matches with the smallest Euclidean distances are selected. A threshold value is selected within the range 0.0-1.0. If the product of the threshold value and the second smallest distance is greater than the smallest distance, the feature point with the smallest distance is a valid match; otherwise, the feature is discarded.
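The second-closest-neighbor test described above amounts to Lowe's ratio test; a minimal sketch follows, where the names, the 0.8 threshold, and the plain-vector descriptors are our illustrative choices:

```python
import numpy as np

def ratio_test_match(ref_desc, new_desc, thresh=0.8):
    """Match reference descriptors to a new image's descriptors.

    For each reference feature, find its two nearest neighbours by
    Euclidean distance; keep the best match only if
    best < thresh * second_best (the second-closest-neighbour test).
    """
    matches = []
    for i, d in enumerate(ref_desc):
        dists = np.linalg.norm(new_desc - d, axis=1)
        first, second = np.argsort(dists)[:2]
        if dists[first] < thresh * dists[second]:
            matches.append((i, int(first)))
    return matches
```

A feature whose best and second-best distances are similar is ambiguous and is discarded, which is exactly the filtering the threshold performs.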

3.3. Model Fitting

We create a representative ablation model in the form of a cross (Figure 5a) and superimpose the center of this model on the center of the unrotated eye model (x:0, y:0, z:0) (Figure 5b). As the eyes of the patient move, the cross sign moves, and the angles of this movement are calculated.

Figure 5. (a) Representative cross model; (b) superimposed model and eye

We propose a measurement method for the cyclotorsion angle and compare this with the method of Kaya, Can and Cakmak [1] (the rotational component of the affine transformation matrix). Kaya, Can and Cakmak [1] do not calculate a cyclotorsion value but the rotation of the ablation pattern. However, as we superimpose the centers of the models, as in Figure 5b, we can obtain an approximate cyclotorsion value from the affine matrix's rotational component. Thus, we can compare the two methods. Equation (1) gives a sample representation for the affine matrix A with homogeneous coordinates:

A = \begin{pmatrix} \cos\theta & -\sin\theta & t_x \\ \sin\theta & \cos\theta & t_y \\ 0 & 0 & 1 \end{pmatrix}    (1)

In the proposed method, we do not directly use the affine matrix rotational component, unlike Kaya, Can and Cakmak [1]. In defining our calculation method, we apply the affine transformation to the top (p_t), bottom (p_d), left (p_l) and right (p_r) points of the representative model to determine their new positions after the x, y and z angles are changed. Let p be any point on the model; then, the transformed point p' can be determined as in Equation (2):

p = [x \; y \; 1], \qquad p' = A^{-1} p^{T}    (2)

Using Equation (2), the transformed points p'_t, p'_d, p'_l, and p'_r are calculated. m_1 and m_2 are the slope values between p'_t−p'_d and p'_l−p'_r, respectively. The cyclotorsion angle C is calculated using the average of these slopes, as shown in Equation (3):

m_1 = (p'^{y}_{t} - p'^{y}_{d}) / (p'^{x}_{t} - p'^{x}_{d})
m_2 = (p'^{y}_{l} - p'^{y}_{r}) / (p'^{x}_{l} - p'^{x}_{r})
C = (\arctan(m_1) + \arctan(m_2)) / 2    (3)
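Equations (2) and (3) can be sketched as follows. Instead of averaging raw slopes, the sketch measures each arm's rotation as the signed angle between its unrotated and transformed direction vectors, which is equivalent for small angles but stays well defined for near-vertical arms. The function names, the arm half-length, and the reading of A as mapping the new image back to the reference (hence the A⁻¹) are our assumptions:

```python
import numpy as np

def signed_angle(v0, v1):
    """Signed angle in degrees that rotates vector v0 onto v1."""
    a = np.arctan2(v1[1], v1[0]) - np.arctan2(v0[1], v0[0])
    return np.degrees((a + np.pi) % (2 * np.pi) - np.pi)

def cyclotorsion_angle(A, half=100.0):
    """Cyclotorsion estimate from affine matrix A via the cross model.

    Transforms the top/bottom/left/right endpoints of the cross with
    A^-1 (Eq. 2) and averages the rotation of the two arms (Eq. 3).
    """
    pts = np.array([[0.0, -half, 1.0],    # top
                    [0.0,  half, 1.0],    # bottom
                    [-half, 0.0, 1.0],    # left
                    [ half, 0.0, 1.0]]).T
    pt, pd, pl, pr = (np.linalg.inv(A) @ pts)[:2].T
    vert = signed_angle([0.0, -2 * half], pt - pd)   # vertical arm rotation
    horz = signed_angle([-2 * half, 0.0], pl - pr)   # horizontal arm rotation
    return (vert + horz) / 2.0
```

With this sign convention, a pure rotation A = R(θ) yields −θ: the pattern must be rotated back by θ to fit the new image.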

4. RESULTS

As explained in Section 2, the dataset used in this study contains 46,305 eye images, which were produced from five reference images by rotating the 3D eye model through x, y, and z angles of between -10 and 10 degrees. The four different feature extraction methods (FEM) of SIFT, SURF64, SURF128, and ASIFT, and the two cyclotorsion measurement methods (CEM) of the affine matrix rotational component [1] and the proposed method were used in these experiments. Figure 6 presents the mean error rates of all of these methods for each cyclotorsion angle. The error rate for each image is calculated by taking the difference between the ground truth cyclotorsion value and the calculated value. The mean of the error rates is then calculated for all images with the same cyclotorsion value and is plotted on the graph. Figure 7 shows a box plot of these experiments with respect to the FEM and CEM methods. According to these results, the proposed method using the ASIFT feature extraction method produced the lowest mean error rates.

[Figure 6: line chart "Mean Error Rates of Methods". Y-axis: Mean Error (degree), 0.0 to 1.0; X-axis: Cyclotorsion Angle (degree), -10 to 10. Series: SIFT Affine, SIFT P. Method, SURF64 Affine, SURF64 P. Method, SURF128 Affine, SURF128 P. Method, ASIFT Affine, ASIFT P. Method.]

Figure 6. Mean error rates of the methods for cyclotorsion angles between -10° and 10°.

[Figure 7: box plots of error rates (0.0 to 1.2 degrees) for SIFT Affine, SIFT P. Method, SURF64 Affine, SURF64 P. Method, SURF128 Affine, SURF128 P. Method, ASIFT Affine, ASIFT P. Method.]

Figure 7. Box plot of experimental results with respect to FEM and CEM.


We used several statistical tests to compare the significance and performance of the experimental methods. A 4 (FEM: SIFT, SURF64, SURF128, ASIFT) × 2 (CEM: affine rotation, proposed method) repeated measures ANOVA was applied to compare the general error rate among the methods. Mauchly's test of sphericity was used for the homogeneity assumption of variance between the differences of the dependent measures. According to the test results, the FEM variable violates the sphericity assumption (Mauchly W = 0.33, χ2(5) = 10203.40, p < 0.01); thus, the variance is not homogeneous. The interaction effect of the FEM and CEM variables also violates the sphericity assumption (Mauchly W = 0.48, χ2(5) = 6793.29, p < 0.01). For this reason, a Greenhouse-Geisser correction (ɛ < 0.75) was applied. The CEM variable's sphericity assumption could not be tested, since it has two levels.

Table 1. ANOVA results for FEM and CEM

Source    df (Error)        F        p      ηp²
FEM       1.73 (15986.61)   2662.91  0.000  0.22
CEM       1 (9260.00)       4396.89  0.000  0.32
FEM*CEM   1.97 (18240.35)   889.40   0.000  0.09

According to the ANOVA results presented in Tables 1 and 2, the effect of the FEM and CEM on the error rates is significant. The error rate of the proposed method is less than that of the affine rotation method [1]. Furthermore, the interaction effect of the FEM*CEM variables on the error is also significant. Additionally, post hoc analysis (Bonferroni correction) was performed to determine the levels of the FEM variable which affect the error rate. According to the multiple comparison results, all FEM methods show statistical significance (p < 0.001) except for the comparison between the SURF64 and SURF128 methods (p > 0.05). The error rate of the SIFT method is greater than those of SURF64, SURF128 and ASIFT. The smallest error rate was provided by the ASIFT method, which produced the best performance compared to the other feature detectors.

Table 2. FEM and CEM mean and standard error values

FEM                  Mean   Std. Error
SIFT                 0.74   0.01
SURF64               0.56   0.00
SURF128              0.57   0.01
ASIFT                0.29   0.00

CEM
Affine Rotation [1]  0.64   0.01
Proposed Method      0.44   0.00

The same post hoc analysis was performed on the FEM*CEM variables to determine the cyclotorsion measurement method offering the best performance. According to the results of the multiple comparisons, the error rates of the proposed method were lower than those of the affine rotation method [1] for all feature extraction methods. Table 3 displays the descriptive statistics of each method; the proposed method produced superior results compared to the previous study for all feature detectors.

Table 3. Descriptive statistics (mean ± standard deviation) for each method of the FEM*CEM variables

                     SIFT         SURF64       SURF128      ASIFT
Affine Rotation [1]  0.85 ± 0.98  0.71 ± 0.52  0.70 ± 0.53  0.29 ± 0.27
Proposed Method      0.63 ± 0.58  0.41 ± 0.39  0.44 ± 0.43  0.29 ± 0.22

When looking at the mean error rate, sample-based performance may be overlooked. A given method may perform well in general, yet produce significant error values in a small number of cases. In such cases, the values of the mean and standard deviation may be large compared to other methods. We therefore define a threshold value to analyze the sample-based performance of the tested methods and to determine whether the error rate remains within a certain range. Three different thresholds t (0.5, 1, and 1.5 degrees) were used in these experiments; the thresholds were selected to be below the error range reported by Bara, Mancebo and Moreno-Barriuso [15]. If the difference between the angle calculated by the proposed method and the actual angle is less than the threshold, the error rate is considered acceptable; otherwise, it is not acceptable. Acceptable values are assigned a value of "1", and unacceptable values "0". Table 4 and Figure 8 present the frequency values of the dichotomous data. Since the 46,305 eye images of the dataset were obtained from five eye samples, we present the mean values over the five samples. Thus, 9,261 results are presented, covering all combinations between +10 and -10 degrees in the x, y, and z dimensions. As expected from the previous results, ASIFT produced more acceptable results than the other feature extraction methods. When the threshold was lowered, the rate of acceptable estimations was reduced.
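The dichotomization step can be sketched directly (the function name is ours, the error vector is illustrative):

```python
import numpy as np

def acceptability_counts(errors, thresholds=(1.5, 1.0, 0.5)):
    """Dichotomize absolute angle errors: 1 = acceptable (< t), 0 = not.

    Returns {t: (n_unacceptable, n_acceptable)}, the frequencies
    tabulated per method in Table 4.
    """
    e = np.abs(np.asarray(errors, dtype=float))
    return {t: (int((e >= t).sum()), int((e < t).sum())) for t in thresholds}

# Example: five errors in degrees.
counts = acceptability_counts([0.2, 0.4, 0.9, 1.2, 1.7])
# -> {1.5: (1, 4), 1.0: (2, 3), 0.5: (3, 2)}
```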

Table 4. Frequency of unacceptable (0) and acceptable (1) values according to different thresholds (t) for each method

                    t = 1.5        t = 1.0        t = 0.5
                    0      1       0      1       0      1
SIFT Affine         1158   8103    2454   6807    5823   3438
SIFT P. Method      422    8839    1453   7808    4623   4638
SURF64 Affine       765    8496    2071   7190    5491   3770
SURF64 P. Method    178    9083    649    8612    2680   6581
SURF128 Affine      671    8590    1742   7519    5456   3805
SURF128 P. Method   293    8968    734    8527    2707   6554
ASIFT Affine        54     9207    280    8981    1390   7871
ASIFT P. Method     0      9261    80     9181    1546   7715

[Figure 8: three bar charts (t=1.5, t=1.0, t=0.5) showing the counts of unacceptable (0) and acceptable (1) estimations for each method, as tabulated in Table 4.]

Figure 8. Bar graphs of dichotomous data (unacceptable (0)/acceptable (1)) for three threshold values

Cochran's Q test was performed on the dichotomous data and the results are shown in Table 5. There is a statistically significant difference in the proportion of samples measured by the methods for all threshold values.

Table 5. Cochran's Q test results for three thresholds

Source    df    Q         p
t = 1.5   7     3260.64   0.001
t = 1.0   7     7063.56   0.001
t = 0.5   7     15971.83  0.001

4.1. Pilot Study Using Real Data

A pilot study was carried out using real eye data captured during LASIK surgery. The images in the pilot study were captured by a Schwind Esiris® excimer laser camera (a SONY XC-555P CCD), which is used for recording surgeries. Sample images from the pilot study are shown in Figure 9. Some of the images are occluded with tools (2nd row, 2nd column) or blurred by cleaning solutions and camera movements (2nd row, 4th column). One hundred random image samples were selected from video data for three different patients. The characteristics of the dataset are given in Table 6. Since the actual cyclotorsion angle could not be calculated, expert-marked data were used in this study. First, a simple cross pattern was applied to a reference image. Then, an expert marked the expected top, bottom, left and right points of this pattern on subsequent images. Finally, the rotation of the pattern was calculated with the proposed method (P.Method/ASIFT) and the existing method [1] (Affine/SIFT). The results of the pilot study are summarized in Table 7.


Figure 9. Sample images from the pilot study data

Table 6. Characteristics of the dataset

No. of instances                      100
No. of patients                       3
Measured cyclotorsion range           [-1.42°, +1.60°]
Mean cyclotorsion angle (absolute)    1.47°
Std. dev. (absolute)                  0.32°

Table 7. Results of the measurement methods used in the pilot study

                             Affine/SIFT [1]   P.Method/ASIFT
No. of instances             100               100
Mean cyclotorsion error      0.68°             0.48°
Std. dev. (SD)               0.72°             0.46°
Valid measurements (t=0.5)   56                70
Valid measurements (t=1.0)   82                90
Valid measurements (t=1.5)   90                96

A paired samples t-test was conducted to compare the errors of Affine/SIFT and P.Method/ASIFT for the pilot study data. There is a significant difference between the measurements for Affine/SIFT and P.Method/ASIFT. McNemar's test was applied to the dichotomous data, prepared using the three different thresholds shown in Table 7. The results were p = 0.01 for a 1.5-degree threshold, p = 0.04 for a 1-degree threshold, and p = 0.02 for a 0.5-degree threshold. Thus, there is a statistically significant difference in the proportion of samples measured using these methods for all threshold values.
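The two tests above can be sketched with scipy.stats on synthetic error vectors. The McNemar statistic is computed by hand with a continuity correction, since the paper does not state which variant was used; all names and inputs below are ours:

```python
import numpy as np
from scipy import stats

def compare_methods(err_a, err_b, t=1.0):
    """Paired comparison of two measurement methods' errors (sketch).

    Paired-samples t-test on the raw errors, plus McNemar's chi-square
    (with continuity correction) on the dichotomized acceptable/not
    data, mirroring the pilot-study analysis. Synthetic inputs only.
    """
    err_a, err_b = np.asarray(err_a, float), np.asarray(err_b, float)
    _, p_t = stats.ttest_rel(err_a, err_b)
    ok_a, ok_b = np.abs(err_a) < t, np.abs(err_b) < t
    b = int(np.sum(ok_a & ~ok_b))   # discordant: A acceptable, B not
    c = int(np.sum(~ok_a & ok_b))   # discordant: B acceptable, A not
    if b + c == 0:
        return p_t, 1.0
    chi2 = (abs(b - c) - 1) ** 2 / (b + c)
    return p_t, stats.chi2.sf(chi2, df=1)
```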

5. DISCUSSION

The current iris registration process is a passive tracking process. At the moment iris registration is performed, there is a successful alignment of the iris with the previously obtained iris map; the cornea is therefore also successfully aligned with the previously determined treatment. However, if there is any movement of the patient's head or eyes after the moment of iris registration, the treatment may be performed along an axis or in a position other than that desired. The current technology allows pupillary but not iris tracking throughout the procedure, which allows overall maintenance of the treatment within the general area. However, torsional movements [31, 32] or changes in centroid shift [33, 34] throughout the procedure are not actively addressed. An important cause of compromised results after wavefront-guided LASIK is a misalignment of the ablation profile due to shifting of the pupillary center when the illumination is changed. This misalignment can compromise the surgical treatment and lead to reduced visual acuity and refraction, undercorrection of existing aberrations, and the introduction of new aberrations [16, 35-37].

Moreover, successful iris registration is not possible in some patients despite a clear view through the cornea. Chernyak [38] reported a successful iris registration rate of 82.2% in a sample of 80 eyes. Donnenfeld, Nattis, Fishman, Roth, Stein and McDonald [39] reported a successful iris registration rate of 93% for 1,193 eyes. Failure to obtain a successful iris registration is more frequent in eyes with light iris colors, and this failure rate motivates the development of new methods that can provide superior results.


New methods have been developed to overcome the deficiencies of iris registration. Mosquera and Arbelaez [40] assessed a six-dimensional eye-tracking method on the SCHWIND AMARIS® platform by means of the postoperative outcomes of LASIK interventions; the results showed that this platform provides successful outcomes in terms of a reduction of higher-order optical aberrations. A successful eye-tracking system with robust registration for cyclotorsion has many potential applications, such as diagnostic systems based on optical coherence tomography, adaptive optics, and microperimetry [41]. Further, the analysis of torsional ocular deviations can assist in the early diagnosis of many ocular and systemic diseases before complications arise [42-45]. Expert systems that guide ophthalmologists during surgical interventions, including pattern laser systems for retinal photocoagulation [46] and intraoperative surgical plan overlays coupled with operating microscopes, will require a new generation of eye-tracking technologies. Hoshino and Nakagomi [26] used conjunctival blood vessels as feature regions to detect and measure rotational eye movements, which can serve as indicators of several disorders. Other studies have used scleral blood vessels for ocular biometrics [47-49]; the vein structure of the eye serves as biometric information in these studies, but cyclotorsional eye movement is not addressed. In another study, Shimizu and Fukui [50] used an artificial dataset similar to ours to estimate eye gaze: their eyeball model-based approach uses the iris and pupil regions to estimate gaze angles, and the artificial dataset helps to validate the results and quantify the estimation errors. Using scleral blood vessels can compensate for the deformation of the iris region and provide superior feature regions of interest. To our knowledge, the only previous study that uses scleral blood vessels for eye registration is that of Kaya, Can and Cakmak [1], which uses the SIFT feature detector for feature extraction and calculates only the translation and rotation of the ablation pattern. In the proposed method, we apply four feature extractors (SIFT, SURF 64, SURF 128, and ASIFT) and improve an existing method for cyclotorsion detection.

The proposed system may have some practical limitations. When the proposed method is used in eye surgery operations, certain adverse effects may arise that influence the operation of the system: the appearance of the episcleral or conjunctival vessels may change during surgery when vasoconstrictor agents are applied. Updating the extracted feature data and redefining the reference interest points during the operation may help to address this limitation. Furthermore, unlike iris registration [8], the proposed method can be used as an online eye registration technique if an efficient implementation can fulfill the speed requirements.

6. CONCLUSION

In this study, we present an eye registration method that measures cyclotorsional eye movements using scleral blood vessels. We study various feature extraction methods and compare the resulting cyclotorsion measurements with the actual rotation. Statistical tests were applied to evaluate the effects of these methods on the measurements. The results confirm that the proposed method produces superior results compared to the affine rotation method of Kaya, Can and Cakmak [1]. Moreover, it is demonstrated here that the feature extraction method has an impact on the accuracy of these methods and that feature extractors with better matching performance are beneficial to procedure outcomes. In our experiments, the SIFT, SURF, and ASIFT feature extraction methods were used; these methods were chosen for their robustness in detecting scleral blood vessel features. Of these, ASIFT with higher affine parameters was the most successful. An experimental setup was also created to measure the sensitivity of the proposed method to cyclotorsion in the supine position. In the study of Kaya, Can and Cakmak [1], the rotation measurement was performed with the assistance of an operator, and only the rotation angle of the ablation pattern was considered; due to the constraints of their dataset, the cyclotorsion of the patient's eye could not be measured in their study. The experimental model proposed here therefore provides an environment for testing the sensitivity of eye registration methods. A dataset of 46,305 eye image instances was created from five different eye samples with ground-truth rotation angles. Since the proposed 3D graphical model enables the production of more eye images from different sample images, the experimental environment can easily be expanded and used for testing possible future eye registration methods. In future work, we plan to implement the proposed registration method in adaptive optics applications such as cataract toric alignment devices. GPU-based implementations are another research direction for improving real-time registration performance.
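The core geometric step underlying this evaluation, recovering a cyclotorsion angle from keypoints matched between a reference and a current eye image, can be sketched as follows. This is an illustrative closed-form least-squares (orthogonal Procrustes) solution under our own simplifying assumptions, not the paper's implementation; the keypoint coordinates and the 3-degree rotation below are synthetic placeholders standing in for matched scleral-vessel features with known ground truth.

```python
import math

def estimate_rotation_deg(ref_pts, cur_pts):
    """Least-squares rotation angle (degrees) from matched 2D keypoints.

    Centers both point sets to remove translation, then applies the
    closed-form 2D solution theta = atan2(sum of cross products,
    sum of dot products) of the orthogonal Procrustes problem.
    """
    n = len(ref_pts)
    rcx = sum(p[0] for p in ref_pts) / n
    rcy = sum(p[1] for p in ref_pts) / n
    ccx = sum(p[0] for p in cur_pts) / n
    ccy = sum(p[1] for p in cur_pts) / n
    cross = dot = 0.0
    for (rx, ry), (cx, cy) in zip(ref_pts, cur_pts):
        ax, ay = rx - rcx, ry - rcy  # centered reference point
        bx, by = cx - ccx, cy - ccy  # centered current point
        cross += ax * by - ay * bx
        dot += ax * bx + ay * by
    return math.degrees(math.atan2(cross, dot))

# Synthetic "scleral vessel" keypoints rotated by a known 3-degree
# cyclotorsion and translated, mimicking a ground-truth test pair.
theta = math.radians(3.0)
ref = [(10.0, 4.0), (-6.0, 12.0), (3.0, -9.0), (-8.0, -5.0), (14.0, 1.0)]
cur = [(x * math.cos(theta) - y * math.sin(theta) + 5.0,
        x * math.sin(theta) + y * math.cos(theta) - 2.0) for x, y in ref]
print(round(estimate_rotation_deg(ref, cur), 3))  # → 3.0
```

In practice the matched pairs produced by SIFT/ASIFT contain outliers, so such a fit would be wrapped in a robust scheme (e.g., RANSAC-style resampling) rather than applied to the raw matches.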


REFERENCES

[1] A. Kaya, A.B. Can, H.B. Cakmak, Designing a pattern stabilization method using scleral blood vessels for laser eye surgery, 20th International Conference on Pattern Recognition (ICPR), 2010, pp. 698-701.
[2] R.A. Applegate, G. Hilmantel, H.C. Howland, E.Y. Tu, T. Starck, E.J. Zayac, Corneal first surface optical aberrations and visual performance, J Refract Surg, 16 (2000) 507-514.
[3] T. Oshika, S.D. Klyce, R.A. Applegate, H.C. Howland, M.A. El Danasoury, Comparison of corneal wavefront aberrations after photorefractive keratectomy and laser in situ keratomileusis, Am J Ophthalmol, 127 (1999) 1-7.
[4] T. Oshika, K. Miyata, T. Tokunaga, T. Samejima, S. Amano, S. Tanaka, Y. Hirohara, T. Mihashi, N. Maeda, T. Fujikado, Higher order wavefront aberrations of cornea and magnitude of refractive correction in laser in situ keratomileusis, Ophthalmology, 109 (2002) 1154-1158.
[5] T. Seiler, M. Kaemmerer, P. Mierdel, H.E. Krinke, Ocular optical aberrations after photorefractive keratectomy for myopia and myopic astigmatism, Arch Ophthalmol, 118 (2000) 17-21.
[6] T. Seiler, M. Mrochen, M. Kaemmerer, Operative correction of ocular aberrations to improve visual acuity, J Refract Surg, 16 (2000) S619-622.
[7] W. Verdon, M. Bullimore, R.K. Maloney, Visual performance after photorefractive keratectomy. A prospective study, Arch Ophthalmol, 114 (1996) 1465-1472.
[8] D.A. Chernyak, Cyclotorsional eye motion occurring between wavefront measurement and refractive surgery, J Cataract Refract Surg, 30 (2004) 633-638.
[9] D.A. Chernyak, Iris-based cyclotorsional image alignment method for wavefront registration, IEEE Trans Biomed Eng, 52 (2005) 2032-2040.
[10] D. Morley, H. Foroosh, Computing cyclotorsion in refractive cataract surgery, IEEE Trans Biomed Eng, 63 (2016) 2155-2168.
[11] H. Kim, C.-K. Joo, Ocular cyclotorsion according to body position and flap creation before laser in situ keratomileusis, J Cataract Refract Surg, 34 (2008) 557-561.
[12] A. Harden, B. Dulley, Cyclotorsion: a new method of measurement, Proc R Soc Med, 67 (1974) 819-822.
[13] A.R. Lucena, J.A.D.A. Mota, D.R. de Lucena, S.D.M. Ferreira, N.L. de Andrade, Cyclotorsion measurement in laser refractive surgery, Arq Bras Oftalmol, 76 (2013) 339-340.
[14] D.C. Fahd, E. Jabbour, C.D. Fahed, Static cyclotorsion measurements using the Schwind Amaris laser, Arq Bras Oftalmol, 77 (2014) 159-163.
[15] S. Bara, T. Mancebo, E. Moreno-Barriuso, Positioning tolerances for phase plates compensating aberrations of the human eye, Appl Opt, 39 (2000) 3413-3420.
[16] A. Guirao, D.R. Williams, I.G. Cox, Effect of rotation and translation on the expected benefit of an ideal method to correct the eye's higher-order aberrations, J Opt Soc Am A Opt Image Sci Vis, 18 (2001) 1003-1015.
[17] J.D. Stevens, Astigmatic excimer laser treatment: theoretical effects of axis misalignment, European Journal of Implant and Refractive Surgery, 6 (1994) 310-318.
[18] M.J. Tjon-Fo-Sang, J.T. de Faber, C. Kingma, W.H. Beekhuis, Cyclotorsion: a possible cause of residual astigmatism in refractive surgery, J Cataract Refract Surg, 28 (2002) 599-602.
[19] A. Bachernegg, T. Ruckl, W. Riha, G. Grabner, A.K. Dexl, Rotational stability and visual outcome after implantation of a new toric intraocular lens for the correction of corneal astigmatism during cataract surgery, J Cataract Refract Surg, 39 (2013) 1390-1398.


[20] B. Braaf, K.V. Vienola, C.K. Sheehy, Q. Yang, K.A. Vermeer, P. Tiruveedhula, D.W. Arathorn, A. Roorda, J.F. de Boer, Real-time eye motion correction in phase-resolved OCT angiography with tracking SLO, Biomed Opt Express, 4 (2013) 51-65.
[21] K.V. Vienola, B. Braaf, C.K. Sheehy, Q. Yang, P. Tiruveedhula, D.W. Arathorn, J.F. de Boer, A. Roorda, Real-time eye motion compensation for OCT imaging with tracking SLO, Biomed Opt Express, 3 (2012) 2950-2963.
[22] F. Felberer, M. Rechenmacher, R. Haindl, B. Baumann, C.K. Hitzenberger, M. Pircher, Imaging of retinal vasculature using adaptive optics SLO/OCT, Biomed Opt Express, 6 (2015) 1407-1418.
[23] J. Zhang, Q. Yang, K. Saito, K. Nozato, D.R. Williams, E.A. Rossi, An adaptive optics imaging system designed for clinical use, Biomed Opt Express, 6 (2015) 2120-2137.
[24] R.R. Krueger, R.A. Applegate, Wavefront Customized Visual Corrections: The Quest for Super Vision II, Slack Incorporated, 2003.
[25] F. Li, S. Munn, J. Pelz, A model-based approach to video-based eye tracking, Journal of Modern Optics, 55 (2008) 503-531.
[26] K. Hoshino, H. Nakagomi, Measurement of rotational eye movement under blue light irradiation by tracking conjunctival blood vessel ends, 2013 IEEE/SICE International Symposium on System Integration (SII), 2013, pp. 204-209.
[27] D.G. Lowe, Object recognition from local scale-invariant features, Proceedings of the Seventh IEEE International Conference on Computer Vision, IEEE, 1999, pp. 1150-1157.
[28] H. Bay, A. Ess, T. Tuytelaars, L. Van Gool, Speeded-Up Robust Features (SURF), Comput Vis Image Und, 110 (2008) 346-359.
[29] J.M. Morel, G.S. Yu, ASIFT: a new framework for fully affine invariant image comparison, SIAM J Imaging Sci, 2 (2009) 438-469.
[30] L. Moisan, B. Stival, A probabilistic criterion to detect rigid point matches between two images and estimate the fundamental matrix, International Journal of Computer Vision, 57 (2004) 201-218.
[31] A.E. Ciccio, D.S. Durrie, J.E. Stahl, F. Schwendeman, Ocular cyclotorsion during customized laser ablation, J Refract Surg, 21 (2005) S772-774.
[32] S. Flodin, P. Karlsson, M.A. Gronlund, Cyclotorsion measured in a patient population using three different methods: a comparative study, Strabismus, 24 (2016) 28-36.
[33] E. Donnenfeld, The pupil is a moving target: centration, repeatability, and registration, J Refract Surg, 20 (2004) S593-S596.
[34] Y. Yang, K. Thompson, S.A. Burns, Pupil location under mesopic, photopic, and pharmacologically dilated conditions, Invest Ophthalmol Vis Sci, 43 (2002) 2508-2512.
[35] M. Bueeler, M. Mrochen, T. Seiler, Maximum permissible lateral decentration in aberration-sensing and wavefront-guided corneal ablation, J Cataract Refract Surg, 29 (2003) 257-263.
[36] M. Bueeler, M. Mrochen, T. Seiler, Maximum permissible torsional misalignment in aberration-sensing and wavefront-guided corneal ablation, J Cataract Refract Surg, 30 (2004) 17-25.
[37] M. Mrochen, M. Kaemmerer, P. Mierdel, T. Seiler, Increased higher-order optical aberrations after laser refractive surgery: a problem of subclinical decentration, J Cataract Refract Surg, 27 (2001) 362-369.
[38] D.A. Chernyak, From wavefront device to laser: an alignment method for complete registration of the ablation to the cornea, J Refract Surg, 21 (2005) 463-468.
[39] E.D. Donnenfeld, A. Nattis, G.R. Fishman, J. Roth, J. Stein, M.B. McDonald, Effect of cyclotorsion and pupil centroid shift on excimer laser photoablation: analysis of 1000 cases, American Society of Cataract and Refractive Surgery Symposium, San Diego, Calif, 2007.
[40] S.A. Mosquera, M.C. Arbelaez, Use of a six-dimensional eye-tracker in corneal laser refractive surgery with the SCHWIND AMARIS TotalTech Laser, J Refract Surg, 27 (2011) 582-590.
[41] S.N. Markowitz, S.V. Reyes, Microperimetry and clinical practice: an evidence-based review, Can J Ophthalmol, 48 (2013) 350-357.
[42] J.L. Oviedo, A. Caparros, Information and visual attention in contingent valuation and choice modeling: field and eye-tracking experiments applied to reforestations in Spain, J Forest Econ, 21 (2015) 185-204.
[43] N.D. Smith, F.C. Glen, V.M. Mönter, D.P. Crabb, Using eye tracking to assess reading performance in patients with glaucoma: a within-person study, Journal of Ophthalmology, 2014 (2014).
[44] O. Braddick, J. Atkinson, Development of human visual function, Vision Research, 51 (2011) 1588-1609.
[45] K.-U. Schmitt, M.H. Muser, C. Lanz, F. Walz, U. Schwarz, Comparing eye movements recorded by search coil and infrared eye tracking, J Clin Monit Comput, 21 (2007) 49-53.
[46] M.S. Blumenkranz, The evolution of laser therapy in ophthalmology: a perspective on the interactions between photons, patients, physicians, and physicists: the LXX Edward Jackson Memorial Lecture, Am J Ophthalmol, 158 (2014) 12-25.e11.
[47] Y. Lin, E.Y. Du, Z. Zhou, N.L. Thomas, An efficient parallel approach for sclera vein recognition, IEEE Trans Inf Forensics Secur, 9 (2014) 147-157.


[48] S. Crihalmeanu, A. Ross, Multispectral scleral patterns for ocular biometric recognition, Pattern Recogn Lett, 33 (2012) 1860-1869.
[49] R. Derakhshani, A. Ross, S. Crihalmeanu, A new biometric modality based on conjunctival vasculature, Artificial Neural Networks in Engineering, St. Louis, USA, 2006, pp. 1-8.
[50] M. Shimizu, K. Fukui, Eye-gaze estimation accuracy and key in human vision, 2015 IEEE International Conference on Signal and Image Processing Applications (ICSIPA), IEEE, 2015, pp. 48-53.