
PROPOSED RETINAL IDENTIFICATION ALGORITHM

INTRODUCTION

Nowadays, biometrics such as fingerprint, face, hand geometry, voice, iris and retina are widely used in access control applications [1, 2]. Each of these biometrics has pros and cons, and the choice among them depends on the required security level and the constraints of the application. Among them, the retinal blood vessel pattern provides a particularly reliable and secure biometric identifier because it lies at the back of the eye globe, is not externally accessible, and is therefore robust against forgery [3]. Compared with other biometrics, the retinal blood vessel pattern is also less vulnerable to external damage [4].

Fig. 1 illustrates a retinal image in which the Optic Disc (OD), the blood vessels and the fovea have been labeled. The OD is a bright region where the ganglion cell axons and the blood vessels exit the eye [5]. The fovea is located in the central portion of the retina, where the eye's photoreceptors are concentrated. Because these receptors absorb most of the incident light, this region reflects the least light during scanning, so the fovea appears darker in retinal images. Moreover, the contrast of the blood vessels, especially the thin vessels around the fovea and in the marginal regions, is low.

Fig. 1: A retina image from DRIVE database.

 

Although low contrast and non-uniform illumination are two important issues, the most challenging problem in retina-based human identification is the natural movement of the head and eye during the capturing process [6]. These problems restrict the features that can be extracted from the retinal image and complicate the matching procedure, since the size of the extracted feature vectors can change dramatically under different conditions. Hence, most previous studies perform retinal image registration before feature extraction or matching in order to compensate for eye movement. This is usually achieved by localizing the OD or the macula, which tends to degrade identification performance. Other works handle the problem by defining relations among the extracted features and comparing them across images, but such solutions make the algorithms time-consuming.

Several retina-based human identification methods have been presented in the literature. They can be categorized in different ways, for example by methodology, by the extracted features, or by the matching and identification strategy. Here we categorize the previous studies into three main groups based on methodology.

In the first group, the features are extracted directly from the retinal images without any segmentation or registration [6, 7, 8]. Few works fall into this group, because of the limited variety of features that can be extracted this way and the high sensitivity of such algorithms to noise and low image contrast. In these works, most researchers extracted common global features from the retinal image: shape features such as Hu moments [6]; texture features such as entropy, energy, contrast, homogeneity and correlation derived from gray-level co-occurrence matrices in different directions [7]; color features such as the mean, standard deviation, skewness and kurtosis of the color histogram [7]; or structural features such as corner points [8]. To match retinal images, these algorithms used distance measures [6, 7] or a matching model [8] to determine the similarity between the query and the enrolled images in the database. In [8], for example, matching was carried out by comparing model functions describing the closeness and orientation of the corner points of each image in polar coordinates. Although feature extraction in these algorithms has low computational complexity, their matching procedures require many operations to compare all possibilities [8]. The common disadvantages of these algorithms are their sensitivity to noise, eye movement, and non-uniform or varying illumination.
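As a concrete illustration of the texture features mentioned above, the following minimal numpy sketch builds a gray-level co-occurrence matrix for one pixel offset and computes energy, contrast, homogeneity and entropy from it. The toy image, quantization level and offset are illustrative choices, not values taken from the cited works.

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    """Gray-level co-occurrence matrix for one pixel offset (dx, dy),
    normalized to a joint probability distribution."""
    h, w = img.shape
    m = np.zeros((levels, levels))
    for y in range(h - dy):
        for x in range(w - dx):
            m[img[y, x], img[y + dy, x + dx]] += 1
    return m / m.sum()

def texture_features(p):
    """Energy, contrast, homogeneity and entropy of a normalized GLCM."""
    i, j = np.indices(p.shape)
    energy = np.sum(p ** 2)
    contrast = np.sum(p * (i - j) ** 2)
    homogeneity = np.sum(p / (1.0 + np.abs(i - j)))
    entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))
    return energy, contrast, homogeneity, entropy

# Toy 4x4 image already quantized to 4 gray levels.
img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]], dtype=int)
feats = texture_features(glcm(img, dx=1, dy=0, levels=4))
```

In practice several offsets (directions) are combined into one feature vector, as in [7].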



The majority of related works belong to the second group, in which a blood vessel extraction step is applied first and features are then extracted from the vessel tree pattern [9-20].

Different features can be defined on the vessel tree, but the features extracted in this group are mostly structural: the number of pixels in the skeleton of the segmented vessels, the number and/or locations of end-points, bifurcation points and cross-over points, and their angles, diameters and distances in the vessel tree [13-16]. Although such features are fairly reliable, extracting and matching the resulting feature vectors is problematic. To create and match feature vectors, different strategies have been applied: angular and circular partitioning [10]; fractal dimension with a box-counting algorithm [11]; comparing similarity measurements along sample lines in the reference and query images after translating, rotating and scaling the query image over several scales and directions to maximize the number of matched sample-line pixels in the enrolled images [12]; or quantifying the location and measurement of the vascular network by weighted norms in function spaces [20]. Because many possibilities must be compared, accounting for rotated, translated and missing points, to find the best match between the query and enrolled images, the matching complexity of these algorithms is high even though the feature vectors are mostly small. Therefore, most of these algorithms are suitable only for verification. For example, in [13], Lajevardi et al. presented a verification algorithm based on biometric graph matching using the bifurcation points. For matching, they applied graph registration and error-tolerant graph matching to obtain the best alignment between the two graphs under translation, rotation and noise. Finally, a Support Vector Machine (SVM) classifier was used to verify the query graph.
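The end-points and bifurcation points mentioned above are commonly found by counting skeleton neighbours: a skeleton pixel with one skeleton neighbour is an end-point, and one with three or more is a bifurcation. The sketch below implements this standard neighbour-counting rule with numpy; the Y-shaped test skeleton is a synthetic example, not data from the cited works.

```python
import numpy as np

def skeleton_points(skel):
    """Classify skeleton pixels by their number of 8-connected skeleton
    neighbours: 1 neighbour -> end-point, >= 3 -> bifurcation."""
    skel = skel.astype(bool)
    padded = np.pad(skel, 1)          # zero border so rolls stay valid
    n = sum(np.roll(np.roll(padded, dy, 0), dx, 1)
            for dy in (-1, 0, 1) for dx in (-1, 0, 1)
            if (dy, dx) != (0, 0))[1:-1, 1:-1]
    ends = skel & (n == 1)
    bifs = skel & (n >= 3)
    return np.argwhere(ends), np.argwhere(bifs)

# Synthetic Y-shaped skeleton: one bifurcation, three end-points.
skel = np.zeros((7, 7), dtype=bool)
skel[0:4, 3] = True                      # stem
skel[4, 2] = skel[4, 4] = True           # two branches
skel[5, 1] = skel[5, 5] = True
ends, bifs = skeleton_points(skel)
```

Real algorithms then build feature vectors from the coordinates, angles and distances of these points, as in [13-16].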
In some studies of this group, transform-based features were also extracted from the vessel pattern, such as wavelet energy features [17] or vascular energy features computed from disjoint annuli of fixed width after a polar transform [19]. Geometric features were rarely used; one exception is [18], where Zernike moments formed the feature vectors and a K-Nearest Neighbor (KNN) classifier performed the matching for identification.

Compared with the first group, the algorithms in this group are far more robust against noise and the issues mentioned above, provided that the vessel segmentation in the first step is performed well, which in turn demands good contrast enhancement. The structural features in these studies are tangibly reliable because they relate directly to the vascular characteristics of the retinal images. The common drawback of this group is that performance depends on the strength of the applied vessel extraction method, since most of the desirable features, such as end-points, must be extracted from the thin vessels. Although these works reported higher accuracy than the first group, they remained time-consuming and sensitive to how well the employed method extracts thin vessels.

In the third group of studies, OD or macula localization is a necessary step before feature extraction. The features can be extracted directly from the retinal images, as in the first group, or from the vessel tree after segmentation, as in the second group; all of the features mentioned above can therefore be used in this group as well. OD localization has been performed by template matching [21], by Haar wavelets and a snakes model [22], by a fuzzy circular Hough transform after eliminating the vessels around the region of interest [23], or by examining intensity values in circular regions to find the position of the OD [24]. This step can serve different purposes, most commonly registration. For registration, macula localization is also performed, because connecting the center of the OD with the center of the fovea yields all of the translation, rotation and scaling factors. However, since finding the exact location of the macula is more difficult than finding the OD, most works use the mass center of the image instead, which makes the algorithm robust only against rotation. In [22, 25, 26], the OD location was used to compensate for rotation by aligning the line between the mass center of the image and the center of the OD with the horizontal. For feature extraction, different methods have been introduced, such as applying the Fourier-Mellin transform to the retinal images together with computing complex moment magnitudes [22], or partitioning the Fourier spectrum and computing the energy of each partition to obtain the feature vectors [25, 26].
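Intensity-based OD localization, in the spirit of the circular-region search cited as [24], can be sketched very simply: since the OD is the brightest compact region of the image, search for the window with the highest mean intensity. The square window and integral-image trick below are simplifying assumptions for illustration; the cited work uses circular regions.

```python
import numpy as np

def locate_od(gray, win=5):
    """Crude OD localization: return the centre of the square window
    with the highest sum (equivalently mean) of intensities, found
    in O(1) per window via an integral image."""
    h, w = gray.shape
    ii = np.zeros((h + 1, w + 1))
    ii[1:, 1:] = np.cumsum(np.cumsum(gray, 0), 1)
    best, pos = -np.inf, (0, 0)
    for y in range(h - win + 1):
        for x in range(w - win + 1):
            s = ii[y + win, x + win] - ii[y, x + win] - ii[y + win, x] + ii[y, x]
            if s > best:
                best, pos = s, (y + win // 2, x + win // 2)
    return pos

# Synthetic image: a bright 5x5 "disc" on a dark background.
gray = np.zeros((20, 20))
gray[10:15, 5:10] = 1.0
od = locate_od(gray, win=5)
```

Vessel removal and circular masking, as in [23, 24], would make such a search far more robust on real retinal images.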

In some studies, OD localization serves both registration and feature extraction [24, 27, 28]. The idea of extracting features from a region around the OD was first introduced in [21]. After localizing the OD, the authors of [21] extracted the blood vessels and divided the vessels lying in a ring-shaped area around the OD into three sets according to their thickness. Feature vectors were formed from the orientation and position of the blood vessels in this region, and identification was performed by thresholding a weighted sum of modified similarity measurements associated with the three vessel scales. In [27], the same method was followed for feature extraction, except that the Radon transform was applied to the same ROI to form the feature vectors. In [24], a combination of features, including bifurcation points, the mean intensity spread of the OD and a circular segmentation around the OD, was extracted for identification. In [28], a logarithmic spiral sampling grid was applied that traced the vessel tree of the retina around the OD.

Although these algorithms overcome the eye-movement issue to some extent, the techniques used to find the OD or macula positions introduce errors of their own. The exact locations of these points are therefore sometimes unobtainable, which complicates the matching process.

 

Because of eye movement and varying illumination, differently sized feature vectors are unavoidable, and this makes the matching process more complicated and time-consuming even though the extracted feature vectors are mostly small. The security demands of the target application impose a large amount of computation, since all possibilities must be considered in the matching process when some points or features are missing. Previous works offered solutions such as registering the images first to obtain the rotation and translation factors, or comparing the query and enrolled images under different motion parameters to find the best correspondences, but both solutions impose a heavy computational burden.

To overcome these problems and limitations, this paper proposes a novel retina-based human identification algorithm built on a new definition of geometric shape features within a hierarchical matching structure. The vessel pattern is extracted from the retinal image, and a set of major regions surrounded by thick vessels is used to define the geometric shape features. For matching, an efficient iterative hierarchical structure matches the major regions of the query and enrolled images by eliminating candidates from the search space step by step, moving from simple to complex features. To keep the algorithm fast and accurate, the feature extraction, matching structure and decision-making scenario interact during identification: additional regions and features are investigated and extracted only when the matching structure requires them.
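The coarse-to-fine elimination idea can be sketched in a few lines: each stage compares one progressively more expensive feature and discards enrolled candidates whose distance exceeds that stage's threshold, stopping as soon as at most one candidate survives. The feature names, distance measure and thresholds below are illustrative placeholders, not the paper's actual features.

```python
def hierarchical_match(query_feats, enrolled, stages, thresholds):
    """Eliminate enrolled candidates stage by stage, from simple
    (cheap) features to complex (costly) ones."""
    candidates = list(enrolled.keys())
    for stage, thr in zip(stages, thresholds):
        candidates = [cid for cid in candidates
                      if abs(query_feats[stage] - enrolled[cid][stage]) <= thr]
        if len(candidates) <= 1:
            break  # identified or rejected; skip costlier features
    return candidates

# Hypothetical enrolled database with two scalar features per subject.
enrolled = {'A': {'area': 10, 'perimeter': 12},
            'B': {'area': 30, 'perimeter': 40},
            'C': {'area': 11, 'perimeter': 30}}
query = {'area': 10, 'perimeter': 13}
result = hierarchical_match(query, enrolled,
                            stages=['area', 'perimeter'],
                            thresholds=[5, 3])
```

The early exit is what makes the interaction with feature extraction pay off: expensive features need only be computed for the candidates that survive the cheap stages.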

The rest of the paper is organized as follows. Section 2 describes the principal parts of the proposed retinal identification algorithm, consisting of ROI definition and extraction, the hierarchical matching structure, and the decision-making scenario. Section 3 presents and discusses the experimental results along with the parameter settings. Finally, Section 4 draws our conclusions.

 

PROPOSED RETINAL IDENTIFICATION ALGORITHM

Fig. 2 shows the block diagram of the proposed retinal identification algorithm. First, the blood vessels are segmented so that features can be extracted from the retinal images. To overcome the issues caused by eye movement, the regions surrounded by the blood vessels in the segmented image are taken as Regions Of Interest (ROIs); they are formed and refined from the segmented vessels using the watershed algorithm. Next, geometric shape features based on boundary and region properties are extracted from the defined ROIs. In the matching step, a fast modified hierarchical matching structure, interacting with both the feature extraction and decision-making steps, matches the query ROIs against all enrolled ROIs in the database. Identification is then carried out in a decision-making scenario based on the matched ROIs. The details of the proposed identification algorithm are described below.
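The notion of "regions surrounded by vessels" can be illustrated with a simple flood fill rather than the watershed algorithm the paper actually uses: label the 4-connected non-vessel components and keep only those that do not touch the image border. This is a conceptual stand-in, not the paper's method.

```python
import numpy as np
from collections import deque

def enclosed_regions(vessels, min_area=1):
    """Return 4-connected non-vessel components that do not touch the
    image border, i.e. regions fully surrounded by vessel pixels."""
    h, w = vessels.shape
    seen = vessels.astype(bool).copy()
    regions = []
    for sy in range(h):
        for sx in range(w):
            if seen[sy, sx]:
                continue
            comp, touches_border, q = [], False, deque([(sy, sx)])
            seen[sy, sx] = True
            while q:                      # BFS over one component
                y, x = q.popleft()
                comp.append((y, x))
                if y in (0, h - 1) or x in (0, w - 1):
                    touches_border = True
                for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                    if 0 <= ny < h and 0 <= nx < w and not seen[ny, nx]:
                        seen[ny, nx] = True
                        q.append((ny, nx))
            if not touches_border and len(comp) >= min_area:
                regions.append(comp)
    return regions

# Toy binary "vessel" ring enclosing a 2x2 region.
vessels = np.zeros((7, 7), dtype=int)
vessels[1, 1:5] = vessels[4, 1:5] = 1
vessels[2:4, 1] = vessels[2:4, 4] = 1
rois = enclosed_regions(vessels)
```

Each returned pixel list can then feed boundary- and region-based shape descriptors such as area and perimeter.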

 


 

 

Fig. 2: Block diagram of the proposed retinal identification algorithm.

 

Vessel Extraction

To extract the blood vessels, we employ a fast, efficient algorithm introduced in our previous work [29], which partly eliminates destructive factors such as varying illumination conditions and improves the contrast of the retinal images. Then, a fast and accurate algorithm presented by Saleh et al. [30] is applied to obtain a binary image of the blood vessels.

 

Preprocessing

As previously mentioned, varying illumination conditions together with the physical traits of the retina cause non-uniform reflection during the capturing process. Moreover, the poor contrast of the captured images calls for a preprocessing step, especially when vessel segmentation is required. Here we apply an efficient preprocessing algorithm presented in [29], which uses the red and green sub-bands of the retinal image to obtain a suitable input image for segmentation.

In imaging, an image can be modeled by its illumination and reflection components as

f(x, y) = i(x, y) · r(x, y) (1)

where i(x, y) and r(x, y) are the illumination and reflection terms, respectively. In retinal images, the green band is usually used because of its better contrast, while the red channel gives little information about the vessels and the blue channel is too noisy. However, the red channel carries valuable information about the illumination pattern of the capturing process. Accordingly, the averaged red band and the green channel are taken as the illumination i(x, y) and the input image f(x, y), respectively. The reflection component r(x, y) is then taken as the enhanced image, in which the blood vessels are more recognizable and the contrast, particularly around the OD, the fovea and the marginal regions, is improved. The enhanced image is obtained from Eq. (1) as

(2)

where and are the green and averaged red channels of retinal image, respectively.
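Eq. (2) can be implemented directly: divide the green channel by a local average of the red channel, which stands in for the illumination pattern. The averaging window size `ksize` and the small `eps` guarding against division by zero are assumed parameters not specified in the text above; the box filter is computed with an integral image for speed.

```python
import numpy as np

def enhance(rgb, ksize=15, eps=1e-6):
    """Reflection-component enhancement of Eq. (2): green channel
    divided by a box-averaged red channel (ksize must be odd)."""
    red = rgb[..., 0].astype(float)
    green = rgb[..., 1].astype(float)
    h, w = red.shape
    pad = ksize // 2
    rp = np.pad(red, pad, mode='edge')
    # Integral image of the padded red channel.
    ii = np.zeros((h + 2 * pad + 1, w + 2 * pad + 1))
    ii[1:, 1:] = np.cumsum(np.cumsum(rp, 0), 1)
    avg = (ii[ksize:, ksize:] - ii[:-ksize, ksize:]
           - ii[ksize:, :-ksize] + ii[:-ksize, :-ksize]) / (ksize ** 2)
    return green / (avg + eps)

# Sanity check on a uniform image: red = 2, green = 1 -> r ~ 0.5.
rgb = np.zeros((20, 20, 3))
rgb[..., 0] = 2.0
rgb[..., 1] = 1.0
enhanced = enhance(rgb)
```

On real retinal images the averaged red channel varies smoothly, so the division flattens the illumination gradient while preserving the vessel detail carried by the green band.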

 




<== previous page | next page ==>
The compound nominal predicate | Regions of Interest Definition and Extraction
doclecture.net - lectures - 2014-2024 year. Copyright infringement or personal data (0.009 sec.)