Discussion and Comparison with Other Works

Since most of the works presented in the literature do not use a single publicly available database with the same number of individuals, capturing conditions, maximum rotation degrees, and translation values, a fully fair comparison among related works is not possible. Nevertheless, we compared our work with previous studies that used the most similar databases, considering the database conditions as well as the processing time and performance of each algorithm.

In some works, the information required for the matching task was extracted directly from the retinal images. For example, Dehghani et al. [8] extracted corner points directly from the retinal images and then compared model functions based on the closeness and orientation of the extracted corner points for identification. In that work, a combination of rotated retinal images from the DRIVE and STARE databases was used for evaluation, and a processing time of 5.3 s with an accuracy rate of 100% was reported. This shows how complicated and time consuming the matching process in an identification application can be, despite the small size of the features extracted in [8].
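For illustration only, the sketch below shows the general flavour of such corner-based matching: corner points are detected directly on the grayscale retinal image and two point sets are compared by proximity. The detector, the distance measure, and all thresholds are assumptions for this sketch, not the exact model functions of [8].

```python
# Illustrative sketch of corner-based retinal matching in the spirit of [8].
# Detector choice, distance measure and thresholds are assumptions, not the paper's model functions.
import cv2
import numpy as np

def extract_corners(gray, max_corners=200):
    """Detect corner points directly on the (grayscale, 8-bit) retinal image."""
    pts = cv2.goodFeaturesToTrack(gray, maxCorners=max_corners,
                                  qualityLevel=0.01, minDistance=5)
    return pts.reshape(-1, 2) if pts is not None else np.empty((0, 2))

def match_score(query_pts, enrolled_pts, dist_thresh=10.0):
    """Fraction of query corners with a nearby enrolled corner
    (a crude stand-in for closeness/orientation model functions)."""
    if len(query_pts) == 0 or len(enrolled_pts) == 0:
        return 0.0
    matched = sum(
        1 for p in query_pts
        if np.linalg.norm(enrolled_pts - p, axis=1).min() < dist_thresh
    )
    return matched / len(query_pts)

# Hypothetical usage:
# q = cv2.imread("query.png", cv2.IMREAD_GRAYSCALE)
# e = cv2.imread("enrolled.png", cv2.IMREAD_GRAYSCALE)
# print(match_score(extract_corners(q), extract_corners(e)))
```

Because such raw point sets are not rotation-invariant by themselves, an identification method of this kind has to handle rotated queries explicitly, which is one reason the matching stage can become costly.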

In some studies, vessel segmentation was applied to provide a more stable basis for feature extraction, which imposes additional computational time on the algorithm. For example, Barkhoda et al. [10] segmented the retinal images and then used radial and angular partitioning to form feature vectors from the number of segmented pixels falling into these partitions. They obtained an accuracy rate of 99.75% by testing their algorithm on the DRIVE database, with each image rotated 11 times. In another work, Kose et al. [12] extracted the vessel tree and then rotated, translated, and rescaled the segmented query image to evaluate several factors. They tested the algorithm on a database containing about four hundred retinal images and achieved an accuracy rate of over 95%. In [13], a verification algorithm was presented using biometric graph matching based on bifurcation points; a rotation and translation toleration technique was employed to compare these graphs, and an accuracy rate of 100% was achieved on the retinal images of the VARIA database. In [15], bifurcation points were extracted as features, and the evaluation reported an accuracy rate of 100% with an average processing time of 15 s for each image of the ARIA database. Ortega et al. [16] presented a verification algorithm based on bifurcation and crossover points extracted from the segmented retinal images; they achieved an accuracy rate of 100% with an average verification time of 0.155 s per query image. In [11], fractal dimension and box counting algorithms were employed to recognize individuals; this algorithm was evaluated on the DRIVE database and, in the best case, an accuracy rate of 98.33% with an average processing time of 3.1 s was reported. Aich et al. [18] used Zernike moments as feature vectors and a KNN classifier to identify the query image. They applied their algorithm to the DRIVE and STARE databases and achieved accuracy rates of 100% and 98.64%, respectively.
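To make the partition-based features mentioned for [10] concrete, the following minimal sketch counts segmented vessel pixels falling into radial rings and angular sectors around the image centre. The numbers of rings and sectors, and the assumption that a binary vessel mask is already available, are illustrative choices, not the settings of [10].

```python
# Illustrative sketch of radial/angular partitioning of a segmented vessel image,
# in the spirit of [10]. Ring/sector counts are arbitrary assumptions.
import numpy as np

def radial_angular_features(vessel_mask, n_rings=6, n_sectors=12):
    """vessel_mask: binary 2-D array (True/1 = vessel pixel).
    Returns a feature vector of vessel-pixel counts per ring and per sector."""
    h, w = vessel_mask.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    ys, xs = np.nonzero(vessel_mask)
    r = np.hypot(ys - cy, xs - cx)
    theta = np.arctan2(ys - cy, xs - cx)                  # angle in [-pi, pi]
    r_max = np.hypot(cy, cx) + 1e-9
    ring_idx = np.minimum((r / r_max * n_rings).astype(int), n_rings - 1)
    sector_idx = ((theta + np.pi) / (2 * np.pi) * n_sectors).astype(int) % n_sectors
    ring_counts = np.bincount(ring_idx, minlength=n_rings)
    sector_counts = np.bincount(sector_idx, minlength=n_sectors)
    return np.concatenate([ring_counts, sector_counts])
```

Note that the sector counts shift under in-plane rotation, which is consistent with the fact that [10] had to evaluate each image under 11 rotations rather than relying on the features alone for rotation tolerance.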



In some studies, OD (optic disc) localization was also applied to extract distinguishing features. For example, in [21], Farzin et al. localized the OD, segmented the retinal images, and then performed a polar transform on the vessels located around the OD. Finally, they formed feature vectors from the orientation, position, and thickness of these vessels. This algorithm was tested on a database consisting of 60 images from the DRIVE and STARE databases, with each image rotated five times to obtain 300 images, and achieved an accuracy rate of 99%. A summary comparison of our proposed identification algorithm with the other works presented in the literature is given in Table 5.
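As a rough illustration of the OD-centred representation used in [21], the sketch below resamples a binary vessel mask onto a (radius, angle) grid centred on a given OD position. The OD localization itself, the radius range, and the grid resolution are assumed inputs, not the exact procedure of [21].

```python
# Illustrative sketch of a polar transform of the vessel map around the optic disc (OD),
# loosely following the idea in [21]. OD centre and radius range are assumed to be given.
import numpy as np

def polar_transform(vessel_mask, od_center, r_min, r_max, n_r=64, n_theta=360):
    """Resample a binary vessel mask onto a (radius, angle) grid centred on the OD."""
    cy, cx = od_center
    radii = np.linspace(r_min, r_max, n_r)
    angles = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    rr, tt = np.meshgrid(radii, angles, indexing="ij")
    ys = np.clip(np.round(cy + rr * np.sin(tt)).astype(int), 0, vessel_mask.shape[0] - 1)
    xs = np.clip(np.round(cx + rr * np.cos(tt)).astype(int), 0, vessel_mask.shape[1] - 1)
    return vessel_mask[ys, xs]   # shape (n_r, n_theta)
```

In such a representation, an in-plane rotation of the eye becomes a circular shift along the angle axis, which is one common way polar transforms are used to obtain rotation tolerance.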

Table 5: A comparison between our proposed identification algorithm and previous works in the literature.

Method | Mode | Rotation | Translation | Scaling | Illumination condition | Number of images | Number of subjects | Time (s) | Accuracy rate (%)
Dehghani et al. [8] | Identification | ✓ | ✗ | ✗ | ✗ | - | - | 5.3 | 100
Barkhoda et al. [10] | Identification | ✓ | ✗ | ✗ | ✗ | - | - | - | 99.75
Farzin et al. [21] | Identification | ✓ | ✗ | ✗ | ✗ | - | - | - | 99
Ortega et al. [16] | Verification | ✓ | ✓ | ✓ | ✓ | - | - | 0.155 | 100
Xu et al. [9] | Identification | - | - | - | - | - | - | 277.8 | 98.5
Sukumaran et al. [11] | Identification | - | - | - | - | - | - | 3.1 | 98.33
Islam et al. [37] | Identification | - | - | - | - | - | - | - | -
Jafariani et al. [22] | Identification | ✓ | ✗ | ✓ | - | - | - | - | -
Lajevardi et al. [13] | Verification | ✓ | ✓ | ✓ | ✓ | - | - | - | 100
Bevilacqua et al. [14] | Identification | - | - | - | - | - | - | - | -
Betaouaf et al. [15] | Identification | ✓ | ✓ | ✓ | ✓ | - | - | 15 | 100
Aich et al. [18] | Identification | ✓ | ✗ | ✗ | ✗ | - | - | - | 100 (DRIVE)
Aich et al. [18] | Identification | ✓ | ✗ | ✗ | ✗ | - | - | - | 98.64 (STARE)
Proposed algorithm | Identification | ✓ | ✓ | ✗ | ✗ | - | - | 3.216 | -
Proposed algorithm | Identification | ✓ | ✓ | ✗ | ✗ | - | - | 3.225 | -

 

CONCLUSIONS

In this paper, a rotation-invariant human identification algorithm was presented, based on a novel definition of geometrical shape features of the retinal image and a new hierarchical matching structure. In this algorithm, an efficient preprocessing step was first performed, which improved the contrast and remarkably reduced undesired non-uniform illumination. The blood vessels were then extracted using a threshold-based method so that geometrical shape features, such as region-based and boundary-based features, could be defined on the regions surrounded by thick vessels. To match the SRs, we introduced a modified hierarchical matching structure. Using this iterative structure, and by bringing in each feature at its proper stage, we achieved accurate and trustworthy identification while keeping the computational time low in comparison with other related studies. Identification took place in the decision-making scenario when at least two matched SRs belonged to the same individual and satisfied the predefined necessary condition. The experimental results demonstrate the efficiency and effectiveness of our proposed identification algorithm in comparison with other algorithms presented in the literature.
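As a non-authoritative illustration of the kind of pipeline summarised above (threshold-based vessel extraction followed by shape features of the regions bounded by thick vessels), one could sketch the feature-extraction stage as follows. The Otsu threshold, the scikit-image calls, the size filter, and the specific region properties are assumptions made for the sketch, not the exact design of the proposed method.

```python
# Assumption-laden sketch of the summarised pipeline: threshold-based vessel extraction,
# then shape features of the regions bounded by thick vessels. Threshold rule, library
# calls and chosen properties are illustrative, not the paper's exact features.
import numpy as np
from skimage import filters, measure, morphology

def region_shape_features(green_channel):
    """green_channel: 2-D float array (contrast-enhanced retinal image, green channel)."""
    thresh = filters.threshold_otsu(green_channel)          # stand-in threshold rule
    vessels = green_channel < thresh                         # vessels are darker than background
    thick = morphology.binary_opening(vessels, morphology.disk(2))   # keep thick vessels only
    regions = measure.label(~thick)      # connected regions bounded by thick vessels
                                         # (includes the outer background, which a real
                                         # pipeline would mask out)
    feats = []
    for r in measure.regionprops(regions):
        if r.area < 50:                                      # ignore tiny regions (assumption)
            continue
        feats.append([r.area, r.perimeter, r.eccentricity, r.solidity])
    return np.asarray(feats)
```

Per-region descriptors of this kind can then be fed into a hierarchical, feature-by-feature matching stage such as the one described above, with the decision rule requiring at least two matched SRs from the same individual.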

 

