Dynamic iris biometry: a technique for enhanced identification
© Pierscionek et al; licensee BioMed Central Ltd. 2010
Received: 11 January 2010
Accepted: 1 July 2010
Published: 1 July 2010
The use of the iris as a unique identifier is predicated on the assumption that the iris image does not alter. This does not consider the fact that the iris changes in response to certain external factors, including medication, disease and surgery, as well as undergoing longer-term ageing changes. It is also part of a dynamic optical system that alters with light level and focussing distance. A means of distinguishing the features that do not alter over time from those that do is needed. This paper applies iris recognition algorithms to a newly acquired database of 186 iris images from four subjects. These images have greater magnification and detail than iris images in existing databases. Iris segmentation methods are tested on the database. A new technique that enhances segmentation is presented and compared to two existing methods. These are also applied to test the effects of pupil dilation on the identification process.
Segmentation results from all the images showed that the proposed algorithm accurately detected pupil boundaries in 96.2% of the images, an increase of 88.7% over the most commonly used algorithm. For the images collected, the proposed technique also showed significant improvement in detection of the limbal boundary compared to the detection rates of existing methods. With regard to boundary displacement errors, only slight errors were found with the proposed technique, compared to extreme errors made when existing techniques were applied. As the pupil becomes more dilated, the success of identification depends increasingly on the decision criterion used.
The enhanced segmentation technique described in this paper performs with greater accuracy than existing methods for the higher quality images collected in this study. Implementation of the proposed segmentation enhancement significantly improves pupil boundary detection and therefore overall iris segmentation. Pupil dilation is an important aspect of iris identification; with increasing dilation, there is a greater risk of identification failure. Choice of decision criterion for identification should be carefully reviewed. It needs to be recognised that differences in the quality of images in different databases may result in variations in the performance of iris recognition algorithms.
The function of the iris is to regulate the amount of light that enters the eye and reaches the retina. The regulation of pupil size is not only for controlling light levels, but is a response to optical, neurological and emotional factors mediated by the autonomic nervous system. Clinically, unless there is a neurological impairment or a neoplasm on the iris, this part of the eye receives less attention than other components. Iris conditions are relatively uncommon when compared to the range of anomalies and diseases that are found in the eye as principally ocular or as secondary manifestations of a systemic illness. However, beyond the clinical realm, the iris is increasingly becoming recognised as a tissue that can act as a reliable biometric for purposes of identification.
A biometric is any physical or behavioural characteristic that can be used to uniquely identify an individual. The suitability of a biometric is measured by the number of degrees-of-freedom or independent dimensions of variation. The iris contains approximately 266 degrees-of-freedom, the largest among facial features. Uniqueness of the iris arises from its complex pattern that may contain many distinct features including nerve rings, fibre thinning, pigment spots and crypts. Each iris may also be classified according to texture (fine, fine/medium, medium/coarse, coarse) and colour (blue/grey, amber, light brown, brown, dark brown).
The concept of the iris as a biometric means of identification was first proposed and patented by Flom and Safir. Daugman [4, 5] subsequently developed this system and his algorithm remains in use within many commercial iris recognition systems.
The importance of the iris as a unique identifier assumes that its appearance is stable throughout life and all biometric systems developed to date are based on this assumption. This does not take into account physiological changes to the iris, notably with age, as well as alterations to features that may occur in response to external factors such as medication, disease and/or surgery. The dynamics of the system also need to be considered as the iris expands and contracts with varying light levels and focussing distances.
Changes occur over varying time periods and, depending on their extent, may render iris recognition a less reliable method of identification than first proposed. What is required is a means of distinguishing the features in the iris that do alter over time from those that are immutable.
A reliable method for identifying iris features and distinguishing between those that alter and those that do not requires a data set with good quality images and development of algorithms to segment the iris, to extract pertinent features and to accurately match images of the same iris. Methods of segmentation and feature extraction vary in existing iris recognition algorithms [1, 7]. This study is an extension of previous, preliminary work. It considers the use of existing methods by applying them to an image data set with greater resolution of the iris than in previous databases and proposes an enhanced means of iris segmentation that would improve iris recognition and ultimately could help to better distinguish between mutable and immutable features. The effect of pupil dilation on iris recognition is also investigated.
Images were captured using a Takagi clinical biomicroscope (slit lamp), model number SM-70 at 16 × magnification. Image size was 571 × 767 pixels with 96 × 96 dpi. The biomicroscope was attached to a desktop computer and Anterior Retinal Capture (ARC) specialist software was used to acquire, view and store images. Each subject focussed on a fixed target positioned on the slit lamp to maintain a steady primary gaze position each time images were captured and this was verified by stability in the position of Purkinje image I. Room lights were turned off to minimise spurious illumination and reflections. Slit lamp illumination was set to its lowest level so as to avoid discomfort to the subject and full constriction of their pupil. Slit beam angle was set at 45° and beam aperture was set at a maximum.
In total, 186 iris images from right eyes of four Caucasian adults aged between 23 and 64 years were captured: 19 images from Subject A were captured over 13 weeks, 98 from Subject B over 43 weeks, 40 from Subject C over 24 weeks and 29 images from Subject D over 16 weeks. Images were captured approximately 1-3 times per week for each subject. Tropicamide (0.5%), a standard clinical means of dilating the pupil that has a parasympatholytic action, was used on three separate occasions. This was to investigate whether increasing pupil size lowers iris image identification success rates as a change in pupil size alters the proportion of iris tissue that is visible. Tropicamide (0.5%) was instilled in the right eye of Subject B and 8 of Subject B's 98 images were captured when the pupil was dilated. Images taken for varying degrees of dilation were compared with images of the iris with a non-dilated pupil of the same subject. Ethical approval for the study was granted by the University of Ulster Biomedical Sciences Ethics Filter Committee.
As part of Masek's algorithm, following inner and outer iris boundary detection, edge detection and thresholding algorithms were implemented to identify image regions containing occlusions from eyelids, eyelashes and specular reflections. These regions are discarded from further use in the latter stages of the algorithm.
Pertinent features were extracted from the normalised iris pattern using log-Gabor filters to decompose the iris image into complex-valued phase coefficients, with amplitude information discarded [7]. These are quantised to obtain two binary values for each coefficient, depending on the quadrant in which the value lies in the complex plane. An example of an encoded iris is shown in Figure 2(c).
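The quadrant quantisation step can be sketched as follows. This is a simplified illustration only: the log-Gabor filtering itself is omitted, and a handful of arbitrary complex numbers stand in for real filter responses (the treatment of values lying exactly on an axis is an implementation choice).

```python
def quantise_phase(coeff):
    """Map a complex filter coefficient to two bits according to the
    quadrant of the complex plane in which its phase angle lies."""
    return (1 if coeff.real >= 0 else 0,
            1 if coeff.imag >= 0 else 0)

# Hypothetical filter responses, one from each quadrant
coeffs = [1 + 2j, -3 + 0.5j, -1 - 1j, 2 - 4j]
iris_code = [bit for c in coeffs for bit in quantise_phase(c)]
print(iris_code)  # [1, 1, 0, 1, 0, 0, 1, 0] -- two bits per coefficient
```

Because only phase is retained, the resulting code is insensitive to overall contrast and illumination, which vary between captures.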
The final stage requires comparison of different iris images to determine whether they belong to the same person. The test of statistical independence used involves calculating the Hamming Distance (HD) between two iris codes [1, 7]. HD is a measure of dissimilarity between two irides and is based on a decision criterion derived from the bimodal distribution of inter-class (different subjects) and intra-class (same subject) variation of HDs calculated for many irides. The separation between these curves gives a range of values from which the decision criterion is selected. Values within this range were tested on iris images to determine how this affected identification.
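As a sketch, the fractional HD between two binary codes, restricted to bit positions that are unoccluded in both images, can be computed as follows (the function and mask layout are illustrative, not the exact implementation used in the study):

```python
def hamming_distance(code_a, code_b, mask_a, mask_b):
    """Fraction of disagreeing bits between two iris codes, counting
    only positions marked valid (unoccluded) in both masks."""
    valid = [i for i in range(len(code_a)) if mask_a[i] and mask_b[i]]
    if not valid:
        raise ValueError("no overlapping valid bits to compare")
    disagreements = sum(code_a[i] != code_b[i] for i in valid)
    return disagreements / len(valid)

a = [0, 1, 1, 0, 1, 0, 0, 1]
b = [0, 1, 0, 0, 1, 1, 0, 1]
m = [1, 1, 1, 1, 1, 1, 0, 0]  # last two bits occluded (e.g. by an eyelid)
print(hamming_distance(a, b, m, m))  # 2 disagreements / 6 valid bits = 0.333...
```

Two codes of the same eye yield a small HD (ideally 0), while codes of different eyes disagree in roughly half their bits, giving HDs near 0.5; the decision criterion sits between these two distributions.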
Analysis was conducted on a subsample of 91 images, 2 of Subject A, 75 of Subject B, 9 of Subject C and 5 of Subject D with pupil dilation occurring in 7 of Subject B's images. These images were the ones accurately segmented by the proposed segmentation technique with occlusions removed successfully. The HD between all possible pairs of images was calculated giving 4186 pair-wise comparisons.
This method uses integrodifferential operators [1, 4]. These are circular edge detectors used to locate the limbal and pupil boundaries of the iris. The image is smoothed with a Gaussian filter and the integral of the smoothed radial image derivative is computed along sequences of concentric circles centred at each pixel in the image. The maxima of these contour normal derivatives correspond to the pupil and limbal boundaries and are found using an exhaustive search across the image domain over all possible circles. Due to non-concentricity of the pupil and iris, separate searches were performed to detect the pupil and limbal boundaries, starting with the outer boundary. The primary search for the limbal boundary sets the smoothing function for a coarse scale of analysis due to the abrupt intensity transition between the iris and sclera. This first search is exhaustive across the image, while the second search looks only within the detected iris region to find the pupil boundary (the pupil will always be contained within the iris). The smoothing function is set to a finer scale of analysis in the pupil boundary search due to the fainter intensity transition between pupil and iris than between iris and sclera in Daugman's image dataset.
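A discrete version of this search can be sketched in a few lines. This is a bare-bones illustration: the Gaussian smoothing and the coarse-to-fine, limbal-then-pupil strategy are omitted, circle integrals are approximated by sampling with nearest-pixel rounding, and a synthetic dark disc stands in for the pupil.

```python
import math

def circle_mean(img, cx, cy, r, n=64):
    """Mean intensity sampled along a circle of radius r centred at (cx, cy)."""
    total, count = 0.0, 0
    for k in range(n):
        t = 2 * math.pi * k / n
        x = int(round(cx + r * math.cos(t)))
        y = int(round(cy + r * math.sin(t)))
        if 0 <= y < len(img) and 0 <= x < len(img[0]):
            total += img[y][x]
            count += 1
    return total / count if count else 0.0

def find_boundary(img, centres, radii):
    """Return (cx, cy, r) maximising the radial derivative of the circular
    intensity integral -- an unsmoothed integrodifferential search."""
    best, best_score = None, -1.0
    for cx, cy in centres:
        means = [circle_mean(img, cx, cy, r) for r in radii]
        for i in range(1, len(means)):
            score = abs(means[i] - means[i - 1])
            if score > best_score:
                best_score, best = score, (cx, cy, radii[i])
    return best

# Synthetic 40x40 image: dark "pupil" disc of radius 8 at (20, 20) on a bright field
img = [[0 if (x - 20) ** 2 + (y - 20) ** 2 <= 64 else 200
        for x in range(40)] for y in range(40)]
centres = [(x, y) for x in range(18, 23) for y in range(18, 23)]
best = find_boundary(img, centres, list(range(4, 15)))
print(best)  # should land near centre (20, 20) with a radius close to 8
```

The exhaustive cost is visible here: every candidate centre is tested against every radius, which is what the enhancement described below seeks to avoid.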
The proposed technique is an enhancement of Daugman's method. (This method was chosen because, unlike Masek's algorithm, it does not require image-dependent parameters.) Daugman's method requires an exhaustive search of every pixel across the entire image domain. The proposed enhancement restricts the search to a limited region by finding an initial estimate of the pupil centre via thresholding of the greyscale image. Morphological operators are applied to the thresholded image and the pupil is differentiated from all other image objects as the main central object. It is detected by 'blob' analysis, in which a group of pixels organised into a structure, commonly called a blob, is analysed to obtain its characteristics: the centre of gravity and radius (shown in Figure 3(b)). These values provide an estimate of pupil location within the image and are used to define a 10 × 10 search window within which the precise pupil location is detected. Daugman's integrodifferential operators are applied in reverse order: the pupil boundary is detected first, followed by the limbal boundary. When the pupil centre is found, a second search window is defined and Daugman's integrodifferential operator is applied to detect the limbal boundary (Figure 3(b)).
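A minimal version of the initial pupil estimate can be sketched as follows. It is deliberately simplified: the morphological clean-up and multi-blob handling are omitted, and all dark pixels are treated as a single blob, which is reasonable only for images as controlled as those in this study.

```python
import math

def estimate_pupil(img, threshold=50):
    """Threshold the greyscale image and return the centre of gravity of
    the dark pixels plus the radius of a circle of equal area -- a
    simplified 'blob' analysis for seeding the boundary search."""
    dark = [(x, y) for y, row in enumerate(img)
                   for x, v in enumerate(row) if v < threshold]
    if not dark:
        return None
    cx = sum(x for x, _ in dark) / len(dark)
    cy = sum(y for _, y in dark) / len(dark)
    radius = math.sqrt(len(dark) / math.pi)  # equivalent-area radius
    return cx, cy, radius

# Synthetic image: dark disc of radius 8 centred at (20, 20)
img = [[0 if (x - 20) ** 2 + (y - 20) ** 2 <= 64 else 200
        for x in range(40)] for y in range(40)]
cx, cy, r = estimate_pupil(img)
print(round(cx), round(cy), round(r, 1))  # centre (20, 20), radius near 8
```

The estimate need not be exact; it only has to place the true pupil centre inside the small search window within which the integrodifferential operator then runs, replacing a whole-image search with a local one.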
Table: Success rates for pupil and iris boundary detection, for each method tested.
Using the proposed enhancement, accuracy in pupil detection with Daugman's method increased to 96.2%, an increase of 88.7% over Daugman's original method. The enhancement also provided a 29 percentage-point improvement over Masek's pupil detection technique, from 67.2% to 96.2%.
Pupil dilation and matching criteria
Images featuring pupil dilation from Subject B were compared with images from the same person without dilation. This consisted of 476 comparisons (68 non-dilated images compared with each of the 7 dilated images). Of the 7 dilated images, 3 were slightly dilated (4.6 mm), 3 were moderately dilated (6.3 mm) and 1 was highly dilated (7.4 mm).
Table: Dilated pupil identification success (%) by decision criterion HD, for pupils dilated to 4.6 mm, 6.3 mm and 7.4 mm.
For an iris recognition system to identify iris images accurately, precision is required at every stage of processing. This study has considered segmentation and matching as applied to dilated and undilated pupils. Specific segmentation methods that function successfully on certain data sets have been proposed, but as yet there is no generic technique that works well on all datasets containing images taken under varying conditions. Images collected in previous studies, such as those in the CASIA and Bath databases, are of lower magnification and resolution than those used in this study, and so contain less feature detail. They also contain a significant portion of surrounding facial detail and have fewer iris pixels. The images presented in this study have been collected at higher magnification using specialised clinical imaging equipment.
Results from this study show that Daugman's method has the worst performance in detecting both pupil and iris boundaries for the image dataset collected in this study.
Daugman [1, 4] introduced integrodifferential operators that behave as circular edge detectors to detect the limbal and pupil boundaries of the iris by computing the maxima in the contour integral of a smoothed radial image derivative along concentric circles. A major factor in the poor performance of Daugman's technique is that if the limbal boundary is not accurately located initially, there are difficulties with pupil detection. Masek's technique was found to be relatively more effective in pupil detection but could be improved for limbal boundary detection. The proposed method of enhancement significantly improved detection of both pupil and limbal boundaries. It should be noted that the images in this database are of a much higher quality than those used in previous studies. The performance of the tested algorithms may depend on the quality of the images. Hence the lower performance found for the existing algorithms of Daugman and Masek, compared to the enhancement proposed, may reflect the fact that these algorithms were developed with reference to databases of lower quality images.
Enhancements to Daugman's method have also been proposed by other authors. Tisse et al. implemented a combination of integrodifferential operators and the circular Hough transform to obtain an approximation of the pupil centre and so provide an improved starting point for the integrodifferential operator. This technique improved segmentation accuracy by 14% on a database of 50 eye images from 5 subjects. The database used by Tisse et al. was of lower resolution and magnification than the images used in this study. Zuo et al. proposed a segmentation technique in which intensity and location characteristics of the pupil and iris were enhanced before segmentation using Daugman's technique, and this was compared to Masek's segmentation. Using images from the CASIA database, this enhancement to Daugman's technique was reported to increase segmentation success by 12.84% compared to Masek's technique.
An alternative segmentation method, first proposed by Wildes, utilises an edge detection operator and the Hough transform to detect the circular boundaries of the iris. Masek implemented a similar technique but with Canny edge detection.
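The voting idea behind this family of methods can be sketched as a minimal circular Hough transform. This is an illustration under stated assumptions: edge detection is taken as already done (a set of edge points, here synthesised from a known circle, stands in for Canny output), and the accumulator is a simple dictionary rather than an optimised array.

```python
import math

def circular_hough(edge_points, radii, width, height, samples=60):
    """Minimal circular Hough transform: each edge point votes for every
    candidate centre lying at distance r from it; the most-voted
    (cx, cy, r) accumulator cell is returned."""
    votes = {}
    for x, y in edge_points:
        for r in radii:
            for k in range(samples):
                t = 2 * math.pi * k / samples
                cx = int(round(x + r * math.cos(t)))
                cy = int(round(y + r * math.sin(t)))
                if 0 <= cx < width and 0 <= cy < height:
                    votes[(cx, cy, r)] = votes.get((cx, cy, r), 0) + 1
    return max(votes, key=votes.get)

# Edge points sampled from a circle of radius 8 centred at (20, 20)
edges = [(round(20 + 8 * math.cos(2 * math.pi * k / 24)),
          round(20 + 8 * math.sin(2 * math.pi * k / 24))) for k in range(24)]
cx, cy, r = circular_hough(edges, range(6, 11), 40, 40)
print(cx, cy, r)  # should recover a centre near (20, 20) and a radius near 8
```

The need for image-dependent parameters noted above is visible even in this sketch: the edge threshold (implicit in the input points), the radius range and the accumulator resolution all have to be chosen for the capture conditions at hand.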
Techniques that use integrodifferential operators remain the most robust of segmentation methods as they do not require definition of parameters across different image data sets, but enhancements are still required. Hough transforms have provided a feasible alternative but require image dependent parameters to be defined that may alter when capture conditions change. A technique that functions generically across all imaging conditions has not yet been developed.
Matching to decide whether a pair of iris images is from the same individual is conducted by obtaining the HD between two irides and determining whether they match based on a chosen threshold. This threshold value is derived from the bimodal distribution of inter-class and intra-class variations. This value selection is to some extent subjective and in practice the choice depends on the desired security level. A high security system will have a low decision criterion to ensure little chance of a false match. This will, however, increase the possibility of rejecting a match between iris image pairs from the same individual. Conversely, for a low security system, the decision criterion could be increased to reduce the number of false rejections but will in turn increase the possibility of incorrectly matching iris images from different individuals. A decision criterion of HD = 0.4 has been used previously [1, 7]. In this study, a range of HD values between 0.37 and 0.41 was tested for a series of pupil dilations. The success in matching can be improved by selecting a higher HD value, but the success rate decreases rapidly regardless of HD value for a pupil size above 7 mm. Further work is required to develop a system that can recognise the changing behaviour of iris structure caused by pupil dilation.
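The trade-off can be illustrated with a toy calculation. The HD samples below are invented purely for illustration (they are not values from this study); the point is how the false rejection and false acceptance rates move in opposite directions as the criterion rises.

```python
def match_rates(intra_hds, inter_hds, criterion):
    """False rejection rate (same-eye pairs wrongly rejected) and false
    acceptance rate (different-eye pairs wrongly matched) for a given HD
    decision criterion: a pair is declared a match when HD < criterion."""
    frr = sum(hd >= criterion for hd in intra_hds) / len(intra_hds)
    far = sum(hd < criterion for hd in inter_hds) / len(inter_hds)
    return frr, far

# Invented HD samples; dilation tends to push same-eye HDs upward
intra = [0.20, 0.28, 0.33, 0.38, 0.42]  # same-eye comparisons
inter = [0.40, 0.44, 0.47, 0.50, 0.52]  # different-eye comparisons
for criterion in (0.37, 0.39, 0.41):
    frr, far = match_rates(intra, inter, criterion)
    print(f"HD < {criterion}: FRR = {frr:.0%}, FAR = {far:.0%}")
```

Here raising the criterion from 0.37 to 0.41 halves the false rejections but introduces a false acceptance, mirroring the security trade-off described above.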
Iris pattern and structure may vary according to differing factors; for example, light and medication may deform the iris differently. Other causes, such as surgical procedures and medical conditions, as well as age, may result in changes to the iris over time. A system that can accurately match a pair of iris images from the same individual, irrespective of changes in image features, will require enhancements at all stages of processing together with a means of knowing which features are immutable.
The robustness of iris segmentation techniques has been examined on a new database of high quality images. Existing methods produced inadequate results when applied to the images collected in this study. A proposed enhancement of one of the existing techniques improves segmentation accuracy significantly and will advance the development of iris recognition systems and assist in identification of feature stability. Matching reliability is significantly lower when a pupil is dilated. A more thorough investigation of iris dynamics across a larger population is required.
The authors acknowledge financial support from the Northern Ireland Department of Education and Learning.
- Daugman J: How Iris Recognition Works. IEEE Trans Circuits Syst Video Technol. 2004, 14 (1): 21-30. 10.1109/TCSVT.2003.818350.
- Pierscionek B, Crawford S, Scotney B: Iris recognition and ocular biometrics - the salient features. Proceedings of the 12th International Machine Vision and Image Processing Conference: 3-5 September 2008; Northern Ireland. IEEE Computer Society.
- Flom L, Safir A: Iris Recognition System. US Patent 4 641 349. 1987.
- Daugman J: High confidence visual recognition of persons by a test of statistical independence. IEEE Trans Pattern Anal Mach Intell. 1993, 15: 1148-1161. 10.1109/34.244676.
- Daugman J: Biometric Personal Identification System Based on Iris Analysis. US Patent 5 291 560. 1994.
- Ratha NK, Govindaraju V: Iris Recognition in Less Constrained Environments. Advances in Biometrics: Sensors, Algorithms and Systems. Edited by: Matey JR, Ackerman D, Bergen J, Tinker M. 2007, New York: Springer-Verlag, 107-131.
- Masek L: Recognition of Human Iris Patterns for Biometric Identification. The University of Western Australia. 2003, [http://www.csse.uwa.edu.au/~pk/studentprojects/libor/LiborMasekThesis.pdf]
- Rankin D, Scotney B, Morrow P, McDowell R, Pierscionek B: Comparing and Improving Algorithms for Iris Recognition. Proceedings of the 13th International Machine Vision and Image Processing Conference: 2-4 September 2009; Republic of Ireland. IEEE Computer Society.
- Canny J: A computational approach to edge detection. IEEE Trans Pattern Anal Mach Intell. 1986, 8 (6): 679-698. 10.1109/TPAMI.1986.4767851.
- Hough PVC: Method and Means for Recognizing Complex Patterns. US Patent 3 069 654. 1962.
- Low AA: Automatic selection of grey level for splitting. Introductory Computer Vision and Image Processing. 1991, New York: McGraw-Hill, 57-58.
- Chinese Academy of Sciences Institute of Automation (CASIA) iris database. [http://www.cbsr.ia.ac.cn/english/IrisDatabase.asp]
- Monro DM, Rakshit S, Zhang D: Smart Sensors Ltd. Iris Image Database. [http://www.irisbase.com/download.htm]
- Tisse C, Martin L, Torres L, Robert M: Person identification technique using human iris recognition. Proceedings of Vision Interface. 2002, 294-299.
- Zuo J, Kalka N, Schmid N: A robust iris segmentation procedure for unconstrained subject presentation. Biometrics Symposium: Special Session on Research at the Biometric Consortium Conference: 21 August - 19 September 2006; Baltimore. IEEE Computer Society.
- Wildes R: Iris Recognition: An Emerging Biometric Technology. Proc IEEE. 1997, 85: 1348-1363. 10.1109/5.628669.
- Rakshit S, Monro DM: Medical Conditions: Effect on Iris Recognition. Proceedings of the IEEE 9th Workshop on Multimedia Signal Processing: 1-3 October 2007; Crete, Greece. IEEE Computer Society, 357-360.
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.