Monday, June 22, 2009

Iris Recognition -- The Source of the Project and Methodologies to be Used

The Source of the Project


In Spring 2008, I took a Biometrics class as a CS elective. It was a great class, and gave me a good introduction to the methods that are used in the field. For that class, we had to complete a semester project, either independently or with a group. Being a shy person and, besides that, quite adept at procrastinating, I procrastinated myself right out of working with a group. That itself wasn't too much of a problem; most of my experiences with group projects in CS classes haven't been that great. The problem was that Iris Recognition was the biometric process that I was most interested in and, according to the professor, it was a rather involved field as well. She wasn't quite sure one student should take on such a project on his own.

The professor and I ultimately decided that I would try my best to get some sort of IR process built and working using MATLAB, an expensive computational software package installed on many of the campus computers, as the platform. MATLAB has built-in functions for reading in, manipulating, and displaying images. I read the in-class resources relevant to IR, attempted to boil the methodologies down to a series of image-manipulation steps, and wrote my MATLAB program to carry them out. I got an A on that project.

This class, and my continued interest in IR, are what led me to choose a similar idea for my capstone. For this project, however, I would not be using MATLAB and its built-in image functions -- that would make the project too trivial and wouldn't allow me to exhibit the other relevant knowledge I've gained from my other classes. I would be implementing my own functions in C/C++ from scratch (with help from outside sources for algorithms, algorithm ideas, etc.).

Methodologies to be Used


Iris recognition is a multi-step process:

  1. Obtain the image - before you can analyze the iris, you need to snap an image of the user's eye. Fortunately, there are pre-existing databases of eye images that remove the need for a camera and a pool of subjects. The UBIRIS.v1 database is free to the public and contains almost 2,000 images taken from 241 subjects over two distinct sessions. A password for the zip file must be obtained from the authors before you can use the images.

  2. Extract the iris data - the real meat of the entire process. Once the iris data is extracted from an eye, it can be stored in a database along with other information about the user. For the recognition process to feel "natural" to a user, the iris must be extracted quickly and accurately. Speed depends primarily on the algorithms used; accuracy depends on those same algorithms and on how well various noise sources are accounted for. Yes, a lot of this stuff is algorithms.

  3. Archive iris samples in a database - first, the iris data must be obtained in a controlled environment: you bring the user into a clean room, tell them to open their eyes wide and hold still, and snap some pictures of their eyes. This reduces the amount of noise in the image, giving a very good reference to compare against. Hand the images to the software to process, and off you go.

  4. Compare new irises and generate acceptance/rejection - once the database is built, every user desiring access to a high-security area must be scanned and matched against the database. This is where the noise-resolving algorithms come into play: you have to account for half-blinks (showing half the iris), full blinks (no iris visible), blurry images (due to movement), dust and scratches on the camera lens, awkward eye angles (for example, the user looked out of the corner of their eye instead of straight on), items on the user's face (bushy eyebrows, eyeglasses, sunglasses, etc.), varying lighting conditions, and lots of other things that just degrade the image and make iris extraction harder.
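The comparison in step 4 is usually the simple part: in the literature (Daugman's work in particular), two irises are compared by encoding each as a fixed-length binary code and computing the fraction of disagreeing bits, the normalized Hamming distance. This is not something my Biometrics project did, but a minimal C++ sketch of the idea looks like this (the 0.32 threshold is a commonly cited ballpark, not a value from my project):

```cpp
#include <bitset>
#include <cstddef>

// Fraction of disagreeing bits between two binary iris codes.
// N is the code length in bits; real systems often use 2048.
template <std::size_t N>
double hammingDistance(const std::bitset<N>& a, const std::bitset<N>& b) {
    return static_cast<double>((a ^ b).count()) / N;
}

// A match is typically declared when the distance falls below a tuned
// threshold; values around 0.32 appear in published work.
template <std::size_t N>
bool isMatch(const std::bitset<N>& a, const std::bitset<N>& b,
             double threshold = 0.32) {
    return hammingDistance(a, b) < threshold;
}
```

Identical codes give a distance of 0, completely opposite codes give 1, and two unrelated irises tend to land near 0.5, which is why a threshold around a third works.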



Once step 2, iris extraction, is implemented properly (that is, fast and accurate), the rest should be relatively easy. Iris extraction itself is tricky: you first have to detect the outer edge of the iris, then the border of the pupil (you wouldn't want the pupil to interfere with the iris data). While this is simple for people, who see a picture of an eye, it's rather involved for a computer, which sees nothing more than a field of different-colored pixels. There are no inherent similar-color groupings, difference-of-color edges, and so on; the computer has to be taught to find them using algorithms. Even then, the pupil is often off-center relative to the iris, so iris extraction isn't as simple as cutting concentric circles out of the eye; you have to use somewhat more complicated math to get the right data. Once you have the iris data, you can run it through a "Gabor filter" to get a black-and-white image that is, theoretically, unique to every iris. This Gabor step was not present in my Biometrics project, but I may include it in this project to improve accuracy.
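The standard answer to the off-center-pupil problem, as I understand the literature, is Daugman's "rubber sheet" model: each sample point is interpolated between the pupil boundary and the outer iris boundary at the same angle, so the two circles don't need to share a center. A small C++ sketch under that assumption (the struct and function names are my own, not from any real implementation):

```cpp
#include <cmath>

struct Circle { double cx, cy, r; };

// Daugman-style "rubber sheet" mapping: a normalized sample at angle
// theta and radial fraction t (0 = pupil boundary, 1 = iris boundary)
// is linearly interpolated between the two boundaries, handling an
// off-center pupil without assuming concentric circles.
void rubberSheetPoint(const Circle& pupil, const Circle& iris,
                      double theta, double t,
                      double& x, double& y) {
    double px = pupil.cx + pupil.r * std::cos(theta);
    double py = pupil.cy + pupil.r * std::sin(theta);
    double ix = iris.cx + iris.r * std::cos(theta);
    double iy = iris.cy + iris.r * std::sin(theta);
    x = (1.0 - t) * px + t * ix;
    y = (1.0 - t) * py + t * iy;
}
```

Sweeping theta over [0, 2π) and t over [0, 1] produces the "unwrapped" rectangular iris image described in the step list below.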

Anyway, before I close this post, here is the method of iris extraction I used in the Biometrics project:

  1. Prep the test images for use (convert to greyscale and remove reflections in the pupil). This manual prep is useful while developing the process, but sooner or later the program itself will have to deal with the noise.

  2. Detect the pupil center/width

    1. Convert the image to black-and-white using im2bw() with a threshold of 0.3 (the threshold is a number between 0 and 1; pixels brighter than it become white, the rest black.)

    2. Convert the B&W image to a Euclidean distance transform using bwdist() (each pixel receives a new value equal to its distance to the nearest white pixel. This means the pixel in the middle of the pupil will have the highest value.)

    3. Convert the image back to greyscale (this rescales the distance values into grey levels; the pixel in the middle of the pupil gets the highest possible value, making it white)

    4. Sweep the image for this middle-of-the-pupil pixel

    5. Sweep left from the middle pixel until a pure black pixel is encountered

    6. Sweep right, either from the middle pixel or from the far-left obtained above, until a pure black pixel is found.

    7. The left and right sweeps give us the width of the pupil, half of which is the radius.



  3. Go back to the original greyscale image (probably by saving a copy before detecting the pupil)

  4. Detect the iris center/width

    1. Use im2bw() again with a threshold of 0.75, then convert back to greyscale

    2. Starting at the pupil center, sweep left until reaching a pixel whose value is less than 5% of the maximum, then sweep right under the same criterion. This gives us the width of the iris, half of which is the radius. Sweep back left by the radius to get the iris's horizontal center.

    3. For better accuracy, also sweep up and down to find the vertical center.



  5. "Unwrap" the iris into a rectangle by taking samplings at various radii from the pupil to the outer iris edge

  6. We now have the iris data.
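The pupil-detection idea in step 2 can be sketched without MATLAB's built-ins, which is essentially what the capstone will require. Below is a from-scratch C++ illustration of steps 2.1 through 2.4: threshold the greyscale image, compute a Euclidean distance transform over the dark region, and take the pixel farthest from any bright pixel as the pupil center. The function name and the brute-force O(n^2) distance scan are my own simplifications for the demo; a real implementation would use a linear-time distance transform:

```cpp
#include <algorithm>
#include <vector>

struct Point { int x, y; };

// Mirrors the im2bw()/bwdist() pipeline: pixels below the threshold are
// "pupil candidates"; the candidate whose nearest bright pixel is farthest
// away (largest distance-transform value) is taken as the pupil center.
Point findPupilCenter(const std::vector<std::vector<double>>& grey,
                      double threshold = 0.3) {
    int h = static_cast<int>(grey.size());
    int w = static_cast<int>(grey[0].size());
    // Collect bright ("non-pupil") pixels after thresholding.
    std::vector<Point> bright;
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x)
            if (grey[y][x] >= threshold) bright.push_back({x, y});
    // For each dark pixel, find the squared distance to the nearest
    // bright pixel; the center is the dark pixel maximizing it.
    Point best{0, 0};
    double bestDist = -1.0;
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            if (grey[y][x] >= threshold) continue;
            double nearest = 1e30;
            for (const Point& b : bright) {
                double dx = x - b.x, dy = y - b.y;
                nearest = std::min(nearest, dx * dx + dy * dy);
            }
            if (nearest > bestDist) { bestDist = nearest; best = {x, y}; }
        }
    return best;
}
```

From the returned center, the left/right sweeps of steps 2.5 through 2.7 give the pupil's width and radius.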



Of course, some steps will have to be refined, but the basic process will be the same. In the previous project, I used simple circles to sample the iris, but for the capstone I'll have to try to understand the more complicated equations behind properly extracting the data. And, of course, I'll have to find out how that Gabor filter works.
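For reference while I figure the Gabor filter out: the real part of a 2D Gabor filter is just a Gaussian-windowed cosine wave, and Daugman's iris code quantizes the filter's phase response into bits. A hedged C++ sketch of generating such a kernel, where the parameter names (lambda = wavelength, theta = orientation, sigma = envelope width) follow common usage and the values are purely illustrative:

```cpp
#include <cmath>
#include <vector>

const double kPi = 3.14159265358979323846;

// Builds a size-by-size kernel (size should be odd so there is a center
// pixel). Each entry is a Gaussian envelope times a cosine carrier,
// with coordinates rotated into the filter's orientation.
std::vector<std::vector<double>> gaborKernel(int size, double lambda,
                                             double theta, double sigma) {
    std::vector<std::vector<double>> k(size, std::vector<double>(size));
    int half = size / 2;
    for (int y = -half; y <= half; ++y)
        for (int x = -half; x <= half; ++x) {
            // Rotate coordinates into the filter's orientation.
            double xr =  x * std::cos(theta) + y * std::sin(theta);
            double yr = -x * std::sin(theta) + y * std::cos(theta);
            double envelope =
                std::exp(-(xr * xr + yr * yr) / (2.0 * sigma * sigma));
            double carrier = std::cos(2.0 * kPi * xr / lambda);
            k[y + half][x + half] = envelope * carrier;
        }
    return k;
}
```

Convolving the unwrapped iris rectangle with kernels like this (at several orientations and wavelengths) is what produces the black-and-white Gabor image mentioned above.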
