One of the main challenges faced by iris recognition systems is working with
people in motion, with the sensor at a greater distance (more than 1 m) from the person.
The ultimate goal is to make the system less intrusive and to require less cooperation from the
subject. When this scenario is implemented with a single static sensor, the
sensor must have a wide field of view and the system must process a large number of frames per second
(fps). In such a scenario, many of the captured eye images will not have adequate quality (contrast or
resolution). This paper describes the implementation in an MPSoC (multiprocessor system-on-chip)
of an eye image detection system that integrates, in the programmable logic (PL) part, a functional
block to evaluate the level of defocus blur of the captured images. In this way, the system will be
able to discard images that do not have the required focus quality in the subsequent processing steps.
The proposed blocks were successfully designed using Vitis High-Level Synthesis (VHLS) and integrated
into an eye detection framework capable of processing over 57 fps working with a 16 Mpixel sensor.
Using an extended version of the CASIA-Iris-distance V4 database for validation, the experimental
evaluation shows that the proposed framework successfully discards unfocused eye images.
More importantly, in a real implementation this proposal allows up to
97% of out-of-focus eye images to be discarded, so that they do not have to be processed by the subsequent
segmentation and normalised iris pattern extraction blocks.
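The abstract does not specify which focus measure the PL block computes, but the idea of scoring defocus blur can be illustrated in software. The sketch below is a minimal sketch, assuming a common variance-of-Laplacian focus measure; the function names and the threshold value are illustrative, not taken from the paper.

```python
import numpy as np

# Hypothetical software analogue of the defocus-blur evaluation block:
# convolve the grayscale eye image with a 3x3 Laplacian kernel and use
# the variance of the response as a focus score (sharp images give high
# variance, defocused ones give low variance).
LAPLACIAN = np.array([[0,  1, 0],
                      [1, -4, 1],
                      [0,  1, 0]], dtype=np.float64)

def focus_score(image: np.ndarray) -> float:
    """Variance of the Laplacian response of a 2-D grayscale image."""
    img = image.astype(np.float64)
    h, w = img.shape
    # Valid-mode 3x3 convolution implemented with shifted slices.
    out = np.zeros((h - 2, w - 2))
    for dy in range(3):
        for dx in range(3):
            out += LAPLACIAN[dy, dx] * img[dy:dy + h - 2, dx:dx + w - 2]
    return float(out.var())

def is_focused(image: np.ndarray, threshold: float = 100.0) -> bool:
    # 'threshold' is an illustrative cut-off, not a value from the paper;
    # in the described framework it would gate which frames reach the
    # segmentation and iris pattern extraction stages.
    return focus_score(image) >= threshold
```

In a pipeline like the one described, only frames with a score above the chosen threshold would be forwarded to segmentation, which is what lets the system drop most out-of-focus captures early.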