In the last decade, interest in fingerprint-based biometric systems has grown significantly. Activity on this topic has increased in both academia and industry: several research groups and companies have developed new algorithms and techniques for fingerprint recognition, and many new electronic fingerprint acquisition sensors have been launched into the marketplace.
Nevertheless, before this initiative, only a few benchmarks were available for comparing developments in this area, and developers usually performed internal tests over self-collected databases. The lack of standards has unavoidably led to the dissemination of confusing, incomparable and irreproducible results, sometimes embedded in research papers and sometimes enriching the commercial claims of marketing brochures.
The aim of this initiative is to take the first steps toward the establishment of a common basis, for both academia and industry, to better understand the state of the art and what can be expected from this technology in the future. We decided to "dress" this effort as an international open competition to boost interest and give the results greater visibility. The 15th ICPR (ICPR 2000) was ideal for this purpose. Starting in late spring 1999, when the FVC2000 web site was set up, we broadly publicized this event, inviting all the companies and research groups we were aware of to take part.
From the beginning, we made clear that the competition was not an official performance certification of the participating biometric systems, as:
In FVC2000, four different "sensors" were used to cover the recent advances in fingerprint sensing techniques: databases 1 and 2 were collected using two small-size, low-cost sensors (optical and capacitive, respectively); database 3 was collected using a higher-quality, large-area optical sensor; and the images in database 4 were synthetically generated using the approach described in . Each database was split into a sequestered "test" set of 800 images (set A) and an open "training" set of 80 images made available to the participants for algorithm tuning (set B).
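The per-database organization implied by the 800/80 split can be sketched as follows. This is a minimal illustration, not competition code: the finger-and-impression layout (a number of fingers, each acquired several times, with whole fingers assigned to either set) is an assumption consistent with the stated image totals, and the default counts below are chosen only so that set A holds 800 images and set B holds 80.

```python
def split_database(n_fingers=110, impressions=8, test_fingers=100):
    """Partition (finger, impression) pairs into a sequestered test
    set A and an open training set B, keeping whole fingers together.

    The default counts (110 fingers x 8 impressions, 100 fingers in
    set A) are hypothetical values consistent with the 800/80 totals.
    """
    set_a = [(f, i)
             for f in range(1, test_fingers + 1)
             for i in range(1, impressions + 1)]
    set_b = [(f, i)
             for f in range(test_fingers + 1, n_fingers + 1)
             for i in range(1, impressions + 1)]
    return set_a, set_b

set_a, set_b = split_database()
print(len(set_a), len(set_b))  # 800 80
```

Splitting by finger rather than by individual image keeps all impressions of a given finger in the same set, so no finger seen during tuning can reappear in the sequestered test.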
In March 2000, after several months of active promotion, we had 25 volunteering participants (about 50% from academia and 50% from industry), far more than our initial expectation (so we had a lot of work to do!). By the end of April, the training sets were released to the participants. After the submission deadline (June 2000) for the executable computer programs, the number of participants had decreased to 11 (most of the initially registered companies withdrew). Some of the withdrawals were undoubtedly due to the lack of time needed to make algorithms "compatible" with FVC2000 images, but most were probably caused by the discrepancies some participants found between the performance measured on the FVC2000 training sets and on their internal test sets; whilst this is not necessarily a problem for an academic research group, it could reflect negatively on a company releasing accuracy figures significantly different from those measured at FVC2000. Perhaps we did not stress strongly enough in our "Call for Participation" that FVC2000 is a test of relative technology performance, not intended to predict performance in a real environment.