Rank One stands alone with top-tier performance in NIST FRVT Ongoing benchmark

On July 27th the U.S. National Institute of Standards and Technology (NIST) released the first Face Recognition Vendor Test (FRVT) update since COVID-19 paused reporting. The report includes 221 face recognition algorithms from roughly 150 organizations across the world (the bulk of which are based in Russia and China). The benchmark includes the ROC SDK v1.22, which was the only FRVT submission to receive top marks in both accuracy and efficiency.

Accuracy and Efficiency

The accuracy of FRVT algorithms is measured on constrained scenarios such as mugshots and border crossing imagery, and unconstrained (or “wild”) scenarios such as photojournalism. Accuracy is reported as False Non-Match Rate (FNMR) for various False Match Rates (FMR). Rank One’s accuracy metrics on these benchmarks are:

[Table: Rank One accuracy metrics on the FRVT benchmarks]
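To illustrate how these two metrics relate, the following sketch (not from the FRVT protocol or the ROC SDK; the score lists and threshold logic are hypothetical) computes the FNMR at a target FMR from genuine and impostor similarity scores:

```python
import numpy as np

def fnmr_at_fmr(genuine_scores, impostor_scores, target_fmr):
    """Compute the False Non-Match Rate at the similarity threshold
    where the impostor distribution yields the target False Match Rate."""
    impostor = np.sort(np.asarray(impostor_scores))[::-1]  # high to low
    # Threshold set so roughly target_fmr of impostor pairs match
    k = max(int(np.floor(target_fmr * len(impostor))), 1)
    threshold = impostor[k - 1]
    genuine = np.asarray(genuine_scores)
    # FNMR: fraction of genuine pairs that fall below the threshold
    return np.mean(genuine < threshold)
```

Sweeping the threshold in this way across a range of FMR values is also how a DET curve, like the one discussed later in this article, is traced.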

The efficiency of FRVT algorithms is measured as the template generation speed (time needed to initially process a face image or video frame), the template size (memory required to represent facial features of a processed face image), and the comparison speed (time needed to measure the similarity between two facial templates). Rank One’s efficiency metrics on these benchmarks are:

[Table: Rank One efficiency metrics on the FRVT benchmarks]
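The efficiency metrics above can be approximated for any embedding-based matcher. The following sketch uses random stand-in templates and cosine similarity, which are assumptions for illustration only (the ROC SDK's actual template format and comparison function are proprietary), to show how comparison speed and template size might be measured:

```python
import time
import numpy as np

def benchmark_comparison(dim=256, n_pairs=100_000):
    """Time template comparisons (cosine similarity) between
    random stand-in templates of a given dimensionality."""
    rng = np.random.default_rng(0)
    a = rng.standard_normal((n_pairs, dim)).astype(np.float32)
    b = rng.standard_normal((n_pairs, dim)).astype(np.float32)
    # Normalize so the dot product equals cosine similarity
    a /= np.linalg.norm(a, axis=1, keepdims=True)
    b /= np.linalg.norm(b, axis=1, keepdims=True)
    start = time.perf_counter()
    scores = np.einsum('ij,ij->i', a, b)  # one score per pair
    elapsed = time.perf_counter() - start
    template_bytes = a[0].nbytes          # template size in memory
    return elapsed / n_pairs, template_bytes, scores
```

Template size matters because gallery databases must hold one template per enrolled face, and comparison speed determines how quickly a probe can be searched against that gallery.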

Both the accuracy and efficiency marks put Rank One in the top echelon of vendor performance. In fact, of the 221 algorithms in the report, no vendor is able to deliver the same balance of top-tier accuracy and efficiency as Rank One. 

To better illustrate how well Rank One’s algorithm stacks up in accuracy and efficiency, the following table compares Rank One to other Western-friendly vendors by the performance rank for each metric. The rankings are divided into quintiles: the first quintile contains the top 20% of all algorithms for a given category, the second quintile the next 20% (the 21st through 40th percentiles), and so on, with the fifth quintile containing the worst 20%.

Figure 1. Listed are the performance ranks out of the 221 algorithms for each Western-friendly vendor across different FRVT performance categories (via Tables 5 to 16 in the July 27, 2020 FRVT Ongoing report). Each performance rank is color-coded by quintile.
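The quintile assignment described above is simple arithmetic; as a sketch, a 1-based rank out of the 221 evaluated algorithms maps to a quintile as follows:

```python
import math

def quintile(rank, total=221):
    """Map a 1-based performance rank to its quintile,
    where quintile 1 is the top 20% of algorithms."""
    if not 1 <= rank <= total:
        raise ValueError("rank must be between 1 and total")
    return math.ceil(5 * rank / total)
```

With 221 algorithms, ranks 1 through 44 fall in the first quintile and rank 45 begins the second.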

It is important to point out that face recognition vendors NEC, Amazon, Microsoft, and Clearview AI have never submitted their algorithms to FRVT Ongoing in the 3+ years of the benchmark, despite it being the preeminent source for comparing face recognition vendor capability. Because their accuracy has never been independently validated, these algorithms should not be relied upon for facial verification.

The Rank One algorithm is in the first quintile for 7 of the 8 performance metrics analyzed; the next closest vendor has 5 metrics in the first quintile. Further, Rank One is in the top two quintiles for all performance metrics, while every other vendor has at least one metric in the fourth quintile or lower. The following chart summarizes how many metrics fell into each quintile for each vendor:

[Chart: tally of performance metrics per quintile, by vendor]

Figure 2. A tally of the number of performance metrics that ranked in each quintile. The Rank One algorithm has 7 of its 8 metrics in the first quintile, and all metrics in the top two quintiles. All other vendors had at least one metric in the fourth quintile or lower.

It is clear from the performance rankings that the Rank One algorithm is the only Western-friendly solution to deliver top-tier accuracy and efficiency, with no other vendor coming close.

Racial Bias

There is a significant amount of false and misleading information regarding the performance of face recognition algorithms as a function of race. While many factors can influence the accuracy of face recognition algorithms, top-tier FRVT-validated algorithms are highly accurate across all races. 

The most reliable benchmark of accuracy as a function of race is the breakdown provided on the FRVT mugshot data, for three reasons. First, it reduces confounding factors that can influence face recognition accuracy, such as camera quality or occlusions, and thus provides a more isolated inspection of accuracy as a function of race. Second, the ground-truth identity labels have been cleaned and validated (as demonstrated in previous FRVT report change-logs). Third, it relates to the identification of persons of interest in criminal investigations, which typically use mugshot arrest records as the gallery database.

Figure 3. The Detection Error Tradeoff (DET) curve of the Rank One algorithm on “Black” (B), “White” (W), Male (M), and Female (F) subjects on mugshot data, as provided in Figure 79 of the NIST FRVT Ongoing report. The lower left corner of the plot represents lower error rates, and the upper right corner higher error rates. The Rank One algorithm was measured as most accurate on “Male Black”, and second most accurate on “Female Black”, with “White” subjects exhibiting lower relative accuracy.

As shown in Figure 3, the demographic cohort with the lowest error rate under the Rank One algorithm is “Male Black”, followed by “Female Black”. “White” subjects exhibited higher error rates than both “Black” cohorts. This stands in sharp contrast to the common media narrative that face recognition algorithms do not work on persons of color; in truth, top-tier face recognition algorithms are highly accurate across all races and skin colors.

A separate analysis is performed by NIST in FRVT Ongoing Figure 193 that inspects only the Type I error (false match rate) of algorithms on Visa imagery from different geographic origins. Copied below in Figure 4, this chart plots the False Match Rate of the worst performing geographic cohort for each algorithm. Rank One has the 12th lowest error rate of the 221 algorithms evaluated. 

[Chart: worst-cohort False Match Rate by algorithm, from FRVT Figure 193]

Figure 4. The error rate of the worst performing geographic cohort for each FRVT vendor is plotted in the above chart (copied from FRVT Figure 193). Rank One had one of the best performances in this analysis, indicating that it is one of the most racially and geographically balanced algorithms submitted to NIST FRVT.


When building face recognition systems, it is critical to have top-tier accuracy and efficiency so that a system can be deployed across a variety of hardware platforms. Among all the Western-friendly algorithm vendors, only Rank One delivers top marks for both accuracy and efficiency.

Contact us today to begin evaluating the ROC SDK!

Like this article? Subscribe to our blog or follow us on LinkedIn or Twitter to stay up to date on future articles.
