Facial recognition software: security tool or security threat?

Facial recognition technology has been around for years, but only relatively recently has it moved into the mainstream, and it’s undeniably exciting the first time your phone logs you in just by looking at you. The tech industry’s push to make facial recognition ubiquitous makes huge commercial sense, opening up all kinds of business opportunities, but it won’t necessarily be easy, as it also raises serious questions and concerns.

The case for facial recognition software

There are jobs that rely on being able to recognise and identify others, and these are frequently still done by people. However, people can only remember a limited number of faces accurately and can easily make mistakes when identifying somebody from their face alone. Computerised facial recognition software should not suffer from these limitations. A key case for using it, then, is to ease the burden on humans, which, by happy coincidence, could also bring organisations a significant financial benefit: they could stop relying on people to do a task that computers can do more accurately.

As well as the question of accuracy, with human checkers there is also the issue of speed. The longer a security check takes, the more stressful it becomes for both the checkers and the checked, especially if either party is under time pressure, such as needing to catch a flight. Airports are therefore an obvious candidate for integrating facial recognition software alongside human checks, but events like large conferences could also use this kind of software to welcome and check in guests, avoiding the long queues and tense waits that can characterise such events.

The case against facial recognition software

Right now, a large part of the case against facial recognition software is that it is not yet as good as it could be, and it has a concerning habit of throwing up false positives. Advocates of the technology argue that this will improve as it matures, which is a fair point, but the only real way for the technology to mature effectively is to deploy it in real-world environments, see what issues arise, and then take steps to fix them.

That, however, means that the people who use it will be inadvertent user testers.

Another concern that some have raised is whether facial recognition software will ever be harder to deceive than a well-trained human, or whether it might in fact be easier. People generally have an excellent grasp of the concept of deception and are instinctively alert to it (in some situations more than others). Computers, at present, do not, and although artificial intelligence may develop to the point where they do, it isn’t there yet.

This all points to a case for starting to use facial recognition systems without yet relying on them for complete coverage. Whether they are logging users in to a server or checking people against the passports they carry, backing up the digital experience with human expertise could provide the best of both worlds until the software is capable of running the show.
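As a rough illustration of that hybrid arrangement, the decision logic might look something like the following sketch. The function name, score scale, and threshold values here are purely hypothetical, not taken from any real facial recognition product: the point is simply that the software handles the confident cases while uncertain ones fall back to a human checker.

```python
# Hypothetical sketch: route a facial-recognition match score (0.0-1.0)
# either to an automatic decision or to a human checker.
ACCEPT_THRESHOLD = 0.9  # illustrative value: auto-accept above this score
REJECT_THRESHOLD = 0.5  # illustrative value: auto-reject below this score

def check_identity(match_score: float) -> str:
    """Decide how to handle a match score from the recognition software."""
    if match_score >= ACCEPT_THRESHOLD:
        return "accept"        # software is confident: wave the person through
    if match_score < REJECT_THRESHOLD:
        return "reject"        # clearly not a match: deny automatically
    return "human_review"      # uncertain middle ground: fall back to a person
```

In this arrangement the software never makes the call on borderline cases, so the human expertise described above stays in the loop exactly where the technology is weakest.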