ABIS: Spanish police to use automatic facial recognition technology

The National Police, Civil Guard and regional forces will soon have a new tool to fight crime: an automatic facial recognition system. The ABIS (Automated Biometric Identification System) program, which uses artificial intelligence to identify suspects within seconds from any type of photo, is currently in the database-creation stage, according to ministry sources. Beta tests have already been carried out, and once the system is ready it will be used in police investigations, initially only for serious crimes. The Interior Ministry insists that it will in no way be used for surveillance or for the direct identification of people in public spaces, although independent analysts consulted by EL PAÍS believe the system does not offer all the required transparency guarantees.
The ABIS algorithm, called Cogent, was developed by the French military technology company Thales. The system compares an image entered by agents, extracted for example from a security camera, with the images available in the system to find matches. The database against which the photos will be compared will consist of about five million facial photo records of detainees and suspects already on file, according to the Interior Ministry (other sources speak of 5.6 million photos of 3.9 million people arrested). These files are formatted so that the tool can read them.
To this database will be added the photos of those arrested from the moment the system comes into use. The National Police stresses that in no case can civil registries be used, such as those containing identity-document photographs, to which the police also have access. The Interior Ministry has been working on the project, which has suffered several delays, for at least three years.
Each person has a unique arrangement of facial features. In a first stage, facial recognition systems extract the face from the image using computer vision: they locate a face in the photo. They then apply an algorithm to that face to derive a pattern that represents it and distinguishes it from others. AI systems make it possible to search large-scale image banks for this pattern, which is unique to each individual and stable over the years, and to return similar results. Each algorithm (each provider) has its own formula for extracting patterns and finding matches.
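The core idea above (turn each face into a numeric pattern, then compare patterns) can be sketched in a few lines. This is an illustrative toy, not Cogent's actual algorithm: the embeddings below are made-up stand-ins for the vectors a real face-recognition model would produce, and cosine similarity is one common (assumed) way to compare such vectors.

```python
from math import sqrt

def cosine_similarity(a, b):
    """Compare two face-embedding vectors: values near 1.0 mean very similar."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy embeddings standing in for what a model would extract from photos:
# two photos of the same person, and one photo of a different person.
same_person_a = [0.8, 0.1, 0.3]
same_person_b = [0.7, 0.2, 0.3]
other_person = [0.1, 0.9, 0.2]

print(cosine_similarity(same_person_a, same_person_b))  # high (same person)
print(cosine_similarity(same_person_a, other_person))   # much lower
```

Because the pattern is a fixed-length vector, comparing a probe photo against millions of records reduces to millions of cheap vector comparisons, which is what makes searches over large image banks feasible in seconds.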
As EL PAÍS has learned, the Spanish Data Protection Agency (AEPD) is in contact with the Interior Ministry "to address various projects of the Ministry that could have an impact on data protection", including ABIS. The agency, which was not aware of the project's existence until July, must determine whether or not processing this type of personal data poses an acceptable risk to the rights and freedoms of citizens. Can the police keep people's facial data forever, or should time limits apply? Under what circumstances can the system be used? Who has access to the data? What guarantees surround the intended use of the tool?
Once the database, made up of records provided by the different police forces (such as the Guardia Civil and the Mossos d'Esquadra), is finalized, workstations will be deployed at the Central Services of the Scientific Police so that the system can be tested in real use cases. The Interior Ministry does not say when it will be operational, but according to sources familiar with the process, it could take months.

The use of automatic facial recognition systems in police work is gaining ground in Europe, where several countries, such as France, the Netherlands and Germany, have already run pilot tests or have tools in use. The technology will begin to be used at EU borders early next year to register non-EU citizens entering EU territory. The UK has gone further still: police have parked vans with cameras equipped with these systems outside entrances to the London Underground.
In the United States, one of the countries where this technology is most widely used, several cities have decided to ban it after protests by the Black Lives Matter movement, which identifies facial recognition as an instrument of police repression. Other countries, such as Russia and China, are exploiting the surveillance capabilities of the technology. The Asian giant's large cities are blanketed with cameras running live facial recognition systems capable of locating any citizen in a matter of hours.
Revolutionary tool
Inspector Sergio Castro, of the Forensic Anthropology Department of the Scientific Police General Headquarters in Madrid, leads the team of seven people who will initially be responsible for managing the ABIS tool. "It is likely that, if the system works well, they will reinforce us with more personnel; otherwise it will be decentralized," he points out. Once it is operational, the operating parameters and users of the system (i.e., whether or not the various forces will have their own equipment to use ABIS) will also be determined.
Castro does not hide his enthusiasm when he talks about the new tool that has been put in his hands. His department has two main methods for identifying suspects: fingerprint analysis and DNA analysis. Facial recognition would open a third avenue, and a non-invasive one: unlike the other two, it does not require taking physical samples from the subject.
Until now, when there was no candidate or suspect, the camera footage from, say, the bank where a robbery took place was of little use: it was pointless to start looking for whoever appears in the images without evidence to narrow the search. This is where automatic facial recognition tools come in. "When you provide a photo of someone, the system sorts the police record photos [some five million, according to the Interior Ministry] from most similar to least similar. The operator then reviews the top positions looking for a match," Castro points out.
The agent's work is key: depending on how clear the photo is and the degree to which the face is obscured (glasses, beard, differences in posture, etc.), the correct photo may appear in thirtieth place rather than first. It is always the person, not the computer, who decides whether or not there is a match. If one is found, there is a potential candidate and, the inspector explains, an investigation can be opened to gather evidence. That process may or may not end in an arrest, depending on the evidence found.
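The workflow Castro describes, sorting stored records from most to least similar and handing a shortlist to a human operator, can be sketched as follows. This is a hypothetical illustration: the record IDs, the toy three-number embeddings, and the use of cosine similarity are all assumptions, not details of the real ABIS system.

```python
from math import sqrt

def cosine_similarity(a, b):
    """Similarity between two face-embedding vectors (near 1.0 = very similar)."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(x * x for x in b)))

def rank_candidates(probe, gallery, top_k=30):
    """Sort stored records from most to least similar to the probe photo's
    embedding, and return only the top_k record IDs for human review."""
    scored = sorted(gallery.items(),
                    key=lambda item: cosine_similarity(probe, item[1]),
                    reverse=True)
    return [record_id for record_id, _ in scored[:top_k]]

# Toy gallery mapping record IDs to embeddings (a real system holds millions).
gallery = {
    "rec-001": [0.1, 0.9, 0.2],
    "rec-002": [0.7, 0.2, 0.3],
    "rec-003": [0.5, 0.5, 0.5],
}
probe = [0.8, 0.1, 0.3]  # embedding extracted from, e.g., a security-camera frame

print(rank_candidates(probe, gallery, top_k=2))  # → ['rec-002', 'rec-003']
```

Note that the system only ranks; as the article stresses, the decision that two faces actually match stays with the human operator who reviews the shortlist.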
In a second step, if the candidate is to be validated for investigation or arrest, a forensic study is carried out, just as has been done until now with fingerprints or DNA. "My team does an individual study of the subject provided by the automated tool. We seek very high reliability, because our expert report can lead to a conviction, and for that a lot of image quality is needed," he confirms. In 90% of cases the identifications they have to make are judicial requests; the rest are ex officio requests from other police departments that have photos of a perpetrator and need to confirm conclusively whether or not he is the person they are investigating, in order to report later to the courts.
The database that will hold the facial image records of suspects is the same one in which fingerprints and DNA samples are already stored. These last two types of personal data are shared with European partners under the Schengen Information System (SIS), and Brussels intends to include facial data in the same package in the future. "The Spanish ABIS can connect to European databases, such as Eurodac, EU-Lisa or VIS, if the corresponding links are designed. It is not an isolated system, but one connected to EU countries," Thales sources explain.
Risks of biometric technologies
Algorithms fail. And being wrong when recommending a movie is not the same as being wrong when identifying a suspect. The case of the American Robert Williams was the first documented wrongful arrest caused by a facial recognition system: the tool confused him with another man, and the agents, far from checking whether he actually resembled the suspect, took him to jail. These systems are trained mostly on data from white people, so they fail far more often with Black and Asian faces; federal studies have found that this technology can be up to 100 times more likely to misidentify Black people than white people.
The draft AI regulation being negotiated in Brussels takes an approach based on the potential risks of applying these technologies. Facial recognition falls into the "high risk" category, though the door is left open to its use as long as it is "for the purposes of preventing, apprehending, or investigating serious crime or terrorism." Tools for "indiscriminate surveillance" are expressly prohibited, so in principle these systems cannot be deployed on the streets to identify passersby. According to ministry sources, that is by no means the Interior Ministry's intention.
The use of algorithms in public affairs should be audited and monitored. According to the Interior Ministry, the system developed by Thales has been validated by the Civil Guard and the National Police. "Scientific and forensic specialists from the state security forces and bodies participated in the verification work," the sources pointed out. Cogent's algorithm, the heart of the ABIS system, has also passed the vendor test of NIST, the US standards body. "The guarantee is that the evaluated algorithm meets the required standards and requirements for different use cases," says the Interior Ministry. "NIST does not say whether algorithms are good or bad. Moreover, the organization offers several evaluations with different objectives, and we do not know which ones they are referring to," says Carmela Troncoso, professor at the École Polytechnique Fédérale de Lausanne (Switzerland) and author of the secure tracing protocol used in contact-tracing applications.
Gemma Galdón, director of the algorithmic auditing consultancy Eticas Consulting, does not think that is enough either. "Under European regulations, the suitability of high-risk technologies must be justified, along with what they are expected to achieve. It is also necessary to know what precautions have been taken to avoid algorithmic bias: these systems have been shown to identify white people better than everyone else, so it must be proven that they do not make mistakes with Black people," she explains.