The Fraud-Detection Business Has a Dirty Secret

But first, Ćurčić needs to know how the system works. So far, the government has denied his requests to share the source code on intellectual property grounds, claiming that doing so would violate the contract it signed with the company that built the system, he says. According to Ćurčić and a government contract, a Serbian company called Saga, which specializes in automation, was involved in building the social card system. Neither Saga nor Serbia’s Ministry of Social Affairs responded to WIRED’s requests for comment.

As the govtech sector has grown, so has the number of companies selling systems to detect fraud. And not all of them are local startups like Saga. Accenture—Ireland’s biggest public company, which employs more than half a million people worldwide—has worked on fraud systems across Europe. In 2017, Accenture helped the Dutch city of Rotterdam develop a system that calculates risk scores for every welfare recipient. A company document describing the original project, obtained by Lighthouse Reports and WIRED, references an Accenture-built machine learning system that combed through data on thousands of people to judge how likely each of them was to commit welfare fraud. “The city could then sort welfare recipients in order of risk of illegitimacy, so that highest risk individuals can be investigated first,” the document says. 

Officials in Rotterdam have said Accenture’s system was used until 2018, when a team at Rotterdam’s Research and Business Intelligence Department took over the algorithm’s development. When Lighthouse Reports and WIRED analyzed a 2021 version of Rotterdam’s fraud algorithm, it became clear that the system discriminates on the basis of race and gender. And around 70 percent of the variables in the 2021 system—information categories such as gender, spoken language, and mental health history that the algorithm used to calculate how likely a person was to commit welfare fraud—appeared to be the same as those in Accenture’s version.
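In broad strokes, a risk-scoring system of this kind trains a classifier on historical case data and then uses the predicted probabilities to rank recipients from highest to lowest estimated risk. The sketch below is a purely hypothetical illustration of that pattern; the synthetic data, feature encoding, and choice of model are assumptions for demonstration, not the actual Rotterdam or Accenture code.

```python
# Hypothetical sketch of a welfare-fraud risk-scoring model; all data and
# feature names here are synthetic stand-ins, not the real system.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

# Synthetic stand-ins for the kinds of variables reported in the 2021 system
# (e.g., gender, spoken language, mental-health history), encoded as numbers.
n = 1000
X = rng.integers(0, 2, size=(n, 3))   # three binary demographic-style features
y = rng.integers(0, 2, size=n)        # historical "fraud" labels (synthetic)

# Train a classifier on past cases.
model = GradientBoostingClassifier().fit(X, y)

# Score every recipient, then rank from highest to lowest estimated risk so
# the "highest risk" cases would be investigated first, as the document describes.
risk_scores = model.predict_proba(X)[:, 1]
ranked_recipients = np.argsort(-risk_scores)
print(ranked_recipients[:10])         # indices of the ten highest-scoring people
```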

When asked about the similarities, Accenture spokesperson Chinedu Udezue said the company’s “start-up model” was transferred to the city in 2018 when the contract ended. Rotterdam stopped using the algorithm in 2021, after auditors found that the data it used risked creating biased results.

Consultancies generally implement predictive analytics models and then leave after six or eight months, says Sheils, Accenture’s European head of public service. He says his team helps governments avoid what he describes as the industry’s curse: “false positives,” Sheils’ term for life-ruining occurrences of an algorithm incorrectly flagging an innocent person for investigation. “That may seem like a very clinical way of looking at it, but technically speaking, that’s all they are.” Sheils claims that Accenture mitigates this by encouraging clients to use AI or machine learning to improve, rather than replace, decision-making humans. “That means ensuring that citizens don’t experience significantly adverse consequences purely on the basis of an AI decision.” 

However, social workers who are asked to investigate people flagged by these systems before making a final decision aren’t necessarily exercising independent judgment, says Eva Blum-Dumontet, a tech policy consultant who researched algorithms in the UK welfare system for campaign group Privacy International. “This human is still going to be influenced by the decision of the AI,” she says. “Having a human in the loop doesn’t mean that the human has the time, the training, or the capacity to question the decision.” 

Despite the scandals and repeated allegations of bias, the industry building these systems shows no sign of slowing. And neither does government appetite for buying or building such systems. Last summer, Italy’s Ministry of Economy and Finance adopted a decree authorizing the launch of an algorithm that searches for discrepancies in tax filings, earnings, property records, and bank accounts to identify people at risk of not paying their taxes. 

But as more governments adopt these systems, the number of people erroneously flagged for fraud is growing. And once someone is caught up in the tangle of data, it can take years to break free. In the Netherlands’ child benefits scandal, people lost their cars and homes, and couples described how the stress drove them to divorce. “The financial misery is huge,” says Orlando Kadir, a lawyer representing more than 1,000 affected families. After a public inquiry, the Dutch government agreed in 2020 to pay the families around €30,000 ($32,000) each in compensation. But debt balloons over time. And that amount is not enough, says Kadir, who claims some families are now €250,000 in debt.

In Belgrade, Ahmetović is still fighting to get his family’s full benefits reinstated. “I don’t understand what happened or why,” he says. “It’s hard to compete against the computer and prove this was a mistake.” But he says he’s also wondering whether he’ll ever be compensated for the financial damage the social card system has caused him. He’s yet another person caught up in an opaque system whose inner workings are guarded by the companies and governments that make and operate it. Ćurčić, though, is clear on what needs to change. “We don’t care who made the algorithm,” he says. “The algorithm just has to be made public.”

Additional reporting by Gabriel Geiger and Justin-Casimir Braun.
