IBM has been hired by the UK government to assist the Department for Work and Pensions (DWP) in integrating artificial intelligence into its services across the country. The Nexus AI program, which has a £9 million one-year commitment with options to extend up to £27 million, is designed to “explore, deploy, and support” new tools.
The irony, though, is that the department that helps people find employment is introducing technology that some fear will replace their jobs. The plan calls for rapid “tech spikes,” horizon scans, and proofs of concept, with IBM assisting teams in “building beta minimum viable products” and providing post-launch support, according to a notice seen by The Register.
With a commitment to a “value-led, responsible, secure, and firmly human-centered” approach that “keeps humans in the loop,” the DWP says it is seeking “safe acceleration.” Activists, however, are concerned. Big Brother Watch cites Universal Credit risk models as evidence that the digital welfare system already employs machine learning in ways that may result in automated bias. Jake Hurfurt, its head of research, argues that people have a right to know how these technologies affect their lives.
What’s Nexus AI hoping to accomplish?
With Nexus AI, the DWP can quickly test concepts, run small pilots, and scale up the successful ones. Essentially, the department can ask IBM to run a brief experiment (a “tech spike”), scan the tech landscape, or develop a proof of concept that becomes a basic tool to help employees.
The objective is to improve services without reducing human oversight by accelerating tasks, identifying trends in data, and minimizing busywork. The DWP’s own terminology leans heavily on algorithmic ethics: teams must “keep humans in the loop on any decision,” and decisions must be “responsible, secure, and firmly human-centered.” This implies that the final call on anything touching a person’s finances, benefits, or next steps should be made by an adviser rather than a model.
This type of precaution matters for a department that handles millions of claims.
Why are civil liberties organizations worried?
Big Brother Watch warned in July that biased systems in the benefits process profile millions of people each year. The group raised concerns about new tools in development and called attention to a Universal Credit “Advances” machine learning model that it says is “riddled with algorithmic bias.”
“The DWP is hiding behind a wall of secrecy and refuses to disclose key information that would allow affected individuals and the public to understand how automation is used to affect their lives, and the risks of bias and to privacy involved,” Jake Hurfurt said. For him, the rollout is alarming.
Transparency, equity, and the ability to challenge a judgment are why algorithmic ethics matter. The DWP insists that adoption will be “value-led, responsible, secure, and firmly human-centered,” stating that the goal is to “make people’s lives easier and build a smarter, more efficient state.”
Reducing risks with good practice
The line between positive and negative effects is very thin. But when used properly, AI can catch basic mistakes, expedite paperwork, and free up staff members so they can concentrate on more complicated issues.
The fundamentals of good practice are plain-language notices when AI is employed, human review of significant decisions, simple appeal procedures, and frequent, independent bias testing. Proving the system is fair also requires evaluating its effects on every group.
The Register pointed out that there are still few specifics about the projects. If IBM and the DWP get this right, Nexus AI could make services faster and clearer without turning humans into data points.