Beneath the tragedy and political schadenfreude, the Robodebt fiasco was a failure to adequately regulate AI; the industry seems in no mood to fix that.
Robodebt has become a byword for government incompetence and intransigence, and with good reason; the failures of the system resulted in massive injustice, and at its worst, may have driven people to suicide. At its heart, however, it was simply an effort to use software to save costs and improve government debt collection.
That cost saving was largely achieved by removing the human element from the decision-making process – automating the decision as to whether or not a debt notice was issued, based on a mathematical averaging of annual income and data-matching techniques. It was, as we now know, a disaster; but was it just the first of many?
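The averaging flaw is easy to see in miniature. The sketch below is hypothetical – all figures are invented and this is not the actual Robodebt calculation, which was considerably more involved – but it shows how smearing an annual income figure evenly across the year invents earnings in fortnights where a person genuinely earned nothing:

```python
# Hypothetical sketch of income averaging, the core flaw in Robodebt.
# Figures are illustrative only; the real system was far more complex.
FORTNIGHTS_PER_YEAR = 26

def averaged_fortnightly_income(annual_income: float) -> float:
    """Smear one annual income figure evenly across every fortnight."""
    return annual_income / FORTNIGHTS_PER_YEAR

# A casual worker earns $1,300 a fortnight for 10 fortnights, then
# nothing for 16 fortnights while legitimately receiving benefits.
actual = [1300.0] * 10 + [0.0] * 16
annual = sum(actual)                      # 13000.0, as reported to tax

avg = averaged_fortnightly_income(annual) # 500.0 per fortnight

# Averaging attributes $500 of "income" to every one of the 16
# fortnights in which the person actually earned nothing --
# the raw material for a phantom overpayment debt.
phantom_fortnights = sum(1 for earned in actual if earned == 0.0)
print(avg, phantom_fortnights)  # 500.0 16
```

The point of the sketch: the averaged figure is arithmetically correct as an annual total, yet wrong for almost every individual fortnight – which is exactly the period the entitlement rules operate on.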
One of the great challenges facing society, when it comes to Artificial Intelligence, is how to regulate it. Some of the most powerful aspects of the AI revolution are how quickly it moves and how innovative it is – both of which rely on the field being free to evolve and develop with little restriction. That said, the Robodebt failure is a perfect example of what can happen when this technology lacks human oversight and regulation.
How we approach the question of regulation in the AI field will probably determine how society itself evolves; we cannot afford to drop the ball on this. The best solutions will likely be achieved by regulators working with the tech industry, but is the tech industry on board with that?
In an interview with Time Magazine1 last year, Microsoft CEO Satya Nadella seemed to feel that regulation wasn't really his issue, and neither was responsibility. Asked about slowing down the training of AI, he noted that, "…ultimately it's for the regulators and the governments involved to make these decisions."
Pushed on the need for caution in the evolving AI world, Nadella said: "…trying to say, 'now is the time to stop' doesn't seem the right approach." Indeed, at every turn in the interview, he sidestepped questions of regulation, fixated on growth and the power of AI, and insisted that regulation really had nothing to do with him. This should worry us all, greatly.
This is part of a larger narrative in the AI field, to the effect that AI is developing its own intelligence, and thus its creators cannot be held to account for what it does. In other words, yet again, somebody else's problem: Dr Frankenstein brought the monster to life, but you can't blame him for what it did.
Robodebt showed us that automated decision-making can have real-world consequences – disastrous ones. It wasn't the first time a computer program has run roughshod over human rights, and it won't be the last.
What we need to do now is get the tech industry into the tent and collaborating on regulation, rather than shrugging its shoulders and looking the other way. AI will likely give great power (and even greater wealth) to the tech industry; we need to make sure that comes with great responsibility. We have, after all, just seen what happens when it doesn't.
Footnotes
1. Time, Vol. 201, Nos. 21–22, 2023