Australia looks set for another robo-debt ordeal: automated data matching will reportedly be used to audit childcare rebates, which could leave families with large, unexpected debt notices.
Federal MP Amanda Rishworth raised concerns over the weekend, after the government reportedly confirmed that the Australian Taxation Office (ATO) will use data matching for the audits.
The Government has confirmed it will use automated ATO data matching to enforce their complex and onerous childcare subsidy system, that could leave families with large unexpected debt notices #robodebt #auspol https://t.co/Us7hZAqLZx
— Amanda Rishworth MP (@AmandaRishworth) June 29, 2019
Government agencies increasingly use automated tools to make or facilitate decisions that affect citizens’ lives, but it’s not always appropriate for important decisions to be made by a computer.
In the European Union, the General Data Protection Regulation (GDPR) prohibits certain types of decisions from being solely automated. It also creates rights for individuals who are affected by automated processing.
We need similar safeguards in Australia for high-stakes automated decisions made by government agencies.
The rise of robotic decisions
The trend toward automation of government processes is accelerating in line with the government’s commitment to digital transformation.
Automated tools are now used to make or facilitate decisions in a range of government agencies, including decisions about welfare, tax, health, visas and veterans’ affairs. Centrelink’s employment income confirmation system, known as “robo-debt”, is a high-profile example of what can go wrong with automated decision making.
Automation can improve the consistency and efficiency of government processes. But if there is bias or error in the computer program or data set, a flawed decision-making logic will be applied systematically, meaning large numbers of people could be affected.
Guidelines aren’t enforceable
The government has previously published guidelines on automated government decision making, including the Best Practice Principles in 2004 and the Better Practice Guide in 2007. Both documents provide important advice on designing automated systems that align with the values of public law.
But the recommendations in these reports aren’t enforceable. They also fail to create legal protections for those affected by automated decisions.
In May, the government opened public consultation on a proposed artificial intelligence (AI) ethics framework for Australia. The draft highlighted the need for updated ethical principles to govern new AI technologies, and recommended a range of tools for improving the design of AI systems, including impact and risk assessments.
But, again, these recommendations will not be enforceable, even if they are included in the final framework. The current draft stops short of restricting the use of AI for certain types of decisions.
A new legal framework is needed
In contrast to Australia’s non-binding approach, the GDPR’s legislative controls on data protection and automated decision making offer an example of best practice.
Article 22 of the GDPR is of particular interest for Australia. Unless specified exemptions apply, it prohibits the use of solely automated processing for decisions that produce legal or other significant effects for individuals.
To fall outside this prohibition, a decision must include meaningful human involvement and oversight. Having a human merely “rubber stamp” an automated output is not enough.
Similar protections are needed in Australia, particularly for government decisions that affect individual rights and interests. Such safeguards would limit the types of government processes that can be fully automated.
“Robo-debt” would require meaningful human involvement under the GDPR
Let’s take a closer look at “robo-debt” to see how a prohibition on solely automated decision making might work.
The robo-debt system uses an automated data-matching and assessment process to raise welfare debts against people whom the system flags as having been overpaid. Someone who receives a debt discrepancy notice can respond by giving income evidence to Centrelink. If no information is provided, an algorithm generates a fortnightly income figure by averaging annual income data from the ATO.
Of course, many welfare recipients have variable income as they are engaged in casual, part-time or seasonal work. It’s not surprising that the reliance on averaged data has led to a high number of reported errors. Receiving incorrect robo-debt notices has contributed to stress, anxiety and depression for many people.
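To see why averaging misfires for variable earners, consider a toy sketch of the logic. All figures, thresholds and payment rates below are invented for illustration; they are not Centrelink’s actual rules, and the real system is far more complex.

```python
# Toy illustration of how averaging annual ATO income across
# fortnights can manufacture a phantom "overpayment" for a
# seasonal worker. All figures here are invented for the example.

FORTNIGHTS = 26

# Suppose someone earned $26,000, all of it in 6 fortnights of
# seasonal work, and correctly received benefits in the other 20
# fortnights when their income was zero.
actual_income = [0] * 20 + [26_000 / 6] * 6

# The averaging approach spreads the annual ATO figure evenly:
# $1,000 in every fortnight, including the ones with no work.
averaged_income = [26_000 / FORTNIGHTS] * FORTNIGHTS

def benefit(fortnight_income, threshold=500, rate=400):
    """Toy benefit rule: full payment below the income threshold,
    nothing above it (real rules taper, but the point stands)."""
    return rate if fortnight_income <= threshold else 0

actual_entitlement = sum(benefit(i) for i in actual_income)      # 20 x $400
averaged_entitlement = sum(benefit(i) for i in averaged_income)  # $0

# Averaging wrongly implies the person was never entitled to any
# payment, turning their entire legitimate benefit into a "debt".
phantom_debt = actual_entitlement - averaged_entitlement
print(phantom_debt)  # 8000
```

Under the averaged figures, the person appears to earn too much in every fortnight, so the system treats all past payments as overpayments, even though the real fortnight-by-fortnight record shows they were entitled to every dollar.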
One former member of Australia’s government review tribunal has described the system as a form of “extortion”.
This morning a friend told me she has just rcvd notification of debt. She is in major panic. Disabled and lives from day to day. Only a fur baby to keep her company. So we’ll try to help her. Disgusting. Fascist. Uncaring. Immoral. https://t.co/LFcCJF4MhC
— MansMan 🏳️🌈 (@MansMan31660180) July 1, 2019
If Australia had GDPR-type protections, meaningful human involvement would be required before an automated debt notice was sent. Manual review by human decision-makers is important to ensure that a welfare debt is in fact owed.
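One way to picture such a safeguard is as a gate in the processing pipeline: automated data matching may flag a discrepancy, but a notice issues only after a substantive human decision. The sketch below is purely hypothetical; the names and structures are invented, and real GDPR compliance turns on legal analysis, not a code check.

```python
# Hypothetical sketch of a GDPR-style human-in-the-loop gate.
# All names and structures are invented for illustration.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Discrepancy:
    person_id: str
    ato_annual_income: float
    declared_income: float

@dataclass
class ReviewDecision:
    reviewer_id: str
    debt_confirmed: bool
    reason: str

def issue_debt_notice(flag: Discrepancy,
                      review: Optional[ReviewDecision]) -> bool:
    """A notice goes out only after a substantive human decision.
    A missing review, or an automated 'rubber stamp', is not enough."""
    if review is None:
        return False  # solely automated: prohibited
    if review.reviewer_id == "system":
        return False  # automated rubber stamp doesn't count
    return review.debt_confirmed

# The automated match can only *flag*; a human must confirm.
flag = Discrepancy("p1", ato_annual_income=26_000, declared_income=24_000)
human_review = ReviewDecision("officer-42", True, "payslips checked")
```

The design point is that the automated system’s output is an input to a human decision, not a decision in itself, which is what distinguishes meaningful oversight from a rubber stamp.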
There should also be restrictions on fully automating other high-stakes decisions by government agencies. Decisions about visas and tax debts, for example, ought to be overseen by humans.
The private sector needs regulating too
Automated decisions made by private bodies that have significant impacts on individuals require legal safeguards too. Such protections are already included under the GDPR.
It’s a big day today: I had a scientific manuscript rejected by a robot. Thread. The bot detected ❝a high level of textual overlap with previous literature❞. In other words, plagiarism. 1/6
— Jean-François Bonnefon (@JFBonnefon) June 18, 2019
Similarly, in the United States, a bill for an Algorithmic Accountability Act has been proposed. If passed, it would require certain companies that use “high-risk automated decision systems” to conduct algorithmic impact assessments.
Australia’s non-binding guidance on automated decision making is a step in the right direction, but it needs to be bolstered by legislation that restricts the types of decisions that can be fully automated. This is particularly important for government decisions with serious consequences for individuals, like robo-debt and auditing of childcare rebates.