The Robodebt Royal Commission

The following post explores the relative egregiousness of the robodebt scandal, the Royal Commission report’s structure and contents, and the earliest stage at which the capability of creating something like robodebt arose. It also considers automated decision-making and the extent to which the Commission omits to consider the actual technical workings of robodebt.

The post goes on to consider the problem of subject matter experts, as well as the problem of the ‘proof point’ solution for robodebt, neither of which is considered in the Commission’s report. Next, the post notes the way in which the Commission’s report does not take a strict or legalistic approach to the unlawfulness of robodebts — at least not beyond anything already said in the Federal Court of Australia.

Finally, the post briefly touches on the AAT firing scandals that the Report essentially omits. Despite all this, the article concludes with some compliments for what is obviously an exceptional output from this Royal Commission — one that chronicles an unprecedented Australian governance failure and recommends 57 ways it might be redressed.

Yesterday, the long-awaited report of the Robodebt Royal Commission (‘the Report’) was tabled in Parliament. Including its title pages and other matter, it runs to 1,039 pages and comprises three volumes. Before it was released, I did a short spot on ABC News Radio (the AM program), which is available here.

One snippet of the long recording that was included had me saying that robodebt was

the most scandalous social security governance problem in the twenty-first century and likely all of Australian history.

On reflection, I should have said what I have said many times before: that robodebt is the most scandalous welfare governance problem — or better, failure — in modern history (and perhaps even all of human history).

The most scandalous welfare governance problem in modern history?

The basis for such an audacious claim lies in the following considerations:

  • the scope of the program, which affected more than 500,000 people;
  • the financial quantum associated with the program, which could be exemplified in the figure of $1.763 billion AUD — the amount that had been tallied as repayable to government (as robodebts) by the date the scheme was ended following the class action;
  • the minimum actual monetary repayments by government to citizens, which totalled at least $855 million AUD. That figure is based on the settlement sum of $112 million AUD approved by the Federal Court of Australia in the class action, less the $8.413 million AUD in legal costs payable to Gordon Legal and their agents, plus the repayments promised and delivered outside the class action, of $751m AUD (more on which below);
  • the largely unanalysed but clearly very significant toll on humans, including recipients (some of whom died by suicide, and some two thousand of whom died of unknown causes while recipients of robodebts) and others involved (eg, family, extended family, and Centrelink staff). As the Commission’s submissions show, there can be no doubt that the anxiety, stress, trauma and mental despair endured by those affected has had and will continue to have a scarring effect on their mental lives, especially in relation to their trust in others, and more especially in relation to their trust in governments and other external support systems.

But even the above considerations seem to understate the case, especially in respect of the true financial situation of the government. (They also understate the mental health impacts; however, I will not deal with that in any substantial way in this post.) And, with respect, even the Royal Commission’s chapter on economic costs, chapter 14 (pp 400-419), does not seem to give a very clear picture of the government’s financial liabilities with respect to its citizens. That figure is best attained by reviewing the settlement orders — an extract of which is below:

As can be seen above, the government was ordered to pay $112m AUD to the class members, which, less legal costs, amounts to about $104m AUD. But, as is noted at paragraph 9, this settlement payment was on top of the $751m AUD the government had already received or recovered from welfare recipients under the robodebt scheme and which it had already promised to refund independently of the class action. And so, on that accounting, the total amount of actual repayments from government to citizens would be about $855m AUD. On this figure alone, a question may be posed. In economic history, has a government ever been legally obligated (and ordered) to pay a higher sum than $855m AUD to its citizens for an unlawful scheme it had implemented? I am yet to find anything remotely close to this figure.
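For clarity, the arithmetic behind that $855m figure can be set out as a back-of-envelope tally, using only the figures quoted above:

```python
# Back-of-envelope tally of actual repayments from government to citizens,
# using only the figures discussed above (all amounts in millions of AUD).

settlement = 112.0              # settlement sum approved in the class action
legal_costs = 8.413             # legal costs payable to Gordon Legal and their agents
refunds_outside_action = 751.0  # repayments promised and delivered outside the class action

net_settlement = settlement - legal_costs                 # ~103.6, ie about $104m
total_repayments = net_settlement + refunds_outside_action

print(f"Net settlement to class members: ${net_settlement:.1f}m")    # $103.6m
print(f"Total actual repayments:         ${total_repayments:.1f}m")  # $854.6m, ie about $855m
```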

But there is more to say here. For if one takes the view, as I do, that government economies are constantly working around a nominal ‘budget’ figure of debts and liabilities, which is probably best instantiated by the seemingly magical CRF (Consolidated Revenue Fund) number (on which see my slides here), then it is possible to speak of the total government ‘liability’ to its citizens as even greater still. Put more simply, the $1.763 billion AUD comprised a figure that was based on what the government had already effectively calculated as an asset in the positive side of the ledger. It had sent out debt notices to the value of $1.763 billion. That effectively means that the government may have somewhere recorded, at least in draft form, a mark-up in its CRF ledger to the tune of some $1.763 billion AUD. When it had to write off that amount, the government had to effectively subtract that asset from its budgetary accounts. It was money received and lost, at least on paper.

Sure, one might say that the government may only have had to pay, in real money terms, the $751m AUD that it had already promised to repay into the private sector (that is, to private individuals) — citizens who had already paid their alleged (but false) debts to the Commonwealth. However, the Commonwealth did not simply ‘lose’ this money, which constituted the ‘real’ ill-gotten gains of the venture. No, the government also lost what may have been ‘projected’ or even ‘recorded’ assets — money that was recorded as an asset of the government for budgetary purposes at the time the debt notices were issued and went uncontested.

Of course, whether or not these debt notices were marked up in the actual budgets for the relevant financial years appears to have gone unstudied in both the Commission’s Report and in the Federal Court proceedings. Similarly, the precise way in which the government recorded the ‘savings’ it was making along the way is not precisely explained in the Report or elsewhere. (Here I refer not to the initially projected budgetary savings of more than $4b AUD, but to the ‘to date’ figures of achieved savings as the program continued to roll out.) But for the purposes of this analysis, it can probably be said that the Commonwealth had to effectively ‘repay’ — at least in budgetary terms — the total amount of $1.763b AUD to its citizens. Now, if one adds to this amount the $112 million AUD that was ordered to be paid to the class members in the Prygodicz class settlement, the nominal total of liabilities that the Commonwealth owed to its citizens under the robodebt scheme comes to some $1.875 billion AUD.

Interestingly, the Report offers a few tables to give insight into how many of these debts were referred to debt collectors, and how many non-robodebts were subject to the same external agency tactic. However, even this data is ambiguous, because DHS did not keep track (unsurprisingly) of which debts were subject to income averaging and which were not.

In any case, I am unaware of any figure in the historical record that is even remotely comparable to the total figure of $1.875b AUD when it comes to liabilities owed by a sovereign government to its citizens — let alone liabilities owed because they arose from a policy blunder, an error of governance, law, and fact (ie, mathematics). I have tried to research the question, but the academic literature does not seem to have produced anything like a catalogue of government liabilities that can be easily referred to. This is a research question I would like to continue pursuing. But if one compares this figure to those associated with typical criminal frauds, there is really no comparison.

For instance, in March 2018, the Commonwealth Fraud Prevention Centre issued a statement regarding ‘Australia’s largest ever prosecuted tax fraud.’ The quantum of the gains is variously said to be $135 million AUD and $63 million AUD. More recently, the Plutus Payroll fraud, often described as ‘one of Australia’s biggest ever tax frauds,’ is said to have had a value of about $105 million AUD. Suffice to say that these ‘biggest ever’ frauds bear very little comparison to the scale of the moneys involved in the robodebt scandal.

Of course, a better analysis would compare the robodebt figures to other government fraud scandals; and it appears the only real comparator in recent times is the Dutch government’s very similar scandal, in which it wrongly accused more than 20,000 families of child benefit overpayments. Now this government scandal was admittedly very, very big. The payments to citizens were in the order of about €500m (£450m) in compensation, about €30,000 for each family; and this amounts to about $821.6m AUD. But it is notable that these payments by the Dutch government to its citizens appear to have involved ‘compensation’; they were not just ‘repayments.’

By contrast, the Australian class action did not feature any real compensation; there was only ‘ersatz interest’ accruing for the time that had elapsed during which the government had held the robodebtors’ money (similar to bank interest). And so even though the Dutch example is clearly quite significant, it is both less than half the size of the Australian government’s liabilities (calculated on the $1.8b AUD figure) and does not consist merely of repayments. Also notable, of course, was that the Dutch government actually resigned over this social security scandal. And, to be fair, the Dutch scandal also involved institutional racism — a most pernicious aspect of that governmental failure. And yet, it is still possible to consider the robodebt scandal to have been more scandalous still — perhaps precisely because the government or responsible ministers did not accept that it was unlawful, much less resign while in government (à la the Dutch).

The structure of the Commission’s Report

The first volume comprises two sections, which together contain an introduction and nine chapters. The first of these sections offers a timeline and overview of the scheme, describes the phases of robodebt, and then charts a detailed legal and historical background to its various iterations (chapter 1). The second section, which runs from chapters 2 to 9, offers a detailed chronology of the scheme, from its conceptual development (stretching back to 2010-11, as discussed below) through to its implementation, and finally to its finalisation: the ‘end of robodebt’ in 2019 (chapter 9).

In contrast to this chronological account, the second volume focuses on the effects of the scheme, and comprises five sections (sections 3 to 7) and 14 chapters (chapters 10 through 23). Section 3 analyses the effects on individuals and vulnerable people. But it also examines other matters: the roles played by advocacy groups and legal services; the effects on human services staff; the economic effects — read ‘costs’; and the failure in the budget process (ie, the new policy proposal or NPP) that allowed the scheme to pass through the process of approval, including through the Expenditure Review Committee (see esp p 88ff).

Section 4 focuses on automation and data matching, identifying many serious problems. For instance, this section takes up one of the first things that enabled robodebt to be conceived — a development that occurred well in advance of the conventional robodebt timeline beginning in 2015. This was the strategic decision, taken in 2012, to preserve historic PAYG data. Before 2012, PAYG data had been destroyed after a time, which meant that historical debt recovery was impossible. I will discuss this aspect in more detail below.

Section 5 deals with the debts themselves — how they were raised, and how they were collected. Section 6 deals with the so-called ‘checks and balances’ around robodebt — those agencies, like the legal advisors within and without the departments, the AAT, the Commonwealth Ombudsman and the OAIC (Office of the Australian Information Commissioner), that should have done more (eg, General Counsel of DHS), should have been allowed to do more or to have had more influence (eg, the AAT, or at least some members of it), or should have had more independence from the government departments with whom they worked (eg, the Ombudsman and OAIC). One of the notable aspects of the Commission’s hearings was that, in the course of evidence gathering, the Commonwealth Ombudsman adopted the legal position that neither the Ombudsman himself nor the Ombudsman’s officers, past or present, could be made legally subject to the Royal Commission’s coercive powers. As the Report notes at page 573,

The Ombudsman took the position that current and former officers could not be compelled to give evidence to the Commission, pursuant to s 35(8) of the Ombudsman Act. The Commission did not entirely accept this view but agreed to proceed on the basis that the Ombudsman would assist the Commission voluntarily. Current and former officers gave evidence by statement to the Commission, and in person at its hearings.

It is not possible to determine whether the Ombudsman’s failure to accept compellability affected the number of witnesses who gave evidence, or whether the evidence given in writing by those who did volunteer their evidence was different to what would have been given if subject to legal compellability. (For instance, did the voluntary written evidence answer all the questions put by the Commission?)

Lastly, section 7 (chapter 23) is devoted to improving the Australian Public Service. Over 8 subsections, it makes 8 recommendations:

  • an immediate full review of the existing structure of the social services portfolio and Services Australia (rec 23.1);
  • the creation of an induction program for all staff (rec 23.2);
  • the introduction of schemes that are customer-centric with testing to put recipients at the forefront (rec 23.3);
  • the re-establishment of two bodies disbanded around the robodebt years: first, the Administrative Review Council — a creator of training resources on administrative decision-making that was defunded in 2015 (rec 23.4); and second, subject to feasibility, a ‘college’ or training facility, like the one established in 2001 and later disbanded, to train and upskill Centrelink staff (rec 23.5);
  • a requirement that senior executives must spend time in a front-line service delivery role (rec 23.6);
  • the introduction of a statutory power to enable the APS Commissioner to hold an inquiry into the conduct of any Agency Head (rec 23.7); and, finally
  • the development of standards for documenting important decisions and discussions within government departments (rec 23.8).

Following this chapter, the Report includes its closing observations, which are quite pellucidly opprobrious and forceful, and which include a paraphrase of Shakespeare’s Hamlet. (At p 655, the Commissioner writes this of robodebt: ‘If ever there were a case of giving an unproportion’d thought his act, this was it.’)

And following the closing observations, there is, in volume 3, an appendix. It includes a glossary; a ‘Dramatis Personae’; a section summarising all the legal advices circulated; the three academic articles — one by Peter Hanks KC and two by Emeritus Prof Terry Carney — that shaped the legal advices and proceedings that followed; and, finally, the Solicitor-General’s advice, which was notoriously released some 48 days before the government decided to settle the Amato matter.

This appendix also lists the submissions and relates some data about them — where they came from, on whose behalf they were made, and so on. It also offers a review of the 76-odd previously unpublished AAT1 decisions identified as having concerned robodebts. This is really quite a treasure trove of historical information, and I would be keen to study it in more detail. Finally, it contains a staff list.

The historical gold mines first discovered in 2012

Before robodebt, there was not a great deal of looking back in time to find stores of unrepaid debts. And while the Report does not appear to give more details on this historical situation, it seems likely that a general policy of ‘finality’ and fairness meant that governments had previously decided not to seek to recover debts more than something like five years old if they had not been discovered or actioned earlier. Equally, previous governments may not even have thought to do so, whether for fairness or for practical or other reasons. After all, individuals generally only have to keep their tax records for five years, other than in exceptional cases. And similarly, it appears that many businesses (ie, employers) would probably only have been legally obligated to keep the taxation records for their employees for something like five years too, or maybe a little more in certain circumstances.

Information on the ATO website about how long employers are expected to hold tax information. See here.

These would include records of what their employees were paid, how much tax was withheld, and how many hours they worked. In consequence, it is arguably incongruous for DHS or Services Australia to ask recipients to verify income that they are alleged to have earnt more than five years ago — that is, in circumstances where neither the recipient nor the employer was required to preserve any tax records that could be used to verify the facticity of any such allegation.

The preservation of this data, so the Report emphasises, resulted in what were regarded as ‘gold mines’ from years previous (p 457). This historical dataset allowed the robodebt scheme not only to access data previously unavailable, but to project recoupments in a way that was irresistible to government ministers seeking to reduce the amount of money circulating in the private sector: ie, attaining a ‘budget surplus.’ The Report does not, however, dwell on the significance of this revelation: namely, that one big part of the reason robodebt was possible at all was the change to internal guidelines introduced in 2012.

The decision to preserve historical PAYG records within the social security system was facilitated by revisions to the Guidelines on Data Matching in Australian Government Administration that occurred in 2012 as a part of the ‘Enhanced Capability For Centrelink To Detect And Respond To Emerging Fraud Risks’ budget measure announced in the 2010-11 Budget (see p 457). And once this historical data was migrated from DSS to DHS, it was possible for DHS employees (like Ben Lumley) tasked with formulating savings measures to say to their supervisors that there were something like 1 million people who had debts. In other words, using this non-destroyed PAYG data, these staff members could say that there were 1 million historical ‘matches’ with their income averaging debt criteria: people whose incomes, when the averaging or smoothing criteria were run over this expanded database, showed discrepancies (presumed to be overpayments or debts) as against the ATO PAYG number on their corresponding annual tax return.
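To make the mechanics of this ‘matching’ concrete, the sketch below illustrates how smoothing an annual PAYG total across fortnights manufactures apparent discrepancies for a casual worker whose fortnightly declarations were perfectly accurate. The figures and logic are hypothetical, since the department’s actual code was never put into evidence:

```python
# Illustrative sketch of the income-averaging defect. The figures and logic
# are hypothetical; Centrelink's actual matching code was never put into evidence.

FORTNIGHTS = 26

# A casual worker who earned ~$20,800 for the year, all of it in six
# fortnights of intensive work, and who (correctly) declared zero income
# in the remaining twenty fortnights while receiving benefits.
declared_fortnightly_income = [3466.67] * 6 + [0.0] * 20
annual_payg_total = sum(declared_fortnightly_income)  # what the ATO's PAYG data shows

# The averaging step: the annual total is smoothed evenly across the year.
averaged_fortnightly = annual_payg_total / FORTNIGHTS  # ~$800 every fortnight

# Any fortnight where the averaged figure exceeds the declared figure is
# flagged as an apparent overpayment, even though every declaration was true.
flagged = sum(
    1 for declared in declared_fortnightly_income
    if averaged_fortnightly > declared
)

print(f"Averaged income per fortnight: ${averaged_fortnightly:.2f}")
print(f"Fortnights flagged as discrepant: {flagged} of {FORTNIGHTS}")  # 20 of 26
```

On facts like these, the recipient owes nothing; the ‘discrepancy’ is an artefact of the averaging alone.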

ADM

In the next part of section 4, which is chapter 17, the Report focuses on automated decision-making (ADM) — a term that has come to dominate discussions of robodebt. But this term is contested, or at least questioned, by people like me and K from #NotMyDebtSupport, whose work was acknowledged in the hearings by #NotMyDebt’s Lyndsay Jackson and, in passing, by the Commissioner herself. Similarly, Asher Wolf (activist and a founder of #NotMyDebt) has noted that ‘Robodebt wasn’t AI.’ As she writes, ‘It wasn’t a project devoid of human input. On the contrary, it was a program powered by shonky spreadsheets and neoliberal ideology.’

I think people like me and K question the use of expressions like ADM (or AI) — speaking generally, or at least for myself — because when one examines the documents that comprise the calculators through which robodebts were raised, one soon realises that the level of sophistication is too low to be aptly described by the term ADM, which in most contexts connotes an innovative and sophisticated process. At one level, of course, this is just a matter of nomenclatural preference. If one is happy to call Excel spreadsheets, or something like a cascading style sheet (CSS), an ADM system, then perhaps the term would be suitable here too.

In any event, chapter 17 examines the degree to which ADM was used, focusing not just on the calculation methods, but on the way in which the population of robodebtors was identified, how PAYG data matching was done, and how income data was matched, ‘verified’ and finally ‘notified’ to agencies, external organisations (such as debt collectors) and recipients. The penultimate step of the process, of course, is far from ‘automated’ in a complete sense. While the ‘verification’ step involved the automated issuance of a letter to the accused debtor (often to an old address — unsurprisingly, when one considers there was no verification of the details), it also requested the debtor to manually ‘verify’ the debt themselves. As has been remarked many times, robodebt outsourced the manual verification of any alleged debt to the recipient. Sometimes called the ‘reversal of the onus of proof,’ it was also a reversal of the demanding labour of administrative diligence.

For this section, the Commission relied heavily on the expertise of a consultant, Dr Elea Wurth of Deloitte Risk Advisory, who provided detailed process maps of the way in which the different iterations of the scheme were rolled out, in terms of their point-by-point operations. As the Report notes, these maps provide ‘the best achievable understanding of how the Scheme operated on a technical level’ (p 471).

But on this point, my personal view is that the Commission missed an important aspect of the way debts were calculated within Centrelink’s internal calculators. In no part of the chapter on ADM does the Commission discuss the way in which the actual debt calculators functioned. These tools included the ADEX Debt Schedule, the EANS report, the MultiCal tool and the net-to-gross earnings calculator — the last of which became central to the ‘improvements’ introduced by Minister Stuart Robert. This is a significant omission, and the lack of evidence heard by the Commission about the operations of these tools appears to have deprived it of the opportunity to make substantive findings about whether Centrelink should be permitted to create its own tools that are essentially unregulated by law.

Still, the operations of these tools were indirectly described, perhaps unsurprisingly, not by consultant experts, but by internal experts — that is, people who had worked as staff within Centrelink for decades. The best evidence about the reality of the ADM processes, in my view, came from Colleen Taylor; and, to the Commission’s credit, it is she who is quoted in the ADM chapter when it comes to the Commission’s analysis of the ‘effects of automation’ at p 477.

Taylor allowed the Commission to deduce that the ADEX Debt Schedules and other tools were working in such a way that they could not identify that two employers with almost exactly the same names were likely to be the same employer. For instance, if an employer was registered under two names due to administrative error, even by the recipient — for instance, ‘Jim’s Mowing Pty Ltd’ and simply ‘Jim’s Mowing’ — then the calculator tools would produce two sets of income for the same person. In other words, the internal tools duplicated or doubled (and apparently sometimes tripled) the incomes of certain recipients, thus producing debt calculations based not only on income averaging, but on a duplicated dataset of income averaged ATO PAYG data.
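A minimal sketch of the kind of failure Taylor described might look as follows. The records, names and matching logic are hypothetical (the Commission heard no evidence of the tools’ internals), but the sketch shows how keying on raw employer-name strings doubles an income that a trivial normalisation step would have kept whole:

```python
# Hypothetical illustration of the duplication defect described in evidence:
# a matcher keyed on raw employer-name strings treats minor variants of one
# employer as two employers, doubling the recipient's annual income.

payg_records = [
    {"employer": "Jim's Mowing Pty Ltd", "annual_income": 15000.0},
    {"employer": "Jim's Mowing",         "annual_income": 15000.0},  # same employer, variant name
]

# Naive approach: treat each distinct string as a distinct employer.
naive_total = sum(r["annual_income"] for r in payg_records)
print(f"Naive total income:  ${naive_total:,.0f}")  # $30,000 -- doubled

def normalise(name: str) -> str:
    """Strip case and common company suffixes so near-identical names collide."""
    name = name.lower().strip()
    for suffix in (" pty ltd", " pty. ltd.", " ltd"):
        name = name.removesuffix(suffix)
    return name

# Deduplicated approach: keep one income figure per normalised employer name.
deduped: dict[str, float] = {}
for r in payg_records:
    key = normalise(r["employer"])
    deduped[key] = max(deduped.get(key, 0.0), r["annual_income"])

print(f"Deduplicated total:  ${sum(deduped.values()):,.0f}")  # $15,000
```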

But a narrower, better organised and even, with respect to the Commission, de minimis forensic analysis of the internal Centrelink tools would have allowed the Commissioner to make richer findings about how the internal calculators made this duplication error (and many other kinds of errors), as well as how often they did so and why. In other words, the question of automation or ADM was not thoroughly or even sufficiently answered by the Commission; and, regrettably, it is a small yet substantive victory for the government that the extent of these problems was not revealed. For, as I will show briefly below (and have argued before), there is still a very significant problem that remains unremedied in relation to robodebts. And it seems to me that robodebts based on at least some form of income averaging may still be being raised even today.

In any case, the Commission, perhaps because it relied on the expertise of an external expert for its analysis of the ADM question, never appears to have gotten to the bottom of what went wrong in relation to the common robodebt defects. Of course, the Commission was aware of these problems, and they were alluded to many times throughout the hearings; however, it was clearly not possible for the Commission to conduct its own investigations directly. The analysis could only be as good as the evidence; and, as noted above, in terms of the ADM question, the analysis of Dr Wurth, which focused on the overall systems of the iterations of robodebt (and not the internal tools) was the best achievable analysis of automation the Commission had.

In my view, the biggest missed opportunity arising from this omission of the ground-level ADM effected through Centrelink’s internal tools was that the Commission could not draw out what I assert is a crucial administrative law problem in twenty-first century governance operations. This is the problem of unlawfulness associated with the creation of systems and tools that are not process-checked and operations-checked against the legislative requirements — checked, put simply, against the law.

Administrative lawyers and academics spend a lot of time dealing with so-called soft law instruments as they appear in government departments: guidelines, guidances, practice notes, and so on. One such instrument in welfare was the Social Security Guide — essentially a user’s manual for those who have to apply the social security legislation but cannot do so by direct reference to the legal statute. But these instruments, problematic though they are, at least expressly attempt to refer back to the legal statutes; and they are generally developed and refined by lawyers so as to achieve maximum consistency and comportment with the actual legal rules reposed in the statutes.

By contrast, the internal tools appear to have no referability, and make no attempt to refer, to the legal statutes that would authorise their operations. This, in reality, was one of the fundamental problems of robodebt. The problem of this legislative ‘black hole’ could probably not be better exemplified than in the exchange below — an exchange in which the late Senator Kimberley Kitching asks Ms Musolino, and then the then secretary of DHS, Renée Leon, for specific references to provisions in the social security legislation, and even to subsidiary legislation (regulations), said to authorise the scheme. But, of course, no such statutory references could be furnished.

Why can there not be a technical class of lawyers capable of auditing any internal government tool to ensure it comports with the actual provisions of the law? Certainly, in the case of robodebt, there was, and remains, no such class of lawyers. Such a legal team would be able to ensure that any digital tool or app created by a department to administer its functions is consistent with both the spirit and the letter of the provisions of the authorising legislation in respect of the administrative function the tool purports to discharge. In other words, the future of administrative law should be expanded to include considerations not just of ‘soft laws’ but of ‘extended legal instruments.’ Extended legal instruments are those that give effect to the administrative functions of government but are not solely governed by human minds, nor composed merely of a decision-maker applying written rules. These extended legal instruments would include digital tools that have a degree of autonomy, such as calculators whose operations are not given expression in statute.

The irony of this recommendation does not escape me; and I am aware that it already concedes too much to the robodebt scandal, in that it expects another layer of legal oversight to be created in circumstances where tools that are ab initio inconsistent with the prescribed statutory calculators would be audited for consistency with the law, when those tools should probably not exist in the first place. To be sure, any serious legal audit of the tools used in robodebt would likely have found that the tools used to calculate the debts should not be altered or amended but abandoned altogether. And that is because the preferable tools were right there in the legislation. The steps the department should have used to determine whether a debt existed (or not) were already stepped out in the legislation itself, and this was the only ‘tool’ that any administrator would have needed. See, for instance, the detailed statutory instructions provided in the primary legislation at section 1067G of the Social Security Act 1991 (Cth), which enable decision-makers calculating overpayments and debts to determine the entitlements of recipients of Youth Allowance in that pursuit:

The social security legislation is built around operations such as the one above. The statutory calculators are in the primary law; and parliament created these laws to allow the departments to do the work of calculating entitlements and, by extension, the quantum of any overpayments and thus of debts. Sure, additional tools might be needed to make the application of the law more efficient. That, of course, is the rationale for the creation of the ADEX Schedules, the MultiCal, the EANS reports, and so on. It is also probably the rationale for using mathematical calculators as opposed to working out the mathematics ‘manually’ by hand. But this does not mean rules cannot be made around when calculators are permitted, or around which kinds of calculators are permitted, by reference to the overarching legal regime.
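To illustrate the contrast, here is a simplified sketch of the shape of a statutory-style calculation. The structure (a per-fortnight entitlement derived from actual fortnightly income, with an income-free area and a taper) follows the general shape of the Act’s rate calculators; the dollar figures and rates are invented placeholders, not the real parameters of s 1067G:

```python
# Simplified sketch of a statutory-style entitlement calculation. The shape
# (income-free area plus taper, applied per fortnight to ACTUAL income)
# mirrors the Act's rate calculators; every dollar figure and rate below is
# an invented placeholder, not the real parameters of s 1067G.

MAX_RATE = 550.0          # placeholder maximum fortnightly rate
INCOME_FREE_AREA = 450.0  # placeholder fortnightly income-free area
TAPER = 0.60              # placeholder taper: rate reduction per dollar of excess income

def fortnightly_entitlement(actual_income: float) -> float:
    """Entitlement for ONE fortnight, from that fortnight's actual income."""
    excess = max(0.0, actual_income - INCOME_FREE_AREA)
    return max(0.0, MAX_RATE - TAPER * excess)

def fortnightly_overpayment(actual_income: float, amount_paid: float) -> float:
    """A debt component exists only where the amount paid exceeded the
    entitlement calculated from the recipient's actual fortnightly income."""
    return max(0.0, amount_paid - fortnightly_entitlement(actual_income))

# The statutory method takes fortnight-by-fortnight income as its input;
# an annual average is simply not an input the calculator accepts.
print(fortnightly_overpayment(actual_income=0.0, amount_paid=550.0))    # 0.0 -- no debt
print(fortnightly_overpayment(actual_income=800.0, amount_paid=550.0))  # 210.0 -- a real debt
```

The point is that the statute itself already prescribes the calculation; any auxiliary tool can only ever be audited against it.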

Indeed, this very kind of regulatory business is de rigueur — completely standard — in therapeutic goods law, where certain protocols for administering a therapeutic good are expressly approved, while others — those that are unapproved — are permitted only where a suitably qualified, licensed health practitioner makes an independent decision to administer the product in an expressly unapproved (‘off label’) manner. In this respect, one can imagine social security practitioners being required to hold certain licences to perform operations that would be nominally unapproved. There could be a set of ‘authorised’ tools, used in the authorised way, for everyday use by frontline staff (tools that have been audited by a technical legal team); but there could also be a set of bespoke or custom tools built for unusual cases — though only under the supervision of a ‘licensed’ or authorised (and preferably legally qualified) supervisor within the department. But, as it stands, none of the internal tools has been subject to any sort of serious legal audit. And, as I say, the Report of the Commission does not attend to this work either.

And yet, in another way, this argument is not only ironic but indeed an example of magical thinking. That is because it was clear in the hearings (and now too from the Commission’s report) that the chief counsel of DHS was unlikely to have acted on any such technical legal team’s recommended changes to the internal tools — even if they had been audited and such recommendations had been made. As the Commission notes of Ms Musolino’s duty at p 242:

Ms Musolino’s duty as general counsel of DHS was to ensure that appropriate and documented legal advice was provided to DHS executives, including Ms Campbell and Ms Golightly…. The only rational explanation for Ms Musolino’s failure to give that advice is that she knew DHS executives, including Ms Campbell, did not want advice of that nature.

As can be seen from the above, legal advice advancing the position that the scheme’s methods (and, by extension, the internal tools) lacked a statutory source of power or authority — which would have rendered them unlawful or without a basis in law — was never going to be provided to DHS executives in any event.

The problems of automation run far deeper than the Commission was able to discover

One of the most egregious errors that only a handful of commentators and community supporters appear to have identified is Centrelink’s use of what might be called ‘residual income averaging’ in the apparently ‘remedied’ or ‘legal’ form of robodebting that occurred from at least 2019 onwards — certainly before the Amato settlement, perhaps after it, and perhaps even still today. Debts calculated with the use of ATO PAYG data plus bank statements began being sought in 2019, after Minister Robert introduced this ‘additional proof point’ improvement or refinement to ‘legalise’ robodebts. The basic problem of robodebts was that the evidence was insufficient to prove them. Of course, as I will discuss below, there was a formal and ‘deep’ legalistic basis for the unlawfulness of robodebts; but the most basic expression of the unlawfulness was an evidentiary one.

In short, the amounts alleged to have been overpaid to recipients could not be proved to exist when a decision-maker applied the requisite standard of proof in a civil setting — be that standard the one applied in the department itself (in a review by an Authorised Review Officer (ARO) or subject matter expert (SME)); in the AAT at tier 1 (unpublished) or tier 2 (published); or, as it happened in the Masterton and Amato cases, and finally in the class action (Prygodicz v Commonwealth of Australia (No 2) [2021] FCA 634), in the Federal Court of Australia. The solution, at least as the government saw it in 2019, was to attain better evidence to support the income-averaged (or income-averaged-with-adjustments) debt figure. But how could the government do this? With the help of an ‘additional proof point.’

Subject Matter Experts

Before coming to proof points, a quick note about SMEs is in order. The Report contains no analysis of the introduction of so-called subject matter experts (SMEs) during the robodebt period. SMEs are mentioned in the body of the document only twice (and once in the appendix, in the AAT extracts), and then only in passing. And yet, the introduction of SMEs appears to have come about as a result of the dearth of AROs available to review matters in the robodebt years.

With waiting times for a review often said to have stretched to 12 months and longer — including after the settlement, and especially during the COVID-19 workplace mandates — SMEs did the work that AROs could not get to. SMEs do not conduct formal reviews, but instead merely ‘quality check’ the debts. And they were introduced, it seems, to split up the process of internal review — not only, I suspect, to allow reviews to occur more quickly, but also to make the internal review process more administratively burdensome for the recipient. (Or, at the very least, this was the effect.) Alas, the Report does not analyse their introduction or its effects.

The real problem with this omission, though, is the substance of the extramural evidence about SMEs and what that evidence suggests about their importance in the robodebt saga. In October 2019, data was released by Services Australia in Senate Estimates about the number of debts that were varied or set aside under the Online Income Compliance (OIC) Programme, as well as the number of debts that were affirmed. Notably, SMEs were very busy in relation to both categories of action. As the table below shows, SMEs set aside many more debts than AROs in all relevant financial years.

Senate Community Affairs Legislation Committee, Supplementary Budget Estimates 24 October 2019, Answer to Question on Notice, Services Australia, Question reference number: 77 (SQ19-000336): Link.

And yet, by contrast, SMEs also affirmed more debts than AROs in every relevant financial year. I will not make any big claims here about this dataset; however, it appears to me that this data must lead one to the conclusion that many people appealed their robodebts only through the SME process. They did not go further to the ARO, much less to the AAT1 or AAT2. And the SME process was not a formal review but only a ‘quality check.’ My guess would be that the SME process did not test whether the debt was a robodebt or not (ie, whether it used income averaging), but rather only picked up the most glaring errors as ‘quality’ failures, such as where duplication was involved.

Senate Community Affairs Legislation Committee, Supplementary Budget Estimates 24 October 2019, Answer to Question on Notice, Services Australia, Question reference number: 77 (SQ19-000336): Link.

In the table above, for instance, one can see just how many SME decisions affirmed the original debts. Of course, many of these may not have been robodebts. But the figures suggest that the SME process would have been conclusive in many matters, robodebt or otherwise. And yet, as I have argued, the SME process was not very forensic; indeed, it was not governed by the formalities that attended the ‘formal review’ of an ARO.

Questions asked by Senator Rachel Siewert, the unsung hero behind so much of the robodebt movement, were answered on notice following sessions of the 2021 Inquiry Into Centrelink’s Compliance Program Public Hearing (an inquiry held by the Senate Standing Committee on Community Affairs). These answers shed some more light on the SME process. One answer notes that

The current process in Services Australia (the Agency) is that, prior to a referral to an [ARO] review, a quality check of the decision is undertaken by a [SME] — an experienced officer in the Agency who is independent of the original decision. The SME process may result in a customer not seeking an ARO review.

Inquiry Into Centrelink’s Compliance Program Public Hearing 29 March 2021 Question reference number: IQ21-000026

Another answer notes that the data about SMEs was unavailable:

How many people requested an ARO Review of their debt but were advised that Services Australia would need to undertake an explanation of decision, quality check or reassessment by an SME before a formal ARO review could take place?

Answer: The information requested is not readily available. The Agency’s internal review process aims to ensure customers are provided with explanations of decisions and are given opportunities to have reassessments without the requirement to apply for a formal review, although the latter remains available

Inquiry Into Centrelink’s Compliance Program Public Hearing 29 March 2021; Question reference number: IQ21-000027

In the end, the introduction of SMEs as a way to disincentivise appeals, by imposing additional time and process burdens on recipients, remains quite understudied. And yet, even today, there may be many who have repaid inaccurate debts following an unsuccessful ‘appeal’ to an SME — whether robodebts captured by the class action categories or some other form of inaccurate debt.

Proof points will fix it

As noted above, the central problem for the government as it negotiated the settlement in 2019 was that the debts could not be proven to the requisite civil standard without more probative evidence. So it happened that, in 2019, Minister Robert announced that income averaging would continue, except that now it would be effectively permissible because DHS would start to collect an additional ‘proof point.’ (See below an excerpt from the transcript of a 2019 door-stop interview at which Minister Robert announced the ‘additional proof point’ correction.)

What was the nature of this proof point, then? It consisted of the recipient’s bank records for the relevant year. All recipients had to do was supply their bank records — or, as the case may be, all the government had to do was request those records from the recipient’s bank (as was within DHS’s legal powers under social security law) — and the debt raised through the robodebt process would either be legitimised by those records or, if the records served to vary or ‘correct’ the robodebt, be legitimised after being adjusted in accordance with those records.

And so debts calculated in this manner were not subject to the benefits of the class action settlement; nor were they examined, questioned, or even — as far as I can see — raised in the Commission’s Report. To be sure, the Report refers to Minister Robert’s statement about proof points (pp 314 and 315); but it does not actually question whether this truly resolved the robodebt problem. It appears, regrettably, that the Commission did not have in mind to ask whether income averaging could still be applied in circumstances where bank record evidence had been provided. And why would the Commission think to ask this? Presumably, of course, the use of bank records would legitimise the debts. But, as I have long argued, in current known practice, they do not.

I have written a detailed article elsewhere on the way in which the net-to-gross calculator that Centrelink applies to bank record evidence includes a form of ‘residual’ income averaging that makes any debt just as unlawful (in my strongly held view) as any robodebt. Given that I have written that article, I will not go into especially detailed analysis here. Suffice to say that Centrelink needs to convert the net income recorded in bank records (receipts of payments from employers) into a correct, reliable and true gross amount of fortnightly income in order to calculate an entitlement in accordance with the statutory calculators. That is because the entitlement is calculated on the gross amount (the pre-tax payment) rather than the net amount. But because almost all welfare recipients who work are employed on a casual basis and by employers who withhold taxes, almost all deposits in bank accounts are in net amounts. This simply means that Centrelink have had to devise a way to ‘gross up’ the net amounts that are recorded in the bank statements to come to an accurate calculation of the recipient’s entitlements and overpayments.

But, in short, the way that Centrelink calculates the relevant ‘tax rate’ is, in my view, wrong in relation to the prescribed statutory calculators. In other words, it is inconsistent with the legal requirements. This is because the tax rate is derived from a process that involves averaging. To work out the tax rate, Centrelink’s calculator computes the percentage difference (or ratio) between the annual income reported to the ATO (via the same PAYG data used for classical robodebts) and the annual total arrived at by summing the net income receipts that appear in the recipient’s bank account over the course of the relevant year. In other words, the calculator creates an ‘averaged’ tax rate — a uniform figure that is applied to each income amount received each fortnight throughout the year. So, if the averaged tax rate is, for instance, 28%, then that rate will be applied to net receipts of whatever amount appears as a credit in the recipient’s fortnightly bank account, be it a fortnightly income of $10 AUD (resulting in $2.80 AUD tax at 28%) or $10,000 AUD (resulting in $2,800 AUD tax at 28%).
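The defect can be illustrated with invented numbers. In the sketch below, the deposits, totals and rates are all hypothetical (real withholding amounts come from the ATO’s published fortnightly tax tables, which this sketch does not reproduce); the point is only to show how a single averaged rate distorts every fortnight it touches:

```python
# Illustration of 'residual averaging' in a net-to-gross conversion.
# All figures are invented; real withholding comes from the ATO's published
# fortnightly tax tables, which this sketch does not reproduce.

# A year of net (after-withholding) bank deposits: 25 small fortnights
# and one very large one.
net_deposits = [200.0] * 25 + [5000.0]   # what the bank statements show
annual_net = sum(net_deposits)           # $10,000
annual_gross_per_ato = 12500.0           # annual PAYG figure (invented)

# The averaged approach: one uniform 'tax rate' for every fortnight.
avg_rate = 1 - annual_net / annual_gross_per_ato        # 0.20, ie 20%
grossed_up = [net / (1 - avg_rate) for net in net_deposits]

# Realistically, withholding on a $200 fortnight is $0 (below the tax-free
# threshold): its true gross is $200, yet the averaged rate inflates it.
print(f"Averaged rate: {avg_rate:.0%}")
print(f"$200 net fortnight grossed up to:   ${grossed_up[0]:,.2f}")   # $250.00 -- overstated
print(f"$5,000 net fortnight grossed up to: ${grossed_up[-1]:,.2f}")  # $6,250.00 -- understated
```

Because entitlement is assessed fortnight by fortnight, overstating the gross income of the many low-earning fortnights is precisely what inflates an alleged debt.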

However, this process completely ignores important and mandatory taxation rules — beneficial ones — that make this kind of calculation a fiction. These include the tax-free threshold (which allows income below a certain amount to go untaxed) and the fact that tax is withheld from casual workers on a progressive basis in accordance with their specific fortnightly income (ie, if you earn little you are taxed little, and if you earn a lot you are taxed a lot). Thus, any ‘averaged’ tax rate will create an incorrect gross figure. Indeed, unlike tax ‘brackets,’ which set up different tax rates that apply at different margins of a static salary figure, casual employees are taxed variably each fortnight (the tax is withheld by employers), and the rate at which they are taxed will vary as their income varies. For casual workers, the tax withholdings are not calculated at different margins, but instead in increments of only two dollars.

Thus, the tax rate could be as little as 0% one fortnight (because the earnings were low) and as much as 50% another fortnight (where the earnings were very high). Averaging a tax rate does not work to give greater facticity to the debt. I have written a more detailed analysis of the way in which the net-to-gross calculator thus applies an averaged tax rate where it should not, here. As I argue, a properly calculated gross figure would have to be derived by reference to the ATO’s tax tables, which are published every year and easily accessible to Centrelink staff. That is to say, it is arguable the ‘additional proof point’ fix could be made consistent with the legal requirements; however, this would involve more manual calculation by Centrelink staff. They cannot rely on another misconceived calculator to work out an alleged debt.

Strict, legalistic unlawfulness not addressed by the Commission

In a paragraph above I noted the process by which discrepancy notification letters (effectively debt assertions) were created and then sent to welfare recipients once a ‘match’ was made between the recipient’s Centrelink records and the ATO PAYG data. Though the Commission does not drill down into the details, as I have said, it seems to me that the ‘Centrelink records’ on the Centrelink side of the match would have taken the form of an annual figure derived from the ADEX Debt Schedule — that is, a figure produced after the ADEX’s averaging formula had been applied to data that had in turn been derived from the recipient’s primary income record, contained in their EANS reports.

As has been generally said by all involved, this process had the effect of reversing the general legal burden of proof, which, in most matters — be they criminal or civil — is almost always borne by the ‘mover’ of any claim. It is borne, that is, by the person making the allegation — the accuser. Both in civil and criminal law, the accuser generally has to make good their claims through evidentiary proofs; and they must do so to whatever the relevant standard may be. (There are exceptions, such as in strict liability or similar scenarios, where certain elements of an alleged offence or wrong may not require proof.) In the case of robodebts, that standard was ‘the balance of probabilities.’ By sending these letters, the government was effectively making a demand without the sufficient evidentiary basis that would be expected, legally, of a litigant or creditor in any other setting. But that, of course, is not quite the reason robodebts were unlawful. That, really, is only the evidentiary question. In other words, that question only related to the extent to which robodebts were capable of meeting the relevant evidentiary threshold in a civil proceeding. (They were not.)

The reason they were unlawful is arguably even simpler. It is because the Social Security Act 1991 (Cth) (‘SS Act’) sets out — or, better put, prescribes — a calculation method for determining the correct entitlement of an individual. Only once that method is used may an overpayment be determined. The PAYG data-matching method used in the various robodebt iterations completely ignored this prescribed statutory method and instead made up a completely off-piste and unlegislated method of calculating debts — one that had no referability to the legislation that created the social security system itself: the department, the payment types, everything. The statutory methods for calculating entitlements are said to be complex, but they really are not. They are all very clearly laid out in sections 1066 through to 1067 of the SS Act, as follows:

The contents of the SS Act: see here.

Interestingly, the Commission’s Report does not once mention the details of these calculation methods. It keeps the analysis of precisely why and how the robodebt scheme was unlawful relatively unparticularised. This should be unsurprising if one took the hearings as any indication of what would be said in the Report. The hearings did not drill down into the legislative problems either.

Nevertheless, the calculation modules themselves, and the way in which robodebts were calculated at variance with those prescribed modules, are described in relatively specific detail in the appendix to the Report — specifically, in the extracted summaries of the many AAT1 decisions (76 or so) that found the raising of robodebts to lack a basis in law because they were calculated at odds with the calculators prescribed by law.

For instance, in the summary of one AAT decision below, the relevant module H that must be used in calculating entitlements under Youth Allowance is contrasted with the lack of any prescribed method in respect of DSP (Disability Support Pension) income. Accordingly, the decision-maker, member Michael Manetta, found that Youth Allowance debts were not lawfully calculable through income averaging.

Very notably here, if one looks up Michael Manetta’s name on Google, one recalls that member Manetta was actually ‘demoted’ in the Administrative Appeals Tribunal (his wording — but, in my view, an apt verb) for, in his view, making too many decisions that were adverse to the government in social security matters. For instance, see this 2022 article on Manetta’s claims (pictured below).

The allegations contained in this article do not seem to have been substantively considered by the Royal Commission — that is, the allegation that the AAT Deputy President had penalised AAT members who were making decisions adverse to the government (ie, to DHS or Services Australia) by demoting them, which is to say by disallowing them to sit on, and make decisions in, the social security division.

However, the Report does briefly cogitate, at p 658, under the subheading ‘Prevention of scrutiny of the scheme,’ on the similar ways in which Emeritus Prof Terry Carney and Prof (then Ms) Renée Leon were both ousted from their roles. As the Report notes, there was evidence to suggest that Carney had been effectively fired from (that is, not reappointed to) the AAT after he had made decisions adverse to DHS, and that Leon’s position as secretary of Services Australia was abolished once she had directed the department to cease income averaging. But, in the end, the Report finds insufficient evidence to form the view that these firings constituted an evasion of scrutiny by those involved. This is an unfortunate position, because it is probable that if, for instance, the Deputy President of the AAT had been compelled to give evidence, more evidence in respect of these questions could have been obtained.

Don’t get me wrong

Despite my summary above, it is unarguable that the Commission has delivered a detailed, thoughtful, readable, authoritative, appropriate, critical, and effective Report. There is much more to say in its favour than I have done (or probably should do) here; and, in respect of the way in which it interrogates and condemns the conduct of individuals, I have commented on those aspects in a Twitter thread here. In that respect, the Report is incredibly strong. The legal and moral opprobrium it dispenses is suitably staid but stinging.

And, of course, the way in which the Commission, personified both statutorily and spiritually in Commissioner Holmes, has comported itself is unimpeachable. One cannot even imagine an apprehended bias application being made in respect of Commissioner Holmes, notwithstanding that the very same occurred not too long ago (and in circumstances which are, by the standards of this Commission, probably quite shocking today).

For now, I will continue to read the Report and expect to write at least one full-length academic article on the work of the Commission by the end of the year (either on robodebt and mental health or on the precursor to robodebt).