Inaccurate Automated Decision-Making: Some Issues of Lawfulness
In a new paper to be published in the Australian Journal of Administrative Law & Practice, entitled “Artificial Administration: Administrative Law, Administrative Justice and Accountability in the Age of Machines”, I bring together much of my previous scholarship on the topic of automation in public administration. Here is an extract about how automation, including artificial intelligence, can lead to illegality:
Administrative decision-makers must act on the basis of statutory authority and remain within the limits of that authority. Reliance on technology might, under some conditions, lead to inaccurate decision-making and thereby to unlawful administrative action.
To begin with, information technologies generally “carry the biases and errors of the people who wrote them”. As such, if the input data are flawed, the outputs will also be flawed. Consider an example from Canada. Canadian citizens who are married or in a common-law relationship can sponsor their partner for permanent residence status in Canada. Immigration, Refugees and Citizenship Canada (a government department) has developed a machine learning system for automatically approving sponsorship applications based on data gleaned from past positive determinations. The goal is to make decisions more efficiently. However, there is significant potential for bias. There are conventional marital relationships, running from courtship to engagement to a wedding ceremony to subsequent cohabitation and maybe to child-rearing. These might be thought of as ‘easy’ cases as far as sponsorship is concerned, because there will rarely be any meaningful suggestion that the relationship was not genuine. But such cases are only ‘easy’ because of prevailing social norms about conventional marital relationships. In this sense, a system based on past decisions is likely to be biased towards conventional marital relationships and hostile to relationships which do not fit prevailing norms. Of course, individual officers making decisions are not free from such biases themselves. And one can legitimately ask whether the efficiency gains generated by automating approvals of (one assumes) conventional marital relationships outweigh any harm from entrenching the bias in the system. Nonetheless, the potential for problems is clear.
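The structural point can be illustrated with a deliberately simplified sketch. All of the feature names and data below are invented, and real systems are far more sophisticated; the sketch shows only how a scorer built from past approvals rewards similarity to those approvals:

```python
# Toy illustration: a scorer "trained" on past positive determinations.
# Features and data are invented; this does not describe any real system.
from collections import Counter

def train_on_past_approvals(approved_cases):
    """Record how often each feature appears among past approvals."""
    counts = Counter()
    for case in approved_cases:
        counts.update(case)
    total = len(approved_cases)
    return {feature: n / total for feature, n in counts.items()}

def score(model, case):
    """Average the learned frequencies of the case's features.
    Features never seen among past approvals contribute nothing."""
    return sum(model.get(f, 0.0) for f in case) / len(case)

# Past approvals dominated by 'conventional' marital relationships.
past = [
    {"wedding_ceremony", "cohabiting", "joint_finances"},
    {"wedding_ceremony", "cohabiting", "children"},
    {"wedding_ceremony", "cohabiting", "joint_finances"},
]
model = train_on_past_approvals(past)

conventional = {"wedding_ceremony", "cohabiting", "joint_finances"}
unconventional = {"no_ceremony", "living_apart", "separate_finances"}

print(score(model, conventional))    # high: matches the historical pattern
print(score(model, unconventional))  # 0.0: genuine but atypical, scores poorly
```

A genuine relationship that simply looks unlike the historical approvals receives the lowest possible score: the bias is not programmed in deliberately, but inherited from the training data.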
It is useful to consider the decision of the Supreme Court of Canada in Ewert v Canada. Here, an Indigenous inmate challenged the use of psychological and actuarial risk assessment tools because the tools had not been trained on Indigenous populations and thus produced inaccurate results. There was a statutory hook for the inmate’s argument, as the applicable legislation required the corrections service to “take all reasonable steps to ensure that any information about an offender that it uses is as accurate, up to date and complete as possible”. The argument was that analytical tools trained on non-Indigenous populations could not satisfy the statutory requirement. The Supreme Court of Canada agreed. The tools generated “information” which was “used” by the corrections service. But the service had not taken “all reasonable steps”, especially in view of its independent statutory duty to ameliorate pressing social problems related to Indigenous over-incarceration: ensuring that tools used are “free of cultural bias” and would not “overestimate the risk posed by Indigenous inmates” was a reasonable step which, on the facts, the service had failed to take. It therefore failed to respect its statutory mandate.
Consider, moreover, the phenomenon of automation bias: individuals taking decisions on the basis of recommendations produced by a machine are more likely to follow the recommendations than to exercise independent judgment. Even if humans are kept ‘in the loop’ in theory, they may nonetheless come to rely on the technology in practice. And even situations falling short of automation bias strictly speaking may cause difficulties. A cautionary tale involves a UK government system which was withdrawn when its lawfulness was challenged before the courts. The UK system used an algorithm to categorise visa applications as red, amber or green, with ‘red’ applications requiring greater scrutiny (and carrying only a 48.59% chance of being approved), ‘amber’ attracting lesser scrutiny and ‘green’ little at all (with a 99.5% success rate). Use of the streaming tool was attacked for breaching the Equality Act 2010 (as it discriminated on the basis of nationality by assigning some nationalities to the red rating) and for common law irrationality. Indeed, these grounds were mutually reinforcing: the more applications were assigned to the red rating on the basis of nationality, the more the tool became hard-wired to assign those nationalities to the red rating. And once there, it seems that confirmation bias kicked in, with officers subjecting the red-rated applications to greater scrutiny. This was not a case of automation bias as such but it nonetheless illustrates how use of technology can steer decision-makers away from their statutory mandates even when a human is in or on the loop.
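The self-reinforcing dynamic can be captured in a toy simulation. The feedback rule, threshold and weighting below are invented for illustration; only the 48.6% and 99.5% approval rates echo the reported figures for the streaming tool:

```python
# Toy simulation of a self-reinforcing streaming loop. The mechanism is
# invented for illustration; it does not model the actual Home Office tool.

def run_rounds(rounds, initial_refusal_share):
    """Each round: a nationality with a high historical refusal share is
    streamed 'red'; red applications face harsher scrutiny and more refusals,
    which raise the refusal share fed into the next round."""
    refusal_share = initial_refusal_share
    history = []
    for _ in range(rounds):
        rating = "red" if refusal_share > 0.3 else "green"
        # Assumed approval rates under each level of scrutiny.
        approval_rate = 0.486 if rating == "red" else 0.995
        # New refusals blend into the historical data (equal weighting assumed).
        refusal_share = 0.5 * refusal_share + 0.5 * (1 - approval_rate)
        history.append((rating, round(refusal_share, 3)))
    return history

# A nationality starting above the threshold is locked into 'red',
# and its refusal share climbs round after round.
print(run_rounds(4, initial_refusal_share=0.35))

# A nationality starting below the threshold stays 'green' indefinitely.
print(run_rounds(4, initial_refusal_share=0.1))
```

Once a nationality crosses the threshold, the harsher scrutiny it attracts generates the very refusal data that keeps it above the threshold: the ‘hard-wiring’ the claimants complained of.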
Subdelegation raises similar difficulties. The law of subdelegation deals with the extent to which a decision-maker may permit a third party to exercise a power entrusted by statute to the decision-maker. A decision-maker identified and empowered by statute must always retain her discretion. Another body’s views may be taken into account, but the decision-maker identified by the statute must make the final decision on the merits. The key point here is that total reliance by a decision-maker on technology risks running afoul of the subdelegation principle. Permitting a machine to exercise a power would, on its face, be unlawful.
The automated system developed by Immigration, Refugees and Citizenship Canada for temporary residence visas and sponsorship applications illustrates some of the subdelegation risks. IRCC has developed a machine learning system that is capable of identifying applicants who are eligible for a temporary residence visa or eligible to sponsor their spouse for permanent residence, on the basis of algorithmic analysis of past data on positive decisions. Positive eligibility determinations are made automatically – but negative determinations on eligibility can only be made by a human decision-maker. IRCC says that immigration “officers continue to make the final decision on each application”. With respect, this is potentially misleading. A decision on an application for a temporary residence visa or for sponsorship requires a conclusion on two matters, eligibility and admissibility. The statement suggests that officers have the final say on both eligibility and admissibility. However, it seems clear that where the system determines that an applicant is eligible, this is the final say as to eligibility. Giving the officer the final say over admissibility does not displace the system’s final say on eligibility. This is a subdelegation issue as an important component of the decision on the visa or sponsorship application has been outsourced to a machine.
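The structure of that process can be made concrete with a short, purely hypothetical sketch. The names (`decide`, `Officer`) and the branching logic are my own assumptions about the flow as publicly described, not IRCC’s actual implementation:

```python
# Hypothetical sketch of the decision flow described above; not IRCC's code.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Outcome:
    eligible: bool
    eligibility_decided_by: str       # "system" or "officer"
    admissible: Optional[bool] = None

class Officer:
    """Stand-in for a human decision-maker."""
    def assess_eligibility(self, application):
        return False  # hypothetical negative determination
    def assess_admissibility(self, application):
        return True

def decide(application, system_predicts_eligible, officer):
    if system_predicts_eligible:
        # A positive eligibility determination is final: no human revisits it.
        outcome = Outcome(eligible=True, eligibility_decided_by="system")
    else:
        # Only negative eligibility determinations fall to a human.
        outcome = Outcome(eligible=officer.assess_eligibility(application),
                          eligibility_decided_by="officer")
    if outcome.eligible:
        # The officer's "final decision" operates on admissibility alone.
        outcome.admissible = officer.assess_admissibility(application)
    return outcome

result = decide({"type": "sponsorship"}, system_predicts_eligible=True,
                officer=Officer())
print(result.eligibility_decided_by)  # "system": no officer reviewed eligibility
```

On this sketch of the flow, where the system’s eligibility prediction is positive, the officer’s only contribution is the admissibility finding; the eligibility conclusion is the machine’s alone.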
These risks can perhaps be mitigated by upstream involvement – such that the decision-maker could justify her position outside the loop by reference to her involvement in developing the loop in the first place – but most obviously require some downstream involvement. It would be more consistent with the subdelegation jurisprudence for the identified decision-maker to retain the final say instead of acting as a mere rubberstamp for a conclusion produced by a machine. When a human is in the loop or on the loop the difficulty will lie in determining whether the final decision was genuinely an independent exercise of discretion on the part of the human or whether it was tainted by automation bias.
Taylor Owen, “The Violence of Algorithms” in Taylor Owen, Disruptive Power: The Crisis of the State in the Digital Age (Oxford University Press, Oxford, 2015), at p. 169.
 Melissa Hamilton, “The Biased Algorithm: Evidence of Disparate Impact on Hispanics” (2019) 56 American Criminal Law Review 1553.
 See generally, Algorithmic Impact Assessment – Advanced Analytics Triage of Overseas Temporary Resident Visa Applications.
2018 SCC 30, [2018] 2 SCR 165.
 Corrections and Conditional Release Act, SC 1992, c 20, s. 24(1).
2018 SCC 30, [2018] 2 SCR 165, at paras. 33-41.
 Ibid., at paras. 61-66.
 Linda Skitka et al, “Does Automation Bias Decision-making?” (1999) 51 International Journal of Human-Computer Studies 991.
 Rafe Jennings, “Government Scraps Immigration “Streaming Tool” before Judicial Review”, UK Human Rights Blog (online), 6 August 2020.
 Notice that in Canada, the designers of a similar automated system have undertaken to introduce safeguards to avoid this scenario:
Measures are also in place to mitigate against the potential risk that the triage function could influence officer decision-making. There is deliberate separation of officers from the system: officers are not aware of the rules used by the system, nor do they receive information about the analysis performed by the system. This separation mitigates the risk that officers could be unduly influenced by the system’s outputs (also known as “automation bias”). Additionally, an ongoing quality assurance process has been implemented to monitor whether officers make the same positive eligibility determinations as the system. This process ensures that biases have not been introduced by the system.
Algorithmic Impact Assessment – Advanced Analytics Triage of Visitor Record Applications, at pp. 6-7.
 See also Rebecca Williams, “Rethinking Administrative Law for Algorithmic Decision Making” (2022) 42 Oxford Journal of Legal Studies 468.
 Willis, “Delegatus non potest delegare” (1943) 21 Canadian Bar Review 257.
Ellis v Dubowski [1921] 3 KB 621.
R (New London College) v Home Secretary [2013] 1 WLR 2358.
 Algorithmic Impact Assessment – Advanced Analytics Triage of Overseas Temporary Resident Visa Applications, at p. 4.
 The subdelegation issue has arguably been cured in this particular context by the provision of statutory authority for automated decision-making: Immigration and Refugee Protection Act, SC 2001, c 27, s. 186.1.
Jeffs v New Zealand Dairy Production and Marketing Board [1967] 1 AC 551.
This content has been updated on May 1, 2023 at 18:25.