Artificial Administration: Administrative Law, Administrative Justice and Accountability Mechanisms in the Age of Machines
I am very happy to say that my project on machine learning in public administration is going to be funded by the Social Sciences and Humanities Research Council. The most recent list of Insight grant awards is here. Regular readers will have seen some of this material before but I thought it would be worth sharing the “detailed description” of the project.
I am confident, by the way, that I will not need to resort to ATIPing governmental organizations: my outreach work since the grant application was filed last Fall has been very fruitful so far and there are many in government who are more than happy to share their experiences with machine learning.
We are now entering the world of ‘artificial administration’. Artificial administration is the “sociotechnical ensemble” of software and hardware that combines technology and process “to mine a large volume of digital data to find patterns and correlations within that data, distilling the data into predictive analytics, and applying the analytics to new data” (Yeung, 2018, 505). My term evokes the replacement or displacement of human decision-makers by automated procedures, a phenomenon which is already in evidence in everyday governmental decision-making in Canada. Such techniques have transformed the private sector in recent decades. Governmental bodies are now following suit and have already begun to rely on analysis of masses of data to inform the performance of their functions in areas ranging from border security, to taxation enforcement and to the immigration system. The central and pressing question of this research project is: when is it appropriate for administrative officials to rely on machines to take or assist in taking decisions which affect Canadians?
We live in the era of Big Data, a term that “triggers both utopian and dystopian rhetoric” (boyd & Crawford, 2012, 663), as it carries a “kind of novelty that is productive and empowering yet constraining and overbearing” (Ekbia et al, 2015, 539). Already there are great debates about the role of information technology in modern life. The European Union’s General Data Protection Regulation is one well-known attempt to grapple with some of the difficulties of the Big Data era, as is the Government of Canada’s recent Directive on Automated Decision Making (Government of Canada, 2019). But these instruments are skeletal, to be fleshed out by legislation and judicial decisions. And there are more debates to come.
My view, which animates the present project, is that scholars of public law (a broad field, which includes administrative law, administrative justice and accountability mechanisms) have an urgent duty to contribute to these debates, for their outcome will shape public administration in the third decade of the twenty-first century and beyond. Four fundamental objectives underpin this research: to analyze (1) how information technology is or could be used in Canadian public administration; (2) to what extent the norms of administrative law and administrative justice constrain officials’ ability to rely on artificial administration; (3) how to structure accountability mechanisms to ensure that Canadians can have confidence in the use of artificial administration; and (4) whether designers of artificial administration can and should internalize the norms of justification, transparency and intelligibility generated by administrative law, administrative justice and accountability mechanisms.
Techniques such as algorithms, neural nets and predictive analytics certainly have “substantial potential to improve the scale and efficiency of government in the provision of public goods and services” (EU Commission, 2018, 1) but clarity is needed about where and how they can properly be used. Over the years, scholars of public law have developed legal controls on governmental action, sophisticated normative models of administrative justice and robust, transparency-inducing accountability mechanisms. Identifying how ‘artificial administration’ can be harnessed in a manner consistent with the norms of administrative law and administrative justice, and be subjected to rigorous scrutiny by appropriately equipped watchdogs is, accordingly, a critical question to be addressed in detail in this project. Furthermore, controversies about ‘artificial administration’ prompt reflection on a deeper question – which norms will shape the future of public administration: the solution-driven boosterism of Silicon Valley technologists or the culture of justification of scholars of public law? It has been suggested that “compromises” may have to be made between legal values and “choice of tool or software” (Zalnieriute, Bennett and Williams, 2019, 444). But compromise may well cut both ways: there is potentially as much an argument for the designers of artificial administration internalizing public law norms as there is for lawyers adopting technologists’ norms. Assessing the potential for the internalization of these norms is, accordingly, an important objective of this research.
Given our increasingly interconnected, data-rich world it comes as no surprise to learn that governmental bodies are following in the footsteps of private tech giants to find ways of using new technologies to streamline and enhance decision-making, leveraging the masses of data collected and created by public and private actors alike. Examples already abound. Canadian governmental institutions are increasingly using sophisticated information technology techniques to assist their decision-making. Many federal institutions “risk score” their services “to describe and compare the degree of risk involved with providing a service to a user” (Government of Canada, 2018, 20); the Canada Revenue Agency “uses predictive analytics to more precisely and rapidly address non-compliance, including to better understand taxpayer decisions and actions with respect to tax debt” (Steering Committee, 2014, 8); Employment and Social Development Canada “has greatly improved the effectiveness of its overpayment investigations by using a risk-scoring algorithm” (id); and, in respect of Immigration, Refugees and Citizenship Canada, media reports indicate the development of a “predictive analytics” system which could “identify the merits of an immigration application, spot potential red flags for fraud and weigh all these factors to recommend whether an applicant should be accepted or refused”; these models “are built by analyzing thousands of past applications and their outcomes” (Toronto Star, 2017).
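To make concrete what a “risk score” of this kind involves, consider the following toy sketch. Every feature, weight and threshold here is invented purely for illustration and bears no relation to any actual government system:

```python
# Toy illustration of a "risk score" used to triage case files.
# Features, weights and threshold are invented; they do not reflect
# any actual agency model.

def risk_score(case: dict) -> float:
    """Combine weighted binary features into a single score."""
    weights = {
        "prior_overpayment": 0.5,   # past overpayment on file
        "incomplete_forms": 0.3,    # application missing documents
        "income_mismatch": 0.2,     # declared vs third-party income differ
    }
    return sum(weights[feature] for feature, present in case.items() if present)

def triage(case: dict, threshold: float = 0.5) -> str:
    """Route high-scoring cases to a human investigator."""
    return "manual review" if risk_score(case) >= threshold else "auto-approve"

print(triage({"prior_overpayment": True, "incomplete_forms": False,
              "income_mismatch": True}))   # score 0.7 -> manual review
print(triage({"prior_overpayment": False, "incomplete_forms": True,
              "income_mismatch": False}))  # score 0.3 -> auto-approve
```

The point of the sketch is that the weights and the threshold are policy choices dressed up as arithmetic, which is precisely why such systems raise administrative law questions.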
Under the general rubric of “machine learning”, sophisticated algorithms, neural nets and other technologies carry great potential to improve governmental decision-making, by using automated systems to reduce cost (Kirkwood, 2017) and, further, by reducing or removing the influence of fallible human beings, beset by various cognitive biases, on decisional processes (Owen, 2015, Sunstein, 2018). The proposition that technology can improve the lot of the human race is not fanciful either, as rapid advances make possible new modes of deliberation and engagement, which can be assisted by government decision-makers committed to human flourishing (Thompson, 2013, Heath, 2014). Artificial administration seems destined to play a key role in day-to-day public administration in the twenty-first century. Critical questions arise, however, about how artificial administration can be implemented in a way consistent with existing public law norms and whether the designers of information technology systems can or should internalize these norms.
Several streams of literature flow into the present research project: (1) computer science/engineering and social science literature on machine learning; (2) literature setting out the basic principles of administrative law and administrative justice; and (3) literature examining the implications of machine learning for public administration. Appreciating these key insights will assist in the construction of solid foundations for the theoretical framework (and plain-language statement of principles) to be developed.
(1) Machine learning: Literature from computer science/engineering and social science teaches that machine learning is opaque. There are three layers of opacity: first, trade-secret protection, which ensures that private parties’ algorithms are shielded from public view; second, “the ability to read or understand algorithms is a highly specialized skill, available only to a limited professional class”; third, with machine learning, no one can “reveal what the algorithm knows, because the algorithm knows only about inexpressible commonalities in millions of pieces of training data” (Dourish, 2018, 6-7; Burrell, 2018). Indeed, “as these systems become more sophisticated (and as they begin to learn, iterate, and improve upon themselves in unpredictable or otherwise unintelligible ways), their logic often becomes less intuitive to human onlookers” (Citizen Lab, 2018, 11; Pasquale, 2015), provoking “automation bias” on the part of the humans who use them (Skitka, 1999; Green and Chen, 2019).
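The third layer of opacity can be made concrete with a toy sketch: even a trivially simple learned model ends up as nothing more than a vector of numbers. The perceptron, anonymized features and data below are invented for illustration; real systems are vastly larger, which only deepens the problem:

```python
# Minimal sketch of the third layer of opacity: after training, a model
# is just numbers. Even with full access to the weights, they encode
# commonalities in the training data, not human-readable reasons.

def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """Classic perceptron learning rule; returns learned weights and bias."""
    w, b = [0.0] * len(samples[0]), 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Four anonymized feature columns; nothing here says what they "mean".
samples = [[0.2, 0.9, 0.4, 0.1], [0.8, 0.1, 0.7, 0.9],
           [0.3, 0.8, 0.5, 0.2], [0.9, 0.2, 0.6, 0.8]]
labels = [0, 1, 0, 1]

w, b = train_perceptron(samples, labels)
print(w, b)  # a handful of floats and a bias: numbers, not reasons
```

Reading the printed weights tells an affected individual nothing intelligible about why their file was classified one way rather than another.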
When such techniques are used to interfere with individuals’ rights, interests and privileges the stakes are self-evidently high: “It is the capacity of these systems to predict future action or behavior” based on machine learning techniques which is “widely regarded as the ‘Holy Grail’ of Big Data” (Yeung, 2018, 509) but the use of such “anticipatory governance” means that “a person’s data shadow does more than follow them; it precedes them, seeking to police behaviours that may never occur” (Kitchin and Lauriault, 2018). The machine learning literature teaches that even when these techniques are used non-coercively, to gather information for the purposes of informing governmental decision-making, the implications may be troubling (Yeung, 2017). First, in general, information technologies “carry the biases and errors of the people who wrote them” (Owen, 2015, 169). As such, if the data inputted is flawed, prior human choices “become built-in” and the outcomes will also be flawed (Burrell, 2016, 3; Hamilton, 2019). Second, experts in machine-learning techniques will have much greater knowledge than elected representatives and members of the general public about how artificial administration operates. By contrast, while the internal workings of, say, the legislative process are extremely complex, there is no reason that a patient outsider equipped with modest resources cannot master the minutiae. Those tasked with operating systems of artificial administration will not necessarily include machine-learning experts (and certainly will not all be experts); the experts could well form a separate and superior caste (Danaher, 2016). Third, the privacy implications of acquiring the information to feed into systems of artificial administration are potentially troubling given the difficulties individuals face in providing meaningful, informed consent to data collection (boyd and Crawford, 2012, 672; Yeung, 2018, 514).
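The “bias in, bias out” point can be illustrated with a deliberately crude sketch: a model that simply “learns” historic outcomes reproduces the historic skew. The regions, outcomes and counts below are invented for illustration:

```python
# Sketch of "bias in, bias out": a naive model trained on historically
# skewed decisions reproduces the skew. All data is invented.
from collections import defaultdict

# Historic decisions: region "A" applicants were disproportionately refused.
history = ([("A", "refuse")] * 8 + [("A", "grant")] * 2
           + [("B", "refuse")] * 2 + [("B", "grant")] * 8)

def train_majority_by_region(records):
    """'Learn' the majority outcome per region: prior human choices
    become built-in to the model's predictions."""
    counts = defaultdict(lambda: defaultdict(int))
    for region, outcome in records:
        counts[region][outcome] += 1
    return {region: max(c, key=c.get) for region, c in counts.items()}

model = train_majority_by_region(history)
print(model)  # {'A': 'refuse', 'B': 'grant'} -- the historic skew, automated
```

Nothing in the training step asks whether the historic refusals were justified; the flaw in the inputs simply becomes the rule for the outputs.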
The concerns of this stream of scholarship also contribute to the theoretical foundations of this project.
(2) Administrative law, administrative justice and accountability: Central to administrative law, administrative justice and accountability mechanisms are norms of justification, transparency and intelligibility (Daly, 2016, Liston, 2017). In recent decades, courts and commentators alike have emphasised the virtues of a culture of justification in governmental decision-making (Hunt, Taggart and Dyzenhaus, 2001), and there has been a strong focus on “the justice inherent in decision making” (Adler, 2010, 129) and “those qualities of a decision process that provide arguments for the acceptability of its decisions” (Mashaw, 1983, 24-25; Dowdle, 2006).
In particular, administrative law is, first, designed to keep decision-makers within the boundaries of law. Courts have long played a role in ensuring that administrative decision-makers respect legislative commands and remain within the four corners of their statutory authority (Craig, 2016). Second, at various points, the doctrines of contemporary administrative law require administrative decision-makers to pay respectful attention to the individuals they serve (Allan, 2001; Allan, 2013). For instance, the requirement to give adequate reasons for a decision recognizes that in many circumstances an administrative decision-maker will have to justify to the individual how it has exercised a given power. Third, whilst the increasing emphasis on reasons for decision can be seen as promoting individual interests, it has important instrumental effects as well, in promoting accuracy, consistency and transparency (Mashaw, 1976; Daly, 2016).
There are many available definitions of administrative justice. For present purposes, I use the term to denote “how government and public bodies treat people, the correctness of their decisions, the fairness of their procedures and the opportunities people have to question and challenge decisions made about them” (UK AJI, 2018, 5; Adler, 2003). In Bureaucratic Justice: Managing Social Security Disability Claims, Professor Jerry Mashaw set out three influential “models” of administrative justice (Mashaw, 1983). Further iterations have been suggested by Michael Adler (Adler, 2003) and Robert Kagan (Kagan, 2016) and will be discussed in detail in the project. Mashaw’s typology remains robust (Sainsbury, 2008) and is sufficient for present purposes: the bureaucratic rationality model features administrative decision-makers who gather relevant information – about applicants for benefits, taxpayers, or licence holders – and make decisions based on that information, generally in a cost-effective manner; the professional treatment model is associated with decision-making structures in which professionals, such as doctors or social workers, play an important, indeed dominant, role: the norms of their professional culture dictate how information is gathered and how the law is applied to the information gathered; and the moral judgment model is most closely associated with adjudication by an independent and impartial decision-maker. In general, reliance on artificial administration can shift the administrative justice model from professional treatment or moral judgment to bureaucratic rationality (Le Sueur, 2016), because when a decision-making structure “is coded as a program”, the result is that the technology “automatically determines the range of possible action” (Aneesh, 2009, 356; Burton-Jones, 2014, 76-77), pushing aside the human judgement which is often essential to public administration (Hertogh, 2010).
Finally, there is now a vast literature on accountability as a general concept (Dowdle, 2006) and significant contributions on particular accountability mechanisms, such as appeals (Daly, 2015), ombudspersons (Hertogh and Kirkham, 2018), parliamentary committees (Russell and Gover, 2018), regulatory agencies (Black, 2015) and watchdogs (Liston, 2019). This stream of literature provides a means of assessing when, where and how artificial administration can be deployed and the appropriate accompanying accountability mechanisms.
(3) Machine learning in public administration: The extent to which machine learning can be incorporated into public administration, having regard to the norms of administrative law, has been discussed in a number of recent papers. Beatson, 2018 tackles tribunal adjudication; Cuéllar, 2016 and Coglianese and Lehr, 2017 provide an American perspective and Cobbe, 2019, a British perspective. Yeung, 2018 and Zalnieriute, Bennett and Williams, 2019 take a slightly broader view, covering administrative justice issues also. There has also been discussion of the possibility of audit as an accountability mechanism in respect of information technology (Pasquale, 2015) and the risk that this would create an algocracy, a caste of experts who might not themselves be capable of being held to account (Danaher, 2016). What these important works lack is the broad perspective proposed in my project, which will cover administrative law, administrative justice and accountability mechanisms to develop a framework for the integration of artificial administration in governmental decision-making.
An ambitious, ground-breaking research project like this one requires recourse to a variety of methodological approaches, each of which is essential to the success of the project.
Theoretical and interdisciplinary. A key component of the superstructure of this project is a theoretical interdisciplinary understanding of the nature of machine learning. This understanding is built in turn on insights developed by computer scientists and engineers, who work on the front lines of technological design, and scholars in digital humanities, who have illuminated the social effects of technology. My detailed literature review will ensure that these insights are incorporated into the theoretical framework for this project. Dr Reuben Binns, a computer scientist from the University of Oxford, and Dr Jennifer Cobbe, a digital humanities scholar from Cambridge, are collaborators on this project and will provide robust, real-time peer review of the project’s output.
Empirical. To gain an understanding of the potential role of artificial administration, it is imperative to know exactly where, when and how information technology is being used in public administration. Moreover, this knowledge will contextualize the analysis to be undertaken in this project, because it makes clear the situations to which the norms of administrative justice and administrative law, and accountability mechanisms will have to be applied. The empirical analysis will have two facets.
(1) Our search strategy will involve using key terms to search various databases, including CanLII, Lexis and Westlaw, covering the period from 2000 to the present day (a period which will be expanded as necessary), and Government of Canada websites. These databases contain decisions by courts and administrative tribunals and also information from other domains, such as computer science and engineering, as well as news-gathering organizations. For instance, legal databases sometimes reveal important judicial decisions which consider different uses of information technology, such as the decisions of the Supreme Court of Canada in May v Ferndale Institution [2005] 3 SCR 809 (use of a computerized algorithm to reclassify prisoners from minimum security to medium security, with a consequential impact on the prisoners’ liberty interests) and Ewert v Canada [2018] 2 SCR 165 (calculation of risk assessment of an Aboriginal offender by reference to an algorithm based on the general characteristics of offenders, not Aboriginal offenders). Legal databases from other jurisdictions comparable to Canada (such as common law jurisdictions like Australia and England and Wales) will also be searched, as they may provide lines of inquiry. The decision of the Full Federal Court of Australia in Pintarich v Deputy Commissioner of Taxation [2018] FCAFC 79, for instance, considered whether a taxpayer could rely on an automated letter erroneously waiving a tax liability. I anticipate that these searches will identify interesting examples from other jurisdictions which may be incorporated into the research project. For example, the United Kingdom is using automated technology and algorithms to determine whether European Union citizens resident in Britain prior to Brexit are entitled to status in UK law (Alston, 2018); and the Australian social welfare authorities have moved to automate the collection of overpayments of benefits (Carney, 2018).
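A key-term search strategy of this kind can be prototyped in miniature. The documents and key terms below are invented for illustration; the real searches will of course run against CanLII, Lexis, Westlaw and government websites rather than a local corpus:

```python
# Toy prototype of a key-term search over a small document set.
# Documents and terms are invented for illustration only.
import re

documents = {
    "case_001": "The tribunal relied on a computerized algorithm to reclassify the prisoner.",
    "case_002": "The applicant challenged the automated risk assessment tool.",
    "news_003": "The agency announced new spending on office furniture.",
}

key_terms = ["algorithm", "automated", "predictive analytics", "machine learning"]

def search(docs, terms):
    """Return ids of documents matching any key term, case-insensitively."""
    pattern = re.compile("|".join(re.escape(t) for t in terms), re.IGNORECASE)
    return sorted(doc_id for doc_id, text in docs.items() if pattern.search(text))

print(search(documents, key_terms))  # ['case_001', 'case_002']
```

Even this crude version shows the design choices embedded in a search strategy: the hit list depends entirely on which key terms are chosen, which is why the term list will be refined iteratively over the course of the project.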
(2) My research team and I will make access-to-information requests (ATRs) to selected Canadian government entities. It is necessary to make ATRs in order to access the details of governmental use of information technology in public administration. As with Citizen Lab, 2018, my team will, first, identify a range of organizations who seem to be using artificial administration and, second, carefully draft and follow up on access-to-information requests. We will tailor our search strategy to identify those areas of Canadian public administration where requests would be likely to yield fruit. Consider, by way of an illustrative example, the Canadian Radio-television and Telecommunications Commission’s implementation of Canada’s anti-spam law. It is public knowledge that the CRTC asks Canadians to alert it to failures to comply with the law; the irresistible inference is that the CRTC is mining the data received in order to better target the use of its scarce enforcement resources. Knowing this basic information would allow us to make a targeted ATR. This CRTC/anti-spam model – information gained through search translated into an actionable ATR – will be replicable across other federal government entities.
Legal/doctrinal. Traditional doctrinal legal analysis is necessary to identify the constraints of administrative law and administrative justice on artificial administration. Leading works on the principles of administrative law and administrative justice will be consulted (Flood and Sossin, 2018, Adler, 2010). Moreover, targeted keyword searches in Canadian legal databases will reveal cases which have considered, directly or indirectly, the appropriate scope of artificial administration.
For example, under the ‘no-fettering’ principle, a decision-maker must consider individual cases (or groups of cases) on their merits and decide whether or not to exercise her discretion (McHarg, 2017). Over time, however, a decision-maker might well develop a substantial body of practice. A desire to treat like cases alike might well lead the decision-maker to develop detailed guidelines for the exercise of her discretion. This would seem to provide significant scope for the use of artificial administration in areas where there are numerous applications which must be processed in an efficient and effective manner. Humans, having designed a process, could be left out of the loop altogether, allowing the machine to implement the process free from human oversight. Nonetheless, a decision-maker must typically listen to someone who asks for an exception to be made. This implies that an individual must be given the opportunity to contest the use of the guideline and perhaps the underlying methodology. In the context of artificial administration, it implies that an individual must be able to understand the technology and that the administrative decision-maker with whom the individual is dealing must also understand the technology and, in addition, have the capacity (in terms of expertise and authority) to modify it if necessary. It bears noting, moreover, that in previous no-fettering cases the individual concerned would have been able to understand and formulate complaints about the rigid policy being applied: this characteristic will generally not be present in respect of decisions generated by artificial administration. This example is one of many and illustrates how careful doctrinal analysis is necessary to identify the appropriate domain of artificial administration in contemporary Canada.
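The no-fettering logic can be expressed schematically: an automated guideline handles routine cases, but a request for an exception must escape the automated path to a human with authority over the rule. The guideline, threshold and field names below are entirely hypothetical:

```python
# Schematic sketch of the no-fettering point: the automated guideline
# disposes of routine cases, but any request for an exception is routed
# to a human decision-maker. All details are hypothetical.

GUIDELINE_MAX_INCOME = 30_000  # hypothetical eligibility guideline

def decide(application: dict) -> str:
    if application.get("requests_exception"):
        # No-fettering: the guideline cannot be applied inflexibly;
        # a human must consider this case on its merits.
        return "escalate to human decision-maker"
    return "grant" if application["income"] <= GUIDELINE_MAX_INCOME else "refuse"

print(decide({"income": 25_000}))                              # grant
print(decide({"income": 45_000}))                              # refuse
print(decide({"income": 45_000, "requests_exception": True}))  # escalate to human decision-maker
```

The hard questions, of course, lie outside the sketch: whether the individual can understand the guideline well enough to know that an exception is worth requesting, and whether the human at the end of the escalation path has the expertise and authority to depart from it.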
Weighing up the risks of and rewards from artificial administration requires governmental decision-makers to engage in a difficult balancing act, to profit from the efficiency and accuracy gains of artificial administration whilst avoiding or minimising the potential costs.
At both the collection and analysis stages, choices have to be made by humans about the framework for the use of Big Data and artificial administration (Kitchin, 2014). These are “design choices” and they are “reflective of normative values” (Citizen Lab, 2018, 11). One of the “holy grails of machine learning is to automate more and more of the feature engineering process” but until then “there is ultimately no replacement” for human choices (Domingos, 2012, 84). These choices can be influenced by the norms of administrative justice and administrative law and subjected to appropriate accountability mechanisms. In addition, emerging legal frameworks inevitably need to be shaped. The GDPR, for instance, creates a right “not to be subject to automated decision-making”, but the contours of this right, applicable to public and private bodies alike, are unclear (Wachter, Mittelstadt and Floridi, 2017). The norms underpinning administrative law, administrative justice and accountability mechanisms can influence the fleshing out of the skeletal framework contained in the GDPR, the Government of Canada’s Directive on Automated Decision Making and other instruments. They provide a set of principles, purposes or values which may influence the use of artificial administration (Kingsbury, Krisch and Stewart, 2005, 37-41). These are norms which technologists, and governmental officials adopting new technologies, can internalize so as to create artificial administration which is respectful of the demands of administrative law and administrative justice and which is subject to suitably robust accountability mechanisms.
But why should these norms be influential? For artificial administration to be effective, there must be a “sufficient level of social trust” (Government of Canada, 2018, 29). Injecting public law norms into artificial administration, in turn subject to robust oversight, would increase the likelihood of widespread social acceptance of information technology in public administration. These norms are normatively attractive, as they are focused on treating individual citizens with concern and respect. They have also stood the test of time, having been developed over many decades, often in response to changes in public administration; and in some cases, they have deep historical roots (Craig, 2019). Furthermore, legal norms have pedigree, having (as noted) influenced technological developments in the past, thereby enhancing the social acceptability of new technologies. And, last but not least, if artificial administration is implemented without regard for the norms of administrative law the decisions it produces will simply be unlawful. This research project will generate cutting-edge scholarship, to be published on leading global platforms, to inform academic and public debate and develop, in a bilingual, public-facing, plain-language report, a set of principles for artificial administration in Canada.
This content has been updated on July 8, 2020 at 03:41.