State Sanctioned Automated Decision Making: Is Public Sector Culture Violating Privacy Rights?

Updated: Dec 22, 2021

Emma Cooper, LLM Information Rights Law and Practice






5. The Power Imbalance

6. GDPR Article 22

7. The Rule of Law

8. Opacity in Public Sector Use of ADM

9. A Culture of Indifference

10. Competency and Authority

11. Conclusion

12. Bibliography






5. The Power Imbalance

When the civil society group Big Brother Watch lobbied the United Nations Special Rapporteur on extreme poverty prior to his visit to the United Kingdom in 2018[1], they cited as a concern the absence of transparency around the use of ADM by public authorities in the case of RBV and other initiatives. They complained that ‘vulnerable citizens are not … afforded an opportunity to consent to their data being shared or accessed and have no way of knowing exactly how, when or why their personal information is being used’[2].

In Denmark, Gladsaxe was also met with strong public, political and academic criticism when a national newspaper exposed the intention of three local authorities to apply the automated points-based approach. An incredulity is evident in the colloquial notion that, if a single parent missed a dental appointment for their child, their family life might be investigated by the state[3].

Le Grand and New suggest that paternalism arises where the government intervenes in the lives of citizens, seeking to address their failures of judgement for their own good. When perceived as excessive, these interventions are regarded as symptomatic of a ‘Nanny State’: a government that ‘tries to give too much advice or make too many laws about how people should live their lives’[4].

Given the nature of ADM deployment by public authorities – spanning health, policing, welfare and troubled families – it seems conceivable that profiling and automated decision making used as part of a paternalistic democracy are more likely (than those used in the private sector) to engage all four interests specifically stated in Article 8: private life, family life, home and correspondence.

The risk is compounded by the availability of vast data sets and the existence of varied and expanding technological means of surveillance and profiling, including wholesale interception of and access to communications, arbitrary use of facial recognition technology and the indiscriminate tracking and surveillance of demonstrators through the use of mobile phones[5].

Modern engagement with social media has been categorised by some as a form of ‘voluntary servitude’, where personal data is knowingly surrendered as a kind of ‘entry fee’ to online society[6]. Citizens, on the other hand, have been labelled a ‘captive clientele’: those unhappy with a service provided by a public authority are not in a position to use a different provider[7] and have ‘no realistic alternatives’ to accepting processing by those authorities[8].

The chilling effect resulting from surveillance is conceivably intensified when encountered as a result of public authority surveillance. Here, how we are perceived and profiled has the potential to impact our financial welfare, our family and our future opportunities rather than simply which advertisements we might be presented with as we browse online.

Outcomes of decisions made or supported by these automated systems can deprive citizens of welfare, determine custodial sentences, suppress political demonstration or award citizenship, and the ‘clear imbalance between the data subject and the controller’[9] manifestly renders citizens a vulnerable group[10] requiring particular consideration when it comes to the impact of such technology.

Public authorities in democratic nations are increasingly introducing policies that seek to amend the behaviours of citizens, driven by a greater understanding of the cost of those behaviours for both the citizen and society[11].

The recent COVID-19 pandemic has raised early questions around the ethical and long-term impact of the use of AI and surveillance technologies that ‘amass personal data and share for community control and citizen safety motivations’[12].

The idea that the erosion of public expectations of liberty in the wake of significant events is part of a mechanism to acquire wealth or power has become known, somewhat conspiratorially, as the ‘Shock Doctrine’[13]. However, initiatives like Test and Trace[14] certainly affect family life, liberty and autonomy, and discussions are already taking place about the potential for them to feature in society beyond any state of emergency[15].

In considering whether Article 8 is engaged, the courts will often consider what might be ‘highly offensive to a reasonable person of ordinary sensibilities’ when determining whether the person should have expected to enjoy privacy in the case at hand. The concern raised by the human rights group Liberty is the potential for ‘normalisation’ of technologies that can exercise control and limit freedom[16].


6. GDPR Article 22

Initially this chapter will draw out some of the contention surrounding certain elements of the General Data Protection Regulation, intending to demonstrate the ambiguity of Article 22 and therefore the potential weakness of the Regulation in providing safeguards for the fundamental rights of individuals impacted by automated decision making.

The remainder of the chapter will be limited to identifying what duties and safeguards the Regulation offers to protect the rights of citizens subject to automated decisions rendered by public authorities more specifically. This will naturally take the discussion outside of the wider debate around Article 22, due to the presence of an exemption that applies to processing that is ‘authorised by law’.

It is noted that the GDPR has been retained in UK law and the ‘UK GDPR’ combines with an amended Data Protection Act 2018[17]. The text may refer to the Data Protection Act 2018 (DPA 2018) specifically where appropriate, to highlight any divergence.

In October 2017, the Article 29 Working Party (now the European Data Protection Board) adopted the Guidelines on GDPR Article 22[18], which concern profiling and automated decision making and are intended to ‘address’ the risks posed by these activities[19].

The General Data Protection Regulation[20] (GDPR) sparked much debate about how effective it would be in offering safeguards to protect the rights and freedoms of data subjects (individuals whose data is processed in the use of ADM) and about the extent to which the duties of Controllers (those exercising control over the purpose and manner of processing) were clear[21].

GDPR Article 22 provides that ‘The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her’.

Articles 13 and 14 of the GDPR require the provision of ex ante information by the Controller about ‘the existence of automated decision-making, including profiling… and, at least in those cases, meaningful information about the logic involved, as well as the significance and the envisaged consequences’, and under Article 15 the data subject may request access to the same.

The presence of a number of qualifiers and conditions within the Regulation underpins the debate as to whether the provisions of the GDPR collectively offer effective duties and safeguards in relation to profiling and automated decision making.

Whilst there was some initial debate about whether the ‘right not to be subject’ to automated decision making requires individuals to actively invoke that right[22], the Article 29 Working Party (now the European Data Protection Board) guidance helpfully clarifies that the provision amounts to a general prohibition where such processing is solely automated and produces significant legal effects[23].

GDPR Recital 71 provides that ‘In any case, such processing should be subject to suitable safeguards, which should include specific information to the data subject and the right to obtain human intervention, to express his or her point of view, to obtain an explanation of the decision reached after such assessment and to challenge the decision’[24].

Although arguing that appropriate duties and safeguards are indeed present, Gianclaudio Malgieri and Giovanni Comandé succinctly summarise the concerns as:

‘businesses processing personal data having just a general duty to disclose information on the general functionality of profiling algorithms, in very limited cases (only in absolute absence of human intervention in the decision-making and only when it produces legal effects or very relevant similar effects) and anyway just as long as it does not affect trade secrets of data controllers’[25].

Malgieri and Comandé argue, contrary to the above sentiment, that suitable safeguards and duties can be found if those articles and recitals are applied systematically. They contend that ‘significant legal effects’ can be interpreted broadly, including effects that might otherwise have been excluded as not meeting this threshold[26]. They assert that the ‘right not to be subject to solely automated decision making’ can be realised by interpreting the requirement for human involvement to mean more than ‘nominal’ intervention[27].


Sandra Wachter et al. asserted that the absence of precise language, combined with the need to sew together an explanation right from different articles and recitals, renders the Regulation at best ambiguous and at worst at risk of being ‘toothless’[28].

Regardless of one’s position in the above debate, the Regulations are indisputably less reassuring for citizens subject to automated processing performed by public authorities. This is because the general prohibition does not apply if the decision is ‘authorised by law’[29].

The UK Information Commissioner clarifies that ‘If you have a statutory or common law power to do something, and automated decision-making/profiling is the most appropriate way to achieve your purpose, then you may be able to justify this type of processing as authorised by law and rely on Article 22(2)(b)’[30]. The caveat is then offered that Controllers must demonstrate that it is ‘reasonable to do so in all the circumstances’.

Big Brother Watch complained about the dilution of the safeguards found in the EU GDPR through the enactment of DPA 2018, asserting that, despite their lobbying during passage, the 2018 Act does not provide comparable safeguards[31].

When automated decision making is not prohibited by Article 22(1), because it is authorised by law, Article 22(2)(b) requires that there are ‘suitable measures to safeguard the data subject’s rights and freedoms and legitimate interests’. This provision can be read in an unrestricted way, as requiring exploration of the rights set out in the European Charter, for example[32].

The DPA 2018 does not include this reference, and s 50(2) and (3) require merely that Controllers make the individual aware of automated decisions, after the fact, and allow one month for challenge. It certainly appears that the qualifying elements of Art 22(2)(b) have lost some potency in the enactment of the DPA 2018.

Big Brother Watch also complained that the human involvement of the case workers overseeing RBV appeared tokenistic: the authorities had suggested the prohibition did not apply to RBV because the system was merely ‘advisory’, when Big Brother Watch scrutiny revealed that the system appeared to be regarded as ‘decisive’ in nature[33].

It is more likely that the prohibition simply does not apply because activities related to welfare payments and fraud reduction are governed by UK law: systems involving solely automated decision making and profiling that result in significant legal effects are, when ‘authorised by law’, not subject to the prohibition of Article 22(1) and are expressly permitted under Article 22(2)(b) and s 49(1) DPA 2018.

The imprecise language of the ICO guidance and the dilution effected by the DPA 2018 itself fall short of the comparable Article 22(2)(b) safeguard, referring to internal justifications of purpose rather than wider consideration of the ‘data subject’s rights and freedoms and legitimate interests’[34].

In summary, the majority of ADM deployments by UK public authorities would appear not to be captured by the Article 22 prohibition, since they are largely linked to statutory duties, meaning that:

  • There is no prohibition on citizens being subject to solely automated processing producing significant effects.

  • There is no requirement for routine human intervention as part of general system operation.

The apparent safeguards provided by the DPA 2018 are:

  • Public bodies are required to provide ex ante information about the existence, logic and consequences of the processing.

  • Public authorities must justify the processing as an appropriate and reasonable way to achieve the purpose

  • Citizens are entitled to seek and receive human intervention ex post.

  • Citizens can challenge the decision ex post.

  • Citizens can request a new decision that is not automated ex post.
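The applicability chain summarised above is rule-like enough to be sketched in code. The following is a minimal illustrative sketch only (not legal advice, and a deliberate simplification of the statutory tests); the function name and the three boolean inputs are hypothetical devices of this sketch, not terms drawn from the legislation:

```python
# Illustrative sketch: a simplified model of the Article 22 / DPA 2018
# applicability chain discussed above. Boolean inputs are hypothetical
# simplifications of multi-factor legal tests.

def article_22_position(solely_automated: bool,
                        significant_effects: bool,
                        authorised_by_law: bool) -> str:
    """Roughly characterise how Article 22(1) and 22(2)(b) would apply."""
    if not (solely_automated and significant_effects):
        # Article 22(1) is only engaged by solely automated decisions
        # producing legal or similarly significant effects.
        return "outside Article 22"
    if authorised_by_law:
        # Article 22(2)(b) / DPA 2018 ss 49-50: permitted, subject to
        # ex post notification, challenge and human reconsideration.
        return "permitted with safeguards"
    # Otherwise the Article 22(1) general prohibition applies.
    return "prohibited"
```

On this simplified model, most public authority ADM linked to statutory duties falls into the ‘permitted with safeguards’ branch, which is precisely the point made above.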

Essentially, UK public authorities are provided with the imprecise notion of a requirement to ‘justify’ the processing without an obvious prerequisite to consider the impacts on the rights and freedoms of individuals in a broader sense.

Under DPA 2018 s 50 (2)(b)(ii), individuals have the opportunity to challenge and may request that the Controller reconsider or ‘take a new decision that is not based solely on automated processing’. However, there is no requirement to subject any justification to external scrutiny which would allow for challenge and public participation.

As lamented by Algorithm Watch, despite the socially consequential nature of the use of ADM by public authorities, transparency rules within the legislation ‘do not include mechanisms for an external deep look into the ADM systems necessary to protect group-related and societal interests such as non-discrimination, participation or pluralism’[35].

The burden for redress in the event of abuse or malfunction of these systems appears to be largely on the citizen; to seek human intervention, raise a challenge or request a new decision.

Swee Leng Harris proposes that the Data Protection Impact Assessment (DPIA) might be considered a remedy[36]. S 64 DPA 2018 requires that Controllers identify the purposes and anticipated consequences of processing and make an ‘assessment of the risks to the rights and freedoms of data subjects’.

In setting out the proposal, Harris draws on Janssen’s position that ‘rights and freedoms’ should be read as referring to the rights set out in the European Charter rather than just privacy rights as suggested by the ICO guidelines on completion of DPIAs[37].

Harris also suggests that DPIAs could be integrated with assessments of the impact on equality, given the overlap with the Public Sector Equality Duty under s 149 (discussed further in Chapter 7), optimistically offering that, by combining the activities, ‘proper equality analysis of the potential for direct and indirect discrimination could be informed by technical information on the data processing, and assessment of the impact of the data processing on rights and freedoms could be informed by expertise in equality’.

However, as the following chapters will suggest, locating the requisite expertise within the public sector and then encouraging collaboration is another challenge entirely.

Harris provides a compelling illustration of how ‘to improve the conformity of government data processing systems with rule of law principles’ through systematic assessment of human rights implications and through transparency and engagement inspired by environmental information regulation[38].

The key point made by Harris is the importance of publishing such assessments, which is not currently a requirement of the UK GDPR. Harris points out that, despite public authorities already being subject to frameworks that mandate transparency, ‘the use and operation of data processing systems by government is not transparent at present’[39].


7. The Rule of Law

In his 2013 speech, the Attorney General described his disquiet at observing that ‘some countries publicly proclaim adherence to the Rule of Law and Human Rights, whilst at the same time eroding those very same standards behind the cover of legislative processes’[40].

Accessibility is one of the principles set out by Lord Bingham as underpinning the realisation of the core principle of the rule of law, which he defines as: ‘all persons and authorities within the state, whether public or private, should be bound by and entitled to the benefit of laws publicly and prospectively promulgated and publicly administered in the courts’[41].

In summary, the principles are:

1. The law must be accessible, understandable and predictable.

2. Questions of legal right and liability should be resolved by the application of the law and not discretion.

3. The laws should apply equally to all.

4. Ministers and public officials must not exceed the limits of their powers and exercise them in good faith.

5. The law must afford adequate protection of fundamental Human Rights.

6. The state must provide routes for resolving disputes which the parties cannot resolve themselves.

7. State adjudicative procedures should be fair.

8. The rule of law requires state compliance with international as well as national laws[42].

During the drafting of the Equality Act 2010, ‘Framework for a Fairer Future’ described the important role that public sector organisations play in promoting equality in society[43]. The paper notes that public sector bodies are in a prime position to effect change from the position of employer, commissioner and procurer of services for citizens. The Equality Act 2010 placed positive duties on public bodies that ‘focus on the way their spending decisions, employment practices and service delivery affect local people whatever their race, disability or gender[44]‘.

The Equality Act 2010, which consolidated various earlier equality Acts, was intended to clarify equalities law and strengthen protections. However, the passing of the Act removed the explicit requirement for public bodies to publish ‘equality analysis, information, and engagement’[45].

Some community and voluntary groups understandably raised concerns that proper assessment of equality impacts would not be undertaken as a result. The government disagreed, asserting that the implicit Public Sector Duty under s 149 would be sufficient[46]. S 149 requires that, inter alia, authorities have due regard to eliminating ‘discrimination, harassment [and] victimisation’[47].

Scholars have positioned equalities law as a route by which the impact of ADM on fundamental rights might be managed. Since algorithmic decision making is a rule-based process, Cloisters place it squarely within the meaning of ‘provision, criterion or practice’ (PCP) and therefore within section 19(1) of the Equality Act 2010. This means that activities resulting in Prohibited Conduct[48] (direct discrimination, indirect discrimination, victimisation and harassment) fall foul of the legislation, insofar as the Protected Characteristics under Section 4 are concerned[49].

Many ADMs are applied unilaterally[50] and so will ostensibly satisfy Section 19(2)(a). However, Sections 19(2)(b) and (c) require assessment as to whether particular individuals having a protected characteristic are put at a disadvantage and then, further, whether that disadvantage is permissible under s 19(2)(d) because the PCP is ‘a proportionate means of achieving a legitimate aim’.
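The staged s 19(2) test described above can likewise be sketched as a simple predicate. Again this is an illustrative sketch only, with boolean inputs invented here to stand in for what are, in reality, evaluative legal questions:

```python
# Illustrative sketch: the staged s 19(2) Equality Act 2010 test for
# indirect discrimination. Each boolean is a hypothetical simplification
# of the corresponding limb of the statutory test.

def indirectly_discriminatory(applied_universally: bool,      # s 19(2)(a)
                              group_disadvantage: bool,       # s 19(2)(b)
                              individual_disadvantage: bool,  # s 19(2)(c)
                              proportionate_means: bool) -> bool:
    """True if, on this simplified model, the PCP amounts to indirect
    discrimination: limbs (a)-(c) met and the (d) justification absent."""
    return (applied_universally
            and group_disadvantage
            and individual_disadvantage
            and not proportionate_means)
```

The sketch makes the structural point visible: a unilaterally applied ADM clears limb (a) automatically, so everything turns on the disadvantage limbs and the proportionality justification, which is exactly where the assessments discussed below are needed.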

Arguably, completion of ‘Equality Impact Assessments’ (EIAs) allows the public sector body to demonstrate the extent to which particular PCPs are ‘equal before the law’ in accordance with the Rule of Law and, certainly in the case of s 19(2)(d), that it is exercising its power ‘in good faith’[51].

Perhaps ironically, the importance of being able to ‘draft [EIAs] with openness and candour’ has been offered as the reason for not routinely publishing them[52]. Consequently, Big Brother Watch and Cloisters both found that, for the ADM models they researched, these equality assessments were generally not available for review, even where indirect discrimination was, prima facie, present[53].

In R (Hurley & Moore) v Secretary of State for Business, Innovation and Skills[54], Lord Justice Elias found that ‘[t]he duty of due regard under the statute requires public authorities to be properly informed before taking a decision. If the relevant material is not available, there will be a duty to acquire it’[55].

Seemingly contrary to this duty, where such assessments are available, they appear somewhat tokenistic, simply indicating that the application of the system to all claims means that no impact should be present[56]. For RBV, some of the published assessments appear to have been completed by finance managers[57], individuals unlikely to have knowledge of the potential for breaches of equalities law arising from complex machine learning, for example.

Despite the supposed duty placed upon the state to scrutinise PCPs (including the deployment of ADMs) prior to deployment, the Equality Act 2010, much like GDPR Article 22, places vague obligations on the state to make ‘assessments’ or give ‘due regard’ without requiring them to be externally scrutinised.


Although the courts have found that the Equality Act 2010 ‘imposes a heavy burden upon public authorities’[58] to effectively discharge their public sector equality duties and ensure that evidence is available, the law does not explicitly require that public authorities carry out Equalities Impact Assessments, instead seeking to make a determination as to whether the Duty has been satisfied at Judicial Review[59].


The onus is again placed upon the citizen, barring actions brought by the Equality and Human Rights Commission[60], to obtain the necessary information to scrutinise the ADM such that public duty failures might be found, and then to seek redress through judicial review, but conceivably only after a violation, and therefore potential harm, has occurred.


The rule of law, as Lord Bingham asserted, provides that ‘questions of legal right and liability should ordinarily be resolved by the application of the law and not the exercise of discretion’. And yet the law itself affords considerable room for discretion to public authorities in the deployment of ADM systems, despite the risk posed to the fundamental rights of citizens.

Cloisters explored the ADM technology applied to the Settled Status Scheme implemented by the UK Home Office, whose intention was to streamline the management of applications from individuals wishing to remain in the UK post Brexit and to reduce fraud and error[61].

To verify that a person has been resident for five years, in accordance with the required threshold, the Home Office uses their National Insurance number to analyse DWP and Her Majesty’s Revenue and Customs (HMRC) data. Whilst the exact data and processing parameters are unclear, the assessment draws on some of the thirteen categories of data held by the DWP. The HMRC case worker will then use the data to make a final determination and, where required, supplementary information will be sought[62].

Cloisters highlighted, inter alia, the absence of an equalities assessment to explore the impact that the processing might have on women[63]. The system excluded child benefit or credit information, of which women are much more likely to be the recipients. This makes it possible that women could suffer the disadvantage of having to produce supplementary information more regularly, a process described as ‘extremely time consuming’ and one that can make citizens ‘nervous’.

The second Rule of Law principle laid out earlier in the chapter, that ‘questions of legal right and liability should be resolved by the application of the law and not discretion’, makes clear that decisions made in the exercise of statutory power should not be arbitrary. However, on being questioned in parliament about the exclusion of Child Benefit / Credit data from the Settled Status scheme (affecting around 60,000 people), Caroline Nokes MP stated: ‘It was simply not one of the functionalities included. There is no hidden reason’[64]. Moreover, it seems there was no reason.

ECHR provides that in managing human rights obligations, the state recognises a ‘fair balance that has to be struck between the competing interests of the individual and of the community as a whole’[65].

In Pretty v UK[66], for example, the notion of allowing a person to end their own life was weighed against the state’s positive obligation to ensure that the legislation provides suitable safeguards for the wider community. The decision found that permitting individuals to exercise such self-determination, without state interference, would fail to protect vulnerable individuals, including those not able to make informed decisions about ending their life[67].

In Bridges[68], South Wales Police argued the public interest found in the prevention of disorder or crime was a legitimate exception to their negative obligations versus the countervailing argument of Mr Bridges’ privacy rights.

On appeal[69], deficiencies in the legal framework were highlighted: the data protection legislation and the Surveillance Camera Code of Practice[70] require only the presence of a law enforcement purpose and a determination that the surveillance is considered necessary.

The appeal judgement found that the scope for AFR had been set “impermissibly wide”, allowing too much discretion with regard to who was targeted and where the technology should be deployed[71].

The discussion put forward the need for a framework permitting less discretion in these areas, but the court disagreed with the 2015 dissenting judgement of Lord Kerr in Beghal v Director of Public Prosecutions[72]. Kerr asserted that authorities exercising self-restraint where legal constraints are insufficient is not enough to establish legality, and that legality should be judged by its potential reach[73].

Instead, Lord Justice Singh said, ‘we consider that what must be examined is the particular interference with Article 8 rights which has arisen in this present case and in particular whether that interference is in accordance with the law[74]’. He drew on Munjaz v United Kingdom[75] to establish that locally drawn policies can offer sufficient clarity to protect individuals from interference with their Article 8 rights[76].

The judgement did not appear to benefit from any substantial weighing of a ‘pressing social need’[77] that might be found in considering privacy interests of the community or society as a whole and reflecting on the framework for legal application.

This predominantly ‘individualistic’ framing of privacy expectations is explored and challenged by Mead[78] whose study of UK Misuse of Private Information (MoPI) cases illuminates the propensity for decisions to focus more on wider social value of competing ECHR elements such as public safety, rarely recognising the value of privacy as a comparable social instrument, rather than purely a personal concern.

Encouragingly though, following a somewhat scathing response[79] to the appeal judgement from the Surveillance Camera Commissioner, who accused the Home Office and Secretary of State of being ‘asleep on the watch’ despite having ‘fruitlessly and repeatedly’ called upon them to update the Surveillance Camera Code[80], a new code was published in 2020. The code includes safeguards intended to limit police discretion when considering who will be placed on a watchlist and where and when deployment is to take place. The code acknowledges that ‘state intrusion in such matters [FRT] is significantly increased by the capabilities of algorithms which are in essence …’[81].

Data protection legislation, human rights legislation and equalities law all provide public authorities with tailor-made exemptions, caveated with abstract duties of assessment, justification and balancing exercises. This appears to deepen the power imbalance between citizen and state and risks a ‘system of oppression and tyranny camouflaged by what purports to be a legal framework’[82].

These self-justifications and echo-chamber assessments are carried out behind closed doors and generally only subject to external scrutiny in the courts. Mr Justice Sales is reported to have indicated that the courts are somewhat burdened by having to ‘spell out the practical content of the duty’[83], which again points to ambiguity and room for discretion on the part of public authorities.

The application of paternalistic discretion can result in the arbitrary application[84] of privacy-inhibiting technologies across the private and public domains, producing individual intrusion that is often weighed against significant countervailing interests without consideration of collateral intrusion or societal privacy interests.

It has been rather optimistically suggested that a manifest advantage arising from the use of ADM systems in the public sector is that they have placed authorities’ decision making into the public domain, shining a light on decisions that, ‘up to now, had been taken far out of citizens’ sight’[85].

This suggestion, put forward in a study by the European Parliament[86], that the disquiet caused by widespread deployment of state-sanctioned ADMs is somehow a ‘silver lining’ to the implied ‘cloud’ of entrenched state opacity, appears somewhat specious.

At the very least, it appears to minimise the conceivable impact that these systems already have on ordinary citizens. To the contrary, this paper argues that the responsibility placed on the state is characterised by guarded self-regulation and that the burden of intervention and scrutiny rests with vulnerable citizens.

As demonstrated by Harris’s suggested remedy, potential self-regulatory or regulatory remedies to the risks to fundamental rights posed by state-sanctioned ADM rest on a requirement for public authorities to conform to the rule of law, which is manifestly absent under the existing framework.

LJ also found that the public must not be ‘vulnerable to public officials acting on any personal whim, caprice, malice, predilection or purpose other than that for which the power was conferred’[87], and yet UK citizens have been impacted by the deployment of ADM whose scope for discretion was unreasonably wide[88], by operators who withheld meaningful information about the logic applied without comprehensible justification[89], by Controllers who disregarded advice and complaints about significant system errors[90] and by those who produced tokenistic or incompetently drafted impact assessments[91].

The following chapters will consider whether non-conformance with the Rule of Law, rather than being the result of ambiguous legislation, complex technology or the absence of a framework within which to operate, is the result of an entrenched culture or attitude that is unique to UK public authorities.


8. Opacity in Public Authority Use of ADM

Aside from accessibility and understandability being key principles of the Rule of Law, it is indisputable that being able to comprehend how an ADM system works is an important part of managing the risk that decisions pose to citizens[92].

The EU Parliament Report “Understanding algorithmic decision-making: Opportunities and challenges”[93] explores a number of interchangeable and overlapping terms in relation to how stakeholders might achieve such understanding.

The report defines ‘understandability’ as ‘the possibility to provide understandable information about the link between the input and the output of the ADS’, comprising Transparency and Explainability[94].

Transparency is about the availability of information, such as the ADM code, to internal or external stakeholders and, perhaps, the public. Explainability requires the provision of information beyond the ADS itself, ideally tailored to the recipient such that the information is meaningful[95].

The ICO provides that explanations around AI decisions fall into two categories: ‘process-based explanations which give you information on the governance of your AI system across its design and deployment; and outcome-based explanations which tell you what happened in the case of a particular decision’[96].

Meaningful explanations can help stakeholders improve the system, demonstrate compliance with regulations and allow individuals that are subject to its decisions to express their views[97].

Explainable ADM systems can support understanding of the ‘cause and effect relationship’, demonstrate compliance with regulatory frameworks and assist stakeholders to identify and correct any bias (See Competency and Authority) [98].


In 2015, Jenna Burrell identified three potential types of algorithmic opacity[99], which have since been explored and contextualised further by Jennifer Cobbe[100];

· Technical Illiteracy / Illiterate Opacity, which is seen to occur when systems are understandable only by ‘those who can read and write computer code’[101].

· Intentional Opacity, a method of self-protection by corporations seeking to protect trade secrets and competitive advantages[102].

· Intrinsic Opacity, which arises when even experts or system developers struggle to understand the system, given the complexity of the technology, particularly where machine learning is involved[103].

The Legal Education Foundation subsequently published the aforementioned Cloisters Joint Opinion, in which they describe the deletion of the FRT images by South Wales Police as another type of opacity to which they do not give a name[104].

Cloisters commend the attempt to minimise data collection and storage to comply with the data protection principles. As previously discussed, however, it creates problems with being able to assess how the system has behaved and how decisions were made, including any bias or discrimination that may have occurred[105].

This type of opacity could be referred to as;

· Compliance Opacity, where information that could support understanding of the system or its output is destroyed, deleted or otherwise unavailable in order to comply with data protection principles.

In 2016, Stohl et al discussed;

· Strategic Opacity, resulting from the provision of inappropriate or superfluous information such that information the actor wishes to conceal may be hidden in plain sight[106].

· Inadvertent Opacity, arising where the appropriate information is available ‘but it is rendered meaningless because of recipients’ cognitive limitations—or … information overload’[107].

Most recently, in 2021 Cobbe et al introduced the concept of;

· Unwitting Opacity, whereby it simply does not occur to stakeholders to record pertinent information about ADM processes because they are potentially ‘unaware of their relevance for meaningful accountability’[108].

In relation to their aforementioned lobbying of the Special Rapporteur for Extreme Poverty, Big Brother Watch undertook a Freedom of Information (FOI) campaign, ‘asking every local authority in the UK … for information about their uses of artificial intelligence (AI), algorithms and automated decision-making tools in the provision of their services’[109].

This involved sending many hundreds of initial requests and subsequent clarifying or refining requests[110]. Big Brother Watch identified the various obstacles they faced in obtaining the information they required, including Intentional Opacity (‘reluctance to disclose’) and some Illiterate Opacity (‘many replies asking for definitions’).

However, many authorities’ FOI Officers were ‘unfamiliar with the practice or even the concept of automated decisions’ and despite providing definitions and explanations, many claimed that no such technology was in place, only for Big Brother Watch to discover that they were in fact deployed[111].

This opacity appears, prima facie, to be neither strategic, intentional nor intrinsic. The issue here appears not to be whether the parties are unable to explain the machinations of ADM or are seeking to conceal its existence. Rather, they do not appear conscious of its existence, and internal communications have therefore failed to ensure that staff, even those tasked solely with facilitating transparency (FOI Officers), are suitably prepared to provide the necessary information. This scenario appears to conflict with the well-established transparency obligations and processes for public sector bodies[112].

There could be an argument that the opacity is unwitting; that the authorities simply did not realise the importance of making the information available. Big Brother Watch observed, however, that key information was often scattered across different departments or held by private companies, rendering it unavailable. This, combined with an alleged ‘wilful blindness’ and ‘apathy’ towards the detail of the systems[113] and their impact on the rights of citizens, suggests something altogether different.

The results of the Big Brother Watch campaign could point to a type of opacity perhaps particular to the public sector, one that might best be referred to as ‘Neutral Opacity’. Here, due diligence has been apathetically disregarded, or perhaps accountability has been shifted to a non-visible or inaccessible actor. This type of opacity appears distinct from Intentional Opacity in that it is characterised not by any intention, but rather by a kind of indifference to, or evasion of, accountability.

The following chapters will explore a number of factors that could underpin this type of opacity, including whether it could be rooted in entrenched public sector culture.


9. A Culture of Indifference

In discussing modern technology and the legal and regulatory frameworks[114] within which it advances, Nemitz describes the de-centralisation of power from the government that has its roots in the Youth Movement of the 1960s. His description of the eventual re-centralisation of power into the hands of the five modern-day key technology providers (the ‘Frightful Five’) makes for chilling reading[115].

This power is found, he explains, in their development of technology in black box form, making it difficult for legislators to regulate; in their ability to finance and lobby for regulation that suits their aims; and in their apparent enduring disruption of the democratic process through legislative action. He warns of the ‘fiascos of the Internet, in the form of spreading of mass surveillance, recruitment to terrorism, incitement to racial and religious hate and violence as well as multiple other catastrophes for democracy’[116], which were preceded, he explains, by a failure to attribute or assume responsibility.

Arguably, ADM use by public authorities who do not adequately assume responsibility, as suggested by the proposition of Neutral Opacity in Chapter 8, risks a further concentration of power in the hands of technology providers.

Of some concern are reports that UK technology companies are exploiting the lack of accountability within the public sector. The West Midlands Police and Crime Commissioner’s Strategic Adviser reportedly told the Guardian, when discussing the ‘quiet’ scrapping of programmes like RBV, that he had concerns about businesses ‘pitching algorithms to police forces knowing their products may not be properly scrutinised’. The presence of a West Midlands Police ethics committee, which scrutinises AI projects, ‘may have deterred some data science organisations from getting further involved with us’, he claims[117].

While the consequences for UK public authorities may not necessarily equate to a ‘catastrophe for democracy’, they at the very least pose a risk to the Rule of Law. The remainder of this chapter will explore the notion that the above-coined Neutral Opacity finds its roots in an entrenched public sector culture, the impact of which extends beyond Algorithmic Opacity and results in avoidable failures, violations of fundamental rights and community disenfranchisement.

There are, of course, examples of technological innovations being used by public authorities to empower citizens, for example by supporting them to care for themselves at home rather than in a healthcare setting[118].

Whilst this is arguably true, it cannot be ignored that the inherent paternalism of modern democracies means that significant decisions, automated or otherwise, are routinely made over which individuals are able to exercise little control[119].

Personal autonomy has been categorised as having three main elements;

1. The individual is competent and able to assess information and identify options.

2. The individual has efficiency: the ability to actually select an option and achieve their goals.

3. The individual is able to express authentic desires that are free from coercion or manipulation[120].

Drawing on this bioethical framework, the right to autonomy can be a negative one, in the sense that the individual should not be forced into something. It can also be a positive action where individuals are actively supported to exercise autonomy.

Whilst paternalism may often be considered the antithesis of autonomy[121], it could be argued that some ADM systems represent ‘paternalism for the sake of autonomy’[122]. FRT, for example, could be argued to support the autonomy of the community: locating and apprehending suspects identified through the technology allows for greater enjoyment of public spaces, which are made safer and more accessible as a result.

Nevertheless, in the case of ADM systems deployed when authorised by law, it is clear that individual negative autonomy cannot be exercised. Citizens do not have the right not to be subject to automated decision making and so are subject to a kind of ‘weak paternalism’.

‘Strong paternalism’ is said to occur where a person who clearly has competence and efficiency, and has expressed desires, is overridden for their own benefit[123]. By contrast, ‘weak paternalism’ is characterised by the individual’s lack of autonomy, where ‘an agent intervenes on grounds of beneficence or nonmaleficence only to prevent actions that are substantially nonautonomous’[124]. This might be, for example, where a person is not competent or does not have sufficient information to be efficient[125].

Crime reduction and the administration of justice or welfare certainly have ‘grounds of beneficence or nonmaleficence’ such as to qualify as paternalistic interventions. The absence of information available to citizens (see Opacity in Public Authority Use of ADM) renders them incompetent and inefficient and therefore unable to exercise positive autonomy, having already had negative autonomy removed by Article 22(2)(b).

Without a full understanding of the technology and its uses and potential impacts, it is conceivable that citizens are rendered incapable of exercising autonomy through the expression of authentic desires. It follows then that constitutional democracy, which serves to ‘express the will of the people in a form obligatory for everyone’, is undermined; how can the people establish their ‘will’ or ‘authentic desires’ in the absence of comprehension?

The disquiet around state sanctioned ADM and its effect on citizens as ‘vulnerable stakeholders’ has resulted in calls for an holistic approach that extends beyond the technology itself and includes the socio-technological framework for the model and the ‘political and economic environment surrounding its use’[126]. The debate includes contemplating how accountability and ethical principles might be incorporated into the design and deployment of ADM systems, and it has been suggested that this might include allowing individuals to ‘shape its design and operation’[127].

Previous chapters have demonstrated the burden placed on the individual to raise challenge and appeal decisions made about them using this technology. As part of the Algorithmic Fairness and Opacity Working Group (AFOG) Workshop at UC Berkeley School of Information, Jenna Burrell asked ‘how can those who are subject to algorithmic classification be better supported to understand how these systems work and the role they play within [them]?’[128].

Burrell explores a number of mechanisms by which commercial organisations engage ‘users’ in the development of systems, such as ‘flagging’, where email account holders are able to flag inappropriate classifications themselves by recategorizing emails labelled as spam[129].

The paper recognises the weakness of this approach: the user is effectively just being ‘put to work’, removing the burden on the operators to improve the system themselves. The individual is not affecting the policy of labelling, only managing its effect[130].

One could argue that a similar approach is manifest in the current UK legislative framework, placing the onus on the citizen to ‘flag’ the effects of ADM, seek judicial review and develop legal precedent, rather than obligating the authorities to external scrutiny before the fact.

The paper does suggest that a ‘user advocacy function embedded in business teams [can] help steer decisions in ways that preserve autonomy’[131]. On this reasoning, the rise of co-production in UK government would appear to represent an encouraging move from paternalistic service delivery to a more collaborative relationship between citizens and public authorities. Co-production is described by the Local Government Association as;

‘… focused around a relationship in which professionals and citizens share power to plan and deliver support together … people are no longer passive recipients of services, but are equal partners in designing and delivering activities to improve outcomes’[132]

In the case of state sanctioned ADM, this approach could increase citizen competence and support the expression of authentic citizen desires and goals (based on acquired understanding), underpinning the concept of positive autonomy described above.

Certainly, in Finland, the government-developed Elements of AI course is an integral part of the Finnish AI programme. It is an online course that explains the basic concepts and some of the social consequences of AI, with almost 100,000 Finns having enrolled by 2019[133].


In the UK, however, there appears to be some way to go to fully engage citizens. Concerns about ‘tokenism’[134], although raised predominantly in relation to health research co-production, conceivably apply in a broader sense to the public sector. The imbalanced power relations between those in authority and the public can render co-production a merely symbolic effort to engage citizens and make the transition from ‘a consultative paternalistic model to a collaborative partnership model’ a difficult one[135].

In fact, far from being co-produced, ADMs deployed by UK public authorities appear to be designed and developed with little or no engagement from those whom they impact. In the case of RBV, local authorities have unquestioningly concealed the inner workings of ADM from the public at the request of the DWP, ostensibly to prevent ‘gaming’ or manipulation of the system. Burrell notes that ‘preventing ‘gaming’ [of the system] may not necessarily mean maximizing concealment’ and gives the example of Wikipedia as a fully transparent platform that demonstrates this notion[136].

The suggestion that the public sector is shaped to respond to the directives of Westminster rather than the people is not a new one; it rests on the legacy of Margaret Thatcher’s public sector reforms, in which curtailments issued to local authorities by Westminster in relation to their organisation and management resulted in services that are driven by providers rather than citizens[137].

Moreover, it has been considered that the ‘captivity’ of citizens, unable to choose alternative services to those delivered by public authorities, creates a ‘chronic lack of incentives for the public sector to become more …responsive to the wishes of [citizens]’ and that this absence of competition leads to ‘arrogance and atrophy’[138].

In 2018, it was discovered that the DWP had been underpaying an estimated 70,000 benefits recipients for years. The Public Accounts Committee noted that the DWP failed to create a process that implemented its own legislation and then failed to subject that process to scrutiny[139]. It disregarded staff, the public and experts when concerns were raised and appeared to completely ignore the ‘painfully obvious’ errors that were being made, taking more than six years to correct them. These inactions amount to what the Committee described as a ‘culture of indifference’.

Some might consider the experience of Big Brother Watch and the findings of the Public Accounts Committee to be symptomatic of what the media have referred to as ‘institutional indifference’, blamed for the failures and suffering of Windrush and Grenfell[140] and characterised by a ‘lack of official interest in what happens to people’ who are sanctioned, disregarded, unprotected, delayed or underpaid; particularly if they are poor and / or part of the BME community[141].

In 2010, the 2020 Public Services Trust, in making the case for a more integrated public service, attributed indifference to a far more innocuous cause: a by-product of ‘government fragmentation’[142]. The report describes the ‘highly siloed professional or organisational compartments’ that are endemic to the public sector. It is described as a structure that allows cases to ‘fall through the cracks’, for responsibility and accountability to be avoided and for ‘boutique bureaucracy’ to prevent the systematic application of processes and create gaps in provision[143].

‘Managerialism’ is considered another legacy of Thatcher’s ‘New Public Management’[144]. Its features include increased strategic and operational management, and a focus on assessing the output of public services against performance criteria and standards. In his exploration of the Thatcher legacy, Dorey highlights the reported[145] burden of Managerialism in preventing public service professionals, such as academics and nurses, from attending to essential services. It is conceivable that this ‘endless cycle’ of box-ticking, form-filling and audits could result in the kind of apathy and reticence encountered by Big Brother Watch.

In 2020, bureaucracy was still being reported as burdensome for public sector workers, suggesting an entrenched culture. The 2020 ‘Busting Bureaucracy’ report cited overly administrative processes for procurement, complex regulations, information management and data requests as causes of malaise, and ironically proposed the use of AI to reduce bureaucracy (in the form of remote monitoring software that prevents unnecessary patient appointments)[146].

Lack of accountability is highlighted as a cause for concern in the 2019 report by the National Audit Office exploring the ‘Challenges in Data Use Across Government’[147]. The report identifies a ‘culture of tolerating and working around poor-quality data’[148], of ‘silo working’[149], and that ‘[w]ell-publicised misuse of data has increased concerns and undermined efforts to communicate benefits’[150]. The report bemoans a lack of leadership, illustrated not least by the commitment to appoint a UK chief data officer in 2017 that had yet to be fulfilled when the report was issued[151]; in fact the role was not filled until January 2021[152].

Reflecting, then, on the observations presented above, it is possible that the experiences of Big Brother Watch during its FOI campaign are symptomatic of an entrenched public sector culture. The absence of completed impact assessments, the lack of familiarity with or concern regarding the impact of ADM, the distribution of key information across various departments and the absence of any real public engagement could conceivably be suggestive of a culture characterised by a lack of leadership and integration, poor data quality, indifference, bureaucracy, apathy and paternalism.


10. Competency and Authority

ADM is often deployed by public authorities with the ambition of removing human bias or preconceptions from the process and creating something more fair, accurate and consistent[153]. Studies have shown that profiling models can be accurate[154], but there is evidence that errors can occur with significant effect.

Caruana et al described a health sector ADM system which sought to predict risk levels in patients with pneumonia and therefore whether they should remain at home or go to hospital for treatment[155]. The system predicted that asthmatic patients were at lower risk of dying from pneumonia. The data set was biased because those patients generally received intensive care that reduced their risk of dying from pneumonia.

Doctors, applying their own experience, were able to identify this anomaly in the system output and correct the bias[156].


Guido Noto La Diega argues that human trust is misplaced in the use of algorithms which ‘dehumanise’ decision making[157], and notes as an example a UK case involving 20,000 divorced couples whose financial terms for their divorce were potentially miscalculated by software[158]. Noto La Diega puts forward the argument that human decision making can be trusted because human beings tend to emulate one another, and so their decisions are consistent and predictable[159].

Of course, consistency does not always equal fairness. There is also the argument that consideration for the circumstances of particular cases lends itself to genuine fairness. In his discussion on the fairness of international law, John Tasioulas considers the application of certain frameworks as consisting of ‘culture-specific, value-constructs’ foisted upon adherents[160]. He asserts that unilaterally applying values in a world that is characterised by diversity risks ethnocentrism.

An example of how ADM can apply unilateral rules to the detriment of individuals can be found in a report published by the Association for Computational Linguistics in 2019. The computer scientists undertaking the investigation found that automatic toxic language identification tools, used in social media to flag and remove offensive content, were biased towards removing the social media posts of African American individuals. Common phrases in the African American English dialect (AAE) were labelled by one particular toxicity detection tool as far more toxic than general American English equivalents, despite being regarded as non-toxic by AAE speakers[161].

These cases make plain the value of human oversight and scrutiny for ADM systems but, as Working Party 29 (now European Data Protection Board) Guidelines suggest, it should be carried out by someone who has the ‘authority and competence to change the decision’[162].

Competence and authority appear particularly important given the psychological phenomenon of ‘Automation Bias’ described by Jennifer Cobbe. She describes the wealth of evidence to suggest that humans tend to trust decisions made by machines and defer to them willingly and generally without challenge[163].

The Working Party 29 Guidelines warn that human involvement cannot be ‘fabricated’, but the apparent ‘tokenism’ of human oversight was at the core of concerns raised by Big Brother Watch regarding York Council’s approach to RBV. Big Brother Watch described the use of the tool as ‘decisive’ rather than ‘advisory’ because the main risk identified through the council’s impact assessment, which found no legal or equalities implications, was that staff should be trained to ensure they trust the risk scores produced[164].


Whilst systems like RBV are ‘authorised by law’ and therefore not subject to the Article 22(1) prohibition on solely automated decision making (by virtue of Article 22(2)(b)), Recital 71 does allow individuals to ‘obtain human intervention’, express their viewpoint, obtain an explanation of the decision reached and raise challenges.

The second requirement for ‘meaningful assessment’ provided through human involvement, according to the WP29 guidelines, is that the human should be competent.

In 2016, Jenna Burrell explored the opacity of machine learning algorithms, describing the reading and writing of computer code and the development of algorithms as a ‘specialised skill’ that is ‘inaccessible to the majority of the population’, since it involves language that differs notably from human language[165]. More recently, in 2021, a partnership of employment and skills specialists that included the Learning and Work Institute produced a report on the digital skills gap. The report identified software coding in its definition of Advanced Digital Skills, which are in shortage in the UK, and found that the average age of those holding Advanced Digital Skills is below thirty[166]. Despite the demand for these types of skills increasing, the number of those training in this specialist area is declining[167].

In their 2019 report, the Institute for Employment Studies found that, possibly as a result of austerity measures reducing investment in the workforce, young people are half as likely to be employed in public sector roles as their older counterparts[168].

It is conceivable then that, in the context of the UK skills gap affecting these specialist skills, public authorities deploying ADM systems may find it more difficult than private sector bodies to engage individuals who are suitably skilled to support the design, assessment and oversight of the ADM systems being deployed.

Another feature of modern public services is ‘marketisation’. Originally part of New Public Management[169], subsequent reforms have placed pressure on services to reduce operating costs and meet quality targets through a number of measures, including outsourcing services to private providers. Seeking to encourage competition and therefore drive ‘efficiency, effectiveness and economy’[170], the assertion is that the ‘involvement of private firms in public services … results in the best allocation and delivery of services at any given cost’.

It is clear from the provisions of Article 22 and Recital 71 that the GDPR confers an ex post opportunity to request human intervention in the use of ADM systems where the purpose for the system is authorised by law. There is also clear evidence that human review of the output of ADM systems can serve to flag inappropriate classifications resulting from the unilateral application of rules.

It is possible that a digital skills gap, potentially felt more acutely in the public sector, combined with public sector outsourcing policies, means that those working with ADM systems simply do not have the competency and authority required to challenge or explain them, potentially resulting in tokenistic human involvement and unilateral deference to automated decisions.

There is a spatial chasm between those deploying and operating ADM systems and their, likely outsourced, developers. Those who understand a system well enough to recognise anomalies are not those tasked with assessing the risks to the fundamental rights of those subject to its decisions. This potentially deepens opacity and blurs the lines of accountability.


11. Conclusion

The enduring appetite of the state for the registration of its citizens and the modification of their behaviour has resulted in vast data sets drawn from all facets of citizen life, including health, welfare, crime, political activity, travel, military service, childcare, marriage and divorce.

It is manifest in the emerging case law and academic discussion that the use of ADMs by public authorities can engage fundamental rights on multiple fronts and have both individual and societal privacy implications, including affecting anonymity, autonomy, identity and self-determination.

The profiling of citizens can result in predictions of behaviour that, when underpinning decisions made by public authorities, can deprive the individual of moral reflection and identity revision, even when accurate.

Profiling and automated decisions can be based on structural bias or the preconceptions of the designer or operator and can perpetuate stereotypes and penalise marginalised communities.

The pervasive ‘observation’ of citizens, whether through the creation of digital profiles or surveillance in the physical sense, can modify behaviour and force identity management, providing little opportunity for anonymity in public.

The power imbalance between the state and the citizen renders the citizen a vulnerable stakeholder whose rights and freedoms warrant careful consideration with regards to the deployment of ADM by public authorities.

The opacity surrounding the design and operation of ADM systems has been attributed to a number of potential factors, including the complexity of the technology, a lack of awareness and strategic concealment for commercial reasons.

There is evidence, as presented above, that there are barriers to accountability for the design, operation and the impact of ADMs by UK public authorities which are distinct from hitherto proposed opacity types.

The current legal framework provides much room for public authorities to exercise discretion and to make decisions that are arbitrary and not expertly evaluated; where rationales are produced, the process does not include the involvement of suitably competent individuals. Furthermore, authorities are generally not subject to public scrutiny before the technology is deployed. Any public engagement is potentially tokenistic and hampered by a pervasive power imbalance and paternalism.

The burden of holding public authorities accountable appears to rest largely with vulnerable citizens who do not have the requisite information to express authentic desires and exercise the necessary autonomy to effect challenge.

Contrary to the Rule of Law, discretion is afforded to authorities that are potentially entrenched in a culture characterised by a lack of leadership and skills, poor integration, indifference, bureaucracy, apathy and paternalism.

As a result, any effective remedies or safeguards proposed must consider the potential predisposition for inertia and, as such, include mandated transparency and engagement, co-production, a collectivistic approach to privacy and upskilling of both the public and the authorities to allow for effective challenge and exercise of autonomy.







References


[1] Big Brother Watch, 'Submission to the UN Special Rapporteur on Extreme Poverty and Human Rights ahead of UK Visit November 2018' (Ohchr.org, 2021) <https://www.ohchr.org/Documents/Issues/EPoverty/UnitedKingdom/2018/NGOS/BigBrotherWatch.pdf> accessed 25 May 2021.

[2] Ibid 7 para 3.

[3] Automating Society (n 29).

[4] 'Nanny State' (Dictionary.cambridge.org, 2021) <https://dictionary.cambridge.org/dictionary/english/nanny-state> accessed 25 May 2021.

[5] Privacy International, 'Mass Surveillance | Privacy International' (Privacyinternational.org, 2021) <https://privacyinternational.org/learn/mass-surveillance> accessed 25 May 2021.

[6] Alberto Romele and others, 'Panopticism Is Not Enough: Social Media as Technologies of Voluntary Servitude' (2017) 15 Surveillance & Society <https://ojs.library.queensu.ca/index.php/surveillance-and-society/article/view/not_enough> accessed 1 May 2021.

[7] Peter Dorey, 'The Legacy of Thatcherism - Public Sector Reform' [2015] Observatoire de la société britannique para 17 <https://journals.openedition.org/osb/1759> accessed 12 May 2021.

[8] European Commission, 'Guidelines on Consent under Regulation 2016/679 (WP259rev.01)' (2016) 6 para 2.