Tanishka Goswami
Shikhar Aggarwal
Ayan Gupta
Over the past few years, police forces across States in India have started employing artificial intelligence technology. These ‘predictive policing’ systems aim to overhaul the way crime databases are maintained and used. The process entails collecting and analysing data on previous crimes to statistically predict areas with a heightened probability of criminal activity, or to identify individuals who may engage in such activity.
On the face of it, the idea seeks to harness the power of data and geospatial technologies to prevent crime. However, issues like biased policing practices, lack of transparency and disregard for people’s rights complicate the use of predictive policing. This is exacerbated by the current state of police functioning in India, which is marked by severe understaffing, lack of proper situational training, increasing political interference, and the absence of a concrete distinction between the police’s duty to investigate offences and their duty to maintain law and order.
We discuss these issues in four parts. In the first part, we examine the consequences of shifting the onus of decision-making from individuals to unaccountable predictive software perceived as ‘flawless’. In the second part, we consider two contrasting judicial approaches towards predictive technologies and argue against their uncritical acceptance as part of the criminal justice system; here, we also address whether arrests made on the basis of predictive policing output qualify as ‘reasonable’. Finally, through the third and fourth parts, we explore the adverse consequences of these technologies for access to justice and due process rights, while also suggesting possible safeguards to ensure democratic accountability in their implementation.
Assistance or Legitimization: How Predictive Policing Validates Discrimination
In order to tackle criminal behaviour with the limited resources at their disposal, police officers are inevitably pushed towards exercising subjective discretion. This discretion effectively places them as ‘gatekeepers’ of the criminal justice system, wielding great power to determine who enters the system as an accused. Permitting interventions through predictive software, however, effectively hands over the very basis of this discretion to technology. Having been used for far longer, predictive technologies have come to play a managerial role in the U.S. They are seen as a panacea for police ineptitude, correcting not only prevalent biases but also perceived inefficiencies in investigatory practices. These technologies determine where officers are stationed, whom they arrest or trail, and whom they should look out for. In doing so, they become an integral part of police work instead of merely being assistive.
Early indications suggest that predictive policing software may play a similar role in India. The Delhi Police’s Crime Mapping Analytics and Predictive System (CMAPS), which determines crime ‘hotspots’ in the city, is used to decide how much force is required and where it needs to be deployed. Similarly, the Hyderabad Police has created the Integrated People Information Hub, a multifaceted ‘360 degree’ profile of all residents, to ‘predict’ who should be arrested, based on 24/7 surveillance. The National Crime Records Bureau also plans to utilise its annual data to predict crime, and to enable ‘patrolling, deployment and surveillance’.
These examples show how modern predictive policing systems produce outputs that seemingly require no (or minimal) interpretation. In doing so, they seek to justify police actions under the cover of ‘objective’ underlying information. The Delhi Police, for instance, claims that the CMAPS “would be capable of geographic and environment profiling of crime…and produce predictive models based on these trends to assist officers to plan and deploy police forces.” Hence, while none of these systems necessarily makes the decision itself and all are marketed as ‘assistive’, the common perception of computers and big data sets as ‘flawless’ and ‘unbiased’ contributes to the uncritical acceptance of their ready-to-use ‘suggestions’. As a result, potentially discriminatory actions are legitimised because they arise from ‘unbiased’ technology.
An oft-discussed consequence of these developments is that policing decisions are influenced by biased historical data. The Status of Policing in India Report (2019) indicates that police personnel already demonstrate a significant bias against the Muslim community, migrants and non-literate people. As a result, the data driving predictive software ends up encoding patterns of discriminatory police behaviour. This perpetuates social profiling of individuals, thereby ingraining implicit biases in justice-delivery. To illustrate, the Pardhi community in India, once notified as a ‘criminal tribe’ under the Criminal Tribes Act, 1871, continues to be persecuted regularly owing to their status as ‘history-sheeters’ in police records.
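To make this feedback loop concrete, the following is a minimal, purely illustrative sketch (in Python, with hypothetical ward names, counts and a made-up ‘detection effect’, none of which reflect any actual system) of how a naive hotspot score computed from recorded incidents can entrench existing patrol patterns: areas that were policed more heavily in the past generate more recorded incidents, receive higher scores, and therefore attract still more patrolling.

```python
# Illustrative sketch only: hypothetical wards, counts and detection effect,
# not real data or any deployed system. It shows how a naive hotspot score
# computed from *recorded* incidents feeds back into deployment decisions.

# Recorded incidents per ward (a product of past policing intensity,
# not necessarily of underlying crime rates).
recorded_incidents = {"Ward A": 120, "Ward B": 45, "Ward C": 30}

# Assumed effect: each extra patrol increases recorded incidents by 10%.
DETECTION_EFFECT = 0.10

def hotspot_scores(incidents):
    """Naive 'risk' score: each ward's share of total recorded incidents."""
    total = sum(incidents.values())
    return {ward: count / total for ward, count in incidents.items()}

def deploy_and_record(incidents, patrols_available=10):
    """Allocate patrols by score, then update recorded incidents accordingly."""
    scores = hotspot_scores(incidents)
    updated = {}
    for ward, count in incidents.items():
        patrols = round(scores[ward] * patrols_available)
        # More patrols lead to more incidents being observed and recorded.
        updated[ward] = round(count * (1 + DETECTION_EFFECT * patrols))
    return updated

# Run a few deployment cycles: the most-policed ward's share keeps growing.
data = recorded_incidents
for cycle in range(3):
    data = deploy_and_record(data)
    print(cycle + 1, hotspot_scores(data))
```

The point is not the arithmetic but the loop: the model’s output shapes the very data on which it is later retrained, so a historical bias never needs to be entered explicitly in order to be reproduced.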
This state of affairs also points to ‘due process’ concerns arising out of newer predictive policing systems. In the next two sections, this article explores other critical dimensions of the debate, with specific focus on transparency and democratic accountability.
Can Technology Suspect Reasonably? The Procedural Conundrum of Predictive Policing
A major concern associated with the use of predictive policing is that law enforcement agencies may use such technologies to replace traditional policing techniques, and not just to augment them. In the U.S., predictive software is used for making arrest-related decisions based on risk assessments of the probability of criminality and recidivism. Under the Fourth Amendment, arrests are subject to the ‘reasonable suspicion’ standard laid down by the U.S. Supreme Court in Terry v. Ohio (1968). To justifiably stop a person, the police must be able to “point to specific and articulable facts which, taken together with rational inferences from those facts, reasonably warrant that intrusion.” This standard resonates with Section 41(1)(b) of the Code of Criminal Procedure, 1973 (‘CrPC’) in India, which empowers the police to arrest individuals upon the existence of reasonable suspicion that they have committed a cognisable offence. Such reasonable suspicion must be grounded in definite facts placed before police officers, which they must consider before taking action. As the Apex Court’s approach in Arnesh Kumar v. State of Bihar (2014) reflects, compliance with Section 41 CrPC furthers the fundamental right against arbitrary arrest and detention.
In this regard, predictive policing may enable police officials to form ‘reasonable suspicion’ by presenting definite facts (such as risk assessment scores) from which inferences can be drawn for arrest-related decisions. But as noted above, the reliability of predictive software remains under a cloud, amidst extensive proof of how data analytics compound societal biases and fail to offer reasons for particular outcomes. Hence, this technology-driven process grossly lacks the minimum level of reasoning required by law, which the solely human-run process claims to offer (at least theoretically). How, then, must constitutional democracies use these technologies? Two Constitutional Courts in two different jurisdictions – one in Wisconsin, United States and the other in Western Australia – have responded differently to this question.
In State v. Loomis (2016), the Wisconsin Supreme Court considered a due process challenge to the use of predictive technology at sentencing. The accused-defendant contended that the use of COMPAS (‘Correctional Offender Management Profiling for Alternative Sanctions’) violated his right to an individualised sentence, and unconstitutionally relied on risk assessments factoring in a defendant’s gender. The Court dismissed the challenge, ruling that the use of such individual factors was necessary for promoting accuracy in the outcomes of predictive software, thereby satisfying due process requirements. In doing so, the Court exhibited an uncritical acceptance of predictive technologies.
First, the Court approved the use of COMPAS based only on its stated neutral objective. This approach disregards tensions arising from the use of unreliable data in flawed criminal justice systems operating in hierarchical societies. Second, the Loomis Court equated accuracy and neutrality with ‘due process’, without examining the methods by which COMPAS’s algorithm processes data or whether those methods are discriminatory. In fact, the Court itself conceded that it did not have such knowledge, and yet approved the use of the predictive software. The Court thus left no scope for interpretation of the outcomes offered by predictive technologies: any errors in outcomes were seen as mere aberrations, not a challenge to the very design of these systems.
In stark contrast is the approach of the Supreme Court of Western Australia in DPP v. Mangolamara (2007). Here, the Court was concerned with the evidentiary value, at sentencing, of reports generated by tools predicting recidivism. Instead of presuming the accuracy of these reports and their compliance with due process requirements, the Court insisted on independent evidence to establish the presumptions underlying the use and operation of predictive technology. It highlighted that the data relied upon by the predictive tools lacked the appropriate social context of minority Aboriginal populations and other marginalised social classes, which enhanced the risk of inaccurate outcomes. The Court thus observed that rules of evidence reflect a form of ‘wisdom based on experience’, and that such automated tools did not inspire sufficient confidence, as they did not constructively consider the socio-economic circumstances of the people being profiled.
In doing so, the Court recognised that predictive policing delegates the capability of suspecting ‘reasonably’ to AI software. It did not presume that the data or its processing was necessarily neutral, and instead inquired into how the software works; it therefore treated predictive tools with a degree of suspicion. Since both Loomis and Mangolamara were concerned with sentencing, they do not present a direct parallel to the use of predictive technologies by the Indian police. Nonetheless, the degree of suspicion or trust with which they approach predictive technology can guide how constitutional democracies like India should approach its use. In Loomis, the Court’s uncritical approach legitimises not only the use of potentially biased data but also the secretive methods by which it is processed. On the contrary, Mangolamara represents an approach that acknowledges, first, the already unequal context of societies and policing; and second, the resulting inevitability that data (processed or raw) will not be facially neutral.
In this context, an approach that trusts predictive policing systems to the same degree as the Loomis Court can be extremely restrictive of liberty. Given its seemingly ‘objective’ reliance on previously collected data, the output of predictive software would easily meet the low threshold of ‘reasonable suspicion’. In fact, in a system where the police utilise criminal procedure to their advantage to bypass scrutiny by Magistrates, predictive software carries grave implications for justice delivery.
Evidently, the contrast illustrated by these two cases helps us understand that predictive technologies should be approached with a critical eye to ensure maximum liberty. The question of how this must be done, however, remains; especially as the judiciary functions as a “fragmented mechanism for ensuring the constitutionality of policing practices and do[es] nothing to assure democratic accountability or sound policymaking.” How this hinders access to justice in a system that remains in a perpetual state of crisis is something we discuss in the next section.
Criminal ‘Justice’ and Hindered Accessibility amidst Disparate Outcomes
In a system already marred by poor infrastructure, understaffed Courts, inadequate training, and limited general awareness of legal processes, predictive policing further hampers access to justice. These technologies (called ‘black-box’ owing to their opacity) do not reveal the reasons behind their predictions, and neither citizens nor law enforcers have the information needed to understand how they operate. This is made worse by the fact that not all predictive policing tools are similar in nature – while CMAPS entails crime ‘hotspot mapping’ using data from ISRO satellites, the Jharkhand Police has created a multi-disciplinary group to develop a ‘data mining’ software as a building block for its predictive policing project.
Indian procedural criminal law provides for Magistrates to intervene at several critical junctures during the pre-trial stage, i.e., in issuing search warrants, granting remand or bail, and recording confessions. However, several factors, including implicit caste biases, the chronic backlog of cases, and lackadaisical attitudes towards safeguarding the rights of the accused, preclude Magistrates from effectively utilising these powers. When the outcomes of predictive software are employed as the basis for reasonable suspicion, this tendency of Magistrates to not apply their minds and intervene in police investigations (except those ordered under Section 156(3) CrPC) will be further exacerbated. In their article ‘Is Big Data Challenging Criminology?’, Moses and Chan note how a lack of understanding of the potential consequences of different AI software leads judicial officers to treat the outcomes it presents as sufficient input for decision-making.
Given how prisons in India are bogged down by a booming under-trial population and increasing levels of incarceration, it is not difficult to imagine a situation where the seemingly objective investigative outputs of predictive policing software are accepted by Magistrates without much scrutiny. In fact, courts in India have failed to address concerns associated with the use of technology (such as DNA analysis and odontology), as well as the reliability and validity of output generated by forensic disciplines, at a systemic level. Blind reliance on algorithms has also become a major hindrance in bail hearings in the U.S., especially since the use of predictive technologies was mandated as part of a pre-trial bail reform package. Hence, it is difficult to see how individuals at the receiving end of the criminal justice system will fully understand and appreciate the operations of software built on such diverse technology sets. Having discussed these concerns, we must examine whether (and if so, how) the State’s interest in using predictive policing for its purported ‘efficacy’ can be balanced against the overarching societal need to uphold due process and the rule of law.
Conclusion – How Predictive Policing may be Employed
The fact that the use of predictive policing needs to be accompanied by transparency and accountability can hardly be overstated. A two-pronged approach is thus necessary to fully address the constitutional implications of predictive policing: first, addressing the due process concerns that the introduction of such technologies raises for procedural criminal law; and second, undertaking public engagement and community participation, through measures like citizen review boards and consultation on data audits, to ensure democratic accountability.
In relation to the first, Loomis highlights a judicial tendency to not safeguard the due process rights implicated by the use of algorithmic evidence. Any use of predictive policing ought to accept that the algorithm does not predict future events, but only the statistical risk of their occurrence. It should be recognised that the true value of predictive policing lies only in providing situational awareness, i.e., the information necessary for acting upon what the software considers a ‘risk’. Accordingly, these tools should vary in sophistication based on the scale of operations and the area under a department’s jurisdiction.
At the same time, prosecutors may be required to mandatorily disclose the police’s use of predictive policing software in arresting and prosecuting individuals. As the software may contain data relating to alternative suspects too, it can be argued that their risk assessment scores and other related data generated by the software must be disclosed as well. The burden would then fall on the police to justify why it has arrested the concerned individual and not anyone else. The genesis of this proposition can be traced to the U.S. Supreme Court’s decision in Brady v. Maryland (1963), which obligates the prosecution to disclose material evidence favourable to the accused. The Supreme Court of India, in Sidhartha Vashisht v. State (NCT of Delhi) (2010), V.K. Sasikala v. State (2012) and P. Gopalakrishnan v. State of Kerala (2019), has understood the disclosure obligation as part of the accused’s right of defence, in furtherance of a ‘just, fair and transparent investigation’.
On the accountability front, constitutional democracies require that computer systems employed by their institutions not be secretive. In this regard, it should be noted that even predictive policing software that does not include racial data has been found to produce racially disparate results, despite claims of producing only unbiased results. What needs particular attention, then, is not just the dataset but also its processing. Given the pervasiveness of bias in big data, it is important for civil society and citizens to maintain a critical eye towards such claims and expose them to democratic scrutiny. Internal oversight mechanisms can be built into predictive technologies, similar to the ‘privacy by design’ requirements imposed by robust data protection regimes, in order to ensure functional oversight. Yet unless predictive technologies are subjected to democratic consultation, their impact on marginalised communities will remain uncertain, and most likely oppressive.
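As a purely hypothetical sketch of why merely omitting a protected attribute is not enough, consider a score that never looks at community membership but does use prior police contact and locality; the fields, weights and records below are invented for illustration. If those inputs are themselves shaped by historically uneven policing, the outputs will be disparate all the same.

```python
# Hypothetical illustration: the scoring function never sees the 'community'
# field, yet yields disparate averages because 'prior_police_contact' and
# 'locality_flag' act as proxies shaped by historically uneven policing.

people = [
    {"name": "P1", "community": "majority", "prior_police_contact": 0, "locality_flag": 0},
    {"name": "P2", "community": "majority", "prior_police_contact": 1, "locality_flag": 0},
    {"name": "P3", "community": "marginalised", "prior_police_contact": 2, "locality_flag": 1},
    {"name": "P4", "community": "marginalised", "prior_police_contact": 3, "locality_flag": 1},
]

def risk_score(person):
    """A 'facially neutral' score: uses only non-community fields."""
    return 2 * person["prior_police_contact"] + 3 * person["locality_flag"]

# Average score per community, even though 'community' never enters the formula.
by_group = {}
for p in people:
    by_group.setdefault(p["community"], []).append(risk_score(p))

for group, scores in by_group.items():
    print(group, sum(scores) / len(scores))  # majority: 1.0, marginalised: 8.0
```

This is why scrutiny of the processing, and not merely of whether a sensitive field appears in the dataset, matters for democratic oversight.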
Finally, it needs to be seen how the use of such technologies may be made constitutionally compliant, and whether they can be made compliant at all. The biases ingrained in predictive policing are not an altogether new problem; they are symptoms of a larger malady plaguing policing across the globe. While such inherent biases can be mitigated only through increased sensitisation and an overhauling of the priorities of the criminal justice system, a step towards improved policing would be to recognise that the outcomes generated by such software necessarily require application of mind. It must also be accepted that predictive policing cannot conclusively determine the potential criminality of an individual. Acknowledging the biases embedded in source data should be accompanied by robust feedback mechanisms and periodic review of the use of such technology.
Ayan Gupta is a 3rd year student at National Law University, Delhi and a Death Penalty Research Fellow at Project 39A. Shikhar Aggarwal and Tanishka Goswami are both 4th year students at National Law University, Delhi and also work as Death Penalty Research Fellows at Project 39A.