
SHOULD JUDGES BE REPLACED BY ARTIFICIAL INTELLIGENCE?

Updated: May 17, 2023




Should judges be replaced by artificial intelligence? The consensus around AI is split between the idea that technological advancement is good and should be used to improve humans' ease of life by automating tasks, and the idea that AI is a dangerous Pandora's box. The latter view follows the argument that making humans more dependent on technology is a bad idea, especially for the highly important, discretionary tasks of the judiciary that affect other humans. Perhaps there is validity on both sides: one can maintain that advancement in technology is always a force for good, provided it is used in a just manner, and point to the harnessing of nuclear fission energy as an example.


A. CURRENT ROLES AND CAPABILITIES


As of now, there is not much use of AI in the judiciary or courts of England and Wales, especially in comparison to other professions, as Taylor & Osafo (2018) point out. However, China has over 100 robots in courts across the country (Bracher & Louw, 2018), although Zhou Qiang has stated that AI remains a resource and cannot yet replace a judge's expertise.


Kugler (2018) describes how computer programs are currently used in US courts, and notes that AI is already being used to analyse documents and data during the legal discovery process, as it can scan through these materials more quickly and cheaply than humans can. He also points out how AI now has the power to influence the outcomes of cases, citing the American case State v Loomis 881 N.W.2d 749 (Wis. 2016). Here, an AI system analysed data about the defendant and made sentencing recommendations to the judge.


In 2016, a study was carried out involving an algorithm that considers legal evidence and moral questions to predict the results of real cases. The program examined data sets for 584 ECHR cases and, for each case, was able to come to a decision by analysing the information. 79% of its verdicts matched the ECHR's actual rulings, according to Information Age (Ismail, 2016).
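For illustration only, the following is a minimal sketch (in Python, using scikit-learn) of the kind of text-classification approach such a study might rely on. The case snippets, labels, features and model choice are invented for this example and are not taken from the 2016 study itself.

```python
# Illustrative sketch only: a toy text-classification pipeline predicting
# "violation" vs "no violation" from the text of a case. The case snippets,
# labels and model below are invented and are not drawn from the 2016 study,
# which used its own dataset of 584 ECHR judgments.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

case_texts = [
    "applicant detained without judicial review for an extended period",
    "complaint concerned delays but domestic remedies provided redress",
    "prolonged pre-trial detention with no effective remedy available",
    "interference with correspondence was prescribed by law and proportionate",
]
outcomes = [1, 0, 1, 0]  # 1 = violation found, 0 = no violation

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(case_texts, outcomes)

# Predicted outcome for an unseen case description.
print(model.predict(["applicant held in detention with no review or remedy"]))
```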


Cellan-Jones (2017), in a BBC News article, describes another study involving "Case Cruncher Alpha" (a program developed by law students), which competed against 100 commercial lawyers to predict whether the Financial Ombudsman would allow a claim in mis-sold PPI cases. Out of the 775 predictions submitted, Case Cruncher Alpha had an accuracy rate of 86.6%, while the lawyers achieved only 66.3%. The AI technology was over 20 percentage points more reliable than trained commercial lawyers.


It must be noted that these programs had no budget and were developed by non-computer-science students in their spare time. When one considers what could be achieved with financial backing and experienced, better-qualified IT (and legal) experts working on a full-time project, it is clear that AI taking on at least lower-court roles requiring only basic legal application is a very reasonable possibility.


Furthermore, Randy Goebel, a University of Alberta computer science professor, worked with Japanese researchers to develop a program that can pass the Japanese bar exam. The team is now working to develop AI that can "weigh contradicting legal evidence, rule on cases, and predict the outcomes of future trials," according to Snowdon (2017) in CBC News. Considering the above, Goebel's project would seem to be a realistic aim.


Beyond this, Katz (2014) points out that AI can be used in highly specialised cases that require experts to testify. Where judges must evaluate an expert's use of complex science and methodology that is completely outside their realm of expertise, expert AI programs can digest that information and act as a "technical advisor", while still allowing the judge to be the gatekeeper for admissibility and leaving the jury better informed. She presents expert programs already in use in other fields, such as MYCIN and INTERNIST/CADUCEUS, used by physicians for diagnosis and treatment, as examples.


B. ETHICAL CONCERNS OF A.I.


In 1997, the Law Commission formulated the view that, in the absence of evidence to the contrary, machinery and computers should be presumed by the court to be in working order, and therefore reliable. Mason (2017) takes the view that this is a lazy and intellectually dishonest presumption that lacks justification and makes judges "blind to the facts of causation". He argues that computers cannot be presumed to be functioning correctly from the outset, and cites two incidents of AI-operated cars involved in collisions. His argument centres on the idea that only humans have the discretion needed to perform judicial roles, as AI will always have the propensity to fail, as any machine inevitably does.


Dalke (2013) poses the moral question of whether AI adjudication should be available only with the parties' consent, or whether it could be imposed upon one's case regardless. It could justifiably be argued that if one wants a human judge (who can consider their circumstances "humanly") to hear their case, allowing them that choice would uphold the rule of law and the right to a fair trial under Article 6 of the ECHR. The solution would be to fold this into the right of appeal: a decision made by AI could be appealed for consideration by human eyes.


He also asks whether a computer could take into account normative questions of what the law should be, competing societal values, and the ever-evolving cultural norms that influence outcomes. The question is interesting, as it is framed as one of possibility (can this be done?), but its fundamental intention is moral and ethical (should it be done?). As for the former interpretation, AI will most likely be able to take such considerations into account, as Warren (2016) concedes. As for the latter, should non-human entities decide on subjective issues concerning human values, and represent society? Such is the argument of Fulda (2012).


C. ADVANTAGES OF FUTURE USE


Morris (2015) points out how AI could bypass human flaws by operating in positions of power and influence without bias. He argues that it can be programmed to look only at relevant factors, in this case legal factors, and not sex, race, background and so on, which may subconsciously affect the decisions of human judges. This point ties in with the granting of bail and with sentencing, both of which require calculating a defendant's likelihood of reoffending. An algorithm designed for this task could analyse the facts of the case and the relevant variables and statistics, precisely calculating the probability of reoffending in order to issue an appropriate sentence. Such a calculation would arguably be more consistent and reliable than a human's.
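To make the idea concrete, below is a minimal, hypothetical sketch of a risk model restricted to legally relevant factors. The features, data and model are invented for illustration and do not represent any system actually used by courts.

```python
# Hypothetical sketch of a reoffending-risk model restricted to legally relevant
# factors (prior convictions, offence severity, time since last offence) and
# excluding protected characteristics such as sex, race or background.
# All feature names, data and the model choice are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: [prior convictions, offence severity score, months since last offence]
X_train = np.array([
    [0, 1, 60],
    [2, 3, 12],
    [5, 4, 3],
    [1, 2, 24],
])
y_train = np.array([0, 1, 1, 0])  # 1 = reoffended within two years, 0 = did not

model = LogisticRegression().fit(X_train, y_train)

# Estimated probability of reoffending for a new defendant,
# computed from the legal factors above only.
new_defendant = np.array([[1, 3, 18]])
print(model.predict_proba(new_defendant)[0, 1])
```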


There are also practical advantages, which have led to the rise of AI automation in other professions and industries. The programs mentioned above are able to complete tasks much more quickly than their human counterparts. Holme (2017) argues that AI software like IBM Watson would be much more efficient for contract discovery and analysis, analysing documents faster and flagging relevant parts without missing anything through human error. He states that this would reduce contract review time in due diligence by 60%, with humans needed only to set up and oversee the functioning of the computer. While not made in reference to judges, this supports the time-effectiveness of AI technology in legal tasks.


Another practical reason concerns costs. According to the GOV.uk Judicial Diversity Statistics (2018), in England and Wales as of 2018 there are 2,978 judges, of whom 1,703 are tribunal judges. Salaries range from £110,335 at group 7 up to £257,121. This means that replacing even only district judges, chairmen of employment tribunals, and lower-level circuit judges - those who merely apply the law to small, standard cases and do not create binding precedent - with AI programs capable of applying the law in this way would save hundreds of millions of pounds in tax money annually.
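As a rough, back-of-the-envelope illustration of the scale involved, assuming conservatively that every tribunal judge is paid at the bottom of the quoted range and ignoring pensions, support staff and the running costs of any AI system:

```python
# Rough lower-bound estimate using only the figures quoted above: assumes every
# tribunal judge is paid at the bottom of the salary range (group 7) and ignores
# pensions, supporting staff and the running costs of any AI system.
tribunal_judges = 1_703
group_7_salary = 110_335  # lowest quoted judicial salary, in GBP

annual_salary_bill = tribunal_judges * group_7_salary
print(f"£{annual_salary_bill:,}")  # £187,900,505 per year
```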


D. SOLUTIONS


IBM published "ethical use guidelines" containing five fundamental rules that should be adhered to when utilising AI technology. The three I believe hold value are "accountability", under which coders must always be held accountable for their creations; "value alignment", under which AI must be programmed in line with "human interests"; and "explainability", whereby AI's decision-making process must be explainable and AI must be able to explain its process at all times.


Firth-Butterfield (2017) supports imposing regulations on AI's development so that it is never in the position of creating precedent that overrides that of humans, and he also cites IBM's "ethical use guidelines" as an example. Further stressing the importance of this point, the European Union is set to release a series of guidelines for AI use later this year.


Holme (2017), who, as noted, supports AI taking up legal roles only in non-discretionary contract analysis, argues that "human input from professionals with an understanding of legal…. issues" is required alongside AI to monitor and supervise it like any other tool. If such input and oversight are required for those tasks, they would seem imperative for judicial roles.


CONCLUSION


In acknowledgement of the above, it is likely that AI will at some point assume certain judicial roles. Given the advantages, this technology should be supported. A solution that would address and remedy the ethical issues, while keeping the aforementioned advantages intact, follows:


It would be logical if all lower-level judges were replaced with AI, while the higher courts retained highly qualified human judges. This would keep the financial and time-saving advantages, as most judges fall into the former category and most cases are not taken beyond this stage. If a party is not satisfied and wishes for a human judge to hear their case, an appeal may be submitted, whereby a human judge would review the application; at that point, the case would be heard at a higher court before human judges. This reconciles the concerns raised by Dalke, as people still have a right of appeal to a human reviewer, and high-profile decisions will still be made by humans. It also puts to rest the issue of AI judging what the law should be and representing the societal values that govern humans, as the courts which carry this out, by making binding precedent, would remain in human hands.


Furthermore, the AI's judicial functioning in court and its decisions on cases should be overseen and monitored. For this, a judge would not be needed; a supervisor with legal (preferably LLB) and computer science or programming training would suffice, so that any malfunctions or faults in reasoning could be identified and corrected. This responds to the concern raised by Mason that AI, like any machine, is prone to failure, as well as to the possibility of rogue reasoning processes.

BIBLIOGRAPHY


CASES


State v Loomis 881 N.W.2d 749 (Wis. 2016)


ONLINE SOURCES


Bracher, P. & Louw, A. (2018), “Chinese court gets robot assistant”, Financial Institutions Legal Snapshot.


Cellan-Jones, R. (2017), “The robot lawyers are here - and they’re winning”, BBC.


Corfield, G. (2018), “Shall we have AI judging UK court cases? Top beak ponders the future”, The Register.


IBM (2018), “Everyday Ethics for Artificial Intelligence”.


Ismail, C. (2016), “How do you plead? Could AI soon be the new Judge”, Information Age.


Legal IT Insider (2018), “Deloitte Insight: Over 100,000 legal roles to be automated”.


Ministry of Justice (2018), “Judicial Diversity Statistics 2018”, Courts and Tribunals Judiciary.


Snowdon, W. (2017), “Robot judges? Edmonton research crafting artificial intelligence for courts”, CBC News.


WGS Observer, (2018), “Could an AI ever replace a judge in court?”.


JOURNALS


Baksi, C. (2016), ““Virtual lawyers”: rise of the machines?”, CILEx Journal, Vol. July, No. 8, 10-11.


Cross, M. (2018) “MoJ urged to test robot judges and online legal advice”, Law Society Gazette, Vol. 115, No. 25. 10.


Dalke, Dean L. (2013) ‘Can computers replace lawyers, mediators and judges?’, The Advocate. Vancouver Bar Association (Canada), 71(5), pp. 703–710.


Holme, D. (2017) “Using artificial intelligence: not pie in the sky”, PLC Magazine, Vol 28, No. 4, 92-93.


Firth-Butterfield, K. (2017) ‘Artificial intelligence and the law: more questions than answers?’, The SciTech Lawyer. American Bar Association, 14(1), pp. 28–31.


Fulda, J. S. (2012) “Implications of a logical paradox for computer-dispensed justice reconsidered: some key differences between minds and machines”, Artificial Intelligence and Law, Vol 20, No. 3, 321-333.


Griffin, P. (2018) “Artificial intelligence and the future of work”, Employment Law Journal, 190 (May), 22-24.


Katz, Pamela S. (2014) ‘Expert robot: using artificial intelligence to assist judges in admitting scientific expert testimony.’, Albany Law Journal of Science & Technology. Albany Law Journal of Science & Technology, 24(1), pp. 1–45.


Kerrigan, C. & Lammiman, K. (2016), “Law firms, artificial intelligence and smart contracts”, Butterworths Journal of International Banking & Financial Law, Vol 31, No. 3, 164-167.


Kugler, L. (2018) ‘AI Judges and Juries.’, Communications of the ACM. Association for Computing Machinery, Inc., 61(12), pp. 19–21.


Morris, R. (2015) ‘A Summary of the Twenty-Ninth AAAI Conference on Artificial Intelligence’, AI Magazine. La Canada: Association for the Advancement of Artificial Intelligence, 36(3), pp. 99–106.


Nakaoka, K. (2017), “Ethical guidelines for development of artificial intelligence”, Computer Law & Security Review, Vol 33, No. 3, 407-409.


Scheutz, M. (2017) ‘The Case for Explicit Ethical Agents’, AI Magazine. American Association for Artificial Intelligence, 38(4), pp. 57–64.


Mason, S. (2017), “Artificial Intelligence: oh really? And why judges and lawyers are central to the way we live now - but they don't know it”, C.T.L.R., Vol 23, No. 8, 213-225.


Taylor, D. & Osafo, N. (2018) “Artificial intelligence in the courtroom”, Law Society Gazette, Vol. 115 No. 13, 24.


Warren, Z. (2016) “Artificial intelligence in law: separating fact from fiction”, Legal Week, June 7th.
