ChatGPT and the law: Would humans trust an A.I. judge? Yes.
Artificial intelligence judging has become a reality. Last month, a Colombian judge used ChatGPT to generate part of his judicial opinion. Colombia is not alone. Estonia has piloted a robot judge, and the United States and Canada increasingly use A.I. tools in law.
These recent events have sparked a debate about “unethical” uses of A.I. in the judiciary. As the technological hurdles to A.I. judging recede, the remaining barriers are ones of law and ethics.
Would it be fair to citizens for an A.I. judge—an algorithmic decision-maker—to resolve disputes? This is a complex legal and ethical question, but one useful piece of data is the views of citizens themselves. We conducted experiments on a representative sample of 6,000 U.S. adults to examine this question. And the results are surprising: Citizens don’t always see A.I. in the courtroom as unfair.
This result—human judges are not always seen as fairer than A.I. judges—defies conventional wisdom. Commentators have long seen the administration of justice as a distinctively human enterprise. The task of judging calls not only for knowledge and accuracy but also for respect for the dignity of the parties involved. If A.I. were incapable of conveying such an attitude, then human judges would have an inimitable procedural justice advantage over machines.
At first blush, our results support this intuition that human judges are fairer. Ordinary citizens generally evaluate A.I. judges as less fair than human judges. In our first study, participants evaluated one of three scenarios: a contract dispute, bail determination, or criminal sentencing. Summing across all scenarios of the first study, human judges received an average procedural fairness score of approximately 4.4 on a 7-point scale. A.I. judges scored very slightly below 4. We call this perceived difference the “human-A.I. fairness gap.” All else equal, people evaluate legal proceedings before a human judge as fairer than legal proceedings before an A.I. judge. The human-A.I. fairness gap persists across diverse legal areas and issues.
However, we also discover that this human-A.I. gap can be partially offset by increasing the A.I. judge’s interpretability and ability to provide a hearing. A hearing affords a party the opportunity to speak and be heard. A decision is interpretable if it can be presented in a logical form and if it is possible to grasp how changes in inputs affect outcomes. Both a hearing and an interpretable decision enhance ordinary judgments of fairness, whether the decision-maker is a human or an A.I. Strikingly, a human-led proceeding that does not offer a hearing and renders uninterpretable decisions is not seen as being fairer than an A.I.-led proceeding that offers a hearing and renders interpretable decisions.
This is surprising since one might have believed a hearing in front of a machine to be hollow and meaningless. For ordinary citizens to feel they have been listened to seems to require a decision-maker possessing the uniquely human capacity for empathy. Yet, we find that a machine described as being able to recognize speech and facial expressions and trained to detect emotions can enhance people’s perceptions of procedural justice.
Similarly, much of the legal-ethical discourse over A.I. has revolved around the interpretability of algorithms. Often, the debate implicitly assumes that comparable decisions by humans are interpretable. However, commentators have noted that humans are quintessential black boxes. Human decision-making is not always transparent to the decision-maker, never mind other humans. And we find that people do care about the interpretability of both human and A.I. decision-making.
How do we get from these findings to the conclusion that the human-A.I. fairness gap might one day be offset? Well, even today, full hearings in front of human judges are not always feasible because of resource constraints. For example, an asylum hearing will often last only several minutes. The same is true for bail hearings. Similarly, human judicial decisions are not perfectly interpretable. Human legal opinions vary in their readability, and A.I. tools can already provide highly readable text. It is not clear that A.I. tools can currently produce more interpretable judicial opinions than humans, but their ability to pass as legal reasoners is impressive. For example, ChatGPT recently passed four University of Minnesota Law School exams.
Finally, our studies suggest that the human-A.I. fairness gap is mainly driven by the belief that human judges are still more accurate than machines. However, there are, and increasingly will be, domains where machines are demonstrably more accurate than humans, such as tumor classification. And experts predict that A.I. will exceed human performance in other fields over the next century.
There are many other factors that may influence citizens’ evaluations of human and A.I. judges. Both humans and A.I. have their advantages. On the question of accuracy, one consideration is whether the administration of justice is reliable or random. Human asylum adjudication has been described as akin to “roulette.” The grant or refusal of asylum depends very much on who among the human judges hears the case. Insofar as predictability matters for perceptions of judicial fairness, variability between human judges may count against them as adjudicators for some kinds of cases.
Even without considering such additional factors, simply adding a hearing and increasing the interpretability of A.I.-rendered decisions reduces the fairness gap. As such, some human judicial decisions today may be seen as less fair than advanced A.I. ones. And future developments in A.I. judging along the dimensions we have identified could even result in A.I. proceedings being accepted as generally fairer than human proceedings.
Of course, people’s ordinary intuitions about the fairness of A.I. judging do not fully resolve the underlying ethical and legal concerns. People can be mistaken about fairness or manipulated into believing a procedure is fair when it is not. But the opinions of those subject to the law should be taken into account when designing adjudicative institutions. And in some circumstances, people see having their day in human court as no fairer than having their day in robot court.
An expanded version of this work appears in the Harvard Journal of Law & Technology.