Automatic Justice: Shaping the Legal Mind of Tomorrow

Smart computing is changing the nature of legal work even as the profession struggles to understand its scope. Machines sophisticated enough to communicate intelligibly and naturally with their human hosts, and technology with the processing power to wrangle big data, are enhancing the way attorneys do their jobs and affecting the way they think.[1] Law practices are now set up in paperless offices, cases are litigated in high-tech courtrooms, and research is done almost exclusively online, all demanding higher levels of technical competency and professional responsibility.[2]

The vocabulary of technology is filling the legal landscape: algorithms, analytics, artificial intelligence (A.I.), automated decision-making, avatars, big data, cloud computing, code, cognitive computing, computer-aided, computer-generated, creative computing, cyborg, data driven, data mining, data science, data trails, deep learning, electronic discovery (e-discovery), expert systems, machine learning, metadata, mobile technology, mosaic theory, natural language, neural networks, paperless and virtual offices, pattern matching, predictive analytics, robotics, self-replicating technologies, smart data, smart technology, source code, and supercomputers. So, time-worn lexicons and practice libraries are being infiltrated by the latest computer terminologies and technical manuals.[3]

The work of lawyers, judges and government officials increasingly relies on the processing power of microchips.[4] So, the Bartleby of tomorrow is taking shape today.[5] From document assembly to document drafting, the borderlands of decision-making, data analysis, and communication will mark the progress of law and raise new questions for the administration of justice.[6] And the breadth of information competence will need to expand with each new generation of technology.[7]

This article collects recent and notable works on the automation of lawyering, the administration of law and legal thinking.[8]

BOOKS AND REPORTS

Amplifying Human Potential: Towards Purposeful Artificial Intelligence (Infosys 2017)
“Infosys commissioned independent market research company Vanson Bourne to investigate the approach and attitudes that senior decision-makers in large organizations have towards A.I. technology and how they see the future application and development of A.I. in their industries. The study also sought to measure and score organizational maturity, to create an index and set of profiles for the countries examined.”

Code-Dependent: Pros and Cons of the Algorithm Age (Pew Research Center 2017)
“Algorithms are aimed at optimizing everything. They can save lives, make things easier and conquer chaos. Still, experts worry they can also put too much control in the hands of corporations and governments, perpetuate bias, create filter bubbles, cut choices, creativity and serendipity, and could result in greater unemployment.”

Discussion Leader’s Guide: Procedural-Fairness Video Scenarios (NCSC)
“The National Center for State Courts has prepared four video scenarios that can be viewed and then discussed to supplement training programs on procedural fairness (also called procedural justice) for judges and court staff. These videos explore how procedural-fairness principles may best be deployed in situations judges and court staff face as litigants encounter the court system.” See, e.g., “The Computerized Judge (9:15): A judge hearing a proceeding to terminate a mother’s parental rights sits in a modern courtroom, where he accesses the court file on one computer, the court calendar on an iPad, and texts about emergency warrant requests on an iPhone. This leads to a motion for mistrial based on the judge’s inattention.”

Future of Jobs and Jobs Training (Pew Research Center 2017)
“As robots, automation and artificial intelligence perform more tasks and there is massive disruption of jobs, experts say a wider array of education and skills-building programs will be created to meet new demands. There are two uncertainties: Will well-prepared workers be able to keep up in the race with A.I. tools? And will market capitalism survive?”

National Artificial Intelligence Research and Development Strategic Plan (National Science and Technology Council 2016)
“Artificial intelligence (A.I.) is a transformative technology that holds promise for tremendous societal and economic benefit. A.I. has the potential to revolutionize how we live, work, learn, discover, and communicate. A.I. research can further our national priorities, including increased economic prosperity, improved educational opportunities and quality of life, and enhanced national and homeland security. Because of these potential benefits, the U.S. government has invested in A.I. research for many years. Yet, as with any significant technology in which the Federal government has interest, there are not only tremendous opportunities but also a number of considerations that must be taken into account in guiding the overall direction of Federally-funded R&D in A.I.. . . . This National Artificial Intelligence R&D Strategic Plan establishes a set of objectives for Federally-funded A.I. research, both research occurring within the government as well as Federally-funded research occurring outside of government, such as in academia. The ultimate goal of this research is to produce new A.I. knowledge and technologies that provide a range of positive benefits to society, while minimizing the negative impacts.”

Preparing for the Future of Artificial Intelligence (National Science and Technology Council 2016)
“As a contribution toward preparing the United States for a future in which Artificial Intelligence (A.I.) plays a growing role, we survey the current state of A.I., its existing and potential applications, and the questions that are raised for society and public policy by progress in A.I.. We also make recommendations for specific further actions by Federal agencies and other actors. A companion document called the National Artificial Intelligence Research and Development Strategic Plan lays out a strategic plan for Federally-funded research and development in A.I..”

Robots in Law: How Artificial Intelligence is Transforming Legal Services (Ark Group 2016)
“Although 2016 was a breakthrough year for artificial intelligence (A.I.) in legal services in terms of market awareness and significant take-up, legal A.I. represents evolution rather than revolution. Since the first ‘robot lawyers’ started receiving mainstream press coverage, many law firms, other legal service providers and law colleges are being asked what they are doing about A.I.. Robots in Law: How Artificial Intelligence is Transforming Legal Services is designed to provide a starting point in the form of an independent primer for anyone looking to get up to speed on A.I. in legal services. The book is organized into four distinct sections: Part I: Legal A.I. – Beyond the hype Part II: Putting A.I. to work Part III: A.I. giving back – Return on investment Part IV: Looking ahead. The first three present an in-depth overview, and analysis, of the current legal A.I. landscape; the final section includes contributions from A.I. experts with connections to the legal space, on the prospects for legal A.I. in the short-term future. Along with the emergence of New Law and the burgeoning lawtech start-up economy, A.I. is part of a new dynamic in legal technology and it is here to stay. The question now is whether A.I. will find its place as a facilitator of legal services delivery, or whether it will initiate a shift in the value chain that will transform the legal business model.”

SCHOLARLY ARTICLES

Accessing Law: An Empirical Study Exploring the Influence of Legal Research Medium, 16 Vand. J. Ent. & Tech. L. 757 (2014)
“The legal profession is presently engaged in an uncontrolled experiment. Attorneys now locate and access legal authorities primarily through electronic means. Although this shift to an electronic research medium radically changes how attorneys discover and encounter law, little empirical work investigates impacts from the shift to an electronic medium. This Article presents the results of one of the most robust empirical studies conducted to date comparing research processes using print and electronic sources. While the study presented in this Article was modest in scope, the extent and type of the differences that it reveals are notable. Some of the observed differences between print and electronic research processes confirm predictions offered, but never before confirmed, about how the research medium changes the research process. This Article strongly supports calls for the legal profession and legal academy to be more attentive to the implications of the shift to electronic research.”

Accountable Algorithms, 165 U. Pa. L. Rev. 633 (2017)
“Part I of this Article provides an accessible and concise introduction to foundational computer science concepts that can be used to verify and demonstrate compliance with key standards of legal fairness for automated decisions without revealing key attributes of the decision or the process by which the decision was reached. Part II then describes how these techniques can assure that decisions are made with the key governance attribute of procedural regularity, meaning that decisions are made under an announced set of rules consistently applied in each case. We demonstrate how this approach could be used to redesign and resolve issues with the State Department’s diversity visa lottery. In Part III, we go further and explore how other computational techniques can assure that automated decisions preserve fidelity to substantive legal and policy choices. We show how these tools may be used to assure that certain kinds of unjust discrimination are avoided and that automated decision processes behave in ways that comport with the social or legal standards that govern the decision. We also show how algorithmic decision-making may even complicate existing doctrines of disparate treatment and disparate impact, and we discuss some recent computer science work on detecting and removing discrimination in algorithms, especially in the context of big data and machine learning. And lastly in Part IV, we propose an agenda to further synergistic collaboration between computer science, law and policy to advance the design of automated decision processes for accountability.”
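For readers unfamiliar with the computational techniques invoked above, the following is a minimal, hypothetical sketch (not the authors' implementation) of the commit-then-verify idea behind procedural regularity: an agency publishes a cryptographic commitment to its decision rule and random seed before running a lottery, then reveals both so anyone can check that the published winners follow the announced rule. All names and data below are invented.

```python
import hashlib
import hmac
import random

def commit(rule_source: str, seed: bytes) -> str:
    """Digest the agency publishes BEFORE any decisions are made."""
    return hashlib.sha256(rule_source.encode() + seed).hexdigest()

def run_lottery(applicants: list[str], winners: int, seed: bytes) -> list[str]:
    """The announced rule: a seeded, reproducible uniform random draw."""
    rng = random.Random(seed)
    return sorted(rng.sample(applicants, winners))

def verify(commitment: str, rule_source: str, seed: bytes,
           applicants: list[str], winners: int, published: list[str]) -> bool:
    """Anyone can re-run the committed rule and check the published outcome."""
    same_rule = hmac.compare_digest(commitment, commit(rule_source, seed))
    same_result = run_lottery(applicants, winners, seed) == published
    return same_rule and same_result

# Hypothetical usage: commit first, decide, then reveal rule, seed, and winners.
rule_text = "seeded uniform draw of 2 applicants"
seed = b"agency-held-random-seed"
applicants = ["A", "B", "C", "D", "E"]
commitment = commit(rule_text, seed)          # published in advance
selected = run_lottery(applicants, 2, seed)   # the actual decision
print(verify(commitment, rule_text, seed, applicants, 2, selected))  # True
```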

Algorithm as a Human Artifact: Implications for Legal {Re}Search, SSRN (2016)
“When legal researchers search in online databases for the information they need to solve a legal problem, they need to remember that the algorithms that are returning results to them were designed by humans. The world of legal research is a human-constructed world, and the biases and assumptions the teams of humans that construct the online world bring to the task are imported into the systems we use for research. This article takes a look at what happens when six different teams of humans set out to solve the same problem: how to return results relevant to a searcher’s query in a case database. When comparing the top ten results for the same search entered into the same jurisdictional case database in Casetext, Fastcase, Google Scholar, Lexis Advance, Ravel, and Westlaw, the results are a remarkable testament to the variability of human problem solving. There is hardly any overlap in the cases that appear in the top ten results returned by each database. An average of forty percent of the cases were unique to one database, and only about 7% of the cases were returned in search results in all six databases. It is fair to say that each different set of engineers brought very different biases and assumptions to the creation of each search algorithm. One of the most surprising results was the clustering among the databases in terms of the percentage of relevant results. The oldest database providers, Westlaw and Lexis, had the highest percentages of relevant results, at 67% and 57%, respectively. The newer legal database providers, Fastcase, Google Scholar, Casetext, and Ravel, were also clustered together at a lower relevance rate, returning approximately 40% relevant results.

Legal research has always been an endeavor that required redundancy in searching; one resource does not usually provide a full answer, just as one search will not provide every necessary result. The study clearly demonstrates that the need for redundancy in searches and resources has not faded with the rise of the algorithm. From the law professor seeking to set up a corpus of cases to study, the trial lawyer seeking that one elusive case, the legal research professor showing students the limitations of algorithms, researchers who want full results will need to mine multiple resources with multiple searches. And more accountability about the nature of the algorithms being deployed would allow all researchers to craft searches that would be optimally successful.”
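The overlap arithmetic reported above can be made concrete with a small sketch; the database names are taken from the study, but the result sets below are invented case identifiers, not the study's data.

```python
from collections import Counter

# Invented top-ten result sets for the same search in six databases.
results = {
    "Casetext":       {"c1", "c2", "c3", "c4", "c5", "c6", "c7", "c8", "c9", "c10"},
    "Fastcase":       {"c1", "c3", "c11", "c12", "c13", "c14", "c15", "c16", "c17", "c18"},
    "Google Scholar": {"c2", "c3", "c19", "c20", "c21", "c22", "c23", "c24", "c25", "c26"},
    "Lexis Advance":  {"c3", "c5", "c27", "c28", "c29", "c30", "c31", "c32", "c33", "c34"},
    "Ravel":          {"c3", "c7", "c35", "c36", "c37", "c38", "c39", "c40", "c41", "c42"},
    "Westlaw":        {"c3", "c9", "c43", "c44", "c45", "c46", "c47", "c48", "c49", "c50"},
}

# How many databases returned each case?
appearances = Counter(case for hits in results.values() for case in hits)

total = len(appearances)
unique_share = sum(1 for n in appearances.values() if n == 1) / total
all_six_share = sum(1 for n in appearances.values() if n == len(results)) / total

print(f"{unique_share:.0%} of retrieved cases appear in only one database")
print(f"{all_six_share:.0%} of retrieved cases appear in all six databases")
```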

Algorithm Aversion: People Erroneously Avoid Algorithms After Seeing Them Err, SSRN (2015)
“Research shows that evidence-based algorithms more accurately predict the future than do human forecasters. Yet, when forecasters are deciding whether to use a human forecaster or a statistical algorithm, they often choose the human forecaster. This phenomenon, which we call algorithm aversion, is costly, and it is important to understand its causes. We show that people are especially averse to algorithmic forecasters after seeing them perform, even when they see them outperform a human forecaster. This is because people more quickly lose confidence in algorithmic than human forecasters after seeing them make the same mistake. In five studies, participants either saw an algorithm make forecasts, a human make forecasts, both, or neither. They then decided whether to tie their incentives to the future predictions of the algorithm or the human. Participants who saw the algorithm perform were less confident in it, and less likely to choose it over an inferior human forecaster. This was true even among those who saw the algorithm outperform the human.”

Algorithmic Entities, SSRN (2017)
“This Article argues that algorithmic entities — legal entities that have no human controllers — greatly exacerbate the threat of artificial intelligence. Algorithmic entities are likely to prosper first and most in criminal, terrorist, and other anti-social activities because that is where they have their greatest comparative advantage over human-controlled entities. Control of legal entities will contribute to algorithms’ prosperity by providing them with identities that will enable them to accumulate wealth and participate in commerce. Four aspects of corporate law make the human race vulnerable to the threat of algorithmic entities. First, algorithms can lawfully have exclusive control of not just American LLC’s but also a large majority of the entity forms in most countries. Second, entities can change regulatory regimes quickly and easily through migration. Third, governments — particularly in the United States — lack the ability to determine who controls entities they charter and so cannot determine which have non-human controllers. Lastly, corporate charter competition, combined with ease of entity migration, makes it virtually impossible for any government to regulate algorithmic control of entities.”

Artificial Intelligence: Robots, Avatars, and the Demise of the Human Mediator, 25 Ohio St. J. on Disp. Resol. 105 (2010)
“As technology has advanced, many have wondered whether (or simply when) artificial intelligent devices will replace the humans who perform complex, interactive, interpersonal tasks such as dispute resolution. Has science now progressed to the point that artificial intelligence devices can replace human mediators, arbitrators, dispute resolvers and problem solvers? Can humanoid robots, attractive avatars and other relational agents create the requisite level of trust and elicit the truthful, perhaps intimate or painful, disclosures often necessary to resolve a dispute or solve a problem? This article will explore these questions. Regardless of whether the reader is convinced that the demise of the human mediator or arbitrator is imminent, one cannot deny that artificial intelligence now has the capability to assume many of the responsibilities currently being performed by alternative dispute resolution (ADR) practitioners. It is fascinating (and perhaps unsettling) to realize the complexity and seriousness of tasks currently delegated to avatars and robots. This article will review some of those delegations and suggest how the artificial intelligence developed to complete those assignments may be relevant to dispute resolution and problem solving. “Relational Agents,” which can have a physical presence such as a robot, be embodied in an avatar, or have no detectable form whatsoever and exist only as software, are able to create long term socio-economic relationships with users built on trust, rapport and therapeutic goals. Relational agents are interacting with humans in circumstances that have significant consequences in the physical world. These interactions provide insights as to how robots and avatars can participate productively in dispute resolution processes. Can human mediators and arbitrators be replaced by robots and avatars that not only physically resemble humans, but also act, think, and reason like humans? And to raise a particularly interesting question, can robots, avatars and other relational agents look, move, act, think, and reason even “better” than humans?”

Artificial Intelligence, Legal Research and Law Librarians, AALL Spectrum, May/June 2017, at 17
“Artificial intelligence (A.I.) and its legal practice applications are grabbing headlines in the legal industry. Ever since the early success stories of IBM Watson, the legal press has been buzzing with articles that debate whether A.I. is a threat or hope and whether A.I. will transform, disrupt, revolutionize, or even remake the legal industry. … Now it’s time to focus on the law librarian’s role regarding A.I. applications in legal research and aiding practitioners in minimizing potential risks due to A.I. utilization.”

Big Data and Predictive Reasonable Suspicion, 163 U. Pa. L. Rev. 327 (2015)
“This article traces the consequences in the shift from a “small data” reasonable suspicion doctrine, focused on specific, observable actions of unknown suspects, to the “big data” reality of an interconnected information rich world of known suspects. With more targeted information, police officers on the streets will have a stronger predictive sense about the likelihood that they are observing criminal activity. This evolution, however, only hints at the promise of big data policing. The next phase will be using existing predictive analytics to target suspects without any actual observation of criminal activity, merely relying on the accumulation of various data points. Unknown suspects will become known, not because of who they are but because of the data they left behind. Using pattern matching techniques through networked databases, individuals will be targeted out of the vast flow of informational data. This new reality subverts reasonable suspicion from being a source of protection against unreasonable stops, to a means of justifying those same stops.”

Can Machines Replace the Human Brain? A Review of Litigation Outcome Prediction Methods for Construction Disputes, SSRN (2015)
“Construction projects are naturally complicated and involve a large number of unpredictable as well as external interrelated factors. The value of complex construction projects is in excess of billions of dollars. As a result, disputes between the contracting parties are critical and difficult to resolve. Traditionally, litigation was the only avenue to resolve such disputes. However, given its complicated nature and the technicalities involved, construction experts deployed alternative dispute resolution methods such as arbitration and mediation. Each varies in the resources involved and the legal consequences. Litigation, however, is found to be one of the most expensive and time-consuming methods. Moreover, the results of litigation are not guaranteed. Therefore, researchers have attempted to predict the outcome of litigation in the field of construction disputes to give the contracting parties a good estimate of the expected outcome. This would be a good tool for deciding whether or not a party should file a litigation case.”

Can Robots Be Lawyers? Computers, Lawyers, and the Practice of Law, SSRN (2016)
“We assess frequently-advanced arguments that automation will soon replace much of the work currently performed by lawyers. Our assessment addresses three core weaknesses in the existing literature: (i) a failure to engage with technical details to appreciate the capacities and limits of existing and emerging software; (ii) an absence of data on how lawyers divide their time among various tasks, only some of which can be automated; and (iii) inadequate consideration of whether algorithmic performance of a task conforms to the values, ideals and challenges of the legal profession. Combining a detailed technical analysis with a unique data set on time allocation in large law firms, we estimate that automation has an impact on the demand for lawyers’ time that while measurable, is far less significant than popular accounts suggest. We then argue that the existing literature’s narrow focus on employment effects should be broadened to include the many ways in which computers are changing (as opposed to replacing) the work of lawyers. We show that the relevant evaluative and normative inquiries must begin with the ways in which computers perform various lawyering tasks differently than humans. These differences inform the desirability of automating various aspects of legal practice, while also shedding light on the core values of legal professionalism.”

Cyberdelegation and the Administrative State, SSRN (2017)
“This paper explores questions and trade-offs associated with delegating administrative agency decisions to computer algorithms, neural networks, and similar examples of “artificial intelligence,” and offers the following preliminary observations to further discussion of the opportunities and risks. First, neither conventional expert systems nor neural networks (or other machine learning mechanisms) are in a position to resolve (without human intervention) context-specific debates about society’s goals for regulation or administrative adjudication – and these debates are often inherent in the implementation of statutes. Those goals must also inform whether we assign value to aspects of human cognition that contrast with what computers can (presently) accomplish, or what might be conventionally defined as rational in a decision-theoretic sense. Second, society must consider path-dependent consequences and associated cybersecurity risks that could arise from reliance on computers to make and support decisions. Such consequences include the erosion of individual and organizational knowledge over time. Third, it may prove difficult to limit the influence of computer programs even if they are meant to be mere decision support tools rather than the actual means of making a decision. Finally, heavy reliance on computer programs – particularly adaptive ones that modify themselves over time – may further complicate public deliberation about administrative decisions, because few if any observers will be entirely capable of understanding how a given decision was reached.”

Databases, E-Discovery, and Criminal Law, 15 Rich. J.L. & Tech. 6 (2009)
“The enduring value of the Constitution is the fundamental approach to human rights transcending time and technology. The modern complexity and variety of electronically stored information was unknown in the eighteenth century, but the elemental due process concepts forged then can be applied now. At some point, the accumulation of information surpassed the boundaries of living witnesses and paper records. The advent of computers and databases ushered in an entirely new order, giving rise to massive libraries of factual details and powerful investigative tools. But electronically collected information sources are a double-edged sword. Their accuracy and reliability are critical issues in the hands of prosecutors and their accessibility a hard-won necessity in preparing a defense. This article examines the use of computer databases and electronic evidence from both standpoints. With limited guidance from federal and state criminal discovery rules, the courts have had to rely on constitutional principles and analogies to civil procedure when faced with database and electronic document discovery requests. A tension exists between the government’s proprietary interest in preserving the sanctity of its databases and the right of the defense to assail the accuracy of the databases’ output or to use them as investigative tools. As the gold standards of forensic science have come to be questioned, so too the inviolability of government databases must be rethought. And the defense’s right to prepare its case and receive a fair trial makes it necessary to use database knowledge comparable to the prosecution. Much of this information is generated solely by the government or its experts. The civilian alternatives are prohibitively expensive, inadequate, or non-existent. This review will highlight the problems created by disparities in resources and the role of constitutional and procedural remedies in the future development of criminal electronic discovery.”

Dawn of Fully Automated Contract Drafting: Machine Learning Breathes New Life into a Decades-Old Promise, 15 Duke L. & Tech. Rev. 216 (2017)
“Technological advances within contract drafting software have seemingly plateaued. Despite the decades-long hopes and promises of many commentators, critics doubt this technology will ever fully automate the drafting process. But, while there has been a lack of innovation in contract drafting software, technological advances have continued to improve contract review and analysis programs. “Machine learning,” the leading innovative force in these areas, has proven incredibly efficient, performing in mere minutes tasks that would otherwise take a team of lawyers tens of hours. Some contract drafting programs have already experimented with machine learning capabilities, and this technology may pave the way for the full automation of contract drafting. Although intellectual property, data access, and ethical obstacles may delay complete integration of machine learning into contract drafting, full automation is likely still viable.”

Death of Rules and Standards, SSRN (2016)
“Scholars have examined the lawmakers’ choice between rules and standards for decades. This paper, however, explores the possibility of a new form of law that renders that choice unnecessary. Advances in technology (such as big data and artificial intelligence) will give rise to this new form – the micro-directive – which will provide the benefits of both rules and standards without the costs of either. Lawmakers will be able to use predictive and communication technologies to enact complex legislative goals that are translated by machines into a vast catalog of simple commands for all possible scenarios. When an individual citizen faces a legal choice, the machine will select from the catalog and communicate to that individual the precise context-specific command (the micro-directive) necessary for compliance. In this way, law will be able to adapt to a wide array of situations and direct precise citizen behavior without further legislative or judicial action. A micro-directive, like a rule, provides a clear instruction to a citizen on how to comply with the law. But, like a standard, a micro-directive is tailored to and adapts to each and every context. While predictive technologies such as big data have already introduced a trend toward personalized default rules, in this paper we suggest that this is only a small part of a larger trend toward context-specific laws that can adapt to any situation. As that trend continues, the fundamental cost trade-off between rules and standards will disappear, changing the way society structures and thinks about law.”
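The micro-directive idea lends itself to a tiny, hypothetical sketch (not drawn from the article itself): a broad "drive at a safe speed" standard compiled in advance into precise, context-specific commands delivered at the moment of choice.

```python
# Hypothetical micro-directive lookup: a broad standard ("drive at a safe speed")
# pre-compiled into precise commands that vary with the driver's context.
def safe_speed_directive(posted_mph: int, raining: bool,
                         school_zone: bool, school_in_session: bool) -> int:
    speed = posted_mph
    if raining:
        speed -= 10                       # pre-computed adjustment for weather
    if school_zone and school_in_session:
        speed = min(speed, 20)            # pre-computed adjustment for location and time
    return max(speed, 5)

# The same legislative goal yields different precise commands in different contexts.
print(safe_speed_directive(45, raining=False, school_zone=False, school_in_session=False))  # 45
print(safe_speed_directive(45, raining=True, school_zone=True, school_in_session=True))     # 20
```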

Defending Data, 88 S. Cal. L. Rev. (2015)
“Defending Data begins by describing the data deficit in public defense and discussing the systemic, technological, and cultural reasons for this data-void. Then, Defending Data explains the systems approach to high-stakes professional practices and explores how public defenders can adapt this approach to the delivery of indigent defense services. Based on this analysis, Defending Data proposes a systems approach to public defense and offers a preliminary typology of the data that such public defenders should collect and analyze. Using concrete examples, Defending Data demonstrates how public defender systems might implement a data-driven systems approach. Defending Data concludes with a call for the indigent defense community to reimagine indigent defense by establishing national standards for defending data.”

Defining Autonomy in the Context of Tort Liability: Is Machine Learning Indicative of Robotic Responsibility?, SSRN (2016)
“Bypassing whether robots can be liable, this Paper focuses on the extent to which machine learning heightens robotic accountability, and asks, at what point ought the law hold robots liable because the decision creating the harm was not a function of software programming on the front end, but a function of robotic choice? This Paper recommends a variation of Ugo Pagallo’s “digital peculium” liability scheme for “hard cases” – where fully autonomous robots make decisions absent appropriate linkage to the original programmer and, thus, fall outside the scope of pre-programmed uncertainty. Situating Pagallo’s “hard cases” in the larger abstraction laid out by H.L.A. Hart and Ronald Dworkin, this Paper concludes by considering whether determination of a right answer, or conclusive indetermination of any, exists for application of legal accountability to ever-increasing robotic autonomy.”

Electronically Manufactured Law, 22 Harv. J. Law & Tech. 223 (2008)
“We increasingly communicate and experience law through an electronic medium. Existing scholarship suggests that prior changes in the communication of law – from oral to scribal, scribal to moveable type, the widespread publication of cases – influenced the development of the law, including by contributing to the rise of basic concepts such as precedent. One element of the present shift in the communication of law is that the process by which we find the law has been transformed. Specifically, legal case research, once conducted exclusively through the use of print-based resources (reporter volumes, case digests, treatises), is now conducted primarily through searches of electronic legal databases. This Article employs principles of cognitive psychology to generate empirical predictions about how the shift from a print-based to an electronic research process changes researcher behavior and research outcomes. The Article then assesses the broader impacts of these changes with respect to the content and practice of law.

Specifically, the Article identifies three changes to the research process that are salient for predicting the broader impacts of the shift from print-based to electronic research: (1) Electronic researchers are not guided by the key system to the same extent as print researchers when identifying relevant theories, principles, and cases; (2) Electronic researchers do not encounter and interpret individual cases through the lens of key system information to the same extent as print researchers; and (3) Electronic researchers are exposed to more and different case texts than print researchers. The Article then considers these basic changes in light of principles of cognitive psychology, including the influence of labeling, categorization, and confirmatory bias on understanding, and offers empirical predictions about the impacts of the shift from print-based to electronic research.”

Ethics of Algorithms: Mapping the Debate, SSRN (2017)
“In information societies, operations, decisions and choices previously left to humans are increasingly delegated to algorithms, which may advise, if not decide, about how data should be interpreted and what actions should be taken as a result. More and more often, algorithms mediate social processes, business transactions, governmental decisions, and how we perceive, understand, and interact among ourselves and with the environment. Gaps between the design and operation of algorithms and our understanding of their ethical implications can have severe consequences affecting individuals as well as groups and whole societies. This paper makes three contributions to clarify the ethical importance of algorithmic mediation. It provides a prescriptive map to organise the debate. It reviews the current discussion of ethical aspects of algorithms. And it assesses the available literature in order to identify areas requiring further work to develop the ethics of algorithms.”

From Jeopardy! to Jaundice: The Medical Liability Implications of Dr. Watson and Other Artificial Intelligence Systems, 73 La. L. Rev. 1049 (2013)
“Artificial intelligence systems like Watson can fill gaps that healthcare shortages cause and enhance the quality of care that patients receive. Creating a streamlined approach for assessing liability against artificial intelligence systems will encourage their use by clarifying unknown potential liabilities. By combining elements from medical malpractice, vicarious liability, products liability, and enterprise liability, the law can create a uniform approach for artificial intelligence systems, thereby eliminating any inequities that may arise from courts applying different theories of liability. This Comment explores how the law should assess liability against artificial intelligence systems. Part I of this Comment discusses Watson, other areas of cutting-edge medical technology, and the law’s response to them. Part II analyzes current liability regimes–including medical malpractice, products liability, and vicarious liability–to determine how effectively these legal mechanisms can apply to artificial intelligence systems. Part II also explains why current liability regimes are inadequate. Finally, Part III proposes an integrated system for assessing liability against artificial intelligence systems based on enterprise liability.”

General Approach for Predicting the Behavior of the Supreme Court of the United States, PLOS One, Apr. 12, 2017
“Building on developments in machine learning and prior work in the science of judicial prediction, we construct a model designed to predict the behavior of the Supreme Court of the United States in a generalized, out-of-sample context. To do so, we develop a time-evolving random forest classifier that leverages unique feature engineering to predict more than 240,000 justice votes and 28,000 case outcomes over nearly two centuries (1816-2015). Using only data available prior to decision, our model outperforms null (baseline) models at both the justice and case level under both parametric and non-parametric tests. Over nearly two centuries, we achieve 70.2% accuracy at the case outcome level and 71.9% at the justice vote level. More recently, over the past century, we outperform an in-sample optimized null model by nearly 5%. Our performance is consistent with, and improves on, the general level of prediction demonstrated by prior work; however, our model is distinctive because it can be applied out-of-sample to the entire past and future of the Court, not a single term. Our results represent an important advance for the science of quantitative legal prediction and portend a range of other potential applications.”
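The study's walk-forward ("time-evolving") evaluation can be sketched in a few lines; the snippet below uses a scikit-learn random forest on synthetic stand-in features, assumed for illustration rather than taken from the authors' engineered dataset.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Synthetic stand-in: 80 justice votes per term, 12 engineered features each.
terms = np.arange(1990, 2016)
X = rng.normal(size=(len(terms) * 80, 12))
term_of_row = np.repeat(terms, 80)
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=1.0, size=len(X)) > 0).astype(int)

# Walk-forward evaluation: train only on earlier terms, predict the next term.
accuracies = []
for t in terms[5:]:
    train, test = term_of_row < t, term_of_row == t
    model = RandomForestClassifier(n_estimators=200, random_state=0)
    model.fit(X[train], y[train])
    accuracies.append(accuracy_score(y[test], model.predict(X[test])))

print(f"mean out-of-sample accuracy on synthetic data: {np.mean(accuracies):.3f}")
```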

How the Machine ‘Thinks:’ Understanding Opacity in Machine Learning Algorithms, SSRN (2015)
“This article considers the issue of opacity as a problem for socially consequential mechanisms of classification and ranking, such as spam filters, credit card fraud detection, search engines, news trends, market segmentation and advertising, insurance or loan qualification, and credit scoring. These mechanisms of classification all frequently rely on computational algorithms, and in many cases on machine learning algorithms to do this work. In this article, I [Jenna Burrell] draw a distinction between three forms of opacity: (1) opacity as intentional corporate or state secrecy (2) opacity as technical illiteracy, and (3) an opacity that arises from the characteristics of machine learning algorithms and the scale required to apply them usefully. The analysis in this article gets inside the algorithms themselves. I cite existing literatures in computer science, known industry practices (as they are publicly presented), and do some testing and manipulation of code as a form of lightweight code audit. I argue that recognizing the distinct forms of opacity that may be coming into play in a given application is key to determining which of a variety of technical and non-technical solutions could help to prevent harm.”

Human Error in Software Development and Inspection (Ray Panko’s Human Error Website 2014)
“Professional programmers are taught that they will make errors. In fact, data from over 10,000 code inspections in industry suggests that they will make undetected errors in 2% to 5% of all lines of code at the end of module development. This knowledge has led to extensive testing in commercial software development. In commercial software development, testing consumes between a quarter and half of all development resources [Jones 1998, Kimberland 2004], and this does not even count rework by developers. There are several types of testing for software. One is code inspection, in which a team of software engineers examines a module of code to identify errors. This work is done in teams because individual inspectors only find a minority of all errors in a module. Even teams typically find about 60% to 75% of the errors in a module. Consequently, there are multiple rounds of testing during commercial software development. By the time the product is delivered, the error rate is slashed but never eliminated. Putnam & Myers [1992] surveyed data from 1,486 projects involving 117 million lines of code written in 78 languages, measuring faults per line of code at final inspection, after unit testing of individual pieces. The error rate at final inspection was 0.1% to 0.3%.”
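As illustrative arithmetic only (not the original study's calculation), the figures quoted above hang together: repeated inspection rounds that each catch roughly 60% to 75% of the remaining faults drive an initial 2% to 5% defect rate per line of code toward the 0.1% to 0.3% range reported at final inspection.

```python
# Illustrative arithmetic: each inspection round removes a fixed share of the
# faults that remain, so the defect rate decays geometrically across rounds.
def remaining_defect_rate(initial_rate: float, catch_rate: float, rounds: int) -> float:
    rate = initial_rate
    for _ in range(rounds):
        rate *= (1 - catch_rate)
    return rate

for initial in (0.02, 0.05):                  # 2% to 5% of lines at module end
    for catch in (0.60, 0.75):                # share of remaining faults caught per round
        final = remaining_defect_rate(initial, catch, rounds=3)
        print(f"start {initial:.0%}, catch {catch:.0%}/round, 3 rounds -> {final:.3%}")
```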

“I, Robot – I, Criminal” – When Science Fiction Becomes Reality: Legal Liability of A.I. Robots Committing Criminal Offenses, 22 Syracuse Sci. & Tech. L. Rep. 1 (2010)
“Can society impose criminal liability upon robots? The technological world has changed rapidly. Simple human activities are being replaced by robots. As long as humanity used robots as mere tools, there was no real difference between robots and screwdrivers, cars or telephones. When robots became sophisticated, we used to say that robots think for us. The problem began when robots evolved from ‘thinking’ machines into thinking machines, or Artificial Intelligence Robots. Could they become dangerous? Unfortunately, they already are. People’s fear of A.I. robots, in most cases, is based on the fact that A.I. robots are not considered to be subject to the law, specifically to criminal law. This note explores which kind of laws or ethics are appropriate to govern the behavior of A.I. Robots and who has the authority to decide a robot’s fate.”

I Think, Therefore I Invent: Creative Computers and the Future of Patent Law, 57 B.C. L. Rev. 1079 (2016)
“Drawing on dynamic principles of statutory interpretation and taking analogies from the copyright context, this article argues that creative computers should be considered inventors under the Patent and Copyright Clause of the Constitution. Treating nonhumans as inventors would incentivize the creation of intellectual property by encouraging the development of creative computers. The article proceeds to address a host of challenges that would result from computer inventorship, ranging from ownership of computer-based inventions, to displacement of human inventors, to the need for consumer protection policies. This analysis applies more broadly to nonhuman creators of intellectual property, and explains why the Copyright Office came to the wrong conclusion with its Human Authorship Requirement. Just as permitting computer inventorship will further promote the progress of science, so too will permitting animal authorship promote the useful arts by creating new incentives for people. Finally, computer inventorship provides insight into other areas of patent law. For instance, computers could replace the hypothetical skilled person that courts use to judge inventiveness. This would provide justification for raising the bar to patentability and would address one of the most serious criticisms of the patent system — that too many patents of questionable value are issued. Creative computers may require a rethinking of the baseline standard for inventiveness, and potentially of the entire patent system.”

Incorporating Ethics into Artificial Intelligence, J. Ethics (2016)
“This article reviews the reasons scholars hold that driverless cars and many other A.I. equipped machines must be able to make ethical decisions, and the difficulties this approach faces. It then shows that cars have no moral agency, and that the term ‘autonomous’, commonly applied to these machines, is misleading, and leads to invalid conclusions about the ways these machines can be kept ethical. The article’s most important claim is that a significant part of the challenge posed by A.I.-equipped machines can be addressed by the kind of ethical choices made by human beings for millennia. Ergo, there is little need to teach machines ethics even if this could be done in the first place. Finally, the article points out that it is a grievous error to draw on extreme outlier scenarios—such as the Trolley narratives—as a basis for conceptualizing the ethical issues at hand.”

Judging Ordinary Meaning, SSRN (2017)
“We identify theoretical and operational deficiencies in our law’s attempts to credit the ordinary meaning of the law and present linguistic theories and tools to assess it more reliably. Our framework examines iconic problems of ordinary meaning — from the famous “no vehicles in the park” hypothetical to two Supreme Court cases (United States v. Muscarello and Taniguchi v. Kan Pacific Saipan) and a Seventh Circuit opinion of Judge Richard Posner (in United States v. Costello). We show that the law’s conception of ordinary meaning implicates empirical questions about language usage. And we present linguistic tools from a field known as corpus linguistics that can help to answer these empirical questions. When we speak of ordinary meaning we are asking an empirical question — about the sense of a word or phrase that is most likely implicated in a given linguistic context. Linguists have developed computer-aided means of answering such questions. We propose to import those methods into the law of interpretation. And we consider and respond to criticisms of their use by lawyers and judges.”
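The corpus-linguistics move described above reduces, at its simplest, to counting how often competing senses of a word appear in real usage. The sketch below is a toy illustration with an invented five-line corpus and a crude sense-coding heuristic; an actual analysis would query a large corpus and hand-code each concordance line.

```python
import re
from collections import Counter

# Invented concordance lines for "carry" + firearm (not a real corpus sample).
corpus = [
    "he would carry the pistol in his waistband",
    "the truck was used to carry the firearms across state lines",
    "she chose to carry a firearm on her hip",
    "they carry weapons openly at the range",
    "the van will carry the rifles to the depot",
]

def code_sense(line: str) -> str:
    """Crude heuristic: vehicle words suggest 'transport'; otherwise 'bear on the person'."""
    if re.search(r"\b(truck|van|car|vehicle)\b", line):
        return "transport-in-vehicle"
    return "bear-on-person"

sense_counts = Counter(code_sense(line) for line in corpus if "carry" in line)
print(sense_counts)  # which sense of "carry" is more frequent in this toy sample
```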

Law and Ethics of High-Frequency Trading, 17 Minn. J. L. Sci. & Tech. 71 (2016)
“Michael Lewis’s recent book Flash Boys has resurrected the controversy concerning “high-frequency trading” (HFT) in the stock markets. While HFT has been important in the stock markets for about a decade, and may have already peaked in terms of its economic significance, it touched a nerve with a public suspicious of financial institutions in the wake of the financial crisis of 2008-2009. In reality, HFT is not one thing, but a wide array of practices conducted by technologically adept electronic traders. Some of these practices are benign, and some even bring benefits such as liquidity and improved price discovery to financial markets. On the other hand, there are legitimate grounds for the commonly heard complaint that “HFT is not fair.” Certain HFT practices such as co-location, flash orders, and enriched data feeds create a two-tiered financial marketplace, while other practices such as momentum ignition, spoofing, and layering are merely high-tech versions of traditional market manipulation. Finally, the creation of special order types such as “Hide Not Slide” shows the exchanges allowing their HFT clients to jump the queue of price-time priority embedded in Regulation NMS and stock market practice. While the commonly-used technique of a cost-benefit analysis leads to equivocal or indeterminate results when applied to HFT trading activity in complex and often opaque markets, a more basic ethic of fairness grounded in commonly accepted rules of financial market behavior illustrates that certain HFT practices are indeed unfair. This Article draws on the legal, finance, and business ethics literature to illustrate exactly how certain forms of HFT are unfair, and proposes four core principles to guide HFT activity and its regulation.”

Learning in Artificial Intelligence: Does Bloom’s Taxonomy Apply?, SSRN (2016)
“From the early days of science and philosophy, humans have wondered exactly what it is that makes us intelligent beings. Plato felt that a good education should include instruction in music, gymnastics, and dialectic, and pondered how we learned the things we did. Humans have also been interested in building machines that mimic their abilities. The original motivation for building mechanical machines was to free us from mundane tasks and to allow fewer people to do particularly difficult jobs. Today we build electronic machines not only to do our chores and assist us physically, but also to help us intellectually. A side effect of building intelligent, electronic machines is that we come to know how humans work a little bit better, and we also learn to better appreciate our amazing ability to learn. There is a danger that we may take the results of artificial intelligence too seriously, and extend them in ways that are not appropriate. The issues surrounding the mechanisms of artificial intelligence and machine learning may or may not apply to the way that people learn. They are simply the way that we imagine we may learn, a model that needs to be tested against reality. On the other hand, if it makes a computer appear intelligent, then it is sufficient for our purposes whether it is the way people do it or not. The difference is in intent: are we attempting to automate some difficult process, or are we trying to determine how the human brain acquires new information?”

Machine Learning, Automated Suspicion Algorithms, and the Fourth Amendment, 164 U. Pa. L. Rev. 871 (2016)
“At the conceptual intersection of machine learning and government data collection lie Automated Suspicion Algorithms, or ASAs, algorithms created through the application of machine learning methods to collections of government data with the purpose of identifying individuals likely to be engaged in criminal activity. The novel promise of ASAs is that they can identify data-supported correlations between innocent conduct and criminal activity and help police prevent crime. ASAs present a novel doctrinal challenge, as well, as they intrude on a step of the Fourth Amendment’s individualized suspicion analysis previously the sole province of human actors: the determination of when reasonable suspicion or probable cause can be inferred from established facts. This Article analyzes ASAs under existing Fourth Amendment doctrine for the benefit of courts who will soon be asked to deal with ASAs. In the process, the Article reveals how that doctrine is inadequate to the task of handling these new technologies and proposes extra-judicial means of ensuring that ASAs are accurate and effective.”

Machine Testimony, SSRN (2017)
“Machines play increasingly crucial roles in establishing facts in legal disputes. Some machines convey information — the images of cameras, the measurements of thermometers, the opinions of expert systems. When a litigant offers a human assertion for its truth, the law subjects it to testimonial safeguards — such as impeachment and the hearsay rule — to give juries the context necessary to assess the source’s credibility. But the law on machine conveyance is confused; courts shoehorn them into existing rules by treating them as “hearsay,” as “real evidence,” or as “methods” underlying human expert opinions. These attempts have not been wholly unsuccessful, but they are intellectually incoherent and fail to fully empower juries to assess machine credibility. This Article seeks to resolve this confusion and to offer a coherent framework for conceptualizing and regulating machine evidence. First, it explains that some machine evidence, like human testimony, depends on the credibility of a source. Just as so-called “hearsay dangers” lurk in human assertions, “black box dangers” — human and machine errors causing a machine to be false by design, inarticulate, or analytically unsound — potentially lurk in machine conveyances. Second, it offers a taxonomy of machine evidence, explaining which types implicate credibility and how courts have attempted to regulate them through existing law. Third, it offers a new vision of testimonial safeguards for machines. It explores credibility testing in the form of front-end design, input and operation protocols; pretrial disclosure and access rules; authentication and reliability rules; impeachment and courtroom testing mechanisms; jury instructions; and corroboration rules. And it explains why machine sources can be “witnesses” under the Sixth Amendment, refocusing the right of confrontation on meaningful impeachment. The Article concludes by suggesting how the decoupling of credibility testing from the prevailing courtroom-centered hearsay model could benefit the law of testimony more broadly.”

Old Laws New Tricks: Drunk Driving and Autonomous Vehicles, 55 Jurimetrics J. 275 (2015)
“Drunk driving, or driving under the influence (DUI), is a major public health problem in the United States. Despite attempts to educate drivers on the dangers of drunk driving and deter such behavior through criminal punishment, there are still thousands of deaths attributable to drunk driving every year and billions of dollars spent on damage from auto accidents, loss of life, injuries, deterrence, and punishment. Recent developments in autonomous vehicle technology could reduce or eliminate DUI-related accidents within the next decade. However, even in future autonomous vehicle systems that will have the potential to drive themselves in most circumstances, human intervention will sometimes be necessary. This article applies current DUI laws to autonomous vehicles and proposes a legislative change to clarify DUI laws and enhance the public safety.”

Opening the Black Box: In Search of Algorithmic Transparency, SSRN (2016)
“Given the importance of search engines for public access to knowledge and questions over their neutrality, there have been many theoretical debates about the regulation of the search market and the transparency of search algorithms. However, there is little research on how such debates have played out empirically in the policy sphere. This paper aims to map how key actors in Europe and North America have positioned themselves in regard to transparency of search engine algorithms and the underlying political and economic ideas and interests that explain these positions. It also discusses the strategies actors have used to advocate for their positions and the likely impact of their efforts for or against greater transparency on the regulation of search engines. Using a range of qualitative research methods, including analysis of textual material and elite interviews with a wide range of stakeholders, this paper concludes that while discussions around algorithmic transparency will likely appear in future policy proposals, it is highly unlikely that search engines will ever be legally required to share their algorithms due to a confluence of interests shared by Google and its competitors. It ends with recommendations for how algorithmic transparency could be enhanced through qualified transparency, consumer choice, and education.”

Policing Criminal Justice Data, 101 Minn. L. Rev. 541 (2016)
“This article addresses a matter of fundamental importance to the criminal justice system: the presence of erroneous information in government databases and the limited government accountability and legal remedies for the harm that it causes individuals. While a substantial literature exists on the liberty and privacy perils of large multi-source data assemblage, often termed “big data,” this article addresses the risks associated with the collection, generation and use of “small data” (i.e., individual-level, discrete data points). Because small data provides the building blocks for all data-driven systems, enhancing its quality will have a significant positive effect on the criminal justice system as a whole. The article examines the many contexts in which criminal justice data errors arise and offers institutional and legislative solutions designed both to lessen their occurrence and afford relief to those suffering the significant harms they cause.”

Policing Predictive Policing, SSRN (2017)
“Predictive policing raises profound questions about the nature of predictive analytics and the attached article is the first sustained practical and theoretical critique of predictive policing. Questions of data collection, methodology, transparency, accountability, security, vision, and practical implementation emerge from this move toward actuarial justice. Building off a wealth of theoretical insights from scholars who have addressed the rise of risk assessment throughout the criminal justice system, this article provides an analytical framework to police not just predictive policing, but all future predictive technologies.”

Predicting and Understanding Law-Making with Machine Learning, SSRN (2016)
“Out of nearly 70,000 bills introduced in the U.S. Congress from 2001 to 2015, only 2,513 were enacted. We developed a machine learning approach to forecasting the probability that any bill will become law. Starting in 2001 with the 107th Congress, we trained models on data from previous Congresses, predicted all bills in the current Congress, and repeated until the 113th Congress served as the test. For prediction, we scored each sentence of a bill with a language model that embeds legislative vocabulary into a semantic-laden vector space. This language representation enables our investigation into which words increase the probability of enactment for any topic. To test the relative importance of text and context, we compared the text model to a context-only model that uses variables such as whether the bill’s sponsor is in the majority party. To test the effect of changes to bills after their introduction on our ability to predict their final outcome, we compared using the bill text and meta-data available at the time of introduction with using the most recent data. At the time of introduction context-only predictions outperform text-only, and with the newest data text-only outperforms context-only. Combining text and context always performs best. We conducted a global sensitivity analysis on the combined model to determine important factors predicting enactment.”
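A hedged sketch of the text-plus-context comparison: the snippet below substitutes a simple bag-of-words representation and a logistic regression for the authors' semantic language model, and the bill records are invented, but it shows how text features and context variables can be combined in one predictive model.

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented bill records: text plus context variables and the enactment label.
bills = pd.DataFrame({
    "text": [
        "authorize appropriations for veterans health care",
        "rename a post office in springfield",
        "establish a commission to study infrastructure",
        "designate a national day of recognition",
    ],
    "sponsor_in_majority": [1, 1, 0, 0],
    "cosponsors": [45, 3, 12, 1],
    "enacted": [1, 1, 0, 0],
})

features = ColumnTransformer([
    ("text", TfidfVectorizer(), "text"),                                # text-only signal
    ("context", "passthrough", ["sponsor_in_majority", "cosponsors"]),  # context-only signal
])
model = make_pipeline(features, LogisticRegression(max_iter=1000))
model.fit(bills, bills["enacted"])
print(model.predict_proba(bills)[:, 1])  # predicted enactment probabilities
```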

Predicting the Knowledge-Recklessness Distinction in the Human Brain, SSRN (2017)
“Criminal convictions require proof that a prohibited act was performed in a statutorily specified mental state. Different legal consequences, including greater punishments, are mandated for those who act in a state of knowledge, compared with a state of recklessness. Existing research, however, suggests people have trouble classifying defendants as knowing, rather than reckless, even when instructed on the relevant legal criteria. We used a machine-learning technique on brain imaging data to predict, with high accuracy, which mental state our participants were in. This predictive ability depended on both the magnitude of the risks and the amount of information about those risks possessed by the participants. Our results provide neural evidence of a detectable difference in the mental state of knowledge in contrast to recklessness and suggest, as a proof of principle, the possibility of inferring from brain data in which legally relevant category a person belongs. Some potential legal implications of this result are discussed.”

Private Traits and Attributes Are Predictable from Digital Records of Human Behavior, 110 PNAS 5802 (2013)
“We show that easily accessible digital records of behavior, Facebook Likes, can be used to automatically and accurately predict a range of highly sensitive personal attributes including: sexual orientation, ethnicity, religious and political views, personality traits, intelligence, happiness, use of addictive substances, parental separation, age, and gender. The analysis presented is based on a dataset of over 58,000 volunteers who provided their Facebook Likes, detailed demographic profiles, and the results of several psychometric tests. The proposed model uses dimensionality reduction for preprocessing the Likes data, which are then entered into logistic/linear regression to predict individual psychodemographic profiles from Likes. The model correctly discriminates between homosexual and heterosexual men in 88% of cases, African Americans and Caucasian Americans in 95% of cases, and between Democrat and Republican in 85% of cases. For the personality trait “Openness,” prediction accuracy is close to the test–retest accuracy of a standard personality test. We give examples of associations between attributes and Likes and discuss implications for online personalization and privacy.”
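The pipeline the abstract describes, dimensionality reduction over a sparse user-by-Like matrix followed by logistic regression, can be sketched with standard tooling. The snippet below is illustrative only: the data are randomly generated stand-ins, scikit-learn is assumed, and it is not the authors' code or dataset.

```python
import numpy as np
from sklearn.decomposition import TruncatedSVD
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
likes = (rng.random((200, 500)) < 0.05).astype(float)  # 200 users x 500 Likes, binary
attribute = rng.integers(0, 2, size=200)                # toy binary attribute to predict

model = make_pipeline(
    TruncatedSVD(n_components=20, random_state=0),      # reduce Likes to latent dimensions
    LogisticRegression(max_iter=1000),                   # predict the attribute from those dimensions
)
model.fit(likes, attribute)
print(model.predict_proba(likes[:5])[:, 1])              # per-user probability of the attribute
```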

Regulating by Robot: Administrative Decision Making in the Machine-Learning Era, SSRN (2017)
“Machine-learning algorithms are transforming large segments of the economy, underlying everything from product marketing by online retailers to personalized search engines, and from advanced medical imaging to the software in self-driving cars. As machine learning’s use has expanded across all facets of society, anxiety has emerged about the intrusion of algorithmic machines into facets of life previously dependent on human judgment. Alarm bells sounding over the diffusion of artificial intelligence throughout the private sector only portend greater anxiety about digital robots replacing humans in the governmental sphere. A few administrative agencies have already begun to adopt this technology, while others have the clear potential in the near-term to use algorithms to shape official decisions over both rulemaking and adjudication. It is no longer fanciful to envision a future in which government agencies could effectively make law by robot, a prospect that understandably conjures up dystopian images of individuals surrendering their liberty to the control of computerized overlords. Should society be alarmed by governmental use of machine learning applications? We examine this question by considering whether the use of robotic decision tools by government agencies can pass muster under core, time-honored doctrines of administrative and constitutional law. At first glance, the idea of algorithmic regulation might appear to offend one or more traditional doctrines, such as the nondelegation doctrine, procedural due process, equal protection, or principles of reason-giving and transparency. We conclude, however, that when machine-learning technology is properly understood, its use by government agencies can comfortably fit within these conventional legal parameters. We recognize, of course, that the legality of regulation by robot is only one criterion by which its use should be assessed. Obviously, agencies should not apply algorithms cavalierly, even if doing so might not run afoul of the law, and in some cases, safeguards may be needed for machine learning to satisfy broader, good-governance aspirations. Yet in contrast with the emerging alarmism, we resist any categorical dismissal of a future administrative state in which key decisions are guided by, and even at times made by, algorithmic automation. Instead, we urge that governmental reliance on machine learning should be approached with measured optimism over the potential benefits such technology can offer society by making government smarter and its decisions more efficient and just.”

Rethinking the Fourth Amendment in the Age of Supercomputers, Artificial Intelligence, and Robots, SSRN (2017)
“Law enforcement currently uses cognitive computers to conduct predictive and content analytics and manage information contained in large police data files. These big data analytics and insight capabilities are more effective than using traditional investigative tools and save law enforcement time and a significant amount of financial and personnel resources. It is not farfetched to think law enforcement’s use of cognitive computing will extend to using thinking, real-time robots in the field in the not-so-distant future. IBM’s Watson currently uses its artificial intelligence to suggest medical diagnoses and treatment in the healthcare industry and assists the finance industry in improving investment decisions. IBM and similar companies already offer predictive analytics and cognitive computing programs to law enforcement for real-time intelligence and investigative purposes. This article will explore the consequences of predictive and content analytics and the future of cognitive computing, such as utilizing “robots” such as an imaginary “Officer Joe Roboto” in the law enforcement context. Would our interactions with Officer Joe Roboto trigger the same Fourth Amendment concerns and protections as those when dealing with a flesh-and-blood police officer? Are we more afraid of a “robotic” Watson, its capabilities, and lack of feeling and biases, compared to a human law enforcement officer? Assuming someday in the future we might be able to solve the physical limitations of a robot, would a “robotic” officer be preferable to a human one? What sort of limitations would we place on such technology? This article attempts to explore the ramifications of using such computers/robots in the future. Autonomous robots with artificial intelligence and the widespread use of predictive analytics are the future tools of law enforcement in a digital age, and we must come up with solutions as to how to handle the appropriate use of these tools.”

Rise of Robots and the Law of Humans, SSRN (2017)
“In this article, I [Horst Eidenmueller] attempt to answer fundamental questions raised by the rise of robots and the emergence of ‘robot law’. The main theses developed in this article are the following: (i) robot regulation must be robot- and context-specific. This requires a profound understanding of the micro- and macro-effects of ‘robot behaviour’ in specific areas. (ii) (Refined) existing legal categories are capable of being sensibly applied to and regulating robots. (iii) Robot law is shaped by the ‘deep normative structure’ of a society. (iv) If that structure is utilitarian, smart robots should, in the not too distant future, be treated like humans. That means that they should be accorded legal personality, have the power to acquire and hold property and to conclude contracts. (v) The case against treating robots like humans rests on epistemological and ontological arguments. These relate to whether machines can think (they cannot) and what it means to be human. I develop these theses primarily in the context of self-driving cars – robots on the road with a huge potential to revolutionize our daily lives and commerce.”

Rise of the Digital Regulator, 66 Duke L.J. 1267 (2017)
“The administrative state is leveraging algorithms to influence individuals’ private decisions. Agencies have begun to write rules to shape for-profit websites such as Expedia and have launched their own online tools such as the Consumer Financial Protection Bureau’s mortgage calculator. These digital intermediaries aim to guide people toward better schools, healthier food, and more savings. But enthusiasm for this regulatory paradigm rests on two questionable assumptions. First, digital intermediaries effectively police consumer markets. Second, they require minimal government involvement. Instead, some for-profit online advisers such as travel websites have become what many mortgage brokers were before the 2008 financial crisis. Although they make buying easier, they can also subtly advance their interests at the expense of those they serve. Publicly run alternatives lack accountability or—like the Affordable Care Act health-insurance exchanges—are massive undertakings. The unpleasant truth is that creating effective digital regulators would require investing heavily in a new oversight regime or sophisticated state machines. Either path would benefit from an interdisciplinary uniform process to modernize administrative, antitrust, commercial, and intellectual property laws. Ideally, a technology meta-agency would then help keep that legal framework updated.”

Robot as Cub Reporter: Law’s Emerging Role in Cognitive Journalism, SSRN (2016)
“Today’s journalist is immersed in news production that no longer treats robot-written news as a mere reference tool. Major news corporations are reshaping the journalism business to reflect the increasingly dominant role of algorithms and its consequent decrease in human curation. With data so integral to today’s news storytelling and the arrival of machines that are learning to ‘sense, think and act’ like their creators, we are called to deliberate on the legitimacy of law to address human risks and responsibilities when humans are harmed physically, socially, financially or professionally. This paper argues that we are entering the age of cognitive journalism that affects the legal personhood question and examines policy initiatives on both sides of the Atlantic for legal norms to inform a law for machines that learn from mistakes and teach other machines. Legal issues raised by driverless cars, human cloning, drones and nanotechnology are examined for what they can offer to an emerging law of the robot. The paper concludes with a call for research that will bring a more nuanced understanding of the legitimate place of law in cognitive journalism.”

Robots as Legal Metaphors, 30 Harv. J.L. & Tech. 209 (2016)
“This essay looks at the role robots play in the judicial imagination. The law and technology literature is replete with examples of how the metaphors and analogies courts select for emerging technology can be outcome determinative. For example, whether a judge sees email as more like a letter or a postcard will dictate the level of Fourth Amendment protection the court is prepared to extend it. But next to no work examines the inverse: when and how judges invoke metaphors about emerging technology when deciding cases about people. Robots represent an interesting case study. The judge’s use of the robot metaphor can be justice enhancing in that it helps translate obscure legal concepts like agency and fault into terms understandable to a lay reader. But the use of the metaphor is also problematic. Courts tend to apply the metaphor to remove agency from individuals whom society already tends to marginalize. Further, judges’ mental models of robots are increasingly outdated, which could lead to judicial error as advanced robots enter the mainstream.”

Siri-ously 2.0: What Artificial Intelligence Reveals About the First Amendment, SSRN (2017)
“The First Amendment may protect speech by strong Artificial Intelligence (A.I.). In this Article, we support this provocative claim by expanding on earlier work, addressing significant concerns and challenges, and suggesting potential paths forward. This is not a claim about the state of technology. Whether strong A.I. — as-yet-hypothetical machines that can actually think — will ever come to exist remains far from clear. It is instead a claim that discussing A.I. speech sheds light on key features of prevailing First Amendment doctrine and theory, including the surprising lack of humanness at its core.

Courts and commentators wrestling with free speech problems increasingly focus not on protecting speakers as speakers but instead on providing value to listeners and constraining the government’s power. These approaches to free speech law support the extension of First Amendment coverage to expression regardless of its nontraditional source or form. First Amendment thinking and practice thus have developed in a manner that permits extensions of coverage in ways that may seem exceedingly odd, counterintuitive, and perhaps even dangerous. This is not a feature of the new technologies, but of free speech law.

The possibility that the First Amendment covers speech by strong A.I. need not, however, rob the First Amendment of a human focus. Instead, it might encourage greater clarification of and emphasis on expression’s value to human listeners — and its potential harms — in First Amendment theory and doctrine. To contemplate — Siri-ously — the relationship between the First Amendment and A.I. speech invites critical analysis of the contours of current free speech law, as well as sharp thinking about free speech problems posed by the rise of A.I..”

‘Smart’ Fourth Amendment, 102 Cornell L. Rev. 547 (2017)
“”Smart” devices radiate data, detailing a continuous, intimate, and revealing pattern of daily life. Billions of sensors will soon collect data from smartphones, smart homes, smart cars, medical devices and an evolving assortment of consumer and commercial products. But, what are these data trails to the Fourth Amendment? Does data emanating from devices on or about our bodies, houses, things, and digital effects fall within the Fourth Amendment’s protection of “persons, homes, papers, or effects”? Does interception of this information violate a “reasonable expectation of privacy?” The “Internet of Things” and the growing proliferation of smart devices create new opportunities for police investigation. If this web of sensor surveillance falls outside of the Fourth Amendment, then warrantless collection and tracking of this smart data presents no constitutional concern. If these data trails deserve constitutional protection, a new theory of the Fourth Amendment must be developed. This article addresses the question of how the Fourth Amendment should protect “smart data.” It exposes the growing danger of sensor surveillance and the weakness of current Fourth Amendment doctrine. The article then suggests a new theory of “informational curtilage” to protect the data trails emerging from smart devices and reclaims the principle of “informational security” as the organizing framework for a digital Fourth Amendment.”

Some Early Thoughts on Liability Standards for Online Providers of Legal Services, 44 Hofstra L. Rev. 541 (2015)
“This Article discusses a classic intersection of law, science, and technology. Just like common law courts adjusted the “mailbox rule” to cover fax machines, courts will have to adjust their existing approach to liability for harmful legal services, given the existence of new providers of legal services online. The result is a clash of cultures between one of America’s most conservative institutions – its common law courts – and some of its most aggressively forward looking ones – internet entrepreneurs. . . . As such online providers become more common, instances of injured parties and lawsuits for damages will inevitably arise. Like death and taxes, tort lawsuits are an indelible feature of American life. And yet, for now, it is unclear what law will apply in lawsuits against online providers. The American law of traditional legal malpractice is several hundred years old and relatively well-developed. In contrast, courts will write standards of liability for online providers of legal services like LegalZoom or Rocket Lawyer on a relatively blank slate. LegalZoom is the oldest and most established of these websites and has existed since only 2001. As of yet, there are no reported cases of lawsuits against LegalZoom or Rocket Lawyer for defective legal forms, so we are in the earliest possible stage. The growth of this area of the law will be fascinating as a matter of doctrinal expansion, but even more so as a meeting between the tortoise and the hare in the courtroom. This Article takes a first stab at laying out some of the issues that courts will face if and when these lawsuits arise.”

“Source” of Error: Computer Code, Criminal Defendants, and the Constitution, 105 Cal. L. Rev. 179 (2017)
“Evidence created by computer programs dominates modern criminal trials. From DNA to fingerprints to facial recognition evidence, criminal courts are confronting a deluge of evidence that is generated by computer programs. In a worrying trend, a growing number of courts have insulated this evidence from adversarial testing by preventing defendants from accessing the source code that governs the computer programs. This Note argues that defendants are entitled to view, test, and critique the source code of computer programs that produce evidence offered at trial by the prosecution. To do so, this Note draws on three areas of law: The Confrontation Clause, the Due Process Clause, and Daubert and its progeny. While courts and commentators have grappled with specific computer programs in specific criminal contexts, this Note represents the first attempt to justify the systematic disclosure of source code by reference to the structural features of computer programs.”

Surveillance Intermediaries, SSRN (2017)
“Apple’s 2016 fight against a court order commanding it to help the FBI unlock the iPhone of one of the San Bernardino terrorists exemplifies how central the question of regulating government surveillance has become in American politics and law. But scholarly attempts to answer this question have suffered from a serious omission: scholars have ignored how government surveillance is checked by “surveillance intermediaries,” the companies like Apple, Google, and Facebook that dominate digital communications and data storage, and on whose cooperation government surveillance relies. This Article fills this gap in the scholarly literature, providing the first comprehensive analysis of how surveillance intermediaries constrain the surveillance executive. In so doing, it enhances our conceptual understanding of, and thus our ability to improve, the institutional design of government surveillance.”

Technological Due Process, 85 Wash. U. L. Rev. 1249 (2007-2008)
“A new concept of technological due process is essential to vindicate the norms underlying last century’s procedural protections. This Article shows how a carefully structured inquisitorial model of quality control can partially replace aspects of adversarial justice that automation renders ineffectual. It also provides a framework of mechanisms capable of enhancing the transparency, accountability, and accuracy of rules embedded in automated decision-making systems.”

Technological Incarceration and the End of the Prison Crisis, SSRN (2017)
“The United States imprisons more of its people than any nation on Earth, and by a considerable margin. Criminals attract little empathy and have no political capital. Consequently, it is not surprising that, over the past forty years, there have been no concerted or unified efforts to stem the rapid increase in incarceration levels in the United States. Nevertheless, there has recently been a growing realization that even the world’s biggest economy cannot readily sustain the $80 billion annual cost of imprisoning more than two million of its citizens. No principled, wide-ranging solution has yet been advanced, however. To resolve the crisis, this Article proposes a major revolution to the prison sector that would see technology, for the first time, pervasively incorporated into the punishment of criminals and result in the closure of nearly all prisons in the United States.”

Three Laws of Robotics in the Age of Big Data, SSRN (2017)
“In his short stories and novels, Isaac Asimov imagined three laws of robotics programmed into every robot. In our world, the “laws of robotics” are the legal and policy principles that should govern how human beings use robots, algorithms, and artificial intelligence agents. This essay introduces these basic legal principles using four key ideas: (1) the homunculus fallacy; (2) the substitution effect; (3) the concept of information fiduciaries; and (4) the idea of algorithmic nuisance.”

Trial by Machine, 104 Geo. L.J. 1245 (2016)
“This Article explores the rise of “machines” in criminal adjudication. Human witnesses now often give way to gadgets and interpretive software, juries’ complex judgments about moral blameworthiness give way to mechanical proxies for criminality, and judges’ complex judgments give way to sentencing guidelines and actuarial instruments. Although mechanization holds much promise for enhancing objectivity and accuracy in criminal justice, that promise remains unrealized because of the uneven, unsystematic manner in which mechanized justice has been developed and deployed. The current landscape of mechanized proof, liability, and punishment suffers from predictable but underscrutinized automation pathologies: hidden subjectivities and errors in “black box” processes; distorted decision-making through oversimplified — and often dramatically inaccurate — proxies for blameworthiness; the compromise of values protected by human safety valves, such as dignity, equity, and mercy; and even too little mechanization where machines might be a powerful debiasing tool but where little political incentive exists for its development or deployment. For example, the state promotes the objectivity of interpretive DNA software that typically renders match statistics more inculpatory, but lionizes the subjective human judgment of its fingerprint and toolmark analysts, whose grandiose claims of identity might be diluted by such software. Likewise, the state attacks the polygraph as an unreliable lie detector at trial, where results are typically offered only by defendants, but routinely wields them in probation revocation hearings, capitalizing in that context on their cultural status as “truth machines.” The Article ultimately proposes a systems approach – “trial by cyborg” – that safeguards against automation pathologies while interrogating conspicuous absences in mechanization through “equitable surveillance” and other means.”

Trust But Verify: A Guide to Algorithms and the Law, SSRN (2017)
“The call for algorithmic transparency as a way to manage the power of new data-driven decision-making techniques misunderstands the nature of the processes at issue and underlying technology. Part of the problem is that the term, algorithm, is broad. It encompasses disparate concepts even in mathematics and computer science. Matters worsen in law and policy. Law is driven by a linear, almost Newtonian, view of cause and effect where inputs and defined process lead to clear outputs. In that world, a call for transparency has the potential to work. The reality is quite different. Real computer systems use vast data sets not amenable to disclosure. The rules used to make decisions are often inferred from these data and cannot be readily explained or understood. And at a deep and mathematically provable level, certain things, including the exact behavior of an algorithm, can sometimes not be tested or analyzed. From a technical perspective, current attempts to expose algorithms to the sun will fail to deliver critics’ desired results and may create the illusion of clarity in cases where clarity is not possible.

At a high-level, the recent calls for algorithmic transparency follow a pattern that this paper seeks to correct. Policy makers and technologists often talk past each other about the realities of technology and the demands of policy. Policy makers may identify good concerns but offer solutions that misunderstand technology. This misunderstanding can lead to calls for regulation that make little to no sense to technologists. Technologists often see systems as neutral tools, with uses to be governed only when systems interact with the real world. Both sides think the other simply “does not get it,” and important problems receive little attention from either group. By setting out the core concerns over the use of algorithms, offering a primer on the nature of algorithms, and a guide on the way in which computer scientists deal with the inherent limits of their field, this paper shows that there are coherent ways to manage algorithms and the law.”
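The authors' point that, "at a deep and mathematically provable level," some algorithmic behavior cannot be tested or analyzed rests on undecidability results from computability theory. The following is an informal Python sketch of the classic diagonal argument; the halts oracle and the diagonal function are hypothetical constructions used only to show why no such oracle can exist in general.

```python
def halts(program, data) -> bool:
    """Hypothetical oracle: returns True iff program(data) eventually halts."""
    raise NotImplementedError  # the diagonal argument shows no general implementation can exist

def diagonal(program):
    # If a universal halts() existed, this program would defeat it when run on itself:
    if halts(program, program):
        while True:   # loop forever exactly when the oracle predicts halting
            pass
    return            # halt exactly when the oracle predicts looping
```

Whatever answer the supposed oracle gives about diagonal run on itself is contradicted, which is one concrete way to see why the paper warns that transparency mandates cannot always deliver a complete account of an algorithm's behavior.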

Undue Influence of Surveillance Technology Companies on Policing, SSRN (2017)
“Conventional wisdom assumes that the police are in control of their investigative tools. But with surveillance technologies, this is not always the case. Increasingly, police departments are consumers of surveillance technologies that are created, sold, and controlled by private companies. These surveillance technology companies exercise an undue influence over the police today in ways that aren’t widely acknowledged, but that have enormous consequences for civil liberties and police oversight. Three seemingly unrelated examples — stingray cellphone surveillance, body cameras, and big data software — demonstrate varieties of this undue influence. These companies act out of private self-interest, but their decisions have considerable public impact. The harms of this private influence include the distortion of Fourth Amendment law, the undermining of accountability by design, and the erosion of transparency norms. This Essay demonstrates the increasing degree to which surveillance technology vendors can guide, shape, and limit policing in ways that are not widely recognized. Any vision of increased police accountability today cannot be complete without consideration of the role surveillance technology companies play.”

Values Embedded in Legal Artificial Intelligence, SSRN (2017)
“Technological systems can have social values “embedded” in their design. This means that certain technologies, when they are used, can have the effect of promoting or inhibiting particular societal values over others. Although sometimes the embedding of values is intentional, often it is unintentional, and when it occurs, it is frequently difficult to observe. The fact that values are embedded in technological systems becomes increasingly significant when these systems are used in the application of law. Some legal technological systems have started to use machine-learning, formal rule representation, and other artificial intelligence techniques. Systems that use artificial intelligence in the legal context raise novel, and perhaps less familiar, issues of embedded values that require particular attention. This article explores challenges posed by values embedded in legal technological systems, particularly those that employ artificial intelligence.”

NEWS ARTICLES

A.I. Is Doing Legal Work. But It Won’t Replace Lawyers, Yet., NY Times, Mar. 19, 2017
“Impressive advances in artificial intelligence technology tailored for legal work have led some lawyers to worry that their profession may be Silicon Valley’s next victim. But recent research and even the people working on the software meant to automate legal work say the adoption of A.I. in law firms will be a slow, task-by-task process. In other words, like it or not, a robot is not about to replace your lawyer. At least, not anytime soon.”

A.I.’s Law Connection: Machine Learning’s Push in Legal and Access to Justice Initiatives, Law.com, Feb. 13, 2017
“Co-creator of IBM’s Watson Legal—yes, that Watson that won Jeopardy and in legal technology circles is often associated with ROSS Intelligence and Thomson Reuters—Kuhn says that A.I.’s ability to allow users to make sense of unstructured data lends itself to a higher level of client service. For example, corporate law departments can use A.I. to analyze outside counsel. Further, it can be applied to access to justice initiatives by helping people that can’t afford legal services interpret documents, determine their legal rights, and so on.”

Artificial Intelligence and Virtual Law Offices Expected to Be Top Technological Trends Impacting the Legal Profession in 2017, Wisconsin Lawyer, Feb. 2017, at 52
“This article addresses what are expected to be the top technological trends impacting the legal profession in 2017. In particular, this article discusses the roles and impacts that artificial intelligence and virtual offices are expected to have on the legal profession in 2017 and beyond.”

Artificial Intelligence Prevails at Predicting Supreme Court Decisions, Science Magazine, May 2, 2017
“”See you in the Supreme Court!” President Donald Trump tweeted last week, responding to lower court holds on his national security policies. But is taking cases all the way to the highest court in the land a good idea? Artificial intelligence may soon have the answer. A new study shows that computers can do a better job than legal scholars at predicting Supreme Court decisions, even with less information. Several other studies have guessed at justices’ behavior with algorithms. A 2011 project, for example, used the votes of any eight justices from 1953 to 2004 to predict the vote of the ninth in those same cases, with 83% accuracy. A 2004 paper tried seeing into the future, by using decisions from the nine justices who’d been on the court since 1994 to predict the outcomes of cases in the 2002 term. That method had an accuracy of 75%.”

Artificially Intelligent ‘Judge’ Developed Which Can Predict Court Verdicts With 79 Per Cent Accuracy, The Telegraph, Oct. 24, 2016
“A computer ‘judge’ has been developed which can correctly predict verdicts of the European Court of Human Rights with 79 per cent accuracy. Computer scientists at University College London and the University of Sheffield developed an algorithm which can not only weigh up legal evidence, but also moral considerations. As early as the 1960s experts predicted that computers would one day be able to predict the outcomes of judicial decisions. But the new method is the first to predict the outcomes of court cases by automatically analysing case text using a machine learning algorithm.”
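For readers unfamiliar with how case text can be "automatically analysed" to predict outcomes, a minimal text-classification sketch follows. The case snippets, labels, TF-IDF n-gram features, and linear support vector classifier are all illustrative assumptions (scikit-learn assumed), not the UCL and Sheffield team's actual model or data.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy stand-ins for case texts and outcomes (1 = violation found, 0 = no violation).
cases = [
    "applicant detained without judicial review for an extended period",
    "domestic courts provided a reasoned judgment after an adversarial hearing",
    "correspondence with counsel monitored and interfered with by prison staff",
    "complaint examined promptly with full access to an effective remedy",
]
outcomes = [1, 0, 1, 0]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
model.fit(cases, outcomes)
print(model.predict(["applicant held incommunicado without review"]))  # predicted outcome
```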

Courts Are Now Using A.I. to Sentence Criminals. That Must Stop Now, Wired, Apr. 17, 2017
“Algorithms pervade our lives today, from music recommendations to credit scores to now, bail and sentencing decisions. But there is little oversight and transparency regarding how they work. Nowhere is this lack of oversight more stark than in the criminal justice system. Without proper safeguards, these tools risk eroding the rule of law and diminishing individual rights. Currently, courts and corrections departments around the US use algorithms to determine a defendant’s “risk”, which ranges from the probability that an individual will commit another crime to the likelihood a defendant will appear for his or her court date. These algorithmic outputs inform decisions about bail, sentencing, and parole. Each tool aspires to improve on the accuracy of human decision-making that allows for a better allocation of finite resources. Typically, government agencies do not write their own algorithms; they buy them from private businesses. This often means the algorithm is proprietary or “black boxed”, meaning only the owners, and to a limited degree the purchaser, can see how the software makes decisions. Currently, there is no federal law that sets standards or requires the inspection of these tools, the way the FDA does with new drugs.”

Dark Secret at the Heart of A.I., MIT Technology Review, Apr. 11, 2017
“Already, mathematical models are being used to help determine who makes parole, who’s approved for a loan, and who gets hired for a job. If you could get access to these mathematical models, it would be possible to understand their reasoning. But banks, the military, employers, and others are now turning their attention to more complex machine-learning approaches that could make automated decision-making altogether inscrutable. Deep learning, the most common of these approaches, represents a fundamentally different way to program computers. “It is a problem that is already relevant, and it’s going to be much more relevant in the future,” says Tommi Jaakkola, a professor at MIT who works on applications of machine learning. “Whether it’s an investment decision, a medical decision, or maybe a military decision, you don’t want to just rely on a ‘black box’ method.””

Death by Robot, NY Times Magazine, Jan. 9, 2015, at MM16
“Computer scientists are teaming up with philosophers, psychologists, linguists, lawyers, theologians and human rights experts to identify the set of decision points that robots would need to work through in order to emulate our own thinking about right and wrong. Scheutz defines “morality” broadly, as a factor that can come into play when choosing between contradictory paths. It’s a shorter leap than you might think, technically, from a Roomba vacuum cleaner to a robot that acts as an autonomous home-health aide, and so experts in robot ethics feel a particular urgency about these challenges. The choices that count as “ethical” range from the relatively straightforward — should Fabulon give the painkiller to Sylvia? — to matters of life and death: military robots that have to decide whether to shoot or not to shoot; self-driving cars that have to choose whether to brake or to swerve. These situations can be difficult enough for human minds to wrestle with; when ethicists think through how robots can deal with them, they sometimes get stuck, as we do, between unsatisfactory options.”

Does Machine-Learning-Powered Software Make Good Research Decisions? Lawyers Can’t Know for Sure, Legal Rebels (ABA), Nov. 22, 2016
“Do lawyers need to know what they are doing? If you asked sellers of legal research products, their answer might be “mostly.” New technology is changing just how much of the legal research process is hidden from view. The next question sellers must ask is whether the secrecy will pay off. The days when legal research companies could rest on their ability to provide access to materials are over. To compete, each company has two choices: make its product cheaper or make its searches better than those of its competitors. Because the first alternative could set off a price war, companies are fiercely trying to improve their searches, and they increasingly believe that a technology called machine learning is the way to do it.”

Don’t Worry, Attorneys: A.I. Comes in Peace (Perspective), Big Law Business (Bloomberg), Jan. 19, 2017
“Every day seems to bring another headline about A.I.’s newfound prowess. The latest comes from the New York Times, in a lengthy feature on Google Brain, a Google research project pushing the boundaries of A.I.. The article includes a startling anecdote about what happened when Google applied its latest A.I. technology to its 10-year-old Translate service. Suddenly Translate began converting English literature into Japanese — and back — with a fluency that stunned native speakers. And it kept getting better. By the end of this year, translate was improving overnight to a degree “roughly equal to the total gains the old one had accrued over its entire lifetime.” A.I. isn’t somewhere over the horizon anymore. It’s coming, fast. And once people get past the inevitable Skynet jokes, they start wondering how A.I. will affect them — in particular how it will affect their jobs. Everyone remembers how automation upended the manufacturing industry. Will artificial intelligence do the same for the professions — what is often called knowledge work? A.I. will undoubtedly affect white-collar jobs. It stands to make some obsolete. But as a legal technologist, I’m [A.J. Shankar] quite sure it won’t replace attorneys any time soon. In fact, attorneys are some of the people who stand to benefit most from its advance.”

Experts Say A.I. Isn’t Replacing Lawyers, But It Can Make Them More Efficient, ABA J., Mar. 23, 2017
“For better or for worse, human lawyers aren’t going anywhere, according to an article from the New York Times. Lawyers are using artificial intelligence tools for automating tasks, such as contract review and sorting through electronic discovery documents, according to the article. But higher level tasks, especially those that require experience, will take a while, lawyers and other experts told the newspaper. Professor Dana Remus of the University of North Carolina School of Law and labor economist Frank Levy of the Massachusetts Institute of Technology published a paper on the automation of legal work in 2016 and concluded that although the automation of legal tasks reduces the amount of work lawyers must do, it’s not enough to put lawyers out of business. Their paper said that if large law firms adopt new legal technology immediately, those lawyers would lose 13 percent of their current work hours. But the authors said it’s more realistic to assume that this would happen over five years, which would result in closer to a 2.5 percent reduction in hours per year.”

Googling Gives Us Answers—But Deprives Us of Intelligence, Quartz, Apr. 20, 2017
“Search engines play one of the most significant roles in our technologically enabled lives by shaping how we conceptualize and interact with information, knowledge, wisdom, and arguably reality itself. They are our externalized reasoning machines, both facilitating our access to knowledge and quickly becoming our knowledge. They are where we go to research, clarify, and definitively answer our queries, which go on to form the substance of our opinions, views, and beliefs. From the explicit knowledge acquired from the lost art of slow, considered research to the implicit knowledge lost in imagining what we don’t yet know, search engines are surreptitiously eroding the richness and diversity of our knowledge and lives. We reveal our deepest inner thoughts, fears, and desires to search-engine technologies, replacing the intimate human services otherwise offered by teachers, doctors, librarians, friends, confidants, psychiatrists, religious representatives, and respected elders.”

Great A.I. Awakening, N.Y. Times Mag., Dec. 14, 2016
“How Google used artificial intelligence to transform Google Translate, one of its more popular services — and how machine learning is poised to reinvent computing itself.”

How A.I. Will Redefine Human Intelligence, Atlantic, Apr. 11, 2017
“The machines are getting smarter. They can now recognize us, carry on conversations, and perceive complex details about the world around them. This is just the beginning. As computers become more human-like, many worry that robots and algorithms will displace people. And they are right to. But just as crucial is the question of how machine progress will change our perceptions of human abilities.”

How to Hold Governments Accountable for the Algorithms They Use, Slate, Feb. 11, 2016
“In 2015 more than 59 million Americans received some form of benefit from the Social Security Administration, not just for retirement but also for disability or as a survivor of a deceased worker. It’s a behemoth of a government program, and keeping it solvent has preoccupied the Office of the Chief Actuary of the Social Security Administration for years. That office makes yearly forecasts of key demographic (such as mortality rates) or economic (for instance, labor force participation) factors that inform how policy can or should change to keep the program on sound financial footing. But a recent Harvard University study examined several of these forecasts and found that they were systematically biased—underestimating life expectancy and implying that funds were on firmer financial ground than warranted. The procedures and methods that the SSA uses aren’t open for inspection either, posing challenges to replicating and debugging those predictive algorithms.”

Inadequate Court Software Still Gets People Wrongly Arrested, Lawyers Say, Ars Technica, Feb. 1, 2017
“Both county prosecutors and local public defenders largely agreed that something needed to be done about Alameda County Superior Court’s flawed court management software. But how a local judge will order it to be fixed remains unclear. As Ars reported in December 2016, the Alameda County Superior Court switched from a decades-old courtroom management software to a much more modern one on August 1, 2016. Known as Odyssey Court Manager, the new management software is made by Tyler Technologies. However, since then, the public defender’s office has filed approximately 2,000 motions informing the court that, due to its buggy software, many of its clients have been forced to serve unnecessary jail time, be improperly arrested, or even wrongly registered as sex offenders. During a Tuesday hearing, Public Defender Brendon Woods told the court that his clients have been deprived of their constitutional rights as a result.” See also Software Results in Mistaken Arrests, Jail Time? No Fix Needed, Says Judge, Ars Technica, Mar. 10, 2017.

Lawsuit Filed Against Tesla Over Self Driving Function, Paper Chase (Jurist), Apr. 20, 2017
“On Wednesday law firm Hagens Berman [corporate website] filed a class action lawsuit [text, PDF] in California’s Northern District Court [official website] against car company Tesla [official website] over the self driving function in their vehicles. The lawsuit alleges that the autopilot function is “essentially unusable and demonstrably dangerous” and places the car users at risk. The new hardware system, AutoPilot2, is alleged to not live up to standard safety features and was rolled out with defects. Tesla responded in a statement [Business Insider report] that the lawsuit misrepresents many different facts. The company stated, “we have never claimed our vehicles already have functional full self-driving capability”. Tesla also says that the autopilot software is being rolled out in a safe fashion and is subject to government regulation.”

Moral Dilemmas of the Fourth Industrial Revolution, World Economic Forum News, Feb. 13, 2017
“Should your driverless car value your life over a pedestrian’s? Should your Fitbit activity be used against you in a court case? Should we allow drones to become the new paparazzi? Can one patent a human gene? Scientists are already struggling with such dilemmas. As we enter the new machine age, we need a new set of codified morals to become the global norm. We should put as much emphasis on ethics as we put on fashionable terms like disruption. This is starting to happen. Last year, America’s Carnegie Mellon University announced a new centre studying the Ethics of Artificial Intelligence; under President Obama, the White House published a paper on the same topic; and tech giants including Facebook and Google have announced a partnership to draw up an ethical framework for A.I.. Both the risks and the opportunities are vast: Stephen Hawking, Elon Musk and other experts signed an open letter calling for efforts to ensure A.I. is beneficial to society.”

New Casetext Feature Finds Relevant Cases for You, But Along With It Will Come New Pricing, LawSites, July 18, 2016
“The legal research service Casetext is unveiling a new service today that automatically finds cases that are relevant to legal memoranda and briefs. With this unveiling, Casetext, which has been free to use ever since its 2013 launch, is also preparing to roll out its first paid subscription tiers for premium services, while keeping basic access free. The new research tool being unveiled in a limited rollout today is called CARA, short for Case Analysis Research Assistant. What it does is find cases that are relevant to a legal document but not cited in the document. Upload a brief, memorandum or any other document that contains legal text, and CARA analyzes it and generates a list of relevant cases that are not mentioned in the document.”

Novel Approach to Neural Machine Translation, Facebook Code Blog, May 9, 2017
“Language translation is important to Facebook’s mission of making the world more open and connected, enabling everyone to consume posts or videos in their preferred language — all at the highest possible accuracy and speed. Today, the Facebook Artificial Intelligence Research (FAIR) team published research results using a novel convolutional neural network (CNN) approach for language translation that achieves state-of-the-art accuracy at nine times the speed of recurrent neural systems. Additionally, the FAIR sequence modeling toolkit (fairseq) source code and the trained systems are available under an open source license on GitHub so that other researchers can build custom models for translation, text summarization, and other tasks.”

Rise of the Robolawyers, Atlantic, Apr. 2017
“[A]dvances in artificial intelligence may diminish their [lawyers’] role in the legal system or even, in some cases, replace them altogether. Here’s what we stand to gain—and what we should fear—from these technologies.”

ROSS A.I. Plus Wexis Outperforms Either Westlaw or LexisNexis Alone, Study Finds, Law Sites, Jan. 17, 2017
“ROSS Intelligence, the artificial intelligence legal research platform, outperforms Westlaw and LexisNexis in finding relevant authorities, in user satisfaction and confidence, and in research efficiency, and is virtually certain to deliver a positive return on investment. These are among the findings of a benchmark report being released today by the technology research and advisory company Blue Hill Research pitting ROSS against the two dominant legal research services. The full report will be available for download at the ROSS Intelligence website. The study, which ROSS commissioned, assigned a panel of 16 experienced legal research professionals to research seven questions modeling real-world issues in federal bankruptcy law.”

Satnavs ‘Switch Off’ Parts of the Brain, EurekAlert! (AAAS), Mar. 21, 2017
“Using a satnav to get to your destination ‘switches off’ parts of the brain that would otherwise be used to simulate different routes, reveals new UCL research. The study, published in Nature Communications and funded by Wellcome, involved 24 volunteers navigating a simulation of Soho in central London while undergoing brain scans. The researchers investigated activity in the hippocampus, a brain region involved in memory and navigation, and the prefrontal cortex which is involved in planning and decision-making. They also mapped the labyrinth of London’s streets to understand how these brain regions reacted to them.” See Hippocampal and Prefrontal Processing of Network Topology to Simulate the Future, Nature Communications, Mar. 21, 2017.

States Turn to Technology to Calculate Prison Sentences, Governing, Mar. 11, 2015
“We may also have the most complex sentencing system in the world. Along with the vast number of criminal offenses (there were 4,450 federal crimes in the U.S. Code in 2008), there’s an array of rules and exceptions that impact a defendant’s sentence. These include the severity of the crime, the number of offenses committed, credits for time already served, and the defendant’s criminal history. Not surprisingly, errors occur. In 2014, corrections officials in Nebraska discovered they used a flawed formula and miscalculated mandatory minimum sentences for more than 700 inmates, leading to the premature release of nearly 200 prisoners. In Colorado, a 2013 audit revealed that the state’s Department of Corrections incorrectly sentenced as many as 1,000 inmates. The investigation was prompted when a calculation error led to the early release of an inmate who later killed the state’s prison chief.”
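Errors like those described in Nebraska and Colorado arise from how a handful of inputs are combined in a release-date formula. The toy calculation below is purely hypothetical: the credit rules, rates, and variable names are invented for illustration and reflect no jurisdiction's actual law.

```python
from datetime import date, timedelta

def projected_release(start: date, sentence_days: int,
                      jail_credit_days: int, good_time_rate: float) -> date:
    """Hypothetical rule: good-time credit applies to the sentence only,
    then presentence jail credit is deducted."""
    good_time = int(sentence_days * good_time_rate)
    return start + timedelta(days=sentence_days - good_time - jail_credit_days)

start = date(2015, 1, 1)
correct = projected_release(start, sentence_days=1460, jail_credit_days=180, good_time_rate=0.15)

# A flawed variant that (wrongly) also applies the good-time rate to the jail credit:
flawed = start + timedelta(days=1460 - int((1460 + 180) * 0.15) - 180)
print(correct, flawed, (correct - flawed).days)  # the flawed formula releases 27 days early
```

Applied across thousands of inmates, a single misplaced term of this kind produces the systematic miscalculations and premature releases the article reports.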

Surge Predicted in Artificial-Intelligence Training, Courthouse News, May 3, 2017
“As machines and technology continue to transform the workplace, the Pew Research Center says technologists, futurists and scholars are predicting a surge of interest in artificial-intelligence training programs, online courses and “micro-credentialing.” Many of the 1,408 people whom Pew Researchers spoke with are uncertain, however, if workers will be prepared to compete with artificial-intelligence tools, according to the report released Wednesday, in cooperation with Elon University’s Imagining the Internet Center.”

Survey Shows How Lawyers Use Technology in 2017, Sui Generis Blog, Apr. 6, 2017
“Technology has become part of the fabric of our lives. Its effects are inescapable and its impact on our culture has been tremendous. In the business world, technology has helped to streamline processes and improve efficiencies. Although most lawyers weren’t first in line when it came to using technology in their practices, over time the benefits of doing so became clear. That’s why today’s lawyers are increasingly incorporating the latest tools and software into their law firms. As shown in a survey recently conducted by Above the Law in partnership with MyCase (the company for which I [Nicole L. Black] work) lawyers’ technology needs and decisions vary depending on a number of factors, including firm size. The goal of the survey was to determine how lawyers would use technology in the upcoming year. The focus was on learning more about the goals and challenges lawyers faced in running their practices and the types of technologies they planned to incorporate into their firms in 2017 to solve those problems.”

Teaching Legal Technology, AALL Spectrum, Mar./Apr. 2017, at 22
“Many attorneys are not as fluent as they should be in using technology to its fullest capacity for the benefit of the firm or the client, as evidenced by results of the KIA Audit—a basic legal technology skills audit administered by KIA Motors to potential outside counsel. A well-attended 2015 AALL Annual Meeting program discussed the results of the KIA audit and was an eye-opener for many in the room. (View the AALL 2015 Annual Meeting program at bit.ly/AALL2goKIA.) Recognizing the importance of continuing the discussion on teaching legal technology, another Annual Meeting program, presented in July 2016, invited librarians representing firms, law schools, and courts to talk about the issues, the challenges, and the obstacles librarians face in helping law students and attorneys learn, hone, and become self-sufficient with their technology skills.”

Understanding the Differences Between A.I., Machine Learning, and Deep Learning, TechRepublic, Feb. 23, 2017
“With huge strides in A.I.—from advances in the driverless vehicle realm, to mastering games such as poker and Go, to automating customer service interactions—this advanced technology is poised to revolutionize businesses. But the terms A.I., machine learning, and deep learning are often used haphazardly and interchangeably, when there are key differences between each type of technology. Here’s a guide to the differences between these three tools to help you master machine intelligence.”

When It Comes to Reading, Should You Go Paperless?, Lawyerist, Apr. 25, 2017
“So, should lawyers resort to paper when they think it is better or force themselves to think and work digitally? This is typical of many technological decisions we face as lawyers: should I do what is comfortable and established under the guise that it gets me an immediate perceived better result, or do I do what is less immediately comfortable and hope for an improved experience down the road?”

When Reporters Get Hands-on with Robo-Writing, Digital Journalism, Mar. 1, 2017
“The availability of data feeds, the demand for news on digital devices, and advances in algorithms are helping to make automated journalism more prevalent. This article extends the literature on the subject by analysing professional journalists’ experiences with, and opinions about, the technology. Uniquely, the participants were drawn from a range of news organizations—including the BBC, CNN, and Thomson Reuters—and had first-hand experience working with robo-writing software provided by one of the leading technology suppliers. The results reveal journalists’ judgements on the limitations of automation, including the nature of its sources and the sensitivity of its “nose for news”. Nonetheless, journalists believe that automated journalism will become more common, increasing the depth, breadth, specificity, and immediacy of information available. While some news organizations and consumers may benefit, such changes raise ethical and societal issues and, counter-intuitively perhaps, may increase the need for skills—news judgement, curiosity, and skepticism—that human journalists embody.”

Will Algorithms Erode Our Decision-Making Skills?, NPR, Feb. 8, 2017
“Algorithms are embedded into our technological lives, helping accomplish a variety of tasks like making sure that email makes it to your aunt or that you’re matched to someone on a dating website who likes the same bands as you. Sure, such computer code aims to make our lives easier, but experts cited in a new report by Pew Research Center and Elon University’s Imagining the Internet Center are worried that algorithms may also make us lose our ability to make decisions. After all, if the software can do it for us, why should we bother?”

Will Democracy Survive Big Data and Artificial Intelligence?, Scientific American, Feb. 25, 2017
“The digital revolution is in full swing. How will it change our world? The amount of data we produce doubles every year. In other words: in 2016 we produced as much data as in the entire history of humankind through 2015. Every minute we produce hundreds of thousands of Google searches and Facebook posts. These contain information that reveals how we think and feel. Soon, the things around us, possibly even our clothing, also will be connected with the Internet. It is estimated that in 10 years’ time there will be 150 billion networked measuring sensors, 20 times more than people on Earth. Then, the amount of data will double every 12 hours. Many companies are already trying to turn this Big Data into Big Money. Everything will become intelligent; soon we will not only have smart phones, but also smart homes, smart factories and smart cities. Should we also expect these developments to result in smart nations and a smarter planet?”

RESOURCES

90 Active Blogs on Analytics, Big Data, Data Mining, Data Science, Machine Learning (updated) (KDnuggets)
“This post updates a previous very popular post 100 Active Blogs on Analytics, Big Data, Data Mining, Data Science, Machine Learning as of March 2016 (and 90+ blogs, 2015 version). This year we removed 26 blog sites from the previous list that does not meet our active criterion: at least one blog in the last 3 months (since Oct 1, 2016). We also added ten new relevant blogs to the list. All blogs in this list are categorized into two groups: very active and moderately active. The former often have several entries each month while the latter may only have one post for a few months recently. We also separate blogs that do not involve much in technical discussions as in an Others group. Within each group of blogs, we list in alphabetical order. Blog overview is based on information as it has appeared on its URL as of 1-1-2017. If we missed some popular active blogs, please suggest them in the comments below.”

A.I. Roundup – A Guide to ILTA’s Artificial Intelligence Content (International Legal Technology Association)
“OK, so Watson won “Jeopardy!” back in 2011. That’s ancient history in technology years. ILTA provides a wealth of programming about the current state of affairs in Artificial Intelligence that will benefit law firms and corporate legal departments. In the future, I’m [Joe Davis] sure we’ll have a bot to curate all this content for us. In the meantime, below is a sampling of some of ILTA’s best A.I.-related content from ILTACON, Insight, webinars, white papers and Peer to Peer.”

Legal Technology (ABA)
Collection of stories published by the American Bar Association concerning the use of technology in law practice, developments in the industry, and applications to research, information security and litigation.

Project Information Literacy (PIL)
“Project Information Literacy (PIL) is a national and ongoing series of research studies that investigates what it is like being a college student in the digital age. We seek to understand how college students find and use information — their needs, strategies, practices, and workarounds — for course work and solving information problems that arise in their everyday lives.”

[1] The future is just around the corner. See, e.g., Geoff Brown, Technology Could Allow Facebook Users to Type 100 Words Per Minute—Using Only Their Thoughts, Hub (Johns Hopkins U.), Apr. 19, 2017; Ahmed Alkhateeb, Science Has Outgrown the Human Mind and Its Limited Capacities, Aeon, Apr. 24, 2017.

[2] See Cara E. Greene, Competent Representation: Ethics and Technology in the Practice of Law (ABA 2013) (“The days are gone when all an attorney needed to practice law was a typewriter and a telephone. Technology has transformed the legal practice and attorneys must stay abreast of technological developments if they are to satisfy their ethical obligation to provide competent representation. In fact, the American Bar Association recently amended the Model Rules of Professional Conduct (“Model Rules”) to account for changes in technology, concluding that “competent lawyers must have some awareness of basic features of technology” and amending Comment 6 of Model Rule 1.1 to clarify that minimum competence requires that attorneys keep abreast of changes in the law and practice, including “the benefits and risks associated with technology.”” Id. at 2 (footnote omitted)). See also Ivy Grey, Exploring the Ethical Duty of Technology Competence, Part I, Law Tech. Today, Mar. 8, 2017 (“Competency directly relates to performing our duties as an attorney (Model Rule 1.1) and indirectly relates to fees and billing (Model Rule 1.5). Our ethical duties require us to do more than just to maintain client confidences, therefore our duty to be technologically competent must extend beyond confidentiality, too. A lawyer must be competent in all matters reasonably necessary for the representation.”); Ivy Grey, Exploring the Ethical Duty of Technology Competence, Part II, Law Tech. Today, Mar. 9, 2017 (“It can be an intimidating thought that by refusing to become technologically competent, lawyers are knowingly wasting client’s time and money. If true, the billable time spent manually performing easily-automated basic tasks or fruitlessly fiddling with MS Word may be an unearned fee to which the lawyer is not entitled. It’s already clear that clients are not willing to pay for this time, but this could be more than a billing write-off—it may constitute an ethical violation.”)

[3] See, e.g., Hope Reese, Understanding the Differences Between A.I., Machine Learning, and Deep Learning, Tech Republic, Feb. 23, 2017. See generally FYI: Technology Terms Defined (ABA Legal Technology Resource Center) (collection of tools and dictionaries of law-related computer terms).

[4] This is especially true for legal research. See Ronald E. Wheeler Jr., Does WestlawNext Really Change Everything?: The Implications of WestlawNext on Legal Research, 103 Law Libr. J. 359 (2011); Robert Ambrogi, Upsetting the Applecart of Legal Research, Above the Law, May 15, 2017; Sam Glover, Casetext’s CARA Takes the Search out of Legal Research, Lawyerist, July 18, 2016; Brian Sheppard, Does Machine-Learning-Powered Software Make Good Research Decisions? Lawyers Can’t Know for Sure, Legal Rebels (ABA), Nov. 22, 2016; Joe Hodnicki, 10,000 Documents: Is There a Flaw in West Search?, Law Librarian Blog, Mar. 20, 2017 (raising questions about the ultimate scope of search retrieval results in Westlaw and Lexis).

[5] Moreover, it is raising general concerns for artificial intelligence, electronic personhood, robotic autonomy and emergence; machine bias, ethics, and morality; and legal accountability and responsibility. See, e.g., Alex Hern, Give Robots ‘Personhood’ Status, EU Committee Argues, Guardian, Jan. 12, 2017 (“The European parliament has urged the drafting of a set of regulations to govern the use and creation of robots and artificial intelligence, including a form of “electronic personhood” to ensure rights and responsibilities for the most capable A.I.. In a 17-2 vote, with two abstentions, the parliament’s legal affairs committee passed the report, which outlines one possible framework for regulation.”).

[6] See Ken Strutin, Mecha Justice: When Machines Think Like Lawyers, LLRX, Sept. 10, 2016.

[7] Privacy, confidentiality, discovery, legal and factual research, and the basics of conducting business in an online world must be informed by an understanding of the latest currents in computers and legal craft. See, e.g., ABA Formal Opinion 477 Securing Communication of Protected Client Information (May 11, 2017) (“A lawyer generally may transmit information relating to the representation of a client over the internet without violating the Model Rules of Professional Conduct where the lawyer has undertaken reasonable efforts to prevent inadvertent or unauthorized access. However, a lawyer may be required to take special security precautions to protect against the inadvertent or unauthorized disclosure of client information when required by an agreement with the client or by law, or when the nature of the information requires a higher degree of security.”); Internet of Things (GAO 2017) (“The rapid, global proliferation of IoT [Internet of Things] devices has generated significant interest. In light of the current and potential effects of the IoT on consumers, businesses, and policy makers, GAO was asked to conduct a technology assessment of the IoT. This report provides an introduction to the IoT and describes what is known about current and emerging IoT technologies, and the implications of their use.”).

[8] See Ken Strutin, Cut and Paste Opinions: A Turing Test for Judicial Decision-Making, LLRX, July 25, 2015; Ken Strutin, Cognitive Independence in Judicial Decision-Making, N.Y.L.J., Sept. 26, 2016, at 5.

Posted in: AI, Legal Marketing, Legal Profession, Legal Research, Legal Technology