Though the criticisms of President Trump’s withdrawal from Syria have been exaggerated (see here and here), the bipartisan condemnation reflects the fact that Trump does his greatest damage to the national interest on matters that are not grounds for impeachment. When he abandons our allies, emboldens our enemies, or engages in damaging trade wars, he may be guilty of bad policy choices, but those choices are not high crimes and misdemeanors. They are matters to be judged by the voters on Election Day.

Unfortunately, our Constitution has, with the aid of Congress and the Supreme Court, created a chief executive who possesses the power to act unilaterally on the full range of domestic and foreign policy matters. With a simple phone call or tweet, Presidents can create serious problems while acting within their legal authority.

So while Congress should address Trump’s violations of the law, it needs to do much more than that. Congress needs to ensure that it prevents future Presidents from making major policy decisions on their own rather than in conjunction with other elected officials. For more on that point, see here.

Introduction 

If you’re anything like me, your first experience in law school—whether it is a “summer start” program like the one in which I began or the more traditional 16-week semester—may well turn out to be a wake-up call of the worst kind, like a slept-through-your-flight kind of call from a hotel’s front desk. In short, the first months of law school have the power to make you feel not only like you’ve been left behind but like you’re all alone. It’s like Home Alone, only this time you’re playing Kevin. 

Perhaps that seems dramatic. And in one sense, I suppose it is. After all, plenty of students adjust to the rites and rigors of law school without so much as breaking a sweat. For the rest of us, though, there’s an adjustment period, and a significant one at that. Now that I’m nearly halfway through my 3L year, I want to take some time to pass on what I’ve learned. In particular, here are four things—“Three Rs and a T”—about which I wish I could have said “I wish I’d known that” earlier than I did. 

Consult Resources. 

Perhaps not surprisingly, the first thing to which I wish I had been privy during my first few months of law school was the school’s vast network of resources, of both the professional and personal variety. They are there, waiting to be consulted. But you’ve got to know who, what, and where they are. 

By professional resources, I mean both awareness of and access to the vast array of supplementary materials that line your library’s dusty shelves, as well as your school’s version of the University of Oklahoma College of Law’s 1L “Study Resources” splash page. Not to be dramatic, but my school’s supplementary materials were the sine qua non of my marked improvement between my first and subsequent semesters.  

That said, the truth is that there are far more resources to peruse than there is time to peruse them. But don’t let the sheer volume of supplementary sources scare you away. Instead, find a few you like, and then apprentice yourself to them early on. It’ll make a world of difference come exam time. (Personally, I like to spend most of my time with conceptual outlines and digital recordings of famous law school lecturers. Others prefer multiple-choice compilations or flash cards.) 

By personal resources, on the other hand, I mean the plethora of personnel who are at your service. Although my school’s director of academic support had yet to take the metaphorical field when I arrived, I have since had the pleasure of working with her while she called the tutorial plays. I don’t know about you, but I find that talking through concepts in one-on-one conversation is one of the best ways in which I learn. Whether those conversations are for you or not, get to know the administrators, the advisors, and the tutors. They’re there for you, and they want to see you succeed. And if you’re wondering why I haven’t mentioned professors in this section, it’s because my own experience has taught me that profs generally make for better conversation partners than they do tutors. So get your directed academic guidance elsewhere. 

Prioritize Relationships. 

For most of you, this one’s probably a no-brainer. Like most things in life, law school is both easier and more enjoyable when you do it with others. No, don’t do others’ work or sign up for classes just because someone else does. But don’t forget to build relationships. And don’t start thinking about relationships exclusively through the lens of networking just because you’re in law school. People aren’t projects; they don’t want to be treated like means to ends. So yes, get to know your classmates, mentors, and other friends of your school . . . not because of what they can do for you but because of who they are. 

More specifically, within the first few months of school make it a personal goal to deal with your own myopia by seeking out relationships with a few good men and women, especially men and women with different interests, skills, and backgrounds. You won’t believe how much of the “law” you miss simply because of your own background and biases. Plus, a few good men and women make for excellent study buddies come exam time. And everyone needs a few study buddies.  

Even LeBron can’t win titles on his own. Of course, if those study buddies totally fail you, just get new ones. If you’re wondering how to do that, just call the Los Angeles Lakers’ front office. I’m sure team executives would be happy to tell you how they managed to trade away all of LeBron’s teammates to places stretching from New Orleans to Timbuktu.  

But, on a more serious note, other people often make all the difference in who we are and what we become. See generally John Macmurray, Persons in Relation (Humanity Books 1961). Most of your success in law school, as in life, will likely be a byproduct of those around you. See id. This is categorically true—even for the “highest ranked” among us. So get out there and make a few friends. But remember to treat them like people, not projects. 

Read Effectively. 

This one’s probably a no-brainer too. Get ready to read, read, and then read some more—but be sure to learn how to read first. Full disclosure: I wrestled with whether to make “Reading” the first ‘R’ in my list, primarily because it seems to be the foundational element of every (legal) education program in the world. But I’ve chosen to put it here because even though it is arguably the most important item on this list, chances are it’ll be the easiest to “fix.” 

If you don’t already know, the “best” readers among us read different genres differently. See, e.g., Mortimer J. Adler & Charles Van Doren, How to Read a Book: The Classic Guide to Intelligent Reading 190-308 (Touchstone Books 1972). For example, the best readers approach academic journal articles on sub-atomic particles one way and New York Times opinion pieces another. To become a good reader, in other words, is to learn to take stock not only of what you’re reading but why you’re reading. Only then will you be able to understand how you ought to be reading. This is even more important when it comes to reading the law. 

One quick illustration: I came to law school after spending three years working as a middle division American history teacher. My paramount objective was to introduce students to the “past” as well as the people, places, and events therein. One of the many ways I did that was by giving lectures. To prepare, I would often have to conduct research and hunt for facts. The most important facts were usually buried somewhere within hundreds or thousands of pages of journal entries, newspaper articles, or monographs. Sometimes the hunt was exhausting. But it was always worth it because it wasn’t until I had a decent grasp of the facts that I was able to present a story of the past through which students were prepared to identify and explain, or analyze and interpret, historical events. 

Understanding the facts is an equally important part of the study and practice of law. But to treat cases only—or even primarily—as a hunt for hidden facts is to miss the point of reading case law. Yes, we must have a basic grasp of the facts in order to analyze cases intelligently, but the goal of reading case law is to learn to identify the relationships between the facts and the rules, not just to memorize facts. In some sense, that is the essence of the practice and interpretation of the law: What is the relationship between fact(s) X and rule(s) Y at time(s) Z, and what does that relationship permit, encourage, or bind the court to do? 

It Just Takes Time. 

The last thing about which I want to say “I wish I’d known that” is that, for most of us, adjusting to the rites and rigors of law school just takes time. I find it ironic that I didn’t see this coming. As an “older” student (I was 29 when I started), I suppose I should have expected that any measure of success in law school—like most good things in life—wasn’t going to come overnight. As convenient as they are, everyone knows microwave ovens ain’t got nothin’ on slow cookers. Wherever you are in the process, remember this: it’ll probably take some time to adjust.  

Just how much time it’ll take to adjust is different for everybody. It took me a semester. In fact, by the time I figured out what was going to be asked of me on my first set of exams, it seemed like I had less than a couple of weeks to prepare properly. Others probably could’ve taken—and passed—their exams without preparing at all. Not me. And likely not you either. So take some time to reflect on how quickly (or slowly) you typically adjust to new environments, and then do the things I’ve recommended above. 

Conclusion 

Ultimately, the key to achieving a measure of success in law school is not how quickly you adjust or how effectively you read; instead, success comes as you get to know who you are and what you need to do to accomplish your goals. If you’re 22 or 23, be willing to admit that you don’t know much about who you are or where you’re headed. And if you’re a little older, like I was, just remember that graduate school and mid-life career changes are never easy. In a word, don’t worry about whether you’re going to be a “success.” Success does not, after all, always look like graduating “number one” or landing a federal appellate clerkship. It might, but it might also look like cranking out public interest briefs for next to nothing.  

Notwithstanding that everyone’s personality and professional goals are different, I have a sneaking suspicion that the sooner you learn how to consult resources, prioritize relationships, read effectively, and acknowledge that it takes time to adjust to the rites and rigors of law school, the less likely you’ll find yourself saying—like I did—“I wish I’d known that.” 

On July 23, 2019, I emailed the Associate Deans’ and Deans’ ABA listservs asking for information about innovative courses. I received 54 responses describing more than 60 courses. I felt so fortunate to get to learn about all the interesting and innovative classes law schools have created, and I hope this post, which will be the first in a series of posts describing what I learned, proves to be of value to you. (Note 1: I have decided to adjust my label from “unique courses” to “innovative courses” so I can duck the question of whether any particular course meets the high standard of distinctiveness suggested by the word “unique.”)

I decided to organize the courses into four categories: Required Courses, Electives, Skills Courses and Clinics, and Law and Technology Courses, and I am planning six blog posts, including this one on Required Courses, two on Electives (for which I received the largest number of nominated courses), two on Skills Courses and Clinics (for which I received the second largest number of nominated courses), and one on Law and Technology Courses. (Note 2: I acknowledge that these categories are arbitrary and simplistic. Skills courses and clinics are always taught against doctrinal backgrounds, and required and elective doctrinal courses typically teach analytical skills, and, in some cases, drafting and other practice skills.) My goal is to complete this series of six posts over the next 12 weeks.

Each post will describe the courses, identify the law school that offers each course, and, if I have the information, provide the name of a professor at the law school who teaches it. I will include at least some commentary about most of the courses. The quoted language comes from the email I received from the law school or from the law school’s website.

In this post, as its title promises, I am focusing on innovative required courses. The seven required courses in this category fit into three sub-categories: (1) law practice skills, (2) professionalism and professional identity, and (3) foundational knowledge. Six of the seven courses are first-year courses.

Law Practice Skills

Professor Laura Thomas at the University of Minnesota School of Law has co-led the law school’s first-year Law in Practice course for years. The course

[S]eeks to transform law students’ emerging knowledge of legal doctrine and reasoning into an introductory understanding of the practice of law. LiP combines classroom teaching with small group simulation experiences to provide the conceptual knowledge and professional skills needed to master the iterative process of discovering new facts, refining legal research objectives and managing the relationship with the client. Law School faculty members teach a weekly class exploring doctrinal and strategic issues in the simulated cases. Students perform simulations in ‘Practice Groups’ of eight students led by practicing attorneys. Groups of two students engage in client or witness interviews, client counseling, and negotiation and dispute resolution simulations. Each student individually takes a deposition.

Here is a link to additional information about the course, and here is a link to an article about the course from Minnesota’s alumni magazine.

While many law schools have second-year simulation courses that are similar in content, what distinguishes this course is the choice to move it forward to the first year, a choice that I would hope would increase the likelihood that students retain the excitement about becoming lawyers that led them to go to law school in the first place.

Professionalism and Professional Identity

Three of the seven nominated required courses fit into this category, and I believe 25-30 law schools, including my current law school and my prior law school, require similar courses, all in the first year. Particular kudos are due in this category to Mercer Law School, which was one of the first law schools, if not the first, to create such a course, and to St. Thomas University School of Law, which has taken a leadership role in this field.

The courses in this category about which I was emailed were:

Mercer’s first-year The Legal Profession course, created by Professor Patrick Longan. This three-credit-hour course is described as

[A]n exploration of lawyer professionalism. Students learn about what ‘professionalism’ means for lawyers and why it matters. They see what pressures the practice of law places on professionalism in different settings. The students explore the many ways in which the legal profession seeks, imperfectly, to create and perpetuate the conditions that promote professionalism. This course also examines the extraordinary challenges and opportunities that come with a life in the law, and the students study ways in which professionalism contributes to the satisfaction that lawyers find in their calling. In addition to class readings, discussions, guest speakers, and an exam, the students write two papers reflecting on their career goals. They also visit in small groups with experienced lawyers to discuss life in the legal profession, and they read a biography of a famous lawyer or judge and discuss it in a small group setting.

Here is a link describing the course’s evolution.

The University of North Dakota School of Law’s Professional Foundations course is a team-taught, two-credit-hour course that was created and coordinated by Professor Emeritus Patti Alleva (who retired this year) and the law school’s new Dean, Michael McGinniss. The class

[I]ntroduces students to concepts of professional role, identity, and practice for lawyers. A key objective of the course is to assist students in beginning to cultivate a reflective mindset about professional life in the law and to develop the habits needed to exercise sound professional judgment as lawyers. Students will develop the skill of practiced self-reflection in legal settings and, in exploring the kind of lawyers they want to become, deepen their ability to apply their professional values in the practice of law.

Texas A&M School of Law’s Professional Identity course. In Professional Identity, students are asked to reflect on themselves, their goals, and how best to achieve them. PI is a chance for students to focus on their own professional development.

Foundational Knowledge

Villanova University Charles Widger School of Law has created two business-focused required courses aimed at providing students with foundational business knowledge. The first, a one-credit first-year course called the Business & Financial Literacy Module, is taught as a one-week intersession in January of students’ first year. It was described to me in this way: The course

[I]ntroduces all 1L students to critical business and finance concepts and their application in practice. This required course begins with an overview of basic financial literacy concepts, including instruction on how to read a financial statement and how to value a business. The course moves beyond these basics to show how these concepts are used in a practical setting. Students work in small groups to solve an ongoing problem involving the valuation and sale of a business. Their work is overseen by practicing attorneys who help students put the concepts they have learned into practice as they work through this real-life legal scenario. The week culminates with teams of students negotiating a deal and creating a term sheet for their clients, all with the guidance and supervision of experienced practitioners.

The second course, The Business Aspects of Law Module, was designed as a follow-up to the 1L course. The course was described to me in this way:

This one-week course was designed in consultation with law firm and in-house leaders. Practitioners from various settings – from global firms to small boutiques, from giant corporations to family businesses, from non-profits to government – show the students how different legal organizations run their business. Putting this knowledge into practice, students are broken into small teams and tasked with a simulation that requires them to run the general counsel’s office at a multinational corporation.

Together, these courses address an issue that I have heard about from practitioners all over the country: most new lawyers do not understand essential business concepts that can be significant both to their own practices and to their work for clients.

UNT Dallas College of Law has created a one-credit-hour, first-year course it calls Lawyering Fundamentals. The course aims to provide UNT students with a hybrid of the Legal Process courses from prior eras of legal education and the professionalism and professional identity courses described above. The course, per the description I received,

[I]ntroduces students to the UNT Dallas College of Law and its curriculum, and introduces concepts and skills that will be important throughout the study of law, including introduction to law as a profession, introduction to the court systems in Dallas, anatomy of a trial and anatomy of a deal, methods of effective studying and learning in law school, and interactions and interviews with lawyers relating to legal education and the practice of law.

Final Thoughts

Innovative courses in the first year are rare. Constrained by bar exam pass rate concerns, marketing concerns that lead law schools not to require more credit hours than their peer law schools, and the pervasive influence of the Langdellian legal education curriculum and the law school quasi-Socratic teaching method, law schools have only tinkered around the edges of the required curriculum. As you will see in future posts, the same cannot be said for upper-level skills courses, electives, and law and technology courses.

The intersection of technology and healthcare has reached an unprecedented point in history. The traditional medical model, in which the ability to accurately diagnose and treat patients is gained from thousands of hours of hands-on medical training, is being challenged. The challenge comes from the creation of artificial intelligence (AI) software that can use deep learning to meet or exceed the accuracy and dependability of medical decisions made by its human counterparts. These AI systems function as an “economy of minds,” using the collective experiences of human physicians and healthcare providers to create massive databases of knowledge that can be trained to perform specific undertakings.[1] AI already infiltrates fields such as radiology, optometry, and some simple surgeries. But what happens when the technology’s learning outgrows its initial programming and the technology makes a mistake? Who becomes the tortfeasor? The doctor relying on the AI? The hospital paying for the AI? The company responsible for the initial programming? Or is no one liable?

This paper will examine the ethical considerations facing a healthcare provider using AI and explore the use of AI systems in medical decision making. I will divide my argument into five parts. Part I will define AI, rendering a very complex, technical topic in understandable language. Part II will break AI down into the three primary classifications of AI systems, which will be important to delineate as these types of systems are discussed. Part III will explore the current and potential uses of AI in healthcare by giving more concrete examples of AI applications when mass amounts of data are available. Part IV will identify significant concerns with the ethics of AI as we know it today, focusing specifically on the reliability concerns of a healthcare provider using an AI system for diagnosis or treatment planning. We will also look at how current law intersects with innovative AI. Finally, in Part V, I make a recommendation concerning the use of AI in healthcare and how the symbiosis of such tools can be maximized while limiting risk.

1.     Artificial Intelligence Defined

Artificial Intelligence (AI) is one of the most misunderstood and difficult-to-explain concepts in our culture today. When the public hears the term AI, they often imagine machines becoming self-aware and systematically disposing of the entire human race, as depicted in the movies.[2] However, AI as we know it today “is a combination of neuro-linguistic processing with a knowledge base and data storage where interaction data matches with analytic data.”[3] In layman’s terms, AI is a computer program trained to recognize a specific outcome after being exposed to adequate amounts of variable data, such that a statistical rate of accuracy can be determined when it analyzes a new data point.[4] The ability of these systems to use “probabilistic representations” and “statistical learning methods” has opened the door to AI-influenced products in “machine learning, statistics, control theory, neuroscience, and other fields.”[5] The goal of these systems is to perform in such a way that, “if observed in human activity,” the general public would label the performance “intelligent.”[6] Depending on the task, this can be easily accomplished or impossible given current computing power.[7] However, scientists and engineers have been finding more and more applications for these outcome-driven systems in our current society.[8]
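That layman’s definition—train on labeled examples, then measure a statistical rate of accuracy on new data points—can be sketched with a toy classifier. Everything below (the data, the labels, the nearest-centroid method) is invented purely for illustration and describes no real medical AI product:

```python
from statistics import mean

# Toy training data: (feature value, label). In a real system the
# features would be high-dimensional and the dataset enormous.
training = [(1.0, "benign"), (1.5, "benign"), (2.0, "benign"),
            (8.0, "malignant"), (8.5, "malignant"), (9.0, "malignant")]

# "Training": compute the average feature value (centroid) per label.
centroids = {}
for label in {lbl for _, lbl in training}:
    centroids[label] = mean(x for x, lbl in training if lbl == label)

def classify(x):
    """Predict the label whose centroid is closest to the new data point."""
    return min(centroids, key=lambda lbl: abs(x - centroids[lbl]))

# "Statistical rate of accuracy": score the model on held-out test points.
test = [(1.2, "benign"), (8.7, "malignant"), (2.5, "benign")]
accuracy = sum(classify(x) == lbl for x, lbl in test) / len(test)
print(accuracy)  # 1.0 on this toy set
```

The point of the sketch is only the shape of the process: exposure to variable labeled data, followed by a measurable accuracy rate on unseen data points.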

Products such as advanced analytics and diagnostic analysis are now being utilized in a variety of industries and marketplaces, including law and medicine. The global research and advisory firm Gartner called advances in AI “the most disruptive class of technologies over the next [ten] years due to radical computational power, near-endless amounts of data, and unprecedented advances in deep neural networks.”[9] Gartner and many others feel that we are still in the infancy of AI.[10] This infancy, or newness, is apparent in the only recent widespread rise of facial recognition as a standard feature on many cell phones, of cars that self-park and offer limited autopilot abilities, and of big data analytics as the new normal rather than a rarity.[11]

2.     Classifying AI

With an informed understanding of how AI functions, it is possible to further classify AI systems into three distinct categories: Artificial Narrow Intelligence, Artificial General Intelligence, and Artificial Superintelligence.[12] These classifications also help delineate the stage of progression at which science and computing power currently limit our AI capabilities.[13]

Artificial Narrow Intelligence (ANI) is the most common type of AI and is essentially a task-specific program interface.[14] When most people speak of existing AI, or reference a current product as using AI, ANI is typically what they are describing. Products such as advanced research analytics, diagnostic analysis, and statistical probabilities are all examples of ANI.[15] Because the term refers to the limited parameters of such a system, ANI is also referred to as “Weak AI.”[16] Here, “weak” is analogous to basic or limited in comparison with the theoretical potential of more advanced systems. An interesting example of this weakness is China’s “crime-fighting facial recognition software,” which recently gave the famous executive Dong Mingzhu a ticket for jaywalking as a bus with her face on the side of it sped through an intersection.[17]

Artificial General Intelligence (AGI) differs from ANI in that it refers to the programming either mirroring or “exceeding human intelligence.”[18] AGI can learn multiple tasks and transfer its learning between them.[19] In other words, the AI “learns how to learn.”[20] Examples of AGI would be IBM’s Watson supercomputer and self-driving cars as we know them today.[21] However, it is within this category of AI that our current state of technology puts a hard stop on development.[22] Replicating human behavior convincingly enough to suggest genuine intellectual understanding is simply beyond the reach of today’s science.[23] The brain and how our consciousness works are undoubtedly beyond our current capacity to model.[24] Science is currently unable to replicate even the most basic of human abilities, and it is debated whether “mapping the human brain will ever be feasible.”[25] The most capable AGI to date is the Impala algorithm, which can learn up to 30 different tasks of varying complexity.[26] While an impressive innovation, the Impala algorithm is basic in comparison to the millions of separate tasks performed every day by the human mind. This fact is not lost on the cutting-edge developers of AI. These scientists and researchers are diligently working to expand the borders of possibility, as evidenced by the new and increasingly advanced systems coming online regularly.

Artificial Superintelligence (ASI) is the final and most advanced level of AI. This level of AI would be self-aware and would far exceed human capability in both quantity and quality of performance. This phenomenon is often referred to as the “singularity,” the point in time at which technology will bring “unfathomable changes” to human existence as we know it.[27] It has been hypothesized that after the singularity, computations that only minutes before would have taken years would take mere seconds, and that life as we know it would never be the same.[28] This phase of AI is met with “radical uncertainty” and is entirely theoretical, but it is fascinating that top minds feel we may see the singularity in five to thirty years.[29] The possibility of an algorithm that can synthesize an amount of data that could lead to the eradication of poverty, homelessness, or disease could change the world forever.[30] Still, there is a valid concern about the ethical implications of a self-aware computer that exceeds human understanding. Think tanks, like the Partnership on AI, are attempting to bring those policy issues to the forefront of the discussion as we continue to innovate and create.[31] By regulating the industry as it is being designed, these groups hope to inject ethical considerations early in an attempt to limit any negative impact on society in light of the potential of this type of AI.[32]

3.     Applications of AI in Healthcare

Few industries showcase the importance of quick and accurate decision making as healthcare does. ANI and AGI systems aimed at medical decision making are trained on a variety of healthcare data with the goal of speeding up many tasks.[33] Moreover, AI can significantly reduce or eliminate human error when critical decisions must be made in seconds instead of minutes or hours.[34] These systems assist healthcare providers in their decision making daily, in the form of assistance with emergency dispatch calls, virtual nurses, robotic assistance in laparoscopic surgery, and symptomology research. AI systems “mine medical records, design treatment plans [and] create drugs [] faster than any current actor on the healthcare palette including any medical professional.”[35]

AI has been successful in assisting healthcare providers by instantaneously analyzing the enormous number of possible diagnoses for patients to determine a “more accurate diagnosis” in many disciplines.[36] These diagnoses have evolved into more than another data point on a graph; they are a complete qualitative and quantitative analysis designed to replicate or exceed the decision made by a physician in a specific task or diagnostic reading.[37] Therefore, in areas where the AI’s diagnosis is exceptionally accurate, its findings have been given significant weight and a heightened status. For example, IBM’s supercomputer, Watson, has been trained to analyze and diagnose certain types of cancer, like leukemia, by synthesizing an individual’s blood and bone marrow testing against a dataset of positive cancer diagnoses at multiple stages of the disease.[38] These AI systems are so well trained that, in some diagnoses, their findings are “significantly more accurate” than traditional cancer staging systems.[39]

The ability of hospitals and physicians to provide their patients with better health outcomes by leveraging these new technologies has led to a rapid expansion of available tools and software based on ANI and AGI in healthcare. These systems can analyze data specific to a hospital or hospital system or drill down into data from individual clinics or health departments.[40] The abundance of data collected over the years can now be easily referenced and compiled into usable models and statistical calculations that not only give healthcare, as an industry, better probabilistic diagnostic data but also provide insight into healthcare trends and successful initiatives both globally and in local communities.[41] The specificity of ANI has led to immediate applications in normal-versus-abnormal diagnoses, which have led to further testing and more accurate diagnosis in breast, colorectal, and optometric cancers and conditions.[42]

In addition to diagnostic analysis, ANI systems are also helping healthcare providers and hospitals be more efficient in their choice of orders and testing when a patient presents for treatment.[43] Doctor AI, a cutting-edge AGI system, uses data mined from a hospital’s electronic health record (EHR) to perform a differential diagnosis, a comparison between two or more similar diagnoses, with significantly higher accuracy than baseline expectations.[44] In short, the system can predict a physician’s diagnosis and anticipate order sets faster and more efficiently than a healthcare provider making a preliminary diagnosis and then manually uploading a pre-planned group of orders commonly associated with the initial findings.[45] Nurses and healthcare providers can therefore get the specific, necessary tests to the patient faster and reduce the time needed to treat the individual patient’s condition.[46] This saves the patient time, expense, and suffering.
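
The kind of prediction described above can be sketched in miniature. The example below is purely illustrative: the real Doctor AI is a recurrent neural network trained on actual EHR data, while this toy stand-in, with made-up diagnosis codes and histories, simply predicts a patient’s next diagnosis from the most common follow-up seen in prior visit records.

```python
# Hypothetical sketch of next-diagnosis prediction from visit histories.
# (Doctor AI itself uses a recurrent neural network; this toy model
# only counts which diagnosis code most often follows another.)
from collections import Counter, defaultdict

# Made-up training histories: sequences of diagnosis codes per patient.
histories = [
    ["E11", "I10", "N18"],   # e.g., diabetes -> hypertension -> kidney disease
    ["E11", "I10", "I25"],
    ["E11", "N18", "N18"],
    ["I10", "I25", "I50"],
]

# Count which code most often follows each code across all patients.
follow = defaultdict(Counter)
for h in histories:
    for cur, nxt in zip(h, h[1:]):
        follow[cur][nxt] += 1

def predict_next(code):
    """Most frequent follow-up diagnosis observed after `code`."""
    return follow[code].most_common(1)[0][0]

print(predict_next("E11"))  # "I10": seen twice after E11, vs "N18" once
```

The real system generalizes the same idea: instead of counting one-step transitions, a recurrent network weighs a patient’s entire visit history when ranking likely next diagnoses and order sets.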

4.     The Ethical Considerations of AI

The potential upside of using AI technology to improve patient outcomes is staggering. These systems can provide the healthcare provider, in seconds, with information that traditionally would have taken days to analyze, required multiple appointments, and delayed care both for the current patient and for everyone else trying to be seen.[47] This technology has the “potential to redesign healthcare completely.”[48] However, fundamental legal and ethical questions about how AI will be used must be answered with regard to patient privacy, disclosure of the AI’s role in healthcare decisions, and reliance on its diagnostic reports. These questions have already become top of mind for many in bioethics, and the rapid creation and implementation of AI in the healthcare setting has only compounded their importance. Those in bioethics are, in turn, looking to the law and the courts to guide their decision making. After all, time has shown that while advances in technology affect the law, the law, in turn, affects the innovation of technology.[49]

A.  Informed Consent of Healthcare Decisions

For even the simplest AI system to function, an enormous amount of data is needed to train the neural network on what is or is not a correct outcome.[50] In patient care, interdisciplinary planning, and diagnosis of specific disease processes, the logical source of this data would be actual patient records from the particular clinic, specialty care unit, or hospital system. The informed consent of the thousands of patients whose records would be used suddenly becomes a paramount concern. Informed consent is defined as having “full knowledge of the risks and concerns.”[51] While it is true that, going forward, informed consent could be obtained at new-patient intake, the issue is the use of previous patient data stored in a proprietary system or an EHR. The use of this data is highly regulated under the Health Insurance Portability and Accountability Act (HIPAA).[52] HIPAA protects this sensitive data from being distributed to or misused by individuals who do not have a medical need to know about that specific patient’s care.[53] Hospitals would essentially be opening patient charts to these AI systems to use as they see fit. Without the patient’s informed consent to allow the data to be mined, the hospital and the AI system would run afoul of HIPAA.[54] Thus, without the informed consent of the data pool, sensitive patient information has been unlawfully used.[55]

B.  Disclosure of the Use of AI

In the same vein, the patient bill of rights, adopted in the Patient Protection and Affordable Care Act, requires that the patient be fully informed of the decisions being made regarding their healthcare.[56] Being fully informed would include knowing the available healthcare options relevant to their current needs.[57] If the healthcare provider could use AI to verify a diagnosis, similar to a second opinion, the patient, in theory, should be told of that option.[58] Conversely, this may also include situations where the healthcare provider chose to use their own judgment over the diagnosis provided by the AI. In turn, if the healthcare provider consulted the AI in making their diagnosis or plan of care, the patient should be informed of this use, regardless of the AI’s level of accuracy.[59] These patients should then have the same opportunity and ability to choose treatment paths as a patient faced with a choice between laparoscopic and traditional surgery, for example.[60] Denying the patient the ability to understand the AI being used in their care would violate the patient bill of rights by reducing their ability to choose their care.[61]

However, healthcare literacy now also requires healthcare professionals to explain one of the most complicated phenomena in emerging technology: AI. Physicians and healthcare providers are trained to explain procedures to patients at a third-grade level to increase the likelihood of their understanding. The question remains whether advanced AI can adequately be explained at such a level. If not, has informed consent truly been obtained? Unfortunately, this concern is compounded by the broader struggle to explain healthcare adequately at all, a topic that requires far more research than is within the scope of this paper.

C.  Reliance on the AI by Healthcare Professionals

Perhaps most important to the discussion of AI in healthcare is the question of what happens when the AI is wrong. The idea of an entire course of treatment being swift and efficient is noble, but what if the AI makes a mistake and the patient is subjected to unnecessary tests that delay the care they actually need? This is to say nothing of AI systems used to diagnose cancer, where an incorrect diagnosis could prove fatal or cost millions of dollars in unnecessary treatment. A correct diagnosis therefore now includes the additional step of the physician or healthcare provider deciding whether to trust the AI diagnosis, or whether to use the AI at all.

Many of these issues have not been substantially raised because of the rapid design-to-implementation cycle that has occurred as these systems come to market.[62] Fundamentally, computer scientists and programmers are typically not lawyers or healthcare providers, nor are they bound by a strict ethical code.[63] However, the ethical issues and practical limitations of even the best AI systems are blatantly obvious to their creators from the outset:

One limitation of Doctor AI is that, in medical practice, incorrect predictions can sometimes be more important than correct predictions as they can degrade patient health. Also, although Doctor AI has shown that it can mimic physicians’ average behavior, it would be more useful to learn to perform better than average. We set as our future work to address these issues so that Doctor AI can provide practical help to physicians in the future.[64]

Doctor AI is arguably one of the most advanced medical AGI systems in existence, and it has been shown to be 80% as effective as a physician or healthcare provider with the ability and training to diagnose and create data sets.[65] At only 80% effectiveness, then, it becomes apparent that even the best AI has understood limitations at this point. It is entirely reasonable to conclude that a healthcare provider who substituted an AI system’s reasoning for their own, whether because of time pressure, staffing issues, or pressure from a hospital hoping to replace human staffers with this technology, would be knowingly risking the patient’s positive health outcome.

5.     Proposed Solutions

The question then becomes: can we rely on AI to make healthcare decisions at all? The answer is no, or at least not yet as of this writing. The current state of AI has put healthcare providers in a lose/lose situation. To explain, I will break my argument into two main parts: first, the inability, given current computational power, to regulate the decision-making algorithm, and second, the difficulty of identifying the tortfeasor when errors are made.

A.  Inability to Regulate the Algorithm

AI functions by raw data being introduced to the system and then passed through a neural network that, based on how the program is “taught,” produces an outcome a human would judge to be intelligent.[66] For ANI systems, the success, or accuracy, of the system depends on the quality of the data and the quality of the teaching.[67] For advanced AGI systems, the issues are further complicated by the system’s ability to learn on its own, extending beyond the programmed data and teaching provided by the creator or administrator of the system.[68] Once the code makes adaptations beyond its initial learning, the algorithm can be shaped by a host of external influences that may cause discrepancies or inaccuracies entirely “out of the hands” of the developers.[69] Simply put, once the AGI surpasses its training, we are unable to regulate the algorithm or pinpoint the learning that caused the AI to respond the way it does. Current computing power cannot audit or catalog individual AGI decisions because there is simply too much data to synthesize to determine how a single decision was made.[70] Even a snapshot of the decisions made by the system at a given time, like a black box in an airplane, could be impossible to interpret meaningfully outside the entire system.[71]
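
The opacity described above can be felt even at toy scale. The sketch below is hypothetical and unrelated to any medical system: it trains a single artificial “neuron” by gradient descent on a trivial logic task. Even here, the only “explanation” the trained model can offer for a decision is a handful of numeric weights; a production network multiplies that opacity across millions of such numbers.

```python
# Toy illustration of the "black box" problem: a single artificial
# neuron learns a trivial task, and its learned "reasoning" is
# nothing more than three opaque numbers.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Training data: the neuron should learn the logical OR of two inputs.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

w = [0.0, 0.0]  # weights, deterministic start
b = 0.0         # bias
lr = 0.5        # learning rate

for _ in range(5000):  # gradient descent on the logistic loss
    for (x1, x2), y in data:
        p = sigmoid(w[0] * x1 + w[1] * x2 + b)
        err = p - y
        w[0] -= lr * err * x1
        w[1] -= lr * err * x2
        b -= lr * err

def predict(x1, x2):
    return int(sigmoid(w[0] * x1 + w[1] * x2 + b) >= 0.5)

# The trained model is perfectly accurate on its task...
assert [predict(*x) for x, _ in data] == [y for _, y in data]

# ...but its full "explanation" of any decision is just these numbers.
print(w, b)
```

Printing the learned weights yields bare numbers, not reasons, which is precisely why auditing a full-scale system’s individual decisions is so difficult.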

One of the best examples of this is Microsoft’s AI chatbot, Tay.[72] Tay was an AGI system that attempted to interact with users on Twitter at a level indistinguishable from other human users.[73] It analyzed traffic on Twitter and learned how users of the platform behave and what their typical responses to questioning might be.[74] Unfortunately, the project had to be scrapped within sixteen hours of launch because Tay’s output became “racist, inflammatory, and political,” to the point of tweeting “Hitler was right” and “9/11 was an inside job.”[75] The outside influences encountered beyond the initial data set led to arguably corrupted outputs.[76]

Another example is the Correctional Offender Management Profiling for Alternative Sanctions, or COMPAS.[77] COMPAS was an AGI system that became prone to identifying African American defendants as almost twice as likely to re-offend, offering tainted recommendations about those defendants.[78] The system used probability models built on zip codes, social media activity, and income levels to predict the likely future behavior of a subset of the population, without taking into account factors such as whether the offense was a first-time offense or the underlying circumstances surrounding the crime.[79]

In 2018, the House of Representatives warned about the promises and dangers of artificial intelligence, urging researchers to “allow[] misuse-related considerations to influence research priorities and norms” and to engage with “relevant actors when harmful applications are foreseeable.”[80] These concerns are not unfounded. In a separate but related context, there is growing concern that AI systems with access to EHR data could be abused or altered by individuals or companies.[81] Such modified AI systems could bypass the doctor-patient privilege and provide potential employers or government agencies with privileged information about individuals that they would otherwise not have access to.[82]

With that being said, we are again faced with the question of whether we can rely on the AI at all. The very ability of the system to make a determination could also be its detriment. Healthcare providers, lawyers, judges, and a host of other professionals would essentially be placing their licenses on the line by wholesale adopting the decision provided by the AI.

B.  AI and the Law

Under the current state of the law, when mistakes are made in healthcare, the remedy most often lies in tort. Tort law provides financial compensation for individuals who are “harmed by the negligent conduct of others.”[83] The “responsible party” is the one “responsible for causing the injury.”[84] The problem lies in determining the responsible party where AI is involved.[85] As mentioned previously, current computing power cannot decipher the chain of learning and decisions made by the AI after the system surpasses its initial learning.[86] This would likely leave an injured party unable to prove causation, a result repugnant to notions of fairness and responsibility under the law.[87]

The difficulty of determining the responsible party where AI is involved has led to creative legal solutions meant to close this liability loophole.[88] One suggestion is to assign such AI systems “legal personhood.”[89] Legal personhood would allow the AI itself to bear liability when it harms someone.[90] However, an algorithm cannot be deterred by the threat of tort liability or criminal prosecution the way a human can.[91] Assigning personhood to the AI system would therefore create an opportunity to assign blame with little or no means of discouraging the algorithm from performing the same task again, while allowing the developer to distance themselves from the system’s decisions because the system is a separate “person.”[92] This is to say nothing of the fact that assigning blame to an algorithm outside of an agency relationship would in no way guarantee recovery.

Further, there have been proposals that the law be expanded to encompass the use of AI under a “group responsibility” theory, since the system would be used in concert with, for example, the developer, the healthcare provider, and the hospital or clinic.[93] However, considering that once an AGI system surpasses its initial learning its decisions are essentially its own, it would be unreasonable to hold the developer liable.[94] The decisions made by the system are no longer based solely on the initial data set, and the AI is outside the developer’s control.[95] A decision to hold liable everyone with a stake in the AI system’s existence would thwart AI development in an instant.[96]

By the same token, the hospital or physician that supplied the data essentially did only that. In patient care, every individual is an endless combination of variables. The data set is nonetheless one of the most critical pieces of a successful AI.[97] If the data set were tainted or corrupt, it is foreseeable that the data-set provider could bear individual liability. But if the issue arose beyond the scope of the initial data, it would be unreasonable to hold that individual or entity liable for the machine’s subsequent learning. Just as with the developers, the system has surpassed its initial training, and outside influences are shaping it.[98]

In this scenario, the healthcare provider relying on the AI’s diagnosis should be assigned blame as the tortfeasor. The AI system is a tool to be used only in combination with the healthcare provider’s expertise.[99] The healthcare provider who substitutes the AI system’s decision for his or her own experience and judgment should be held responsible for the outcome. The “last clear chance” doctrine gives the healthcare provider the final opportunity to catch the mistake and mitigate the damage on the journey to positive patient outcomes.[100] In addition, the American Medical Association (AMA) requires that the attending healthcare provider who has accepted care of a patient remain responsible for that patient until handoff to another provider occurs.[101] This responsibility includes the patient care decisions, treatment plans, and care plans associated with the individual patient during their time under that provider’s care.[102] It also includes the decision to rely, or not to rely, on AI in their decision making.

Therefore, in the current state of technology, using AI to diagnose patients is a lose/lose situation for healthcare providers. If the AI is wrong, they are liable in tort; if the AI is correct but the physician did not use its diagnosis, the healthcare provider has denied the patient proper care and is liable as well.[103]

Proponents of AI would argue that there is no difference between this and a human physician or healthcare provider looking at the same data and making a similar mistake. This is true, of course. However, the physician cannot see all of the decisions the AI made to arrive at its proposed diagnosis, yet the physician must make the judgment call as to the validity of the result. Until science is able to analyze the AI’s algorithm, the system cannot be trusted to its fullest capacity. The burden of perfection is once again placed on physicians and healthcare providers, whose duty is to care for their patients with the highest level of ethics and technological understanding.

Furthermore, assigning blame to the healthcare provider will limit the AI to a tool used to double-check analysis, as healthcare providers will be unwilling to stake their licenses entirely on the system. It will be up to healthcare providers and hospital staff to resist pressure from hospital administration to replace the skilled team with automation. That resistance will be easier said than done in the current healthcare climate, where there is downward pressure to bill more patients while simultaneously reducing costs. However, by further defining the role AI plays in healthcare, developers and entrepreneurs can focus on specific tools for specific needs and expand the offerings of ANI as AGI takes shape. This change in conceptual thinking will initially limit the scope of AI experiments in healthcare, but it will ensure patients get the best outcomes until the technology can be seen as a helping hand rather than a staff replacement.

C.  The AI Attorney’s Role

What form AI takes moving forward is unknown, but it has the potential to change how we interact with the world from now on.[104] Rule 1.1, comment 8, of the Model Rules of Professional Conduct, a version of which most states have adopted, states:

To maintain the requisite knowledge and skill, a lawyer should keep abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology, engage in continuing study and education and comply with all continuing legal education requirements to which the lawyer is subject.[105]

Attorneys have a duty to understand the technology they use in their practice.[106] They have a responsibility to protect their clients’ interests and advise them of the risks and benefits of their actions; those clients include hospitals and the companies developing AI.[107] These attorneys will play an integral role in explaining “prosocial” and ethical considerations when advising AI developers as the technology grows and expands within regulated markets like healthcare.[108]

The actions taken by these attorneys will also shape legislation, informing the ability to propose and enact new laws that govern autonomous creations like AGI on their path toward an ASI system.[109] Legislators will need attorneys who can counsel them both on what the law needs and on the potential of what AI is and could be. The legislature will also need educated lawyers who can explain how small intricacies in the law can further clarify the definition of AI and how these systems fit into new laws. Until then, attorneys should advise their healthcare clients that a symbiosis of experts and analytics may aid in improved health outcomes, but that sole reliance on the AI should be avoided in order to reduce liability.[110]

Conclusion

Artificial intelligence is a phenomenon with the potential to alter how we interact with data, learn, and make decisions. In healthcare, these decisions will give humanity access to data synthesis that could lead to the next technological revolution. However, in designing the AI we know today, it could be said that “scientists were so preoccupied with whether or not they could that they didn’t stop to think if they should.”[111] For healthcare specifically, the inability to verify the accuracy of current AI could have disastrous effects on patients and patient care if too much reliance is placed on the wrong system. Current AI systems have undergone only limited testing, yet their cost-saving potential is pushing them into hospitals and clinics at an alarming rate. Healthcare providers must be conscious of their liability and of the limitations of current AI, just as they are with staffing considerations, internal policies, or the changing healthcare climate in general. Still, using AI to double-check the healthcare provider’s years of experience, or using the AI’s determination as a starting point for research, could in and of itself increase the speed and accuracy of healthcare delivery without total reliance on the system. This would allow the healthcare provider to mitigate their risk while giving their patients the highest levels of care available.

At the same time, the courts will be faced with tough decisions in determining the liability surrounding these AI systems. Litigators who can describe the inner workings of AI and its effect on society will be invaluable. It will be up to these well-informed attorneys to understand the needs of their clients and to challenge the developers of AI in such a way as to inject the necessary controls and auditing capabilities into new AI systems coming to market. Moreover, it will be up to these AI attorneys to seek remedies for those who have been harmed, using their knowledge of AI and of the specific system at issue to press the courts to assign blame. These individuals will have a tremendous impact on the way this technology shapes future society. That is an enormous blessing and an enormous burden in the same breath.

In conclusion, AI is here to stay. It should be embraced for its vast potential upside but kept at arm’s length as it evolves into the powerfully reliable technological resource it is meant to be. The AI of today is only a glimpse of what is to come. In fact, leading minds believe the singularity is a mere 5 to 30 years away from the time I am writing this.[112] If so, healthcare will in all likelihood be AI’s crown jewel or its guillotine. When an individual’s life hangs in the balance, there is no greater risk to that individual, and the financial ramifications of these decisions are immense. The developers of advanced AI must understand these concerns and hold themselves to a higher standard in their design. They must consider prosocial elements and implement them broadly where possible as AI evolves. And the law must adapt to the changes that wide-scale use of AI presents. We must support those carrying the burden of this technology, like healthcare providers, and give them the guidance they deserve.

 

Ryan Dobbs, 3L

Ryan is the founding president of TALIS and editor-in-chief of the TALIS blog. Contact Ryan at RyanDobbs@ou.edu, at RyanDobbs.Lawyer, or on Twitter @RyanDobbs.

References

[1] Dr. Ben Goertzel, The Joe Rogan Experience (Dec. 6, 2018) (downloaded using iTunes).

[2] John Niman, A Brief Overview of Artificial Intelligence Application and Policy, Nev. Law. 8 (2018).

[3] Research Priorities for Robust and Beneficial Artificial Intelligence: An Open Letter, Artificial Intelligence 3 (Dec. 31, 2015), https://www.aaai.org/ojs/index.php/AImagazine/issue/view/212.

[4] id.

[5] id.

[6] David T. Laton, Manhattan_project.exe: A Nuclear Option for the Digital Age, 25 Cath. U. J.L. & Tech. 94 (2016).

[7] Goertzel, supra note 1.

[8] Research Priorities, supra note 3, at 3.

[9] Alex Woodie, How AI Fares in Gartner’s Latest Hype Cycle, Datanami (October 3, 2018, 6:46 PM), https://www.datanami.com/2017/08/29/AI-fares-gartners-latest-hype-cycle/.

[10] id.

[11] id.

[12] Laton, supra note 6, at 94.

[13] id.

[14] Ryan Dowell, Fundamental Protections for Non-Biological Intelligences or: How We Learn to Stop Worrying and Love Our Robot Brethren, 19 Minn. J.L. Sci. & Tech. 305, 308 (2018).

[15] Research Priorities, supra note 3, at 3.

[16] Dowell, supra note 14, at 308.

[17] Tang Ziyi, AI Mistakes Bus-Side Ad for Famous CEO, Charges Her with Jaywalking, CX Live (Nov. 22, 2018), https://www.caixinglobal.com/2018-11-22/ai-mistakes-bus-side-ad-for-famous-ceo-charges-her-with-jaywalkingdo-101350772.html.

[18] Niman, supra note 2, at 8.

[19] Aaron Krumins, Artificial Intelligence is Here, and Impala is its Name, Extreme Tech (Aug. 21, 2018, at 1:01 PM), https://www.extremetech.com/extreme/275768-artificial-general-intelligence-is-here-and-impala-is-its-name.

[20] id.

[21] Bridget Watson, A Mind of Its Own: Direct Infringement by Users of Artificial Intelligence Systems, 58 IDEA: J. Franklin Pierce for Intell. Prop. 65, 73 (2017).

[22] Krumins, supra note 19.

[23] Laton, supra note 6, at 94.

[24] Niman, supra note 2, at 8.

[25] id.

[26] Krumins, supra note 19.

[27] Goertzel, supra note 1.

[28] id.

[29] id.

[30] id.

[31] Partnership on AI, https://www.partnershiponAI.org/ (last visited, Nov. 18, 2018).

[32] Jordan Bigda, The Legal Profession: From Humans to Robots, 18 J. High Tech. L. 396, 398–99 (2018).

[33] Fei Jiang et al., Artificial Intelligence in Healthcare: Past, Present and Future, 2 Stroke and Vascular Neurology (Nov. 21, 2018, 5:47 PM), https://svn.bmj.com/content/2/4/230.

[34] id.

[35] 10 Ways Technology Is Changing Healthcare, The Medical Futurist, https://medicalfuturist.com/ten-ways-technology-changing-healthcare (last visited Nov. 21, 2018 at 5:47 PM).

[36] Watson, supra note 21, at 73.

[37] id.

[38] id.

[39] Harry Burke et al., Artificial Neural Networks Improve the Accuracy of Cancer Survival Prediction, Cancer (Nov. 21, 2018 at 5:59 PM), https://onlinelibrary.wiley.com/doi/full/10.1002/%28SICI%291097-0142%2819970215%2979%3A4%3C857%3A%3AAID-CNCR24%3E3.0.CO%3B2-Y.

[40] Dowell, supra note 14, at 308.

[41] Burke, supra note 39.

[42] id.

[43] Edward Choi et al., Doctor AI: Predicting Clinical Events via Recurrent Neural Networks, 56 JMLR 1, (2016), http://proceedings.mlr.press/v56/Choi16.pdf.

[44] id.

[45] id.

[46] id.

[47] Research Priorities, supra note 3, at 3.

[48] 10 Ways, supra note 35.

[49] Aryeh Friedman, Law and the Innovative Process: Preliminary Reflections, 1986 Colum. Bus. L. Rev. 1 (1986).

[50] Research Priorities, supra note 3, at 3.

[51] Black’s Law Dictionary (10th ed. 2014).

[52] Health Insurance Portability and Accountability Act of 1996 (HIPAA) Pub.L. 104–191, § 221, 110 Stat. 1936, 2009 (1996).

[53] id.

[54] id.

[55] id.

[56] Patient Protection and Affordable Care Act, Pub.L. 111–148, § 2717, 124 Stat. 119 (2010).

[57] § 221, 110 Stat. at 2009.

[58] id.

[59] id.

[60] id.

[61] § 2717, 124 Stat. at 119.

[62] Bigda, supra note 32, at 399.

[63] id.

[64] Choi, supra note 43.

[65] id.

[66] Laton, supra note 6.

[67] id.

[68] Weston Kowert, The Foreseeability of Human-Artificial Intelligence Interactions, 96 Tex. L. Rev. 181, 184 (2017).

[69] Kowert, supra note 68, at 184.

[70] Krumins, supra note 19.

[71] id.

[72] Tay, Microsoft’s AI Chatbot, Gets a Crash Course in Racism from Twitter, The Guardian (Mar. 24, 2016), https://www.theguardian.com/technology/2016/mar/24/tay-microsofts-AI-chatbot-gets-a-crash-course-in-racism-from-twitter.

[73] id.

[74] id.

[75] id.

[76] id.

[77] Rise of the Racist Robots – How AI is Learning all our Worst Impulses, The Guardian (Aug. 8, 2017), https://www.theguardian.com/inequality/2017/aug/08/rise-of-the-racist-robots-how-AI-is-learning-all-our-worst-impulses.

[78] id.

[79] id.

[80] 2018 CQDPRPT 0144 (2018) (House panel delving into promises, dangers of artificial intelligence).

[81] Goertzel, supra note 1.

[82] id.

[83] Kowert, supra note 68, at 184.

[84] id.

[85] id.

[86] id.

[87] Mark Chinen, The Co-Evolution of Autonomous Machines and Legal Responsibility, 20 Va. J.L. & Tech. 338 (2016).

[88] Kowert, supra note 68, at 184.

[89] Chinen, supra note 87, at 338.

[90] id.

[91] Chinen, supra note 87, at 338.

[92] id.

[93] id.

[94] id.

[95] Niman, supra note 2, at 8.

[96] Chinen, supra note 87, at 338.

[97] Jiang, supra note 33.

[98] Kowert, supra note 68, at 184.

[99]  Dowell, supra note 14, at 308.

[100] Restatement (Second) of Torts § 3 (2000).

[101] Katherine Blondon et al., Physician Handoffs: Opportunities and Limitations for Supportive Technologies, AMIA Ann’l Symp. Proc. (Nov. 5, 2015), https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4765668/.

[102] id.

[103] Kowert, supra note 68, at 184.

[104] Roy D. Simon, Artificial Intelligence, Real Ethics, N.Y. St. B.J., March/April 34, 37 (2018).

[105] Model Rules of Prof’l Conduct r. 1.1, cmt. 8 (Am.Bar Ass’n 2013).

[106] Model Rules of Prof’l Conduct r. 1.1, cmt. 8 (Am.Bar Ass’n 2013).

[107] Model Rules of Prof’l Conduct r. 2.1 (Am.Bar Ass’n 2013).

[108] Chinen, supra note 87, at 338.

[109] Laton, supra note 6, at 94.

[110] Dowell, supra note 14, at 308.

[111] See Jurassic Park (Universal Pictures 1993).

[112] Goertzel, supra note 1.

When I began law school, I had no idea what I wanted to do with a JD. All I knew was that I was interested in the law and that it couldn’t hurt to have another degree. I walked onto campus committed to chasing whatever grabbed my attention. I expected Constitutional Law, Criminal Law, Torts, or some other course to pull me in the direction I would go. But my pull came unexpectedly.

My classes kept me preoccupied with legal doctrine and more reading than I care to recall. I was intrigued by my classwork, but I could never single out one course as my primary interest. Yet I kept running home and talking to my wife about things I wasn’t required to learn. The source was the Digital Initiative Lunch-and-Learns I had been attending.

To me, it was a no-brainer to attend a free lunch and pick up some useful skills, and at the beginning that was just how I saw it: no need to bring food to school or pay for a meal, and maybe I would learn how to save a few minutes the next time I had to write a brief. By the end of my first semester, those lunch-and-learns had turned out to be far more significant.

I began to research emerging technologies in law practice. I started following dozens of legal innovation accounts on Twitter. I went to every possible Lunch-and-Learn that I could, even if I had brought my lunch that day. I could not get enough information about how technology and innovation could change the way that lawyers functioned.

Writing a brief? Here’s a tool to check your grammar and Bluebook citations. Researching cases to argue in court? Here’s an artificial intelligence tool that shows you the most influential cases on the issue. Not very organized? Here’s a tool to help you map out your days, projects, or notetaking.

To me, it just made sense to use these tools. After all, I am a first-generation law student with very little idea of how the law is “supposed” to be practiced. In short, you could say I was impressionable.

So, I took advantage of the programs that OU Law provided for me. As I mentioned before, I went to countless Digital Initiative lunch meetings. I became LTC4 certified thanks to the law school’s collaboration with the Legal Technologies Core Competencies Certification Coalition. I attended the 2019 ABA TECHSHOW in Chicago (a highlight of my first year) where I got to meet influential legal bloggers like Bob Ambrogi, Ivy Grey, and Kevin O’Keefe, along with the Lawyerist team. And now, I work on building our very own student-led Technology and Legal Innovation Society here at OU Law.

I am blown away just thinking about all of the opportunities I have been able to seize in just one year. But the benefits of technology and innovation didn’t end with the spring semester. I was lucky enough to land a job that allowed me to practice my new skills and even work remotely. Then, at the end of the summer, my employer offered to keep me on to help manage online accounts, his website, and other tech-related tasks. All of these opportunities are a direct result of my pursuit of innovation.

I have barely started my 2L year, and I have already practiced using tools that save time and streamline research, document drafting, document management, and the organization of firm information. That practical application, along with using technology to improve my study habits, has been an invaluable experience.

My experience with legal technology and innovation has changed my approach to my work and study in several ways: I am more efficient. I am more organized. I like to think that I also keep my information more secure. I can collaborate with coworkers and peers from anywhere. In short, I can do almost any task better and faster than I could without these tools.

I still don’t know what kind of law I will practice in the future. I’m not even sure that I will actively practice law, and if I do, I likely won’t practice for very long. What I do know is this: I will use technology and innovation to do my job better and more efficiently. I will not use my own incompetence to justify billing clients for hours of work I could save (setting aside the fact that doing so would be unethical and, taken to an extreme, could get a lawyer disbarred). I will make every legal process as transparent and simple as possible for future clients, because hiring a lawyer is already intimidating enough.

I will drive the legal profession to be better, serve more people, and positively impact lives. And I will be able to do all of this thanks to legal technology and innovation.

Giving law students the opportunity to advocate for clients is an essential part of the law school experience. When those clients are some of their state’s most vulnerable residents, the experience can be even more rewarding.

Third-year law students at the University of Nebraska College of Law have the opportunity to serve as guardians ad litem (GAL) for children in Nebraska’s child welfare system. The Children’s Justice Clinic (CJC) is a partnership between the College of Law and Nebraska’s Center on Children, Families, and the Law.

Since the year-long clinic began in September 2017, students have been appointed on 41 cases, representing 88 Nebraska children.

“Advocating for very young children presents a unique challenge that requires a special skill set,” said Judge Roger Heideman, presiding juvenile court judge for the Separate Juvenile Court of Lancaster County, Nebraska.

The CJC is a unique opportunity for law students that includes:

  • Guardian ad litem foundations – an intensive classroom component that students take prior to representing clients. Students learn the foundations of child representation, including courtroom skills, federal and state child welfare laws, the child welfare process, child development, and trauma in young children.
  • Weekly seminars – each seminar is developed to enhance and complement the knowledge and skills that students learned in the foundations course. Topics include such areas as drug and substance abuse, domestic violence, and human trafficking.
  • Case consultations – the clinic director and the multidisciplinary team of psychologists, social workers, and child welfare practitioners from the Center on Children, Families, and the Law meet weekly with students to advise on cases.
  • Reflective consultation – a licensed mental health practitioner and the clinic director help equip students for handling the emotional challenges of their cases.

“Being part of the CJC has expanded my law school education in a way I could have never imagined,” said Rachel Kunz, a third-year law student and current CJC student attorney. “I feel confident going into the courtroom or family team meetings to advocate for my clients and make their voices heard throughout the process. Being a part of this clinic makes me excited for my future as a lawyer.”

Chicago-Kent’s 1L Your Way Program allows students who know the area of law in which they would like to specialize to select a more flexible track. This optional program allows full-time J.D. students to defer a required first-year course to the second year in favor of taking either an approved, upper-division elective course or a unique first-year clinical course.

1L Your Way enhances students’ ability to exercise agency over their own education—which, literature suggests, has substantial pedagogical benefits—while also ensuring that all students are exposed to the necessary building blocks of the law.

Elective Course Option. For example, students interested in intellectual property may take Patent Law, which would allow them to sit for the Patent Bar during their second year. Similarly, students who know they want to specialize in labor and employment law may take Employment Relationships, and students who plan to practice in a corporate law environment may take Business Organizations. The law school hopes this flexibility enhances students’ marketability for summer employment.

Legal Clinic Option. Instead of taking an elective, students may choose to participate in one of the law school’s clinics to gain a better sense of the skills demanded in practice areas of interest to the student. Clinic practice areas at Chicago-Kent include criminal defense, employment/civil litigation, entrepreneurial law, family law, health and disability law, and tax and probate law.

Students also can opt into a rotation for their 1L clinic option. Loosely based on the medical school model, this opportunity allows first-year students to participate alongside upper-level students as members of more than one clinic.

As a result of this new program, in spring 2019, Chicago-Kent College of Law became one of the few law schools to allow first-year students to fully participate in the law school’s clinics. The law school anticipates that approximately 50 Chicago-Kent 1L students will be eligible to participate in the First-Year Clinic during the spring semester each year.

I like games. Pinball. Arkanoid (kinda like Pong, but in space and more colors and the cursor can turn into different things), hidden object games, trivia games. Also, I like the law. A lot.

There may be those who feel that the law and games really shouldn’t mix, outside of Phoenix Wright: Ace Attorney. But I had an idea, I turned that idea into a conference presentation, and I gave the presentation at CALICon ’19. There’s video, a handout, and a description – it was a nice crowd. And a very good conference.

Also, if you like games and you like law (especially if you’re a law student), here are a couple of things you might like to check out:


Aloha readers,

It has been almost two years since my last entry. I have been completely overwhelmed balancing my academic journey, pursuing a Master of Public Health (MPH) and Juris Doctor at the University of Hawaii at Manoa, with developing my professional identity in Hawaii’s legal community.

I recently finished taking the February 2019 Hawaii Bar Examination and am a few weeks away from finding out the results.

Let me update you all on the successes of ChadBot (my aptly named chatbot). To refresh your memory:

“My second project includes a chatbot that responds to people who access the main legal aid website. The purpose of the chatbot is to communicate with end users who may not know how to navigate the website and find the legal solutions they are seeking. Here at Legal Aid, there are many people who need help but are unable to get it because they are ineligible, do not have time to come in, or face other barriers to seeking help. The current site that end users would normally go to is (http://www.lawhelp.org/hi). The intent of this chatbot (or should I say Chad-bot) is to create a streamlined process that gives end users the resources they need right at the starting page.

To create the dialogue and sample chatbot, I used Chatfuel. This program is very easy to use and user friendly. More news on this chatbot later.”

Since its inception in July 2017, Chadbot has evolved: it is now built on Dialogflow and has helped over 1,000 users access legal information through brief queries. Chadbot can be found on the Legal Aid Society of Hawaii website, near the bottom of the page.

Dialogflow is powered by Google’s machine learning expertise and lets developers build engaging voice- and text-based conversational interfaces.
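To give a sense of the “intents” idea that platforms like Dialogflow are built around, here is a toy sketch. This is not the Dialogflow API and not Chadbot’s actual code; it is a minimal keyword matcher, with invented intents and responses, showing how a user’s question can be routed to a canned legal-aid answer or a fallback prompt.

```python
# Toy illustration of the "intent" concept behind chatbots like Chadbot.
# NOT the Dialogflow API -- just a minimal keyword matcher with invented
# example intents, showing how a query can be routed to a response.

INTENTS = {
    "eviction": {
        "keywords": {"evict", "eviction", "landlord", "rent"},
        "response": "Here are resources on tenant rights and eviction help.",
    },
    "divorce": {
        "keywords": {"divorce", "custody", "separation"},
        "response": "Here are resources on family law and divorce.",
    },
}

FALLBACK = "Sorry, I didn't catch that. Could you rephrase your question?"

def match_intent(query: str) -> str:
    """Return the response of the intent sharing the most keywords with the query."""
    words = set(query.lower().split())
    best_name, best_score = None, 0
    for name, intent in INTENTS.items():
        score = len(words & intent["keywords"])
        if score > best_score:
            best_name, best_score = name, score
    return INTENTS[best_name]["response"] if best_name else FALLBACK
```

A query like “my landlord wants to evict me” matches the eviction intent, while an unrecognized question falls through to the fallback. Real platforms replace the keyword overlap with a trained language model, which is why adding well-chosen training phrases to each intent matters so much.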

I am looking forward to updating Chadbot this coming year as I have more time to create more “intents” and dialogue. To my readers who made it this far, I’d love to hear about your successes with AI and chatbots, and any suggestions you have to improve Chadbot.

Before I leave you for the day, here is the Hawaiian word for the day: kaulike.

Kaulike means “justice,” as in chief justice, luna kānāwai kiʻekiʻe ā kaulike; and pono kaulike, the quality of being impartial or fair. Additionally, kaulike means “balance,” especially in science, like a balanced chemical equation. To me, the word signifies how science and technology are inherently intertwined with the law and the pursuit of equal access to justice.

Until next time!

Mahalo nui loa,

Chad Au

I am a big fan of podcasts. I am also a law student. Naturally, this means that I listen to podcasts about practicing law quite a bit. One of my favorite podcasts is The Lawyerist.

For anyone who is not familiar with Lawyerist, it is a platform that provides insights and resources to help law firms practice law better. While all of the episodes I have listened to have struck me in unique ways, the episode with Fastcase CEO Ed Walters was particularly intriguing. Its focus was on leveraging data to make better lawyers and law firms, and the discussion was driven by the publication of the book “Data-Driven Law.”

For some, the idea of using data in a law firm may sound foreign. However, as this episode pointed out, all firms already rely on data. Lawyers rely on knowing judges and cases to anticipate how likely they are to win. Some lawyers rely on experience to estimate how much certain litigation might cost. Practicing law is already driven by data. But what if that data were more accurate? What if it allowed you to standardize the likelihood of success? What if it made clients more confident in your services? Used correctly, data could drastically improve your practice in several ways.

There were three uses for data that struck me as especially powerful while listening to this episode:

Increasing Firm Efficiency

Law firms need to be nurtured and grown, which means there is more to running a firm than just practicing law. You have to bill clients, bring in new clients, cover overhead costs, and pay employees. However you handle them, these tasks take time away from handling cases and actually practicing law. On top of that, the firm is likely spending money on advertising campaigns and on tools to improve client experiences and increase referrals.

By analyzing data, a firm can identify where it is getting higher or lower returns on its investments. Online campaigns may cost more than they bring in through client intake. A partner may spend time chasing leads or managing payroll that limits the revenue they could be generating in billable hours. That partner could save time by using a more efficient tool, which, even if it costs money, could actually save money by freeing up time to handle cases.

Data can reveal how much money and time is being sunk into tasks that firms may not realize are costing them money, giving you ready access to an analysis of how well your firm is using its resources.
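To make that concrete, here is a minimal sketch of comparing client-intake channels by return on investment. Every figure below is invented for illustration; a real firm would pull these numbers from its billing and intake records.

```python
# Hypothetical sketch: comparing intake channels by return on investment.
# All dollar figures and client counts are invented for illustration.

def roi(cost: float, revenue: float) -> float:
    """Return on investment: net gain divided by cost."""
    return (revenue - cost) / cost

channels = [
    # (channel, monthly cost, clients signed, average revenue per client)
    ("Online ads", 3000, 4, 500),
    ("Referral program", 500, 6, 1200),
    ("Networking events", 1000, 2, 2000),
]

for name, cost, clients, rev_per_client in channels:
    revenue = clients * rev_per_client
    print(f"{name}: revenue ${revenue}, ROI {roi(cost, revenue):.0%}")
```

In this toy example, the ad campaign actually loses money while referrals return many times their cost; that is exactly the kind of pattern the data would surface, and the kind of decision it would inform.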

More Accurate Predictions

Law firms provide unique services compared to other markets. Lawyers rarely know, at a statistical level, how likely cases are to play out one way or another. A lawyer’s experience and personal analysis already serve as data that informs decisions. But when clients bring potentially life-changing issues to a lawyer, it is easy to see why they are afraid to move forward. And if a potential client is shopping for a lawyer, they may take the case to more than one firm before making a decision. It stands to reason that the client would choose the firm that can quote a concrete, higher percentage chance of success over the one that merely thinks it has “a good chance” of winning.

That ability shows that you take the time to consider a client’s particular facts and apply your previous success in similar cases. Clients will feel far more confident in a firm that calculates its confidence rather than making an educated guess. Many clients are bringing life-changing problems to lawyers, and it is easy to see why they would prefer a firm that puts in the effort to calculate probabilities.
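As a toy illustration of what “calculated confidence” might look like, here is a sketch that estimates a win rate from a firm’s own case history. The case records are invented, and real analytics products like those discussed in the episode use far richer features and far more data; this only shows the basic idea of filtering past outcomes by similarity.

```python
# Hypothetical sketch: estimating a success rate from a firm's own history.
# The case records below are invented for illustration.

past_cases = [
    # (practice area, judge, won?)
    ("employment", "Judge A", True),
    ("employment", "Judge A", False),
    ("employment", "Judge B", True),
    ("employment", "Judge A", True),
    ("contract", "Judge A", False),
]

def win_rate(area: str, judge: str) -> float:
    """Share of past cases won in this practice area before this judge."""
    similar = [won for a, j, won in past_cases if a == area and j == judge]
    return sum(similar) / len(similar) if similar else 0.0

print(f"Estimated chance of success: {win_rate('employment', 'Judge A'):.0%}")
```

Even a crude estimate like this turns “we have a good chance” into a number grounded in the firm’s actual record, which is the shift the episode describes.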

Potential to Unbundle Services

Not all clients need or require a lawyer to handle their entire case. Some clients simply cannot afford a lawyer. So what if you could offer some services at a low flat rate? This has been a trend in law since the introduction of LegalZoom and other platforms that offer assistance with certain documents. However, law firms in many states would be capable of offering these “unbundled” services with a more personal touch as well.

As some states move towards allowing lawyers to assist pro se litigants, unbundled services could become the basis for several firms. With data analytics, firms could calculate flat rate services. These services could satisfy the client’s need for value and the firm’s need for profit. The system would be able to track how much time it takes to draft certain documents. The firm could then determine precisely how much each document would cost to draft.
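A back-of-the-envelope version of that flat-rate calculation might look like the sketch below. All of the numbers (drafting times, hourly cost, margin) are invented for illustration; in practice they would come from the firm’s time-tracking and accounting data.

```python
# Hypothetical sketch: turning tracked drafting times into a flat rate.
# All numbers below are invented for illustration.

drafting_minutes = [95, 110, 80, 120, 100]  # past times to draft a simple document
hourly_cost = 150.0                         # firm's fully loaded cost per hour
margin = 0.30                               # desired profit margin

avg_hours = sum(drafting_minutes) / len(drafting_minutes) / 60
flat_rate = avg_hours * hourly_cost * (1 + margin)
print(f"Suggested flat rate: ${flat_rate:.2f}")
```

Because the rate is derived from measured time rather than a guess, it can be re-run as the tracked data grows, keeping the price fair to the client and profitable for the firm.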

Law firms need to focus on improving their operations, and to do that, they need to understand their current operations. By leveraging their data, firms can increase efficiency and lower costs. They could even offer affordable flat-rate services. To reach more clients and improve existing client relationships, data should serve as an important tool for any law firm.