Like any tool, GenAI has drawbacks and risks. By learning what those risks are and how they arise, you can avoid them or mitigate their impact.
Misinformation
You have probably heard someone tell you something but add "check my facts" or "don't take my word for it." What are we to make of what they said? Is it true or not? Typically, they are letting you know that they believe the information to be accurate; however, they are inviting you to verify it independently. Why might they do so? For one, they may be uncertain about their own knowledge and do not want to give you misinformation. They may have heard the information from a reliable source but also heard contradictory information from another reliable source, and thus are not sure themselves whether it is true. Or they may know that the information was true in the past but suspect it is now outdated because they have not checked recently. Alternatively, they may feel that if you independently verify something they believe to be true, the corroboration will make it more persuasive.
Unlike these examples, GenAI generally will not caveat its responses to account for the possibility of incorrect or outdated information. As discussed on the previous page, the technology is not acting as a fact-checker; rather, it operates like a sentence-completer and pattern-recognizer. How do you complete a nonsensical sentence? What happens if you identify patterns that are not really there? If a human did these things, we might colloquially describe the experience as a "hallucination." The GenAI literature has adopted this terminology to describe instances where the algorithm provides responses that are either nonsensical or inconsistent with observed reality. Although GenAI currently lacks the ability to fact-check its responses, the user is typically well-equipped to do so. Citations should be verified, and assertions should be substantiated with independent evidence.
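To make the "sentence-completer" idea concrete, here is a toy sketch in Python. It is not how any real GenAI platform works internally (real models are vastly more sophisticated); it simply extends a prompt by always choosing the word that most often followed the previous word in a tiny, invented training corpus:

    # Illustrative sketch only: a toy bigram "language model" that completes
    # sentences by picking the statistically most common next word. Real LLMs
    # are far more sophisticated, but the core idea is similar: the model
    # continues patterns; it does not consult a database of facts.
    from collections import Counter, defaultdict

    # Tiny invented training corpus (not real authority).
    corpus = (
        "the court held that the statute was unconstitutional . "
        "the court held that the contract was enforceable . "
        "the court held that the statute was preempted ."
    ).split()

    # Count how often each word follows each other word (bigram counts).
    next_word_counts = defaultdict(Counter)
    for current, following in zip(corpus, corpus[1:]):
        next_word_counts[current][following] += 1

    def complete(prompt_words, max_words=8):
        """Greedily extend the prompt with the most frequent next word."""
        words = list(prompt_words)
        for _ in range(max_words):
            candidates = next_word_counts.get(words[-1])
            if not candidates:
                break
            words.append(candidates.most_common(1)[0][0])
        return " ".join(words)

    print(complete(["the", "court"]))

Run as written, the sketch prints "the court held that the court held that the court": the strongest pattern in its training data, repeated fluently and without regard to sense. The failure mode behind hallucinated citations is analogous: confident continuation of familiar patterns rather than verification of facts.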
GenAI platforms tailored specifically to legal research are less likely than generalist GenAI to fabricate citations; however, they remain susceptible to misrepresenting the contents of real authorities. There are several reasons for this.
First, as law students are well aware, judicial opinions often recite the arguments of opposing parties so that the court can assess the merits of each position. An algorithm may mistake the court's recitation of a party's argument for the court's own decision, rather than recognizing it as a necessary step in explaining why the argument ultimately fails.
Second, law students quickly learn to critically evaluate decisions and to distinguish holding from dicta. GenAI is no doubt making great strides, but it currently lacks an attorney's capacity for this kind of expert analysis.
Third, constitutions and statutes may be written in antiquated or ambiguous language that requires context and references that may not be immediately apparent to an algorithm trained on more straightforward text, such as news articles, Wikipedia, or community forums. Specialty services can attempt to cater to legal sources; however, they remain built on LLMs trained primarily on data from the past few decades.
Ethical Concerns
GenAI should never be relied upon in lieu of a credible source. Rather, it should be treated as a valuable tool that, when used correctly, can greatly increase productivity. Ethical issues arise when attorneys misuse the platform or fail to acknowledge their use of GenAI where they have a duty to do so. For example, this Law360 tracker identifies federal court orders regarding the use of GenAI and the associated disclosure requirements. Similarly, publishers (including law reviews) have devised their own policies on submissions containing GenAI-generated text and on the disclosures required of authors.
Privacy
When people hear about privacy risks in the context of online service platforms, they tend to think of the generalized risk that the platform is selling their data or profiting from their uploads. When using GenAI in the legal context, there is another serious risk: that the attorney, by uploading facts specific to a client's situation, may run afoul of the duty of confidentiality (see, e.g., ABA Model Rule 1.6). The best way to avoid this problem is to carefully review the duty of confidentiality (and any state bar equivalent) and ensure that no privileged or otherwise confidential information is uploaded to the platform.
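As a purely illustrative sketch, the Python snippet below shows one way to screen a draft prompt for red-flag patterns before anything is uploaded. The patterns, the matter-number format, and the example text are all hypothetical placeholders; no automated filter can substitute for the attorney's own judgment under the duty of confidentiality:

    # Illustrative sketch only: a crude pre-upload screen. The patterns below
    # are hypothetical placeholders, and an empty result does NOT mean the
    # text is safe to upload; it only means no listed pattern matched.
    import re

    RED_FLAGS = [
        (r"\b\d{3}-\d{2}-\d{4}\b", "possible Social Security number"),
        (r"\bmatter\s+no\.?\s*\d+", "possible internal matter number"),
        (r"\b(privileged|attorney[- ]client)\b", "privilege marker"),
    ]

    def screen_prompt(text):
        """Return a list of warnings for red-flag patterns found in the text."""
        return [
            description
            for pattern, description in RED_FLAGS
            if re.search(pattern, text, flags=re.IGNORECASE)
        ]

    draft = "Summarize the facts of Matter No. 4821 for our client."
    for warning in screen_prompt(draft):
        print("Review before uploading:", warning)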
Bias & Discrimination
There is a danger in expecting GenAI to automatically make fair and neutral decisions. Consider, for example, the problem of "predicting" future crime. An early implementation of such a system ended up predicting future policing (the areas where police had previously made arrests) rather than future crime. Lum, K. & Isaac, W. (2016). To predict and serve? Significance, 13(5), 14-19. https://doi.org/10.1111/j.1740-9713.2016.00960.x
It turns out that algorithms can reproduce the very biases present in the data used to train them. This should not be surprising to those who understand how the technology works (specifically, its recreation of perceived patterns). Accordingly, users should watch for and carefully evaluate potential bias when reviewing responses generated by GenAI.
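The following toy sketch in Python (with invented neighborhoods and numbers) shows the mechanism. A "model" that allocates future patrols in proportion to historical arrest counts concentrates policing wherever policing was already concentrated, even though the true crime rate here is identical everywhere by construction:

    # Illustrative sketch only: the neighborhoods and figures are invented.
    # A "model" that allocates patrols in proportion to historical arrests
    # simply reproduces the bias baked into its training data.
    historical_arrests = {"Northside": 320, "Southside": 80}  # skewed by past patrols
    true_crime_rate = {"Northside": 100, "Southside": 100}    # identical by construction

    total_arrests = sum(historical_arrests.values())
    for neighborhood, arrests in historical_arrests.items():
        share = arrests / total_arrests
        print(f"{neighborhood}: {share:.0%} of predicted 'crime' "
              f"(true rate: {true_crime_rate[neighborhood]} incidents)")

Worse, if the extra patrols generate extra arrests, the skewed predictions feed back into the next round of training data, the feedback loop documented in the study cited above.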
Resources
The amount of computing power required to generate new content, especially images and video, can be astronomical. Studies differ on the exact environmental cost of each GenAI prompt, which also varies with the question asked, the model used, the context supplied, and other factors. Users should be aware of this cost and individually evaluate when and how to use GenAI accordingly.
Overuse
Some commentators are concerned that extensive use of GenAI will bring about serious long-term consequences, ranging from users losing the ability to do their own research to an erosion of distinctly human skills. Users can individually determine whether these claims have merit and decide their own comfort level with GenAI use.