Generative Artificial Intelligence in Legal Practice: Promise, Peril, and the Discovery Frontier

Technological change reshaping the law is not a new concept. Throughout history, technological innovations have disrupted societal norms, forcing adaptations in legal frameworks. From the industrial revolution to the digital age, courts and legislatures have grappled with novel questions about the ethical and social repercussions of new technology. Generative artificial intelligence (Gen AI) is no exception, and it is already a divisive issue. Its rapid rise is transforming the world around us, and people tend to either love it or hate it (or fear it); lawyers are no exception to the divide. I have personally spoken with attorneys who embrace the integration and potential of Gen AI, and I have heard from others who claim they would never touch it, insisting it is ruining the integrity of the profession. Like most disputes, the reality probably lies somewhere in the middle.

On one hand, AI gives lawyers the opportunity to accelerate some of the more menial or tedious work, and the time saved can be passed on to clients as cost savings. On the other hand, if you have been following legal news, you are already aware that Gen AI tools have cited case law that does not exist, and lawyers have been sanctioned for filing those fabrications. There are also concerns about AI data being discoverable, which could complicate (or at least change) the future of litigation. So is AI useful for counsel or clients?

Generative AI: Defined and Refined for Legal Work

To properly answer that question, it is important to at least understand how Gen AI works. “Generative artificial intelligence” refers to a subset of artificial intelligence systems designed to produce novel outputs based on input data. These systems typically rely on generative models, which aim to replicate the statistical patterns inherent in their training data. This is where some lawyers go wrong: patterns are not perfect. Unlike traditional AI systems that execute deterministic tasks, generative models exhibit creativity-like properties, enabling them to generate unique content such as text, images, or audio. In legal work, Gen AI is being used to generate text, summarize documents, draft responses, and analyze large datasets. Used carefully, these tools can assist, rather than replace, human judgment.

The deployment of Gen AI across law firms and corporate legal departments has accelerated quickly. According to the American Bar Association (“ABA”), the pace of technological adoption will only increase, and competence in responsible AI use is becoming a baseline professional duty. Gen AI tools are now embedded in tasks ranging from e-discovery and document review to contract management, due diligence, legal research, and regulatory monitoring. Reuters has reported that e-discovery is one of the most heavily affected areas, with AI categorization tools streamlining how legal teams identify relevant or privileged material. The ability to sift through terabytes of data efficiently can save clients substantial time and cost.

Yet these gains come with considerable risk. Accuracy remains a central concern. Gen AI systems can produce content that is plausible but incorrect, a phenomenon known as “hallucination.” In legal contexts, even a minor factual or citation error can mislead courts or clients. Moreover, lawyers have ethical obligations under the rules of professional conduct to maintain competence and safeguard client confidences. Uploading sensitive client information into external AI platforms can breach confidentiality or privilege if data is retained, shared, or used for model training. The duty of technological competence now encompasses understanding how AI tools handle data, the limits of their reliability, and the means of verifying their outputs.

The Discovery Frontier

Speaking of AI data, we are on the frontier of AI law, and some of the most complex developments involve the intersection of AI and discovery. While there are currently many cases regarding AI (see the ABA’s “Recent Developments in Artificial Intelligence” article for a breakdown of cases in 2025), perhaps the most recognizable is the ongoing New York Times v. OpenAI litigation. In 2023, the Times sued OpenAI and Microsoft, accusing them of using millions of articles without permission. The lawsuit has been novel in revealing just how much data flows through these AI companies. Notably, in May 2025, the United States District Court for the Southern District of New York ordered OpenAI to preserve and segregate all of its data; OpenAI argued that it would have to retain up to 60 billion conversations and estimated that only about 0.006% of the data was likely relevant to the case. OpenAI was relieved of this obligation in September 2025, but the episode raises the question of how, and when, the data we feed into AI companies will become accessible to litigants.

Federal Rule of Civil Procedure 34 governs the production of documents, electronically stored information (ESI), and tangible things during discovery in civil litigation. It allows a party to serve a request on another party to produce and permit inspection, copying, testing, or sampling of materials in the responding party’s possession, custody, or control. This includes writings, records, photographs, sound recordings, databases, and other digital information stored in any medium. So we are left with an open question: are AI conversations discoverable? If they are (and if they fall within the scope of discovery, they probably are), our clients may need counsel on AI technology in order to protect their interests, and attorneys must understand the same limitations. It is our responsibility to ensure the client’s interests remain the priority.

To harness the potential of Gen AI while protecting clients, firms must integrate robust supervision and accountability measures. That means clear internal policies governing AI use, training for lawyers and staff on both capabilities and risks, and transparent client communication about when and how AI tools should be used. Courts are currently answering these questions, and I suspect there will be many more to come in the next five years. It’s exciting in a way. These legal battles will shape the evidentiary rules surrounding AI.

Conclusion

So to return to the question, “is AI useful for counsel or clients?” Maybe. AI is not a replacement for legal counsel, and it is certainly not helpful in every situation. While it promises enormous advantages in speed, insight, and reach, it is not a substitute for human discernment, legal reasoning, or professional integrity. Like other technological innovations, it is a tool, and we are still learning how to use it safely and effectively. I believe some lawyers will be better at recognizing both the benefits and the limitations of AI, and as the profession becomes more integrated with the technology, those lawyers will have a head start on the future of law.

This article is for informational purposes only and is not intended to constitute legal advice.