To prompt or not to prompt?

By Eve Vlemincx.

Whether lawyers should embrace AI-generated content for legal analysis demands careful consideration, given the potential for bias and inaccuracy that comes with it. At the same time, there are compelling reasons for lawyers to learn how to prompt when the goal is making legal communication more accessible and comprehensible to clients.

No: Lawyers should not learn to prompt when it comes to legal analysis, for the following reasons:

  1. Inaccuracy and Unreliability: Despite AI's remarkable advancements, it is not infallible, especially in the context of complex legal analysis. The intricacies and nuances of the law demand human expertise to validate and cross-reference AI-generated information. Relying solely on AI for legal analysis can lead to errors with significant consequences for clients and the integrity of the legal system.

  2. Biases and Misinformation: AI models are trained on vast datasets, which can inadvertently contain biases or outdated legal information. When AI generates content based on these datasets, it may perpetuate such biases, compromising the objective and fair nature of legal advice. Additionally, the potential for fake news and misinformation in AI-generated content raises ethical concerns for legal professionals.

Yes: Enhancing legal communication and accessibility

AI can undoubtedly serve as a valuable tool for lawyers when it comes to improving legal communication and accessibility.

  1. Avoiding Legalese: Legal jargon can be confusing and intimidating for clients who are not familiar with legal language. AI can assist lawyers in crafting clearer, more straightforward legal messages, eliminating legalese, and making legal documents more understandable for clients. This can empower clients to make informed decisions and help lawyers become better communicators.

  2. Training Platform for Accuracy and Clarity: Rather than relying blindly on AI-generated content, lawyers can use it as a training platform, checking that the text is both legally accurate and written in a clear, understandable manner. By incorporating human oversight and validation, lawyers can enhance the quality of AI-generated content and ensure its reliability.


The decision of whether to prompt with AI-generated content in the legal profession must account for the biases and inaccuracies that AI can introduce. While caution is essential when using AI for legal analysis, lawyers can undoubtedly benefit from AI when it comes to improving legal communication and accessibility. By balancing AI's strengths with human expertise, the legal profession can harness AI as a valuable tool, ultimately benefiting both lawyers and their clients.


About the Author

Eve Vlemincx is a strategic advisor with expertise in a wide array of areas including legal digital transformation, innovation and leadership. She serves as an advisory council member for Harvard Business Review and is a Course Facilitator at Stanford Graduate School of Business. Eve is highly sought after as a keynote speaker and guest lecturer in various professional settings. Notably, she has been honored as a five-time recipient of the Stanford GSB LEAD Award.

Operating at the dynamic intersection of legal and business, Eve holds certifications from esteemed institutions such as Oxford, Harvard, Kellogg and Stanford Graduate School of Business. Additionally, she brings substantial experience as a seasoned lawyer specializing in corporate law and restructurings.

Eve's guiding philosophy is centered on working smarter, not harder, as she helps individuals and organizations navigate the complexities of today's rapidly evolving landscape.
