
Provisions That Should be in Every Law Firm’s AI Policy: Part II

Nov 21, 2025

Earlier this year, we published a Model AI Policy for law firms along with a blog post outlining the 10 Provisions That Should be in Every Law Firm’s AI Policy.

However, because the pace of AI innovations continues to accelerate, law firms must regularly revisit their policies to ensure they reflect the current environment.

In Part II of this series, we outline three additional provisions that should be in every law firm’s AI policy: agentic AI oversight, bias and fairness, and vendor and third-party compliance.

Agentic AI Oversight

“Agentic AI” refers to systems that can act autonomously and are capable of independent decision making. In a law firm setting, agentic AI tools may be able to send emails, summarize discovery materials, or generate filings without being prompted at every step. They may also be capable of making legal decisions, recommending litigation strategy, or determining which cases a firm takes on. While efficient, these capabilities introduce new risks for law firms.

An agentic AI provision in an AI policy should specify that attorneys remain fully responsible for every AI-assisted action or document. It should make clear that AI can automate workflows but cannot provide legal advice, negotiate, or communicate with clients independently. Finally, all agentic actions should be logged, continuously monitored, and subject to immediate human override.

Setting these expectations helps ensure that attorneys, not autonomous systems, remain at the forefront of client service.

Bias and Fairness

AI systems learn from massive data sets, which means they can replicate or even amplify bias in those data sources. Bias can be harmful in a number of contexts, but in the legal industry, it is especially dangerous. A biased document review or intake tool could impact who gets opportunities, how cases are prioritized, or even how evidence is interpreted.

A bias and fairness provision in an AI policy should require the firm and its employees to evaluate tools for bias before adoption and ban tools and practices that could result in discriminatory or biased treatment of clients, employees, witnesses, or other individuals.

This clause reinforces the firm’s ethical obligations and helps ensure that AI strengthens, rather than undermines, fairness and equal treatment.

Vendor & Third-Party Compliance

Many AI tools are developed by external vendors, and there are thousands of AI companies operating in the marketplace today. While working with a third party can be more efficient and unlock new capabilities for a law firm, it also raises confidentiality and data protection concerns.

A vendor and third-party compliance provision should mandate due diligence before adopting any AI vendor, require contractual safeguards, and set guidance for ongoing monitoring.

By formalizing these requirements, firms can reduce their exposure to data leaks, privilege breaches, and ethical violations.

Final Thoughts

AI can help law firms work smarter, but without clear boundaries, it can also expose them to ethical and reputational risk.

While policies are just one part of a responsible AI strategy, a well-drafted AI policy protects clients, supports attorneys, and keeps law firms on the right side of professional responsibility. Check out our updated model AI policy here.