AI IN THE CORPORATE LANDSCAPE: LEGISLATION, TRANSACTIONS, AND STRATEGY - A Comprehensive Guide for Legal Practitioners, Corporate Officers, and Transaction Counsel
- Erick Robinson
Introduction
The year 2026 marks a watershed moment for artificial intelligence regulation, contracting, and corporate governance. After years of legislative proposals, voluntary frameworks, and incremental agency guidance, the regulatory environment has shifted decisively. In the United States, a patchwork of state laws has begun taking effect even as the federal government signals an intent to impose a unified national framework through preemption. In Europe, the EU AI Act—the world’s first comprehensive legal framework for AI—is entering its most consequential enforcement phase, with obligations for high-risk AI systems becoming applicable in August 2026. Globally, more than seventy-five nations are actively developing or tracking AI legislation.
For corporate officers, transaction counsel, and legal practitioners, these developments carry immediate and practical consequences. AI is no longer a peripheral technology consideration; it is a core variable in regulatory compliance, risk allocation, intellectual property strategy, and fiduciary oversight. The integration of AI into commercial products, outsourcing arrangements, enterprise software, and internal decision-making has forced a fundamental rethinking of longstanding contract frameworks, due diligence practices, and board governance structures.
This article provides a comprehensive analysis organized around three interrelated pillars. Part I examines recent developments in AI legislation and regulation across federal, state, and international jurisdictions. Part II addresses the emerging standards and best practices for AI-related transactions and agreements—from technology licensing and SaaS contracts to M&A and government procurement. Part III analyzes the development and refinement of corporate strategy relating to AI, including board governance, risk management frameworks, and the organizational changes required to deploy AI responsibly and at scale. Together, these three pillars form the essential knowledge base for any practitioner advising clients on AI-related matters in the current environment.

Part I: Recent Developments in AI Legislation and Regulation
The regulatory landscape for artificial intelligence is evolving at a pace that challenges even the most attentive practitioners. The absence of a comprehensive federal AI statute in the United States has created a vacuum that state legislatures, federal agencies, and international bodies have each moved to fill in overlapping and sometimes conflicting ways. Understanding this multi-layered environment is essential for compliance planning and strategic counseling.
A. The Federal Landscape: Executive Action, Agency Guidance, and the Push for Preemption
At the federal level, the regulatory posture has undergone a marked shift since January 2025. President Trump’s Executive Order 14179 revoked the Biden-era AI executive order, which had emphasized safety testing, reporting requirements, and risk-mitigation mandates. The current administration’s approach emphasizes innovation, competitiveness, and deregulation as national priorities, framing AI development as a strategic asset in global competition rather than a source of domestic regulatory constraint.
This policy orientation was further codified in Executive Order 14365, issued in December 2025, which instructed executive branch officials to draft legislative recommendations for a uniform federal regulatory framework that would preempt state AI laws deemed to impose “undue burdens.” The Executive Order preserves limited carve-outs for state laws relating to child safety protections, AI compute and data center infrastructure, state government procurement, and other topics to be determined.
On March 20, 2026, the White House released its National Policy Framework for Artificial Intelligence, outlining legislative recommendations for Congress to establish a unified federal approach. The Framework builds on prior executive actions and the administration’s “America’s AI Action Plan,” proposing that Congress adopt legislation broadly preempting state AI laws. However, Congress has thus far declined to enact comprehensive federal preemption, including rejecting such provisions in both the One Big Beautiful Bill Act and the National Defense Authorization Act.
In parallel, Senator Marsha Blackburn introduced the TRUMP AMERICA AI Act in December 2025, which seeks to codify the Executive Order’s approach into a single federal rulebook. Multiple additional congressional proposals are expected throughout 2026. The Algorithmic Accountability Act and the AI Foundation Model Transparency Act remain active in Congress and represent the most likely pathways to comprehensive federal legislation by late 2026 or 2027.
The practical reality for practitioners is that federal AI governance currently relies on a combination of executive orders, agency enforcement under existing statutes, and voluntary guidelines—none of which creates directly enforceable obligations on private companies. The NIST AI Risk Management Framework has emerged as a de facto compliance standard adopted voluntarily by enterprises seeking to demonstrate responsible AI governance, but it carries no binding legal authority. Federal Trade Commission enforcement actions under Section 5 authority, Equal Employment Opportunity Commission guidance on algorithmic hiring, and Food and Drug Administration oversight of AI in medical devices represent the most concrete federal regulatory touchpoints for private sector actors.
1. GSA’s Proposed AI Procurement Clause
One of the most significant recent federal developments is the General Services Administration’s proposed clause GSAR 552.239-7001, published on March 6, 2026. This proposed clause, titled “Basic Safeguarding of Artificial Intelligence Systems,” would impose substantial obligations on contractors providing AI solutions under GSA Multiple Award Schedule contracts. The clause introduces six core obligations:
disclosure of all AI systems used in contract performance;
data segregation and deletion requirements;
a 72-hour incident reporting mandate;
adherence to codified “Unbiased AI Principles”;
contractor responsibility for third-party AI “Service Provider” compliance; and
the grant of an irrevocable, royalty-free, non-exclusive government license to use AI systems for any lawful government purpose.
The clause’s treatment of “Service Providers” is particularly notable. Although these entities are not parties to the GSA contract, the clause holds prime contractors directly responsible for their compliance. This means contractors could be liable for ensuring that any company whose AI system they use—even under a standard commercial platform agreement—complies with the clause’s full requirements. The government also retains the right to independently evaluate AI systems and suspend their use for noncompliance, and contractors face liability for decommissioning costs if their contracts are terminated for noncompliance with the Unbiased AI Principles.
For contractors and their commercial AI vendors, these requirements will necessitate significant adjustments to product offerings, compliance programs, and risk allocation strategies. The public comment period was extended to April 3, 2026, with potential inclusion in GSA’s Schedule Refresh 32. Practitioners advising government contractors should conduct immediate gap analyses of current AI offerings and review all agreements with commercial AI providers to assess the feasibility of flowing down these new obligations.
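One way to operationalize that gap analysis is to treat the six proposed obligations as checklist items and score current practice against each. The sketch below is a minimal illustration in Python; the status values and assessment notes are hypothetical and are not drawn from the proposed rule.

```python
# Minimal gap-analysis sketch: the six obligations of proposed GSAR 552.239-7001
# scored against current practice. All statuses and notes are hypothetical.
PROPOSED_OBLIGATIONS = [
    "Disclosure of all AI systems used in contract performance",
    "Data segregation and deletion requirements",
    "72-hour incident reporting",
    "Adherence to codified 'Unbiased AI Principles'",
    "Responsibility for third-party AI 'Service Provider' compliance",
    "Irrevocable, royalty-free, non-exclusive government license",
]

# Hypothetical self-assessment: obligation -> (status, note)
assessment = {
    PROPOSED_OBLIGATIONS[0]: ("partial", "inventory exists but omits embedded vendor AI"),
    PROPOSED_OBLIGATIONS[1]: ("met", "segregation and deletion procedures documented"),
    PROPOSED_OBLIGATIONS[2]: ("gap", "current incident-reporting SLA is five business days"),
    PROPOSED_OBLIGATIONS[3]: ("gap", "no documented bias-testing program"),
    PROPOSED_OBLIGATIONS[4]: ("gap", "platform agreements lack flow-down terms"),
    PROPOSED_OBLIGATIONS[5]: ("partial", "license scope not yet reviewed by IP counsel"),
}

# Print only the items needing remediation.
for obligation, (status, note) in assessment.items():
    if status != "met":
        print(f"[{status.upper():7}] {obligation} -- {note}")
```

Even a rough self-assessment of this kind makes the flow-down problem concrete: in the hypothetical example, most gaps sit with third-party providers whose commercial terms the contractor does not control.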
B. State AI Legislation: A Patchwork of Enforceable Obligations
While federal action remains aspirational, state legislatures have moved decisively. Multiple state AI laws took effect in January 2026, and several more are scheduled to become enforceable by mid-year. These laws are already reshaping vendor contracting practices and compliance expectations across industries.
1. Colorado AI Act
Colorado has enacted what is widely regarded as the most comprehensive state AI law in the United States. The Colorado AI Act establishes requirements for developers and deployers of “high-risk” artificial intelligence systems, defined as systems that make, or are a substantial factor in making, a consequential decision with a material legal or similarly significant effect on consumers. Covered domains include education, employment, essential government services, healthcare, housing, insurance, and legal services. The Act requires developers and deployers to use reasonable care to avoid algorithmic discrimination, adopt risk management policies and programs, provide consumer notices, and conduct impact assessments.
Colorado postponed the Act’s implementation date from February 1 to June 30, 2026, partly in response to industry concerns and partly to allow time for alignment with potential federal action. The Act provides an affirmative defense for entities that comply with nationally or internationally recognized risk management frameworks for AI systems. The Colorado Attorney General holds exclusive enforcement authority and has been granted rulemaking power to implement the Act’s requirements.
2. California: Multiple Overlapping AI Statutes
California has enacted a suite of AI-focused statutes that collectively impose transparency, training data disclosure, employment oversight, and consumer protection obligations, all effective January 1, 2026:
• AI Safety Act (SB 1047 successor): Establishes protections for employees who report AI-related risks or critical safety concerns to authorities, including whistleblower protections related to specific AI models. Creates the CalCompute public AI cloud consortium.
• AI Transparency Act (SB 942): Mandates that covered providers (those whose generative AI systems are publicly accessible within California and have more than one million monthly visitors or users) implement comprehensive measures to disclose when content has been generated or modified by AI. Violations carry penalties of $5,000 per violation per day.
• Training Data Transparency Act (AB 2013): Requires developers of generative AI systems to publish a high-level summary of the datasets used to develop and train their systems.
• CCPA Automated Decision-Making Regulations: Under new regulations issued pursuant to the California Consumer Privacy Act, businesses using automated decision-making technology to make significant decisions about consumers must provide pre-use notices, access rights, and opt-out mechanisms. These requirements become enforceable January 1, 2027.
3. Texas Responsible AI Governance Act (TRAIGA)
Texas’s TRAIGA took effect on January 1, 2026. The statute regulates certain uses of AI systems, provides for civil penalties and Attorney General enforcement, and includes an innovative regulatory sandbox concept that permits testing of AI systems under defined conditions before broader deployment. TRAIGA reflects an approach that balances regulatory oversight with support for innovation—a framework that may serve as a model for other business-friendly states considering AI legislation.

4. Other Notable State Activity
The pace of state AI legislation is accelerating dramatically. As of early 2026, at least seventy-eight chatbot-related bills are pending across twenty-seven states. New York’s RAISE Act, building on NYC Local Law 144’s bias audit requirements for automated employment decision tools, would impose safety policies on large AI model developers. Illinois, Maryland, and New Jersey have enacted or are considering targeted restrictions on AI use in hiring decisions.
Utah has established an AI learning laboratory for regulatory experimentation. Virginia and Washington are considering AI transparency bills modeled on California’s framework. These developments underscore a critical point for practitioners: even companies that do not believe they are in the “AI business” are likely subject to AI-related obligations through their use of third-party AI tools and platforms.
C. International Regulatory Developments
1. The EU AI Act: Entering Enforcement
The EU AI Act, adopted in May 2024, represents the world’s first comprehensive legal framework for AI. The Act entered into force on August 1, 2024, and its provisions are being phased in over a staged timeline. Prohibitions on unacceptable-risk AI practices—such as social scoring systems and manipulative AI—took effect in February 2025. Governance rules and obligations for general-purpose AI models became applicable in August 2025. The most consequential enforcement milestone is August 2, 2026, when the majority of the Act’s remaining provisions become applicable, including obligations for high-risk AI systems under Annex III, transparency requirements under Article 50, and the requirement that each EU member state establish at least one operational AI regulatory sandbox.

The Act employs a risk-based classification framework. High-risk AI systems—those used in employment decisions, credit assessments, educational placement, healthcare delivery, law enforcement, and critical infrastructure management—face the most stringent requirements. Providers must implement quality management systems, maintain detailed technical documentation, conduct conformity assessments, affix CE markings, register systems in the EU high-risk AI database, and establish post-market monitoring processes. Deployers must implement human oversight measures, monitor system operation, maintain logs, and report serious incidents within the Act’s mandated timeframes. Non-compliance penalties can reach €35 million or seven percent of worldwide annual turnover, whichever is higher.
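The “whichever is higher” formulation matters in practice: for large enterprises, the turnover prong rather than the fixed amount sets the ceiling. A minimal arithmetic sketch in Python, using hypothetical turnover figures rather than any real company’s financials:

```python
# Illustrative only: the EU AI Act's headline cap is the greater of a fixed
# EUR 35 million and 7% of worldwide annual turnover. Not a compliance tool.

def eu_ai_act_penalty_cap(worldwide_turnover_eur: float) -> float:
    """Maximum fine exposure under the 'whichever is higher' rule."""
    return max(35_000_000, 0.07 * worldwide_turnover_eur)

# Hypothetical EUR 2 billion turnover: the 7% prong (EUR 140M) governs.
print(eu_ai_act_penalty_cap(2_000_000_000))  # 140000000.0
# Hypothetical EUR 100 million turnover: the EUR 35M floor governs.
print(eu_ai_act_penalty_cap(100_000_000))    # 35000000
```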
The European Commission’s November 2025 Digital Omnibus package proposed extending the applicability date for certain high-risk system obligations from August 2026 to as late as December 2027. However, this proposal remains subject to legislative negotiation, and organizations should not rely on an extension materializing. Prudent compliance planning treats August 2, 2026 as the operative deadline.
2. Other International Frameworks
Beyond the EU, the global regulatory picture is diversifying rapidly. China has adopted a vertical control model, mandating watermark labeling of AI-generated content from September 2025 and strengthening cybersecurity-related fines from January 2026. India has opted for a “soft law first, hard law where harm is evident” approach, with the Ministry of Electronics and Information Technology’s seven-principle governance framework guiding sectoral regulation.
The Council of Europe’s Framework Convention on AI, signed in September 2024 by the United States and other nations, would establish binding international obligations upon ratification—though the current U.S. administration’s posture on international AI commitments introduces uncertainty about American participation. For multinational enterprises, navigating this fragmented landscape requires jurisdiction-specific compliance strategies that cannot be addressed through a single global framework.
Part II: Insights and Standards for AI-Related Transactions and Agreements
The integration of artificial intelligence into commercial products and enterprise operations has exposed fundamental inadequacies in traditional contract frameworks. Technology agreements that were adequate when AI served as a passive feature within human-directed workflows are insufficient for an era of agentic AI, embedded decision-making, and autonomous operations. Practitioners across the buy-side and sell-side are confronting novel questions about intellectual property ownership, liability allocation, data governance, regulatory compliance, and risk management that require purpose-built contractual solutions.
A. The Evolving Architecture of AI Agreements
1. Beyond Generic AI Disclaimers
Two years ago, many technology agreements addressed AI—if at all—through a generic disclaimer or a brief acknowledgment that AI features might be included in the offering. That approach is now inadequate. The emergence of agentic AI and deeply embedded AI functionality has created tension with traditional SaaS agreement structures. Sophisticated buyers are pushing for service-oriented terms: defined service descriptions, performance-based warranties, governance and audit rights, and outcome-tied liability structures. Vendors are resisting some of these provisions, but the negotiating dynamic is shifting, particularly in deals where AI performs critical enterprise tasks.
The better approach for both parties is to integrate AI risk into the substance of the contract—including the IP provisions, liability framework, data governance terms, compliance allocation, and service-level commitments—rather than isolating AI considerations in a standalone addendum. A well-drafted AI agreement reflects the specific use case and the actual risk profile of the deal. While this requires more work upfront, it produces an agreement that functions when it matters.
2. Service Descriptions for AI-Driven Functions
Where AI performs tasks on a customer’s behalf, the service description must articulate what the AI is actually doing—the workflows it executes, the decisions it makes, the systems it touches, and the guardrails that apply. Vague scope definitions create gaps that both parties will regret when something goes wrong. Standard SaaS agreements for AI products frequently include broad language disclaiming any warranty as to accuracy or fitness for purpose. While vendors have legitimate reasons for these disclaimers—AI outputs are probabilistic, not deterministic—sophisticated buyers should negotiate for performance benchmarks, minimum accuracy thresholds, and structured remediation processes when AI systems fail to meet defined service levels.
B. Intellectual Property: Models, Outputs, and Training Data
1. Ownership and Licensing of AI Outputs
In the realm of AI, questions that once focused on software and content now extend to models and outputs. Agreements must address ownership or license rights across multiple categories: raw model outputs, post-processed or human-refined outputs, deliverables incorporating AI outputs, prompt libraries, templates, and evaluator tools. Each of these can embody significant value, and agreements should address derivative works treatment, assignment mechanics, and usage rights for each category. The failure to define these rights at the outset creates disputes that are expensive and difficult to resolve after the fact.
A critical emerging issue concerns whether AI-generated outputs are copyrightable. Courts are increasingly signaling that works generated solely by AI without meaningful human creative direction may not qualify for copyright protection. This has profound implications for contracting: if an AI-generated deliverable cannot be copyrighted, the “assignment” or “license” of rights in that deliverable may convey less than either party expects. Practitioners should consider including representations about the level of human involvement in creating deliverables, warranties regarding the copyrightability of works, and indemnification provisions addressing scenarios where copyright protection proves unavailable.
2. Training Data Rights and Obligations
Rights in training data have become one of the most heavily negotiated elements of AI agreements. Customers providing training data should consider required consents, use limitations, deletion and return rights, and data segregation. Vendors may seek licenses to use customer-provided data to improve their services, subject to regulatory and confidentiality limitations. Both parties must address lawful data collection, use, and commercialization, along with restrictions relating to sensitive or regulated data. Compliance representations regarding training data are non-negotiable—this includes adherence to all use restrictions in licenses, contracts, terms of service, or other agreements related to AI technology, training data, or AI inputs.
C. Liability, Indemnification, and Insurance

1. Rethinking Liability Frameworks
The traditional SaaS liability model—in which liability is capped, the vendor maintains the platform, and the customer bears responsibility for how it uses the tool—is being challenged by the realities of agentic AI. Where AI systems autonomously execute workflows, make decisions, and interact with downstream systems, the question of who bears responsibility for AI-caused harm becomes genuinely difficult. Practitioners should consider graduated liability structures that distinguish between harms arising from model defects, harms arising from customer misuse, and harms arising from unpredictable AI behavior that neither party could reasonably have foreseen.
Indemnification provisions in AI agreements are increasingly addressing AI-specific scenarios: claims that AI-generated outputs infringe third-party intellectual property; claims that AI systems produced discriminatory outcomes in violation of applicable law; claims arising from the unauthorized use of training data; and claims related to AI-generated content that is defamatory, inaccurate, or harmful. Each of these categories presents distinct risk profiles that may warrant different liability caps, different insurance requirements, and different allocation of the duty to defend.
2. AI and Insurance
Questions about AI use are now appearing as part of the underwriting and renewal process for certain liability and cyber insurance policies. Insurers are increasingly seeking to understand the scope and nature of an organization’s AI deployment, the governance structures in place, and the risk management practices applied to AI systems. Companies that cannot provide satisfactory answers may face higher premiums, narrower coverage, or coverage exclusions for AI-related claims. Transaction counsel should advise clients to anticipate insurer inquiries and to maintain documentation—including AI inventories, risk assessments, and governance policies—that demonstrates responsible AI management.
D. AI Provisions in M&A Transactions

1. AI-Specific Representations and Warranties
AI-specific representations and warranties are becoming standard in sophisticated M&A transactions, even when the target company’s AI use is not material to the core business. These clauses serve as a method for buyers and investors to obtain contractual assurances that risks unique to AI have been addressed, to backstop due diligence, and to provide a clear path for recourse if post-transaction problems emerge. Key areas for AI-specific reps and warranties include the following:
• Data use and training data: Representations that the AI model was trained only with permissioned data, and that the target company has the lawful right to use all data incorporated into its AI systems.
• IP ownership and protection: Warranties regarding ownership of AI models, absence of third-party claims, and the validity of IP protection for AI-related assets.
• Regulatory compliance: Representations that the target’s AI systems comply with applicable laws, including anti-discrimination statutes, privacy regulations, and sector-specific requirements.
• Litigation and risk disclosure: Disclosures of any pending or threatened claims, inquiries, audits, or investigations related to AI products or AI use in the business.
• Ongoing compliance: Representations confirming compliance with all use restrictions in licenses, contracts, and terms of service related to AI technology.
2. AI Considerations in NDAs and Due Diligence
The use of AI in due diligence processes raises a fundamental question: whether uploading confidential information into an AI tool violates the NDA. In the current deal environment, sophisticated parties are negotiating explicit AI provisions in confidentiality agreements to avoid unintended breaches, protect trade secrets, and preserve deal value. Common approaches include prohibiting the uploading of confidential information into public or open-source AI platforms, restricting the use of AI tools that retain data or use inputs for model training, and requiring prior written consent before using AI in diligence. Rather than banning AI outright, many parties are establishing guardrails that permit the use of secure, enterprise-grade AI tools with contractual assurances regarding data isolation, no-training commitments, and deletion capabilities.
E. Regulatory Compliance as a Contracting Imperative
When negotiating agreements for the implementation and use of AI tools, regulatory compliance must be at the forefront during diligence and in negotiations. The cascade of new state laws, the approaching EU AI Act enforcement deadline, and evolving agency guidance mean that an agreement signed today must anticipate compliance obligations that may not yet be fully defined. Practitioners should build into AI agreements provisions for regulatory change management, compliance cooperation, and shared responsibility for monitoring and responding to new legal requirements. Termination provisions should address what happens when regulatory changes render an AI system non-compliant—including ongoing access during winddown, data access and return, content extraction, and data deletion.
Government contractors face particularly acute challenges. The proposed GSA clause discussed in Part I, combined with existing OMB memoranda and agency-specific guidance, creates a web of compliance obligations that must be flowed down through supply chains. Contractors that fail to anticipate these requirements in their vendor agreements risk being unable to perform their government contracts without expensive and time-consuming renegotiation of commercial AI platform terms.
Part III: Development and Refinement of Corporate Strategy Relating to AI
If Parts I and II address the external legal environment, Part III turns inward. The question for corporate leadership is no longer whether to adopt AI, but how to govern its use, integrate it into strategic planning, manage the associated risks, and build the organizational capabilities necessary for responsible deployment at scale. This is fundamentally a governance challenge, and the boards and executive teams that treat it as such will be best positioned to capture value while managing risk.
A. AI as a Board-Level Governance Priority
1. The Governance Gap
Despite the centrality of AI to corporate strategy, a striking governance gap persists. According to recent research, only thirty-nine percent of Fortune 100 boards have any form of AI oversight—whether through committees, directors with AI expertise, or ethics boards. Only thirteen percent of S&P 500 companies have at least one director with AI-related expertise. A McKinsey survey of directors found that sixty-six percent say their boards have limited to no knowledge or experience with AI, and nearly one in three say AI does not even appear on their board agendas. The National Association of Corporate Directors reports that only seventeen percent of boards have established an AI education plan for directors, and only six percent have a dedicated committee to oversee AI.
These numbers are striking in light of research demonstrating that AI-literate boards deliver measurably superior financial performance. A recent MIT study found that companies with AI-literate directors outperform their peers by 10.9 percentage points in return on equity. The competitive advantage of effective AI governance is no longer theoretical; it is empirically documented.
2. The KPMG-INSEAD AI Governance Principles
In April 2026, KPMG International and the INSEAD Corporate Governance Centre published AI Governance Principles for Boards—a set of sector-agnostic principles designed to guide boards at all levels of AI maturity. The principles address four interrelated dimensions of board responsibility:
• Strategic Oversight for Long-Term Value Creation: Governing in uncharted territory that favors speed, experimentation, and quick results while maintaining long-term strategic coherence.
• Active Technology and Security Oversight: Balancing technology sovereignty, cybersecurity, data security, and AI-specific security risks with the agility and scale benefits of partnering and outsourcing.
• Workforce Transformation and Human Accountability: Balancing productivity gains with forward-looking workforce and talent management strategies that preserve human judgment in critical decisions.
• The Work of the Board Itself: Considering how AI will affect the board’s own oversight processes, governance practices, and information asymmetries with management.
These principles reflect a critical insight: AI governance is not merely a risk management function but a strategic imperative that requires the same level of board attention as capital allocation, CEO succession, and competitive positioning.
B. Building an AI Governance Framework
1. The Three-Layer Challenge
Effective AI governance requires alignment across three organizational layers that frequently operate independently: the board, management, and the technical implementation team. The most common structural failure in AI governance is that each of these groups evaluates AI independently, producing fragmented assessments that lack strategic coherence. Boards should insist on cross-functional AI governance committees that include representation from all three layers, with clear reporting lines, defined escalation protocols, and regular cadences of review.

2. Classification and Inventory
No organization can satisfy regulatory requirements or manage AI risk without first knowing what AI systems are in use, where they are deployed, and who built or procured them. A comprehensive AI inventory is the prerequisite for everything else—and for many organizations, completing this inventory alone will take longer than expected. The inventory should classify each AI system into risk tiers:
• Administrative support: Note synthesis, meeting preparation, document retrieval. Requires standard privacy and access controls.
• Analytical support: Trend analysis, anomaly detection, scenario comparison. Requires validation protocols and documented human review.
• Advisory and decisional support: Systems that propose strategic options, flag consequences, or make or substantially inform consequential decisions. Requires formal review, independent testing, and documented human sign-off.
The higher the tier, the stronger the controls required. Organizations that fail to distinguish between low-risk productivity tools and high-impact systems that could alter decisions or create legal exposure will find themselves unable to allocate governance resources effectively.
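To make the tiering concrete, the sketch below expresses the inventory as a simple data structure that maps each tier to a baseline of required controls and surfaces gaps automatically. It is a minimal illustration in Python; the tier names follow the list above, but the field names, control mappings, and example system are assumptions rather than a prescribed schema.

```python
# Minimal AI-inventory sketch using the three risk tiers described above.
# Control baselines and the example system are illustrative assumptions.
from dataclasses import dataclass, field
from enum import Enum

class RiskTier(Enum):
    ADMINISTRATIVE = 1   # note synthesis, meeting preparation, document retrieval
    ANALYTICAL = 2       # trend analysis, anomaly detection, scenario comparison
    ADVISORY = 3         # proposes options or substantially informs consequential decisions

# Stronger controls at higher tiers, per the discussion above.
REQUIRED_CONTROLS = {
    RiskTier.ADMINISTRATIVE: ["privacy controls", "access controls"],
    RiskTier.ANALYTICAL: ["validation protocol", "documented human review"],
    RiskTier.ADVISORY: ["formal review", "independent testing", "documented human sign-off"],
}

@dataclass
class AISystem:
    name: str
    owner: str                       # who built or procured the system
    deployment: str                  # where it is deployed
    tier: RiskTier
    controls_in_place: list = field(default_factory=list)

    def control_gaps(self):
        """Required controls for this tier that are not yet documented."""
        return [c for c in REQUIRED_CONTROLS[self.tier] if c not in self.controls_in_place]

# Hypothetical advisory-tier system that has not been independently tested.
pricing_model = AISystem("pricing-recommender", "finance", "EU and US",
                         RiskTier.ADVISORY,
                         ["formal review", "documented human sign-off"])
print(pricing_model.control_gaps())  # ['independent testing']
```

A structure like this also feeds the board dashboard discussed later in this Part, since metrics such as the share of high-impact systems independently tested fall out of the same records.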
3. Policies, Disclosures, and Accountability
A robust AI governance framework requires written policies addressing tool usage, disclosure obligations, data handling, validation requirements, and records retention. Disclosure policies are particularly important: directors and officers do not need notification every time an AI tool assists with a routine task, but they do need transparency when AI materially shapes assumptions, risk ratings, strategic options, M&A evaluations, capital allocation scenarios, or legal interpretations. A useful standard is this: if the AI output could reasonably affect a board decision, the role of AI should be disclosed.
Accountability must be formally assigned. In many organizations, the governance committee oversees policy, the audit or risk committee reviews controls, and management owns implementation. This division works well when responsibilities are clearly documented. When ownership is vague, issues disappear between committees, and accountability collapses precisely when it is most needed.
C. AI and Strategic Decision-Making
1. From Experimentation to Execution
Survey data reveals a clear transition in corporate AI priorities. CEOs’ top AI priorities for 2026 center on building internal expertise (thirty-one percent globally, thirty-seven percent in North America), strengthening organizational culture to support adoption, and identifying the most effective tools through build-or-buy decisions. Operational priorities—such as integrating data, improving output quality, and identifying proven use cases—are also prominent, signaling a shift from the experimentation phase to practical deployment at scale.
For directors, the implication is clear: effective oversight increasingly centers on ensuring the capabilities, infrastructure, and governance needed to implement AI at scale and embed it into core business operations. AI governance is inseparable from strategic oversight. A board that cannot articulate how AI contributes to the organization’s competitive positioning, operational efficiency, and risk profile is not fulfilling its fiduciary duties in the current environment.
2. AI in M&A and Capital Allocation
With interest rates stabilizing and investor confidence returning, M&A activity is surging—driven in significant part by strategic interest in AI capabilities, digital infrastructure, and the energy transition. For boards, this renewed M&A cycle means that acquisition and capital allocation strategies must incorporate AI considerations at every stage: from target identification and valuation through diligence, integration planning, and post-closing governance. Directors should understand how each potential deal aligns with the company’s AI strategy and whether the expected benefits justify the risks and capital outlay. The AI-specific contractual provisions discussed in Part II are the transactional expression of this strategic imperative.
3. Talent Strategy and Workforce Transformation
Boards should ensure that management is focused on hiring and retaining employees with strong AI skills and a high willingness to apply them. The talent dimension of AI strategy is frequently underappreciated at the board level, but it is a critical success factor. AI systems require skilled human oversight, interpretation, and direction. Organizations that treat AI as a substitute for human judgment—rather than an augmentation of it—will encounter both operational failures and regulatory exposure. Directors should ask pointed questions about workforce readiness: Are we preserving institutional knowledge as we adopt new technologies? What key performance indicators should we request from management to assess the efficacy of AI initiatives? Is management preparing the workforce for how AI technologies will evolve over the next five to ten years?
D. Risk Management: A Continuous, Integrated Function
1. The Board-Level Operating Model
The most effective boards in 2026 are performing three functions simultaneously: enabling responsible AI use, governing the invisible influence of AI on decision-making, and preserving human accountability. A board-level AI risk management operating model should follow a structured sequence:
1. Map AI exposure. Identify every AI system that informs strategy, reporting, compliance, talent, cybersecurity, and customer operations.
2. Classify materiality. Distinguish low-risk productivity tools from high-impact systems that could alter decisions or create legal exposure.
3. Assign oversight. Clarify which committee oversees which categories of AI risk, and where management accountability sits.
4. Set boardroom rules. Approve policies for tool usage, disclosure, data handling, validation, and records retention.
5. Build challenge mechanisms. Require testing, independent review, and structured dissent for high-stakes AI uses.
6. Track incidents and learn. Review failures, near-misses, and control gaps as part of normal governance processes.
7. Review strategic value. Evaluate whether AI is improving decision quality, not merely reducing administrative burden.
2. Metrics and Dashboards
Boards should request a concise, decision-oriented dashboard with meaningful indicators. Useful metrics include the number of approved AI tools, the percentage of high-impact systems independently tested, unresolved AI-related incidents, vendor concentration risk, policy exceptions granted, and board papers with disclosed AI involvement. It is also advisable to conduct at least one annual deep-dive on a major AI use case—whether in pricing, fraud detection, forecasting, HR screening, legal review, or cybersecurity. The purpose is to move beyond surface reassurance and understand how the system actually works, where it fails, and how management responds.
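The dashboard itself can be kept deliberately simple. The sketch below, a minimal illustration in Python, computes two of the indicators named above from a periodic snapshot; the field names and sample figures are assumptions, not a reporting standard.

```python
# Minimal board-dashboard sketch computing indicators described above.
# Field names and sample figures are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class DashboardSnapshot:
    approved_tools: int              # number of approved AI tools
    high_impact_systems: int
    high_impact_tested: int          # independently tested high-impact systems
    open_incidents: int              # unresolved AI-related incidents
    policy_exceptions: int           # policy exceptions granted this period
    board_papers_with_ai: int        # board papers with disclosed AI involvement
    board_papers_total: int

    def pct_high_impact_tested(self) -> float:
        if self.high_impact_systems == 0:
            return 100.0
        return 100.0 * self.high_impact_tested / self.high_impact_systems

    def pct_papers_disclosing_ai(self) -> float:
        if self.board_papers_total == 0:
            return 0.0
        return 100.0 * self.board_papers_with_ai / self.board_papers_total

# Hypothetical quarter-end snapshot.
q = DashboardSnapshot(approved_tools=42, high_impact_systems=8, high_impact_tested=6,
                      open_incidents=2, policy_exceptions=3,
                      board_papers_with_ai=5, board_papers_total=20)
print(f"{q.pct_high_impact_tested():.0f}% of high-impact systems independently tested")
print(f"{q.pct_papers_disclosing_ai():.0f}% of board papers disclose AI involvement")
```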
3. Silent Partners: AI in the Information Supply Chain
A particularly insidious risk arises from AI that shapes board decisions indirectly—through management dashboards, recommendation engines, automated alerts, or pre-processed briefing materials—without being visible during board discussion. These “silent partner” systems may filter, prioritize, or frame information in ways that influence outcomes without directors understanding the role of AI. Boards should require management to identify and disclose all AI systems that contribute to the preparation of board materials, and to evaluate whether those systems introduce biases, limitations, or blind spots that could affect the quality of boardroom deliberation.
E. Looking Forward: The 2026 Imperative
The convergence of regulatory enforcement, transactional complexity, and governance expectations creates an imperative that cannot be deferred. Organizations that have not begun structured AI compliance efforts should treat the multiple enforcement deadlines of 2026—the EU AI Act’s August deadline, Colorado’s June compliance date, Texas’s operational requirements, and the cascade of California statutes—as binding commitments that require immediate action. Those that have begun compliance work should stress-test their programs against the contracting requirements described in Part II and the governance standards described in Part III.
The emergence of real-time AI auditing technology—in which models can be continuously monitored for compliance in production rather than assessed only at deployment—will fundamentally transform enforcement capabilities. Regulatory technology startups in this space are attracting significant venture capital and government procurement interest, pointing to a future in which AI governance is automated and adaptive rather than static and documentary. Boards and management teams that invest now in the infrastructure for continuous AI compliance will be better positioned for a regulatory environment that demands not periodic snapshots but ongoing proof of control.
Conclusion
Artificial intelligence has crossed the threshold from emerging technology to embedded operational reality. The legal and governance infrastructure is racing to catch up. For practitioners, the message of 2026 is unambiguous: the time for generic disclaimers, aspirational policies, and watch-and-wait approaches has passed. Organizations must now demonstrate evidence of control—a clear inventory of AI systems, defined ownership, documented governance processes, and the ability to demonstrate compliance across overlapping and sometimes conflicting jurisdictions.
The three pillars examined in this article—legislation, transactions, and corporate strategy—are deeply interconnected. Regulatory requirements drive contractual provisions; contractual provisions shape corporate governance; and corporate governance, in turn, determines an organization’s capacity to comply with evolving regulatory demands. Practitioners who operate in silos—treating regulatory compliance, deal negotiation, and governance counseling as separate workstreams—will produce suboptimal outcomes for their clients. The most effective advice integrates all three dimensions into a coherent strategy that anticipates regulatory change, embeds compliance into the contracting process, and elevates AI governance to the level of board-level fiduciary responsibility.
The stakes are substantial. Regulatory penalties under the EU AI Act can reach seven percent of global revenue. State attorneys general are stepping up enforcement. Insurance underwriters are scrutinizing AI practices. And the reputational consequences of AI failures—whether through algorithmic discrimination, data misuse, or uncontrolled autonomous behavior—can dwarf any regulatory fine. But the opportunities are equally significant. Organizations that govern AI effectively will outperform their peers, as empirical research increasingly demonstrates.
The competitive advantage belongs to those who treat AI governance not as a compliance burden but as a strategic asset. For corporate directors, transaction counsel, and legal practitioners, 2026 is the year in which AI governance moved from the periphery to the center of professional responsibility. The framework presented in this article is designed to provide the analytical structure, practical guidance, and strategic perspective necessary to meet that responsibility.
*****
Disclaimer: This article is provided for informational and educational purposes only and does not constitute legal advice. The information contained herein reflects the law as of the date of publication and is subject to change. Readers should consult with qualified legal counsel regarding specific legal questions or circumstances. The views expressed in this article are those of the author and do not necessarily reflect the views of Cherry Johnson Siegmund James, PC.


