Protecting Corporate Data, Elevating Service Standards, and Structuring Indemnity in the Age of AI
- Erick Robinson
I. Executive Summary
This article provides a comprehensive legal analysis addressing the critical commercial and technology contracting issues that enterprises face in their relationships with digital and software suppliers. Whether an organization is a multinational energy company, a mid-market financial services firm, or a growing technology startup, the legal frameworks governing supplier relationships must evolve to match the sophistication, scale, and risk profile of modern digital operations.
The proliferation of artificial intelligence, cloud computing, and software-as-a-service platforms has fundamentally altered the commercial contracting landscape. Suppliers that once delivered discrete software licenses now offer dynamic, data-intensive services that interact with, process, and in some cases learn from customer data. This shift demands a corresponding evolution in the contractual protections that enterprises deploy to safeguard their proprietary information, ensure adequate service delivery, and allocate risk in a manner commensurate with the value and sensitivity of the underlying data and operations.

This analysis is organized into six principal sections, each addressing a core area of concern for enterprise customers:
1. Norms Relating to Protection of Customer Content. An examination of market-accepted norms and recommended contractual clauses governing digital suppliers’ usage of customer data, including specific analysis of AI training restrictions, data segregation requirements, and confidentiality architectures.
2. Standard of Services. A detailed assessment of service-level expectations that enterprises can reasonably propose to digital suppliers as alternatives to the widely disclaimed “as is” and “as available” service standards, including uptime commitments, performance benchmarks, and remediation protocols.
3. Indemnity Language. A thorough analysis of standard and non-standard indemnification provisions in technology agreements, including the critical interplay between indemnity obligations and limitation-of-liability clauses, and the practical impact of that interplay in litigation.
4. Selecting and Working with Outside Counsel. Guidance on identifying technology-focused legal counsel and the types of advisory services that organizations should expect in the areas of commercial technology agreements, AI transactions, and regulatory compliance.
5. AI-Specific Legal Considerations. A discussion of the emerging legal landscape surrounding artificial intelligence, including legislative monitoring, AI transaction structuring, and corporate AI governance strategy.
6. Negotiating with Major Software Providers. Practical guidance for enterprises negotiating enterprise agreements with large technology vendors such as AWS, Microsoft, Salesforce, Oracle, and others.
Throughout this analysis, the article draws upon current market data and recent legislative developments—including the EU AI Act, the Colorado AI Act, the Texas Responsible AI Governance Act, and evolving federal guidance. While the specific contractual provisions and negotiating leverage available to each organization will vary based on its size, industry, and bargaining position, the principles and frameworks discussed herein are universally applicable.
II. Norms Relating to Protection of Customer Content
A. The Evolving Landscape of Data Usage in Digital Supplier Agreements
The question of what digital suppliers may and may not do with customer data has become one of the defining commercial and legal issues of the current era. Historically, enterprise software agreements focused primarily on license scope, access controls, and basic confidentiality obligations. The advent of cloud computing, and more recently generative artificial intelligence, has introduced a new dimension: the potential for supplier systems to ingest, process, aggregate, and even learn from customer data in ways that were not contemplated when many existing contractual frameworks were drafted.
There are now well-established market norms governing data usage in enterprise technology contracts. These norms have crystallized rapidly in the period between 2023 and 2026, driven by a convergence of regulatory pressure, high-profile data incidents, and growing enterprise sophistication regarding the value and sensitivity of their proprietary information. These norms apply with equal force to the Fortune 500 multinational and the fifty-person software company—the principles are the same, even if the negotiating dynamics differ.
B. Fundamental Data Ownership and Control Principles
The foundational principle in any enterprise technology agreement should be an unambiguous assertion of customer data ownership. Every organization, regardless of size, should ensure that its digital supplier agreements contain an express provision confirming that all data provided by the customer, generated by the customer’s use of the supplier’s services, or derived from the customer’s operations remains the exclusive property of the customer at all times. This principle should not be subject to qualification or exception.
Recommended contractual language should address the following elements:
• Ownership Declaration. All Customer Data, including without limitation all inputs, outputs, metadata, usage data, analytical results, and derivatives thereof, shall remain the sole and exclusive property of Customer. Supplier acquires no right, title, or interest in Customer Data by virtue of this Agreement or the provision of Services.
• Scope of “Customer Data.” The definition of Customer Data should be drafted broadly to encompass all categories of information that the customer may introduce into or generate through the supplier’s platform, including structured data, unstructured data, documents, communications, analytical outputs, and machine-generated data.
• Negative Covenants. Supplier shall not license, sell, commercialize, share, or otherwise make available Customer Data to any third party, or use Customer Data for any purpose other than the provision of Services under this Agreement, without Customer’s prior written consent.

For smaller enterprises that may have less negotiating leverage with major providers, these provisions remain critically important. Even where a supplier’s standard terms include some form of data-ownership language, the devil is in the details—particularly around exceptions, permitted uses, and derivative data. Every organization should scrutinize these provisions carefully.
C. AI Training Restrictions: The Central Battleground
The single most consequential data-usage issue in contemporary technology contracting is whether, and to what extent, a supplier may use customer data to train, fine-tune, or otherwise improve its artificial intelligence or machine learning models. This is the area where an enterprise’s contractual protections must be most precise and most stringent.

1. The Problem of Model Training on Customer Data
When a supplier uses a customer’s data to train or refine AI models, several adverse consequences may follow. The information embedded in the customer’s data may become encoded in the model’s parameters, creating a risk that the model will subsequently generate outputs that reflect, reproduce, or are influenced by the customer’s proprietary information. This “data leakage” risk is not theoretical—researchers have repeatedly demonstrated that large language models and other AI systems can memorize and regurgitate training data under certain conditions.
Moreover, when a supplier trains its models on data from multiple customers, the resulting model improvements benefit all users of the supplier’s platform, including the customer’s direct competitors. In effect, the customer would be subsidizing the competitive advantage of its rivals by contributing its proprietary data to a shared model. This dynamic is particularly acute in industries where operational data, financial strategies, research outputs, and engineering specifications carry enormous competitive value—energy, pharmaceuticals, financial services, defense, and advanced manufacturing, among others.
The distinction between enterprise-tier and consumer-tier AI services is critical here. With an enterprise account, many AI providers contractually guarantee that customer data is not used to train their models. With a personal or free-tier account, no such guarantee exists, and the provider may improve its models by training on the data submitted through those accounts. The interfaces may look identical, but the contractual protections are fundamentally different.
2. Recommended Contractual Framework for AI Training Restrictions
Enterprises should insist on a comprehensive prohibition against the use of their data for AI training purposes. The recommended contractual architecture includes the following tiers of protection:
Tier 1: Absolute Prohibition on Model Training. The supplier’s agreement should contain a clear, unconditional prohibition on the use of Customer Data to train, fine-tune, validate, test, or otherwise improve any AI model, machine learning algorithm, or automated system, whether the supplier’s own or a third party’s. This prohibition should apply regardless of whether the data is anonymized, aggregated, or de-identified.
Tier 2: Prohibition on Derivative Use. The supplier should be prohibited from creating derivative works, compilations, aggregations, or analyses based on Customer Data that could be used to enhance models or services provided to other customers. This provision addresses the common supplier practice of “anonymizing” data and claiming that the resulting derivatives are no longer customer data.
Tier 3: Technical Safeguards. The agreement should require the supplier to implement technical measures to enforce data segregation, including logical separation of Customer Data from other customers’ data, access controls that prevent unauthorized personnel from accessing Customer Data, and audit trails that document all access to and processing of Customer Data.
Tier 4: Contractual Flow-Down. The supplier should be required to impose equivalent restrictions on all subcontractors, subprocessors, and third-party service providers that may access or process Customer Data.
For mid-market and smaller enterprises, the challenge is often that major suppliers present their standard terms on a take-it-or-leave-it basis. Even in these situations, organizations should document their data-usage expectations clearly and seek at minimum a representation from the supplier that enterprise-tier data handling applies, including a commitment that data will not be used for model training.
D. Data Segregation and Technical Controls
Beyond the contractual prohibitions discussed above, enterprises should require their digital suppliers to implement robust technical controls to ensure data isolation. The market has converged around several key technical requirements:
• Logical Segregation. Customer Data must be logically segregated from the data of other customers at all times, whether in transit, at rest, or during processing. Multi-tenant architectures are acceptable only if the supplier can demonstrate that its isolation mechanisms prevent any cross-tenant data exposure.
• Encryption Standards. All Customer Data must be encrypted using industry-standard protocols (currently AES-256 or equivalent) both at rest and in transit. Encryption keys must be managed in accordance with recognized key management standards, and the customer should retain the option to manage its own encryption keys where feasible.
• Access Controls. The supplier must implement role-based access controls that limit access to Customer Data to only those personnel who require such access for the performance of services. All access must be logged and auditable.
• Data Localization. Enterprises operating across multiple jurisdictions should retain the right to specify the geographic locations where their data may be stored and processed, particularly given the complex regulatory landscape governing cross-border data transfers under the GDPR, the UK Data Protection Act, and other applicable data protection regimes.
• Data Deletion and Return. Upon termination or expiration of the agreement, the supplier must return all Customer Data in a commercially usable format and certify the complete and irrecoverable destruction of all copies, including backup copies, within a specified timeframe. This provision is essential to prevent vendor lock-in and to protect the organization’s ability to migrate to alternative providers.
E. Confidentiality Architecture
The confidentiality provisions in supplier agreements should be structured to provide comprehensive protection across the full lifecycle of the engagement. The following elements are essential:
• Broad Definition of Confidential Information. The definition should encompass all information disclosed by the customer, whether marked as confidential or not, and should include all Customer Data, business strategies, financial information, technical specifications, and operational data.
• Survival Period. Confidentiality obligations should survive the termination of the agreement for a minimum of five years, and indefinitely for trade secrets. Many supplier-proposed agreements limit the survival period to two or three years, which is inadequate for protecting long-lived proprietary information.
• Standard of Care. The supplier should be required to protect the customer’s confidential information using at least the same degree of care it uses to protect its own most sensitive confidential information, but in no event less than a reasonable standard of care.
• Permitted Disclosures. Any exceptions to confidentiality (such as disclosures required by law) should be narrowly drawn and should require the supplier to provide advance notice to the customer and cooperate with the customer’s efforts to obtain protective treatment of the disclosed information.
F. Regulatory Drivers and Market Trends
The norms described above are reinforced by a rapidly evolving regulatory landscape that affects organizations of all sizes. The EU AI Act, which entered its phased implementation period in 2025, imposes transparency and data-quality requirements on providers of AI systems, with particularly stringent obligations for high-risk applications. The Colorado AI Act, effective February 1, 2026, requires impact assessments for high-risk AI systems and establishes disclosure obligations that have downstream effects on vendor contracting. The Texas Responsible AI Governance Act (TRAIGA), effective January 1, 2026, establishes a comprehensive framework that bans certain harmful AI uses and requires disclosures when AI systems interact with consumers.
In the United States, the absence of a comprehensive federal AI statute has not prevented regulatory activity. The Federal Trade Commission, the Equal Employment Opportunity Commission, and the Consumer Financial Protection Bureau have all signaled active enforcement against AI systems that produce discriminatory outcomes or misleading practices. The NIST AI Risk Management Framework, while voluntary, has become a de facto reference point for governance and vendor assessments across both public and private sectors.
At the federal contracting level, the GSA proposed a significant new contract clause in March 2026 (GSAR 552.239-7001) that would grant the government expansive ownership of all data inputs and outputs and prohibit contractors from using government data for training or improving AI models. While this clause applies specifically to government contracts, it reflects and reinforces the broader market trend toward strict data-usage restrictions in enterprise technology agreements—and its principles are increasingly being adopted by private-sector buyers as well.
Organizations of all sizes should leverage these regulatory developments in their negotiations with suppliers, framing their data-protection requirements not merely as commercial preferences but as alignment with evolving legal obligations and industry best practices. Even smaller companies can point to the direction of regulatory travel to strengthen their negotiating position.
III. Standard of Services: Beyond “As Is” and “As Available”
A. The Problem with “As Is” Service Delivery
It is common for software and digital service providers to deliver their services on an “as is” and “as available” basis. These disclaimers, which are standard features of consumer and even many enterprise software agreements, effectively disclaim all warranties regarding the quality, reliability, accuracy, and fitness of the services for any particular purpose. For any organization whose operations depend on digital services—from a global manufacturer running its supply chain on cloud-based ERP to a mid-market retailer relying on SaaS for e-commerce—accepting an “as is” service standard is fundamentally incompatible with reasonable risk management.
When a supplier provides services “as is,” it is in effect telling its customer: “We make no promises about whether this service will work, how well it will work, or whether it will be available when you need it.” For companies where system failures can have safety, environmental, regulatory, financial, or reputational consequences, this position is untenable. Even for smaller organizations, the cascading impact of a prolonged outage or data-quality failure can be existential.
Enterprises should systematically reject “as is” and “as available” disclaimers in their technology agreements and instead negotiate for specific, measurable, and enforceable service-level commitments. The following subsections outline the principal categories of service-level commitments that organizations can reasonably propose and that suppliers in the market typically agree to.
B. Uptime and Availability Commitments
The most fundamental service-level commitment is a guaranteed level of system availability. Market standards for enterprise SaaS and cloud services have converged around a range of availability commitments:
| Availability Tier | Uptime Guarantee | Max Annual Downtime | Typical Use Case |
| --- | --- | --- | --- |
| Standard | 99.5% | ~43.8 hours | Non-critical business tools |
| Enhanced | 99.9% | ~8.77 hours | Core enterprise applications |
| Premium | 99.95% | ~4.38 hours | Mission-critical operations |
| Ultra-High | 99.99% | ~52.6 minutes | Safety-critical systems, trading |
For most enterprise applications, organizations should target a minimum of 99.9% availability, with higher commitments for safety-critical and operations-critical systems. The appropriate tier depends on the business impact of downtime for the specific service. Major cloud providers, including Microsoft, AWS, Google Cloud, and Salesforce, publish Service Level Agreements with specific uptime commitments and service credit frameworks. Smaller and mid-market suppliers may offer less aggressive uptime targets, but should still commit to measurable availability standards.
Key considerations for availability commitments include the measurement methodology (how is uptime calculated, and what constitutes “downtime” versus “scheduled maintenance”), the measurement window (monthly versus quarterly versus annual), and the exclusions (force majeure events, customer-caused outages, and scheduled maintenance windows). Organizations should insist on independent monitoring or access to the supplier’s monitoring data to verify compliance.
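The uptime percentages in the table above translate into downtime budgets by straightforward arithmetic, which is worth checking before agreeing to a tier. A minimal sketch (tier figures from the table; the annual window is illustrative, since real SLAs define their own measurement window and downtime exclusions):

```python
# Convert an SLA availability percentage into a maximum-downtime budget.
# Window length is illustrative; an actual SLA defines the measurement
# window (monthly vs. annual) and what counts as "downtime" vs. maintenance.

HOURS_PER_YEAR = 365.25 * 24  # 8766 hours, using the average year length

def max_downtime_hours(availability_pct: float,
                       window_hours: float = HOURS_PER_YEAR) -> float:
    """Downtime budget for a given availability commitment over a window."""
    return window_hours * (1 - availability_pct / 100)

for tier, pct in [("Standard", 99.5), ("Enhanced", 99.9),
                  ("Premium", 99.95), ("Ultra-High", 99.99)]:
    budget = max_downtime_hours(pct)
    print(f"{tier:10s} {pct}% -> {budget:.2f} hours/year "
          f"({budget * 60:.1f} minutes)")
```

Running the loop reproduces the table's downtime column: 99.5% allows roughly 43.8 hours of annual downtime, while 99.99% allows only about 52.6 minutes.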
C. Performance and Response-Time Commitments
Beyond mere availability, enterprises should negotiate for specific performance standards that address the quality of the service when it is available. These may include:
• Response Time. Maximum allowable latency for system responses, typically measured at the 95th or 99th percentile. For example, a commitment that 99% of API calls will return a response within 500 milliseconds.
• Throughput. Minimum transaction processing capacity, measured in transactions per second or concurrent users supported without performance degradation.
• Error Rate. Maximum allowable percentage of failed transactions or system errors, exclusive of errors caused by customer-side issues.
• Data Processing Time. For analytics, reporting, and data-pipeline services, maximum allowable processing time for defined data volumes and query types.
These metrics should be defined with sufficient specificity to be objectively measurable and should be tied to remedies for non-compliance.
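A percentile-based response-time commitment is objectively measurable only once the percentile method itself is pinned down in the agreement. A hypothetical sketch using the nearest-rank method, with the 500-millisecond / 99% figures mirroring the example above (the function names and sample data are illustrative, not from any specific SLA):

```python
import math

def percentile_ms(samples: list[float], pct: float) -> float:
    """Nearest-rank percentile: the value at or below which pct% of samples fall."""
    ordered = sorted(samples)
    rank = math.ceil(pct / 100 * len(ordered))  # 1-based rank
    return ordered[rank - 1]

def meets_sla(samples: list[float], pct: float = 99.0,
              threshold_ms: float = 500.0) -> bool:
    """Check a p99-style latency commitment against measured samples."""
    return percentile_ms(samples, pct) <= threshold_ms

# Illustrative month of API calls: 98 fast responses plus two slow outliers.
# A p99 <= 500 ms commitment tolerates the worst 1% of calls exceeding it.
latencies = [120.0] * 98 + [480.0, 600.0]
print(percentile_ms(latencies, 99.0))  # 480.0
print(meets_sla(latencies))            # True
```

The design point for negotiators: the same data can pass or fail depending on whether the SLA uses nearest-rank, linear interpolation, or a mean over fixed intervals, so the contract should name the method.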
D. Incident Response and Resolution Commitments
A robust SLA framework must include specific commitments regarding the supplier’s response to service disruptions and incidents. The following table illustrates a typical incident-severity classification and corresponding response and resolution targets:
| Severity | Description | Response Time | Resolution Target |
| --- | --- | --- | --- |
| Critical (P1) | Complete service outage or data loss affecting production operations | 15 minutes | 4 hours |
| High (P2) | Major functionality impaired; no workaround available | 1 hour | 8 hours |
| Medium (P3) | Partial functionality impaired; workaround available | 4 hours | 2 business days |
| Low (P4) | Minor issue; cosmetic defect or enhancement request | 1 business day | Next scheduled release |
These severity classifications and response targets are widely accepted in enterprise technology agreements and are generally achievable by competent service providers. Organizations should insist on clear definitions for each severity level, escalation procedures for incidents that are not resolved within the target timeframes, and root-cause analysis reporting for all Critical and High severity incidents.
E. Service Credits and Financial Remedies
Service-level commitments are only as meaningful as the remedies available when they are breached. The standard market remedy for SLA failures is a service credit—a reduction in the fees owed by the customer for the affected period. Organizations should negotiate for a credit structure that provides meaningful financial incentive for the supplier to maintain compliance.
A typical service credit framework operates on a tiered basis:
• Availability below 99.9% but at or above 99.5%: 10% credit on monthly fees for the affected service.
• Availability below 99.5% but at or above 99.0%: 25% credit on monthly fees.
• Availability below 99.0%: 50% credit on monthly fees, plus the right to terminate without penalty if the failure persists for two consecutive months.
Critically, organizations should negotiate for service credits that are not the exclusive remedy for SLA failures. Many supplier-proposed agreements include language stating that service credits constitute the customer’s “sole and exclusive remedy” for availability failures. This language should be resisted, as it would preclude the customer from pursuing other remedies (including damages) for significant service disruptions that cause material harm to the organization’s operations.
F. Warranties That Replace “As Is”
In addition to the specific service-level commitments described above, enterprises should negotiate for affirmative warranties that replace the “as is” disclaimer. These typically include:
• Performance Warranty. The supplier warrants that the services will perform materially in accordance with the documentation and specifications provided by the supplier.
• Conformance Warranty. The supplier warrants that the services will conform to the functional requirements agreed upon by the parties during the procurement process.
• Professional Standards Warranty. The supplier warrants that all professional services will be performed by qualified personnel in a workmanlike manner consistent with generally accepted industry standards.
• Compliance Warranty. The supplier warrants that the services will comply with all applicable laws, regulations, and industry standards, including data protection and cybersecurity requirements.
• Non-Infringement Warranty. The supplier warrants that the services do not infringe upon any third party’s intellectual property rights.
• Malware Warranty. The supplier warrants that the services and all deliverables will be free from viruses, malware, and other malicious code.
These warranties are well-established in enterprise technology procurement and are routinely accepted by reputable suppliers. Suppliers that refuse to provide any warranties beyond an “as is” disclaimer should be viewed with significant caution, as such refusal may indicate a lack of confidence in their own service quality.
IV. Indemnity Language: Standard Provisions, Non-Standard Provisions, and the Interplay with Liability Limitations
A. The Role of Indemnification in Technology Agreements
Indemnification clauses are among the most heavily negotiated provisions in any technology agreement, and for good reason. An indemnity is, at its core, a contractual promise by one party to bear the financial consequences of specified risks. In the context of technology agreements, indemnification provisions serve as a critical mechanism for allocating the risk of third-party claims—including intellectual property infringement claims, data breach claims, and claims arising from the supplier’s negligence or misconduct—between the customer and the supplier.
Understanding the distinction between standard and non-standard indemnities, and the interplay between indemnification obligations and limitation-of-liability clauses, is essential for any enterprise seeking to protect its interests in technology procurement. This section provides guidance applicable to organizations of all sizes, from global corporations negotiating bespoke enterprise agreements to mid-market firms reviewing a supplier’s standard terms.
B. Standard Indemnification Provisions
The following categories of indemnification are considered standard in enterprise technology agreements and should be present in every agreement an organization enters into with a digital supplier:
1. Intellectual Property Infringement Indemnity
This is the most universally accepted supplier indemnity and is considered non-negotiable in the market. Under this provision, the supplier agrees to defend, indemnify, and hold harmless the customer against any third-party claim alleging that the supplier’s software, platform, or services infringe a third party’s patent, copyright, trademark, or trade secret rights.
Key elements of a well-drafted IP indemnity include:
• Duty to Defend. The supplier assumes the obligation to retain counsel and defend the customer against the infringement claim, at the supplier’s expense.
• Duty to Indemnify. The supplier agrees to pay all damages, settlements, and costs (including reasonable attorneys’ fees) arising from the claim.
• Mitigation Options. The agreement should specify the supplier’s remedial options if an infringement claim is sustained or reasonably likely to be sustained, including obtaining a license, modifying the infringing elements, or providing a functionally equivalent non-infringing replacement.
• Exclusions. The supplier’s indemnity obligation is typically excluded for infringement arising solely from the customer’s unauthorized modification of the services, the customer’s combination of the services with non-supplier products (unless such combination was directed or anticipated by the supplier), or the customer’s continued use of a prior version after a non-infringing update was made available.
IP infringement indemnities are widely accepted as operating outside the general limitation-of-liability cap. In many enterprise agreements—including, by way of example, Microsoft’s Enterprise Agreement—the IP indemnity is uncapped. Organizations of all sizes should treat this as a baseline expectation.
2. Data Breach and Security Indemnity
This indemnity has become increasingly standard, though suppliers often resist it more vigorously than the IP indemnity. Under this provision, the supplier agrees to indemnify the customer against losses arising from the supplier’s failure to maintain adequate security controls, resulting in unauthorized access to, disclosure of, or loss of Customer Data.
The scope of a data breach indemnity should cover notification costs (including the cost of complying with breach notification laws in all applicable jurisdictions), credit monitoring and identity protection services for affected individuals, regulatory fines and penalties, forensic investigation costs, public relations and crisis management expenses, and damages awarded in third-party litigation arising from the breach.
In practice, data breach indemnities are often subject to a separate, elevated liability cap (commonly referred to as a “super cap”) rather than being fully uncapped. A typical super cap for data breach liability ranges from two to five times the annual fees paid under the agreement, depending on the volume and sensitivity of the data processed. For smaller contracts, the absolute dollar value of the super cap should still be meaningful relative to the potential exposure.
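The super-cap sizing described above is easy to sanity-check against the standard fee-based cap before agreeing to a multiplier; a sketch with hypothetical figures (the $250,000 subscription and 3x multiplier are illustrative only):

```python
def breach_caps(annual_fees: float, multiplier: float) -> dict[str, float]:
    """Compare a standard fee-based cap with a data-breach 'super cap'.

    multiplier is the negotiated super-cap factor; the article notes
    a common market range of 2x to 5x annual fees.
    """
    return {
        "standard_cap": annual_fees,            # typical trailing-12-month-fees cap
        "super_cap": annual_fees * multiplier,  # elevated cap for breach claims
    }

# Hypothetical: a $250,000/year subscription with a 3x breach super cap
caps = breach_caps(250_000.0, 3.0)
print(caps)  # {'standard_cap': 250000.0, 'super_cap': 750000.0}
```

Even at the top of the market range, the resulting dollar figure should be compared against a realistic breach-cost estimate for the data actually processed, not just against the subscription price.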
3. Compliance Indemnity
A standard compliance indemnity requires the supplier to indemnify the customer against claims arising from the supplier’s violation of applicable laws, regulations, or industry standards. This is particularly important in the technology context, where the regulatory landscape is evolving rapidly and non-compliance can expose the customer to significant liability even when the underlying violation was caused by the supplier.
C. Non-Standard Indemnification Provisions
The following categories of indemnification are considered non-standard—meaning that they are not universally present in enterprise technology agreements and may require more vigorous negotiation to obtain. Larger enterprises with greater negotiating leverage are more likely to obtain these provisions, but mid-market companies should be aware of them and pursue them where the risk profile warrants:
1. AI Output Indemnity
As AI-powered services become more prevalent, a new category of indemnity has emerged: the AI output indemnity. Under this provision, the supplier indemnifies the customer against claims arising from the AI system’s outputs, including claims of defamation, privacy violation, intellectual property infringement, or discrimination.
This is a rapidly evolving area. Several major technology providers have introduced AI-specific indemnities in recent months, though the scope and conditions of these indemnities vary significantly. Organizations should seek to obtain AI output indemnification wherever their suppliers’ services include AI-generated content, analysis, or recommendations that the organization may rely upon in its operations or pass through to its own customers.
2. Consequential Damages Indemnity
Most technology agreements contain a mutual exclusion of consequential, incidental, and special damages. A non-standard but increasingly sought-after provision would require the supplier to indemnify the customer for consequential damages arising from specified high-risk events, such as a major data breach or a prolonged service outage affecting critical operations.
Obtaining a consequential damages indemnity requires significant negotiating leverage, but it is not unprecedented in high-value enterprise engagements. The key is to limit the consequential damages exposure to specific, well-defined categories of loss that are foreseeable and directly related to the supplier’s core obligations.
3. Regulatory Investigation Indemnity
Under this provision, the supplier would indemnify the customer against the costs of responding to regulatory investigations or enforcement actions that arise from the supplier’s acts, omissions, or service failures. This is particularly relevant in the context of data protection, where an organization may face regulatory scrutiny as a data controller even when the underlying breach was caused by the supplier acting as a data processor.
4. Autonomous Agent Indemnity
As the market shifts toward agentic AI solutions—systems that can autonomously plan and execute multi-step tasks—the traditional SaaS contracting model is being tested. Where an AI agent executes actions autonomously on behalf of the customer, the supplier should indemnify the customer against third-party claims arising from the agent’s autonomous actions in breach of the established delegation of authority or policy guardrails. This is an area where contracting practices are evolving from the traditional SaaS model toward a hybrid approach incorporating elements of business process outsourcing (BPO) agreements, including outcome-based SLAs and broader indemnification provisions.
D. The Critical Interplay Between Indemnities and Limitation-of-Liability Clauses
The relationship between a supplier’s indemnification obligations and its limitation-of-liability clause is among the most consequential—and most frequently misunderstood—aspects of technology contract negotiation. Understanding this interplay is essential for any organization to ensure that its indemnity protections have genuine economic substance.
1. The Problem: Indemnities Rendered Hollow by Liability Caps
A common scenario illustrates the issue. A supplier provides a customer with a broad indemnity covering IP infringement and data breach claims. However, the same agreement contains a limitation-of-liability clause that caps the supplier’s total aggregate liability at the fees paid in the preceding twelve months. If the customer pays the supplier $100,000 annually and subsequently suffers a data breach that results in $5 million in damages, the indemnity’s apparent breadth is illusory—the liability cap limits the supplier’s actual exposure to $100,000, leaving the customer responsible for the remaining $4.9 million.
This dynamic is not hypothetical. Widely cited industry studies put the average cost of a data breach above $4.5 million, so a standard liability cap tied to annual fees would cover only a small fraction of the potential damages in many scenarios. The problem is particularly acute for smaller enterprises, where annual subscription fees are lower and the resulting liability cap is even less adequate relative to potential exposure.
2. The Solution: Carve-Outs and Super Caps
To ensure that indemnification obligations retain their economic substance, organizations should negotiate for specific carve-outs from the general liability cap. The standard market approach is a three-tier liability framework:
1. General Cap. A baseline liability cap, typically set at 12 to 24 months of fees paid, applies to all claims except those specifically carved out.
2. Super Cap. An elevated liability cap, typically set at two to five times the annual fees, applies to specified high-risk obligations such as data breach liability and confidentiality breaches.
3. Uncapped Obligations. Certain obligations are excluded from the liability cap entirely and subject to unlimited liability. In most enterprise technology agreements, uncapped obligations include the IP infringement indemnity, the customer’s payment obligations, and liabilities arising from willful misconduct or gross negligence.
This three-tier structure is well established in the market and accepted by most major technology providers. Courts have enforced carve-outs that place indemnification obligations outside general liability caps, particularly in high-value commercial transactions, and have recognized separate or uncapped treatment for data breach liabilities in cases involving significant privacy failures.
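The economics of the three-tier framework can be made concrete with a short sketch. The figures, tier multipliers, and claim-to-tier mapping below are illustrative assumptions (reusing the $100,000-fee, $5 million-breach example above), not market benchmarks:

```python
# Illustrative sketch of the three-tier liability framework described above.
# All figures and tier assignments are hypothetical examples, not market data.

ANNUAL_FEES = 100_000  # hypothetical annual subscription fees

# Tier definitions as multipliers of annual fees; None means uncapped.
TIERS = {
    "general": 1.0,      # general cap: 12 months of fees
    "super": 3.0,        # super cap: e.g., 3x annual fees
    "uncapped": None,    # e.g., IP infringement indemnity, willful misconduct
}

# Hypothetical mapping of claim categories to liability tiers.
CLAIM_TIER = {
    "service_failure": "general",
    "data_breach": "super",
    "ip_infringement": "uncapped",
}

def recoverable(claim_category: str, damages: float) -> float:
    """Return the supplier's maximum exposure for a claim under this framework."""
    multiplier = TIERS[CLAIM_TIER[claim_category]]
    if multiplier is None:
        return damages  # uncapped obligations: full damages recoverable
    return min(damages, multiplier * ANNUAL_FEES)

# The $5M data breach from the example: a flat 12-month general cap recovers
# only $100k; a 3x super cap recovers $300k; an uncapped carve-out, the full amount.
print(recoverable("service_failure", 5_000_000))   # → 100000.0
print(recoverable("data_breach", 5_000_000))       # → 300000.0
print(recoverable("ip_infringement", 5_000_000))   # → 5000000
```

The sketch makes the negotiating stakes visible: moving a claim category from the general cap to the super cap, or out of the cap entirely, changes recoverable damages by an order of magnitude or more.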
3. Practical Impact in Litigation
The interplay between indemnities and liability caps has significant practical implications in a litigation context:
• Scope of the Indemnity. The court or arbitral tribunal would first determine whether the claim falls within the scope of the indemnity provision. Well-drafted indemnity provisions with clear trigger events and defined categories of covered losses will be more readily enforceable than vague or ambiguous provisions.
• Applicability of the Liability Cap. The tribunal would then determine whether the liability cap applies to the indemnity claim. If the agreement contains a clear carve-out stating that indemnification obligations are excluded from the general liability cap, the cap will not limit the supplier’s exposure. If the agreement is silent on this point, or if the carve-out is ambiguous, the result is unpredictable and may depend on the governing law and the tribunal’s interpretation of the parties’ intent.
• Consequential Damages Exclusion. Even if the indemnity survives the liability cap, a separate consequential damages exclusion may still limit the types of damages recoverable under the indemnity. Organizations should ensure that the consequential damages exclusion contains a clear carve-out for indemnification obligations.
• Duty to Mitigate. The supplier may argue that the customer failed to mitigate its damages. Organizations should maintain evidence of their mitigation efforts and should consider including a contractual provision specifying the parties’ respective mitigation obligations.
• Insurance. The availability and scope of the supplier’s insurance coverage (including cyber liability, errors and omissions, and general commercial liability insurance) may affect the practical enforceability of the indemnity. Organizations should require their suppliers to maintain specified minimum insurance coverages and to provide certificates of insurance upon request.
E. Summary Table: Standard vs. Non-Standard Indemnities
| Indemnity Category | Classification | Typical Liability Treatment |
| --- | --- | --- |
| IP Infringement | Standard | Uncapped |
| Data Breach / Security Failure | Standard (emerging) | Super Cap (2–5x annual fees) |
| Regulatory Compliance | Standard | General Cap or Super Cap |
| AI Output Liability | Non-Standard (emerging) | Varies; often General Cap |
| Consequential Damages | Non-Standard | Negotiated case-by-case |
| Regulatory Investigation Costs | Non-Standard | Super Cap or General Cap |
| Autonomous Agent Actions | Non-Standard (emerging) | Negotiated; BPO-style terms |
V. Selecting and Working with Technology-Focused Legal Counsel
A. Why Specialized Counsel Matters
Technology contracting has become sufficiently complex that generalist legal counsel—however competent in other areas—may lack the specialized knowledge needed to identify and address the full spectrum of risks in modern digital supplier agreements. The intersection of intellectual property law, data protection regulation, cybersecurity standards, AI governance, and commercial contracting requires a level of technical fluency and market awareness that develops only through sustained, focused practice.
Organizations should seek legal counsel with demonstrated experience in the following areas when evaluating or negotiating technology agreements:
• Enterprise Software and SaaS Agreements. Counsel should be familiar with the standard contracting positions of major technology providers and should understand the typical range of negotiated outcomes across the market.
• Data Protection and Privacy. Counsel should have working knowledge of the GDPR, U.S. state privacy laws, sector-specific data regulations, and the practical implications of cross-border data transfers.
• Artificial Intelligence and Machine Learning. Counsel should understand the technical architecture of AI systems, the legal risks associated with model training and AI-generated outputs, and the rapidly evolving regulatory landscape.
• Intellectual Property. Counsel should be able to assess IP infringement risk in technology agreements, negotiate appropriate indemnification provisions, and understand the implications of open-source licensing, patent exposure, and trade secret protection.
• Cybersecurity and Incident Response. Counsel should understand security standards, breach notification obligations, and the practical mechanics of incident response.
B. Key Questions to Ask Prospective Counsel
When selecting counsel for technology contracting matters, organizations should consider the following questions:
1. Does the firm have experience negotiating with the specific providers the organization uses or is considering (e.g., AWS, Microsoft, Salesforce, Oracle, Google Cloud, SAP, ServiceNow)?
2. Can the firm demonstrate familiarity with the provider’s standard terms, known areas of flexibility, and recent changes to its enterprise agreement templates?
3. Does the firm have attorneys who combine legal expertise with genuine technical fluency—understanding the underlying technologies well enough to identify risks that a purely commercial lawyer might miss?
4. Does the firm maintain current market intelligence on negotiated positions, enabling it to advise on what concessions are achievable?
5. Can the firm scale its services to match the organization’s needs, from a focused single-agreement review to a comprehensive vendor management program?
6. Does the firm have experience with AI-specific legal issues, including AI addenda, data processing agreements for AI services, and AI governance frameworks?
C. Types of Advisory Services to Expect
Organizations engaging technology-focused legal counsel should expect access to the following categories of advisory services:
• Advice and Consultation. Strategic guidance on structuring and negotiating commercial and technology agreements, including risk assessment, negotiation strategy, and benchmarking against market terms.
• Clause Review and Drafting. Detailed review and redrafting of specific contractual provisions, including data-usage restrictions, AI training prohibitions, SLA frameworks, indemnification clauses, and liability limitations.
• Regulatory Monitoring. Up-to-date notifications regarding developments in AI legislation and regulation, data protection law, and other legal domains that affect technology contracting.
• AI Transaction Support. Assistance in structuring and negotiating transactions and agreements relating to AI, including vendor procurement, data licensing, joint ventures, and AI-related M&A.
• Corporate AI Strategy. Support in developing and refining corporate governance frameworks, policies, and strategies relating to AI adoption and risk management.
D. Negotiating with Major Software Providers
Negotiating enterprise agreements with large technology vendors presents unique challenges. These providers typically present highly standardized agreements, supported by large legal teams, with well-established positions on data usage, liability, and service levels. The following practical guidance applies to organizations of all sizes:
1. Understanding the Provider’s Standard Position
Before entering negotiations, organizations should obtain and review the provider’s standard enterprise agreement, published SLAs, data processing addenda, and any AI-specific terms. Understanding the baseline allows the customer to focus its negotiating efforts on the provisions that matter most.
2. Prioritizing Negotiation Objectives
No organization will obtain every concession it seeks from a major technology provider. Effective negotiation requires clear prioritization. The following hierarchy reflects common enterprise priorities:
1. Data protection and AI training restrictions — non-negotiable for most enterprises.
2. Indemnification carve-outs from liability caps — essential for meaningful risk allocation.
3. SLA commitments with non-exclusive remedies — important for operational assurance.
4. Warranty protections — important but often achievable through documentation references.
5. Termination flexibility and data portability — critical for avoiding vendor lock-in.
3. Leveraging Competitive Dynamics
Organizations with the ability to credibly evaluate alternative providers have greater negotiating leverage. Even where an organization has selected its preferred provider, maintaining awareness of competitive alternatives and communicating that awareness to the provider’s sales team can improve negotiating outcomes. Multi-cloud and multi-vendor strategies, where feasible, provide natural leverage.
4. Addressing AI-Specific Terms
As major providers embed AI capabilities throughout their platforms, organizations must pay particular attention to AI-specific terms that may be buried in standard agreements or introduced through supplemental addenda. These terms may govern how the provider uses customer data in connection with AI features, the scope of any AI output indemnity, and the allocation of liability for AI-generated content or decisions. Organizations should proactively raise these issues in negotiations rather than accepting default terms.
5. The Small and Mid-Market Challenge
Smaller organizations often face take-it-or-leave-it dynamics with major providers. Even in these situations, several strategies can improve outcomes:
• Ask for the enterprise-tier version of the agreement rather than the standard click-through terms.
• Request written confirmation of specific data-handling practices, even if the standard agreement is not modified.
• Negotiate through channel partners or resellers, who may have greater flexibility.
• Pool purchasing power with peer organizations or industry consortia to negotiate collective terms.
VI. AI-Specific Legal Considerations
A. The Imperative for AI Legal Preparedness
Artificial intelligence is no longer a speculative technology—it is a foundational element of enterprise operations, and it is the subject of a rapidly proliferating body of legislation, regulation, and judicial decision-making. Organizations that deploy AI across their operations—whether in customer service, supply chain optimization, financial analysis, research and development, or any other function—face legal risks that are both significant and multifaceted. These risks do not respect company size: a small company deploying an AI-powered hiring tool faces the same anti-discrimination compliance obligations as a multinational corporation.
B. The Emerging Regulatory Landscape
1. European Union
• EU AI Act. The EU AI Act entered its phased implementation period in 2025, with obligations for general-purpose AI models taking effect that year. Providers of foundation models must publish detailed summaries of training data. High-risk AI systems—including those used in employment, credit decisions, education, and healthcare—face rigorous requirements for data quality, transparency, human oversight, and conformity assessment. Organizations deploying AI within the EU or affecting EU data subjects must determine whether their systems qualify as high-risk and ensure compliance with the applicable tier of obligations.
• GDPR AI Enforcement. GDPR Article 22 restricts decisions based solely on automated processing that produce legal or similarly significant effects. Organizations using AI for automated decision-making must ensure they have a lawful basis, provide meaningful information about the logic involved, and offer the right to human intervention.
• EU Data Act. The EU Data Act introduces data-sharing obligations that may affect AI data pipelines, particularly in IoT and connected-device contexts.
2. United States
• Federal Agency Enforcement. The FTC, EEOC, CFPB, and SEC have all signaled active enforcement against AI systems that produce discriminatory outcomes, unfair trade practices, or inadequate disclosures. The NIST AI Risk Management Framework has become a de facto reference point for governance and vendor assessments.
• State AI Legislation. The Colorado AI Act (effective February 1, 2026) requires impact assessments for high-risk AI systems in areas including housing, employment, education, healthcare, insurance, and lending. The Texas TRAIGA (effective January 1, 2026) bans certain harmful AI uses and requires disclosures when AI systems interact with consumers. California’s SB-942 and AB 2013 (both effective January 1, 2026) impose transparency requirements for AI-generated content and training data disclosure. The Utah AI Policy Act requires disclosure when consumers interact with generative AI.
• Federal Contracting. GSA’s proposed GSAR 552.239-7001 clause (March 2026) would impose sweeping data-ownership and AI-training restrictions on government contractors. Its principles are increasingly being adopted by private-sector buyers as well.
3. United Kingdom and Other Jurisdictions
• United Kingdom. The UK has adopted a sector-specific approach to AI regulation, with guidance from the ICO, FCA, CMA, and Ofcom. Organizations with UK operations should monitor developments in UK data protection law as it diverges from EU GDPR requirements.
• Asia-Pacific. AI regulatory developments are accelerating in China, Japan, South Korea, Singapore, and Australia, each with distinctive approaches to governance.
• Middle East. The UAE, Saudi Arabia, and Qatar are developing AI-specific regulatory frameworks relevant to organizations with operations in the region.
C. AI Copyright and Training Data Litigation
Pending litigation involving major content creators and AI developers is entering decisive phases. Courts are beginning to signal whether training AI models on copyrighted data constitutes fair use. Adverse rulings against AI developers could increase pressure for licensing regimes or other significant remedial measures, including potential limits on model deployment. Organizations should audit their use of generative AI tools to distinguish between input risks (from data used to train the model) and output risks (from generating content that infringes existing works).
D. AI Agents and Autonomous Action
AI has evolved from chatbots and co-pilots to autonomous agents capable of executing code, signing contracts, and booking transactions. This evolution raises novel questions of agency law. If an AI agent executes a disadvantageous contract, is the user bound by it? Courts are scrutinizing whether users or developers bear liability for autonomous errors. Organizations should review vendor contracts for AI agents to ensure indemnification clauses specifically address autonomous actions and errors resulting in financial loss.
E. Building an AI Governance Framework
Organizations at every scale should consider establishing a formal AI governance structure. The essential components include:
• AI Inventory. A comprehensive catalogue of all AI systems in use across the organization, including both internally developed and third-party solutions.
• Risk Classification. A methodology for classifying AI systems by risk level, aligned with applicable regulatory frameworks (such as the EU AI Act’s risk tiers).
• Responsible AI Policies. Formal policies addressing bias testing, transparency, human oversight, and ethical boundaries for AI use.
• Risk Appetite Statement. A board-level statement defining the boundaries of acceptable AI use within the organization, calibrated to the company’s risk tolerance and regulatory environment.
• Vendor Assessment Criteria. Standardized criteria for evaluating AI vendors, covering security, data residency, model transparency, bias testing, and regulatory compliance.
• Incident Response Plans. AI-specific incident response procedures addressing model failures, biased outputs, data leakage, and regulatory inquiries.
• Training and Awareness. Organization-wide training programs ensuring that employees understand the risks associated with AI tools and the organization’s policies for their use.
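A minimal sketch of the first two components, the AI inventory and risk classification, is shown below. The field names, tier labels (loosely aligned with the EU AI Act's risk tiers), and sample entries are all illustrative assumptions, not a compliance tool:

```python
# Minimal sketch of an AI inventory with risk classification.
# Fields, tier labels, and entries are illustrative assumptions only.

from dataclasses import dataclass, field

RISK_TIERS = ("prohibited", "high", "limited", "minimal")  # EU AI Act-style tiers

@dataclass
class AISystem:
    name: str
    vendor: str                 # "internal" for internally developed systems
    use_case: str
    risk_tier: str
    data_categories: list[str] = field(default_factory=list)

    def __post_init__(self):
        # Reject entries that do not map onto a recognized risk tier.
        if self.risk_tier not in RISK_TIERS:
            raise ValueError(f"unknown risk tier: {self.risk_tier}")

# Hypothetical inventory entries.
inventory = [
    AISystem("resume-screener", "ExampleVendor", "hiring", "high",
             ["applicant PII"]),
    AISystem("chat-assistant", "internal", "customer service", "limited"),
]

# High-risk systems are the ones triggering impact assessments and human oversight.
high_risk = [s.name for s in inventory if s.risk_tier == "high"]
print(high_risk)  # → ['resume-screener']
```

Even a structure this simple forces the questions that matter for governance: who supplies the system, what data it touches, and which regulatory tier it falls into.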
F. AI Intellectual Property Strategy
Organizations should develop proactive strategies for protecting AI-related intellectual property. Key considerations include:
• Patent Protection. Evaluating whether AI innovations developed internally are patentable, and building a patent portfolio that protects key innovations.
• Trade Secret Protection. Implementing safeguards for proprietary models, training data, fine-tuning methodologies, and prompt engineering techniques.
• Defensive Strategies. Assessing exposure to AI-related IP claims from third parties and ensuring that vendor indemnification provisions adequately address this risk.
• Open-Source Compliance. Auditing the use of open-source AI components to ensure compliance with applicable license terms and to avoid unintended IP exposure.
VII. Conclusion and Summary Recommendations
The legal landscape governing enterprise technology procurement has undergone a fundamental transformation in recent years. The convergence of artificial intelligence, cloud computing, and increasingly stringent data protection regulations has created a contracting environment that demands greater precision, broader protection, and more sophisticated risk allocation than ever before.
These challenges are not confined to the world’s largest corporations. Any organization that relies on digital services—and in 2026, that is virtually every organization—faces a version of the same risks. The specifics of negotiating leverage, contract value, and risk exposure differ by company size and industry, but the foundational principles of data protection, service quality, and risk allocation apply universally.
Based on the analysis presented in this article, the following summary recommendations are offered:
Data Protection
1. Insist on express, unconditional prohibitions against the use of Customer Data for AI model training, fine-tuning, or improvement, regardless of anonymization or aggregation.
2. Require logical data segregation, industry-standard encryption, role-based access controls, and comprehensive audit trails in all supplier agreements.
3. Negotiate data localization rights and robust data return/destruction obligations upon contract termination.
4. Ensure that confidentiality obligations survive for a minimum of five years, and indefinitely for trade secrets.
Service Standards
1. Systematically reject “as is” and “as available” disclaimers in favor of specific, measurable SLA commitments.
2. Target a minimum of 99.9% availability for core enterprise applications, with higher commitments for safety-critical and mission-critical systems.
3. Negotiate for tiered service credit frameworks with meaningful financial remedies, and ensure that service credits are not the sole and exclusive remedy for SLA failures.
4. Require affirmative warranties covering performance, conformance, professional standards, compliance, non-infringement, and malware.
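The availability targets above translate into concrete monthly downtime budgets. The sketch below (illustrative targets, assuming a 30-day month) makes the arithmetic explicit: a 99.9% commitment allows roughly 43 minutes of downtime per month, while 99.99% allows under five:

```python
# Downtime budgets implied by common availability targets, assuming a
# 30-day (43,200-minute) month. Targets shown are illustrative.

def allowed_downtime_minutes(availability_pct: float,
                             month_minutes: int = 43_200) -> float:
    """Minutes of permissible downtime per month at a given availability target."""
    return month_minutes * (1 - availability_pct / 100)

for target in (99.0, 99.9, 99.95, 99.99):
    print(f"{target}% availability -> {allowed_downtime_minutes(target):.1f} min/month")
```

Running this conversion before negotiating an SLA grounds the discussion: the difference between 99.9% and 99.0% is not a tenth of a point but nearly seven hours of permitted outage per month.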
Indemnification and Liability
1. Insist on uncapped IP infringement indemnities with clear duty-to-defend and duty-to-indemnify obligations.
2. Negotiate for data breach indemnities subject to elevated super caps (two to five times annual fees).
3. Pursue AI output indemnities and autonomous agent indemnities where applicable.
4. Implement a three-tier liability framework (general cap, super cap, uncapped obligations) with clear carve-outs ensuring that indemnification obligations are not rendered hollow by general liability limitations.
5. Ensure that the consequential damages exclusion contains explicit carve-outs for indemnification obligations.
AI Governance and Compliance
1. Establish a comprehensive AI legislative monitoring program covering the EU AI Act, U.S. state and federal AI legislation, UK AI regulation, and developments in other jurisdictions relevant to the organization’s operations.
2. Build a formal AI governance framework, including an AI inventory, risk classification methodology, responsible AI policies, and vendor assessment criteria.
3. Develop AI-specific contractual addenda, data processing agreements, and acceptable use policies for deployment across all supplier relationships.
4. Proactively address AI intellectual property strategy, including patent protection, trade secret safeguards, and open-source compliance.
Appendix A: Model Contractual Language
The following model provisions are provided as starting points for negotiation. Each provision should be adapted to the specific context of the engagement, the organization’s risk profile, and the applicable governing law. These provisions are designed to be usable by enterprises of all sizes, though larger organizations may seek to expand their scope while smaller organizations may need to accept certain modifications based on their bargaining position.
A.1 AI Training Prohibition Clause
"Supplier shall not, and shall ensure that its subcontractors, affiliates, and service providers do not, use Customer Data, in whole or in part, directly or indirectly, to train, fine-tune, validate, test, improve, or otherwise develop any artificial intelligence model, machine learning algorithm, large language model, neural network, or other automated system, whether owned or operated by Supplier, its affiliates, or any third party. This prohibition applies regardless of whether Customer Data has been anonymized, de-identified, aggregated, or otherwise modified. Any violation of this Section shall constitute a material breach of this Agreement."
A.2 Data Segregation Clause
"Supplier shall at all times logically segregate Customer Data from the data of Supplier’s other customers. Customer Data shall be encrypted using AES-256 encryption (or equivalent) both at rest and in transit. Supplier shall implement role-based access controls limiting access to Customer Data exclusively to those personnel who require such access for the performance of Services. All access to Customer Data shall be logged in an immutable audit trail that is available for Customer’s review upon reasonable request."
A.3 Indemnification with Liability Carve-Outs
"Supplier shall defend, indemnify, and hold harmless Customer and its affiliates, officers, directors, employees, and agents from and against any and all third-party claims, damages, losses, liabilities, costs, and expenses (including reasonable attorneys’ fees) arising from or related to: (a) any actual or alleged infringement of any patent, copyright, trademark, trade secret, or other intellectual property right by the Services; (b) any unauthorized access to, disclosure of, or loss of Customer Data resulting from Supplier’s breach of its security obligations; and (c) Supplier’s violation of applicable laws or regulations. The obligations set forth in this Section are not subject to, and shall not be limited by, the general limitation-of-liability provisions of this Agreement."
A.4 Service Level Commitment with Remedy Preservation
"Supplier guarantees that the Services shall be available for a minimum of 99.9% of each calendar month, as measured by Supplier’s monitoring systems. In the event that availability falls below the guaranteed level, Customer shall be entitled to a service credit calculated as follows: [credit schedule]. Service credits shall not constitute Customer’s sole and exclusive remedy for availability failures, and Customer reserves all rights and remedies at law or in equity with respect to availability failures that result in material harm to Customer’s operations."
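The clause above deliberately leaves the credit schedule to negotiation. For illustration only, the sketch below shows how a hypothetical tiered schedule of the kind referenced in the clause might operate; the tier thresholds and credit percentages are invented, not drawn from any provider's terms:

```python
# Hypothetical tiered service-credit schedule of the kind referenced (but not
# specified) in the model clause above. Tiers and percentages are invented
# for illustration; actual schedules are negotiated per agreement.

CREDIT_TIERS = [
    (99.9, 0.00),  # at or above the guaranteed level: no credit
    (99.0, 0.10),  # below 99.9% but at least 99.0%: 10% of monthly fees
    (95.0, 0.25),  # below 99.0% but at least 95.0%: 25% of monthly fees
    (0.0, 0.50),   # below 95.0%: 50% of monthly fees
]

def service_credit(measured_availability: float, monthly_fee: float) -> float:
    """Return the credit owed for a month at the measured availability level."""
    for floor, pct in CREDIT_TIERS:
        if measured_availability >= floor:
            return monthly_fee * pct
    return monthly_fee * CREDIT_TIERS[-1][1]

print(service_credit(99.95, 10_000))  # → 0.0
print(service_credit(99.5, 10_000))   # → 1000.0
print(service_credit(97.0, 10_000))   # → 2500.0
```

Note that even the steepest tier here caps the credit at half of one month's fees, which is precisely why the clause preserves remedies beyond service credits for failures causing material harm.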
A.5 Anti-“As Is” Warranty Clause
"Supplier warrants that: (a) the Services shall perform materially in accordance with the Documentation; (b) the Services shall comply with all applicable laws, regulations, and industry standards; (c) all professional services shall be performed in a workmanlike manner consistent with generally accepted industry standards; (d) the Services do not infringe any third party’s intellectual property rights; and (e) the Services shall be free from viruses, malware, and other malicious code. THE FOREGOING WARRANTIES ARE IN LIEU OF ALL OTHER WARRANTIES, WHETHER EXPRESS OR IMPLIED, AND THE SUPPLIER EXPRESSLY DISCLAIMS ANY WARRANTY OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE TO THE EXTENT SUCH DISCLAIMER IS PERMITTED BY APPLICABLE LAW."
A.6 Data Return and Destruction Clause
"Upon termination or expiration of this Agreement for any reason, Supplier shall, at Customer’s election: (a) return all Customer Data to Customer in a commercially standard, machine-readable format within thirty (30) days; or (b) irreversibly destroy all Customer Data, including all copies, backups, and archives, and provide written certification of such destruction signed by an authorized officer of Supplier within forty-five (45) days. Supplier’s obligations under this Section shall survive termination of this Agreement."
Appendix B: Key Legislative and Regulatory References
| Legislation / Framework | Effective Date | Key Relevance |
| --- | --- | --- |
| EU AI Act | Phased: 2025–2027 | High-risk AI system classification; transparency and data-quality obligations; GPAI model obligations |
| GDPR (incl. Art. 22) | In force | Automated decision-making restrictions; data processing agreements; cross-border transfers |
| Colorado AI Act | February 1, 2026 | Impact assessments for high-risk AI; risk-based governance |
| Texas TRAIGA | January 1, 2026 | Bans on harmful AI uses; disclosure requirements |
| California SB-942 / AB 2013 | January 1, 2026 | AI content transparency; training data disclosure |
| Utah AI Policy Act | In force | Consumer interaction disclosure; AI-mediated deception liability |
| NIST AI RMF | Voluntary | De facto governance standard; vendor assessment reference |
| GSA GSAR 552.239-7001 (proposed) | Proposed March 2026 | Government AI procurement; data ownership and training restrictions |
| UK Data Protection Act / ICO | In force | AI processing obligations; sector-specific guidance |
| No FAKES Act (proposed, U.S.) | Pending | Protection from unauthorized AI-generated likenesses |