Insurance coverage for trademark infringement lawsuits is far narrower than most executives realize.

In this video, Bill Wagner, a partner in Taft’s Indianapolis office, explains what CGL policies may cover, why willfulness allegations destroy coverage, and how insurer-appointed defense counsel can put companies at risk.

Under new regulations effective January 1, 2026, California regulators now expect businesses to conduct an annual “cybersecurity audit” that assesses “how the business’s cybersecurity program protects personal information from unauthorized access, destruction, use, modification, or disclosure; and protects against unauthorized activity resulting in the loss of availability of personal information.”

Now is the time to prepare for these requirements.

As explained below, these requirements are detailed and contemplate a rigorous, professional, independent, evidence-based audit. Audit results must be shared with the California regulator under penalty of perjury.

Applicability & Distinction from Risk Assessments

California cybersecurity audit requirements apply generally to businesses that process the personal information of at least 250,000 consumers or households (50,000 if sensitive personal information), or to any business that derives 50% or more of its revenue from the “sale” or “sharing” of data. Businesses should consider whether they meet these volume and activity thresholds given their processing of any California-origin data for any reason (e.g., website visitors, customer data, etc.).

The cybersecurity audit requirements are separate and distinct from California risk assessment requirements. Unlike risk assessments, cybersecurity audits generally do not assess specific processing activities individually. Instead, the cybersecurity audit assesses the quality of the overall cybersecurity program. The implicit assumption of the regulation is that a business maintains such an over-arching cybersecurity program and that the program extends protections to all California resident information. That program is what the cybersecurity audit will assess.

Timing

The regulation contemplates that larger businesses (gross revenue > $100MM) will be the first to submit a comprehensive cybersecurity audit report in April 2028, covering the period January 1, 2027, through January 1, 2028. Eventually, however, all businesses meeting the applicability thresholds will need to conduct audits and submit reports. After the first submission, audit reports are expected to be submitted annually thereafter, each April for the preceding year.

Businesses should strongly consider conducting an advance cybersecurity assessment – a “mock” audit – in 2026. An advance assessment can provide an opportunity to identify and remediate issues before the mandatory audit and submission to the California regulator.

Auditors – Professional, Independent, and Evidence-Based

The cybersecurity audit must be conducted by a “qualified, objective, independent professional, using procedures and standards accepted in the profession of auditing.” Auditors must have both specific cybersecurity knowledge and knowledge of “how to audit a business’s cybersecurity program.” The regulations give auditors real authority to compel the business to provide relevant information. Companies should think carefully about their selection of auditor under these standards.

For companies with a strong internal audit function, internal auditors are permitted. However, crucially, the lead internal auditor must report directly to a member of the business’s executive team who does not have direct responsibility for the business’s cybersecurity program. This likely means that the cybersecurity audit function cannot fall under the information security organization itself or be the responsibility of the CISO.

The regulation contemplates that auditors will independently examine evidence to prepare their findings. Auditors may not base findings “primarily on assertions or attestations by the business’s management.”

Reliance on Other Audits

Given certain detailed audit requirements particular to California law, existing audits conducted by a business likely will not suffice on their own to satisfy California requirements. Businesses may, however, partially utilize and supplement existing audits, assuming adequate scope and that such audits otherwise meet California requirements. Organizations may want to conduct a crosswalk or mapping exercise to identify how their existing audit frameworks correspond to California requirements.

Detailed Audit Requirements

California regulations provide a detailed list of issues and controls that must be assessed as part of the audit. The detail defies any usable general summary. As a limited example, the cybersecurity audit must assess:

  • “oversight of service providers, contractors, and third parties to ensure compliance with [detailed California contracting requirement]”
  • “Personal information inventories (e.g., maps and flows identifying where personal information is stored, and how it can be accessed) and the classification and tagging of personal information (e.g., how personal information is tagged and how those tags are used to control the use and disclosure of personal information)”; and
  • “Internal and external vulnerability scans, penetration testing, and vulnerability disclosure and reporting (e.g., bug bounty and ethical hacking programs)”

These are among many other detailed requirements. The audit report must detail gaps, weaknesses, and remediation plans in the areas covered.

Submission Under Penalty of Perjury

Once completed, the cybersecurity audit must be certified by an executive who is responsible for the audit and knowledgeable enough to provide accurate information. This executive must submit the audit to the California regulator under penalty of perjury. Submission in this form strongly suggests that the executive may be held personally liable for inaccuracies, perhaps especially those deemed intentional falsehoods. Companies should anticipate that in the event of any adverse interaction with the regulator, past audit reports may become a particular point of regulatory scrutiny, and certifying executives may be asked to give testimony.

Legal Support

Experienced counsel can help businesses prepare for cybersecurity audits in several ways. First, counsel can help assess and confirm applicability. Counsel can also help bridge the gap between regulatory text and implementation by working with internal or external auditors to validate audit design and ensure that the detailed audit requirements are understood. Once an audit report is prepared in draft form, experienced counsel can advise on the final form of the document that will ultimately be submitted to an increasingly active and punitive privacy regulator. For more information, please do not hesitate to contact a member of Taft’s Privacy, Security, and Artificial Intelligence team.

Under newly implemented regulations of the California Consumer Privacy Act (CCPA), California now requires a formal risk assessment “before initiating any processing activity” of certain (sensitive) sorts. The regulation explicitly contemplates that businesses will complete risk assessments now, in 2026.

Eventually, such risk assessments – including those completed this year – must be signed by an executive and submitted to the California regulator under penalty of perjury.

Businesses and executives subject to the CCPA must prepare now to address these requirements. In particular, the regulation may impact businesses and services including SaaS/technology firms, payments or financial technology solutions, consumer services, employment or HR applications, AI solutions, and other processing involving California resident data.

The statute and regulations arguably provide for some narrowly tailored exceptions. Exceptions may include financial information subject to the Gramm-Leach-Bliley Act, certain limited employment-related uses, and/or certain health care institutions/information uses governed by HIPAA. However, relevant companies should consult competent legal counsel to assess whether they may fall within the scope of such an exception before relying on it.

Certain key requirements are noted below.

Any Processing of “Sensitive” Personal Information
Businesses must conduct a risk assessment of any processing of “sensitive” personal data. Such “sensitive” data includes:

  • SSN, driver’s license, state ID card, or passport number.
  • Financial account, debit card, or credit card numbers in combination with any required security or access code, password, or credentials allowing access to an account.
  • Precise geolocation of a consumer.
  • Race, ethnicity, citizenship or immigration status, religion or philosophy, or union membership.
  • Mail, email, or text messaging content (apparently including messages sent to the consumer).
  • Individual genetic data.
  • Neural data and/or measurements.
  • Biometric information processed for identification purposes.
  • Personal information collected or analyzed regarding health, sex life, or sexual orientation.
  • Children’s data (< 16 years of age).

Note that some of these categories (e.g., messaging content) may be trivially easy to meet for almost any business that interacts with or provides consumer services. Impacts are potentially heightened for businesses operating with sensitive data, such as financial, health, or payment data.

ADMT: Use for a Significant Decision & Training

Businesses must also conduct a risk assessment regarding the use of “automated decision-making technology” (ADMT) for a “significant decision.” ADMT can include artificial intelligence technologies or technology intended to replace human involvement. An ADMT makes a “significant decision” when the decision “results in the provision or denial of financial or lending services, housing, education enrollment or opportunities, employment or independent contracting opportunities or compensation, or health care services.”

A business may be in scope of the risk assessment requirements if, e.g., its products, services, or automated activities involve:

  • Providing risk scoring or assessments, or otherwise helping decide when to extend credit or a loan, to exchange funds, offer housing, or to set up installment payment plans.
  • Searching and sorting job candidates into an auto-reject category.
  • AI-based screening for health care services.
  • Other “significant decisions.”

Risk assessments are also required for certain uses of personal information to train an ADMT that will be used for significant decisions, including facial recognition, emotion recognition, and/or identity verification.

Selling or Sharing Data

Businesses must conduct a risk assessment when “selling” or “sharing” data within the meaning of California law. Based on statutory definitions and prior enforcement by California authorities, note that “selling” and “sharing” can include ordinary online tracking and analytics, technologies common across many commercial websites. “Selling” and “sharing” can also include other common activities, such as service provider arrangements that are not subject to the strict contractual controls under California law limiting personal data use. For consumer finance businesses, the regulation specifically notes as an example that consumer budgeting calculators may involve regulated data “sharing” if, for example, they include a third-party advertisement.

Automated Processing to Infer

Businesses must conduct a risk assessment before using automated processing to infer certain categories of information related to a consumer, including economic situation, behavior, personal preferences, or interests. Businesses should consider a risk assessment given the use of AI or other technology to assess job candidates, perform analytics, or form other assessments of individuals regarding “intelligence, ability, aptitude, performance at work, economic situation, health (including mental health), personal preferences, interests, reliability, predispositions, behavior, or movements in a sensitive location.”

Conclusion

The updates to the CCPA regulations went into effect on Jan. 1, and businesses may now be required to perform a risk assessment before commencing the relevant processing. Content requirements for risk assessments are detailed and require identifying potential harms to consumers, offsetting benefits, and mitigating factors. The regulation provides detailed guidance on both the substance and the form of such assessments; existing assessment procedures are unlikely to meet California requirements unless specifically designed to do so. Finally, as noted above, assessments will ultimately have to be submitted to the California privacy regulator by a managing executive, along with a written statement under penalty of perjury that the risk assessment information submitted is true and correct.

For more information about the updated CCPA requirements, contact a member of Taft’s Privacy, Security, and Artificial Intelligence or Technology and Artificial Intelligence groups.

Warranties and representations, ownership of intellectual property, limitations of liability, and indemnity are among the most important issues when negotiating a software contract with an AI Vendor.

  • What’s reasonable?
  • What should you ask for?

That’s what I talk about in my latest video.

#erplawyer #erpcommunity #erpfailure #saps4hana #oraclecontracts #softwarelawyer #sapservices #saphanacloudplatform #saas #erpcloud #teamtaft #sapcontracts #oraclelawsuit #oraclefailure #oracletermination #saptermination

Artificial Intelligence (AI) is rapidly transforming the business world, moving from a niche technology to an integral part of operations across nearly every industry. Whether you are acquiring a technology company or simply using AI services such as customer chatbots or data analysis programs, businesses are being exposed to a new class of legal risks. To address these unique challenges, businesses and investors are increasingly including AI-specific representations and warranties in contracts and agreements. These clauses are becoming a crucial method for effectively allocating and mitigating AI-related uncertainty.

Risks, Benefits, and Key Considerations

While the benefits of AI in terms of efficiency and pattern recognition are immense, the technology also presents novel and significant legal risks. The importance of AI technology in business means that acquirers are now seeking tailored assurances even when a target’s AI use is not material to the core business, recognizing that any unmanaged risk can lead to future liability.

Key AI Risks and Considerations

  • Intellectual Property (IP) Infringement and Ownership: AI models, particularly GenAI tools, are trained on vast datasets that may contain web-scraped data, images, text, or other content protected by copyright. Developers are increasingly facing allegations that their tools were trained by “ingesting protected content without a license.” This creates a risk of infringement claims against both the developer and the user of such AI tools. There also remains uncertainty over the extent of IP protection for both AI inputs and content created by AI.
  • Data Quality and Bias: AI outputs are only as good as the inputs. If the data used to train an AI model is inaccurate, biased, or otherwise flawed, the resulting model and its outputs may be equally flawed. Users of AI services should ask how the data used to train the model was sourced. This risk must be addressed through careful due diligence.
  • Data Privacy: Large datasets used to train machine learning models may inadvertently incorporate personal, sensitive, or inaccurate information. Public generative AI tools cannot guarantee deletion or non-retention, as any information submitted is used to further refine the model. Beyond the risks associated with personal information, companies need to ensure that their employees do not disclose sensitive company IP or customer information to a publicly accessible system.

Types of AI Representations and Warranties

AI-specific representations and warranties that go beyond standard IP and technology warranties are a method for buyers and investors to obtain contractual assurances that risks unique to AI have been addressed. These specialized clauses can tailor risk allocation by assigning responsibility for AI-specific issues to the seller, backstop the buyer’s due diligence by providing contractual assurances, and offer a clear path for recourse (like indemnity) if a post-transaction lawsuit arises. The inclusion of AI clauses can help protect investments and mitigate exposure.

Common Topics Addressed by AI Clauses

  • Data Use and Training Data: Warranties concerning the target company’s rights to use data for AI training and assurances as to the source, accuracy, and ownership of the training data set are being more frequently utilized. A clause may require a specific representation that the AI model was trained only with permissioned data (i.e., data that was obtained through legally binding consent or licenses for use). For companies using third-party GenAI, a representation may require the disclosure of the specific tools being utilized and the terms of the applicable license.
  • Intellectual Property: Clauses certifying ownership of AI-specific assets, such as algorithms, models, and parameters, can be included in contracts. For example, a clause may state that the user or licensee will own the IP for any works generated by the AI model, especially when a model is used for product design or content creation. These clauses may also address the risk of infringement associated with a model’s training and output, such as through indemnification provisions in service contracts.
  • Governance and Compliance: Contracts may now include assurances that there are internal AI governance frameworks, including documented policies for testing and monitoring, “human in the loop” requirements, and that the entity complies with any applicable AI laws and regulations. A representation might be that no AI models or platforms were utilized in the generation of a product, or that all employees have signed a data use agreement that prohibits entering any company information into GenAI models.

Conclusion

The market for generative and agentic AI is expanding rapidly, and AI clauses will only become more common. Failing to understand the AI utilization of a business and to address AI risk contractually is an enterprise-level weakness. Downstream non-compliant service providers or contractors may taint any upstream use of data and create liability. Companies must be prepared to answer questions about their use of AI tools and processes. Even if not contemplating a sale or acquisition in the near future, questions about AI use are now appearing as part of the underwriting and renewal process for certain liability and cyber insurance policies. Close reading of any contractual provisions relating to the use of AI or AI-generated data is necessary, as is ensuring compliance with existing restrictions.

Protecting trade secrets starts with preparation. Building strong systems and habits helps keep valuable information secure and limits the risk of leaks.

  1. Inventory Trade Secrets: List what information is confidential and record its value. Keeping good records helps if you ever need to prove your rights.
  2. Regular Employee Training: Teach employees how to recognize and handle trade secrets. Refresh this training regularly so protocols stay top of mind.
  3. Implement Strong Agreements: Have anyone with access sign clear contracts that set expectations during and after their time with your business. Written agreements make enforcement easier if there’s ever a problem.
  4. Control the Use of AI Tools: Limit the use of confidential data in public AI tools. Use only secure, approved systems for handling private information.
  5. Enhance System Security: Enable safeguards such as multifactor authentication, monitor for threats, and block bots and unapproved software to guard against leaks.
  6. Prepare for Employee Exits: Clarify the company’s right to review devices and accounts upon an employee’s departure. Address relevant rules and dispute procedures in advance.

Many companies review trade secret protection only after problems arise, but taking these steps now can help reduce the risk of losing valuable information. In this video, I explain how to safeguard your company before a breach occurs.


#LitigationStrategy #businesslaw #riskmanagement #commerciallitigation #tradesecrets #BillWagnerLaw #aiandlaw #intellectualproperty #dataprotection #artificialintelligence

ERP vendors are notorious for creating a false sense of urgency with arbitrary support deadlines, promises of expanded functionality, and artificial intelligence to force customers to the Cloud.

  • Vendors are not pushing you to the Cloud for your benefit.
  • Just because a vendor is sunsetting support doesn’t mean you don’t have options.
  • One of the worst options is paying the vendor a premium for extended support beyond the drop-dead date.

The reality is that the Cloud is not always better; it can be detrimental.

  • If you have a highly customized system that incorporates your business processes and provides you with a competitive advantage, moving to the Cloud could erode that advantage.

Do you really need to move to the Cloud? The answer might surprise you.

CONTACT ME AND MY TEAM

#erplawyer #erpcommunity #erpfailure #saps4hana #oraclecontracts #softwarelawyer #sapservices #saphanacloudplatform #saas #erpcloud #teamtaft #sapcontracts #oraclelawsuit #oraclefailure #oracletermination #saptermination

Your company’s most valuable assets may not appear on your balance sheet. They’re in your systems, your processes, your technology, and your people. Trade secrets don’t require registration and don’t expire, but they only remain protected if you actively safeguard them.

This video explains what qualifies as a trade secret under U.S. law and how to know if your company is doing enough to protect its most valuable information.

#insurancecoverage #LitigationStrategy #businesslaw #riskmanagement #commerciallitigation #tradesecrets #ipprotection #BillWagnerLaw #InnovationLaw

Vendors tout cloud software as a cheaper alternative to traditional on-premise solutions.

  • While cloud solutions can often be implemented at a lower cost, the cost of accessing and using the cloud solution over the life-cycle of the product is often more expensive than an on-premise solution.

I discuss these issues in my latest video.

#erplawyer #erpcommunity #erpfailure #saps4hana #oraclecontracts #softwarelawyer #sapservices #saphanacloudplatform #saas #erpcloud #teamtaft #sapcontracts #oraclelawsuit #oraclefailure #oracletermination #saptermination

When a lawsuit hits your manufacturing business, the last thing you want is uncertainty about your insurance coverage. 

In this video, I’ll walk you through how to position your company to recover fast and fully when facing legal trouble. If your operations are evolving, your insurance plan should be too.

#insurancecoverage #LitigationStrategy #businesslaw #riskmanagement #commerciallitigation #manufacturingindustry #cyberinsurance #ProductLiability #legalinsights

Stay Connected with Us!

👉 WWagner@taftlaw.com

👉Dir: 317.713.3614 | Cell: 317.431.5979

👉Tel: 317.713.3500 | Fax: 317.715.4537

👉One Indiana Square, Suite 3500 Indianapolis, Indiana 46204-2023

👉 Website: https://www.taftlaw.com/people/willia…

Want to see if we can help you with your matter? 📞 Reach out today for a consultation!