UPDATE: AI AND COPYRIGHT LAW

In the last 30 days, the landscape of AI and copyright law has shifted in response to three significant policy developments.

I. The CLEAR Act: A Statutory Summons for Transparency

Introduced on February 10, 2026, by Senators Adam Schiff and John Curtis, the Copyright Labeling and Ethical AI Reporting (CLEAR) Act seeks to pierce the “black box” of generative AI training datasets.

  • The Disclosure Mandate: AI developers must submit a “sufficiently detailed summary” of all copyrighted works within their training datasets to the Register of Copyrights. This notice is required 30 days prior to any commercial or internal release.
  • The “Bounty” Provision: The Act introduces a potent enforcement mechanism. Copyright owners may seek $5,000 in civil penalties per undisclosed work, alongside injunctive relief and attorney’s fees.
  • Legal Implication: While the Act does not settle the “fair use” debate, it creates a new category of liability for the mere failure to report. AI developers must maintain meticulous logs of all training data to avoid the catastrophic stacking of $5,000 penalties.

II. The White House National Policy Framework: A Federal Supremacy Strategy

On March 20, 2026, the administration unveiled the National Policy Framework for Artificial Intelligence, a comprehensive set of legislative recommendations designed to consolidate federal authority. This framework is the architectural capstone to Executive Order 14365 (issued in December 2025), which sought to dismantle the “patchwork” of state-level AI regulations.  Three sections are relevant to my practice and clients.

The Fair Use “Judicial Deference” Doctrine.  The administration has adopted a calculated stance on the most contentious issue in AI law: the training of models on copyrighted works.

  • The Pro-Innovation Presumption: The Framework explicitly states the administration’s position that training AI on copyrighted material does not violate current law.
  • Neutrality in Litigation: Despite this stance, it recommends that Congress refrain from passing laws that would tip the scales of the judiciary. Instead, it advocates a “wait and see” approach, allowing the courts to resolve the application of the Fair Use doctrine under 17 U.S.C. § 107 to both training datasets and model outputs.
  • Targeted Protections: While deferring on training, the Framework supports proactive legislation for digital replicas and collective licensing frameworks, signaling a preference for protecting human identity and market-based compensation over restricting data ingestion.

Federal Preemption and the “Onerous” Standard.  The most aggressive element of the Framework is its call for broad federal preemption of state AI laws. The administration views the growing body of state-level algorithmic accountability and transparency statutes (such as those in California, Colorado, and Texas) as “undue burdens” on an issue of national application and interest. In its view, state requirements should not be stacked on top of federal law.

The AI Litigation Task Force: Enforcement of the Policy.  Building on the December 2025 EO, the Framework reinforces the role of the Department of Justice’s AI Litigation Task Force. This unit is tasked with identifying and challenging state laws that:

  1. Restrict activity that would be lawful if performed without AI.
  2. Impose liability on developers for the actions of third-party users.
  3. Require the alteration of “truthful” AI outputs (framed as “anti-censorship”).

III. GSA Clause 552.239-7001: The End of the “Black Box” in Procurement

The General Services Administration (GSA) has introduced a formidable new clause for government contractors, requiring AI transparency in public procurement.

  • The 30-Day Unmasking: Contractors have 30 days post-award to disclose the full architectural blueprint of their AI stack, including third-party tools and “Decision Logic Summaries” that explain the journey from data input to final output.
  • The Data Iron Curtain: The clause mandates the logical segregation of government data. Crucially, it prohibits using government inputs or outputs to train or improve commercial models.
  • Work-for-Hire Ownership Expansion: The GSA asserts ownership over “Custom Developments,” including model weights and fine-tuning performed under contract.

Connie J. Mableson