California's Deepfake Bill Needs an Overhaul

July 29, 2024
But it won’t solve the problems with AI-generated media.

Summary:

  • California's AB 3211, aiming to regulate AI-generated content, is poorly drafted.
  • It mandates watermarks and digital fingerprints for AI outputs.
  • The bill's requirements could hinder innovation, especially for open-weight models.
  • The mandated watermarks are easily bypassed, raising questions about their efficacy.
  • Overly stringent rules could create more problems than they solve.

The Impact on Open-Weight Models

California's AB 3211 could significantly disrupt the development of open-weight generative models. The bill requires every AI system to include watermarking features, and platforms that host these systems must comply, which could force sites like HuggingFace to remove most of their generative models. The bill's definitions are confusing, and its broad scope invites unintended consequences. It also mandates digital fingerprints for any potentially deceptive content, a heavy burden for AI developers, especially those distributing open-weight models.

Annoying Notifications and Privacy Concerns

Under AB 3211, every generative AI system must disclose that it is a chatbot at the start of each conversation, and users must acknowledge the disclosure before continuing. The likely result is a stream of repetitive notifications, much like the cookie-consent banners on European websites. The bill's requirement for a "maximally indelible watermark" poses a further challenge: the industry would have to continually update its watermarking techniques to meet an ever-moving standard, raising privacy and usability concerns along the way.

Flaws in Current Watermarking Standards

Existing watermarking standards, such as C2PA, have significant flaws. Metadata attached to AI-generated content can be stripped or altered with trivial effort, and for text-based models even simple edits can destroy a watermark. By relying on these imperfect standards, AB 3211 could create a false sense of security, making it easier for malicious users to distribute deceptive media undetected.
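To illustrate why metadata-based provenance is so fragile, here is a minimal sketch (not real C2PA tooling; the asset structure and function names are invented for illustration). A provenance claim that rides alongside the content, rather than being embedded in it, survives only as long as every intermediary preserves it; any metadata-unaware re-save drops it while leaving the content bit-identical.

```python
def attach_provenance(payload: bytes, claim: str) -> dict:
    """Toy stand-in for a C2PA-style manifest: the provenance record
    travels alongside the content bytes, not inside them."""
    return {"payload": payload, "provenance": claim}

def naive_resave(asset: dict) -> dict:
    """A metadata-unaware re-encode: copies only the content bytes,
    as many social platforms and image pipelines do in practice."""
    return {"payload": asset["payload"]}

asset = attach_provenance(b"fake image bytes", "generated-by: some-model")
stripped = naive_resave(asset)

# The content is unchanged, but the provenance claim is gone.
assert stripped["payload"] == asset["payload"]
assert "provenance" not in stripped
```

The same dynamic applies to real files: EXIF blocks, XMP packets, and C2PA manifests all live in sections that common re-encoding steps simply do not copy.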

Overreach and Technical Infeasibility

The bill's broad application means it affects every generative AI system, regardless of its size or purpose. This includes AI models used for scientific research, such as predicting DNA sequences. The requirement for platforms to host only systems with watermarking features could lead to the removal of many AI models. Additionally, the bill mandates that developers keep a public database of all potentially deceptive outputs, which is impractical for open-weight models.

Conclusion

AB 3211, in its current form, is an overly broad and poorly drafted piece of legislation. While addressing the issue of deceptive deepfakes is important, the bill's stringent requirements could stifle innovation and create new problems. A more carefully crafted approach is needed to effectively regulate AI-generated content without hindering progress in the field.
