Santa Barbara Current

Legal Perspective

AI and Copyrighted Written Words

by Brent E. Zepke, Esq.

Mar 24, 2026

Artificial Intelligence, or just “AI,” is a new technology. New technologies tend to be “disruptors,” increasing the power of their users to “disrupt” the existing rights of others. Lawyers, and sometimes courts of law, must apply existing laws to balance the rights of both groups.

This Occurred in the 1990s

In the 1990s, the new technology of personal computers provided employees with the power to electronically carry their employer’s intellectual property off the employer’s facility.

This disrupted the effectiveness of employers protecting their proprietary information by inspecting the belongings, such as briefcases and vehicles, of employees before they exited the employer’s facilities.

For example, when asked about the risks of permitting an accountant to work from home, I responded that it would allow the employee to electronically remove the employer’s proprietary information by sending it off site to a personal computer, one that might not be as secure as the corporate system, which deflected hundreds of attempts by hackers every week.

Today

The new technology is AI, a branch of computer science that creates systems capable of performing tasks typically requiring human intelligence, such as reasoning, learning, problem-solving, perception, and interpreting input from cameras.

In essence, AI functions by analyzing the data fed into it in order to identify patterns. In many cases, if not all, the data fed in includes material that others have copyrighted.

The challenge for the legal system is to find the proper balance that allows AI to use the data without “disrupting” the rights that copyrights provide to authors.

Legal Cases

In August 2024, authors Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson filed a class action lawsuit against AI developer Anthropic, alleging that its unauthorized use of their books to train LLMs violated their copyrights.

In 2025, Judge William Alsup, appointed by President Bill Clinton, held in Bartz v. Anthropic, N.D. Cal. 2025, that the digitization of purchased works was permitted under the “fair use” exception to the Copyright Act.

However, Judge Alsup also certified a class of rights holders whose books were obtained from pirated sources like LibGen (Library Genesis) and PiLiMi (Pirate Library Mirror).

As frequently happens after a class is certified, Anthropic proposed a $1.5 billion class-action settlement covering approximately 500,000 works.

If this settlement is approved by the courts, it will do two things:

1) Be one of the largest in copyright history, and

2) Provide significant, yet nuanced, guidance for AI companies, confirming that while training on purchased data is often permitted under the fair use doctrine, the same may not be true for data from pirated works.

In Kadrey v. Meta Platforms, N.D. Cal. 2025, 13 authors filed a class action lawsuit against Meta for downloading their copyrighted books to train its “Llama” LLM.

An “LLM” is a “large language model,” an AI that developers “train” by “feeding” it an immense amount of text, which the model uses to recognize patterns and analyze the statistical relationships among words and punctuation marks. LLMs then generate new text by predicting which words are most likely to come next in sequence.

Note: Under the theory of “garbage in, garbage out,” “trainers” can influence the outcomes by the types of text they “feed” to the LLMs. This may be particularly relevant for political issues.
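The “predict the next word from statistical patterns” idea above can be illustrated with a deliberately tiny sketch. This is a simple bigram word counter, not a real LLM, and the training text is invented for illustration; it only shows the basic mechanism of learning which word tends to follow which.

```python
from collections import Counter, defaultdict

# Toy "training" text; real LLMs are trained on billions of words.
text = "the cat sat on the mat and the cat ran"
words = text.split()

# Count which word follows which -- a vastly simplified stand-in
# for the statistical pattern-learning an LLM performs.
following = defaultdict(Counter)
for current, nxt in zip(words, words[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` during training."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" -- it followed "the" twice, "mat" once
```

The sketch also illustrates the “garbage in, garbage out” point: change the training text, and the predictions change with it.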

Federal Judge Vince Chhabria, appointed by President Barack Obama, granted summary judgment for Meta, holding that Llama’s use qualified for the “fair use” exception to the Copyright Act because it was “transformative,” meaning it added a new purpose or character to the works rather than merely reproducing them.

Specifically, Judge Chhabria granted summary judgment in favor of Meta based on the plaintiff-authors’ failure to produce sufficient evidence of the effects of similar works produced by an LLM.

Judge Chhabria, either advertently or inadvertently, created two problems with his holding:

1) Trying to produce “sufficient evidence of the effects of similar works produced by an LLM” would require introducing evidence of another case that may well be prohibited under federal law, and

2) There are no similar works, as all copyrighted works are unique.

Conclusion

The legal rights of the AI users and the holders of copyrights will continue to develop based on how various judges view particular sets of facts.
