US judge puts Anthropic $1.5bn proposed AI training copyright settlement on hold; asks for assurances all claimants will get fair pay-out
10 Sept 2025
Author
Martin Croft
PR & Communications Manager
Photo by Rubidium Beach on Unsplash
The US federal judge hearing the case over AI company Anthropic's unauthorised use of copyrighted material to train its AI model, Claude, has delayed approval of a proposed $1.5bn settlement – reported to be the largest in the history of US copyright litigation – until the lawyers representing copyright holders can assure him that everyone with a valid claim will receive proper compensation.
Media reports, such as this story from Bloomberg, say U.S. District Judge William Alsup expressed serious concerns about the proposed settlement, and in particular about how it will be administered to ensure all legitimate claimants are treated fairly. He has scheduled a further hearing for September 25, at which lawyers for both parties will set out how they plan to address his concerns.
In June 2025, Judge Alsup delivered a mixed ruling: he found that using copyrighted material to train AI was not in itself illegal, but that Anthropic had broken the law by obtaining copyrighted material from pirate websites.
Under the terms of the proposed settlement, Anthropic would pay $3,000 for each copyrighted work that was pirated.
The lawsuit – Bartz v Anthropic – was originally filed in August 2024 on behalf of three authors, including thriller writer Andrea Bartz, whose work had been downloaded from pirate sites. The case has since become a class action on behalf of many thousands of authors and publishers. Anthropic, an AI developer backed by big tech firms including Amazon and Alphabet, Google's parent company, is alleged to have assembled a library of nearly 500,000 pirated books, according to a lawyer for the plaintiffs quoted in The Independent.
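Taken together, those reported figures line up with the headline number: roughly 500,000 pirated works at $3,000 per work comes to about $1.5bn. A minimal back-of-the-envelope check, assuming the reported (not court-confirmed) figures:

    # Rough consistency check using the figures reported in media coverage:
    # roughly 500,000 pirated works and $3,000 per work. Both are reported
    # estimates, not court-confirmed totals.
    works = 500_000           # approximate number of pirated books reported
    per_work_payment = 3_000  # proposed payment per pirated work, in USD

    print(f"Implied total: ${works * per_work_payment:,}")  # $1,500,000,000, i.e. ~$1.5bn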
However, the judge has now demanded further information: how many books were pirated, a draft claims form for authors and publishers who believe their works are affected, and details of how the settlement will be publicised so that everyone with a legitimate claim is aware of it. He said in a court statement that lawyers had left “important questions to be answered in the future”, including the list of pirated works, the list of claimants, how much each might receive, and how disputes might be resolved (for example, over works with multiple authors). He wants these questions settled before October 10, 2025, if he is to grant preliminary approval.
He also suggested that the Authors Guild and the Association of American Publishers, two major book industry bodies that supported the claimants, might act “behind the scenes” in ways that could leave some potential claimants without the payout they are entitled to. Representatives of both bodies have disputed this suggestion, according to media reports.
The question of how AI developers have used copyrighted material to train their models has sparked major legal battles, not just over written works but also over film, photography and the visual arts, music, and even personality rights. How the Anthropic case plays out is widely seen as a watershed for AI development.