Anthropic proposes targeted transparency framework for 'AI at scale' to encourage innovation without being overly prescriptive

Anthropic, the developer of the AI chatbot 'Claude', has proposed a targeted transparency framework for frontier AI development.
Today we published a targeted transparency framework for frontier AI development. Our framework focuses on major frontier model developers while exempting startups and smaller developers to avoid burdening the broader ecosystem.
— Anthropic (@AnthropicAI) July 7, 2025
A Framework for AI Development Transparency \ Anthropic
https://www.anthropic.com/news/the-need-for-transparency-in-frontier-ai
In an official blog post dated July 8, 2025, Anthropic stated, 'Cutting-edge AI development requires greater transparency to ensure public safety and to hold companies accountable for developing this powerful technology. AI is advancing rapidly. Industry, government, and academia are working to develop agreed-upon safety standards and comprehensive evaluation methods, but this process could take months or years.' They argue that interim measures are necessary to advance AI development while ensuring transparency and safety.
Therefore, Anthropic proposed a transparency framework for AI development that would apply only to 'the largest AI systems and their developers' and could be implemented at the federal, state, or international level. The framework is intentionally designed to avoid being overly prescriptive: Anthropic argues that rigid standards would stifle AI innovation and make it difficult to realize the benefits of AI, such as life-saving drug discovery, the rapid delivery of public benefits, and critical national security functions.
The core principles that guide Anthropic's AI development transparency framework include:
◆ Limit application to the largest model developers
Anthropic believes the AI transparency framework should apply only to the largest frontier model developers, identified by a combination of criteria such as computing power, training cost, model performance, annual revenue, and research and development expenses. The aim is to avoid burdening smaller developers, whose models are less likely to pose national security risks or cause catastrophic harm. As thresholds, Anthropic is considering benchmarks such as annual revenue of $100 million (approximately 14.6 billion yen) or annual research and development and capital expenditures of $1 billion (approximately 146 billion yen).
◆ Building a secure development framework
Developers of frontier models would be required to create Secure Development Frameworks that outline how they assess and mitigate unreasonable risks from their models, including chemical, biological, radiological, and nuclear (CBRN) harm, as well as harm caused by misaligned model autonomy.
◆ Release of secure development framework
The Secure Development Framework must be published, with any confidential information redacted, on a public website registered to and managed by the AI development company, allowing researchers, governments, and the public to stay informed about released AI models.

◆ System card release
Developers would also be required to publish system cards when models are deployed, summarizing the testing and evaluation procedures performed, their results, and any mitigations applied.
◆ Prohibition of false statements and protection of whistleblowers
The framework would explicitly make it a violation of the law for AI companies and developers to make false statements about their compliance with the framework, and would provide legal protections for whistleblowers who expose wrongdoing.
◆ Transparency standards
A practical AI transparency framework should set minimum standards that ensure security and public safety while keeping pace with evolving AI development. Because AI safety and security practices are still in their infancy, the requirements must remain flexible enough to adapt as agreed-upon best practices emerge.

'As AI models evolve, we have an unprecedented opportunity to accelerate scientific discovery, healthcare, and economic growth. Without safe and responsible AI development, a single critical failure could undermine decades of progress. Our proposed transparency framework offers a practical first step,' Anthropic said.