Computerworld.com reported that “The Center for AI Standards and Innovation (CAISI), a division of the US Department of Commerce, has signed agreements with Google DeepMind, Microsoft, and xAI that would give the agency the ability to vet AI models from these organizations and others prior to their being made publicly available.” The May 6, 2026 article entitled “US government agency to safety test frontier AI models before release” (https://www.computerworld.com/article/4168137/us-government-agency-to-safety-test-frontier-ai-models-before-release-3.html) included these comments:

According to a release from CAISI, which is part of the department’s National Institute of Standards and Technology (NIST), it will “conduct pre-deployment evaluations and targeted research to better assess frontier AI capabilities and advance the state of AI security.”

The three join Anthropic and OpenAI, which signed similar agreements almost two years ago during the Biden administration, when CAISI was known as the US Artificial Intelligence Safety Institute.

An August 2024 release about those agreements indicated that the institute planned to provide feedback to both companies on “potential safety improvements to their models, in close collaboration with its partners at the UK AI Safety Institute (AISI).”

Microsoft said Tuesday in a blog post about the latest agreement that it, and others like it, are essential to building trust and confidence in advanced AI systems. As AI capabilities advance, it said, so too must the rigor of the testing and safeguards that underpin them.

Interesting, but not a surprise!

First published at https://www.vogelitlaw.com/blog/center-for-ai-standards-and-innovation-caisi-to-vet-ai-models