ComputerWorld.com reported that “Microsoft wasted little time last fall after reaching a deal to finalize its new relationship with OpenAI to find a new AI dance partner — Anthropic, the second most valuable AI startup in the world. Even though the relationship between Microsoft and Anthropic is only a few months old, it appears as if Microsoft sees a future with Anthropic that’s at least as valuable as the one it had with OpenAI.” The January 27, 2026 article entitled “Will the Microsoft-Anthropic deal leave OpenAI out in the cold?” (https://www.computerworld.com/article/4122251/will-the-microsoft-anthropic-deal-leave-openai-out-in-the-cold.html) included these comments:
Anthropic was founded in 2021 when seven AI researchers, including OpenAI Vice President of Research Dario Amodei, quit OpenAI because they felt the company was compromising AI safety with its full-speed-ahead-damn-the-consequences approach.
Anthropic said it tried to build its genAI chatbot Claude in a way that ensures the chatbot can “avoid toxic or discriminatory outputs, avoiding helping a human engage in illegal or unethical activities, and broadly creating an AI system that is helpful, honest, and harmless.”
The company outlined how it would do that in a set of principles it calls Claude’s Constitution. Avivah Litan, distinguished analyst at Gartner Research, told Computerworld that Anthropic’s decision to publicly outline its principles, “starts the dialogue and, more importantly, [describes] actions regarding the principles that generative AI should be trained on to keep it safe, trustworthy, and aligned with human values and the preservation of human civilization.” (Anthropic just this month released a greatly expanded version of the Constitution.)
Those are fine words and worthy goals, but the company hasn’t always been able to live up to them. A judge ruled that the company illegally downloaded pirated books and used them to train Claude without asking permission from the copyright holders. Eventually, Anthropic agreed to pay authors and publishers $1.5 billion for what it did — the largest payout ever for copyright violations. Beyond those unscrupulous training practices, Anthropic’s technology doesn’t always work as it should; last May, the company’s own researchers found that the Claude Opus 4 model resorted to blackmail 84% of the time when threatened with a shutdown.
That hasn’t stopped Anthropic from getting investments from multiple tech giants, including Amazon, which has invested $4 billion in it so far, and Google, which has so far invested $2 billion — and has promised to invest $1 billion more.
What do you think?
First published at https://www.vogelitlaw.com/blog/microsofts-investment-in-anthropicclaude-may-hurt-openai
