Staff Writer Kaya Newhagen explores how two firms came to shape a technology no government is yet equipped to govern.
Ask ChatGPT or Claude what they think of one another and you are likely to receive a measured, almost disarmingly empathic response. The tone is cooperative, even poetic. One would hardly suspect that behind these two large language models stand two of the fastest-scaling firms in what may be the technological race of the century.
At the centre of this rivalry are Sam Altman, CEO of OpenAI, and Anthropic CEO Dario Amodei. Their relationship is not merely competitive, but historical. Amodei once worked under Altman before departing to establish a rival frontier AI laboratory. Since then, professional divergence has hardened into strategic enmity: even symbolic displays of unity, like holding hands at global summits, have proved a bridge too far. The result is not simply corporate competition, but two distinct approaches to managing the risks, power and opportunities inherent in frontier AI development before any governance framework exists to scrutinise them.
OpenAI was founded in 2015 and is often credited as the first mover in consumer AI, though that title is debatable. What is beyond dispute is that the 2022 launch of ChatGPT was a genuine cultural moment. Sam Altman became, almost overnight, a household name. The company has not been short of drama since. Altman was ousted by his own board in 2023, only to be reinstated five days later following pressure from investors and staff. That episode, more than any product launch, revealed the fragility beneath the hype. OpenAI has a non-profit governance structure sitting uneasily atop a for-profit company valued in the hundreds of billions. Anthropic was founded in 2021 by Amodei and several colleagues who departed OpenAI over concerns about the pace and direction of its development. Where OpenAI moved fast, Anthropic positioned itself as the firm that wanted to proceed more carefully. Whether that distinction is philosophical or commercial is a question worth sitting with.
Both firms are closed-source, meaning their underlying models are not available for public inspection or modification. This is notable given that OpenAI was originally founded on open-source principles, before restructuring into a capped-profit company in 2019, a shift that coincided with its increasingly closed approach to model releases, justified by safety concerns. The transition was not without controversy. Critics, most prominently Elon Musk, pointed to the convenient alignment between that decision and the company’s growing commercial interests.
Built differently
OpenAI’s corporate structure is, to put it mildly, complex. Founded as a nonprofit, it added a capped-profit arm in 2019 to attract investment, before pivoting toward a full public benefit corporation structure in 2024, while retaining non-profit board oversight. The arrangement has drawn sustained criticism from competitors, regulators, and former co-founder Elon Musk, who claims that the present structure betrays the organisation’s founding mission. Anthropic also operates as a public benefit corporation, but without the nonprofit history. Its stated commitment to safety is exemplified by its Long-Term Benefit Trust, a body designed to influence board composition and guard against short-termism. Both firms, it should be said, are commercially driven, privately held enterprises with some of the largest valuations in the world. The safety rhetoric, however sincere, does not change that underlying fact.
Their business models reflect different bets on where AI value will be captured. OpenAI has built a dominant consumer franchise: roughly 180 million users, with its $20 monthly subscription accounting for an estimated 80 percent of revenues. Anthropic is slightly more B2B focussed: an outlier from subscription models like OpenAI’s, it draws an estimated 75 percent of revenue from API, or Application Programming Interface, consumption. An API can be understood as the connector of software systems: think of the waiter at a restaurant. You, the guest, never have to get up and go to the kitchen yourself; the waiter, as intermediary, connects the two. An API does that for software, and Anthropic does it well.
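The waiter analogy can be sketched in a few lines of code. This is purely illustrative, assuming a made-up kitchen and menu; it has nothing to do with either firm's actual API.

```python
# Hypothetical sketch: an API is an intermediary layer that carries
# requests between a client and a back-end system.

def kitchen(order: str) -> str:
    """The back end: does the actual work, never called by the guest directly."""
    return f"plate of {order}"

def waiter_api(order: str) -> str:
    """The API layer: the only part the guest ever talks to."""
    # Validate the request before handing it to the back end.
    if not order:
        raise ValueError("empty order")
    return kitchen(order)

# The guest places an order through the intermediary, not the kitchen.
print(waiter_api("pasta"))  # -> plate of pasta
```

The point of the pattern is that the guest never needs to know how the kitchen works, only how to phrase an order; software clients of Anthropic's API are in the same position.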
The regulatory gap
On paper, the two firms’ political footprints look similar. In 2025, OpenAI spent $2.99 million on federal lobbying; Anthropic spent $3.13 million. Both have in the past secured contracts with the Department of Defense in the region of $200 million. But beneath the symmetry, the firms have begun to diverge in meaningful ways. When California’s legislature passed SB 1047, a state-level AI safety bill requiring transparency in model testing, safety reporting and whistleblower protections, Anthropic endorsed it, while expressing a preference for federal regulation. OpenAI neither endorsed nor condemned it, agreeing only that a patchwork of state laws was suboptimal. The bill was subsequently vetoed by Governor Gavin Newsom. The episode illustrated a widening gap in how the two firms approach regulation, and a broader problem: when industry sets the terms of its own oversight, the governance gap widens rather than closes.
Pentagon contract
The Pentagon brought that gap into sharper relief. Anthropic signed a contract with the Department of Defense in 2024 to deploy Claude on classified and military tasks. However, the relationship soured when Anthropic sought explicit contractual language prohibiting its models from being used for civilian surveillance or autonomous weapons. Secretary of Defense Pete Hegseth responded by calling the company “woke” and labelling it a supply chain risk to national security. OpenAI moved quickly to fill the vacuum, with Altman claiming the new agreement contained more safeguards than any previous Pentagon contract. The move struck some observers as opportunistic. Anthropic has since indicated it will challenge aspects of the arrangement in court. The episode produced an unusual inversion in the market: ChatGPT saw a surge in cancellations through the QuitGPT movement, while Claude gained users.
The Pentagon episode makes vivid the tension at the heart of both firms. Anthropic’s constitutional AI framework and Amodei’s long-form essays on existential risk are serious intellectual contributions. They are also good marketing. It is not always easy to tell where one ends and the other begins. Altman’s move-fast approach is more familiar, if no less contestable. Neither firm is arguing it should write the rules alone. If anything, both have called for stronger government oversight. The concern is rather that governments are not yet equipped to provide it.
That lag is what makes this moment genuinely unsettling. Both firms are moving fast and breaking things; Anthropic just claims it wants to break fewer things. But good intentions and safety rhetoric do not resolve the underlying problem: that decisions with profound consequences for how we think, communicate, and wage war are being made in real time, through closed contracts, courtroom battles and lobbying, before any democratic framework exists to scrutinise them. The question is not whether these firms mean well. It is whether meaning well is good enough.