Handicapping The AI IPO Race

What Anthropic’s $900 billion valuation and OpenAI’s first miss tell us before the S-1s drop
When it comes to two of the most hotly sought-after potential investments, investors are in the position of the six blind men and the elephant. Aside from some well-placed private investors, glimpses of OpenAI’s and Anthropic’s financials come only fleetingly, through CEO statements at conferences and on podcasts, competitor leaks, and not-for-attribution insider chats with journalists.
Both companies are reported to be preparing for IPOs and, given their voracious appetite for pricey compute, they are expected to raise tens of billions at eye-popping valuations. Until their registration statements and audited financials are filed with the SEC, all the information swirling around should be taken with a grain of salt. But for now, it appears that while OpenAI’s ChatGPT was first to win the mind share of consumers, Anthropic may be lapping the competition when it comes to building a sustainable business model.
Anthropic’s Enterprise-Grade Cash Machine
Anthropic’s annualized revenue run rate (ARR) as of May is reported to be $44 billion, up roughly 15X from $3 billion at this time last year and up from a $30 billion ARR just one month earlier, in April 2026. In addition, SemiAnalysis reported that Anthropic’s gross margin on inference surged from 38% to over 70% since the beginning of the year.
To be clear, a positive margin on inference is not the same thing as profitability on a net income basis. Inference costs are those that frontier labs like Anthropic and OpenAI incur to serve intelligence directly to paying customers, including the inference-related costs of running data centers, depreciation on GPUs and other hardware, and dedicated engineering teams.
However, these labs also burn massive amounts of compute on pre-training the next generation of foundation models, which could end up under R&D or operating expense, as well as on the reinforcement learning (RL) and post-training required to prepare models for commercial launch. We won’t know exactly how the AI giants will categorize these expenses under U.S. GAAP, or how clear a runway they have to profitability, until the full financials are made public in their SEC filings.
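The distinction between margin on inference and overall profitability can be sketched in a few lines. All figures below are hypothetical, chosen only to illustrate how a 70%-style gross margin can coexist with a net loss once training compute and operating expenses are layered in:

```python
# Illustrative sketch with hypothetical numbers (all in $M): a lab can
# earn a healthy gross margin on inference while still losing money
# overall, because training compute and opex sit below the gross line.

def gross_margin(revenue, inference_costs):
    """Gross margin on inference: (revenue - cost of serving) / revenue."""
    return (revenue - inference_costs) / revenue

def net_income(revenue, inference_costs, training_compute, other_opex):
    """Net result after training spend and operating expenses."""
    return revenue - inference_costs - training_compute - other_opex

revenue = 10_000          # inference revenue (hypothetical)
inference_costs = 3_000   # serving costs: data centers, GPU depreciation, eng teams
training_compute = 6_000  # pre-training plus RL/post-training compute
other_opex = 2_500        # salaries, sales, G&A

print(f"Gross margin on inference: {gross_margin(revenue, inference_costs):.0%}")
print(f"Net income ($M): {net_income(revenue, inference_costs, training_compute, other_opex)}")
```

On these made-up figures the lab clears a 70% gross margin yet posts a $1.5 billion loss, which is why where the auditors draw the line between inference cost and training spend will matter so much in the S-1s.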
But the combination of blistering topline growth with improving margins suggests that Anthropic’s prescient bet on the enterprise market, including the early lead in software coding and subsequent move into financial, legal, and other verticals, is paying off in a big way. Private market investors are rewarding the company with a reported $900 billion valuation in its latest approximately $50 billion raise, according to TechCrunch.
At this point, the greatest constraint on Anthropic’s cash machine appears to be access to compute, rather than lack of customer demand for its tokens. Notably, the company struck two major deals last week to alleviate that bottleneck. On May 5th, it committed $200 billion to Alphabet’s (Nasdaq: GOOGL) Google Cloud to lock up 3.5GW of tensor capacity over the next five years. And the very next day, it struck a deal with Elon Musk’s SpaceX to secure 300 megawatts of compute from its new Colossus 1 data center in Memphis.
OpenAI Scrambles to Regain Its Lead
While Dario Amodei was cutting deals with Elon Musk, OpenAI’s Sam Altman was doing battle in a San Francisco courtroom over the transactions that set OpenAI on the path to convert from its origins as a non-profit to a soon-to-be-public for-profit company.
At the same time, the Wall Street Journal reported that OpenAI had missed its internal targets for adding new users and revenue. Weekly active users had been expected to top 1 billion by the end of 2025 but were still stuck at 900 million as of April, while sales stood at a $25 billion ARR, according to The Information.
Even more damaging, OpenAI CFO Sarah Friar was said to have expressed doubts about whether the company would be ready to launch its IPO in 2026 and whether it was positioned to meet its commitments to spend $600 billion on future compute from its hyperscaler and other partners. (After the article appeared, Altman and Friar issued a statement denying any differences and labeling the report “ridiculous.”)
Having recently raised another $122 billion at an $852 billion valuation, OpenAI can probably afford to wait a few more quarters to launch its IPO. The company is scrambling to regain territory in the enterprise and coding segments where Anthropic established a lead in 2025, while preserving its broad consumer user base. Because of its massive capital commitments, it will not be compute-constrained if its paying customers begin to scale. And early reviews of its latest GPT 5.5 are positive.
The race among the best frontier AI labs, including Anthropic, OpenAI, Alphabet, and even xAI’s Grok, is fierce and the models jostle for the lead position across various use cases each month.
Can AI Be Profitable? China’s Early Look
Given the massive capex requirements to develop and maintain leading-edge AI models, it is fair to ask whether any of these companies can ever be profitable. The question is relevant not only for the future performance of the AI labs themselves, but for the whole “AI ecosystem” that has grown up to support them.
Morgan Stanley recently estimated that the major hyperscalers – Amazon, Alphabet, Meta, Microsoft, and Oracle – will invest $805 billion in capex this year, twice the level of 2025, rising to $1.1 trillion by 2027. For perspective, that is more than the Department of Defense budget of $962 billion this fiscal year. Those investments only make sense if Anthropic, OpenAI, and ultimately their customers can build profitable businesses that support AI usage continuing to scale.
Until U.S. labs file their S-1s, the only audited LLM gross margins anywhere are from two Chinese commercial deployers — and what they show is sobering.
Two of China’s leading AI companies, Zhipu AI (HKEX: 2513) and MiniMax (HKEX: 0100), went public on the Hong Kong stock exchange in 2025 and have already reported audited financial results for that year. Zhipu’s revenues were up 132% to $105 million, with a gross margin of 41% and a $680 million loss. MiniMax’s revenues grew 159% to $79 million, with a 25% gross margin and a $250 million loss.
Neither company is anywhere near the scale or velocity of the top U.S. AI labs. And DeepSeek, which is the closest thing China has to a frontier lab, remains private with very limited public information available. But both companies illustrate that it is possible to be profitable at the gross margin level in AI and still lose bucketloads of money.
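Zhipu’s own reported figures make the point concrete. Using the numbers above ($105 million in revenue, a 41% gross margin, and a $680 million loss), the arithmetic shows how small a cushion gross profit provides against below-the-line spending:

```python
# Zhipu AI's reported 2025 results (from the text above), all in $M.
revenue = 105.0
gross_margin = 0.41
net_loss = -680.0

# Gross profit covers only the direct cost of serving customers.
gross_profit = revenue * gross_margin          # ~$43M

# Everything below the gross line (training runs, research headcount,
# sales, G&A) must be absorbed before reaching net income; here that
# spend is roughly 17x gross profit.
below_the_line_spend = gross_profit - net_loss  # ~$723M

print(f"Gross profit: ${gross_profit:.1f}M")
print(f"Below-the-line spend: ${below_the_line_spend:.1f}M")
```

A 41% gross margin, in other words, is consistent with losing more than six times total revenue once model development costs are counted.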
Waiting for the S-1s
Once Anthropic’s and OpenAI’s registration statements become public, there are several financial issues investors will want to home in on:
- Gross vs. net revenue treatment: OpenAI has sought to deflate Anthropic’s explosive revenue growth by suggesting that it may book revenue that comes through partner relationships with AWS and Google on a gross rather than net basis, which could skew comparisons of ARR. Audited financials of both companies will settle this dispute.
- Attribution of expense to training versus inference: Gross margin is highly sensitive to how the cost of compute is allocated between inference and training. Analysts will be scrutinizing whether the reported lift in Anthropic’s gross margins is structural, driven by factors like architectural efficiency and Trainium utilization, or the product of accounting changes like capitalization policies and expense attribution.
- Partner investments and contracts: The bear case on the AI explosion is that it is built on circular customer relationships in which the labs commit to purchase massive amounts of compute in exchange for investments from the hyperscalers and NVIDIA. All of this will need to be disclosed in the S-1s, bringing clarity to exactly how economically justifiable these deals are.
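The gross-versus-net revenue question in the first bullet can be sketched with a toy example. The 30% partner take rate below is purely hypothetical, chosen only to show how the same partner-channel dollars produce different toplines depending on the accounting treatment:

```python
# Hypothetical illustration of gross vs. net revenue booking for
# partner-channel sales (e.g., via a cloud marketplace), in $M.
# The 30% partner take rate is an assumption for illustration only.

partner_channel_spend = 1_000   # end-customer spend routed through a partner
partner_take_rate = 0.30

# Gross treatment: book the full customer spend as revenue, with the
# partner's cut recorded as a cost of revenue.
gross_booking = partner_channel_spend
gross_profit_gross = gross_booking - partner_channel_spend * partner_take_rate

# Net treatment: book only the lab's share as revenue.
net_booking = partner_channel_spend * (1 - partner_take_rate)
gross_profit_net = net_booking

# Gross profit is identical either way; only the reported topline
# (and hence ARR growth comparisons) differs.
print(gross_booking, net_booking, gross_profit_gross == gross_profit_net)
```

The economics are unchanged, but the gross treatment reports a topline 43% larger on these numbers, which is exactly why audited, consistently presented financials are needed to compare the two labs’ ARR claims.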
The bet on America’s AI economy is on par with those the capital markets made in past eras on the railroads, electrification, and the internet. In a matter of months, we will have a lot more data on whether this bet is economically sound.




