Friar and other executives are betting OpenAI’s future on business-oriented products, scaling back certain consumer offerings as a path to profitability.
OpenAI plans to unveil a new AI model designed for “high-value professional work,” as the company faces increased competition from rival Anthropic in securing corporate clients for AI assistants in their workplaces.
“You’ll see a new model coming from us soon. We’re very excited about it,” Friar said in an interview with The Associated Press.
OpenAI reports having more than 900 million weekly users of its ChatGPT product, and Friar noted that around 95% “don’t pay anything” for the popular chatbot. Although these interactions foster habit and reliance, they also put pressure on the costly computing resources required for the company’s AI systems, underscoring the need for substantial business customers to help cover expenses.
OpenAI, valued at $852 billion, and Anthropic, valued at $380 billion, both operate at a loss, which puts them in stark competition to generate more revenue as they race toward becoming publicly traded on Wall Street.
The push to enhance performance and sales of OpenAI’s business solutions — already the lifeblood for Anthropic — has led OpenAI to discontinue certain consumer projects, including the AI video generator app Sora.
“It was a little heartbreaking, but we’re like, OK, it’s not the main event right now,” Friar said. “We need to ensure that our upcoming model has sufficient computing power.”
Codenamed Spud, the model is what OpenAI calls its “smartest model yet,” promising “stronger reasoning, improved understanding of intent and dependencies, better follow-through, and more reliable output in production.” It is part of OpenAI’s response to Anthropic’s new Claude Mythos, which Anthropic asserts is so “strikingly capable” that it is limiting its use to select clients because of its apparent ability to outpace human cybersecurity experts in identifying or exploiting computer vulnerabilities.
Friar, previously CEO of neighborhood social platform Nextdoor, stated that business customers made up about 20% of OpenAI’s revenue when she joined as CFO in 2024. This figure has now risen to 40%, and she anticipates it will reach half of OpenAI’s sales by year’s end.
This marks a significant change from late last year, when OpenAI co-founder and CEO Sam Altman was promoting a now-defunct partnership with Disney regarding Sora, discussing plans to sell ads on ChatGPT, and even suggesting allowing ChatGPT to engage in erotica with paid adult users.
Altman expressed on the “Mostly Human” podcast earlier this month that a more focused approach was necessary — and Friar concurs.
“Tech companies naturally tend to explore many cool possibilities during growth,” she noted, stressing that such ventures can lead to poor outcomes if companies overextend. “Great companies excel at narrowing down and refocusing within a reasonable timeframe, even though it’s incredibly painful.”
Signaling this shift, OpenAI hired former Slack CEO Denise Dresser three months ago as its first chief revenue officer.
Dresser shared in a recent AP interview that she has been focused on connecting with corporate leaders and positioning OpenAI as the preferred platform for workplaces that leverage AI agents to automate various computer-based tasks.
“It’s evident that companies have moved past experimentation and are now using AI for real work,” Dresser remarked. “Corporate leaders see AI as perhaps the most consequential shift of their lifetimes.”
However, those leaders face a choice, particularly with Anthropic’s Claude gaining traction among software professionals. Founded in 2021 by former OpenAI leaders who aimed to prioritize AI safety, Anthropic has promoted itself as the more responsible AI vendor. This distinction gained attention when the Trump administration penalized the startup over a contract dispute regarding AI’s military application, allowing Altman to solidify OpenAI’s own contract with the Pentagon.
Demand for Anthropic’s products has surged, with the company claiming its annualized revenues reached $30 billion, a figure higher than OpenAI’s reported revenue, though the two companies calculate the metric differently. Friar and Dresser declined to disclose OpenAI’s latest figures but hinted that Anthropic’s number may be inflated because it does not account for the revenue shared with cloud computing providers such as Amazon and Google.
Nonetheless, the competition remains fierce, closely linked to stock market conditions and the economy’s outlook.
“They’re likely quite close,” said Luke Emberson, a researcher at nonprofit institute Epoch AI. “Certainly, the trends show Anthropic is outpacing OpenAI. If this continues, they may soon surpass them.”
The urgency prompted Dresser to issue a memo to OpenAI employees on Sunday, first reported by The Verge, asserting that Anthropic’s coding focus “provided them an early edge” but expressing confidence that OpenAI possesses the “real structural advantage” as AI usage broadens beyond software developers and the company develops sufficient computing capacity for its AI systems.
“Their narrative is built on fear and restriction, suggesting a small group of elites should control AI,” Dresser’s memo remarked regarding Anthropic. “Our positive message will win over time: create powerful systems, implement appropriate safeguards, broaden access, and help people achieve more.”
However, for skeptics concerned about the financial sustainability of AI products like ChatGPT and Claude, the trajectory of both money-losing companies raises alarms, especially as smaller startups increasingly rely on their AI tools. Anthropic has already enforced usage limits on heavy users, causing some to wait hours to access Claude, while both firms have established service tiers that incentivize premium subscriptions, noted author and AI critic Ed Zitron.
“This is what I refer to as the subprime AI crisis,” Zitron explained. “People have built their lives and businesses around these companies that, as they attempt to cut costs, will start tightening the screws.”
One point of consensus among AI leaders and critics is that the technology is costly; whether the benefits justify the expense of those energy-hungry AI computers remains uncertain.
“People may argue that once they go public, they’ll be secure. That’s not true,” Zitron cautioned. “Public companies can and will fail, particularly those reliant on $100 billion to $200 billion annually just to survive.”