
AI Unpacked: Price Wars, Agent Computing, and the China Question

  • Writer: Luke Gardner
  • 7 days ago
  • 5 min read

AI Unpacked


Welcome to this week's edition of AI Unpacked, where we break down the latest developments in artificial intelligence and what they mean for today's financial landscape. In this issue, we cover the accelerating model war between U.S. and Chinese AI labs, the growing pressure of AI capital spending on big tech earnings, the emergence of agentic computing as the industry's next frontier, and what the Stanford 2026 AI Index reveals about where the industry actually stands.


The Model War Heats Up: DeepSeek V4 vs. GPT-5.5

April 2026 was the most intense month in the history of AI model releases. GPT-5.5 shipped on April 23. DeepSeek V4 Preview dropped 24 hours later. Claude Opus 4.7 launched on April 16. Gemini 3.1 Pro, Llama 4, Qwen 3 — all within the same six-week window.


The back-to-back launches were no coincidence. They were a signal that the frontier model race has no finish line.


DeepSeek has slashed prices on its artificial intelligence models, including its latest V4, which now undercuts comparable OpenAI products by nearly 90 percent, potentially triggering a price war in the highly competitive AI market. OpenAI charges $30 per million output tokens for its GPT-5.4 model, and Anthropic charges $25 for Claude Opus 4.6. DeepSeek's V4-Pro comes in at $3.48, nearly a 9x price gap against OpenAI for a model that, by coding benchmarks, performs within striking distance.
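As a back-of-the-envelope check on that gap, here is a quick sketch using the per-million-output-token prices quoted above (the article's figures, not live API pricing):

```python
# Per-million-output-token prices cited in the article (USD).
# These are the quoted figures, not current published API rates.
prices = {
    "GPT-5.4 (OpenAI)": 30.00,
    "Claude Opus 4.6 (Anthropic)": 25.00,
    "DeepSeek V4-Pro": 3.48,
}

baseline = prices["DeepSeek V4-Pro"]
for model, price in prices.items():
    ratio = price / baseline          # how many times pricier than DeepSeek
    discount = 1 - baseline / price   # savings from switching to DeepSeek
    print(f"{model}: ${price:.2f}/1M tokens, "
          f"{ratio:.1f}x DeepSeek, {discount:.0%} cheaper via DeepSeek")
```

On these numbers, OpenAI's rate works out to roughly 8.6x DeepSeek's, an ~88 percent saving per output token, which is where the "nearly a 9x price gap" figure comes from.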

Like DeepSeek's previous models, V4 is open source, meaning it is available for anyone to download, use, and modify. That open-weight strategy is reshaping the competitive landscape in ways U.S. labs are still responding to.


For investors, the implications are significant. A sustained AI price war compresses margins across the entire software stack. Companies that built business models around premium model pricing — and the cloud providers that host them — now face a structural cost challenge that is unlikely to reverse.


Meta's $145 Billion Bet and the Capital Spending Problem


Another dominant story this week is the sheer scale of AI capital expenditure hitting technology earnings.


The most notable and debated update in Meta's latest earnings was the increase in 2026 capital spending guidance to $125 to $145 billion, up from the previous range of $115 to $135 billion. Management attributed the raise to higher memory chip prices and additional data center costs for AI training and inference. That is a big jump from the roughly $72 billion spent in 2025, and total 2026 expenses are now expected to reach $162 to $169 billion.
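The scale of that jump is easy to quantify from the figures above. A quick sketch, taking the midpoint of the new guidance range (all numbers are the cited guidance, not actuals):

```python
# Rough year-over-year math on Meta's 2026 capex guidance,
# using the figures cited in the article (all in $ billions).
capex_2025 = 72.0                            # approximate 2025 spend
guidance_low, guidance_high = 125.0, 145.0   # updated 2026 guidance range

midpoint = (guidance_low + guidance_high) / 2
yoy_growth = (midpoint - capex_2025) / capex_2025
print(f"2026 midpoint: ${midpoint:.0f}B, ~{yoy_growth:.0%} above 2025")
```

At the $135 billion midpoint, that is nearly a 90 percent year-over-year increase in capital spending, which is why markets are focused on when the returns arrive.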


Zuckerberg highlighted goals for personal superintelligence and progress at Meta Superintelligence Labs, but markets responded cautiously. The tension is straightforward:


  • AI infrastructure spending is accelerating faster than revenue growth

  • Memory chip shortages are pushing costs higher across the supply chain

  • Investors are asking whether returns will materialize before capital runs out


Meta's Q1 operating income rose about 30 percent to $22.9 billion, but the jump in capital spending shows the company is betting that AI will bring greater efficiency and new revenue sources, such as AI agents, improved ads, and possible enterprise tools.


Meta is not alone. This dynamic is playing out across every major hyperscaler. Morgan Stanley Research estimates that nearly $3 trillion of AI-related infrastructure investment will flow through the global economy by 2028, with more than 80 percent of that spending still ahead.


The Agent Era: AI Is Becoming an Execution Layer


Beyond model releases, the most consequential structural shift underway is the move from AI as a tool to AI as an autonomous executor.


OpenAI's GPT-5.5 represents a shift from passive language models to proactive, agent-driven systems capable of executing complex tasks across applications with minimal instruction. The model improves coding, computer control, and general business use cases, reflecting a broader focus on real-world utility over benchmarks.


OpenAI is also reportedly developing a smartphone designed around AI agents that replace traditional apps, potentially in partnership with major chipmakers and manufacturers. The concept envisions a device that continuously understands user context and executes tasks directly, combining on-device and cloud-based models.


For enterprise software companies, this is an existential question. Salesforce announced a headless architecture that exposes its entire platform via APIs, enabling AI agents to directly access data, workflows, and tasks without traditional user interfaces. The move points toward outcome-based pricing, reduced reliance on implementation services, and new competitive advantages tied to distribution and domain expertise.


The shift toward agents means the value in AI is migrating away from models themselves and toward whoever controls the workflow, the data, and the distribution channel.


Geopolitics: China Builds a Parallel AI Stack


The geopolitical dimension of AI intensified this week as DeepSeek's V4 release drew direct accusations from U.S. officials.


The same day DeepSeek dropped V4, U.S. science advisor Michael Kratsios published a White House memo accusing Chinese AI developers of running "industrial-scale campaigns" to copy U.S. technology. The specific claim is that Chinese labs including DeepSeek have been conducting "illicit distillation attacks," meaning they train their models on the outputs of U.S. models. China's foreign ministry called those claims "groundless."


More significant than the diplomatic spat is the hardware story underneath it. DeepSeek V4 was both trained on and deployed using Huawei's Ascend AI processors. This marks a significant shift away from Nvidia hardware and is seen as a key step toward a self-contained Chinese AI supply chain.


As Huawei scales its Ascend 950 supernodes through the second half of this year, DeepSeek's API prices will likely fall further. OpenAI, Anthropic, and Google are all buying Nvidia compute at rates that are flat to rising. The cost curves point in opposite directions.


This is the part of the story markets have not fully priced in. U.S. export controls were designed to limit Chinese AI capability by restricting chip access. DeepSeek V4 suggests that strategy is being outmaneuvered through alternative hardware development.


The Stanford AI Index: What the Data Actually Says


Amid all the noise, the Stanford 2026 AI Index provided some of the clearest data on where the industry stands.


Generative AI reached 53 percent population adoption within three years, faster than the personal computer or the internet. The estimated value of generative AI tools to U.S. consumers reached $172 billion annually by early 2026, with the median value per user tripling between 2025 and 2026.


AI data center power capacity rose to 29.6 gigawatts, roughly what it takes to power the entire state of New York at peak demand. The environmental cost of AI infrastructure is becoming a material risk for companies with ESG commitments, and regulators are paying closer attention to energy consumption.


On the competitive landscape: the U.S. and China are almost neck and neck on AI model performance. As of March 2026, Anthropic leads, trailed closely by xAI, Google, and OpenAI. Chinese models from DeepSeek and Alibaba lag only modestly. With the best models separated by razor-thin margins, labs are now competing on cost, reliability, and real-world usefulness.


The Bigger Picture


The key takeaway from this week's developments is that AI is entering a phase defined by commoditization, competition, and capital discipline.

Three structural forces now dominate the industry:


Price Compression — DeepSeek V4 has reset expectations for what frontier AI should cost. Every business model built on premium AI pricing is now under pressure to justify its margin.


Agent-Driven Computing — AI is no longer a chatbot layer. It is becoming the execution engine for enterprise workflows, device interfaces, and business automation. The companies that control agent infrastructure will control the next cycle of value creation.


Geopolitical Bifurcation — The U.S.-China AI race is no longer just about model benchmarks. It is about hardware ecosystems, supply chains, and parallel infrastructure. DeepSeek running on Huawei chips is a signal that two separate AI stacks are forming — and investors need to understand which stack their portfolio companies depend on.


The companies that succeed in this next phase will be those that can deliver real utility at defensible cost structures — not just the ones with the largest models or the biggest marketing budgets.
