The AI Dilemma: Promise, Power, and the Price of Progress
- Miles McCarthy
- May 15, 2025
As AI enters the operational bloodstream of every industry, the most sophisticated minds in software and capital allocation are no longer asking whether it will change the world, but how, how fast, and at what cost.
On one end of the spectrum are the techno-optimists — those who see AI not as a threat, but as an accelerant. Robert F. Smith, founder of Vista Equity Partners, is one of them. “Every one of our companies uses AI in some way,” he told Axios and Yahoo Finance, referencing tools like GitHub Copilot and AI contract analyzers that are “freeing up talent to do higher-level work.” Smith is not just implementing AI; he’s operationalizing it at scale — across product development, customer service, and even back-office functions. Crucially, he adds, AI isn’t replacing workers: “We’re not seeing job elimination... What we’re seeing is job redefinition.”
Orlando Bravo, founder of Thoma Bravo, shares a similar forward-leaning posture — but with a strategic edge. He views AI through the lens of cybersecurity, not just software. “AI is going to be weaponized,” Bravo says, “and that means cybersecurity needs to become exponentially better.” With over $30 billion invested in cyber platforms, Bravo isn’t just betting on software innovation — he’s building the defense grid for the AI arms race.
These are the builders, the believers — echoing the worldview of Marc Andreessen, who recently wrote: “Yes, AI will take some jobs. Just like tractors did. Just like electricity did. Just like software has for 50 years.” In his Techno-Optimist Manifesto, Andreessen warns against “techno-pessimists” who “fear everything new.” His position is unambiguous: AI will bring abundance, not collapse, and our collective future depends on embracing it.
But not all voices in the room are singing the same chorus.
Bobby Yazdani, founder of Cota Capital and an early investor in Google and Oracle, urges investors to look past the “hype cycle” of generative AI. “It’s not reliable yet,” he states bluntly. Yazdani instead points to infrastructure — not flashy apps — as the real alpha zone. “We’re seeing data centers draw 2,000% more energy. That’s where the opportunity — and challenge — lies.”
Josh Wolfe of Lux Capital agrees on the gravity of what’s coming, but adds an ominous frame: “Our new world order is one where algorithms can wield as much influence as armies.” To Wolfe, AI is not just a technology — it is a geopolitical accelerant, capable of rebalancing power structures across industries and nations. He is a techno-realist: bullish on AI’s capacity, but deeply aware of its second-order effects.
Dan Loeb, of Third Point, splits the difference. He’s deployed capital into Nvidia, TSMC, and Amazon — the infrastructure and tooling of the AI boom — but does so with analytical caution, not exuberance. “This is the most important innovation cycle since mobile,” he notes in his Q2 2023 investor letter, but he stops short of utopianism.
So where does this leave us?
The optimists — Smith, Bravo, Andreessen — see AI as the new electricity: transformative, democratizing, inevitable. The skeptics — Yazdani, Wolfe — see an unstable layer cake: fragile tools built on fragile infrastructure, with geopolitical powder layered underneath.
Both are likely correct — in different time frames. The question, then:
Can you build systems, portfolios, and companies that benefit from the upside of AI — without becoming exposed to the systemic risk it creates?
The next five years will reward those who get that answer right. A few principles to keep in mind as we all contemplate innovation:
1. Architect for Volatility, Not Just Growth
Most founders and investors are still playing the linear game: deploy capital → leverage AI → increase efficiency → scale. But AI isn’t linear. It’s probabilistic, compounding, and fragile in unexpected ways.
Mitigate systemic risk by designing for:
- Explainability: Choose or build models that support audit trails, especially in regulated sectors.
- Redundancy: Keep non-AI fallback paths for critical decisions; AI should be a tool, not a single point of failure (see the sketch after this list).
- Segmentation: Keep sensitive systems sandboxed from generative AI until it has been validated. Don’t let your customer-support LLM leak your roadmap.
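To make the redundancy point concrete, here is a minimal sketch of a critical decision path with a deterministic fallback. The names (`score_with_llm`, `rule_based_score`, the loan-style inputs) are hypothetical stand-ins for your actual model call and business rules; the shape is what matters.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    approved: bool
    source: str   # which path actually decided: "model" or "rules"
    reason: str

def score_with_llm(application: dict) -> float:
    """Hypothetical model call; swap in your provider's SDK.
    Assume it can raise on timeouts, rate limits, or malformed output."""
    raise TimeoutError("model unavailable")  # simulate an outage

def rule_based_score(application: dict) -> float:
    """Deterministic fallback: boring, auditable, always available."""
    debt_ratio = application["debt"] / max(application["income"], 1)
    return 1.0 - min(debt_ratio, 1.0)

def decide(application: dict, threshold: float = 0.6) -> Decision:
    try:
        score = score_with_llm(application)
        source = "model"
    except Exception as exc:  # outage, bad JSON, provider deprecation...
        score = rule_based_score(application)
        source = f"rules (model failed: {exc})"
    return Decision(approved=score >= threshold,
                    source=source,
                    reason=f"score={score:.2f}")

if __name__ == "__main__":
    print(decide({"income": 80_000, "debt": 20_000}))
```

The `source` field does quiet but important work: when the fallback fires, that fact should land in your logs and dashboards, which is exactly where point 3 below picks up.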
2. Build Optionality Into Every Layer
The AI stack is evolving weekly. Your moat today could be deprecated next quarter. So don’t marry technologies — build optionality:
- Multi-model infrastructure (Anthropic + OpenAI + in-house fine-tunes); a sketch follows this list
- Interoperable data layers that outlive any one model’s architecture
- Licensing structures that let you pivot away from APIs that become too expensive or too restricted
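As a sketch of what refusing to marry a technology looks like in code: route every call through one narrow interface and keep vendors pluggable behind it. The provider classes below are stubs rather than real SDK calls; `TextModel`, `summarize`, and the provider names are illustrative assumptions.

```python
from typing import Protocol

class TextModel(Protocol):
    """The only surface the rest of the codebase is allowed to see."""
    def complete(self, prompt: str) -> str: ...

class AnthropicModel:
    def complete(self, prompt: str) -> str:
        # In practice: call the Anthropic SDK here. Stubbed for this sketch.
        return f"[anthropic] {prompt[:40]}"

class InHouseModel:
    def complete(self, prompt: str) -> str:
        # In practice: call your own fine-tune here. Stubbed for this sketch.
        return f"[in-house] {prompt[:40]}"

# Adding or dropping a vendor is one line here, zero lines at call sites.
PROVIDERS: dict[str, TextModel] = {
    "anthropic": AnthropicModel(),
    "in-house": InHouseModel(),
}

def summarize(ticket: str, provider: str = "anthropic") -> str:
    # Call sites depend on the interface, not the vendor, so pivoting away
    # from an API that gets expensive or restricted is a config change.
    return PROVIDERS[provider].complete(f"Summarize this support ticket: {ticket}")

if __name__ == "__main__":
    print(summarize("Login fails after the 2FA prompt.", provider="in-house"))
```

The same idea extends to the data layer: if your embeddings, logs, and fine-tuning corpora live in vendor-neutral formats, the models themselves become replaceable parts.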
3. Treat AI Governance as a Core Competency
Systemic risk doesn’t come from bad prompts. It comes from cascading trust failure: when laziness becomes strategy, or when one bad model’s errors spiral through payment systems, security protocols, and customer trust. Build:
- Model monitoring teams (a minimal monitoring sketch follows this list)
- AI risk dashboards
- Ethics policies that aren’t theater
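As one concrete interpretation of model monitoring, here is a minimal sketch: a rolling failure-rate check that pulls a model before its errors cascade downstream. The `ModelMonitor` class and its thresholds are illustrative assumptions, not a prescription.

```python
from collections import deque

class ModelMonitor:
    """Tracks a rolling failure rate so a bad model gets halted
    before its errors cascade into downstream systems."""

    def __init__(self, window: int = 100, alert_rate: float = 0.05):
        self.outcomes = deque(maxlen=window)  # True means a bad output
        self.alert_rate = alert_rate

    def record(self, output_was_bad: bool) -> None:
        self.outcomes.append(output_was_bad)

    @property
    def failure_rate(self) -> float:
        return sum(self.outcomes) / len(self.outcomes) if self.outcomes else 0.0

    def should_halt(self) -> bool:
        # Enough samples and too many failures: route traffic to the
        # fallback path and page a human instead of degrading silently.
        return len(self.outcomes) >= 20 and self.failure_rate > self.alert_rate

if __name__ == "__main__":
    monitor = ModelMonitor(window=50, alert_rate=0.10)
    for i in range(30):
        monitor.record(output_was_bad=(i % 5 == 0))  # simulate ~20% failures
    print(f"failure rate: {monitor.failure_rate:.0%}, halt: {monitor.should_halt()}")
```

A risk dashboard is, at its simplest, this number on a screen. The governance question is who gets paged when `should_halt()` turns true, and whether they have the authority to act.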
You can build systems that extract asymmetric upside from AI without absorbing its full systemic downside — but only if you treat AI like infrastructure, not magic.
