Your GenAI Strategy is Theater
Your board wants to know what you're doing about AI. Your CEO forwarded another breathless LinkedIn post about autonomous agents. A VP just IMed you asking whether you should be "building with Claude" or "going all-in on GPT-5."
Here's what you should tell them: We don't have a GenAI strategy, and that's intentional.
Not because AI doesn't matter. It does. But because the question itself reveals a fundamental misunderstanding of where we are in this cycle, what these tools actually do, and how technology adoption works in organizations that ship real products to real customers.
The Dunning-Kruger Effect Has Never Been Stronger
We're in the overconfidence trough of the GenAI hype cycle: that sweet spot where people with minimal expertise have maximum confidence. The Dunning-Kruger effect describes how incompetence often breeds certainty. Engineering leaders and executives who watched an OpenAI demo (or worse, read a think piece about "AI-first culture") are now convinced that everyone must adopt AI immediately or risk extinction.
This is cargo culting at scale.
I'm not even sure the main players know how this will pan out. Microsoft is hedging with both OpenAI partnership and their own models. Google is scrambling to integrate Gemini everywhere while trying not to kill their search cash cow. Amazon is playing infrastructure landlord. Anthropic is racing to be the "safe" choice for enterprises.
If the companies building these things are still figuring out their angles, why are we pretending we need a fully baked strategy?
These Aren't Magic Boxes. They're Software
Most "AI-powered" products you're being sold aren't GenAI doing the actual work. They're traditional software applications that retrieve information from multiple places, use an LLM to synthesize or transform it, then hand it back to you.
That's not revolutionary. That's ETL with better natural language processing.
Look at the finance app cycle we've seen over the past 18 months. Every third startup promises to "revolutionize your financial planning with AI." What they actually built: a dashboard that connects to Plaid, pulls your transactions, categorizes them (the same way Mint did in 2007), and uses GPT-4 to write you a friendly summary.
The AI layer adds conversational polish. The value comes from aggregating your data and showing it to you in one place—the same value proposition as every fintech app since online banking existed.
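To make that concrete, here's roughly what the architecture looks like. This is a deliberately minimal Python sketch; the Transaction type, the category rules, and the injected llm_complete callable are placeholders for illustration, not anyone's real API.

```python
from collections import defaultdict
from dataclasses import dataclass


@dataclass
class Transaction:
    merchant: str
    amount: float


# 1. Rule-based categorization: the same lookup-table approach Mint used.
CATEGORY_RULES = {"Starbucks": "Coffee", "Shell": "Gas", "Safeway": "Groceries"}


def categorize(tx: Transaction) -> str:
    return CATEGORY_RULES.get(tx.merchant, "Other")


def monthly_summary(transactions: list[Transaction], llm_complete) -> str:
    """llm_complete is whatever LLM call you use; it only writes the prose."""
    # 2. Traditional data plumbing: aggregate spend by category.
    totals = defaultdict(float)
    for tx in transactions:
        totals[categorize(tx)] += tx.amount

    # 3. The "AI" layer: turn the numbers into a friendly paragraph.
    prompt = f"Write a short, friendly spending summary of: {dict(totals)}"
    return llm_complete(prompt)
```

Everything except the last two lines is software we've been shipping for fifteen years.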
The Token Economics Don't Work Yet
Here's what nobody wants to talk about in earnings calls: the unit economics of most GenAI products are broken.
Snorkel AI just laid off 13% of their workforce. Why? Because inference costs are still too high for sustainable software margins. When your product's variable cost scales linearly with usage, and you're paying OpenAI $0.01 per 1K output tokens ($10 per million), you're not building a sustainable business—you’re subsidizing experimentation with venture capital.
This will improve. Models will get cheaper. Local and open-source alternatives will get better. But right now, if your "AI strategy" depends on massive LLM calls for every user interaction, you're betting your P&L on OpenAI's roadmap, not your own.
What Not to Do: The Opendoor Memo
You want to see what desperation looks like? Opendoor's new CEO sent a memo on day 11 mandating that everyone "Default to AI." Not explore it. Not experiment with it. Default to it—for everything, starting now.
In his words: "Performance reviews will track how frequently each person defaults to AI. If you open Google Docs before an AI tool, you're failing the mandate."
This is command-and-control masquerading as innovation.
When leadership doesn't understand a technology deeply enough to identify specific, high-value use cases, they issue mandates. "Everyone must use AI" is the 2025 version of "everyone must be agile" or "everyone must adopt microservices."
It's theater. And your best engineers see right through it.
The Vibe-Coding Trap
The most dangerous narrative right now is that GenAI lets you "vibe code" entire applications into existence. Just describe what you want, and Claude or Cursor or Copilot will build it for you.
This works great for demos. It falls apart in production.
Here's what LLMs are actually good at: generating boilerplate, suggesting completions based on patterns they've seen before, and synthesizing information from well-documented public knowledge.
Here's what they're terrible at: understanding your specific business context, navigating legacy codebases where tribal knowledge isn't documented, making architectural decisions that account for your team's skills and constraints, and debugging the subtle edge cases that only emerge when real users touch real systems.
If you have a messy codebase built on domain knowledge that isn't well-represented in LLM training data, AI isn't going to magically refactor it for you. It's going to hallucinate plausible-looking garbage that compiles but doesn't work.
The Real Strategy: Get Your House in Order
Here's the contrarian take that will actually serve you: Stop chasing AI strategy. Start shipping fundamentals.
Pay down your tech debt. Refactor that legacy code. Document your systems. Build a culture where engineers actually understand the business problems they're solving, not just the tickets they're closing.
Because when the dust settles—when we figure out which AI capabilities are genuinely useful versus which are demos—the teams that will win are the ones with clean codebases, clear documentation, and engineers who understand their domain deeply enough to know where automation creates leverage.
AI value compounds slowly, like dollar-cost averaging into index funds. It's not a step function. It's not a silver bullet. It's a set of capabilities that will gradually get cheaper, more reliable, and more integrated into the tools you already use.
Your Tools Already Have AI (You Just Don't Notice)
Every tool in your stack is getting AI features right now—GitHub has Copilot, Notion has AI writing assistance, and Warp offers AI command suggestions.
You don't need a "GenAI strategy" to take advantage of this. You need engineers who stay curious, experiment with their tools, and share what works.
The best AI adoption I've seen doesn't come from top-down mandates. It comes from individual contributors discovering that Cursor makes refactoring faster, or that Claude is actually good at writing test cases, or that GitHub Copilot is legitimately helpful for boilerplate.
They tell their teammates, and it spreads.
That's how technology adoption actually works in healthy organizations. Not from strategy decks. From practitioners solving real problems and sharing what worked.
Collaboration, People, Culture = Destiny
Your strategy cannot be the tool. If your answer to "What's our AI strategy?" is "We're going all-in on Claude" or "We're building everything with Copilot," you don't have a strategy. You have a vendor dependency.
What matters is whether your team collaborates well. Whether they're empowered to experiment. Whether your culture rewards thoughtful adoption over performative innovation.
AI is a capability, not a destination. The question isn't "What's our GenAI strategy?" The question is "What problems are we solving, and which of our current or future capabilities—AI or otherwise—give us leverage on those problems?"
That's a much harder question. But it's the right one.
LLMs Have Probably Peaked (And That's Fine)
Here's my bet: LLMs as they currently exist have plateaued. GPT-5, Claude 4.5, Gemini Pro—all roughly the same. The differences are marginal.
What happens next isn't "GPT-6 solves everything." It's "open-source models get good enough for most use cases, and companies like DigitalOcean, Vultr, and the major cloud providers make it trivially easy to host them and run inference against them."
You'll be able to run a fine-tuned Llama model for your specific use case, at a fraction of the cost of API calls to OpenAI. The infrastructure layer will commoditize. The moat won't be "we have access to the best LLM." The moat will be "we have the best data, the cleanest integrations, and the deepest understanding of our customers' workflows."
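In practice, the switch can be as small as changing a base URL, assuming you host an inference server that exposes an OpenAI-compatible API (vLLM and Ollama both do); the URL and model name below are placeholders for your own deployment.

```python
from openai import OpenAI

# Point the standard client at your own inference server instead of api.openai.com.
client = OpenAI(
    base_url="http://localhost:8000/v1",  # hypothetical self-hosted endpoint
    api_key="not-needed-locally",
)

response = client.chat.completions.create(
    model="my-finetuned-llama",  # whatever name your server registers
    messages=[{"role": "user", "content": "Summarize this customer's workflow."}],
)
print(response.choices[0].message.content)
```

The application code barely changes. What changes is who you're paying, and by how much.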
AI may indeed advance in other ways—better reasoning models, more efficient architectures, breakthroughs in areas beyond LLMs. But the current generation of "throw tokens at everything" approaches has hit diminishing returns.
What to Do Instead
So what should you tell your board, your CEO, and that VP IMing you?
- We're experimenting. We've identified three specific workflows where AI might add value: [be specific]. We're running controlled experiments. We'll report back in 60 days.
- We're not mandating anything. Engineers who find AI tools useful are already using them. We're sharing what works. We're not forcing adoption for the sake of adoption.
- We're playing the long game. AI capabilities are improving fast. Infrastructure is getting cheaper. We're staying informed, but we're not betting the company on tools that might look totally different in 18 months.
- We're focused on fundamentals. Clean code, clear documentation, engineers who understand the business. When the right AI capabilities emerge, we'll be positioned to adopt them quickly—because our house is in order.
- We're ready to pivot. No matter what we pick today, it's going to change. We're optimizing for adaptability, not commitment to specific vendors or approaches.
That's not a strategy deck. It's not a roadmap with phases and milestones. But it's honest. And it positions you to actually capture value when the hype settles and the useful patterns emerge.
The Takeaway
The next time someone asks you "What's our GenAI strategy?", push back.
Ask them what problem they're trying to solve and why they think AI is the answer. Ask if they've actually used these tools for real work, or if they're just repeating what they heard at a conference.
Then tell them the truth: We don't have a GenAI strategy. We have a business strategy. We have an engineering strategy. We have a culture of experimentation and learning.
And when AI tools make us measurably better at executing that strategy, we'll adopt them. Not before.
Because the real risk isn't falling behind on AI. It's chasing hype at the expense of shipping software that actually works, for customers who actually exist, in service of business outcomes you can actually measure.
Build the thing that matters. Use the tools that help. Ignore the rest.