Five hard truths I learned building and scaling AI (and why founders should care)

By Andreea Wade

(Context: These thoughts took shape partly because I was asked about this at the Dublin Tech Summit last week, but they also reflect things I see missing in startups that pitch me, specifically founders building all sorts of wrappers.)

(Extra extra context: I built AI in what the EU AI Act — and similar regulations across the globe, the US included — classifies as high-risk applications of AI. We ended up designing an award-winning Responsible AI program. In 2024 we also won the prestigious AI Breakthrough Award for “Best Overall AI Solution”, working with thousands of companies across multiple industries and processing trillions of data points.)

When I built my AI startup, I learned that this space doesn’t forgive half-baked ideas or fluffy narratives. After selling that company, I led global AI development and strategy in a large American enterprise (the acquirer), so I’ve seen these challenges from both sides of the table. Now, as a General Partner at the venture capital firm Delta Partners, backing the next wave of founders, I see the same patterns and pitfalls again and again.

Here are five things we nailed early (we really did!), and why they’re essential for anyone building AI products today.

1. Data Isn’t an Afterthought, It’s the Bedrock

If you’re building an AI product, data isn’t something you plug in later. It is the product. This seems obvious, but in practice, most founders treat data as a bolt-on instead of a core design principle.

  • Why it matters: Without a clear data acquisition and enrichment strategy from day one, you’re stuck relying on someone else’s dataset. That’s not a moat, it’s a risk.

  • What I see now: Investors are laser-focused on proprietary data loops. If you can show how your product generates new data as people use it, that’s your defensibility, especially as the models themselves become commoditised.

What I learned: In my own company, every feature was also a data-collection engine (or an explainability feature). After the acquisition, I saw how global enterprises prioritise data ecosystems as the backbone of their AI roadmaps. Build for this from the start.

2. Scaling Isn’t a “Later” Problem

AI loves to scale, and it punishes you when you’re not ready. Model size, inference costs, and latency all become real business blockers if you treat scale as an afterthought.

  • Why it matters: Sure, compute costs can kill your margins, but scaling isn’t just about money. In my own startup, we had to build our own vector database (before the category even existed) to make our recommender engine fast and accurate enough. That was about getting the right infrastructure in place to actually deliver on what we promised customers.

  • What I see in the enterprise world: Scaling AI isn’t just about throwing money at GPUs, it’s about architecture decisions that don’t box you in later. It’s about making sure your data pipelines, retrieval strategies, and compute layers can handle the complexity and speed your customers expect.

My advice: Don’t assume you’ll figure out scalability “later.” Design for scale early, whether that’s custom infra for unique needs (like we did with vector search) or careful orchestration across cloud and edge. These decisions shape how your product performs, and whether it can survive real-world demands.
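For flavour, here’s roughly the kind of operation a vector database has to make fast: a nearest-neighbour lookup over item embeddings. This is a toy brute-force sketch in Python (the dimensions and data are made up); a real index replaces the linear scan with an approximate structure like HNSW precisely because this doesn’t scale.

```python
import numpy as np

def top_k(query: np.ndarray, index: np.ndarray, k: int = 5) -> np.ndarray:
    """Return indices of the k most similar vectors by cosine similarity."""
    # Normalise both sides so a plain dot product equals cosine similarity.
    q = query / np.linalg.norm(query)
    idx = index / np.linalg.norm(index, axis=1, keepdims=True)
    scores = idx @ q
    # Sort descending and keep the k best matches.
    return np.argsort(scores)[::-1][:k]

rng = np.random.default_rng(0)
index = rng.normal(size=(10_000, 128))  # 10k item embeddings, 128-dim
query = rng.normal(size=128)
print(top_k(query, index))
```

At 10k items a scan is fine; at hundreds of millions, the cost of this one line is exactly the infrastructure decision that boxes you in (or doesn’t) later.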

3. It’s Never Just One Model

When I started out, I thought “the AI model” was the product. But the reality? You’ll end up with a fleet of models: some LLMs, some classification engines, some domain-specific retrieval models.

  • Why it matters: One-size-fits-all doesn’t exist in AI. Different tasks need different tools, and you need to orchestrate them.

  • What I’ve seen post-acquisition: At scale, big enterprises use a zoo of models, some home-grown, some external, because modularity means faster adaptation and cost control.

For founders: Think modular, not monolithic. Build a stack where you can swap models in and out as better ones emerge, or as your use cases evolve.
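One way to picture “modular, not monolithic”: route every task through a common interface, so any model can be swapped out without touching the callers. A toy sketch (the registry, task names, and stand-in functions here are all hypothetical, not a real framework):

```python
from typing import Callable, Dict

# Hypothetical registry: each task maps to an interchangeable model callable.
REGISTRY: Dict[str, Callable[[str], str]] = {}

def register(task: str):
    """Decorator that binds a model implementation to a task name."""
    def wrap(fn: Callable[[str], str]) -> Callable[[str], str]:
        REGISTRY[task] = fn
        return fn
    return wrap

@register("classify")
def keyword_classifier(text: str) -> str:
    # Stand-in for a small classification model.
    return "spam" if "win money" in text.lower() else "ham"

@register("summarise")
def first_sentence_summariser(text: str) -> str:
    # Stand-in for an LLM call; a real model slots in behind the same interface.
    return text.split(".")[0] + "."

def run(task: str, text: str) -> str:
    return REGISTRY[task](text)

print(run("classify", "Win money now!"))  # spam
```

Swapping `keyword_classifier` for a fine-tuned model, or `first_sentence_summariser` for an external LLM, is then a one-line registry change rather than a rewrite.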

4. Agentic UX Is a Different Ballgame

Traditional UI/UX patterns just don’t cut it for AI-powered experiences. AI isn’t a feature you plug in, it’s a co-pilot that changes how people work.

  • Why it matters: Users need to trust the AI enough to let it do its job, while still feeling in control. That means you need to design for things like “pause,” “override,” “explain,” not just buttons and sliders.

  • What I saw leading enterprise AI: Even at massive scale, companies that didn’t rethink UX for agentic workflows (or just AI first features/products) struggled with adoption — because users didn’t feel safe letting the AI lead.

My advice: Work with great UX designers early. Build experiences that let users stay in control, even as AI takes on more of the heavy lifting.
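To make “pause,” “override,” and “explain” concrete: one pattern is to have the agent propose actions rather than execute them, with explicit user-control states gating what happens next. A toy sketch (all names here are illustrative assumptions, not a real agent framework):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ProposedAction:
    description: str
    rationale: str  # powers an "explain" affordance in the UI

@dataclass
class AgentSession:
    paused: bool = False
    log: List[str] = field(default_factory=list)

    def propose(self, action: ProposedAction) -> str:
        """Agent never executes directly; it proposes, and the user decides."""
        if self.paused:
            self.log.append(f"held: {action.description}")
            return "held"
        self.log.append(f"awaiting approval: {action.description}")
        return "awaiting approval"

    def explain(self, action: ProposedAction) -> str:
        return action.rationale

session = AgentSession()
action = ProposedAction("archive 12 emails", "all match the 'newsletter' rule")
print(session.propose(action))  # awaiting approval
session.paused = True
print(session.propose(action))  # held
```

The point isn’t the code, it’s the contract: users can always see what the AI wants to do and why, before it does it. That’s what makes people feel safe letting it lead.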

5. Responsible AI Isn’t Just a PR Line

“Responsible AI” can feel like a checkbox, especially when you’re moving fast. But in reality? Poor governance will tank user trust and investor confidence.

  • Why it matters: Regulations are here (and getting stricter), but more importantly, trust is everything. If your AI is opaque, biased, or sloppy, users (and enterprise buyers) will walk away.

  • What I learned post-exit: In a big enterprise, responsible AI wasn’t a “nice to have”, it was table stakes. Model governance, data lineage, bias testing: these are the foundations of any serious AI effort.

For founders: Assign someone to own this early. Make it part of your build culture, not an afterthought. It’s not glamorous, but it’s how you build a company that lasts.

Final Thoughts

These are the hard truths I’ve lived through, first as a founder, and then leading global AI development at scale. Now, as a VC, I’m backing founders who respect the complexity of building real AI products — founders who treat these truths not as hurdles, but as design constraints for something truly defensible.

If there’s one takeaway, it’s this: AI isn’t magic. It’s a stack of trade-offs, choices, and engineering realities. The founders who win are the ones who lean into this complexity, who build for it, not around it — I genuinely can’t say this enough.

PS: If you’re building something in this space and want to compare notes, drop me a line.
