Date the models before you commit
Why the smartest builders are keeping their AI integrations flexible
👋🏻 Hi, it’s me, Nickey Skarstad, and welcome to Builders, where I break down what it actually takes to build products that people love (and actually use). No fluff, just hard-earned lessons.
As DeepSeek made a splash this week, it got me thinking about how to choose which model to use, and how to think about what you might need in a week, a year, and in ten. Your ability to switch quickly could make or break you down the road. Read on for how to build for flexibility 👇🏻
If the launch of DeepSeek this week taught us anything, it's that the model you're using today probably won't be your best option six months from now. And yet, some folks are still building themselves into a corner.
Here's what some AI integrations look like right now:
Teams are diving deep into OpenAI's SDK, building directly against its specific patterns
Prompt templates are being hardcoded around GPT-4's quirks and behaviors
Products are being built around model-specific features like OpenAI's function calling or Claude's XML capabilities
Error handling and retry logic are tightly coupled to one provider's patterns
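Here's a minimal, hypothetical sketch of what that coupling looks like in practice. The function and field names are illustrative, not from any real codebase, but the shape of the problem is real: the feature code and the OpenAI-specific request format are fused together.

```python
# Anti-pattern sketch: business logic that builds an OpenAI-shaped request
# inline. The model name, "messages" structure, and function-calling schema
# are all provider-specific assumptions baked into the feature itself.

def summarize_ticket(ticket_text: str) -> dict:
    return {
        "model": "gpt-4",  # hardcoded model choice
        "messages": [
            {"role": "system", "content": "Summarize support tickets."},
            {"role": "user", "content": ticket_text},
        ],
        # OpenAI-only feature; a Claude adapter would need a different shape
        "functions": [
            {"name": "record_summary", "parameters": {"type": "object"}}
        ],
    }

payload = summarize_ticket("Customer can't log in after password reset.")
```

Swapping providers here means rewriting the feature, not reconfiguring it.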
When you're racing to ship features, taking time to build abstraction layers feels like unnecessary work. But let me share a story that shows why this matters.
Last year, I was chatting with a friend at a startup that had deeply integrated GPT-4 into their product workflow: custom prompts, fine-tuned response handling, the works. Then Claude 3 dropped, offering cheaper tokens and notably better performance for key tasks. The catch? Switching would require rewriting significant portions of the codebase. They had inadvertently built GPT-4's specific patterns into their system's foundation.
Here’s how to set up your product for an inevitable model change:
1. Create clean abstractions
Treat your AI layer like a pluggable service:
Build model-agnostic prompt templates
Standardize output parsing
Create unified error handling across different providers
Maintain a clear separation between business logic and AI integration points
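The four points above can be sketched in a few dozen lines. This is a simplified illustration, not a production library: the provider classes are stubs standing in for real vendor SDK adapters, and all names here are assumptions for the example.

```python
# A minimal pluggable AI layer: one interface, one error type,
# model-agnostic prompts, and business logic that never touches a vendor SDK.
from abc import ABC, abstractmethod


class ModelError(Exception):
    """Unified error type raised by every provider adapter."""


class ModelProvider(ABC):
    @abstractmethod
    def complete(self, prompt: str) -> str:
        """Return the model's text response, raising ModelError on failure."""


class OpenAIProvider(ModelProvider):
    def complete(self, prompt: str) -> str:
        # A real adapter would call the OpenAI SDK and translate its
        # exceptions into ModelError. Stubbed for illustration.
        return f"[openai] {prompt}"


class ClaudeProvider(ModelProvider):
    def complete(self, prompt: str) -> str:
        return f"[claude] {prompt}"


def render_prompt(template: str, **values: str) -> str:
    # Model-agnostic template: plain substitution, no provider-specific
    # roles or tags baked in.
    return template.format(**values)


class AIService:
    """Business logic talks to this class, never to a vendor SDK directly."""

    def __init__(self, provider: ModelProvider):
        self.provider = provider

    def summarize(self, text: str) -> str:
        prompt = render_prompt("Summarize in one sentence: {text}", text=text)
        try:
            return self.provider.complete(prompt)
        except Exception as exc:  # unify all provider failures
            raise ModelError(str(exc)) from exc


# Swapping models is now a one-line change:
service = AIService(ClaudeProvider())
```

When Claude 3 (or the next DeepSeek) drops, you change the constructor argument, not your codebase.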
2. Design for continuous testing
The teams winning in AI aren't just picking the best model; they're constantly evaluating new ones:
Set up straightforward A/B testing between models
Measure performance metrics consistently across providers
Enable using different models for different features based on their strengths
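As a sketch of what "straightforward A/B testing" can mean, here's a tiny traffic splitter that routes requests across models and records the same metric for each. The provider call is a stub, and the class and metric names are illustrative assumptions, not a real framework.

```python
# Lightweight cross-provider experiment: weighted routing plus consistent
# per-provider latency tracking.
import random
import time


def fake_call(provider_name: str, prompt: str) -> str:
    # Stand-in for a real provider call behind your abstraction layer.
    return f"[{provider_name}] {prompt}"


class ModelExperiment:
    """Route a share of traffic to each model and record the same
    metrics for every provider."""

    def __init__(self, weights: dict[str, float]):
        self.weights = weights  # e.g. {"gpt-4": 0.5, "claude-3": 0.5}
        self.metrics: dict[str, list[float]] = {name: [] for name in weights}

    def run(self, prompt: str) -> str:
        name = random.choices(
            list(self.weights), weights=list(self.weights.values())
        )[0]
        start = time.perf_counter()
        result = fake_call(name, prompt)
        self.metrics[name].append(time.perf_counter() - start)
        return result

    def avg_latency(self, name: str) -> float:
        samples = self.metrics[name]
        return sum(samples) / len(samples) if samples else 0.0


experiment = ModelExperiment({"gpt-4": 0.5, "claude-3": 0.5})
for _ in range(100):
    experiment.run("Classify this ticket.")
```

The same pattern extends to accuracy or cost: as long as every provider reports into the same metrics store, comparisons stay apples-to-apples.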
3. Think long-term about costs
Every time you tightly couple your system to a specific model's patterns, you're accumulating technical debt that will cost you later. Don’t optimize dev speed at the expense of operational flexibility.
Example time
Say you test a handful of different models to power your core feature. In reality, you see something like this:
Claude 3 was 40% more accurate
GPT-4 was 30% faster
DeepSeek was 60% cheaper with comparable results
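One way to make that tradeoff concrete is a simple weighted score. This toy sketch uses the illustrative numbers above (normalized so the baseline is 1.0) and hypothetical weights; the point is that the same benchmark results produce different winners depending on what your product prioritizes.

```python
# Toy model-selection sketch: weight the benchmark dimensions by what
# your product cares about, then rank. All numbers are illustrative.

models = {
    # accuracy / speed / cost-efficiency, relative to a 1.0 baseline
    "claude-3": {"accuracy": 1.4, "speed": 1.0, "cost": 1.0},
    "gpt-4":    {"accuracy": 1.0, "speed": 1.3, "cost": 1.0},
    "deepseek": {"accuracy": 1.0, "speed": 1.0, "cost": 1.6},
}


def rank(weights: dict[str, float]) -> list[tuple[str, float]]:
    scored = {
        name: sum(weights[k] * v for k, v in scores.items())
        for name, scores in models.items()
    }
    return sorted(scored.items(), key=lambda kv: kv[1], reverse=True)


# A cost-sensitive product weighs the same results very differently
# from an accuracy-critical one:
cost_first = rank({"accuracy": 0.2, "speed": 0.2, "cost": 0.6})
accuracy_first = rank({"accuracy": 0.6, "speed": 0.2, "cost": 0.2})
# cost_first ranks deepseek on top; accuracy_first ranks claude-3 on top
```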
Now, what you choose and prioritize depends on what you’re building and your unique constraints. Furthermore, what is important today could change as you seek out product-market fit. It’s hard to predict the future, so building your system to be model-flexible from the beginning will help in the long term.
Next steps
If you're already deep into a rigid implementation, panic! Just kidding: you’re building software, not saving lives, so calm down. Here's where to start:
Audit your current AI integrations: how many assumptions are you making about your chosen model's behavior?
Document your core AI requirements independent of any specific provider
Start building a simple abstraction layer for your next AI feature
Set up basic performance monitoring that works across different providers
Experiment constantly to stay on top of what’s best for your product
The most important thing is to start thinking about this now. The AI landscape is evolving at an unprecedented pace. New models are emerging monthly (daily?), each with unique strengths and capabilities. The winners in this era won't be the teams that pick the right model today; they'll be the teams that can quickly adapt to the right model tomorrow.
Stay flexible, stay curious, and keep shipping!
---
What are you building? Leave a comment and let me know how you're approaching AI model flexibility in your product. 👇🏻