Example metrics and fitness functions for Evolutionary Architecture
In my previous article, "Your SaaS's most important trait is Evolvability," I talked about the need to define fitness functions that ladder up to core company metrics like NPS, CSAT, GRR, and COGS. Just today I had a great follow-up: a connection on LinkedIn asked me for specifics for an early-stage SaaS. I think it'd be valuable to follow up that post with some examples from that conversation.
Pick a general metric and find a specific one that ladders up to it.
First, pick a metric that's important to the company at large. For an early-stage SaaS, I'd say that's NPS. It's easy to collect, low touch, and Promoters are the people who will help you clinch renewals and propagate your SaaS to their colleagues at other organizations. The more promotable your software is, the less work your sales and renewals folks have to do to move their pipeline.
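For reference, NPS is simple to compute from the standard 0-10 survey: the percentage of promoters (scores of 9-10) minus the percentage of detractors (scores of 0-6). A minimal sketch in Python:

```python
def nps(scores: list[int]) -> float:
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6)."""
    if not scores:
        raise ValueError("need at least one survey response")
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100.0 * (promoters - detractors) / len(scores)

# Example: 5 promoters, 3 passives, 2 detractors -> NPS of 30
print(nps([10, 9, 9, 10, 9, 8, 7, 8, 4, 6]))  # 30.0
```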
Promoters are people who think your software is a joy to use, and that everyone should be using it over whatever they're using today. At an early stage, whatever your software is, you have one or two killer features that really drive engagement and dominate a user's experience of your product. You're asking yourself, "What metrics do I have control over that make the experience Promotion Worthy?"
- If your killer feature is messaging, how long does it take for messages and read receipts to arrive? How long until someone notices lag? How fast is fast enough that improvements go unnoticed? (See the measurement sketch after this list.)
- If your killer feature is delivering support through AI, how many times does a user have to redirect the AI agent within a single question? How complex an inquiry can your AI handle before quality falls off? How long does it take for a response to come back?
- If your killer feature is a calendar, how long does it take for someone to build an appointment, how long does it take to sync to their other calendars, and how close to "on-time" are reminders being delivered?
- If your killer feature is your financial charting, how up to date are the charts, and how long does it take for a dashboard to load and update?
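Whatever the killer feature, the measurement pattern is the same: capture the raw numbers where they happen and roll them up into a single figure you can watch. Here's a minimal sketch for the messaging example; the names and the in-memory list are mine, for illustration, and in production you'd emit to whatever metrics backend you already run (Prometheus, Datadog, CloudWatch, and so on):

```python
import time

# Hypothetical in-process capture of message delivery latency,
# purely to illustrate the shape of the measurement.
delivery_latencies_ms: list[float] = []

def record_delivery(sent_at: float) -> None:
    """Call this when the recipient's client acknowledges the message."""
    delivery_latencies_ms.append((time.time() - sent_at) * 1000)

def p95_delivery_ms() -> float:
    """The number to watch: 95th-percentile delivery latency."""
    ordered = sorted(delivery_latencies_ms)
    return ordered[int(0.95 * (len(ordered) - 1))]
```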
The point is to make it concrete and measurable. Once you can measure it, you want to know two things:
- What's the minimum acceptable bound?
- What's the point of diminishing returns?
Establish a baseline now. Measure continuously. Find the trend. Build that into your Site Reliability practice. Push your engineering team to understand what levers they have to control that function, and to know how quickly they can adapt if it starts trending negative.
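Put together, a fitness function is just the measurement, the two bounds, and a trend check. A sketch under assumptions: the thresholds here are invented purely for illustration, and yours come from watching real users, not from me.

```python
import statistics  # requires Python 3.10+ for linear_regression

MAX_ACCEPTABLE_MS = 1500      # the minimum acceptable bound: past this, users notice lag
DIMINISHING_RETURNS_MS = 200  # below this, further speedups go unnoticed

def evaluate_fitness(daily_p95_ms: list[float]) -> str:
    """Check the latest measurement against both bounds, then check the trend."""
    latest = daily_p95_ms[-1]
    if latest > MAX_ACCEPTABLE_MS:
        return "FAIL: outside the minimum acceptable bound"
    # Trend: slope of an ordinary least-squares fit over the recent window.
    days = list(range(len(daily_p95_ms)))
    slope = statistics.linear_regression(days, daily_p95_ms).slope
    if slope > 0:
        return f"WARN: trending worse by {slope:.1f} ms/day"
    if latest < DIMINISHING_RETURNS_MS:
        return "OK: past the point of diminishing returns; invest elsewhere"
    return "OK: within bounds and holding steady"

# Example: a week of p95 delivery latencies, slowly creeping upward.
print(evaluate_fitness([480, 495, 510, 530, 540, 560, 575]))
# -> WARN: trending worse by 15.9 ms/day
```

Wire the FAIL and WARN cases into the same alerting you use for uptime, and the fitness function becomes part of your SRE practice rather than a dashboard nobody reads.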
As your software and company grow, you'll accumulate functions like these for measuring the fitness of your software for common use cases. It won't be "one key metric" but one or two metrics for each persona.
The meta-metric is how quickly your tech team can adapt to change.
Pivots happen. M&As happen. Product requirements shift as the horizon gets closer. For the kinds of changes you learn to expect as an executive, how well does your tech team adapt? Some signals to watch (with a rough scoring sketch after the list):
- Do they get thrown into crunch time in the last 30 days of every project?
- Does software ship with loose ends and fast-follows that impinge on the next project's start time?
- Does technical debt accumulate and affect customer experience, support burden, or COGS?
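None of these signals is as crisp as a latency percentile, but you can still put rough numbers on them. A sketch of the idea, with field names I've invented for illustration; the real data lives in your project tracker:

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical project record -- the fields are assumptions made up
# to illustrate the idea, not a prescribed schema.
@dataclass
class Project:
    planned_end: date
    actual_end: date
    fast_follow_days: int  # cleanup work that delayed the next project's start

def adaptability_report(projects: list[Project]) -> dict[str, float]:
    """Rough meta-metrics: how often projects slip, and by how much."""
    n = len(projects)
    slips = [(p.actual_end - p.planned_end).days for p in projects]
    return {
        "avg_slip_days": sum(slips) / n,
        "pct_projects_late": 100 * sum(1 for s in slips if s > 0) / n,
        "avg_fast_follow_days": sum(p.fast_follow_days for p in projects) / n,
    }
```

If those numbers trend upward quarter over quarter, change is getting more expensive, whatever the individual post-mortems say.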
As the top software architect or VP of Engineering, you measure these kinds of things to see whether the team is healthy and whether the software underneath it is healthy too.
Change is life. Change is necessary for growth. In a healthy, growing company, change is constant. But change introduces stress. The ultimate measure of your software architecture's quality is its ability to absorb that stress and adapt to new circumstances faster than your competition, without creating longer-term problems.