After recently speaking with several product managers and designers, I’ve noticed a quiet but risky belief.
In the name of being “objective”, we often scramble to find metrics that validate our design solutions. This approach is fundamentally flawed: until a design is shipped and used, no data can prove that it works.
All existing data only reflects the current or past user experience. It cannot predict how users will respond to something new.
Data is Not a Crystal Ball
Data reveals what has already happened. While it is powerful for reviewing the past and challenging incorrect assumptions, it cannot predict future outcomes.
For example, using the current conversion rate to guarantee that a new onboarding flow will succeed overlooks a basic truth: design changes alter user behaviour.
Another pitfall is becoming strictly “data-driven”. In practice, this often means reacting to numbers without understanding their underlying causes. If your interpretation is off, your entire strategy drifts off course.
That’s why, instead, I lean toward these two approaches:
- Data-informed: Leveraging data to assess the current landscape.
- Data-inspired: Synthesizing multiple data points to map the problem space and spark new ideas.
In both approaches, data doesn’t provide the answer. Rather, it enables us to ask better questions and grounds discussion in reality.
Don’t Use Data to Back Up Bias
Another dangerous misuse of data is selective interpretation.
Teams cherry-pick metrics that support their favoured solution, ignore conflicting signals, or mentally extrapolate conclusions that the data never actually supports.
Consider the claim “Users who adopted the new feature show higher retention, therefore the feature succeeds”, made without examining:
- Were these just your most engaged users?
- Why did no one else try the feature?
This is classic confirmation bias. Data should serve as a tool for investigation, not merely as validation.
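The selection effect behind that first question can be sketched with a toy simulation (all numbers here are hypothetical): even when a feature has no effect on retention at all, adopters can look dramatically better simply because highly engaged users self-select into trying it.

```python
import random

random.seed(42)

# Hypothetical model: the feature has NO causal effect on retention.
# Retention depends only on engagement, but engaged users are far more
# likely to adopt the feature in the first place.
def simulate_user():
    engaged = random.random() < 0.3                         # 30% highly engaged
    adopted = random.random() < (0.8 if engaged else 0.1)   # engaged users self-select
    retained = random.random() < (0.9 if engaged else 0.3)  # driven by engagement only
    return adopted, retained

users = [simulate_user() for _ in range(100_000)]

def retention(group):
    return sum(retained for _, retained in group) / len(group)

adopters = [u for u in users if u[0]]
non_adopters = [u for u in users if not u[0]]

print(f"adopter retention:     {retention(adopters):.2f}")
print(f"non-adopter retention: {retention(non_adopters):.2f}")
```

Despite a zero causal effect, adopters retain at roughly twice the rate of non-adopters, which is exactly the gap a cherry-picked dashboard would present as “the feature succeeds”.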
If Data Can’t Justify a Design, What Can?
So how do we confidently ship a design if we cannot “prove” it in advance with data?
The answer is straightforward: causal reasoning.
When a design proposal is challenged, the goal isn’t to generate endless variations, but to clearly articulate the logic behind it:
The problem is A. Change B directly addresses A, and here’s why.
Being able to defend a decision through reasoned logic, rather than “gut feeling” or “past data”, is the essential analytical skill that distinguishes senior designers from their peers.
The Shortest Causal Chain Wins
I use a straightforward principle when evaluating design solutions:
Prioritise the solution with the shortest causal chain and the fewest assumptions.
In practice, this means you:
- Address the root cause directly
- Reject solutions built on “stacked” assumptions
- Avoid “solution-first” thinking—no random feature dumping just to show progress
If the problem is “users can’t find the next step,” enhancing the CTA visibility is more direct and measurable than adjusting colours, adding animations or rewriting all the copy.
Reinforcing Your Logic with Evidence
Once your causal logic is solid, you can strengthen it with supporting evidence:
- Qualitative research: Conducting interviews and user sessions to uncover intent and mental models.
- Visual evidence: Analysing heatmaps, click maps or eye-tracking to identify usability friction.
- Benchmarks: Refining flows based on how industry leaders handle similar friction (drawing on references such as the Baymard Institute or Mobbin).
- Business alignment: Validating the solution directly impacts specific KPIs, not indirect or vague goals.
Iteration Beats Perfection
Design rarely succeeds on the first attempt. This is why risk management and iteration are crucial.
For large products:
- Deploy A/B tests or prototypes to catch fatal flaws early
- Release MVPs or test versions to a limited audience with clear expectations
When shipping for real:
- Maintain an MVP-first approach
- Constrain scope and avoid releasing everything at once
- Roll out changes incrementally and validate them one by one
Introducing too many changes at once doesn’t just increase risk, it also makes it impossible to tell what actually worked.
Only after launch does the real design work begin: actual user behaviour and actual data finally test your assumptions. Fast iteration means that even if you’re wrong, you can correct course quickly.
The Bottom Line
In the end, product and design decisions are never made by data alone. In innovative spaces where no clear references exist, we have to rely on:
- Deep understanding of the problem
- Clear causal reasoning
- Professional intuition and strong product sense
We can’t test every possible solution. We choose the one that feels most inevitable from a logical standpoint, then use data not as proof, but as feedback to keep us honest.
