Stop Using Data to “Prove” Your Design Is Right: Causal Thinking in Product Decisions

After chatting with a few product managers and designers recently, I’ve noticed a belief that is incredibly common and quietly risky:

We say we want to be “objective” by “designing with data”, so we rush to find numbers that justify a solution. But that premise is flawed.

Before a design is shipped and used, there is no data that proves it works. All existing data only describes the old experience. It can’t predict how users will behave with something new.

Data is not a crystal ball

Data tells you what already happened. It is great for reviewing the past and challenging wrong assumptions, but it cannot guarantee a new solution will work.

For example, using current conversion rates to claim that a new onboarding flow will definitely improve conversion ignores a basic truth: user behaviour changes when the design changes.

Another common trap is going “data-driven”. In practice, this often means reading numbers without asking why. Once the interpretation is off, the entire direction drifts with it.

That’s why, instead of being strictly data-driven, I lean on these two approaches:

  • Data-informed: Use data to understand the current state
  • Data-inspired: Connect multiple data sets to shape the problem space and spark new ideas

What they have in common: data doesn’t give answers. It helps us ask better questions and keep discussions grounded and rational.

Don’t use data to back up bias

Another dangerous misuse of data is selective interpretation.

Teams cherry-pick data that supports a preferred solution, ignore conflicting signals, and mentally “complete” conclusions that the data never actually proves.

For instance: “users who tried the new feature have higher retention”, followed immediately by “so the feature is a success”, without asking:

  • Were these users already highly engaged?
  • Why didn’t others try the feature at all?

This is classic confirmation bias.
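
To make this concrete, here is a minimal sketch (simulated data and made-up numbers, not from any real product) of how “adopters retain better” can appear even when a feature has zero effect: highly engaged users are simply more likely both to try the feature and to retain. Splitting the comparison by engagement exposes the gap as selection bias rather than feature impact.

```python
# Hypothetical sketch (simulated data): why "adopters retain better"
# can be pure selection bias rather than feature impact.
import random

random.seed(42)

users = []
for _ in range(10_000):
    engaged = random.random() < 0.3          # 30% of users are highly engaged
    # Engaged users are far more likely to try the new feature...
    tried_feature = random.random() < (0.7 if engaged else 0.1)
    # ...and far more likely to retain, regardless of the feature.
    retained = random.random() < (0.8 if engaged else 0.3)
    users.append((engaged, tried_feature, retained))

def retention(group):
    return sum(r for _, _, r in group) / len(group)

adopters     = [u for u in users if u[1]]
non_adopters = [u for u in users if not u[1]]

# Naive comparison: looks like the feature "works".
print(f"adopters:     {retention(adopters):.2f}")
print(f"non-adopters: {retention(non_adopters):.2f}")

# Comparing like with like removes most of the gap: within the engaged
# segment, adopters and non-adopters retain at similar rates.
engaged_adopters     = [u for u in adopters if u[0]]
engaged_non_adopters = [u for u in non_adopters if u[0]]
print(f"engaged adopters:     {retention(engaged_adopters):.2f}")
print(f"engaged non-adopters: {retention(engaged_non_adopters):.2f}")
```

In this simulation the naive gap is large while the within-segment gap nearly disappears, which is exactly what the two questions above are probing for.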

If data can’t prove a design, what can?

If data can’t validate a design solution before launch, how do we ship with confidence?

The answer is simple: causal reasoning.

When a design proposal is challenged, the goal isn’t to throw out more options, but to clearly explain:

The problem is A. The change B directly addresses A, and here’s why.

This is the real analytical skill that senior designers need, and what it actually means to defend a design decision.

The shortest causal chain wins

I use a simple rule to evaluate design solutions:

Choose the solution with the shortest causal chain and the fewest assumptions.

That means:

  • Target the root cause directly
  • Avoid solutions built on stacked assumptions
  • Avoid “solution-first” thinking: no random feature dumping or changes made just to feel progress

If the problem is “users can’t find the next step,” making the CTA more visible and clearer is far more direct and testable than changing colours, adding animations or rewriting copy.

Strengthening decisions with supporting evidence

Once the causal logic is clear, these inputs can make decisions more robust:

  • Qualitative research: Interviews and feedback to understand user intent
  • Visual evidence: Heatmaps or eye-tracking to reveal attention and usability issues
  • Benchmarks & competitors: If similar products handle a critical step better, that’s a strong improvement signal (refer to Baymard Institute case studies or flow references on Mobbin)
  • Business alignment: Can the solution directly move KPIs or OKRs, instead of relying on indirect assumptions?

Iteration beats perfection

There’s no perfect solution. Design is never “right” on the first try, and that is exactly why risk management matters.

For large products:

  • Use A/B tests, prototypes or benchmark testing to catch fatal flaws early and cheaply (see the sketch after this list)
  • Release MVPs or test versions to a limited audience with clear expectations
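
The post doesn’t prescribe a particular statistical method, but as one concrete sketch: a two-proportion z-test is a common way to check whether an A/B lift is more than noise. The conversion counts below are made up for illustration.

```python
# Minimal sketch of reading an A/B test result with a two-proportion z-test.
# The conversion counts below are invented for illustration.
from math import sqrt, erf

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Return (z score, two-sided p-value) for the difference in conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Control: 480 conversions out of 10,000; variant: 560 out of 10,000.
z, p = two_proportion_z(480, 10_000, 560, 10_000)
print(f"z = {z:.2f}, p = {p:.3f}")  # p ≈ 0.01 here: the lift is unlikely to be pure noise
```

Small samples or many simultaneous changes weaken a check like this, which is another argument for the incremental approach below.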

When shipping for real:

  • Stick to MVP
  • Control scope and avoid releasing everything at once
  • Roll out changes incrementally and validate them one by one
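
One possible sketch of that incremental rollout (the helper and feature name here are hypothetical, not tied to any specific feature-flag tool): bucket users deterministically so the same person always sees the same version while the exposed percentage ramps up, letting each change be validated on its own before the next one ships.

```python
# Hypothetical sketch of an incremental rollout: bucket users deterministically
# so a given user always sees the same version while the percentage ramps up.
import hashlib

def in_rollout(user_id: str, feature: str, percent: int) -> bool:
    """Return True if this user falls inside the current rollout percentage."""
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100            # stable bucket in 0..99
    return bucket < percent

# Week 1: expose 5% of users; widen to 25%, then 100% once metrics hold up.
print(in_rollout("user-123", "new-onboarding", 5))
```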

Changing too much at once doesn’t just increase risk, it also makes it impossible to tell what actually worked.

Only after launch does the real design work begin. Real user behaviour and real data are what validate (or invalidate) your assumptions. Fast iteration means that even if you’re wrong, you can correct it quickly.

At the end of the day, product and design decisions are never made by data alone.

Especially in innovative spaces with no clear references, what we rely on are:

  • Deep understanding of the problem
  • Clear causal reasoning
  • Professional intuition and strong product sense.

We can’t test every possible solution. We choose the one that feels most inevitable from a logical standpoint.
