Measuring the customer knowledge gap

Eckhart Böhme

graduate in industrial engineering, Founder & Managing Director

unipro solutions GmbH & Co. KG

Making the case for Customer Discovery by Sizing the Customer Knowledge Gap

Up to 90% of the factors crucial in the customer journey are unknown.

We make decisions based on what we know or assume. The quality and accuracy of this input determines the quality of our decisions. Garbage in – garbage out.

So, the goal of many who strive to better understand customers is to create a better basis for decision making by providing insights about customer needs, desires, and context. Such an investment of resources naturally needs to be justified.

“We know our customers”, or the variation “We know what customers want”, are among the most frequent objections to performing primary customer research. I am quite certain that anyone involved in offering “customer insight creation” has heard these claims.

But how true are these assertions? How can the value of customer research be proven? How can the gain in customer knowledge be measured?

After all, it is almost impossible to convince stakeholders without evidence that “talking to customers” makes a measurable difference. In my experience, they are also usually interested not in the research results themselves but in the gain in insights. Demonstrating this “delta” becomes your strongest argument.

The case for customer discovery

In this article we will show that a targeted and structured approach to customer discovery provides insights that were invisible to the organization prior to the research project. And not only that: we will demonstrate that the gain in “customer knowledge” can be measured. The analysis of three customer projects revealed that the “customer knowledge” of organizations, even those with long track records, was significantly incomplete and influenced by biases.

The analysis of the project with the most thoroughly performed hypothesis phase demonstrated that the customer research project made a decisive difference in the quality of the data created. It showcased the ability of customer discovery to create a much more realistic view of the “market” and therefore a much better basis for decision making.

Biases and knowledge gaps

The results of the item-by-item analysis were quite surprising. The main problem with unvalidated “knowledge” about customers is not necessarily false assumptions; all of the recorded knowledge items and assumptions appeared plausible in retrospect. After the project’s evaluation, however, it became clear that the issues with the quality of the customer knowledge lay in two areas:

  1. Biases of the stakeholders. The data shows that stakeholders were “in love” with their idea: they wanted certain trends or facts to be true. Many of the items brought up in the hypothesis phase of the project were not found in the customer interviews we performed. Thus, confirmation bias and wishful thinking may have played a role, reflected in the reliance on the stakeholders’ own experience and tainted by the objective to develop a new product.
  2. Blind spots: Another issue was not seeing things that were brought up in customer interviews. This hinted at a lack of awareness of the customer’s experience in the customer’s use cases and context. The gaps were significant and consisted of more than curiosities.

A closer look at blind spots

Blind spots appeared to be significant in areas typically not discussed with customers, which makes them invisible to the company. Notably, many of the action-limiting factors (constraints) were not apparent to the project team members, yet constraints play a big role in the customer’s ability to move through the customer journey without friction. Many of the benefits customers gained from using the product were also not visible, probably because customers talk more readily about problems than about benefits and successes. Both types of insight, plus the other data the company was unaware of, can be used to create competitive advantages in the form of products, features, marketing measures, and buying aids.

Figure: Blind spots (“unaware”) and not-validated items. The “unaware” percentage is relative to the total number of observed items, so the percentages do not add up to 100.

The analysis of the customer project was based on over 40 customer interviews. In total, we collected 2,252 data points. The items related to customer knowledge were compared to the 244 clusters created based on observed data.
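The comparison behind these numbers can be sketched in a few lines. This is a minimal illustration, not the project's actual method or data: hypothesis items recorded before the research are matched against clusters built from interview observations, and the "unaware" share is computed relative to the total number of observed clusters (which is why the percentages in the figure above need not sum to 100).

```python
# Minimal sketch of the knowledge-gap comparison.
# All item names and numbers below are illustrative, not the project's data.

def knowledge_gap(hypothesis_items: set, observed_clusters: set) -> dict:
    """Compare pre-research hypotheses against clusters observed in interviews."""
    validated = hypothesis_items & observed_clusters      # assumed and later observed
    not_validated = hypothesis_items - observed_clusters  # assumed but never observed (possible bias)
    unaware = observed_clusters - hypothesis_items        # observed but never assumed (blind spot)
    total_observed = len(observed_clusters)
    return {
        "validated_pct": 100 * len(validated) / total_observed,
        # "unaware" is relative to observed items, so the percentages need not sum to 100
        "unaware_pct": 100 * len(unaware) / total_observed,
        "not_validated_count": len(not_validated),
    }

# Illustrative example
hypotheses = {"price sensitivity", "ease of setup", "brand trust"}
observed = {"ease of setup", "brand trust", "works council approval", "legacy-system constraints"}
print(knowledge_gap(hypotheses, observed))
```

In the real project the same comparison was run at scale: knowledge items from the hypothesis phase against the 244 clusters derived from 2,252 interview data points.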

The qualitative data model

To analyze the knowledge gain, that is, to assess validated and overlooked items, a data model is necessary. We used 6 of the 12 variables of our Customer Progress Design data model to capture the “customer knowledge” at the beginning of the project.
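Structurally, such a model amounts to tagging every knowledge item with one of the model's variables and a validation status, which then lets you count blind spots per variable. The sketch below assumes hypothetical variable names; the article does not list the actual Customer Progress Design variables, so these are placeholders only.

```python
from collections import Counter
from dataclasses import dataclass
from enum import Enum

# Placeholder variables -- NOT the actual Customer Progress Design variable set.
class Variable(Enum):
    PULL = "pull forces"
    CONSTRAINT = "constraints"
    BENEFIT = "expected benefits"
    CONTEXT = "customer context"

class Status(Enum):
    VALIDATED = "validated"          # assumed and confirmed in interviews
    NOT_VALIDATED = "not validated"  # assumed but not observed
    UNAWARE = "unaware"              # observed but never assumed (blind spot)

@dataclass
class KnowledgeItem:
    text: str
    variable: Variable
    status: Status

# Illustrative items
items = [
    KnowledgeItem("Buyers want faster onboarding", Variable.PULL, Status.VALIDATED),
    KnowledgeItem("IT policy blocks cloud tools", Variable.CONSTRAINT, Status.UNAWARE),
    KnowledgeItem("Saved audit-preparation time", Variable.BENEFIT, Status.UNAWARE),
]

# Blind spots grouped by model variable
blind_spots = Counter(i.variable for i in items if i.status is Status.UNAWARE)
print({v.value: n for v, n in blind_spots.items()})
```

Grouping by variable is what makes the gap actionable: it shows not just *that* knowledge is missing, but in *which* part of the customer's journey it is missing.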

The dangerous perception of being customer competent

The problem with the “we know our customers” claim is that it is an absolute statement. It implies that a person or an organization knows everything necessary about customers to make decisions with great certainty.

The perception of being “competent” can lead to the impression that there is no need for primary qualitative research. Often, if not most of the time, this perception provides a false sense of security, as evidenced by the droves of new products and services that are developed and then fail in the market due to irrelevance.

Unchallenged, the “We know our customers” claim becomes a killer phrase for otherwise highly valuable insights.

Challenging the “we know our customers” claim

Here are some critical questions to get to the bottom of the “we know well enough” claim.

Clarify what “knowing” means:

  1. What Constitutes “Knowing”? Clarify what exactly is meant by “knowing” customers. Is it surface-level information or a deep understanding of their needs, behaviors, and motivations?
  2. Item Categories: Are the items of knowledge structured by clearly defined variables or unstructured? Unstructured data is difficult to process.
  3. Data Validation: Has the data we rely on been rigorously validated? Unverified data can lead to inaccurate assumptions.
  4. Ownership of Knowledge: Who within the organization possesses this knowledge? Is it widespread or limited to specific teams?
  5. Market Context: Is our knowledge focused on existing markets, or does it extend to new markets we’re exploring?
  6. Data Granularity: Is our knowledge based on aggregated clusters of data or individual data points? Granularity matters.
  7. Organizational Structure: How is our knowledge organized? Is it siloed or integrated across departments?
  8. Strategic Decision-Making: What critical information do we need to make strategic decisions? Identifying gaps is essential.
  9. Impact on Decisions: How will our knowledge directly influence decision-making? Connecting it to better outcomes is key.

Call to Action

There is an opportunity to make a powerful case for customer research. Useful customer discovery data is the lifeblood needed to make decisions, become more competitive, grow the business, and serve customers better.

The next time you hear “we know our customers” or “we know what customers want”, question the assumptions and discuss the need to reduce blind spots and biases. Use a targeted and structured qualitative research approach to develop a robust set of qualitative information.
