As an analyst at a research and advisory firm, I come across a lot of articles that provide what some call “evidence-based” or “research-informed” advice. These are white papers, blogs, and thought models written by professionals and consultants who offer specific stats to back up their advice. While neither the advice nor the stats are necessarily wrong, these articles take an approach that I think is very dangerous if taken in isolation or out of context by readers.
Take an example from the field of corporate learning: the 70-20-10 model is a broadly accepted thought model suggesting that 70% of learning should come from challenging assignments (experiential learning), 20% from other people (relational learning), and 10% from coursework (formal learning). While this model has a time and a place, it has become so mainstream that many professionals and consultants take it a little too literally. In their articles they advise that every workplace learning experience should literally be 70% experiential, 20% relational, and 10% formal. However, when you read the broader body of research on learning, you find that the more important finding is that you should design whatever learning is best for the situation, whether it’s 70-20-10, 40-40-20, or something entirely different. The idea conveyed by the 70-20-10 model is simply that, generally, experiential learning is more important than formal learning.
The way that the evidence-based approach is often used – that is, providing advice first, research second – is more pervasive than you might expect. Recall a presentation you’ve attended where the presenter threw out some dramatic stats like “80% of leaders agree that X is a problem”, and then proceeded to recommend a single solution. Think about that: do the stats back up that specific solution? If yes, did the presenters consider any alternatives? And if the stats don’t directly support that solution, where did the solution come from?
As you can see, there are a few gaps in this approach. If you lack the research context, you end up having to trust others’ suggestions. And that’s simply poor decision-making.
Another area where evidence-based decision-making is often misused is in government. While “evidence-based” suggests that some kind of evidence was considered, it does not mean that the right evidence was used, that it was used in the right context, or that alternatives were considered. Just because you’re using numbers and stats doesn’t mean you’re making good decisions.
One alternative is what I call “research-guided” decision-making. In this approach, you provide research first, advice second. You first look at what the research says, then draw a conclusion about the solution based on that research. The decision-maker is guided by what the latest research says, rather than looking for specific pieces of research or evidence that support the opinion they already hold. That is what better decision-making looks like.
So should you always use a research-guided approach? No. Is a research-informed approach sometimes appropriate? Yes.
There are two scenarios in which I’ve found it makes sense to use a research-informed approach:
The first scenario is when you look at the research and find that it’s inconclusive: there isn’t enough evidence to point you in the direction of a solution. Yet, you still need to make a decision to solve the problem you’re addressing in your organization. In this case, you can look at trends reports and whatever research is available to make an educated guess. However, it’s important to note the limitations of the available evidence so that you can revisit them later when new research emerges. This lets you track your decision and test its success (the same way you would test a hypothesis).
The second scenario builds on the first: when you’re making a business case. If you want to recommend that your organization invest in a solution for which there isn’t a clear direction in the research, you need to rely on pieces of research to make the case. This is where you can throw out some stats and trends that point in a possible direction, though untested. For example, many trends reports are saying that our economy is undergoing a “digital transformation” amid rapid advancements in technology. While most research doesn’t point to a clear solution, there is enough evidence suggesting that organizations need to take action.
In today’s fast-paced environment, we often feel pressured to make snap judgments. However, this can lead to taking shortcuts like over-relying on the evidence-based or research-informed approach, in which we don’t look at all of the research. Unfortunately, this can also lead to poor decisions. To make good decisions, we need to take the time to fully understand what the research says before we make a decision. This will save us time in the long run as we avoid poor decisions that need to be corrected.
Summary of the two approaches and when to use them:
Evidence-Based / Research-Informed Decision-Making

| When to use it |
| --- |
| The research is inconclusive, but you still need to make a decision |
| You are making a business case and need stats and trends that point in a possible direction |