Data + Framework
Data doesn’t drive decisions. Interpretation does.
There’s an old adage that you can’t judge a fish by its ability to climb a tree.
Similarly, in organizations, two leadership teams can look at the same customer KPI dashboard and reach opposite conclusions—not because the data is wrong, but because their lens, or framework for analysis, is different. “Evidence” for decision-making emerges only after you apply data to a framework for interpreting it, and that framework is built on assumptions about what matters, what’s normal, and what’s motivating in the life of the person or group you’re focusing on.
Is getting more data, quicker, an advantage?
We’re entering an era where the risk of misinterpreting data is accelerating.
AI and algorithms will generate infinite “facts,” infinite correlations, and infinite “insights.” They create the impression that we can know more, faster—and that better decisions are simply a matter of more “objective” measurement. But more output doesn’t guarantee more understanding. It can produce false certainty at scale: more dashboards, more models, more confidence—all built on the same unexamined assumptions.
When the framework used to interpret data about people is not accurately grounded in the reality of the group you’re studying, you can easily get expensive outcomes: misallocated spend, the wrong product bets, ineffective messaging, avoidable churn, and internal friction between teams who think they’re “following the data.”
And decisions that are rewarded with sales conversions and transactions in the short run can quietly erode credibility, trust, and loyalty in the long run.
That’s why human-centered, contextual research has become more valuable, not less, as a first-line strategy in today’s AI-charged environment. It clarifies the meaning behind the signals. It reveals what customers and employees inherently trust and value, and why—the things the data can’t explain on its own.
When the lens is wrong, the numbers get expensive
In consumer and CPG lore, for example, the most common (and costly) analytics failures weren’t due to bad data—they involved what was likely good data interpreted through the wrong framework.
Recognition/identity risk: The Gap (logo change)
In October 2010, Gap abruptly replaced its iconic blue-box logo with a new wordmark and small blue square. After immediate consumer backlash, Gap scrapped the new logo within days and reverted.
Framework lesson: brand assets are not decoration; they’re recognition and trust infrastructure.
Trust/fairness risk: Netflix (2011 price/structure change)
In 2011, Netflix separated DVD-by-mail and streaming into distinct plans, effectively raising the price for customers who wanted both. The company then reported a loss of 800,000 subscribers in the following quarter as backlash and trust damage hit.
Framework lesson: customers don’t just react to price—they react to broken expectations.
Pricing/perception risk: JCPenney (logic vs. the lived experience of “a deal”)
In 2012, JCPenney tried to eliminate constant promotions and coupons in favor of “everyday” pricing. The market signal was brutal: the company reported an 18.9% decline in comparable-store sales in the first quarter of that strategy.
Framework lesson: “value” is often a social/emotional construction, not a number.
Permission/creepiness risk: Target (propensity ≠ permission)
Target built predictive models to identify shoppers likely to be pregnant and mailed targeted offers to win their loyalty early. Public reporting described how this kind of targeting could cross the “creepy line,” triggering backlash and privacy concerns even when predictions were accurate.
Framework lesson: prediction isn’t the same as permission—meaning matters.
In the AI era, the competitive advantage shifts from more data to better interpretation discipline—grounding what the data means in how people actually interpret their lived experience and identity.
The Proper Data Interpretation Framework Is the Innate One
Using data to make decisions should start with the framework: the necessary lens for understanding customer and employee data is their own Guiding Narrative®. This personal, inner story determines how individuals and groups interpret value, risk, trust, and choice. We call it the only story that matters® because it reveals what a person’s or group’s behavior means to them, as opposed to the meaning observers impose through their own lens. It’s what customers and employees use to make meaning of the world, so it’s ultimately the filter that determines, and explains, how they will behave and why.
In other words, the framework you use to interpret data about a persona’s or segment’s behavior should be the same framework that persona or segment uses to make meaning of their own lived experience.
If you want better forecasting, better ROI, and fewer unforced errors, don’t just measure more. Use the proper framework: start with the innate narrative that inspires and gives meaning to the behaviors you’re focused on, then use the data to execute with confidence.
It’s a better way.