There is no shortage of AI readiness assessments. Consulting firms sell them. Software vendors bundle them. Analyst houses publish frameworks full of capability matrices and maturity models. They all measure roughly the same thing: can your organization implement AI? Do you have the data, the infrastructure, the talent, the governance? These are important questions. They are also insufficient.
What none of them measure is whether the people making deployment decisions have the judgment to make good ones. They measure capability. They never measure taste.
Every organization has leaders who can evaluate a spreadsheet. Far fewer have leaders who can evaluate a decision. Taste is the difference between “this AI initiative has good metrics” and “this AI initiative is solving the right problem in the right way at the right time.”
Taste is not subjective — it’s observable through the quality of decisions an organization makes when certainty is low and stakes are high. It’s what separates the 4% of organizations that achieve scaled AI deployment from the 96% that don’t.
Where the absence of taste shows up
The organization deploys AI because competitors are deploying it, because the board is asking, because the CEO saw a demo at Davos. Use case selection is driven by what sounds most impressive in a press release, not by where AI creates the most value.
Numerous enterprise chatbot deployments in 2023–2024 launched because “everyone needs a chatbot.” Many handled 5% of queries, frustrated customers on the other 95%, and cost more than the human agents they replaced. Companies with taste invested in back-office document processing — unglamorous, high-ROI, and invisible to the press.
Then there’s metric fixation: optimizing for the number that’s easy to measure rather than the outcome that actually matters. Amazon’s internal AI recruiting tool was trained on 10 years of hiring data and got very good at predicting which resumes matched historical hires — which meant it systematically penalized resumes that included the word “women’s.” The metric (match rate) was excellent. The judgment (training on biased data) was terrible. Amazon killed the project. That kill decision itself was an act of taste.
What taste looks like in practice
John Deere didn’t try to make AI do everything on a farm. They focused on one problem: identifying and spraying only the weeds, not the entire field. See & Spray reduced herbicide use by 77%. The taste wasn’t in the technology — it was in the restraint.
The highest expression of AI taste is recognizing when AI is the wrong solution. Basecamp has been vocal about NOT deploying AI where simpler solutions work. If a rule-based system solves 95% of cases, don’t build a machine learning model to get to 97%. The additional 2% rarely justifies the complexity.
And then there’s second-order thinking. When Shopify deployed AI for merchant support, they explicitly designed for the second-order effect. They knew AI would handle routine queries, leaving humans with harder cases. So they simultaneously restructured the human support role — different title, higher pay, different training. They anticipated that “AI handles the easy stuff” would change the human job, and they designed for that change proactively.
Why taste can’t be self-reported
Nobody says “my AI judgment is poor.” So taste has to be tested indirectly, through scenario-based choices with no obviously correct answer. The pattern across multiple scenarios reveals whether an organization defaults to speed, safety, sophistication, or inertia.
This is why the Jewell Assessment doesn’t ask organizations to rate their own judgment. It reveals judgment through the choices they make when certainty is low and stakes are high. Readiness is the table stakes. Taste is the multiplier.