#ai #design

Why reason codes matter more than model accuracy

A pricing AI that can't explain itself doesn't get shipped. Here's how we think about explainability at MojitoIQ.

By Data Mojito

There’s a pattern we see in every revenue team we talk to. A pricing AI shows up with a flashy accuracy number — “92% forecast accuracy!” — and the team never actually uses it.

The reason is almost always the same: the team can’t tell why the model is recommending what it’s recommending, so they can’t tell when to trust it. And for pricing decisions that move real revenue, “can’t tell when to trust it” means “won’t use it.”

Accuracy is a lagging indicator

Accuracy numbers are backward-looking. They tell you how the model did on yesterday’s data. They don’t tell you whether it’ll hold up when something unusual happens — a new competitor enters the market, a hurricane reroutes capacity, a conference gets cancelled.

Revenue teams live in the land of the unusual. That’s exactly the moment they need to second-guess the model. And that’s exactly the moment a black-box “92% accurate” tool is useless.

Reason codes are what get used

The thing we keep coming back to: every AI recommendation the platform surfaces has to come with a short, auditable set of reason codes.

A Yield recommendation to raise a rate by 7% isn’t “trust the model.” It’s:

  • Event demand +32% — a detected signal from event calendars
  • Competitive set raised +6% — shift in the compset over the last 24 hours
  • Booking pace +18% vs forecast — actuals are running ahead

A revenue manager can look at those codes in five seconds and decide: yes, I believe that — or no, the event calendar is wrong about that one, ignore. Either way they’re making a decision, not rubber-stamping a number.
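To make the idea concrete, here is a minimal sketch of what a recommendation-with-reason-codes payload could look like. All names here (`ReasonCode`, `RateRecommendation`, the field names) are illustrative assumptions, not the actual MojitoIQ API:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ReasonCode:
    code: str      # machine-readable signal name, e.g. "event_demand"
    detail: str    # the short human-readable line a revenue manager reads
    impact: float  # signed signal magnitude, e.g. 0.32 for +32%

@dataclass
class RateRecommendation:
    current_rate: float
    recommended_rate: float
    reasons: List[ReasonCode] = field(default_factory=list)

    @property
    def change_pct(self) -> float:
        """Relative rate change implied by the recommendation."""
        return (self.recommended_rate - self.current_rate) / self.current_rate

# The +7% example from above, expressed as data a human can audit:
rec = RateRecommendation(
    current_rate=200.0,
    recommended_rate=214.0,
    reasons=[
        ReasonCode("event_demand", "Event demand +32% from event calendars", 0.32),
        ReasonCode("compset_move", "Competitive set raised +6% in last 24h", 0.06),
        ReasonCode("booking_pace", "Booking pace +18% vs forecast", 0.18),
    ],
)
```

The point of the structure is that the `reasons` list travels with the number: a recommendation without its codes is not a valid object.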

What this means in practice

Three design choices fall out of this:

  1. No single-number confidence. Every prediction gets a range, and every recommendation gets reason codes. If the AI isn’t sure, we say so.
  2. Human-in-the-loop by default. Approve, simulate, override — all one click. The AI never pushes rates on its own.
  3. Audit trail everywhere. Every recommendation, every approval, every override — logged and queryable. Because once things go sideways, “what did the model recommend last Tuesday and why” is the first question.
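The third choice, the audit trail, can be sketched as an append-only log where every decision is a queryable record. The function and field names below are hypothetical, chosen only to show the shape of the idea:

```python
import datetime

audit_log = []  # in practice this would be durable, append-only storage

def record_decision(rec_id, action, user, reason_codes):
    """Append one immutable entry per human decision on a recommendation."""
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "recommendation_id": rec_id,
        "action": action,           # "approve" | "simulate" | "override"
        "user": user,
        "reason_codes": reason_codes,  # the codes shown at decision time
    }
    audit_log.append(entry)
    return entry

record_decision("rec-001", "approve", "rm.alice", ["event_demand", "booking_pace"])
record_decision("rec-002", "override", "rm.alice", ["compset_move"])

# "What did the model recommend last Tuesday and why" becomes a filter,
# not an archaeology dig:
overrides = [e for e in audit_log if e["action"] == "override"]
```

Because the reason codes are captured at decision time, the log answers not just what was recommended but what the human saw when they acted on it.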

The AI doesn’t have to be perfect. It has to be honest enough that a human can trust it when it’s right — and catch it when it’s wrong.

That’s the bar MojitoIQ is built to clear.

See MojitoIQ on your own rates.

Tell us about your properties or routes — we'll walk you through the platform and show you how MojitoIQ fits your team's workflow.