Measuring Conversational Quality: The Next Evolution in CPQ
As CPQ starts to interpret and explain configuration logic, the next step is interaction. Instead of clicking through options, users describe what they need and get guidance back. The system applies configuration rules and explains what's happening along the way. That sounds simple, but this is where things often break.
Conversational CPQ shifts the problem. Getting the logic right is no longer enough. You also have to get the interaction right. We’ve seen cases where the model is correct, but users still drop off. Not because of errors, but because the AI is slow, too verbose, or misses what the user is actually asking. That’s the difference between something that works and something people want to use.
Measuring what actually happens
Traditional CPQ is judged on accuracy. Are the configurations valid? Is pricing correct?
In a conversational setup, that’s only part of it. You also need to look at how the interaction performs. Do users get to a decision, or do they get stuck? Do they keep moving, or do they repeat themselves and drop off?
At cpq.se, we review conversations the same way we used to review rules. We look for where things break: where the AI gives a technically correct but unhelpful answer, where it loses context, or where the user clearly gets frustrated. The goal isn’t to score the AI. It’s to see what needs to be fixed.
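That kind of review can be instrumented. Below is a minimal sketch of how a conversation might be scored for the signals named above; the `Turn` transcript shape, the counters, and the idea of flagging repeated user turns and assistant verbosity are illustrative assumptions, not a description of cpq.se's tooling.

```python
from dataclasses import dataclass

@dataclass
class Turn:
    role: str  # "user" or "assistant"
    text: str

def conversation_metrics(turns: list[Turn], reached_decision: bool) -> dict:
    """Summarize one conversation with a few crude interaction signals."""
    user_turns = [t.text.strip().lower() for t in turns if t.role == "user"]
    # Users who repeat themselves verbatim are often not being understood.
    repeats = sum(1 for a, b in zip(user_turns, user_turns[1:]) if a == b)
    assistant_turns = [t for t in turns if t.role == "assistant"]
    avg_words = (
        sum(len(t.text.split()) for t in assistant_turns)
        / max(1, len(assistant_turns))
    )
    return {
        "reached_decision": reached_decision,  # did the user get to a decision?
        "user_turns": len(user_turns),
        "repeated_turns": repeats,             # "repeating themselves" signal
        "avg_assistant_words": avg_words,      # verbosity signal
    }
```

Numbers like these do not score the AI; they point a reviewer at the conversations worth reading.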
From logic to interaction
Configuration models are tested for correctness. Conversational systems need to be tested for behavior.
It’s not enough that the answer is right. It has to move the user forward. That means confirming intent, giving the right level of detail, and not overcomplicating things.
We’re already seeing this in early setups. The biggest issues aren’t logic errors—they’re interaction issues. Too much explanation. Not enough clarity. Or the AI answering a different question than the one asked.
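Interaction issues like these can be caught with behavioral checks rather than logic tests. A minimal sketch, where keyword matching stands in (crudely) for real intent matching and the word budget is an arbitrary placeholder:

```python
def reply_passes(question_topic: str, reply: str, max_words: int = 80) -> bool:
    """A reply passes only if it addresses the asked-about topic
    and stays within a word budget (the over-explanation signal)."""
    on_topic = question_topic.lower() in reply.lower()
    concise = len(reply.split()) <= max_words
    return on_topic and concise
```

A correct answer to a different question fails this check, which is exactly the failure mode logic tests miss.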
Why this matters in real sales
In complex B2B sales, clarity matters more than speed.
Buyers need to understand what they’re choosing and why. If the AI can explain trade-offs clearly and stay on track, it helps. If it talks too much or loses focus, it slows things down.
This is also where you see when AI isn’t enough. Some conversations need a human. By looking at where users struggle or drop off, you can see where that handover should happen.
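A handover rule can start very simply. The sketch below assumes the two struggle signals described above, repeated user turns and explicit frustration; the phrase list and threshold are placeholders, not a recommended policy.

```python
# Hypothetical frustration phrases; a real list would come from reviewed transcripts.
FRUSTRATION = ("this isn't what i asked", "that doesn't help", "talk to a person")

def should_hand_over(user_turns: list[str], max_repeats: int = 2) -> bool:
    """Escalate to a human when the user repeats themselves or shows frustration."""
    normalized = [t.strip().lower() for t in user_turns]
    repeats = sum(1 for a, b in zip(normalized, normalized[1:]) if a == b)
    frustrated = any(p in t for t in normalized for p in FRUSTRATION)
    return repeats >= max_repeats or frustrated
```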
Keep it grounded
There’s a lot of talk about “explainable” systems. In practice, it comes down to something simple: can the system give a clear answer and back it up with real product logic?
If not, people stop trusting it.
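One way to make "back it up with real product logic" concrete is to require every answer to cite rules that actually exist in the configuration model. A minimal sketch, with a hypothetical rule store standing in for the real model:

```python
# Hypothetical rule store; a real one would be the configuration model itself.
RULES = {"R12": "Pump P-400 requires the 400V power supply."}

def is_grounded(cited_rule_ids: list[str]) -> bool:
    """An answer counts as grounded only if it cites at least one rule
    and every cited rule exists in the product model."""
    return bool(cited_rule_ids) and all(r in RULES for r in cited_rule_ids)
```

An answer that cites nothing, or cites a rule that does not exist, fails the check before it ever reaches a user.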
Bottom line
Accuracy is still required. But it’s no longer enough.
If the interaction doesn’t work, the system doesn’t work—no matter how good the logic is.