The CPQ Blog

Next-Gen CPQ: Merging Rules with AI for Explainable Configuration

Written by Magnus Fasth | Nov 11, 2025 7:00:00 AM

For more than two decades, configuration systems have helped companies turn complex product portfolios into consistent quotes. That part works. Rules define what can be combined, how dependencies behave, and how pricing is calculated. Each rule captures knowledge from engineering or product management. Over time, that becomes a model of how the company actually sells its products. That’s why CPQ works as well as it does.

When rules start to hurt

The problem isn’t the first 500 rules. It’s the next 5,000.

Every product update, pricing change, or regional requirement adds more logic. Over time, it gets harder to understand and maintain. A small change can have side effects no one expected.

You see it in projects. Changes slow down. Fewer people feel confident touching the model. More testing, more double-checking.

CPQ isn’t failing. It’s just hitting a limit.

Where LLM and RAG come in

There’s a lot of talk about LLMs and RAG. In CPQ, they don’t replace the rules—they sit on top of them.

An LLM (large language model) helps interpret what the user is asking. It turns a question like “why can’t I select this option?” into something the system can work with.

RAG (Retrieval-Augmented Generation) makes sure the answer is grounded. It pulls from actual configuration logic, documentation, or product data instead of guessing.
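To make "grounded" concrete, here is a toy retrieval step: score rule snippets by word overlap with the question and return the best match to base the answer on. The rule texts and the scoring function are invented for illustration; a real system would search the actual configuration model, typically with embeddings rather than keyword overlap.

```python
import re

def retrieve(question, documents, top_k=1):
    """Rank documents by how many words they share with the question."""
    def tokens(text):
        return set(re.findall(r"[a-z0-9-]+", text.lower()))

    q = tokens(question)
    scored = sorted(documents, key=lambda d: len(q & tokens(d)), reverse=True)
    return scored[:top_k]

# Invented example rules -- stand-ins for real configuration logic.
rules = [
    "Option heated-seats requires trim level comfort or above.",
    "Engine v8 is not available in region EU.",
    "Color matte-grey adds a 4 week lead time.",
]

hits = retrieve("Why can't I select heated-seats?", rules)
print(hits[0])  # the rule that actually explains the restriction
```

The point isn’t the scoring method: it’s that the answer is tied to a specific, retrievable rule instead of generated from nothing.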

And CPQ still does what it always did: enforce what’s valid.

So instead of replacing the model, you get a layer that helps you understand and use it.

Where this actually helps

You see the impact in small, practical moments.

Someone asks why an option is restricted. Instead of escalating, they get an answer tied to real rules.

A new salesperson needs to understand a product. Instead of reading static documentation, they ask questions and explore.

A change is proposed. Instead of guessing the impact, you can trace it back to the underlying logic.

That’s where “explainable” starts to matter. Not as a concept, but as something people actually use.

Keep it grounded

It’s tempting to treat this as a shift away from rules. It isn’t.

If your CPQ model is messy, LLM and RAG won’t fix it. They’ll expose it faster.

The same fundamentals still apply: structured data, clear naming, and well-defined logic.

What changes going forward

What does change is expectations.

People won’t accept black-box answers. If the system says no, it needs to explain why—and point to something real.

That’s where the combination works:

  • LLM interprets the question
  • RAG finds the right data
  • CPQ enforces the rules

Simple in theory. Harder in practice.

Bottom line

This isn’t a replacement for CPQ. It’s an extension.

Rules still matter. But how you access and understand them is changing.

If you get that right, the model becomes easier to work with—and people actually start using it the way it was intended.
