
Reviving Explainability with Good Old-Fashioned AI

We're entranced by the spell of modern AI, but are we blindly accepting the enchantment without understanding the magic? As we sprint ahead with sophisticated machine learning models, we've lost sight of a valuable lesson from AI's past: the power of explainability, the cornerstone of Good Old-Fashioned AI (GOFAI). It seems we've lost our way in the AI labyrinth, leaving Ariadne's thread of explainability behind.

The rise of deep learning and neural networks has placed modern AI on a pedestal, where it dazzles us with its power to learn from colossal amounts of data, predict with uncanny accuracy, and self-improve. However, as we stand awestruck, we often forget to ask a fundamental question: why did the AI make that decision?

In our rush to embrace the new, we've left behind the old yet essential principle of explainability. Good Old-Fashioned AI, the symbolic AI of the past, always had an answer to the 'why.' Built on pre-defined rules and explicit logic, it was open, transparent, and accountable.

The Black Box Problem: A Modern AI Dilemma

Fast forward to modern AI, and we have the "black box" problem. It's no secret that machine learning models, while powerful, often operate in ways that are opaque and difficult to understand. You feed them data and they produce results, but the reasoning behind those results remains a mystery; even the creators of these models can't always explain their decisions.

The inability to comprehend the 'why' behind AI decisions becomes particularly problematic in high-stakes scenarios, such as healthcare diagnosis, financial services, or judicial decisions. A lack of transparency can lead to mistrust, hampering the acceptance of AI applications in such critical domains.

Back to Basics with GOFAI

This is where GOFAI shines. Its strength lies not in crunching vast amounts of data or mimicking the human brain, but in offering clarity and accountability. GOFAI doesn't just give you a decision; it tells you why.

Understanding the reasoning behind a decision is crucial in sectors where there's no room for mistakes. In healthcare, for example, it is paramount to understand why a certain treatment was recommended; in banking, why a loan application was rejected. The sketch below makes this concrete.
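
To illustrate, here is a minimal sketch of a GOFAI-style rule engine for the loan example. The rules, thresholds, and applicant fields (credit_score, income, debt_ratio) are illustrative assumptions, not a real underwriting policy; the point is that every decision carries the explicit rules that produced it.

```python
# A minimal GOFAI-style rule engine: every decision is traceable to
# explicit, human-readable rules. Thresholds and fields are illustrative
# assumptions, not a real underwriting policy.

RULES = [
    # (rule name, condition on the applicant, verdict if it holds)
    ("low_credit_score",    lambda a: a["credit_score"] < 600, "reject"),
    ("high_debt_ratio",     lambda a: a["debt_ratio"] > 0.45,  "reject"),
    ("insufficient_income", lambda a: a["income"] < 25_000,    "reject"),
]

def decide_loan(applicant):
    """Return (decision, reasons): the verdict plus every rule that fired."""
    fired = [name for name, condition, _ in RULES if condition(applicant)]
    decision = "reject" if fired else "approve"
    reasons = fired or ["all eligibility rules passed"]
    return decision, reasons

decision, reasons = decide_loan(
    {"credit_score": 580, "income": 30_000, "debt_ratio": 0.50}
)
print(decision)  # reject
print(reasons)   # ['low_credit_score', 'high_debt_ratio']
```

Because the rules are data, the 'why' falls out for free: the explanation is literally the list of rules that matched, which is the transparency GOFAI offered by construction.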

Bringing Back the 'Why' in AI

The goal here is not to replace modern AI with GOFAI but to bring back the essential principle of explainability into our AI systems. It's about infusing the best of both worlds to create AI systems that are not only intelligent but also accountable and transparent.
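
The post names no specific technique, but one common way to combine the two worlds is a global surrogate: fit a transparent model (here a shallow decision tree) to mimic a black-box model's predictions, then read the tree's if-then splits as an approximate explanation. The sketch below uses scikit-learn and synthetic data purely for illustration.

```python
# A sketch of one hybrid approach: a shallow decision tree trained as a
# "global surrogate" that approximates a black-box model, so its if-then
# splits serve as a GOFAI-style explanation. Data is synthetic.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=1000, n_features=5, random_state=0)

# The opaque model we want to explain.
black_box = RandomForestClassifier(random_state=0).fit(X, y)

# Train the surrogate on the black box's *predictions*, not the true
# labels, so the tree approximates the model's behavior, not the data.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# How faithful is the surrogate to the black box?
fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
print(f"fidelity: {fidelity:.2%}")

# The tree's explicit if-then splits are the recovered 'why'.
print(export_text(surrogate, feature_names=[f"f{i}" for i in range(5)]))
```

A caveat worth noting: the surrogate's explanation is only as trustworthy as its fidelity score; if fidelity is low, the rules describe the tree, not the black box.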

Resurrecting explainability might be the missing puzzle piece in our quest for responsible AI. The grandeur of modern AI is undisputed, but it's time we break the enchantment, step back, and question the 'why.' Remember, understanding the past can often guide us better into the future. Let's ensure our AI future is not just smart but also explainable.


Ready to learn more? Check out the online ebook on CPQ, with the possibility to book a CPQ introduction with Magnus and Patrik at cpq.se