AI won’t fix RGM on its own. The advantage goes to teams that translate it into repeatable decisions.
Most CPG organisations are already surrounded by AI demos, pilots, and point solutions. The real gap isn’t access to technology. It’s the ability to turn AI into better, repeatable RGM decisions that survive scrutiny from commercial leadership and work inside existing planning cycles.
This session reframes AI in RGM as a transformation discipline: lead with a business case rooted in pain, bring stakeholders onto the journey, prove value early, and keep iterating until the output is trusted enough to change how pricing, promotions, and portfolio decisions actually get made.
Build an AI business case the way leaders approve budgets: start with painkillers, then vitamins. How forecast inaccuracy creates concrete cost and penalty exposure, and why that “pain” unlocks sponsorship faster than promising abstract uplift. [15:24]
Avoid the “bright idea, no budget” trap by knowing where transformation money really sits. Why the funding conversation is usually only two to three steps away if the problem statement is sharp and leadership recognises the pain. [18:43]
Treat AI adoption as a knowledge and mindset shift, not a tool rollout. How teams derail projects when they expect deterministic outputs, and why you need a shared vocabulary to stop people interpreting the same AI result in incompatible ways. [22:08]
Decide who you’re solving for, then land AI inside the planning cadence that already runs the business. A practical lens on stakeholders and process touchpoints (strategy, annual planning, S&OP, QBRs) so AI outputs become decision inputs, not “extra analysis.” [24:30]
Create early wins the organisation will repeat: parallel runs, market tests, and credibility over time. Why trust is built by comparison and back-testing, not by adding narrative layers to dashboards, and how time savings can be a legitimate first wave of value. [29:59]
Haja Deen, CIO, CTO, Tech Business Growth Advisor
Haja Deen is a tech leader with nearly 30 years of experience driving digital transformations for global companies like pladis Global, Saint-Gobain, and Holland & Barrett. A transformation expert, he helps leadership teams unlock revenue and enhance customer experiences through technology. Haja is also a startup co-founder, advisor, and author of "Build the Right Thing."
AI impact in RGM starts with a macro shift, not a feature set. [05:11]
Haja frames AI adoption against a changing economic environment, arguing that post-inflation conditions will change how pricing and promotion levers behave, making better prediction capabilities more valuable.
Excel-driven RGM is losing effectiveness because the data reality has changed. [09:35]
As consumer choice expands and data volume grows, manual or intuition-led approaches struggle to keep up. The case for AI is not novelty; it is that older methods can no longer process the decision space fast enough.
RGM maturity is a journey from cost-plus intuition to consumer-led prediction. [11:59]
He contrasts Excel-based, manual pricing approaches with advanced teams using AI to understand consumers and drive pricing, promotions, and price pack architecture (PPA) decisions with a “triple win” ambition.
Painkillers beat vitamins when you need leadership sponsorship. [18:03]
A transformation business case wins when it is anchored in measurable operational pain—like forecast inaccuracy driving reverse logistics, warehousing costs, and penalties—before it promises uplift.
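As a minimal sketch of the kind of “painkiller” arithmetic this implies (not from the session, and every number below is an illustrative assumption), forecast error can be translated into an annual cost exposure that leadership can recognise:

```python
# Hypothetical "painkiller" arithmetic (all numbers illustrative): translate
# forecast error into avoidable cost before talking about growth upside.
weekly_volume_units    = 200_000  # assumed weekly shipment volume
forecast_error_rate    = 0.12     # assumed share of volume mis-forecast each week
overstock_share        = 0.5      # assumed half of the error shows up as excess stock
cost_per_excess_unit   = 0.40     # assumed warehousing + reverse logistics cost per unit
penalty_per_short_unit = 0.60     # assumed service-level penalty per missed unit

excess_units = weekly_volume_units * forecast_error_rate * overstock_share
short_units  = weekly_volume_units * forecast_error_rate * (1 - overstock_share)

weekly_pain = excess_units * cost_per_excess_unit + short_units * penalty_per_short_unit
print(f"Avoidable cost exposure: ~{weekly_pain * 52:,.0f} currency units per year")
```

The point of the sketch is the sequencing: the cost line above is disputable only on the inputs, which is an easier conversation with sponsors than debating a projected uplift.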
AI adoption fails when teams expect deterministic behaviour from probabilistic systems. [23:22]
Haja highlights a common mindset trap: people assume consistent X-to-Y outputs, then lose trust when AI produces variable results. Projects derail if teams aren’t trained to work with probabilistic outputs.
Early wins should prove decision value, not generate prettier reporting. [27:38]
He warns against “shiny” add-ons like auto-narratives that don’t change decisions. Instead, he argues for parallel runs, market tests, and back-testing to build trust in predictions.
Value delivery is the language of the C-suite, not algorithms. [33:19]
He’s explicit that leaders don’t care which model is used; they care whether it improved top and bottom line outcomes. Transformation teams build influence by staying anchored to value, not tech.
Transformation is continuous course correction, not a launch event. [37:27]
Using a pilot analogy, he argues AI-enabled RGM needs repeated iteration based on changing technology, evolving needs, and user feedback—while keeping the destination fixed: value delivery.
What are the most powerful RGM use cases you’ve seen for AI?
Haja’s view is that the most powerful use case is improving the ability to predict consumer behaviour across the full set of RGM levers, rather than focusing on a single lever in isolation. The advantage comes when pricing, promotions, PPA, and related decisions are improved through better prediction in one coherent approach. [41:58]
How should an AI transformation business case be positioned to get approved?
He recommends starting with “painkillers” that leaders already feel—avoidable operational costs, penalties, and inefficiencies—then layering in “vitamins” like growth upside. That sequencing makes sponsorship faster because the pain is harder to dispute. [18:03]
How do you handle the budget problem when AI wasn’t planned in the annual cycle?
Haja argues budgets can usually be unlocked if the business case is clear, often through discretionary transformation funding held at C-level or in technology and strategy teams. His rule of thumb is that you are often only a few conversations away from funding if the problem is well framed. [19:14]
Why do AI projects derail even after teams build models?
Because teams expect deterministic, repeatable outputs and lose trust when AI behaves probabilistically and results vary. He stresses the need to level-set understanding and mindset so stakeholders don’t interpret variability as failure. [23:22]
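A minimal sketch of the mindset gap, assuming a simple probabilistic demand forecast (the model, names, and numbers below are all illustrative, not from the session): asking the same model twice for “the” answer can return different values without either being wrong, which is why stakeholders need to be level-set on intervals rather than single numbers.

```python
import numpy as np

# Hypothetical illustration: a probabilistic forecast is a distribution,
# not a single number.
rng = np.random.default_rng()

MEAN_UNITS = 10_000   # assumed baseline weekly demand
CV = 0.08             # assumed coefficient of variation of the forecast

def ask_model_for_a_number() -> float:
    """Stand-in for pulling a single point forecast from a probabilistic model."""
    return rng.normal(loc=MEAN_UNITS, scale=MEAN_UNITS * CV)

# A deterministic mindset expects these two calls to match exactly.
print(f"Monday's forecast:  {ask_model_for_a_number():,.0f} units")
print(f"Tuesday's forecast: {ask_model_for_a_number():,.0f} units")

# A probabilistic mindset asks for the range the model supports instead.
samples = rng.normal(MEAN_UNITS, MEAN_UNITS * CV, size=10_000)
lo, hi = np.percentile(samples, [5, 95])
print(f"90% interval: {lo:,.0f} to {hi:,.0f} units")
```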
How do you build trust in AI outputs while the business still has to run?
He recommends parallel runs and testing: compare AI outputs with existing approaches, back-test against historical periods, and validate in-market where possible. Trust grows through evidence over time, not through big-bang adoption. [29:59]
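As a minimal sketch of what such a parallel run or back-test might look like, assuming weekly demand actuals, the incumbent planning forecast, and an AI forecast over the same historical periods (all column names and numbers are illustrative assumptions), both approaches are scored on the same error metric before the AI output is allowed to drive decisions:

```python
import pandas as pd

# Hypothetical back-test: score the incumbent forecast and the AI forecast
# against the same actuals, on the same metric, over the same weeks.
history = pd.DataFrame({
    "week":          pd.date_range("2024-01-01", periods=8, freq="W"),
    "actual_units":  [980, 1040, 1120, 990, 1500, 1480, 1010, 970],
    "baseline_fcst": [1000, 1000, 1000, 1000, 1100, 1100, 1000, 1000],
    "ai_fcst":       [990, 1030, 1090, 1005, 1430, 1455, 1020, 985],
})

def wape(actual: pd.Series, forecast: pd.Series) -> float:
    """Weighted absolute percentage error: total miss as a share of total volume."""
    return (actual - forecast).abs().sum() / actual.sum()

print(f"Baseline WAPE: {wape(history.actual_units, history.baseline_fcst):.1%}")
print(f"AI WAPE:       {wape(history.actual_units, history.ai_fcst):.1%}")
# If the AI forecast wins consistently across periods and categories, the
# parallel run becomes the evidence base for letting it inform live decisions.
```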
How do you keep AI work focused when the tool ecosystem is moving so fast?
His advice is to stay anchored on value delivery and treat technology as an enabler, not the objective. Continually refine the roadmap based on where pricing and commercial decisions can be made cheaper, faster, and better than competitors. [34:40]