

Free Webinar

AI in RGM: Agent-Based Simulations for Better Decisions


When “AI in RGM” turns into a decision system, not a content engine

From AI Hype to Real Commercial Decisions

Most AI efforts in RGM stumble in a familiar place: impressive outputs, unclear accountability, and no better basis for the decisions that actually move net revenue, profit, and share.

This session draws a hard line between where GenAI helps (speed, interface, interpretation) and where it misleads (predicting shopper response), and lays out what a workable approach looks like when the goal is better commercial trade-offs, executed with the right models and the right human control.

 

What You'll Learn

  • Treat “AI in RGM” as three different capabilities and decide deliberately what belongs where. 
    GenAI generates content, agentic AI pursues goals, and agent-based modelling predicts behaviour; mixing them up creates bad expectations and worse workflows.
  • Keep GenAI in leverage roles unless it is anchored to a behavioural demand engine. 
    Use it to reduce friction in research, ideation, interaction, and interpretation, but do not expect it to produce the right price without shopper response prediction behind it.
  • Force objective clarity before optimization, especially on the trade-offs leadership leaves implicit. 
    “Grow net revenue by 5%” only becomes actionable once you make margin tolerance, promo levers, competitive positioning, and constraints explicit.
  • Move beyond static elasticity habits by modelling switching and category entry and exit. 
    The decisions that matter depend on who shifts to which product, who leaves the category, and who gets pulled in under different offers.
  • Scale AI through workflow redesign and trust-building, not implementation alone. 
    Adoption is driven by how teams make decisions day to day and whether they can operationalize the system.

For senior CPG leaders accountable for pricing and promotion outcomes, this session clarifies what belongs in the AI stack and what must still be owned as a decision discipline.

Meet the Speakers


Ingo Reinhardt

Co-founder and Managing Director at Buynomics


Before Buynomics, Ingo was a Senior Director with Simon-Kucher & Partners, a global leader in pricing. He holds a Ph.D. in Management from the University of Cologne and Master's degrees in Management and Mathematics. He was a postdoctoral researcher at the University of Oxford and has published in the Strategic Management Journal.


Tim Schneider

Head of Sales Engineering at Buynomics


Before joining Buynomics, Tim worked in Boston Consulting Group's industrial goods practice in the UK, Saudi Arabia, and Germany.

 

Session Highlights

GenAI produces fluent outputs, not behavioural truth, and that distinction matters in pricing.
Large language models can summarise and respond, but they do not inherently model shopper purchasing behaviour, which is what RGM decisions depend on. [23:25]

Agentic AI is only useful when it orchestrates the right tools for the job.
A chatbot alone will not reliably answer “what is the right price,” but an agent can coordinate data access, behavioural prediction, optimization, and execution systems to get to a defensible recommendation. [09:28]
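
As a rough illustration of that orchestration idea, the sketch below routes an agent-style recommend() call through stand-ins for data access, behavioural demand prediction, and optimization. This is not the session's actual tooling; every function name, price, and elasticity value is hypothetical.

```python
# Hypothetical sketch of an agent coordinating pricing tools; all names and
# numbers are illustrative, not real components or estimates.

def fetch_prices_and_volumes(sku: str) -> dict:
    # Stand-in for a data-access tool (ERP, syndicated sales data, etc.).
    return {"price": 2.49, "weekly_units": 1200}

def predict_demand(sku: str, price: float) -> float:
    # Stand-in for a behavioural demand engine. A constant elasticity keeps the
    # sketch short; it is exactly the simplification the session warns against.
    base = fetch_prices_and_volumes(sku)
    elasticity = -1.8  # illustrative value
    return base["weekly_units"] * (price / base["price"]) ** elasticity

def optimize_price(sku: str, candidates: list[float]) -> float:
    # Stand-in for the optimization step: pick the revenue-maximising candidate.
    return max(candidates, key=lambda p: p * predict_demand(sku, p))

def recommend(sku: str) -> dict:
    # The "agent" part: sequence the right tools instead of asking a chatbot
    # to guess the answer directly.
    best = optimize_price(sku, candidates=[2.29, 2.49, 2.69, 2.89])
    return {
        "sku": sku,
        "recommended_price": best,
        "expected_weekly_units": round(predict_demand(sku, best)),
    }

print(recommend("SKU-123"))
```

The structure, not the numbers, is the point: each tool stays specialised, and the agent only coordinates them.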

Objective definition is where organisations lose the plot, and AI can help expose what is missing.
A target like “increase net revenue by 5%” is not actionable until trade-offs and constraints are spelled out, including margin tolerance, promotion levers, and competitive boundaries. [19:29]
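
As a minimal, invented example of what spelling out those trade-offs can look like: the target becomes an objective plus explicit constraints, and plans that quietly breach margin tolerance or promo guardrails are filtered out. All plan names and thresholds below are illustrative.

```python
# Illustrative only: turning "grow net revenue by 5%" into an explicit
# objective with constraints. The candidate plans and thresholds are invented.

candidate_plans = [
    {"name": "deep promo",     "net_revenue_change": 0.07, "margin_change": -0.04, "promo_weeks": 20},
    {"name": "list increase",  "net_revenue_change": 0.05, "margin_change":  0.01, "promo_weeks": 8},
    {"name": "mixed approach", "net_revenue_change": 0.06, "margin_change": -0.01, "promo_weeks": 12},
]

constraints = {
    "min_net_revenue_change": 0.05,  # the stated target
    "max_margin_erosion": -0.02,     # margin tolerance leadership rarely states out loud
    "max_promo_weeks": 15,           # promo-pressure guardrail
}

def feasible(plan: dict) -> bool:
    # A plan only counts if it hits the target without breaching the implicit limits.
    return (plan["net_revenue_change"] >= constraints["min_net_revenue_change"]
            and plan["margin_change"] >= constraints["max_margin_erosion"]
            and plan["promo_weeks"] <= constraints["max_promo_weeks"])

viable = [p for p in candidate_plans if feasible(p)]
best = max(viable, key=lambda p: p["net_revenue_change"])
print(best["name"])  # "deep promo" wins on revenue alone but fails the margin constraint
```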

Behavioural simulation wins because it captures switching, thresholds, and category entry and exit.
RGM impact comes from how shoppers move between products and whether the category grows or shrinks when offers change, not just from a single elasticity estimate. [28:28]
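
A toy agent-based sketch of that point, with made-up willingness-to-pay distributions: when one product's price rises, some simulated shoppers switch to the competitor and others leave the category entirely, which a single elasticity estimate would blur into one number.

```python
# Toy agent-based simulation; the price points and willingness-to-pay ranges
# are invented for illustration.
import random

random.seed(0)

# Each simulated shopper has a willingness to pay for product A and product B.
shoppers = [(random.uniform(0.5, 1.5), random.uniform(0.5, 1.5)) for _ in range(10_000)]

def choices(price_a: float, price_b: float) -> dict:
    counts = {"A": 0, "B": 0, "none": 0}
    for wtp_a, wtp_b in shoppers:
        surplus_a, surplus_b = wtp_a - price_a, wtp_b - price_b
        if max(surplus_a, surplus_b) <= 0:
            counts["none"] += 1   # category exit: neither offer is worth it
        elif surplus_a >= surplus_b:
            counts["A"] += 1      # stays with (or switches to) A
        else:
            counts["B"] += 1      # switches to (or stays with) B
    return {k: round(v / len(shoppers), 3) for k, v in counts.items()}

print("A at 1.00:", choices(price_a=1.00, price_b=1.10))
print("A at 1.30:", choices(price_a=1.30, price_b=1.10))  # B share and exits both grow
```

The useful output is not A's volume drop on its own, but how that drop splits between switching to B and leaving the category.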

A pragmatic AI stack is layered: simulation first, co-pilot second, automation third.
The sequence described is to build the digital twin to predict demand response, add GenAI to reduce friction in interaction and interpretation, then automate optimization once objectives and constraints are clear. [34:36]

Scaling is mostly a workflow and trust challenge, not a technology gap.
Teams used to legacy approaches can struggle to trust AI-driven recommendations, and the bigger shift is changing how decisions are made and governed. [41:24]

 

Q&A