Your Company Doesn't Need More AI Pilots. It Needs an Operating System for Leadership.
How the Regenvita Value Engine turns direction into EBITDA, safely and at scale.
Editor's note: this has become a rather long piece; I am considering splitting it into a three-part series later.
By Clint Sookermany.
On a Monday at 8:03 a.m., the CFO of a mid-size insurer opens her weekly deck. Three lines matter: unit cost per claim has fallen for the fourth consecutive week, cycle time is down 37%, and exceptions flagged "high-risk" now arrive with a full decision trail. No more forensic email hunts when the auditor calls. No one launched a new "AI initiative" last week. What changed was simpler and more radical: the executive team stopped treating AI as a project and installed an operating system for leadership.
That operating system is a Value Engine. It is a closed loop that converts strategy into shipped outcomes. It does four things relentlessly: set a clear vector, fund a portfolio, build and supervise human plus agent workflows with embedded controls, and manage by evidence. This is the core of the Regenvita Value Engine Operational Framework (R-VEOF). R-VEOF delivers these four disciplines through a five-stage loop: Direction, Design, Delivery, Evidence, and Reallocation, so leadership choices turn into shipped work and capital moves on proof. If you are a CEO or board chair and you are not running the company this way, you will very likely end up with theatre: slick demos, uneven adoption, and no reliable movement in the P and L.
Table of contents
  • The leadership problem nobody names
  • Vector leadership: direction, magnitude, coherence
  • The loop that compounds
  • Four irreversible choices for CEOs
  • What changes for managers, and why it sticks
  • The traps that quietly kill momentum
  • Why boards should demand a Value Engine
  • The quiet power of coherence
  • Addendum: The Model Explained
The leadership problem nobody names
Most AI programmes stall because leaders are not yet embedded users of AI. That is often the root cause. Before you talk cadence or culture, leaders must be comfortable using AI for their own decision making, strategy work, idea generation, counselling, and problem solving. Not theory. Daily practice on real work. Once leaders both know and feel what is possible, they stop guessing. They can name where AI belongs, which people should own it, what type of guardrails are needed, and what evidence to demand. They know enough to point the organisation at the right problems and appoint the right operators.
Until leaders reach that point, the company's operating rhythm does not change: meetings, decisions, and incentives stay as they are. Tools get bolted onto legacy processes designed for a different era. Leaders ask for "use cases," not for redesigned workflows that compress time and error. The result is a zoo of pilots and dashboards, each promising value, none compounding.
The Problem
Leaders who don't use AI daily cannot guide AI transformation effectively
The Symptom
Legacy processes remain unchanged whilst tools are bolted on superficially
The Solution
Make leadership and value stream focus the primary variables, not technology
The Value Engine fixes this by making leadership and value stream focus the primary variables, not technology. It is an operating system because it prescribes how decisions, dollars, and oversight move every week, month, and quarter. Technology slots into that rhythm, not the other way around.
Vector leadership: direction, magnitude, coherence
Vector leadership is straightforward in principle. Strategy sets Direction and leaders make Decisions that fix what we will and will not do. Performance is Momentum, the rate and quality of progress in that direction. Coherence is whether the whole company moves together. In short: Impact = Direction × Decision Quality × Momentum × Coherence. If any term is near zero, impact collapses. In practice, vector leadership means three to five value pools with guardrails and explicit financial and quality targets that everyone can see, recited often enough to be almost tedious. Here, tedious is good. Tedious aligns.
"When a European retailer declares its vector: 'reduce stockouts 40% whilst cutting working capital'. It does not start with a model. It starts by rewriting the weekly sales-and-operations meeting."
Agents produce exception summaries. Planners approve or reject with one click. The CFO tracks cash conversion weekly. Within a month, politics give way to evidence.
The loop that compounds
The Regenvita Value Engine runs a tight loop: Direction to Design to Delivery to Evidence to Reallocation. It is built for modern realities, including agentic workflows, evaluation harnesses, lineage, and cost caps. The loop does not just learn; it monetises learning. In practice, Design and Delivery happen inside the Factory, while Evidence and Reallocation drive Portfolio decisions against the published vector.
Direction
Clarifies the vector and the risk appetite in plain language: what value we pursue, the risks we will accept to get it, the jurisdictions we must respect.
Design
Rewrites workflows as human plus agent systems. Every step names who or what acts, what data are touched, the control at that point, and the decision trail it leaves behind; a minimal sketch of such a step record follows this list.
Delivery
Ships in weeks, not quarters, because the unit of progress is a workflow slice with measurable deltas.
Evidence
Not a case study; it is the before and after in unit economics, with variance explained.
Reallocation
The muscle most firms lack: kill or double down based on the evidence, not seniority.
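To make the Design discipline concrete, here is a minimal sketch of what a workflow-step record might capture; the class and field names are illustrative assumptions, not part of R-VEOF.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Illustrative sketch: one step in a redesigned human plus agent workflow.
# R-VEOF prescribes the content (actor, data, control, trail), not this schema.
@dataclass
class WorkflowStep:
    name: str                    # what the step does
    actor: str                   # who or what acts, e.g. "human:claims_reviewer" or "agent:triage_v3"
    data_touched: list[str]      # datasets or fields read or written
    control: str                 # the embedded control at this point
    decision_trail: list[dict] = field(default_factory=list)  # audit log entries

    def record_decision(self, decision: str, rationale: str) -> None:
        """Append an auditable entry so the trail is searchable later."""
        self.decision_trail.append({
            "at": datetime.now(timezone.utc).isoformat(),
            "actor": self.actor,
            "decision": decision,
            "rationale": rationale,
        })

step = WorkflowStep(
    name="pre-check claim",
    actor="agent:claims_precheck_v2",
    data_touched=["claims.header", "policy.coverage"],
    control="auto-approve under 1,000; route above to human review",
)
step.record_decision("route_to_human", "claim amount exceeds auto-approve threshold")
```

Because every step carries its own trail, the auditor's question "who decided this, and why" becomes a query rather than an email hunt.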
Four irreversible choices for CEOs
1) Choose a vector you are willing to publish
A private wish list is not a vector. Publishing three to five value pools with target EBIT or ROCE, along with non-negotiable guardrails, forces coherence. It also gives your board a clean line of sight: here is where value will appear; here is the risk we are taking to get it. When leaders hesitate to publish, it is usually fear of accountability or a vague strategy masquerading as optionality. Call it out and fix it.
2) Fund a portfolio, not pilots
Pilots are permission slips for inaction. Portfolios create obligation. The Regenvita approach, with its Run, Grow, and Transform lanes, is blunt on purpose.
  • Run cuts unit cost, cycle time, and defects in core workflows.
  • Grow improves conversion, cross-sell, retention, and share of wallet.
  • Transform opens new products and channels or enables step-change operations, such as agentic service or autonomous planning.
Each lane has hurdle rates, kill criteria, and stage gates. Your finance team should feel like it is managing capital, not sentiment.
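As a sketch of what "managing capital, not sentiment" can look like in practice, the entries below carry hurdle rates, kill criteria, and stage gates as data; every bet name, owner, and threshold is invented for the example, not a Regenvita standard.

```python
# Hypothetical portfolio entries; field names and values are assumptions.
PORTFOLIO = [
    {
        "bet": "claims-triage-automation",
        "lane": "Run",                      # Run, Grow, or Transform
        "owner": "vp_claims_ops",
        "hurdle_rate": 0.15,                # minimum acceptable return before scaling
        "kill_criteria": "no unit-cost reduction after two monthly stage gates",
        "stage_gate": "monthly portfolio council",
    },
    {
        "bet": "agentic-renewal-offers",
        "lane": "Grow",
        "owner": "head_of_retention",
        "hurdle_rate": 0.25,
        "kill_criteria": "conversion lift below 2% at second gate",
        "stage_gate": "monthly portfolio council",
    },
]

def bets_up_for_review(portfolio: list[dict], lane: str) -> list[str]:
    """Return the named bets in a lane so the council reviews capital, not charisma."""
    return [b["bet"] for b in portfolio if b["lane"] == lane]

print(bets_up_for_review(PORTFOLIO, "Run"))
```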
3) Build a Factory, not a lab
Labs produce prototypes. Factories produce throughput with traceability. A working Factory has components such as connectors, prompt libraries, and evaluators. It enforces standards such as versioning and approval flows. It defines a clear human in the loop model. It also runs a showback rhythm where teams demo shipped workflows, not slides. Compliance speeds up when controls and audit trails are designed into the workflow from day one.
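A minimal sketch of what a Factory registry entry might hold, assuming a simple in-memory store; the component names, version, and approval fields are hypothetical.

```python
# Hypothetical Factory registry: shared components are versioned and
# approval-gated before any workflow may use them.
REGISTRY = {
    "prompt:risk_narrative": {
        "version": "1.4.0",
        "approved_by": "model_risk_committee",   # the approval flow sign-off
        "evaluator": "eval:risk_narrative_golden_v3",  # evaluation harness to run
        "human_in_loop": "high_risk_cases_only",
    },
}

def resolve(component_id: str) -> dict:
    """Only approved, versioned components leave the Factory."""
    entry = REGISTRY[component_id]
    if not entry.get("approved_by"):
        raise PermissionError(f"{component_id} lacks an approval sign-off")
    return entry

print(resolve("prompt:risk_narrative")["version"])
```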

Consider a global bank's KYC revitalisation. Before: four systems, manual notes, and long tails of exceptions. After Factory treatment: pre-checks automated, risk narratives generated, high-risk cases routed with full context, and every decision signed by an agent plus human pair. Auditors do not just get a better story. They get the actual trail.
4) Make cadence your moat
Cadence is not calendar theatre. It is how value appears. Quarterly, leadership reaffirms the vector and reallocates capital across the portfolio. Monthly, the portfolio council greenlights, kills, or scales based on evidence, not charisma. Weekly, the executive team reviews the few KPIs that explain movement in the P and L and the health of the agent network: unit cost, cycle time, error rate, revenue lift, incidents, model cost and latency. Daily, operations keeps service level objectives (SLOs) green and responds to drift or anomalies with clear kill switches. When cadence is real, surprises shrink and compounding starts.
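A sketch of how the weekly KPI set and the daily SLO check could sit in one place: the metric names come from the paragraph above, whilst every threshold and value is invented for the example.

```python
# Illustrative weekly KPI snapshot; numbers are examples, not benchmarks.
WEEKLY_KPIS = {
    "unit_cost": 4.10,            # currency per transaction
    "cycle_time_hours": 6.5,
    "error_rate": 0.012,
    "revenue_lift": 0.03,
    "incidents": 1,
    "model_cost_per_txn": 0.08,
    "p95_latency_ms": 900,
}

# Assumed SLO limits; a breach triggers the defined kill switch and escalation.
SLO_LIMITS = {"error_rate": 0.02, "p95_latency_ms": 1500}

def slo_breaches(kpis: dict, limits: dict) -> list[str]:
    """Daily check: return every metric that has crossed its limit."""
    return [k for k, limit in limits.items() if kpis.get(k, 0) > limit]

if breaches := slo_breaches(WEEKLY_KPIS, SLO_LIMITS):
    print("Trigger kill switch for:", breaches)
else:
    print("SLOs green; cadence continues")
```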
What changes for managers, and why it sticks
Managers stop being translators of strategy and become supervisors of networks: teams of people augmented by agents that managers can understand, tune, and retire. Their job shifts from reporting up to editing the workflow: moving the human in the loop to the right step, tightening evaluation thresholds, and escalating only when risk or value warrants it.
Three things make the change durable:
Auditability as a feature
Decision trails are visible, searchable, and regulator-ready. When audit pain disappears, resistance to automation collapses.
Unit economics in the open
Everyone can see the cost per transaction and the value per marginal improvement. Debate becomes maths, not politics.
Reusability beats constant reinvention
A common Factory turns one team's breakthrough into everyone's standard in days, not quarters.
The traps that quietly kill momentum
Pilot theatre
You are shipping demos, not value. Solution: convert pilots into portfolio entries with hurdle rates and owners, or shut them down.
Tool sprawl
If your prompt library and evaluator suite are not shared, you are paying a tax on every new workflow. Solution: centralise components and decentralise ownership of outcomes.
KPI fog
Dashboards with thirty charts do not explain the P and L. Solution: pick the five causal metrics that actually move EBIT and review them weekly.
Shadow AI
Unlogged prompts and unversioned agents feel fast until the first incident. Solution: put approvals and logging in the runtime, not in spreadsheets.
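One way to put approvals and logging in the runtime is a single gateway that every agent call must pass through. The sketch below assumes an in-memory allow-list and prints log entries; a real deployment would use its own registry and an append-only audit store.

```python
import json
from datetime import datetime, timezone

# Assumed allow-list: only versioned, approved agents may run.
APPROVED_AGENTS = {"triage_v3", "summary_v7"}

def call_agent(agent_id: str, prompt: str, run_fn) -> str:
    """Gateway that enforces approval and logs every call, in the runtime."""
    if agent_id not in APPROVED_AGENTS:
        raise PermissionError(f"Unapproved or unversioned agent: {agent_id}")
    output = run_fn(prompt)                  # the actual model or tool invocation
    log_entry = {
        "at": datetime.now(timezone.utc).isoformat(),
        "agent": agent_id,
        "prompt": prompt,
        "output": output,
    }
    print(json.dumps(log_entry))             # stand-in for an append-only audit store
    return output

call_agent("triage_v3", "Summarise exceptions for claim 1042",
           lambda p: "2 high-risk exceptions")
```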
Why boards should demand a Value Engine
The Board's Dilemma
Boards are on the hook for value and risk. The Value Engine gives them both: a clear line of sight to value pools and a concrete risk appetite enforced through embedded controls.
Those controls include data lineage, model evaluation, cost caps, kill switches, and incident playbooks. Governance accelerates when it is inside the workflow, not stapled on after the fact.
"We didn't become more innovative. We became more consistent about turning decisions into cash, with fewer surprises."
— Chair at a conglomerate, after six months
That is the right bar to have.
The quiet power of coherence
When a company installs this operating system, the oddest thing happens: the technology becomes less interesting. Meetings tend to get shorter. Evidence replaces argument. People ship. Culture shifts not because a memo demanded it, but because the fastest path to recognition and promotion runs through the Value Engine. You can feel the firm's vector in the corridors: direction, magnitude, coherence.
The firms that succeed will not be the ones with the most models.
They will be the ones whose leaders run operational frameworks such as the Value Engine, who treat AI not as a gadget but as infrastructure for decisions. If you are serious about performance, stop asking for more pilots. Become an embedded user yourself, then install the operating system.
Addendum: The Model Explained
This addendum sets out the leadership model that underpins R-VEOF and explains how to turn it into operating reality.
The Regenvita Value Engine Operational Framework
The Regenvita Value Engine Operational Framework codifies the loop, Direction, Design, Delivery, Evidence, Reallocation, into an operating cadence, a portfolio model, a Factory for human plus agent workflows, and controls by design. It is leadership's operating system for the AI era.
The formula
Impact = Direction × Decision Quality × Momentum × Coherence
If any term is near zero, overall impact collapses.
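A worked example makes the point. If Direction, Decision Quality, and Momentum all score 0.9 but Coherence sits at 0.1, impact is 0.9 × 0.9 × 0.9 × 0.1 ≈ 0.07, roughly a tenth of the 0.9 × 0.9 × 0.9 × 0.9 ≈ 0.66 the same organisation achieves when every term holds at 0.9.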
Direction
Where the company is going and why. It is expressed as a small, public set of value pools with targets and guardrails.
Decision Quality
The calibre of leadership choices about what to do and what not to do. It includes risk appetite, capital allocation, and what to kill or scale.
Momentum
The rate and quality of execution. It is quality-adjusted throughput, not raw effort.
Coherence
Whether the organisation moves together. It requires aligned incentives, shared information, and a fixed cadence.
Why these terms and how they connect to practice
  • Direction without Decision Quality creates motion without choice. You get activity without advantage. Publishing three to five value pools forces clarity and makes trade-offs explicit.
  • Decision Quality without Direction creates analysis without focus. You get debates that never end. Stating the vector and the boundaries gives choices a frame.
  • Momentum without Coherence creates local wins that do not compound. Teams row hard in different directions. Shared scoreboards, incentives, and showbacks fix this.
  • Coherence without Momentum creates tidy processes with little movement. Cadence matters only when teams ship and prove value.
How the model maps to R-VEOF components
  • Direction maps to the R-VEOF Value Map and the publication of targets and guardrails.
  • Decision Quality maps to R-VEOF portfolio selection, hurdle rates, stage gates, and reallocation rules.
  • Momentum maps to the R-VEOF Factory and its ability to produce, version, and operate human plus agent workflows with evidence.
  • Coherence maps to R-VEOF cadence, governance, incentives, and shared components.
How to operationalise each term inside R-VEOF
(Note that some of this guidance is written with AI-enabled work specifically in mind.)
Direction
Publish three to five value pools that roll up cleanly to P and L and risk. For each pool, include an Innovation Considerations section that guides exploration inside the pool. Use proportionality as the rule. The section is as long as required to support a high-quality decision, and no longer.
Structure the Innovation Considerations in two parts. Part A is a one-page executive summary that is mandatory for portfolio council decisions. It must state the problem, the value hypothesis, the specific outcomes to move, the learning goals for the next period, the capital request with the Run, Grow, Transform split, the guardrails and risk posture, the success criteria and kill criteria, the named owners, and the planned date for the first signal.
Part B is an annex that you include whenever risk, novelty, or capital at stake exceeds predefined thresholds. It may contain prior evidence and comparable cases, the experiment backlog, data access and masking plans, model and agent designs with evaluation plans and golden datasets, control design and logging requirements, regulatory considerations, integration dependencies, and a graduation plan from sandbox to production.
Allocate each pool's capital across Run, Grow, and Transform with explicit percentages and publish these alongside the pool's financial and quality targets. State the pool's risk appetite in plain language. Specify permitted data, privacy rules, acceptable error bands by use case, latency and cost budgets, when a human must be in the loop, what must be logged, and the triggers for incident escalation.
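A sketch of a published value-pool definition under these rules; the pool name, numbers, and field names are illustrative assumptions, but the content mirrors what the Direction discipline asks leaders to publish.

```python
# Hypothetical value-pool definition; all names and figures are examples.
VALUE_POOL = {
    "name": "claims_unit_economics",
    "targets": {"ebit_uplift_pct": 4.0, "error_rate_max": 0.015},
    "capital_split": {"Run": 0.6, "Grow": 0.3, "Transform": 0.1},
    "risk_appetite": {
        "permitted_data": ["claims", "policy"],          # no raw PII in sandboxes
        "error_band_by_use_case": {"triage": 0.02, "payout": 0.001},
        "latency_budget_ms": 1500,
        "cost_budget_per_txn": 0.10,
        "human_in_loop": "all payouts above 10,000",
        "must_log": ["prompts", "outputs", "overrides"],
        "incident_escalation": "page the duty manager on any payout error",
    },
}

# The split must be explicit and complete before publication.
assert abs(sum(VALUE_POOL["capital_split"].values()) - 1.0) < 1e-9
```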
Decision Quality
Make different decisions for different horizons instead of applying a single hurdle to everything. For Transform work, decide on learning evidence first, then on P and L. Approve short discovery sprints when there is an early user or operator signal, technical feasibility, and regulatory plausibility. Approve a minimal workflow slice when there is a repeatable signal and a credible control design. Require a full evidence pack to scale, and tie uplifts to the pool's published financial and quality targets.
Publish success and kill criteria before work starts. Award kill credit to leaders who shut down weak paths quickly and cleanly, and recycle any useful components into the Factory library. Move capital in public on the strength of evidence packs. This keeps choice quality high and stops theatre.
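The horizon-aware gating for Transform work can be expressed as data plus one check, as in this sketch; the criterion strings are placeholders for the success criteria a pool would publish before work starts.

```python
# Illustrative gate ladder for Transform bets; criteria names are placeholders.
GATES = [
    ("discovery_sprint", ["early_user_signal", "technical_feasibility", "regulatory_plausibility"]),
    ("minimal_slice",    ["repeatable_signal", "credible_control_design"]),
    ("scale",            ["full_evidence_pack", "uplift_tied_to_pool_targets"]),
]

def next_gate(evidence: set[str]) -> str:
    """Return the furthest gate this bet clears, in order; anything less stops."""
    cleared = "stopped"
    for gate, criteria in GATES:
        if all(c in evidence for c in criteria):
            cleared = gate
        else:
            break
    return cleared

print(next_gate({"early_user_signal", "technical_feasibility", "regulatory_plausibility"}))
# -> "discovery_sprint": approve the sprint, not the scale-up
```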
Momentum
Design human plus agent workflows with embedded controls and versioned components, and ship in weeks rather than quarters. The Factory must provide a discovery runway for every pool. This includes synthetic or masked data, a default evaluation harness, and automatic logging of prompts, inputs, outputs, tool calls, overrides, incidents, and recovery actions.
Measure quality-adjusted throughput, not activity. In addition to the pool's financial and quality metrics, track model cost per transaction, latency distribution, evaluation scores on golden datasets, reviewer override rates with reasons, incident counts, and time to recovery. For exploration, also track time to first signal, learning velocity in tested iterations, graduation rate into production, option value realised through reusable components, and cost per validated learning.
Graduate slices from sandbox to production only when version pinning, control upgrades, cost and latency budgets, evaluation thresholds, and a successful red team review are in place. Once in production, the slice adopts the pool's standard unit economics and quality targets.
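A minimal sketch of the graduation check: the requirement names track the prose above, and the dictionary passed in is a stand-in for whatever system of record holds slice state.

```python
# Requirements a slice must satisfy before leaving the sandbox.
GRADUATION_REQUIREMENTS = [
    "version_pinning",
    "control_upgrades",
    "cost_and_latency_budgets",
    "evaluation_thresholds_met",
    "red_team_review_passed",
]

def may_graduate(slice_state: dict) -> bool:
    """A slice ships only when every requirement is in place; no exceptions."""
    missing = [r for r in GRADUATION_REQUIREMENTS if not slice_state.get(r)]
    if missing:
        print("Blocked; missing:", missing)
        return False
    return True

may_graduate({"version_pinning": True, "control_upgrades": True})
```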
Coherence
Align incentives, information, and rhythm so that people pull in the same direction. Tie executive and manager bonuses to the value-pool scoreboards rather than vanity metrics. Replace slide reviews with showbacks of live workflows and evidence. Hold the quarterly, monthly, weekly, and daily rhythms steady. Each quarter, reaffirm the small set of pools, refresh targets and risk appetite, and reallocate capital. Each month, run the portfolio council and review each pool's exploration docket first and its production docket second, deciding to continue, change direction, or stop and recycle components. Each week, review a small set of causal KPIs for each pool and the health of the agent network. Each day, keep service levels green and act on drift or anomalies using defined kill switches.
To make coherence visible, standardise the scoreboard for every pool with four panels: financial outcomes, quality outcomes, AI system health, and exploration learning. Use the same scoreboard every week. People should be able to see movement, understand cause and effect, and argue with data rather than opinions.
Finally, codify exploration guardrails for every pool. Specify permitted data in sandboxes, mandatory logging, human oversight points, temporary error and latency bounds during discovery, and capped spend. Record any exceptions through the portfolio council and show them on the pool's scoreboard. This keeps novelty safe, auditable, and fast inside the same operating system.
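To show how the standard scoreboard stays identical across pools, here is a sketch with the four panels as a fixed structure; the metric names are examples drawn from this article, not a mandated schema.

```python
# The same four panels for every pool, every week, so movement is comparable.
SCOREBOARD_PANELS = {
    "financial_outcomes": ["unit_cost", "revenue_lift", "ebit_delta"],
    "quality_outcomes": ["error_rate", "cycle_time", "override_rate"],
    "ai_system_health": ["model_cost_per_txn", "p95_latency_ms", "incidents", "time_to_recovery"],
    "exploration_learning": ["time_to_first_signal", "learning_velocity", "graduation_rate"],
}

def render(pool: str, metrics: dict) -> None:
    """Print a pool's scoreboard; missing metrics show as gaps, not silence."""
    print(f"== {pool} ==")
    for panel, keys in SCOREBOARD_PANELS.items():
        row = {k: metrics.get(k, "n/a") for k in keys}
        print(f"{panel}: {row}")

render("claims_unit_economics", {"unit_cost": 4.10, "error_rate": 0.012})
```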
Why three to five value pools
This is a practical bandwidth limit that links directly to the model.
Direction
Direction stays clear above noise when the set is small.
Decision Quality
Decision Quality remains high because the top team can revisit a limited number of choices with rigour each quarter.
Momentum
Momentum sustains because owners can ship and measure weekly without splitting attention across too many fronts.
Coherence
Coherence holds because rewards, dashboards, and governance can realistically align around a small set.

If you run a large group, apply the rule at each level. The group level holds three to five pools. Each division holds its own three to five.
Diagnostic questions for each term
Use these questions to assess whether your organisation has truly operationalised the Value Engine framework.
Direction
Can a new hire state the three to five value pools and the targets after one week in the company?
Decision Quality
Can the CFO show the last three capital reallocations and the evidence that drove them?
Momentum
Can the COO show weekly movement in quality-adjusted throughput for each pool and the changes shipped to create it?
Coherence
Can any manager explain how their bonus links to the value pool scoreboard and where to view the live numbers?
Glossary of terms used
R-VEOF. Regenvita Value Engine Operational Framework, the leadership operating system that links Direction, Portfolio, Factory, and Evidence into one repeatable loop.
Value pool. A small, named domain where changes can be traced to financial and quality outcomes that roll up to the P and L and risk.
P and L. Profit and loss, the income statement that shows revenue, costs, and profit over a period.
Risk appetite. A plain language boundary for data, models, cost, latency, and human oversight that operators can act on.
Factory. The shared capability that provides components, standards, evaluation, versioning, and a repeatable way to build and operate human plus agent workflows.
Agent. A software component that uses a model to decide on an action, call tools, or draft content within defined guardrails and with a decision trail.
Evaluation harness. A repeatable set of tests that assess accuracy, robustness, and fairness for models and agents using golden datasets.
Golden dataset. A curated set of cases with known correct outcomes used to evaluate models and agents.
Override rate. The percentage of automated recommendations changed by human reviewers, segmented by reason.
Service level objective. A target for service performance such as latency or availability that operations manages every day.
Kill switch. A control that can stop an agent or a workflow quickly when risk thresholds are crossed.
Stage gate. A defined checkpoint in the portfolio where evidence is reviewed before a bet can proceed.
Hurdle rate. The minimum acceptable return for a bet before it can scale.
Evidence pack. A short, standard set of materials that proves a bet should scale or should stop.
Portfolio council. A cross-functional group that meets regularly to greenlight, kill, or scale portfolio bets based on evidence.
Showback. A review where teams demonstrate shipped workflows and evidence rather than slide presentations.