Welcome back to OR-Path!

This series is about applying Operations Research inside real systems, with real data, real users, real overrides. Not just solving models, but operating them.

One recurring engineering mistake: we measure solver performance and ignore execution performance. We know the optimality gap. We don't know how much of the solution was actually used.

That's a system design failure.

So let's get straight to it.

The KPI That Optimizes Reality: Adoption

Your model optimizes cost, revenue, time, penalties.

The company measures margin, service level, throughput.

The bridge between them is:

Adoption = similarity between prescribed and executed decisions.

If adoption is low, business impact is low, regardless of how elegant your model is.

Stop treating this as a soft metric.
It's a production KPI.
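What "similar" means depends on your decision structure, so the sketch below is just one possibility: an exact-match share over a dictionary of decisions. The function name, keys, and metric are assumptions, not a standard.

```python
def adoption_rate(prescribed: dict, executed: dict) -> float:
    """Share of prescribed decisions executed unchanged.

    Both arguments map a decision key (order ID, route leg, SKU/plant
    pair, ...) to the chosen value. Exact match is only one possible
    similarity; swap in whatever fits your decision structure.
    """
    if not prescribed:
        return 0.0
    matched = sum(1 for k, v in prescribed.items() if executed.get(k) == v)
    return matched / len(prescribed)


# 3 of 4 prescribed assignments survived planner review -> adoption 0.75
plan = {"o1": "truck_A", "o2": "truck_A", "o3": "truck_B", "o4": "truck_C"}
actual = {"o1": "truck_A", "o2": "truck_B", "o3": "truck_B", "o4": "truck_C"}
print(adoption_rate(plan, actual))  # 0.75
```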

Instrument the Optimizer Properly

Every run must persist:

  • Versioned input snapshot

  • Full optimizer output (decision variables, objective value)

  • Timestamp / scenario ID

  • Executed or approved plan

Never overwrite results.
Append. Always append.

If you overwrite runs, you destroy your ability to debug trust.
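As a minimal sketch of what one appended record could look like, assuming illustrative field names and a JSON-lines file standing in for wherever you actually persist runs:

```python
import json
from dataclasses import dataclass, asdict, field
from datetime import datetime, timezone
from typing import Optional


@dataclass
class OptimizerRun:
    """One record per solve; the fields here are illustrative."""
    scenario_id: str
    input_snapshot_version: str           # versioned input snapshot
    objective_value: float
    prescribed_plan: dict                 # decision variables as prescribed
    executed_plan: Optional[dict] = None  # approved/executed plan, once known
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )


def append_run(run: OptimizerRun, path: str = "runs.jsonl") -> None:
    """Append one record to the log; existing lines are never rewritten."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(run)) + "\n")


append_run(OptimizerRun(
    scenario_id="2024-W27-routing",
    input_snapshot_version="inputs-v42",
    objective_value=18250.0,
    prescribed_plan={"o1": "truck_A", "o2": "truck_B"},
))
```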

Store this in something durable:

  • Relational database (e.g., PostgreSQL); see the sketch after this list

  • Data warehouse (e.g., BigQuery)

  • Even Google Sheets in early-stage setups
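If you start relational, one append-only table is enough. Here is a sketch with sqlite3 standing in for PostgreSQL or your warehouse; the table and column names are assumptions:

```python
import json
import sqlite3

conn = sqlite3.connect("or_runs.db")
conn.execute("""
CREATE TABLE IF NOT EXISTS optimizer_runs (
    run_id          INTEGER PRIMARY KEY AUTOINCREMENT,
    scenario_id     TEXT NOT NULL,
    run_timestamp   TEXT NOT NULL,
    input_version   TEXT NOT NULL,
    objective_value REAL,
    prescribed_plan TEXT NOT NULL,  -- JSON blob of decision variables
    executed_plan   TEXT            -- JSON blob of the approved/executed plan
)
""")
conn.execute(
    "INSERT INTO optimizer_runs "
    "(scenario_id, run_timestamp, input_version, objective_value, prescribed_plan) "
    "VALUES (?, ?, ?, ?, ?)",
    ("2024-W27-routing", "2024-07-01T06:00:00Z", "inputs-v42",
     18250.0, json.dumps({"o1": "truck_A"})),
)
conn.commit()
conn.close()
```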

Then build a BI layer (Power BI, Looker, Tableau, Metabase; it doesn't matter which). What matters is visibility.

Adoption must be:

  • Computed automatically

  • Time-series tracked

  • Broken down by planner, region, and constraint class (see the query sketch below)
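Behind that dashboard sits a query along these lines. The pandas sketch below uses made-up column names and toy data; point the same logic at your own run log.

```python
import pandas as pd

# One row per decision, joined from the prescribed and executed logs.
decisions = pd.DataFrame({
    "run_date":   pd.to_datetime(["2024-07-01", "2024-07-01", "2024-07-08", "2024-07-08"]),
    "planner":    ["alice", "bob", "alice", "bob"],
    "region":     ["EU", "EU", "NA", "NA"],
    "prescribed": ["truck_A", "truck_A", "truck_B", "truck_C"],
    "executed":   ["truck_A", "truck_B", "truck_B", "truck_C"],
})
decisions["adopted"] = decisions["prescribed"] == decisions["executed"]

# Weekly adoption rate, broken down by planner and region.
adoption = (
    decisions
    .groupby([pd.Grouper(key="run_date", freq="W"), "planner", "region"])["adopted"]
    .mean()
    .rename("adoption_rate")
    .reset_index()
)
print(adoption)
```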

Why Historical Logs Matter

These logs are not just for reporting.

If planners systematically modify routes, sequences, or allocations:

  • That pattern is signal.

  • Your model is missing structure.

You can:

  • Detect recurring override patterns (sketched after this list)

  • Adjust constraint weights

  • Add missing operational constraints

  • Recalibrate penalties
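A simple way to start, sticking with the same toy log format and assumed column names: count which prescribed decisions get overridden most often, and in which direction.

```python
import pandas as pd

decisions = pd.DataFrame({
    "decision_key":     ["o1", "o2", "o3", "o4", "o5"],
    "constraint_class": ["capacity", "time_window", "time_window", "capacity", "time_window"],
    "prescribed":       ["truck_A", "08:00", "08:00", "truck_B", "09:00"],
    "executed":         ["truck_A", "10:00", "10:00", "truck_B", "09:00"],
})

overrides = decisions[decisions["prescribed"] != decisions["executed"]]

# Which constraint classes get overridden, and from what to what?
pattern_counts = (
    overrides
    .groupby(["constraint_class", "prescribed", "executed"])
    .size()
    .rename("override_count")
    .sort_values(ascending=False)
)
print(pattern_counts)
# A cluster of 08:00 -> 10:00 moves on time windows is a modeling hint,
# not evidence that planners are wrong.
```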

The goal is not to force adoption.

The goal is to evolve the optimizer so that:

The plan it prescribes is closer to what experienced operators would actually run.

Higher adoption is often the result of better modeling, not better persuasion.

Final Notes from the Field

If you don't store prescribed vs. executed decisions over time, you cannot improve adoption systematically.

No history → no diagnostics.
No diagnostics → no refinement.
No refinement → no trust.

Optimality gap tells you how well you solved the model.

Adoption tells you whether the model deserves to exist in production.

Track it like revenue.

In case you missed it:

→ OR Field Notes #1: Why OR Models Break Without Engineering Discipline

Have a system, failure mode, or real-world OR problem you'd like me to cover? Reply and let me know.
