Welcome back to OR-Path!
This series is about applying OR inside real systems and organizations: software, data pipelines, users, constraints, and operational reality. Not models in isolation. Not whiteboard formulations. Systems that run every day and fail in very specific, repeatable ways.
One failure I keep seeing is teams treating the optimizer as the system. They build a solid model or heuristic, plug it into an application, and assume the hard part is done. It isn't. Models don't fail loudly. Systems do. And most OR systems break not because the math is wrong, but because the engineering discipline around the model is missing.
So let's get straight to it.
1. Your model is an engine, not the product
An optimizer is just one component. It lives inside a system with APIs, UIs, databases, schedulers, and users doing unpredictable things.
If you don't explicitly design how the application communicates with the optimizer (protocols, schemas, contracts), you are already accepting failure as a feature.
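One way to make that contract explicit is to write it down in code that both sides validate against. Here is a minimal sketch in Python, assuming a small scheduling-style payload; the type names and fields are illustrative, not a prescribed schema.

```python
from dataclasses import dataclass, field

# Illustrative contract between the application and the optimizer.
# Names and fields are assumptions for this sketch, not a standard.

@dataclass(frozen=True)
class Task:
    task_id: str
    duration_min: int      # expected to be > 0
    resource: str

@dataclass(frozen=True)
class OptimizationRequest:
    horizon_min: int       # planning horizon, in minutes
    tasks: list[Task]

@dataclass(frozen=True)
class OptimizationResult:
    status: str                          # e.g. "optimal", "feasible", "infeasible", "error"
    assignments: dict[str, int]          # task_id -> start time in minutes
    warnings: list[str] = field(default_factory=list)  # surfaced to the user, never hidden
```

The exact fields matter less than the fact that both sides validate against the same schema, and that warnings are a first-class part of the response rather than something buried in logs.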
2. General engineering checks are necessary, but not sufficient
Yes, you still need the basics:
UI that prevents missing mandatory fields
Strong typing and format validation
Contract validation between services
But optimization systems need additional guardrails.
3. Optimization-specific validations you can't skip
In practice, I've learned to enforce at least these:
Data sanity checks before the model sees anything
Validate ranges, domains, cardinalities, and structural assumptions. Don't rely on the solver to "figure it out".
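As a rough sketch of what "before the model sees anything" can look like, here is a pre-model check written against the illustrative request type from section 1; the specific rules are assumptions about that payload, not universal ones.

```python
def validate_request(req: OptimizationRequest) -> list[str]:
    """Return human-readable problems; an empty list means the input passed the sanity checks."""
    problems: list[str] = []
    if req.horizon_min <= 0:
        problems.append(f"Planning horizon must be positive, got {req.horizon_min}.")
    if not req.tasks:
        problems.append("No tasks provided; there is nothing to optimize.")
    seen: set[str] = set()
    for t in req.tasks:
        if t.task_id in seen:                     # cardinality / uniqueness
            problems.append(f"Duplicate task id '{t.task_id}'.")
        seen.add(t.task_id)
        if t.duration_min <= 0:                   # range / domain
            problems.append(f"Task '{t.task_id}' has a non-positive duration ({t.duration_min}).")
        elif t.duration_min > req.horizon_min:    # structural assumption
            problems.append(f"Task '{t.task_id}' cannot fit inside the planning horizon.")
    return problems
```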
Constraint-level unit tests
Given a solution, automatically verify that critical constraints are actually respected. Do this outside the solver.
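A hedged sketch of what that can look like, again using the illustrative scheduling payload: re-check a critical "no overlap on the same resource" rule with plain Python, independently of whatever the solver reports.

```python
def check_no_resource_overlap(req: OptimizationRequest,
                              assignments: dict[str, int]) -> list[str]:
    """Re-verify a critical constraint on a finished solution, outside the solver."""
    durations = {t.task_id: t.duration_min for t in req.tasks}
    resources = {t.task_id: t.resource for t in req.tasks}

    # Group (start, end, task_id) intervals per resource.
    by_resource: dict[str, list[tuple[int, int, str]]] = {}
    for task_id, start in assignments.items():
        by_resource.setdefault(resources[task_id], []).append(
            (start, start + durations[task_id], task_id))

    violations = []
    for resource, intervals in by_resource.items():
        intervals.sort()
        for (s1, e1, a), (s2, _e2, b) in zip(intervals, intervals[1:]):
            if s2 < e1:  # next task starts before the previous one ends
                violations.append(f"Tasks '{a}' and '{b}' overlap on resource '{resource}'.")
    return violations

# In a unit test:  assert not check_no_resource_overlap(req, result.assignments)
```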
Remove corrupted data aggressively
If a record is wrong and you know it can break feasibility or stability, remove it from the optimization input. Do not "try anyway".
Use slack variables intentionally
When infeasibilities keep appearing on constraints that are business preferences (not physics), soften them and track violations explicitly.
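For illustration, a soft capacity constraint might look like the PuLP sketch below. PuLP, the toy data, and the penalty weight are my choices here, not a prescription: the limit can be exceeded, but only through a slack variable that is penalized in the objective and reported afterwards.

```python
import pulp

# Hypothetical example: assign 3 jobs to one shift with a "preferred" capacity
# of 10 hours. The capacity is a business preference, not physics, so it is
# softened with a slack variable instead of being made hard.
loads = {"job_a": 4, "job_b": 5, "job_c": 6}
preferred_capacity = 10

prob = pulp.LpProblem("soft_capacity_demo", pulp.LpMaximize)
assign = {j: pulp.LpVariable(f"assign_{j}", cat="Binary") for j in loads}
overload = pulp.LpVariable("overload_hours", lowBound=0)  # slack variable

# Maximize assigned work, but penalize every hour above the preferred capacity
# (the weight of 3 is an arbitrary illustration of a violation cost).
prob += pulp.lpSum(loads[j] * assign[j] for j in loads) - 3 * overload
prob += pulp.lpSum(loads[j] * assign[j] for j in loads) <= preferred_capacity + overload

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print(pulp.LpStatus[prob.status])
print("Overload to report to the user:", overload.value())
```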
4. Never hide data problems from the user
This is non-negotiable.
If data is removed or ignored:
Exclude it from the model
Surface a clear warning in the output
List what was ignored and why, in user-friendly language
Silent correction destroys trust and makes debugging impossible.
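In practice this means the filtering step returns both the cleaned input and the messages the user will see. A small sketch, reusing the illustrative types from above:

```python
def drop_corrupted_tasks(req: OptimizationRequest) -> tuple[OptimizationRequest, list[str]]:
    """Remove records that would break the model, and explain each removal in plain language."""
    kept, warnings = [], []
    for t in req.tasks:
        if t.duration_min <= 0 or t.duration_min > req.horizon_min:
            warnings.append(
                f"Task '{t.task_id}' was excluded: its duration ({t.duration_min} min) "
                f"is not valid for a {req.horizon_min}-minute horizon.")
        else:
            kept.append(t)
    return OptimizationRequest(horizon_min=req.horizon_min, tasks=kept), warnings

# The warnings travel with the answer, e.g.:
#   OptimizationResult(status="optimal", assignments=..., warnings=warnings)
```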
5. The real separator: regression tests for optimizers
What separates high-performing OR teams from beginners is regression testing.
Keep a curated set of representative problem instances. On every PR or deployment:
Re-run the optimizer on all of them
Compare feasibility, objective values, and key decisions
Detect unintended changes immediately
If you don't do this, you're shipping blind.
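One lightweight way to wire this into CI is a parametrized test over a folder of stored instances and their last approved results. The sketch below assumes pytest, a JSON file layout, and a hypothetical solve() entry point; adapt the shapes to your own system.

```python
import json
from pathlib import Path

import pytest

from my_or_package import solve  # hypothetical entry point of your optimizer

# Assumed layout: a curated instance set, each paired with a baseline file
# produced by a previously approved run.
INSTANCE_DIR = Path("tests/regression/instances")
BASELINE_DIR = Path("tests/regression/baselines")

@pytest.mark.parametrize("name", sorted(p.stem for p in INSTANCE_DIR.glob("*.json")))
def test_optimizer_matches_baseline(name):
    instance = json.loads((INSTANCE_DIR / f"{name}.json").read_text())
    baseline = json.loads((BASELINE_DIR / f"{name}.json").read_text())

    result = solve(instance)

    # 1. Feasibility must never regress silently.
    assert result["status"] == baseline["status"]
    # 2. The objective stays within a small tolerance of the approved value.
    assert result["objective"] == pytest.approx(baseline["objective"], rel=1e-4)
    # 3. Key decisions (not every variable) are compared explicitly.
    assert result["assigned_task_ids"] == baseline["assigned_task_ids"]
```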
Final notes from the field
If your optimizer only works with "clean" data and "perfect" inputs, it's not production-ready. Engineering discipline is what turns a good model into a reliable system.
In case you missed it:
→ Career Roadmap #3: Optimization is (not always) what you need
→ Career Roadmap #2: Getting Your First Opportunity in OR
Have a system, failure mode, or real-world OR problem you'd like me to cover?


