Welcome back to OR-Path!

This series is about applied Operations Research as it actually happens inside organizations: shaped by incentives, processes, power structures, and business constraints. Not textbook OR. Not idealized decision-makers. Real companies trying to make money.

Today’s topic addresses a common trap for OR-minded professionals: the belief that every meaningful decision problem deserves an optimizer. It’s an understandable instinct, and often the wrong one.

So let’s get straight to it.

Companies Don’t Pay for Models. They Pay for Results.

Companies exist to generate profit. To do that, they define objectives, translate them into measurable targets, and create areas and processes responsible for hitting those targets.

Every day, decisions are made at different levels β€” operational, tactical, sometimes strategic. Some of these decisions are sophisticated. Many are not. But all of them are judged the same way: by results.

Teams that consistently deliver are rewarded. Salaries go up. Bonuses and profit-sharing payouts appear. Promotions follow. The internal logic is simple: value creation is what matters.

Where OR professionals often stumble is assuming that value creation is proportional to technical sophistication. It isn’t.

The Optimizer Fallacy

As OR enthusiasts, we like optimizers. We’re trained to think in terms of objective functions, constraints, and optimality. There’s nothing wrong with that, until it blinds us.

The uncomfortable truth is that many decision problems do not require prescriptive optimization to generate significant value. In fact, forcing an optimizer into the wrong context can slow things down, increase resistance, and dilute impact.

I hate to say it, but sometimes a dashboard is enough.
Sometimes a well-designed pivot table is enough.
Sometimes a simple ranked list based on a reasonable score creates enormous value.

The question is not “Can I optimize this?”
The question is “What level of decision support actually changes behavior here?”
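To make the “ranked list based on a reasonable score” idea concrete, here is a minimal sketch. Everything in it is hypothetical (the work-order fields, the weights, the scoring rule): the point is that a transparent weighted score, tuned with the people who own the decision, can already change behavior without any optimizer.

```python
# Hypothetical example: rank maintenance work orders by a simple
# weighted score instead of building a full optimization model.

orders = [
    {"id": "WO-101", "urgency": 3, "downtime_cost": 1200},
    {"id": "WO-102", "urgency": 1, "downtime_cost": 5000},
    {"id": "WO-103", "urgency": 2, "downtime_cost": 800},
]

def score(order, w_urgency=1000, w_cost=1.0):
    # Weighted sum of two signals; the weights are assumptions to be
    # negotiated with the decision-makers, not derived from theory.
    return w_urgency * order["urgency"] + w_cost * order["downtime_cost"]

# Highest score first: this ordered list IS the decision support.
ranked = sorted(orders, key=score, reverse=True)
for o in ranked:
    print(o["id"], score(o))
```

A tool this simple is easy to explain, easy to audit, and easy to change, which is exactly why it often gets adopted when a black-box optimizer would not.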

Effort vs. Return Is a Business Question, Not a Technical One

One of the most important skills for an OR professional is judgment: knowing when a prescriptive approach makes sense, and when it doesn’t.

Before writing a single line of optimization code, you need to understand the business context. Talk to the people who make the decisions. Talk to the people who execute them. Observe how things actually work.

In poorly structured processes or low-maturity environments, a quick win can be transformative. A simple tool that clarifies priorities or removes ambiguity may “save lives” operationally. Jumping straight to a complex optimizer in those settings is often wasted effort.

Always think in terms of effort versus return. Complexity has a cost β€” technical, organizational, and political.

Start Simple. Learn the Process. Build Credibility.

There’s a disciplined way to approach this.

Start by understanding. Do interviews. Shadow operators. Go to the gemba. Deliver something simple that removes a real pain point.

This does a few important things. It deepens your understanding of the process in ways no documentation ever will. It helps you identify where optimization could later make sense, often in places no one initially sees. And it builds trust with the business, because you solved a real problem instead of showcasing a clever model.

From a career perspective, this matters. Concrete success cases compound. They lead to recognition inside the company, and they travel well on your CV if you move on.

Optimization is powerful. But judgment comes first.

So the real question is: are you ready to decide when Operations Research should, and should not, be used in your work?

In case you missed it:

→ Career Roadmap #2: Getting Your First Opportunity in OR
→ Career Roadmap #1: Practical insights to navigate and grow your OR career

Have a specific career topic or practical advice you'd like me to cover?
