Leading Complex Systems
Systems Thinking Applied
Welcome to Polymathic Being, a place to explore counterintuitive insights across multiple domains. These essays explore common topics from different perspectives and disciplines to uncover unique insights and solutions.
Today's topic wrangles with the idea that Systems Thinking is not enough. We’ll review the background, then critique the argument, demonstrating that Systems Thinking was never enough, and yet it’s always too much. In doing so, we’ll expose some common misconceptions and provide a few tactical steps that anyone can apply.
Recently, my buddy forwarded an article to me titled Why “Systems Thinking” Isn’t Enough Anymore, in which the author1 describes the current state of the world as a Complex Adaptive System. On that point he’s spot on, and my regular readers will recognize how we work to break these problems apart.
However, what caught my attention was how he described the failure of Systems Thinking. His angst appears focused on an article in the Harvard Business Review that merely introduced the topic. His main critique was twofold, with a delicious irony between them. The first was that Systems Thinking is no longer enough to deal with the levels of complexity we face today. The second was that Systems Thinking doesn’t provide enough ‘how tos’ for execution.
This is where he highlights a lack of understanding of actual Systems Thinking. He starts by critiquing an introduction, which means he’s missing the larger view of the problem space of… Systems Thinking… He felt it was an oversimplification of a complex space, which I, too, have criticized in the past. However, if distilling a concept down to provide an entry point is a failure, I’m a leading offender, having penned “Yes, And… So!” to help people begin to see beyond the curtain of myopia.
His next step was to narrow the scope of Systems Thinking by separating it from the complexity literature, which, incidentally, is an element of Systems Thinking. He then proceeds to describe everything as hyper complex, bordering on maniacally complex, and unknowably complex. This doesn’t hold up: Systems Thinkers grapple with complexity, we appreciate complexity, and yet we don’t lose ourselves in the noise of complexity. This explosion of complexity is actually a common failure mode, and Systems Thinking works to bracket complexity into actionable elements.
The irony is that, after narrowing the definition and drowning in complexity, the author then proposes an execution framework that sounds an awful lot like Systems Thinking. He recommends a series of ‘how tos’ that already exist in the literature and even riffs off of John Boyd’s OODA Loop without acknowledging that his “new” how-to has been around for over half a century.2
He fails Systems Thinking 101 by exploding the complexity, ignoring the larger environment, and then reinventing the wheel instead of looking for the commonalities, rather than the differences, in execution. What emerges is a proposed solution that merely renames Systems Thinking.
The article is right in that we spend so much time trying to get people to even begin thinking in systems that we often fail to operationalize the system we want. He’s also right that we need to adopt a complexity thinking approach. However, Systems Thinking hasn’t failed; it’s just extremely difficult to actually get people to do it. Therefore, the goal today is to teach you how. In fact, I’m going to borrow many of the concepts he proposes and show them as the ‘how tos’ of classic Systems Thinking.
How to Lead Complex Systems
The author provides a valuable distinction between risk and uncertainty. Risk applies where probabilities are measurable, while uncertainty describes domains where the distributions themselves are unknown or unknowable. Treating uncertainty like risk can lead to overconfidence in predictions. In the face of uncertainty, models degrade faster than plans account for, nudges cascade in unexpected directions, and optimization for efficiency often erodes resilience. Simply put, it’s too fragile.
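To make that overconfidence concrete, here is a minimal sketch (my illustration, not the original author’s) in which a planner fits a normal model to a short history and quotes a risk number, while the true process is heavier-tailed than the history reveals. All distributions and numbers are assumptions chosen purely for illustration.

```python
# A toy contrast between "risk" (a known distribution) and "uncertainty"
# (the distribution itself is not what the history suggests).
import numpy as np

rng = np.random.default_rng(42)

# "History" the planner can see: looks roughly normal.
history = rng.normal(loc=0.05, scale=0.10, size=250)

# Risk view: fit a normal distribution and quote the chance of a bad year.
mu, sigma = history.mean(), history.std()
threshold = -0.25
p_bad_assumed = float(np.mean(rng.normal(mu, sigma, 100_000) < threshold))

# Uncertainty view: the true generator has fat tails the history barely showed.
true_future = rng.standard_t(df=2, size=100_000) * 0.10 + 0.05
p_bad_actual = float(np.mean(true_future < threshold))

print(f"Assumed P(loss worse than {threshold:.0%}): {p_bad_assumed:.3%}")
print(f"Actual  P(loss worse than {threshold:.0%}): {p_bad_actual:.3%}")
```

The point isn’t the specific figures; it’s that the quoted probability is an artifact of the assumed distribution, which is exactly the trap of treating uncertainty like risk.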
Instead of measuring just risk, leaders should reward learning velocity, reconfiguration speed, and shock absorption. Simply put, leaders need to embrace complexity, let it flow, and reward those who can adapt, not those who follow strict guidelines and processes. This core philosophy is a blend of the certainty of knowing and the uncertainty of the unknowns.
The twist here is that this challenge doesn’t represent a failure of Systems Thinking but the opportunity space! The ability to embrace complexity is one thing, but operationalizing a solution within that complexity is what we need to do. Here’s how:
First, shift from metaphors to models. Use system dynamics, agent-based simulation, and robust scenario models to stress-test strategies before committing. The goal is to discover where your assumptions break and to plan around those breakpoints.
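As a toy example of what “models over metaphors” can look like, here is a minimal stock-and-flow sketch (my construction; the essay names system dynamics and agent-based simulation but provides no code). It sweeps a single demand-growth assumption across a fixed-capacity plan to find the point where the plan breaks, rather than forecasting one future.

```python
# A tiny stock-and-flow model: demand compounds weekly, capacity is fixed,
# and unmet demand accumulates as backlog. We sweep the growth assumption.
def simulate(growth_rate, weeks=52, capacity=100.0, backlog=0.0, demand=80.0):
    """Return the final backlog after `weeks` of compounding demand growth."""
    for _ in range(weeks):
        demand *= 1 + growth_rate
        backlog = max(0.0, backlog + demand - capacity)
    return backlog

# Stress-test the assumption instead of betting on a single forecast.
for growth in (0.00, 0.005, 0.01, 0.02, 0.03):
    final = simulate(growth)
    status = "holds" if final < 50 else "breaks"
    print(f"weekly growth {growth:.1%}: backlog {final:8.1f} -> plan {status}")
```

The numbers are invented; the value is in locating the breakpoint between growth assumptions so the plan can be designed around it.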
Second, adopt plural methods. Pair quantitative models with qualitative inquiry. It’s OK if not everything has firm numbers. Too often, the numbers are fudged anyway. Look for behaviors, incentives, and culture instead.
Third, manage portfolios rather than big bets. Multiple small, safe-to-fail probes reveal what the environment will tolerate. Then you scale what works and cull what doesn’t. The key here is ‘safe to fail’ because they should be designed to disprove a hypothesis. Big bets often try to prove a hypothesis. With this, you need to ensure that prototypes and proofs of concept can scale.
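Here is a rough simulation, with made-up payoff numbers, of why a portfolio of small probes followed by scaling can beat a single big bet; the probes are cheap precisely because they are allowed to fail and disprove an idea before real money follows it.

```python
# Compare one up-front big bet against probing every idea cheaply,
# then scaling the remaining budget into the best-performing probe.
import random

random.seed(7)
BUDGET = 10.0   # total spend under either strategy
N_IDEAS = 10    # candidate ideas, each with an unknown payoff per unit of spend

def draw_ideas():
    # Most ideas are duds, a few are winners (assumed, illustrative distribution).
    return [random.choice([0.0, 0.0, 0.0, 0.5, 3.0]) for _ in range(N_IDEAS)]

def big_bet(ideas):
    # Commit the whole budget up front to one idea, chosen without learning.
    return BUDGET * random.choice(ideas)

def probe_then_scale(ideas, probe_cost=0.5):
    # Spend a little on every idea to learn its payoff (probes exist to
    # disprove the idea cheaply), then scale what works.
    remaining = BUDGET - probe_cost * len(ideas)
    return remaining * max(ideas)

trials = 10_000
print("big bet, average return:   ",
      sum(big_bet(draw_ideas()) for _ in range(trials)) / trials)
print("probe then scale, average: ",
      sum(probe_then_scale(draw_ideas()) for _ in range(trials)) / trials)
```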
Fourth, build tight feedback loops so signals trigger action, not slide decks. Decide in advance what you will watch, how often you will adapt, and who has the authority to pivot. (This is a core function of OODA)3
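As a minimal skeleton (my own sketch, not Boyd’s formal model or the author’s framework), the pre-commitments in that paragraph can be made explicit in code: the signals to watch, the interpretation thresholds, the decision authority, and the cadence are all wired in before the loop ever runs.

```python
# A feedback loop whose observe/orient/decide/act steps and review cadence
# are decided in advance rather than improvised when trouble arrives.
from dataclasses import dataclass
from typing import Callable

@dataclass
class FeedbackLoop:
    observe: Callable[[], dict]      # what we agreed to watch
    orient: Callable[[dict], str]    # how signals are judged against thresholds
    decide: Callable[[str], str]     # who or what has authority to pivot
    act: Callable[[str], None]       # the action actually taken
    cadence_days: int = 14           # how often the loop runs, fixed up front

    def run_once(self) -> None:
        signals = self.observe()
        assessment = self.orient(signals)
        decision = self.decide(assessment)
        self.act(decision)

# Example wiring with toy signals and thresholds.
loop = FeedbackLoop(
    observe=lambda: {"churn": 0.08, "lead_time_days": 21},
    orient=lambda s: "degraded" if s["churn"] > 0.05 else "nominal",
    decide=lambda a: "pivot" if a == "degraded" else "stay",
    act=lambda d: print(f"decision: {d}"),
)
loop.run_once()   # -> decision: pivot
```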
Finally, align governance to the system’s seams. Decision rights, incentives, and interface controls need to live where handoffs and externalities occur. You must manage your boundaries because if you push that expectation on others, it’ll come back to bite you.4
Strategy and Operations change character under this discipline. Embracing complexity and establishing discipline allow resilience and adaptability to be designed in. It becomes less about predicting a “one right future” and more about maintaining optionality across plausible futures as you explore and learn. However, this still just feels complicated, so let me break it down:
Stop relying on hyper-structured processes that don’t allow flexibility.
Execute with discipline while intentionally looking for problems.
Do many things well, and don’t niche down too fast. Embrace the system.
Step back and ensure you are appreciating all of the complexity. Don’t get lost in it, and don’t pretend it doesn’t exist. Embrace and own it.
Summary
If all this sounds familiar, it’s because we’ve explored this in much greater detail in Systems Thinking: Skills and Insights to Resolve Wicked Problems, where we break down the philosophy and provide a key framework to continue decomposing. However, to swing back to the challenge that “Systems Thinking Isn’t Enough Anymore,” I’ll only add this counterintuitive insight: It was never enough, and yet it’s always too much.
It’s too much for many people to handle, and so they either tilt away from the complexity and into the comfort of models or explode that complexity to excuse poor execution. Yet it’s the balance of the right models and disciplined execution that allows us to manage complexity.
This is why we need to approach these challenges with insatiable Curiosity, the humility to admit we don’t know everything, and intentional reframing to ensure we are looking at the problems properly. Systems Thinking was never enough, and yet it’s always too much, which means it’s the right thing to apply.
I’ve hesitated to write this because the more I studied the essay in question, the more I realized it was most likely pasted from an LLM. For those of us who work with AI regularly, it just feels like AI. That, in part, explains the natural contradictions. As such, is it even worth discussing?
The reason I decided to continue publishing is that, in a world where any hack can produce AI-generated content that distracts and confuses, it’s helpful to continue producing human-generated content that fosters clarity.
Did you enjoy this post? If so, please hit the ❤️ button above or below. This will help more people discover Substacks like this one, which is great. Also, please share here or in your network to help us grow.
Polymathic Being is a reader-supported publication. Becoming a paid member keeps these essays open for everyone. Hurry and grab 20% off an annual subscription. That’s $24 a year or $2 a month. It’s just 50¢ an essay and makes a big difference.
Further Reading from Authors I Appreciate
I highly recommend the following Substacks for their great content and complementary explorations of topics that Polymathic Being shares.
Goatfury Writes All-around great daily essays
Never Stop Learning Insightful Life Tips and Tricks
Cyborgs Writing Highly useful insights into using AI for writing
Educating AI Integrating AI into education
Socratic State of Mind Powerful insights into the philosophy of agency
I’m just going to say that with a name like Super Cool & Hyper Critical, I’d expect a bit more, especially when the entire article flags on every AI detector as LLM-generated. Also, when I critiqued his original article and pointed out the AI issue, he promptly blocked me, so…
Albeit constantly misused and misapplied because people don’t understand systems thinking, as is constantly pointed out.
This is orientation and reorientation.
I’m currently working on a project where one of the tech leads constantly says, “Not my circus, not my monkeys.” The problem is, when you push those monkeys to the seams, they become chaos monkeys. In Systems Thinking, this is your circus and those are your monkeys.