Strengthening uniquely human capacities in accelerating systems

I’ve spent much of my professional life working across what initially appeared to be very different domains: executive coaching, team coaching, organization development, polarity thinking, leadership development, democracy work, systems thinking, and more recently the implications of artificial intelligence.

For years I treated many of these as adjacent disciplines. Coaching belonged over here. Organization development over there. Democracy work somewhere else. Taoist philosophy in another category entirely. Artificial intelligence seemed like the newest thing arriving to disrupt all the older things.

Over time, something became increasingly hard for me to ignore.

I no longer think these are separate conversations.

I think they are all responding to the same underlying developmental challenge:

How do human beings remain sufficiently human inside accelerating complexity?

That question increasingly feels central to nearly everything.

And the more I sit with it, the more convinced I become that the AI era is a Both/And challenge rooted in Technology AND Humanity.

Machines are rapidly increasing their ability to generate information, synthesize knowledge, simulate conversation, accelerate productivity, influence behavior, and shape attention at scale. What remains far less clear is whether human beings are developing the emotional, relational, ethical, developmental, and systems capacities necessary to steward that power wisely.

Confession: That gap keeps me up at night.

One of the things I’ve noticed repeatedly in deeper work with leaders, teams, and organizations is that people often arrive believing they are there to discuss strategy, culture, communication, trust, performance, conflict, AI, or change.

Eventually many discover they are really confronting relationship questions:

Their relationship to certainty.
Their relationship to fear.
Their relationship to power.
Their relationship to complexity.
Their relationship to people who see the world differently.
Their relationship to not knowing.

That realization shifts the focus and nature of the conversations.

And it may explain why the work I’ve spent decades doing increasingly feels less like a collection of frameworks and more like one integrated developmental architecture.

Polarity Thinking helped me recognize something I wish I had understood much earlier in life: many of the most important challenges we face are not problems that disappear once solved. They are ongoing tensions requiring wiser participation over time.

Barry Johnson described these as polarities. Increasingly, I experience modern life itself as multarities: interdependencies of more than two poles synergistically contributing toward a greater purpose larger than the sum of the parts.

The AI era has not reduced the importance of human development. It has amplified it.

And when I step back and look across this collection of tensions, eight keep surfacing that feel increasingly foundational to the human skills required for the future (See end for Summary and Greater Purpose Statements):

Either/Or Thinking AND Both/And Thinking.
Shadow AND Ownership.
Certainty AND Complexity.
Advocacy AND Inquiry.
Continuity AND Transformation.
Freedom AND Authority.
Truth AND Trust.
Human Judgment AND Machine Capability.

Granted, that’s a lot.

And that’s the reality of it.

At this point in life, I sometimes suspect my own adult development path largely consisted of circling the same barn repeatedly with slightly more humility each decade. At minimum, the Barn-circling PhD from the University of Hard Knocks came with a concentration in OCD-fueled overcertainty.

And still, one of the earliest developmental shifts may simply involve recognizing that reality is more interdependent than our certainty usually allows.

That realization naturally turns attention inward.

I encountered Robert Johnson’s work on shadow recently while reading The Infinity Machine, the story of the early days of DeepMind and the people wrestling with the implications of artificial intelligence long before most of the world was paying serious attention. I was struck that some of the earliest AI conversations were already grappling with shadow, projection, and human consciousness itself. That caught my attention because Carl Jung had already been deeply important to me for most of my adult life.

From experience, I know shadow is not merely psychological theory.

Shadow becomes organizational behavior. Political behavior. Leadership behavior. Cultural behavior. Algorithmic behavior.

Jung described the shadow as the hidden, repressed, or disowned parts of ourselves that the conscious ego struggles to acknowledge. Rage. Jealousy. Narcissism. Shame. Selfishness. Cruelty. Fear. Though also creativity, power, emotional sensitivity, assertiveness, and capacities we learned early in life were somehow unsafe to express openly.

Most of us build socially acceptable personas to function in the world. Meanwhile the rejected material does not disappear. It accumulates underneath awareness.

And what remains unconscious does not remain inactive.

It leaks.
It erupts.
It projects.
It distorts perception.
It unconsciously reorganizes behavior.

I wish there were broader awareness of projection and what it looks and sounds like. The qualities we react to most intensely in others are often connected to parts of ourselves we have never fully integrated consciously. And the algorithms have figured out how to exploit that blind spot to “promote user engagement” online.

Outrage scales beautifully online.

Certainty scales beautifully.
Humiliation scales beautifully.
Dehumanization scales beautifully.
Projection scales seemingly infinitely.

Developing Ownership moves more slowly. But it tends to endure.

Meanwhile, AI in social media seems to amplify shadows faster than human maturity can develop the capacity to metabolize them, especially in the absence of that awareness.

Jung was clear that the challenge is not to focus on eliminating our dark sides. The challenge is to make the darkness conscious enough that it no longer controls us unconsciously. As Jung famously wrote,

“One does not become enlightened by imagining figures of light, but by making the darkness conscious.”

Robert Johnson emphasizes that shadow work is about becoming whole, not perfect. I think that distinction matters enormously, and it is why I believe this has to be a starting Human Prompt for us. If we can’t work with ourselves, we eventually destabilize every larger system in which we participate: one-to-one relationships, teams and families, organizations, political systems, faith communities, neighborhood communities, democracies, and technology communities.

Which may explain why one of the most important Human Prompts of the future could simply be:

What part of myself am I projecting outward instead of integrating inward?

It may begin as a therapeutic question. It also becomes a leadership question, one that eventually gets unpacked in coaching engagements as team, system, democracy, and organizational challenges and opportunities emerge.

Adult development theory deepened this realization further for me. What many people experience as political conflict, organizational dysfunction, or social fragmentation is often also a developmental collision between different ways human beings construct meaning itself.

Some people experience the world primarily through certainty, structure, identity, loyalty, and belonging. Others increasingly experience the world through complexity, systems-awareness, contextual thinking, and paradox.

Both contain strengths. Both contain limitations.

The challenge, especially in a social media field fueled by outrage, is that it becomes increasingly difficult not to sort human beings into “good” and “bad” categories. Lately, I find myself doing an increasingly poor job of resisting that sorting despite “knowing better.”

The human prompt question becomes how to develop enough maturity to remain in relationship with complexity without drifting toward cynicism, absolutism, domination, fragmentation, or exhaustion. When I’m in conscious “coaching mode,” I do better than when I’m not.

That’s developmental capacity when I can access it.

Which brings me to the skill of coaching, which for years I thought was primarily a profession. Over time and more recently, I see it differently. Increasingly, coaching feels like one of the most important human capacities of the AI era because of the underlying developmental capabilities it strengthens: listening, discernment, curiosity, emotional regulation, presence, reflective thinking, relational intelligence, systems-awareness, and the ability to remain with uncertainty long enough for deeper understanding to emerge.

AI systems are increasingly capable of generating answers, though the wisdom connected to those answers may or may not emerge alongside them.

True wisdom, in my experience, emerges most often through relationship, reflection, tension, accountability, humility, and life-and-the-living-of-it over time.

One of the central polarities that gets addressed in coaching conversations is Advocacy AND Inquiry. Most people unconsciously overidentify with one side. Some push their views aggressively while listening poorly. Others avoid clarity, influence, and conflict altogether. Neither works especially well for very long.

Human development increasingly requires the ability to advocate clearly while remaining genuinely open to what may still be missing from our understanding.

That skill scales upward powerfully across relationships, teams, organizations, communities, and even between nations.

It’s democracy in action. Or not.

All increasingly depend upon human beings developing the capacity to think together rather than merely react together.

Many modern systems optimize speed, reaction, amplification, certainty, visibility, outrage, and tribal reinforcement. Much less energy gets invested in helping people metabolize complexity together.

That concerns me because organizational life increasingly mirrors societal life.

Teams fracture.
Trust erodes.
Dialogue hardens.
People perform certainty instead of exploring reality together.

And systems eventually reward what they repeatedly reinforce regardless of what they publicly claim to value.

Which brings us directly into one of the central organizational polarities of our era:

Continuity AND Transformation.

You can feel this tension almost everywhere now. Every institution. Every educational system. Every religious system. Every business. Every democracy. Every family.

Too much Continuity and systems calcify. Too much Transformation and systems fragment.

The challenge is not choosing one. The challenge is developing the discernment required to leverage both over time.

This becomes even more consequential when scaled into democracy itself.

Several years ago, together with Dr. Bill Benet and Barry Johnson, I helped co-found the Polarities of Democracy Institute because I increasingly feared we were losing the developmental capacities democratic life requires in order to function well.

Democracy is not self-sustaining.

It depends upon human beings developing enough maturity to remain in relationship with one another and with reality itself while navigating tensions that never disappear: Freedom AND Authority. Justice AND Due Process. Participation AND Representation. Diversity AND Equality. Human Rights AND Communal Obligations.

Democracy deteriorates when one side attempts eliminating the legitimacy of the other.

And underneath many of those tensions sits another polarity that increasingly feels foundational to modern civilization itself:

Truth AND Trust.

Truth without Trust becomes weaponized, dismissed, performative, or ignored.

Trust without Truth becomes manipulative, tribal, conspiratorial, and detached from reality.

Shared reality requires both.

Which is why Human Judgment AND Machine Capability may become one of the defining polarities of the next century.

AI will continue becoming more capable. The deeper question is whether human beings become more capable too: more capable of discernment, stewardship, emotional maturity, systems thinking, ethical reflection, paradox tolerance, forgiveness, accountability, dialogue, and wiser participation in complexity.

The future may increasingly reward those capacities because machines do not automatically cultivate them for us.

This is where the Inner Development Goals framework becomes deeply important to me. The Sustainable Development Goals focus primarily on external systems outcomes. The IDGs recognize something equally important: systems rarely evolve sustainably beyond the developmental maturity of the humans leading them.

That insight matters enormously.

Because leadership development is no longer merely organizational strategy.

It is becoming civilizational infrastructure.

Which brings me back to Human Prompts.

Everybody is talking about prompt engineering for machines. Far fewer people are asking what prompts human beings may increasingly need in order to remain deeply human themselves.

Maybe the future increasingly requires prompts like these:

What part of myself am I refusing to see?
Where have I become overidentified with being right?
What tension am I trying to eliminate that instead requires stewardship?
Am I trying to control the conversation or understand what is emerging?
What is this system rewarding that it claims to oppose?
Can I remain in relationship with people who see reality differently without surrendering discernment?
What capacities become more important as machines become more capable?
What does mature strength look like here?
What would help this system become more human rather than merely more efficient?
What am I modeling for the people around me?
What kind of ancestor am I becoming?

I no longer view coaching, polarity thinking, democracy work, Taoist philosophy, organization development, AI leadership, IDGs, and the broader arc of my work as separate endeavors competing for attention. Now they feel like different expressions, different skills employed (more or less well, I should add), of the same underlying commitment:

helping human beings make wiser decisions over time inside increasingly complex systems.

That work now feels less optional to me than it once did.

And maybe that is the simplest way I can say what this Wiser Decisions series is ultimately trying to explore:

The future will almost certainly become more technologically powerful.

Whether it remains sufficiently human may depend upon the capacities we choose to cultivate now.

SUMMARY:

Greater Purpose Statement (GPS) for the Full Multarity:
Cultivate wiser human participation within increasingly complex technological, organizational, societal, and democratic systems over time.

Either/Or Thinking AND Both/And Thinking
GPS: Enable effective problem-solving while sustaining awareness of interdependence and complexity.

Shadow AND Ownership
GPS: Increase human wholeness, accountability, and conscious participation in relationships and systems.

Certainty AND Complexity
GPS: Support meaning-making that remains grounded while adapting to ambiguity and changing realities.

Advocacy AND Inquiry
GPS: Strengthen dialogue, learning, and collaborative discernment across differences and tensions.

Continuity AND Transformation
GPS: Preserve coherence and stability while enabling adaptation, learning, and sustainable evolution.

Freedom AND Authority
GPS: Support democratic participation, societal stability, and responsible shared governance.

Truth AND Trust
GPS: Sustain shared reality and relational confidence necessary for healthy human systems.

Human Judgment AND Machine Capability
GPS: Leverage technological advancement while preserving ethical discernment and human wisdom.

Want to learn more about Polarity Thinking and explore options for self-paced learning and Credentialing?
CLICK HERE

Want to use an AI-trained “Chat w/Cliff” to support you in Step 1 Seeing?
CLICK HERE