(And not losing humanity in the process.)

Some backstory for the link and QR code above. This isn’t an “AI Cliff-coach,” even though the platform I used would happily frame it that way. I’m a PCC-certified ICF coach, and I’m increasingly careful about how we use the word “coach” in this space. This polarity-informed AI is not a coach. It is not a substitute for human presence, judgment, relationship, or accountability. It’s designed to support reflection and learning in a limited way, while keeping responsibility for decisions exactly where it belongs—with humans. That boundary matters to me, and it shaped how this was built.

Part of that clarity comes from experience. I’ve been on the coaching faculty at Mason’s Leadership Coaching for Organizational Well-Being program for nearly twenty cohorts. When COVID forced us online, I honestly wondered if a deeply human, relational coaching experience could survive the shift. It did. Something is always lost when the medium changes, but we didn’t lose what mattered most—presence, ethics, care, depth, and relationship. We adapted because we had to, and over time the unfamiliar became normal. That experience stayed with me when I began asking whether AI had a place in my work.

There was also a more practical reality underneath the question. Over the years, I’ve received a steady stream of polarity-focused questions from clients, colleagues, students, and friends. I value those connections. I care about being generous and responsive. And if I’m honest, that generosity has also eroded my structure and boundaries more than once. Some days, my calendar turns into a proper noun: Chaos. So this wasn’t about novelty or being “cutting edge.” It was about stewardship. Could AI help me be more responsible with my time and attention while still being available to people? If the answer was even possibly yes, then the real question became how to engage it without compromising what matters.

What quickly became clear is that this isn’t a problem to solve—it’s a tension to work with: Human Presence AND AI Augmentation. Not AI versus human. Not replacement versus relevance. Human presence carries meaning, ethics, relationship, accountability, and discernment. AI can support pattern recognition, reflection, synthesis, and learning. Used well, it can carry some of the cognitive load so there’s more capacity for the human work that actually matters. But the line is clear: meaning, values, and judgment are not delegable. A simple way I hold it is this—AI can carry the backpack; humans still choose the path.

This became especially important in the context of the work I do with leaders, teams, and organizations using a 5-Step process: Seeing, Mapping, Assessing, Learning, and Leveraging. Everything starts with Seeing. If we misname what we’re dealing with, everything that follows is off. Many of the challenges leaders face are not problems to solve, but ongoing tensions to work with over time. If we don’t see that clearly, we overcorrect, create predictable downsides, and end up solving one issue while creating the next.

That’s where this AI tool can be useful—specifically for Step 1: Seeing. It can help people slow down, reflect, and recognize patterns they’re inside of. It can help name tensions, surface assumptions, and make dynamics more visible. What it cannot do is replace judgment, decide what matters, or choose a path forward. That remains human work. Always.

Once you start looking at AI through that lens, the broader set of tensions becomes visible. Innovation AND Integrity. Possibility AND Protection. Efficiency AND Presence. Intelligence AND Wisdom. Power AND Stewardship. Structure AND Emergence. This is the real work—governance. Not whether AI exists, but how we engage it in ways that are responsible, relational, and grounded. Avoiding AI doesn’t protect the human center. Ungoverned use erodes it. So for me, the work became engaging it responsibly.

That also meant being clear about what this tool is and is not. It’s not here to replace coaching, simulate relationship, generate meaning, or hold authority. It’s here to support reflection, organize thinking, surface patterns, and strengthen learning. It’s a support for awareness—not a substitute for wisdom.

At one level, this isn’t really about AI. It’s about protecting the human center while engaging the technological edge—something every generation has to figure out with its own tools. This generation just happens to be dealing with something unusually powerful. So the real question isn’t whether we build intelligent systems. It’s whether we use them wisely.

For me, this is a stewardship experiment. A way to test whether I can engage something new without losing what matters. Not to be faster, louder, or more impressive, but—hopefully—a little wiser in how I use my time, attention, and relationships.

With all that said, give it a try HERE and let me know what you think.

P.S. Want to sample a quick Automation AND Augmentation Polarity Assessment for you, your team, or your organization? CLICK HERE

P.P.S. Environmental impact is a real concern and intentionally not unpacked here. It’s another layer of responsibility I’m actively paying attention to as both my individual use and our collective use unfold.