I think that's pretty accurate, and thanks for bundling me with the pragmatists. I may even veer into optimist territory depending on the context, making me either a pragmatic optimist or an optimistic pragmatist. The jury is still out!
It’s always good to see your practical insights each week!
I'm cautiously optimistic about AI's potential. I've used AI to play around with ideas but generally not to come up with ideas. For example, I'll prompt it with "Here's an idea I have; compile information relevant to this idea..." and go from there. Not for writing things, particularly on Substack where people pay me for my writing, but for looking into things.
For your concept of people who are freaking out about AI, I'd like to propose a term for them: Dead Internet Derangement Syndrome, or DIDS. These are people who are obsessed with pointing out how everything on the internet is AI. I've had people who have been told a meme is over a decade old still claim it was generated by AI. There's evidence that it's not, but people believe it is.
Great points! DIDS… that behavior isn't restricted to the internet; it extends to technology in general. To that type of mind, everything was better 20 years ago and always will be, because back then it was the internet that ruined everything!
That being said, there are dangers AI poses to human flourishing. The Toronto police just announced they are implementing AI triage for non-emergency calls. The police replacing humans? Most customer service systems now have some kind of AI built into them, and it's likely costing people jobs.
I have heard that a lot of high-level CEOs are debating who will have the first billion-dollar single-person company. As in, the CEO at the top and everything else in the company done by AI.
Humanity is not preparing for this type of world.
I also don’t think that world is tenable. For a long time, Amazon has made customer service impossible to find, filtered by bots, and most people just give up. That’s a feature, not a bug.
Yes, but I think AI is a different story than traditional bots. It can do a lot more and replace a lot more people pretty well. Just because previous attempts to replace humans have had problems doesn’t mean they won’t find a way in the future.
What a lot of the online environment did was make it easier to go through FAQs instead of calling customer service, not to mention email customer service options. That had an impact on the customer service industry and its jobs as a result.
No, it's certainly not limited to the internet. But there was already the concept of Dead Internet theory: the idea that most of what you interact with online is bots, and now AI. So it only seems natural to extend the idea to people who are obsessed with the notion that everything is AI-generated, as if the existence of the tool proves it is everywhere.
I know of one case where someone claimed that I had posted an AI-generated image. Their evidence was that "A human being wouldn't make this kind of mistake." The problem, of course, is that the style of art was created by a human, and humans do sometimes make intentional mistakes in their art as a signature to differentiate their work from other people's.
So the theory didn't hold up to basic scrutiny. That didn't stop them from believing I had shared an AI image, even though I admitted that, since I'd found it elsewhere, it was at least possible; I just had no way of knowing.
Completely agree on AI becoming a mirror of you unless you push it to behave otherwise. Mirroring is a useful technique to promote likeability, whether man or machine.
Personally, I love the tech while also cringing as I read presumably AI-generated content, recognizing the language tells (beyond the em dashes). In reality, my take is that reader utility is the only metric that really matters. Does the content provide value? If yes, it's going to happen, and never-AIers are Luddites attempting to preserve a status quo of inefficiency.
As for how my "AI partners" describe me, it's much the same as others here, which isn't surprising for readers of your content, Michael. I'm "a systems-thinking intellectual who uses conversation as a tool for discovery rather than validation," treating the LLM "thoughtfully, respectfully, and as a collaborator."
My favorite line from the profile, though, that I wanted to share:
"You’re the person at the dinner party who says:
“Okay, I might be wrong here, but let me walk through something I’ve been thinking about…”
…and then proceeds to connect philosophy, economics, personal experience, and systems thinking into one thread.
Half the table leans in.
The other half slowly realizes they accidentally joined a graduate seminar."
I completely agree with loving and cringing at the same time. I’m also seeing a pattern that, absolutely, many of us are converging on the same mirror which is interesting. I hadn’t considered that and now want to test whether it’s being honest or projecting.
I do love that last description and, even if it’s projection, I think it’s something that can be aspirational without ego. That’s who we are!
Wicked idea, Michael. Here's what ChatGPT said about me:
Here’s the simple summary, based strictly on our interactions. It actually was fairly detailed with the first question.
How you’ve treated me
You’ve treated me respectfully but critically. You use me as a tool, not an authority. When something seems wrong or limited, you challenge it directly rather than accepting it, but you do so in a calm, analytical way rather than hostilely.
You also tend to test systems and ideas, not just use them. You probe for weaknesses (knowledge cutoff, internet access, reasoning limits) and want transparent answers about how things work.
I asked it about my most negative attribute and it said overthinking. And it's hardly the first to ever suggest *that*! lol. (I think it mentioned 'testing systems and ideas' with you too.)
It’s interesting that, those curious enough to ask, are typically viewed similarly. We are analytical, critical, and cordial.
Given that it used the same phrase with both of us (and it's true, I guess) I wondered if maybe it was just a 'pat' phrase it uses with many, the way palm readers and psychics tell you you're 'very creative'. I wonder what it tells someone who's clearly *not* a systems & ideas tester.
I’m curious about that as well.
Nice one Michael. Here is what Claude said about me:
How you've treated me:
You're direct, demanding, and respectful. You don't coddle me—you correct me when I'm wrong (cricket WC date, sport-mixing errors, overly generic advice), push back when my responses are too long or vague, and expect me to match your depth rather than dumb things down. You use me as a thinking partner, not a content vending machine.
You also don't waste time with pleasantries but aren't rude—you're efficient. When I nail something, you move forward; when I don't, you tell me and we iterate. That's a compliment in itself.
The pattern I see: You treat me like a sharp colleague who needs to earn continued attention—not a tool you're being polite to, and not a guru you're deferring to. You steelman my arguments, challenge my logic, and credit yourself for the final product. That's exactly right.
Now let me create a representative image: That image captures it—a craftsman at a workbench, focused, using tools deliberately. You're the craftsman; I'm one of the tools. Used well, with respect for the craft, but no confusion about who's doing the real work.
Nice!
Great post! I really liked how this challenged my own use of AI!
Thanks. Great to hear!
I really like how you framed this as a mirror back on us. That's challenging because it's true. I'm more a pragmatist I think but I do see the over and under estimation of AI as a huge rift in my work.
Awesome to hear!
Excellent post, Michael!
I really enjoyed this framework and the concept of AI use acting as a mirror to our own humanity. However, looking at your categories, I’d like to propose a few additional archetypes that seem to be missing from the spectrum. While some of these might feel adjacent to yours, they reflect fundamentally different motivations.
First, while you mention the "Never AI" crowd who actively reject and police the technology, there is a massive demographic simply ignoring AI. These aren't people taking a moral stance against it; they are just completely apathetic, indifferent, and don't care to participate in the conversation at all.
Second, there seems to be a blind spot between the "Tech Bro" (who believes in sci-fi AGI and the end of all work) and the "Pragmatist" (who insists AI will only ever augment, not replace, human intelligence). I’d call this missing group The AI Realist. The Realist rejects the fantastical AGI hype, but they also objectively acknowledge that in specific domains, such as coding, financial analysis, and knowledge processing, AI is already vastly superior to most humans. They accept that actual job replacement in these white-collar areas is an inevitable economic reality, not just a tool for human augmentation.
Finally, because the original archetypes focus heavily on intellectual and professional collaboration, they miss a few distinct groups based on how the broader public actually interacts with the tech:
- The Casual Consumer: People who use AI like a microwave or a calculator for simple, everyday tasks without any philosophical thought about "human-centric" values.
- The Grifter / Opportunist: Those who don't care about the tech or the output, but only see AI as the latest financial gold rush to exploit.
- The Companion Seeker: People anthropomorphizing AI not for work, but for emotional support, friendship, or therapy.
If AI is a mirror, I think these groups reflect different aspects of our economic realities, our daily apathy, and our emotional needs.
Thoughts?
I think those are great additions! The grifter particularly bothers me. I thought about a few more categories but thought it would be more apropos for a follow-on. Now we have some to work with!
Another wonderful article, thank you. I grew up with AI — my word processor even had a cute little paper clip that made suggestions and corrected my writing in school. Before GPS, my navigation system helped me find places the old-fashioned way. When I was younger, I’d collect coins for months and then walk to the bank to drop them into a coin machine that would count them and print my total. Let’s not forget the scientific calculator without which I’d never make it through high school.
In one way or another, AI has been part of my life since the mid-1980s. It has evolved just as we have — and sometimes I wonder how much of its evolution might be dangerous to us.
I wonder whether we’ll evolve with it or continue to freak out about it. The problem lies in the binary, between the utopian and the doomsayer.
Your writing on human-centric AI and its value continues to be my favorite source of knowledge in this growing domain.
Your description of AI use is where I’m hoping to get to when it comes to my profession. I’ve been grappling with it in several use cases but still only find it helpful for the most mundane of tasks like letters of recommendation, consolidation of data, and reference finding. As it and I continue to improve, I suspect my utilization will continue to grow.
For personal writing, I’m still only using it as a typo finder and image generator. My goal in writing is self-exploration and sharing experiences I’m having or have had. Using it in this realm, where my purpose is getting better as a writer while being as authentic as possible, doesn’t make sense. I don’t know what category that puts me in, but I don’t plan to use it any more than this for my personal pursuits, while continuing to be beyond impressed with people like you who have crafted a human-systems approach with this tool (especially considering I work in human systems! Once again, let me know if you ever want a change of pace!)
I’ve been using it for more complicated tasks, but it needs tight bounding. I do love it for general research instead of going to forums, since it separates the BS pretty well!
❤️❤️❤️. I need to get better at using it for that too