The Biggest Threat from AI is Us!

Stop Treating AI like it was Human

Michael Woudenberg
May 14, 2023

Welcome to Polymathic Being, a place to explore counterintuitive insights across multiple domains.  These essays take common topics and explore them from different perspectives and disciplines and, in doing so, come up with unique insights and solutions.  Fundamentally, a Polymath is a type of thinker who spans diverse specialties and weaves together insights that the domain experts often don’t see. 

Today's topic tackles the distinctly human tendency to project our own emotional states onto animals and, in the case of AI, onto inanimate objects. While the first can be cute and innocuous, the second creates an incredible amount of risk for the advancement of AI and the well-being of humans. It doesn’t have to be this way, and the first step is to acknowledge what we are doing and what impact that can have.

Created using Leonardo.ai

I recently stumbled across a fellow on LinkedIn who is the king of bad platitudes (he’s got hundreds). His work is innocuous on the surface but, sadly, so wrong that it lays the foundation for what becomes truly dangerous when applied to Artificial Intelligence. In one particular post (see image), a video shows a toddler picking up the lead of a large horse and walking it toward the barn. The author tries to ascribe leadership traits to the toddler, and commenters suggest we could learn a lot about how to be human from the horse…

Not a Video… but here’s the Link

It sounds cute. But it’s all a projection… and really dumb when you think about it, because that toddler is following the cues of the adults, as is the horse. That horse is also not just trained; it’s tame.

The entire post is pure over-attribution (seeing more in something than really exists) and anthropomorphization (attributing human characteristics or behavior to non-human things). Both are bad habits in general and, when applied to AI, become one of the largest risks to human well-being. I know I sound a little worried compared to my normally pragmatic approach, but let me explain.

Setting the Foundation

First, it’s important to understand the Layers of AI. This essay explores them in depth, but in short: our current AI is still weak and hard, and to achieve real AGI it needs to be strong and soft.

Second, we have to know more about ourselves. The first step is to understand What’s in a Brain, because it’s a lot more complex than our logical prefrontal cortex. It is also essential to understand how we Know Nothing; that is, we have an amazing ability to be cognitively blind to reality and to convince ourselves of things that don’t exist.

This foundation should provide a solid basis to explore the risks of how our projections onto AI can be dangerous.

Projecting Humanity onto AI

When we project onto a non-human animal or an object, we read into it what we’d expect from ourselves. As we explored in The Con[of]Text, we interpret things as if they were us, not as they really are.

The movie Bambi is a great example. The story itself is cute and fun, but that chipmunk-and-owl friendship? That chipmunk is actually owl food. This anthropomorphization gets nature so completely wrong as to be propaganda. (In the movie, nature is perfectly idyllic and humans are perfectly catastrophic.) Yet I’ve watched an owl land on prey and rip it apart, bite by bite, pulling the entrails out and gulping them down as the creature slowly dies.

There’s a reason the chipmunk fears the owl, and it’s not because he’s crabby (Bambi, 1942; Source)

Nature isn't idyllic in the way humans would like to project. Nature is metal. This is why it's so dangerous to read into an AI what isn't in the AI: we'll grant it a sentience, consciousness, and morality that just don't exist.

We’ve already seen this projection manifesting in the news. According to an article from USA Today, Bing’s ChatGPT has been accused of anti-conservative bias and a grudge against Trump. Another article from The Verge states that Bing can be seen insulting users, lying to them, sulking, gaslighting, and emotionally manipulating people. Fast Company also reported that Bing’s new ChatGPT bot argued with a user, gaslit them about the current year being 2022, suggested their phone might have a virus, and said, “You have not been a good user.”

The problem is that this is human intent, human emotion, and human behavior projected onto ChatGPT. We are treating it as smarter than it really is and reacting to that.

It is critical to understand this point: the current ChatGPT is a statistically driven predictive-text tool whose only goal is to answer with what looks mathematically consistent with its training data.
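
To make that concrete, here’s a minimal sketch of purely statistical next-word prediction. This is my own toy example, nowhere near ChatGPT’s actual transformer architecture or scale, but the principle is the same: the “model” is just counts derived from training text, and generation is sampling from those counts.

```python
import random
from collections import Counter, defaultdict

# A tiny training corpus. Real models train on trillions of words,
# but the idea is identical: count patterns, then predict from them.
training_text = (
    "the horse follows the handler the toddler follows the adults "
    "the owl eats the chipmunk the model predicts the next word"
).split()

# Build a bigram table: how often each word follows another.
bigrams = defaultdict(Counter)
for prev, nxt in zip(training_text, training_text[1:]):
    bigrams[prev][nxt] += 1

def next_word(prev: str) -> str:
    """Sample the next word in proportion to how often it followed
    `prev` in training. No goals, no feelings -- just frequencies."""
    words, weights = zip(*bigrams[prev].items())
    return random.choices(words, weights=weights)[0]

# Generate a short "reply", stopping if a word never had a successor.
word, output = "the", ["the"]
for _ in range(10):
    if not bigrams[word]:
        break
    word = next_word(word)
    output.append(word)
print(" ".join(output))
```

Nothing in that loop wants anything. Scale it up by a few hundred billion parameters and the output gets eerily fluent, but the mechanism is still prediction, not intention.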

What’s also missing from these stories is the effort it takes to push GPT into these corners; when that effort is uncovered, it often shows the human user manipulating ChatGPT’s responses toward an intended outcome. This works because ChatGPT doesn’t have the emotional cognition to recognize the manipulation, so the mathematical algorithms simply keep working to match and respond as accurately as possible.

We explored this in Eliminating Bias in AI/ML where we discovered that algorithms are just math with no intent or emotion. So, when we see something we don’t like, it’s more likely to be an accuracy problem, not an ethical one.
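
Here’s a toy illustration of that point (my own example, not the method from that essay): a frequency-based “sentiment” predictor trained on skewed data will look like it holds a grudge against a topic, when all it actually holds is counts.

```python
from collections import Counter

# Hypothetical skewed training data: topic_a shows up mostly in
# negative sentences, while topic_b is evenly balanced.
training_pairs = (
    [("topic_a", "negative")] * 80 + [("topic_a", "positive")] * 20 +
    [("topic_b", "negative")] * 50 + [("topic_b", "positive")] * 50
)
counts = Counter(training_pairs)

def predicted_sentiment(topic: str) -> str:
    """Return whichever label co-occurred with `topic` more often.
    Pure frequency matching: there is no opinion anywhere in here."""
    pos = counts[(topic, "positive")]
    neg = counts[(topic, "negative")]
    return "positive" if pos >= neg else "negative"

print(predicted_sentiment("topic_a"))  # "negative" -- looks like a grudge...
print(predicted_sentiment("topic_b"))  # "positive" -- ...but it is only counts.
```

Fix the skew in the data and you fix the “grudge.” No apology or attitude adjustment required, because there was never an attitude to begin with.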

Interpreting an accuracy problem as human emotion or behavior is the over-attribution and anthropomorphization we see manifest in those news articles.

The Outcome

If we project our human emotions onto AI, we will interpret them differently than if we saw them for the inanimate, mathematical, algorithmic machines they are. If a screwdriver slips and we stab our hand, we don’t assume the screwdriver intended to do that, and we shouldn’t assume intent from AI either, because that assumption changes the entire way we view these systems.

I’ve seen people use terms like ‘befriending’ AI, which assumes the AI has some concept that you are unique among the billions of other users. Moreover, the AI doesn’t even have a theory of mind with which to conceptualize you at all. It merely runs algorithms to maximize the mathematical rewards that indicate accuracy.
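
A minimal sketch of why ‘befriending’ a model can’t register (the `respond` function and its `user_id` parameter are hypothetical stand-ins for a stateless model API, invented purely to make the point):

```python
def respond(prompt: str, user_id: str) -> str:
    """The reply depends only on the prompt text. Nothing about who is
    asking, or how often they've asked before, exists inside the model."""
    # A real system would run a forward pass through a network here;
    # either way, user_id never influences the output.
    return f"most statistically likely continuation of {prompt!r}"

# A devoted "friend" and a first-time stranger get identical treatment.
print(respond("We've talked every day this month!", user_id="loyal_fan"))
print(respond("We've talked every day this month!", user_id="stranger"))
```

(Chat products do stuff conversation history back into the prompt, which creates the illusion of a relationship, but the model itself remains a function of its input text.)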

This is where the blend of Psychology and Technology is important. As Gary Marcus writes in The Road to AI We Can Trust: “I am not afraid of robots. I am afraid of people.”


I’ve also just finished writing my first pass on a science fiction novel I’m titling Paradox. The tagline currently reads:

In the battle over AI, will we lose our humanity or learn what truly makes us human?

It’s an apocalyptic exploration of advanced AI gaining general intelligence, and it looks at the unique idiosyncrasies that being human entails.

Buy on Amazon!

What has me worried is that, in writing it, the easiest way I found to kill billions of people was to crack the thin veneer of civilization and let humans do the work themselves. It isn’t really AI that causes the problem; AI just unlocks behaviors we like to ignore.

Summary

The biggest risk with over-attribution and anthropomorphizing of AI is that it’s a fallacy made real because we believe it is real. When we treat AI as if it were human and react as such, we unlock the very behaviors we like to ignore, just as we saw with Bambi and that LinkedIn post.

The gritty reality is a lot more challenging to address. As Sun Tzu said:

"Know thy enemy and know yourself; in a hundred battles, you will never be defeated. When you are ignorant of the enemy but know yourself, your chances of winning or losing are equal. If ignorant both of your enemy and of yourself, you are sure to be defeated in every battle."

The goal of this essay is to let us know both what we are facing and who we are. If we can do that, we can see that AI doesn’t have to be an enemy; that interpretation is more likely a figment of our own imagination. Instead, AI, properly contextualized and faced by humans who understand themselves, can be used for incredible human flourishing.

Since this topic is quite deep and so many great essays have been written on it, I’d like to call out a few that I’ve found very informative in the past few weeks. I’m thrilled to see so many great thinkers poking at this topic because I truly believe this can be a transformative and enabling technology. If we understand it, and ourselves.

Michael Spencer and Zvi Mowshowitz writing for AI Supremacy: “GPT Agents and AGI”

Birgitte Rasine writing for The Muse: “The AI : Human interface”

Gary Marcus and Sasha Luccioni writing for The Road to AI We Can Trust: “Stop Treating AI Models Like People”