The No True AI Fallacy
Welcome to Polymathic Being, a place to explore counterintuitive insights across multiple domains. These essays take common topics and explore them from different perspectives and disciplines and, in doing so, come up with unique insights and solutions. Fundamentally, a Polymath is a type of thinker who spans diverse specialties and weaves together insights that the domain experts often don’t see.
Today's topic is a twist on a challenge I’ve faced in the Autonomy / AI world for half a decade. It’s a variation on the No True Scotsman Fallacy: a continuous shifting of the goalposts as systems get smarter. We’ll also tie back into our previous looks at the opposite reaction, where we claim AI is already cognizant and self-aware. In doing so, we can hopefully avoid two of the major logical pitfalls surrounding our emerging technology.
Where we are in AI, and what that means for us as humans, is a topic we’ve explored in previous essays here on Polymathic Being, specifically The Layers of AI and What’s in a Brain.
The truth of the matter is that we aren’t very close to emulating what we are as humans. Yet we certainly strive to find comparisons that justify calling AI as smart as, or smarter than, we are. I don’t quite understand this urge to read so much into it, the almost insatiable demand that AI be smarter, more sentient, and more capable than the actual facts support.
On the other hand, we also keep moving the goalposts on what we consider to be ‘real AI.’ We are simultaneously over and under-attributing what AI is. This is a challenge I have to tease out in my novel Paradox when I introduce AI in the chapter Almost Intelligent:
“So, can we even consider what we have now as AI?” Hector asked.
“Of course it is!” Callie smiled. “It’s clearly almost intelligent.”
Kira chuckled. “What we have suffers from what my Dad would call the ‘No True Autonomy Fallacy,’ which is based on the No True Scotsman Fallacy. Basically, every time we achieve a degree of AI, we immediately dismiss it as not ‘true AI.’”
“So what is AI?”
Kira smiled, recalling the memory. “Dad would say, ‘What we call AI is merely unexplainable code. Explainable code is called software.’”
“Like when they said that we’d have AI when a computer could beat a human at chess, like Deep Blue did?” Hector shifted in his chair.
“Yep, and they moved the goalposts to the game of Go.” Kira mimed shifting the goal.
“But then they made AlphaGo and beat a human.” Sofie was doodling on her paper. “So, was that AI or just software?”
What Are AI and Autonomy Relative to Humans?
This is something I had to deal with when designing autonomous systems at Lockheed Martin. Whenever we achieved a truly innovative breakthrough, people would see it, understand how it worked, and then shrug and say “That’s not really autonomy.”
To be fair, they were right. It wasn’t autonomous in the way we know ourselves to be. But then again, as neuroscientist Sam Harris argues, we aren’t as uniquely autonomous, or as endowed with free will, as we like to think.
His proposition on free will is that we are just genetically coded, hormone-driven expert systems, merely reacting to external stimuli like an algorithm. This bears on what we call our individual autonomy as well as our intelligence.
I don’t agree with his entire proposition, but it does give one pause to consider how we really operate. He’s correct that we have far more ‘algorithmic’ responses than many give credit for, but I do believe there is a unique element in how we learn socially. It’s a superpower that distinctly differentiates us from computers. This social learning is something we explored in AI Is[n't] Killing Artists:
Namely, we have the ability to combine our capabilities into something greater than the sum of its parts.
This is where innovation occurs: through collaboration, sharing, and learning. By breaking down the barriers of expertise and looking beyond our own domains and disciplines, we hold an advantage that AI currently isn’t capable of matching.
The challenge is that both habits, moving the goalposts when we understand AI and panicking about it when we don’t, create a mess of opinions and commentary that doesn’t help with what critically matters: understanding ourselves!
We need to be able to dwell in the messy gray zone of a technology that is advancing quickly, challenging our understanding of its capability, and yet still explainable if we slow down and analyze it. We also need to ease off the over-attribution of AI and, instead, step back and focus more on what makes us human.
Right now, what we call AI is merely unexplainable code or, at least, code we just don’t understand or read too much into. But when it’s explainable code, we just call it software and merely use it in our web searches, word processing, social media ‘algorithms’, and more without a second consideration for the human implications.
This isn’t to say the software won’t continue to develop. Still, it should cause us to think twice before over-attributing capabilities while, at the same time, soberly accepting the technological gains we have made. Avoiding these two pitfalls, and being able to manage the messy gray space of our current development ecosystem, is critical to managing the human, ethical, and societal implications of AI.
This extra thought process is just one more superpower that humans have. We can think about these implications while AI, right now, merely computes.
So what topics have you dug into that surprised you with what you found on inspection?