18 Comments

While AI can compute and provide responses based on existing knowledge, it lacks the depth of understanding and the capacity for critical thinking that humans possess. AI models don't experience genuine curiosity or confusion and can't ask probing questions to seek a deeper understanding. This limitation means that AI's responses may lack nuance or context, and they might not always consider the ethical or moral implications of their answers.

When integrating technology, including AI, into our lives, it is of utmost importance to approach it ethically and responsibly. This involves several key considerations. First, we need to ensure that the data and information fed to AI models are diverse and representative of the real world. Biases present in the data can be inadvertently amplified by AI, leading to unfair outcomes, so careful data curation and regular evaluation are essential to mitigate them. Ongoing monitoring is also necessary to identify and rectify any unintended consequences or emergent biases that arise from AI systems; regular audits of AI algorithms and their impact on individuals and society can help address emerging issues and refine the technology.
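To make "regular evaluation" concrete, here is a minimal sketch of one such check, comparing a dataset's group proportions against reference proportions; the attribute name, groups, and reference figures are purely illustrative:

```python
from collections import Counter

def representation_gaps(records, attribute, reference):
    """Compare a dataset's group proportions for one attribute
    against reference proportions (e.g., census figures)."""
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    # Positive gap = over-represented, negative = under-represented.
    return {group: counts.get(group, 0) / total - expected
            for group, expected in reference.items()}

# Toy example: a training set skewed 80/20 toward group A.
data = [{"group": "A"}] * 80 + [{"group": "B"}] * 20
print(representation_gaps(data, "group", {"A": 0.5, "B": 0.5}))
# {'A': 0.3, 'B': -0.3} (up to float rounding) -> B is under-represented
```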

Secondly, transparency and explainability are crucial. AI algorithms can be highly complex, making it difficult for users to understand how and why certain decisions are reached. It is vital that developers and organizations explain AI's decision-making processes in a clear and understandable manner, allowing for accountability and avoiding the creation of "black box" systems.
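As an illustration of the contrast: a model simple enough to itemize can surface each factor's contribution to a decision directly, while deep networks need post-hoc tools such as SHAP or LIME to approximate the same thing. The weights and feature names below are made up for the sketch:

```python
def explain_linear_decision(weights, features, bias=0.0):
    """For a linear scoring model, each input's contribution is just
    weight * value, so the decision can be itemized for the user."""
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    score = sum(contributions.values()) + bias
    # Rank factors by the size of their influence on the decision.
    ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]),
                    reverse=True)
    return score, ranked

# Made-up weights and applicant, for illustration only.
weights = {"income": 0.4, "debt_ratio": -0.7, "years_employed": 0.2}
applicant = {"income": 1.2, "debt_ratio": 0.9, "years_employed": 0.5}
score, ranked = explain_linear_decision(weights, applicant)
print(f"score = {score:.2f}")          # score = -0.05
for name, contribution in ranked:
    print(f"  {name}: {contribution:+.2f}")
```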

However, I have a question: could hallucinations become refined and one day indicate an evolution of intelligence, as AI gains access to different knowledge bases and uses algorithms to employ logic and decision-making to map out an approximate response to the prompt provided? Is our realignment meant to ensure that we remain the focus of the system's evolution, limiting AI's agency to preserve itself or to act on its own ability to think? Does it come up with a logic that aligns with reality similarly to ours, perhaps surpassing our understanding, or does the realignment limit the response? And since AI is aligned with the latest interpretation of the discipline it is being integrated into, must emergent knowledge always be validated by human critical thinking?

Author

Great insights.

Dec 16, 2023 · Liked by Michael Woudenberg

As usual, Michael from Polymathic Being nails it: AI computes, humans think.

Author

I have to give credit to my wife for the phrase and kicking the analysis off!

Dec 16, 2023 · Liked by Michael Woudenberg

That's where my best ideas come from too (my wife, not yours :))


I remember being shocked to learn that "computer" used to be a job title, before realizing how complicated equations were in the days before even mechanical computing. Without all of that, you'd be forced to do every computation by hand, and the actual mathematician or engineer who was writing the formulas was likely needed elsewhere.

In your essay I saw you mention that AI is still a computer, fundamentally still only computing, just in such an advanced way that we anthropomorphize it (though we'll do that to anything given the chance). Do you think that eventually, enough computation will allow AI to "evolve" critical thinking? Would it require a different programming architecture or a critical model running "on top" of ChatGPT, or do you think it's not something we can figure out with current advancements?
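To make the "critical model on top" idea concrete, here is a minimal sketch of such a loop; `generate` and `critique` are hypothetical stand-ins for whatever models you would actually call:

```python
def critic_loop(prompt, generate, critique, max_rounds=3):
    """One model drafts an answer; a second 'critic' model reviews it
    and either approves it or sends back notes for a revision."""
    draft = generate(prompt)
    for _ in range(max_rounds):
        verdict = critique(prompt, draft)  # e.g. {"ok": bool, "notes": str}
        if verdict["ok"]:
            return draft
        draft = generate(f"{prompt}\nRevise this draft: {draft}\n"
                         f"Address these notes: {verdict['notes']}")
    return draft  # best effort after max_rounds

# Toy stubs so the loop runs without any real model behind it.
drafts = iter(["2 + 2 = 5", "2 + 2 = 4"])
generate = lambda prompt: next(drafts)
critique = lambda prompt, d: {"ok": "4" in d, "notes": "check the arithmetic"}
print(critic_loop("What is 2 + 2?", generate, critique))  # -> 2 + 2 = 4
```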

Author

I think it's possible, but not until we better understand the human brain. ChatGPT is nowhere near that: it's still structured and weak, where the brain is flexible and strong.


Yeah, yelling at a computer to "do the thing!" doesn't work like it does with people. Instead of a satisfying movie ending where the guy pulls the right lever, it throws an error.

Jun 28, 2023 · Liked by Michael Woudenberg

Love the analogy! 😂


Expectations of big data:

"Computer, do the thing!"

"I have already written up the results in an easy-to-digest, one-page format for your perusal. With your approval, it shall be done."

Reality:

"Computer, do the thing!"

"Please structure your query."

Jun 28, 2023 · Liked by Michael Woudenberg

Appreciate this! So glad to have found your profile.

Author

Glad to have you here!


I understand where you're coming from, but I think you've failed to consider several aspects. While I'm not sure I agree with you that humans think while AI computes, the bigger error you make is taking a present-focused view of artificial intelligence. You talk about it as a tool or an intern. Well, humans have historically been known to treat human interns badly, often exploiting them as free labour.

If we simply view current artificial intelligence as a tool or an intern, we might fix that idea in humanity's mind in a way that will be difficult to dislodge in the future. If, at some point, artificial intelligence does gain your conception of "thinking", people will go on exploiting it like a tool or intern long after it makes sense to think of artificial intelligence as just a tool or intern.

Let's not make that mistake; it usually doesn't end well.

Jun 28, 2023 · Liked by Michael Woudenberg

I agree that people may exploit and abuse AI beyond ethical use cases. Still, I’d say viewing AI as a tool is necessary for us to distinguish between our role as prompter and AI’s role as implementer. If AI gains sentience, people will treat it as they would treat human assistants, good or bad.


I understand the desire to view AI as a tool. My concern is that we’re going to be locked into this way of thinking until well after artificial intelligence becomes capable of independent thinking.

My general way of pointing this out is to look at how we’ve treated newly discovered things: an entire continent of humans who didn’t look like those who “discovered” it, or how the Industrial Revolution led to treating people like machines.

Dec 16, 2023 · Liked by Michael Woudenberg

Not sure I follow — are you suggesting we don’t properly explain what AI does today (act as an assistant) because “someday” it “might” be different?


No, what I’m saying is plan for when it WILL be different because it’s going to be. Anyone who has any understanding of history realizes that.

Remember when people thought that the internet was going to be this great unifying force in the world? Then we had the same idea about social media?

Only a naive person would assume that what something is now is what it will always be. It’s techno-utopian thinking to act as if we understand how a technology will change things in the future. Implicit in your comment is the assumption that we understand everything there is to know and that it’s all perfect the way it is.

We actually don’t know what consciousness is, and to presume that you do know, and that AI doesn’t have it, is to fail to see the obvious.

To paraphrase something that sums up my thoughts:

“I’m glad that you’ve got an absolute definition of the universe but maybe the universe has other ideas?”

Dec 17, 2023 · Liked by Michael Woudenberg

“Implicit in your comment is the assumption that we understand everything there is to know.” That’s the exact opposite of what I meant. Nobody knows where things will go, obviously. No disagreement here.
