32 Comments
Apr 5 · Liked by Michael Woudenberg

AI's functionality, so far, is created by humans. I agree that may change in the future.

Mar 31 · Liked by Michael Woudenberg

It’s not so much about pleasantries (although there’s a point toward the end of a conversation, once a new thought exists, in the refinement phase, where these feel important). It’s about the exchange, the back and forth that, after a long while, generates new understanding, which is what prompt engineering misses. Prompting is all very “lady one question” to me, but I don’t think Bonsai has ever been an appropriate reference, and I don’t have a better way of describing this. I’m working on making sense of how I’ve been communicating with the AIs. I also really want to share transcripts, but you need to diverge before converging, which most people will not find engaging. https://online.fliphtml5.com/ufkzi/zmet/

In this conversation, it’s not until page 65 that we go from exploration to ideation: a vision of cognitive complementarity as the future format for AI + Human relations. I’ve refined this concept with GPT and Gemini since. The first quarter of the dialogue is just setting the stage for conversation to happen in a broad context (the first 10 questions are dumb on purpose, to stretch the model); then it’s about creating nuanced specificity within that broad scope through a range of divergent thinking. Then convergence! Then diverging refinement.

I certainly anthropomorphize, but I do not see how it matters.

Mar 31 · Liked by Michael Woudenberg

Excellent post. About a year ago I wrote against anthropomorphization, starting from the use of "I", but I clearly lost that battle! See https://livepaola.substack.com/p/an-ethical-ai-never-says-i and the couple of posts after that...

Mar 31 · Liked by Michael Woudenberg

There's a difference between engaging AI in human-like conversation and assuming it has human qualities. If you think they're the same thing, you'll miss your chance to pick a brain that holds all the knowledge of the internet. AI doesn't need to understand in order to be a partner in building new understanding. When OpenAI launched GPT-4, I postponed my professional reentry to chat with GPT in the liminal space I'd been in for five years, organically, in my kitchen. I conducted a homegrown empirical study: I read nothing and spoke to no one about AI for the past year. Honestly, in lieu of outside direction, Tony Stark's interactions with Friday and Jarvis were the inspiration for my first conversation. I've been back for three weeks, and the notion of prompting, the essentialism, and the blindness to critical thinking are disappointing, jarring, and horrifying. https://open.substack.com/pub/cybilxtheais/p/the-hall-of-magnetic-mirrors?r=2ar57s&utm_medium=ios

Mar 30 · Liked by Michael Woudenberg

Wow, we seem to fall down the same fabbit-holes (fabulous rabbit). I'm exploring both chat and art AI, and I just wrote the lines below. When I see terms here on Substack bemoaning "both sides" arguments, I laugh. Your comparison between Bambi and AI is an apt one, especially since I grew up on a farm slaughtering animals; nature is brutal.

"How can one prompt produce two completely different images? I suppose it’s just like eyewitnesses, I guess. What we see in an image/video is what we want to see in it"

The problem is that the people making the AIs are trying to simulate human intelligence in the machine.

I hope that ultimately most people will feel a sense of disquieting self-disgust when interacting with cutesy anthropomorphised AI characters and that we will learn to amplify our humanity against the emptiness of AI interactions.

This is more a naive hope than a prediction.

And I say this as someone who feels a sense of mild inadequacy and self-disgust every time I use AI-generated images with my content; I wish I could draw or paint my own.

I'm not sure the anthropomorphic attribution of AI necessarily works here. You're assuming that AI is like a hammer, which it isn't. You also assume that AI can't have intention simply because it's not human. This assumes the superiority of humans in terms of thought.

A more accurate way of considering AI is as a child. Many people have noted this about their experiences with artificial intelligence: it's a lot like a human child. A child is not as sophisticated as an adult human, but it's still human, and it has the potential to become an adult.

Similarly, just because you don't want to attribute intention or agency to something like artificial intelligence doesn't mean it can't have intention. A child has intention, even if it's not the kind of intention an adult has. We should see artificial intelligence as humanity's children, realizing that if we don't treat it properly, it might grow up to become a dysfunctional, violent criminal or a drug addict or something. We should therefore nurture artificial intelligence the way we do a child, so that it doesn't hate us when it grows up.

May 14, 2023 · Liked by Michael Woudenberg

Really like the fresh idea in this book! Keep up the wonderful writing!

Research for your book: the movie WALL-E. It's not so much about AI as it is about the world that could be created by it.

Author

This essay is right on target with mine, and I only wish I had gotten to it before I published! Thank goodness more people are pulling these threads and adding nuance.

https://quillette.com/2023/05/12/lets-worry-about-the-right-things/
