Understanding Humanity's Superpower
While AI can compute and provide responses based on existing knowledge, it lacks the depth of understanding and the capacity for critical thinking that humans possess. AI models don't experience genuine curiosity or confusion and can't ask probing questions to seek a deeper understanding. This limitation means that AI's responses may lack nuance or context, and they might not always consider the ethical or moral implications of their answers.
When integrating technology, including AI, into our lives, it is of utmost importance to approach it ethically and responsibly. This involves several key considerations. First, we need to ensure that the data and information fed to AI models are diverse and representative of the real world. Biases present in the data can be inadvertently amplified by AI, leading to unfair outcomes. Careful data curation and regular evaluation are essential to mitigate these biases. Furthermore, ongoing monitoring is necessary to identify and rectify any unintended consequences or emergent biases that may arise from AI systems. Regular audits of AI algorithms and assessments of their impact on individuals and society can help address emerging issues and refine the technology.
Secondly, transparency and explainability are crucial. AI algorithms can be highly complex, making it difficult for users to understand how and why certain decisions are reached. It is vital for developers and organizations to make efforts to explain AI's decision-making processes in a clear and understandable manner, allowing for accountability and avoiding the creation of "black box" systems.
However, I have a question: can hallucinations refine over time and one day indicate an evolution of intelligence, as AI gains access to different knowledge bases and uses algorithms to employ logic and decision-making to map out an approximate response to the prompt provided? Is our realignment meant to ensure that we remain the focus of this systemic evolution, limiting AI's agency to preserve itself or act on its own ability to think? Does it arrive at a logic that aligns with reality much as ours does, perhaps surpassing our understanding, or does the realignment limit the response? AI is aligned with the latest interpretation of whatever discipline it is being integrated into, so must emergent knowledge always be validated by human critical thinking?
As usual, Michael from Polymathic Being nails it: AI computes, humans think.
I remember being shocked to learn that "computer" used to be a job title, before I realized how complicated equations were even before mechanical computing. Without those human computers, you'd be forced to do every computation by hand, and the actual mathematician or engineer who wrote the formulas was likely needed elsewhere.
In your essay I saw you mention that AI is still a computer, fundamentally only computing, but in such an advanced way that we anthropomorphize it (though we'll do that to anything given the chance). Do you think that eventually enough computation will allow AI to "evolve" critical thinking? Would it require a different programming architecture, or a critical model running "on top" of ChatGPT, or is it not something we can figure out with current advancements?
Appreciate this! So glad to have found your profile.
I understand where you're coming from, but I think you have failed to consider several aspects. While I'm not necessarily sure I agree with you that humans think while AI computes, the bigger error you make is taking a present-focused view of artificial intelligence. You talk about it as a tool or an intern. Well, humans have historically been known to treat human interns badly, often exploiting them as free labour.
If we simply view current artificial intelligence as a tool or an intern, we may plant that idea in humanity's mind, where it will be difficult to dislodge in the future. If at some point artificial intelligence does gain your conception of "thinking", people will continue to exploit it like a tool or an intern, long after it makes sense to think of it as just a tool or an intern.
Let's not make that mistake; it usually doesn't end well.