Mind or Metal: Is AI Shaping the Next Evolution of Thought? PART I- by Rajabhishek Dey

  • Published on: 2024-09-29 12:45 am




 


Introduction - The Age of Machines and the Quest for Consciousness

Amid the hectic working schedule of adulthood—deadlines looming, emails pinging and a cup of coffee going cold on my desk—some days ago I found myself overwhelmed by the pace of modern life. As I sat staring at the endless rows of code on my screen, scrolling up and down, the tension in the air was palpable. Tiny droplets of rain clung to the window, casting a calming rhythm on the glass. "Talkin' to myself and feelin' old, Sometimes I'd like to quit... Nothin' ever seems to fit, Hangin' around, Nothin' to do but frown... Rainy days and Mondays always get me down" played softly on my system, its melancholic melody perfectly matching the gray skies outside. Then, in a fleeting moment, a random thought popped into my head that brought an unexpected smile to my face. The rhythmic tap of raindrops against the window reminded me of lazy childhood afternoons spent indoors, often with the TV on, escaping into the world of cartoons. It was on those rainy days that I enjoyed Mojo Jojo's comical attempts to take over the world in The Powerpuff Girls, and the memory of those carefree moments resurfaced, offering a stark contrast to the stress of my current reality. Yes, I remembered him—the notorious villain from the show. The image of his oversized brain under that iconic helmet, paired with his wild schemes to take over the world, flashed in my mind. Suddenly, the seriousness of the moment dissolved.

 


It was a vivid reminder of my carefree childhood days spent glued to Cartoon Network, watching Mojo Jojo's genius unfold. Little did I know back then that Mojo Jojo’s relentless quest for domination could be a lighthearted metaphor for what we now fear with AI: the rise of superintelligence. As I returned to my work, I couldn’t help but chuckle at how a childhood villain could so perfectly illustrate modern concerns about machines outsmarting their creators.


Puff!! In the current digital landscape, artificial intelligence (AI) has moved from being mere science fiction to becoming deeply embedded in our daily lives. From smart assistants to advanced algorithms that manage big data, AI has redefined technological limits. However, this rapid progress sparks a profound philosophical and ethical debate: can AI transcend its role as a tool and develop a mind of its own? Central to this inquiry is the question: will AI remain a machine, or could it evolve into a conscious being capable of thinking, understanding and self-awareness?


Since times long preceding modern technology, human beings have pondered the idea of creating intelligent life, a theme found in many myths and stories. In ancient Greece, the myth of Pygmalion, who sculpted a woman so lifelike that she came to life, captured our desire to blur the lines between creation and reality. Similarly, the myth of Talos, a giant automaton created to protect Crete, reveals an early fascination with artificial beings endowed with a semblance of life. These stories set the stage for more modern philosophical inquiries.


Another childhood memory is of Frankenstein (1818), in which Mary Shelley wrestles with the cost of creating artificial life, touching on both the promises and dangers of giving a creation autonomy. Her creature, though made of flesh, can be seen as a predecessor to discussions of machine intelligence—an entity capable of thought but alienated from its creator. Here, Shelley's themes resonate with contemporary issues about AI.


->What responsibilities do creators have?

->And what happens if machines become conscious?


 

Rabindranath Tagore, a pioneer of modern thought and literature, echoed these concerns about the relationship between humanity and creation. In his poetry, he reflected on the depth of human awareness, often exploring themes of selfhood and existence. His famous verse from Gitanjali,


"Where the mind is without fear and the head is held high; Where knowledge is free," 

-captures the essence of human dignity and freedom—qualities we might wonder if AI can ever achieve.

I hope today's article leads us to the fundamental questions about the essence of mind, cognition and existence itself. The debate is shaped by two conflicting perspectives: one that views intelligence as a computational process, and another that ties it to deeper philosophical concepts of consciousness. As we dive into these complex issues, we'll draw on scientific theories, philosophical arguments and cutting-edge ideas to examine whether AI could ever actually gain a mind. Let's see!


Philosophy’s Contribution to the Debate: From Descartes to Searle

At the heart of the debate is the question of whether machines can think in the same way humans do. Here comes the famous declaration of René Descartes, the father of modern philosophy: “Cogito, ergo sum” (I think, therefore I am). For him, thought and self-awareness were the basis of existence. His dualistic model—distinguishing the mind from the body—has left a deep mark on subsequent discussions about machine intelligence. According to Cartesian thought, machines could never be truly conscious, because they lack a soul, the immaterial essence that allows for contemplation.


 

Moving forward into the 20th century, the philosopher John Searle proposed the well-known "Chinese Room" argument, which challenges the idea that machines can have true understanding. In his thought experiment, Searle imagines a non-Chinese speaker who manipulates symbols according to a rulebook, generating responses indistinguishable from those of a fluent speaker. The point of the analogy is that, like a computer, the person in the room has no understanding of the meaning behind the symbols; they only have the rules for manipulating them. This illustrates the difference between syntactic processing (at which computers excel) and semantic understanding (which, according to Searle, remains uniquely human).
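The rule-following at the heart of Searle's thought experiment can be sketched as nothing more than a lookup table. The phrases and replies below are invented placeholders, not part of Searle's original paper; the point is only that the "room" produces answers by pure symbol matching, with no meaning involved:

```python
# A minimal sketch of the Chinese Room: replies come from a rulebook
# lookup, never from understanding. Entries are invented placeholders.
RULEBOOK = {
    "你好吗": "我很好",        # "How are you?" -> "I am fine"
    "你是谁": "我是一个房间",  # "Who are you?" -> "I am a room"
}

def chinese_room(symbols: str) -> str:
    """Return a reply by matching symbols against the rulebook.

    The function has no notion of what any symbol means; it only
    applies the manipulation rule (here, a dictionary lookup).
    """
    return RULEBOOK.get(symbols, "请再说一遍")  # default: "Please say it again"

print(chinese_room("你好吗"))  # a fluent-looking reply, zero understanding
```

To an outside observer the replies may look fluent, yet the program only performs syntactic matching, which is exactly the gap between syntax and semantics that Searle emphasizes.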

Coming to the modern era, David Chalmers, a current day philosopher, brings another dimension to the discussion with his distinction between the "easy" and "hard" problems of consciousness. 

“If we can build a machine that behaves like a human being, does it have consciousness? Or is it simply simulating consciousness?” This question encapsulates the debate around whether AI can truly be conscious or merely simulate consciousness. (David Chalmers)

The "easy" problems include explaining cognitive functions like perception and memory—tasks that artificial intelligence can mimic. The "hard" problem of consciousness, however, which deals with subjective experience and what it feels like to be something, remains unresolved. Can a machine ever achieve "qualia", the unique, first-person experience that characterizes consciousness?


The Science Behind AI: From Turing to Kurzweil


Now comes Alan Turing, the famous mathematician and computer scientist, whose landmark 1950 paper "Computing Machinery and Intelligence" posed the famous question, "Can machines think?". He proposed the now-famous "Turing Test": if a machine can exhibit intelligent behavior indistinguishable from that of a human, it qualifies as "thinking." Though he himself avoided metaphysical questions about awareness, his work laid the groundwork for contemporary AI research. The Turing Test suggests that the appearance of intelligence may be sufficient, even if machines never have consciousness the way we do.


 

 

Then there is Ray Kurzweil, a leading futurist, who has taken these ideas even further. In The Singularity Is Near: When Humans Transcend Biology (2005), he argues that artificial intelligence will eventually surpass human intelligence, leading to a "singularity": the point at which technology not only copies human cognition but far exceeds it. Kurzweil envisions a future where human beings merge with machines, augmenting their intelligence through cybernetic enhancements.


This vision of a post-human future raises ethical and philosophical questions about the nature of mind and identity: will AI be a partner in our evolution, or will it replace us?

Next, Carl Sagan in The Dragons of Eden (1977) offers another layer to this argument. While exploring the evolution of human intelligence, Sagan speculated about the potential for machines to achieve forms of intelligence that are alien to human experience. This raises the question: is our understanding of AI limited by our human-centric view of sentience?


 

Perhaps AI could develop its own form of intelligence that, while different from ours, is no less valid.


Moving Toward a Deeper Understanding: Intelligence vs. Consciousness

The difference between intelligence and consciousness lies at the heart of the debate. Intelligence, defined as the ability to process information and solve problems, can be simulated by machines. Deep learning systems, such as Google's AlphaGo or OpenAI's GPT models, demonstrate remarkable proficiency in tasks traditionally associated with human intelligence. These systems can beat world champions at complex games, generate coherent text and analyze vast datasets faster than any human ever could.

 

But true sentience involves more than the capability to perform tasks. It includes self-awareness, emotional depth and a sense of purpose. Cognitive scientists such as Douglas Hofstadter and Daniel Dennett have explored the possibility of machine consciousness, yet they remain skeptical about whether AI can ever possess the qualia that characterize human experience. In Consciousness Explained, Dennett argues that consciousness is an emergent property of complex systems, suggesting that machines may one day achieve something parallel to consciousness. Yet even Dennett stops short of claiming that AI will ever fully replicate the richness of human subjective experience.

The Mind-Body Problem and the Computational Approach to AI

As we move deeper into the debate on artificial intelligence and human sentience, it is important to explore one of philosophy’s most enduring dilemmas: the mind-body problem. The mind-body problem seeks to understand how mental phenomena—thoughts, emotions and awareness—relate to the physical body and brain. The question becomes even more pertinent when considering AI, as machines have no biological body. How could something entirely physical, like a machine, give rise to something seemingly non-physical, like consciousness?

Descartes and Dualism