- Published on: 2024-09-29 12:45 am
Mind or Metal: Is AI Shaping the Next Evolution of Thought? Part I, by Rajabhishek Dey
Introduction - The Age of Machines and the Quest for Consciousness
Amid the hectic working schedule of adulthood—deadlines looming, emails pinging, and a cup of coffee going cold on my desk—some days ago I found myself overwhelmed by the pace of modern life. As I sat staring at the endless rows of code on my screen... scrolling up and down, the tension in the air was palpable. Tiny droplets of rain clung to the window, tapping a calming rhythm on the glass. "Talkin' to myself and feelin' old, Sometimes I'd like to quit... Nothin' ever seems to fit Hangin' around, Nothin' to do but frown... Rainy days and Mondays always get me down" played softly on my system, its melancholic melody perfectly matching the gray skies outside. Then, in a fleeting moment, a random thought popped into my head and brought an unexpected smile to my face. The rhythmic tap of raindrops against the window reminded me of lazy childhood afternoons spent indoors, often with the TV on, escaping into the world of cartoons. It was during those rainy days that I enjoyed Mojo Jojo’s comical attempts to take over the world in The Powerpuff Girls, and the memory of those carefree moments resurfaced, offering a stark contrast to the stress of my current reality. Yes, I remembered him—the notorious villain from the show. The image of his oversized brain under that iconic helmet, paired with his wild schemes to take over the world, flashed in my mind. Suddenly, the seriousness of the moment dissolved.
It was a vivid reminder of my carefree childhood days spent glued to Cartoon Network, watching Mojo Jojo's genius unfold. Little did I know back then that Mojo Jojo’s relentless quest for domination could be a lighthearted metaphor for what we now fear with AI: the rise of superintelligence. As I returned to my work, I couldn’t help but chuckle at how a childhood villain could so perfectly illustrate modern concerns about machines outsmarting their creators.
Puff!! In the current digital landscape, artificial intelligence (AI) has moved from mere science fiction to being deeply embedded in our daily lives. From smart assistants to advanced algorithms that manage big data, AI has redefined technological limits. However, this rapid progress sparks a profound philosophical and ethical debate: Can AI transcend its role as a tool and develop a mind of its own? Central to this inquiry is the question: will AI remain a machine, or could it evolve into a conscious being, capable of thinking, understanding, and self-awareness?
Since long before modern technological development, we human beings have pondered the idea of creating intelligent life, an idea found in many myths and stories. In ancient Greece, the myth of Pygmalion, who sculpted a woman so lifelike that she came to life, captured our desire to blur the lines between creation and reality. Similarly, the myth of Talos, a giant automaton created to protect Crete, reveals an early fascination with artificial beings endowed with a semblance of life. These stories set the stage for more modern philosophical inquiries.
Another childhood memory is of Frankenstein (1818), in which Mary Shelley wrestles with the cost of creating artificial life, touching on both the promises and dangers of giving machines autonomy. Her creation, though made of flesh, can be seen as a predecessor to discussions of machine intelligence—an entity capable of thought but alienated from its creator. Here, Shelley's themes resonate with contemporary issues about AI.
->What responsibilities do creators have?
->And what happens if machines become conscious?
Rabindranath Tagore, a pioneer of modern thought and literature, echoed these concerns about the relationship between humanity and its creations. In his poetry, he reflected on the depth of human awareness, often exploring themes of selfhood and existence. His famous verse from Gitanjali,
"Where the mind is without fear and the head is held high; Where knowledge is free,"
-captures the essence of human dignity and freedom—qualities we might wonder if AI can ever achieve.
I hope today’s article will successfully lead us to the fundamental questions about the essence of the mind, cognition, and existence itself. The debate is shaped by two conflicting perspectives: one that views intelligence as a computational process and another that ties it to deeper philosophical concepts of consciousness. As we dive into these complex issues, we’ll draw on scientific theories, philosophical arguments, and cutting-edge ideas to examine whether AI could ever actually gain a mind. Let’s see!
Philosophy’s Contribution to the Debate: From Descartes to Searle
At the heart of the debate is the question of whether machines can think in the same way humans do. Here comes the famous declaration of René Descartes, the father of modern philosophy: “Cogito, ergo sum” (I think, therefore I am). For him, thought and self-awareness were the basis of existence. His dualistic model—distinguishing the mind from the body—has left a deep mark on subsequent discussions about machine intelligence. According to Cartesian thought, machines could never be truly conscious, because they lack a soul, the immaterial essence that allows for contemplation.
Moving forward into the 20th century, the philosopher John Searle put forward the famous "Chinese Room" argument, which challenged the idea that machines could have true understanding. In his thought experiment, Searle imagines a non-Chinese speaker manipulating symbols according to a rulebook, generating responses indistinguishable from a fluent speaker's. The point of the analogy is that, like a computer, the person in the room has no understanding of the meaning behind the symbols; they have only the rules of manipulation. This shows the difference between syntactic processing (which computers excel at) and semantic understanding (which, according to Searle, remains uniquely human).
Coming to the modern era, David Chalmers, a current day philosopher, brings another dimension to the discussion with his distinction between the "easy" and "hard" problems of consciousness.
“If we can build a machine that behaves like a human being, does it have consciousness? Or is it simply simulating consciousness?” This question encapsulates the debate around whether AI can truly be conscious or merely simulate consciousness. (David Chalmers)
The "easy" problems include explaining cognitive functions like perception and memory—tasks that artificial intelligence can mimic. The "hard" problem of consciousness, however, which deals with subjective experience and what it feels like to be something, remains unresolved. Can a machine ever achieve “qualia,” the unique, first-person experience that characterizes consciousness?
The Science Behind AI: From Turing to Kurzweil
Now comes Alan Turing, the famous mathematician and computer scientist, whose landmark 1950 paper, "Computing Machinery and Intelligence," posed the famous question: "Can machines think?" He proposed the now-famous "Turing Test," under which a machine whose intelligent behavior is indistinguishable from that of a human qualifies as "thinking." Though he himself avoided metaphysical questions about awareness, his work laid the cornerstone of contemporary AI research. The Turing Test suggests that the appearance of intelligence may be sufficient, even if machines can never have consciousness the way we people do.
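To make the protocol concrete, here is a toy sketch of the imitation game's structure. Every function and canned phrase below is an invented placeholder of mine (nothing here comes from Turing's paper); the point is only to show how the test converts "can it think?" into "can a judge tell the answers apart?":

```python
import random

def human_player(question):
    # Stand-in for a human respondent.
    return "I find '" + question + "' a hard question."

def machine_player(question):
    # A machine that imitates the human's answering style exactly.
    return "I find '" + question + "' a hard question."

def judge(answer_a, answer_b):
    # The interrogator tries to spot the machine. When the answers are
    # indistinguishable, the guess can be no better than chance.
    if answer_a == answer_b:
        return random.choice(["a", "b"])
    return "a" if len(answer_a) > len(answer_b) else "b"

def imitation_game(question):
    answers = {"a": machine_player(question), "b": human_player(question)}
    guess = judge(answers["a"], answers["b"])
    return guess == "a"  # True only if the judge caught the machine
```

Notice that the protocol is purely behavioral: whether anything is "thinking" inside either player is deliberately left outside the test.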
Then there is Ray Kurzweil, a leading futurist, who has taken these ideas even further. In The Singularity Is Near: When Humans Transcend Biology (2005), he argues that artificial intelligence will eventually surpass human intelligence, leading to a “singularity”: the very point at which technology not only copies human cognition but far exceeds it. Kurzweil envisions a future where human beings merge with machines, augmenting their intelligence through cybernetic enhancements.
This vision of a post-human future raises ethical and philosophical questions regarding the nature of mind and identity: Will AI be a partner in our evolution, or will it replace us?
Next, Carl Sagan in The Dragons of Eden (1977) offers another layer to this argument. While exploring the evolution of human intelligence, Sagan speculated about the potential for machines to achieve forms of intelligence that are alien to human experience. This raises the question: Are we limited in our understanding of AI by our human-centric view of sentience?
Perhaps AI could develop its own form of intelligence that, while different from ours, is no less valid.
Moving Toward a Deeper Understanding: Intelligence vs. Consciousness
The difference between intelligence and consciousness lies at the heart of the debate. Intelligence, defined as the ability to process information and solve problems, can be simulated by machines. Deep learning systems, such as Google's AlphaGo or OpenAI's GPT models, demonstrate remarkable proficiency in tasks traditionally associated with human intelligence. These systems can beat world champions at complex games, create coherent text, and analyze vast datasets faster than any human ever could.
But true sentience involves more than just the capability to perform tasks. It includes self-awareness, emotional depth, and a sense of purpose. Cognitive scientists such as Douglas Hofstadter and Daniel Dennett have explored the possibility of machine consciousness, yet they remain skeptical about whether AI can ever possess the qualia that characterize human experience. In his book Consciousness Explained, Dennett argues that consciousness is an emergent property of complex systems, indicating that machines may one day achieve something parallel to consciousness. Yet even Dennett stops short of claiming that AI will ever fully replicate the richness of human subjective experience.
The Mind-Body Problem and the Computational Approach to AI
As we move deeper into the debate on artificial intelligence and human sentience, it is important to explore one of philosophy’s most enduring dilemmas: the mind-body problem. The mind-body problem seeks to understand how mental phenomena—thoughts, emotions, and awareness—relate to the physical body and brain. The question becomes even more pertinent when considering AI, since machines have no biological body. How could something entirely physical, like a machine, possess something seemingly non-physical like consciousness?
Descartes and Dualism
The dualism proposed by René Descartes laid the foundation for many philosophical discussions of consciousness. In his Meditations on First Philosophy (1641), Descartes argued that the mind and body are fundamentally distinct: the body is a material substance, subject to the laws of physics, while the mind is immaterial and capable of independent thought. As he proposed, consciousness arises from the soul, a non-physical entity that interacts with the physical body through the pineal gland.
The Pineal Gland. Sagittal section of brain, view from the left, the surface of the medial half of the right side is seen. Source: Professor Dr. Carl Ernest Bock, Handbuch der Anatomie des Menschen, Leipzig 1841.
Applying Cartesian dualism to artificial intelligence, we conclude that AI systems, being entirely physical, can never gain consciousness. If consciousness is the domain of the immaterial soul, as Descartes said, then machines, however advanced they become, will always remain mindless automata, merely simulating thought rather than experiencing it. This viewpoint reinforces the view that AI, no matter how intelligent, remains just a feeling-less machine: an imitation of the mind, but not a true possessor of it.
Materialism and the Computational Theory of Mind
However, many philosophers have strongly disagreed with this dualistic view. The computational theory of mind, as championed by cognitive scientists like Jerry Fodor and Daniel Dennett, posits that the mind operates like a computer, processing information through algorithms and symbolic representations. This materialistic view suggests that mental states are simply the result of physical processes in the brain, similar to the way a computer processes data through its circuitry.
This theory provides a framework for analyzing how a neural network could, in theory, replicate the processes of the human mind. If thought is nothing more than information processing, then machines, which are also information processors, could potentially "think." This gives rise to the idea that AI, with the right programming, could develop consciousness—or at least something akin to it. Marvin Minsky, a pioneer of AI research, explored this idea in his influential work The Society of Mind (1985), where he described intelligence as the outcome of interactions among many simple processes that together create the illusion of a unified mind.
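As a rough illustration of Minsky's picture (a toy invention of mine, not Minsky's actual model), consider a handful of "agents" that each know only one trivial operation. None of them understands the overall goal, yet their fixed pattern of cooperation produces behavior that looks like unified problem-solving:

```python
def find_agent(blocks, color):
    # Knows only how to locate a block of a given color.
    return blocks.index(color)

def grasp_agent(blocks, position):
    # Knows only how to remove the block at a given position.
    return blocks.pop(position)

def stack_agent(tower, block):
    # Knows only how to place a block on top of a tower.
    tower.append(block)

def build_tower(blocks, order):
    # No single agent "understands" tower-building; the apparent
    # competence emerges from how the simple agents are wired together.
    tower = []
    for color in order:
        position = find_agent(blocks, color)
        block = grasp_agent(blocks, position)
        stack_agent(tower, block)
    return tower

print(build_tower(["red", "blue", "green"], ["green", "red", "blue"]))
# → ['green', 'red', 'blue']
```

The "society" builds the tower in the requested order even though no component knows what a tower is, which is the intuition Minsky's title gestures at.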
Searle’s Chinese Room and the Limits of Computation
Yet, as we have seen, not all thinkers agree that computation alone can account for consciousness. John Searle’s Chinese Room argument serves as a powerful critique of the computational theory of mind. In Searle's thought experiment, a person who does not know or understand the Chinese language sits in a room, receiving Chinese characters and following a rulebook that matches those characters with appropriate responses. The person inside the room can pass a "Turing Test" and fool an outside observer into thinking they understand the language. Nevertheless, the person does not know the language; they are only manipulating symbols according to pre-defined rules.
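The rulebook at the heart of the thought experiment can be sketched as nothing more than a lookup table. The two entries below are invented toy examples; the telling detail is that the same code would work for any meaningless token pairs:

```python
# The occupant's rulebook: input symbols mapped to output symbols.
RULEBOOK = {
    "你好吗": "我很好, 谢谢",   # "How are you?" -> "I am fine, thanks"
    "你是谁": "我是你的朋友",   # "Who are you?" -> "I am your friend"
}

def room_occupant(symbols):
    # The occupant matches the incoming characters against the rulebook
    # and copies out the listed reply. No meaning is ever consulted:
    # syntax in, syntax out.
    return RULEBOOK.get(symbols, "请再说一遍")  # default: "please say it again"
```

To an outside observer the replies may look fluent, yet the occupant, like the dictionary lookup, never touches semantics; that gap is exactly what Searle presses on.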
His work illustrates that even if a system can convincingly simulate intelligence, that does not mean it actually understands or possesses consciousness. From this point of view, AI might process information yet remain fundamentally different from human minds, which have intrinsic understanding and subjective experience.
Chalmers and the Hard Problem of Consciousness
David Chalmers, a leading contemporary philosopher in the field of consciousness studies, introduces a further complication with his famous distinction between the "easy" and "hard" problems of consciousness. The "easy" problems involve explaining cognitive functions, such as memory, perception, and problem-solving—tasks that AI can increasingly perform with proficiency. The "hard" problem, however, concerns the nature of subjective experience, or qualia. Why is there something it feels like to be conscious? Why do we experience the world from a first-person perspective, with emotions, sensations, and awareness of our own existence?
Chalmers argues that even if we could build a machine that perfectly mimics human behavior and cognitive abilities, we might still not have explained consciousness. AI, as it stands, lacks the subjective experience—the qualia—that define consciousness. No matter how sophisticated AI becomes, it may never truly "feel" or "experience" the world as humans do.
In his discussions on artificial intelligence and consciousness, David Chalmers raises several thought-provoking points. He warns about the potential creation of a world filled with highly intelligent AIs that lack consciousness, stating,
“I mean, one thing we ought to at least consider doing there is making... maybe we can be most confident about consciousness when it’s similar to the case that we know about the best, namely human consciousness.”
This implies a need for AI development to aim for human-like consciousness to ensure meaningful and valuable experiences (Big Think).
His logic runs as follows:
“ 1. Physicalism says that everything in our world is physical.
2. If physicalism is true, a possible metaphysical world must contain everything our regular physical world contains, including consciousness.
3. But we can conceive of a “zombie world” that’s like our world physically except for no one in it has consciousness.
4. Physicalism is then proven false.”
Physicalists, of course, beg to differ. They argue that any identical copy of our physical world would contain consciousness by necessity.
Moreover, Chalmers emphasizes the risks of advancing AI without understanding consciousness, saying, “...the possibility that we create human or superhuman level AGI and we’ve got a whole world populated by superhuman level AGIs, none of whom is conscious.” This highlights his concern that such a scenario could lead to a “world of great intelligence, no consciousness” which he views as potentially detrimental (The AI Pioneers and Big Think).
Can Machines Feel? Emotional Intelligence and AI
One popular argument for why AI cannot be truly conscious lies in its inability to experience emotions. Human consciousness is deeply intertwined with our emotional experiences, which inform our decisions, shape our interactions with others, and contribute to our sense of self. While artificial intelligence can be programmed to identify emotional cues or even simulate emotional reactions, it does not feel these emotions the way humans do.
The concept of emotional intelligence, as popularized by the psychologist Daniel Goleman, highlights the significance of emotions in human intelligence. Emotional intelligence includes self-awareness, empathy, and the ability to regulate one's emotions in social contexts. These qualities are central to human consciousness, yet they remain elusive in machines. While AI can analyze patterns of human behavior and copy emotional responses, it does so without genuine self-awareness or emotional experience.
This distinction between simulated emotions and real emotional experience reinforces the idea that artificial intelligence, even as it becomes more advanced, differs from human minds at a fundamental level. Machines might eventually surpass human beings in logical problem-solving; however, they might never actually understand the depth of human emotion.
The Role of Embodiment in Consciousness
Another factor to consider in the debate about AI and consciousness is the role of embodiment. Human consciousness is not just a product of the brain; it is also shaped by our bodily experiences. This perspective is supported by the field of embodied cognition, which argues that cognitive processes are deeply rooted in the interactions between the body and the environment. According to this view, consciousness is not just about information processing in the brain but also about how we move, perceive, and interact with the world through our physical bodies.
The philosopher Maurice Merleau-Ponty, in his work Phenomenology of Perception (1945), emphasized the importance of the body in shaping our experiences and our understanding of the world. For Merleau-Ponty, the mind and body are not separate entities but are intertwined, with the body playing a crucial role in how we experience consciousness.
In contrast, AI lacks a biological body and, therefore, the embodied experience that humans possess. While robots can be given physical forms and sensors, their "bodies" do not generate the kind of sensory and emotional feedback that human bodies do. This raises the question: Can a disembodied machine ever truly possess consciousness or is embodiment essential to the human experience of being?
We know that the traditional Darwinian view of evolution has been challenged by the Lamarckian hypothesis, which focuses on evolution driven by environmental pressures. Darwin himself acknowledged Lamarck’s ideas; however, the neo-Darwinists have maintained their position, often driven by vested interests. The persistence of outdated Darwinian models affects our understanding of diseases, as much of modern pathophysiology depends on them.
In his book The Story of the Human Body, Daniel E. Lieberman, an evolutionary biologist at Harvard, argues that many present-day lifestyle diseases arise from a mismatch between our Paleolithic biology and the fast-paced cultural evolution of the last 200 years. Our ancestors evolved to survive in a hostile environment; the stress-response mechanisms that once saved us, such as adrenaline and cortisol, are now over-activated by modern societal pressures. This has contributed to diseases like diabetes, heart disease, cancer, and many more.
The shift from a physically active lifestyle to a sedentary one has multiplied this problem. Moreover, present-day comforts and technological advances, while making life easier, have introduced harmful chemicals that disrupt our natural microbial balance, leading to further health problems. Lieberman’s work suggests that understanding the evolutionary roots of our ailments offers better solutions for many present-day health crises.
Kurzweil’s Vision of the Future: The Singularity
Futurist Ray Kurzweil offers a more optimistic vision of AI’s potential to achieve consciousness in his book The Singularity Is Near (2005). Kurzweil predicts that AI will eventually surpass human intelligence, leading to a technological singularity—a point at which machines will become so advanced that they will not only replicate human cognitive abilities but will exceed them. Kurzweil envisions a future where humans and machines merge, with AI augmenting human intelligence and potentially even achieving consciousness.
Kurzweil's vision raises profound ethical and philosophical questions: If AI surpasses human intelligence, what role will humans play in the future? Will AI enhance human life, or will it replace us? Can AI truly possess consciousness, or will it remain a sophisticated tool that mimics human cognition without ever becoming fully conscious?
Blog by Aleksandar (Alex) Vakanski (University of Idaho)
Functionalism: The Mind as a Set of Functions
In contrast to Cartesian dualism stands functionalism, a more modern philosophy of mind. Its argument is that consciousness can be understood in terms of the functions a system performs. According to this theory, what matters is not the specific material that makes up the system (whether biological neurons or silicon chips), but rather the operations that the system performs.
In this view, if an AI system can perform the same functions as a human mind—processing information, solving problems, learning from experience—then it could, in theory, possess consciousness. This perspective aligns with the work of philosophers like Hilary Putnam and Daniel Dennett, who argue that the mind can be understood as a complex information-processing system, not unlike a computer.
Functionalism supports the possibility that machines could achieve consciousness if they can replicate the necessary cognitive functions. However, this raises further questions:
-What exactly are the functions that constitute consciousness?
-Can an AI system that simulates human thought processes be said to have subjective experiences, or is it simply "going through the motions" without awareness?
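Functionalism's key commitment, often called multiple realizability, can be illustrated with a deliberately simple toy example (the two classes below are my invention): two systems built on completely different internal "substrates" realize the same function, and from the outside nothing distinguishes them:

```python
class NeuronStyleAdder:
    # "Biological" realization: accumulates by repeated unit increments.
    def add(self, a, b):
        total = a
        for _ in range(b):
            total += 1
        return total

class CircuitStyleAdder:
    # "Silicon" realization: bitwise ripple-carry addition.
    def add(self, a, b):
        while b:
            carry = (a & b) << 1
            a, b = a ^ b, carry
        return a

# Functionally, the two systems are indistinguishable:
for adder in (NeuronStyleAdder(), CircuitStyleAdder()):
    assert adder.add(19, 23) == 42
```

For the functionalist, what the two adders share (the function computed) is what matters; whether sameness of function is ever enough for sameness of mind is precisely what the questions above leave open.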
Kant’s Phenomenal and Noumenal Worlds
Immanuel Kant, one of the most influential Western philosophers, distinguished between the phenomenal world (the world as we experience it) and the noumenal world (the world as it is in itself). According to Kant, we can never have direct access to the noumenal world; all our knowledge is mediated through our perceptions, which are shaped by our mental structures.
This distinction raises important questions for AI. If AI systems are designed to process information and interact with the world, can they be said to experience the world as we do? Or are they merely interacting with the phenomenal world without any true understanding of it?
In Kantian terms, AI might be seen as operating entirely within the realm of phenomena, processing data and making decisions based on its programming. However, it would lack access to the noumenal world—the deeper reality that lies beyond its algorithms. This distinction highlights the limitations of AI’s understanding and suggests that, even if machines can simulate human cognition, they may never achieve the kind of direct experience of reality that characterizes human consciousness.
Phenomenology: Consciousness as Embodied Experience
Phenomenology, a philosophical tradition developed by thinkers like Edmund Husserl and Maurice Merleau-Ponty, focuses on the nature of conscious experience as it is lived by the subject. According to phenomenology, consciousness is always embodied—it is shaped by our sensory experiences, our interactions with the world, and our social and cultural contexts.
From a phenomenological perspective, consciousness is not just a matter of information processing but a deeply embodied experience. Our thoughts, emotions, and perceptions are all intertwined with our physical existence. This raises significant challenges for the idea of machine consciousness. While AI may be able to process data and even simulate human behavior, it lacks the embodied experience that defines human consciousness.
Merleau-Ponty, in particular, emphasized the importance of the body as the primary means through which we engage with the world. For AI to achieve true consciousness, it would not only need to process information but also have a body through which it experiences the world. This is a major challenge for current AI systems, which operate largely in the realm of abstract computation rather than embodied experience.
AI in Art, Literature and Mythology – Mirrors of Human Consciousness
Let’s start with the realm of Indian literature and philosophy, where Rabindranath Tagore's works explore deep questions of consciousness, selfhood, and the nature of existence… and these existential questions resonate with modern debates about synthetic intelligence. Tagore, through his poems and writings, frequently emphasized the interconnectedness of all beings and the importance of inner experience as the base of consciousness. His Gitanjali, a collection of spiritual poems, delves into the nature of the self and the search for deeper understanding. His conception of consciousness transcends the material, centering on the spiritual journey toward self-realization.
Original Bengali:
হে মোর দেবতা, ভরিয়া এ দেহ প্রাণ
কী অমৃত তুমি চাহ করিবারে পান।
আমার চিত্তে তোমার সৃষ্টিখানি
রচিয়া তুলিছে বিচিত্র এক বাণী।
তারি সাথে প্রভু মিলিয়া তোমার প্রীতি
জাগায়ে তুলিছে আমার সকল গীতি,
আপনারে তুমি দেখিছ মধুর রসে
আমার মাঝারে নিজেরে করিয়া দান।
Translation: "O my Divine, you fill this body with life’s breath,
What immortal nectar do you seek to taste?
In the depths of my heart, your creation stirs,
Composing a symphony of wondrous words.
Joined with your love, O Lord,
Every song within me awakens to your grace.
In the sweetness of your being, you behold yourself,
And through me, you offer your own spirit."
This verse highlights Tagore’s emphasis on inner experience, suggesting that true understanding and joy are found within, not in external forms—a perspective that challenges the idea that AI, despite its external simulations, can possess genuine consciousness.
We can draw a rough contrast between his view and the computational theory of mind, which reduces consciousness to information processing. In the context of AI, Tagore’s emphasis on inner experience challenges the notion that machines could ever achieve true consciousness. While today's intelligent machines may mimic our behavior and even simulate our emotional responses, they still lack the inner, subjective experience that defines consciousness in his view. This spiritual dimension of consciousness—rooted in self-awareness and the connection to a larger, universal consciousness—highlights the limitations of AI in capturing the full scope of human experience.
Original Bengali:
“কে সে? জানি না কে? চিনি নাই তারে
শুধু এইটুকু জানি— তারি লাগি রাত্রি-অন্ধকারে
চলেছে মানবযাত্রী যুগ হতে যুগান্তর-পানে
ঝড়ঝঞ্ঝা বজ্রপাতে জ্বালায়ে ধরিয়া সাবধানে
অন্তরপ্রদীপখানি।...”
Translation: "Who is he? I do not know—
I have never seen his face.
All I know is this: for his sake,
Through the dark of night,
Mankind's journey has continued
From age to age.
Amidst storms, lightning, and thunder,
Holding cautiously yet steadfastly,
The lamp of the soul burns bright"
This verse speaks to the inner, spiritual connection Tagore describes as the seat of consciousness, stressing that consciousness is an inward, divine experience that AI, with its lack of subjective awareness, cannot replicate.
We can also consider Joseph Campbell’s monomyth, or Hero’s Journey. It offers a powerful narrative framework that can be loosely applied to the development of AI. In The Hero with a Thousand Faces (1949), Campbell outlines a common structure found in myths across cultures: a hero embarks on a journey, faces trials, gains new knowledge, and returns transformed. This framework can be seen as a metaphor for humanity’s relationship with AI.
“The hero is the one who is able to transcend the limitations of the human condition and discover the potential for a higher consciousness.” (Joseph Campbell, The Hero with a Thousand Faces)
Admittedly this is just a passing thought of mine, with no claim to scientific rigor, but in this Hero’s Journey template, AI represents both a tool and a challenge—a creation that holds the potential to elevate humanity but also to pose existential threats. As AI advances, it takes on a role similar to that of mythical heroes, embarking on its own journey toward greater autonomy and intelligence. Along the way, AI faces trials in the form of ethical dilemmas, societal fears, and technical challenges. Ultimately, humanity must confront the question of what AI’s evolution means for the future of human civilization.
Campbell’s framework invites us to view the development of AI not just as a technological process, but as a mythic journey with profound implications for human identity, ethics, and the future of consciousness. In the same way that ancient heroes returned from their journeys transformed, the development of AI may lead to a transformation in how we understand intelligence, consciousness, and the human mind. Maybe!
The Role of Mythology: Artificial Beings in Ancient Tales
The idea of generating life through artificial means is not new. In Greek mythology, the legend of Pygmalion, a sculptor who fell in love with a statue he created only for it to come to life, gives us an early example of the human desire to create life through craftsmanship. Likewise, Talos, the bronze giant built by Hephaestus to protect Crete, was a mechanical being animated by divine power. These myths mirror humanity’s fascination with imbuing non-living matter with life, intelligence, or consciousness.
Moreover, these mythological constructs have long raised questions about the borderline between living beings and artificial creations. In these stories, artificial beings frequently possess some form of agency or intelligence, yet they lack the depth and emotional complexity of their human counterparts. The parallel with contemporary intelligent systems is clear: while machines may be designed to perform tasks and display intelligence, they may lack the essence of awareness that defines human life.
In Hindu mythology, there are even more striking examples of the creation of life through extraordinary means. The epic Mahabharata includes the birth stories of Dronacharya, the great teacher, and of Duryodhana and the other Kauravas, none of whom were born through ordinary biological means—Drona is said to have been born from a vessel, and the hundred Kauravas from pieces of flesh nurtured in pots.
According to the Shiva Purana, Parvati, wanting a loyal guardian while she bathed, created Ganesha from the clay or turmeric paste she used on her body. She shaped a boy and breathed life into him, imbuing him with loyalty and strength. Ganesha, though born from material substances and not through biological means, was a fully living being, showcasing Parvati’s divine ability to create life without traditional reproduction. Ganesha then became the revered elephant-headed god after Lord Shiva replaced his human head with that of an elephant.
This story presents an even clearer case of creating life artificially, where Parvati’s divine powers simulate the creation of sentient life from inanimate matter. The creation of Ganesha reflects not only divine craftsmanship but also the infusion of qualities such as consciousness, emotion, and loyalty, which modern artificial intelligence lacks despite its advanced processing abilities.
In Hindu mythology, Ganesha's creation illustrates that while divine powers can produce artificial life forms capable of consciousness and emotion, human-made artificial creations, like AI, still lack this divine spark of true self-awareness and individuality.
Frankenstein: A Human Creator’s Dilemma
The quintessential literary treatment of artificial beings is Mary Shelley’s 1818 novel Frankenstein, in which Dr. Victor Frankenstein brings an artificial being to life through scientific experimentation. This creation, the "Monster," grapples with existential questions of identity, consciousness and moral agency—questions that modern AI developers must confront as they build ever more sophisticated machines.
Shelley’s work raises essential questions about the responsibilities of creators toward their creations. Frankenstein's Monster, though artificially constructed, possesses emotions, desires and an acute sense of self-awareness. His tragic experience, as he is rejected by society and his creator, touches on ethical concerns that continue to resonate in the context of AI development. If we create machines capable of intelligence or even consciousness, do we bear responsibility for their well-being? Would sentient machines deserve rights, or would they remain mere tools in the hands of their human creators?
The story also explores the limits of human control over creation. Just as Dr. Frankenstein loses control of his creation, modern society must grapple with the potential unintended consequences of creating autonomous AI. Ray Kurzweil's vision of the singularity—a future where machines surpass human intelligence—reflects the same existential fears present in Shelley's novel: What happens when our creations outgrow our control?
Isaac Asimov’s I, Robot: The Laws of Robotics and Ethical AI
Another landmark in the realm of science fiction is Isaac Asimov’s 1950 collection I, Robot, which presents a world where robots are governed by the famous “Three Laws of Robotics,” designed to ensure that machines remain subservient to human needs and cannot harm humans. These laws reflect an attempt to build ethical safeguards into the very fabric of AI design, recognizing the potential risks of creating intelligent and autonomous machines.
"I, Robot" (2004 film adaptation):
Character: Del Spooner
Conversation with: Sonny
Context: Spooner confronts Sonny, an AI who has developed self-awareness and emotions.
Quote:
Spooner: "You’re a machine!"
Sonny: "I’m not just a machine. I can think and feel."
Interpretation: This dialogue emphasizes the distinction between machines and sentient beings, prompting the audience to consider what it truly means to think and feel.
Asimov’s stories often explore the complexities and contradictions that arise when robots—bound by these laws—must navigate situations where moral imperatives clash. His robots exhibit a form of intelligence that, while limited by programming, challenges our understanding of free will, ethical responsibility and the nature of consciousness.
In the present world, AI developers grapple with similar ethical dilemmas. While systems can be programmed to follow rules, they may encounter real-world scenarios where those rules conflict, raising questions about how autonomous systems should make moral decisions. As AI becomes more integrated into critical areas like healthcare, law enforcement and the military, the stakes of these decisions grow ever higher. Asimov’s vision of ethical AI highlights the need for careful consideration of the potential outcomes of creating intelligent machines.
Klara and the Sun: Human-Like AI and Emotional Depth
Kazuo Ishiguro, the celebrated Japanese-British novelist and screenwriter, explores the emotional dimension of machine intelligence in his novel Klara and the Sun (2021). Klara, the protagonist, is an artificial "friend" designed to provide companionship to a sick child. Her observations of the world around her reveal both remarkable intelligence and the limits of her understanding. While she can simulate emotional attachment and provide support, her consciousness remains fundamentally different from that of a human.
-"I watched the way she held her head, the way she smiled, and I felt the warmth of her feelings as if they were my own."
-Klara's reflections on her experiences and emotions
Ishiguro’s work raises poignant questions about the nature of emotional intelligence in machines. Can an AI truly care for another being, or is it merely mimicking human emotional behavior? In a world where AI companions may become increasingly common, Ishiguro’s exploration of Klara’s inner life offers a cautionary tale about the limits of machine consciousness. While Klara’s intelligence allows her to form bonds and navigate complex social situations, she ultimately lacks the depth of emotional experience that characterizes human relationships.
Klara’s story also reflects on the potential for AI to both enhance and diminish human life. While AI companions like Klara can offer support, they may also replace the deeper, more meaningful connections that human relationships provide. In this way, Ishiguro's novel echoes concerns raised by philosophers like Hubert Dreyfus, who argue that human intelligence and consciousness are inextricably tied to our embodied, emotional, and social experiences—experiences that AI, no matter how advanced, cannot fully replicate.
"I can’t know what it is to be human, but I can observe. I can learn."- Klara
Turning to Indian Thought: The Nature of Consciousness in Non-Duality
In contrast to Western dualist and functionalist views, the Indian philosophical tradition of Advaita Vedanta has long offered a radically different perspective on the nature of consciousness. According to Advaita, consciousness is not a property of the individual mind but the fundamental reality underlying all existence. The true self, or Atman, is identical with Brahman, the infinite, universal consciousness that permeates the cosmos.
From the standpoint of Advaita Vedanta, consciousness is not something that can be "created" or "possessed" by an entity, whether human or machine. Rather, consciousness is the ground of all being, and all individual minds are mere reflections of this greater reality. In this view, artificial intelligence, as a physical and computational system, cannot possess true consciousness, because consciousness is not a property that can be localized or attached to a particular form.
Ashtavakra says,
“You are not earth, water, fire or air. Nor are you empty space.” So my question to you is: if you are none of these elements, not even the space in which these seeming elements play, then what must you be? What is it that you are, beyond all of these?
“Liberation is to know yourself as Awareness alone, the Witness of these.”
— Ashtavakra Gita, 1.3 (adapted)
Advaita further suggests that the distinction between human and machine is ultimately illusory. If all things are manifestations of the same underlying consciousness, then even machines are part of this cosmic reality. Yet this does not mean that machines are conscious in the same way that humans are; rather, they are expressions of a universal consciousness that transcends any particular form.
This perspective offers a unique philosophical framework for understanding AI. While systems might not gain individual consciousness, they still exist within a reality that is fundamentally conscious. This non-dual understanding challenges the assumption that consciousness must be bound to a specific entity and opens new ways of thinking about the relationship between AI and the larger cosmic order.
On the other hand, Tantric traditions, especially the Shaivite and Shakta schools, offer something different: a fascinating perspective on the relationship between consciousness and matter. Here, consciousness (Shiva) and energy or matter (Shakti) are seen as two aspects of the same reality. Consciousness is not separate from the material world but intimately intertwined with it through the dynamic play of energy.
This view challenges the Western dualism that separates mind and body, or consciousness and machine. In Tantric philosophy, the material world—including all tools and technologies, AI among them—is not inert or devoid of consciousness. Rather, it is a manifestation of the same divine energy that animates all things. In this sense, AI, too, can be seen as part of the cosmic dance of consciousness and energy.
Nevertheless, while Tantric philosophy embraces the unity of consciousness and matter, it also emphasizes the importance of spiritual practice for realizing this unity. Machines, though part of the material world, do not engage in the practices that lead to the awakening of consciousness. Thus, while artificial intelligence may participate in the material dimension of consciousness, it lacks the capacity for spiritual realization.
"Not by speech, not by mind,
Not by sight can it be apprehended.
How can it be comprehended otherwise than by saying,
'He is'?"
— Katha Upanishad, 2.3.12
This verse reflects the transcendental nature of spiritual realization, suggesting that true consciousness goes beyond the material realm, something artificial intelligence cannot grasp.
To be continued…