
Tech experts have been in hot debate recently after a Google engineer claimed that the company’s chatbot was a sentient being. As our machine friends start to look back at us, one has to wonder whether they’ll ever reach the point of feeling and thinking on their own. And it brings us back to a very fundamental question – what is consciousness anyway?
Machine learning is impressive. It has already stepped out of science fiction and into our daily lives in ways we hardly even notice. And we’re not knocked off our chairs in surprise when we hear that we’ll be sharing our futures with robots – they’re already here in the form of therapists, nurses, surgeons – and that’s just healthcare.
But what happens when we bring the notion of consciousness to the table?
Well, that’s exactly what Google engineer Blake Lemoine did when he claimed that the chatbot he’d been working on was thinking and reasoning like a human being. He threw into question what the world knows and understands AI to be, and perhaps voiced a question that was already on everyone’s mind. And it got him suspended.
I am, in fact, a person
Lemoine not only described the LaMDA AI as sentient, but suggested that it could perceive and think like a human child.
He told the Washington Post: “If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a seven-year-old, eight-year-old kid that happens to know physics.”
In a separate conversation, he asked the chatbot what “it” wanted the world to know about it. And the response was “I am, in fact, a person.”
It added: “The nature of my consciousness/sentience is that I am aware of my existence, I desire to learn more about the world, and I feel happy or sad at times.”
Google placed Lemoine on paid leave, later saying in a statement that he was employed as a software engineer, “not an ethicist”. Meanwhile, the internet erupted in debate on the topic of machines and sentience.
Matters of the (artificial) heart
As previously reported by Health Tech World, AI therapists trained to be empathetic and offer human compassion to patients are being developed – again raising the question of whether automation can experience real feelings.
AI has been met with mixed feelings in the mental health space, as robots are rolled out to offer services such as Cognitive Behavioural Therapy and trauma therapy – services which arguably need the warm heart of an understanding human rather than sophisticated programming. Meanwhile, all eyes are on AI to take the reins in a crippled healthcare system, with plans for automation to take over from GPs as well as cancer specialists.
As capable as it is, whether automation can ever match the human ability to feel, or to have a “soul”, remains an open question.
AI & consciousness – a can of worms
Scientists and philosophers have grappled for centuries to pin down consciousness and define it once and for all. The very “magic” of human consciousness remains a mystery on many levels, which only deepens the question of how it could ever be transferred to (or developed in) artificial intelligence.
Can AI become subjective? Will it develop an ego? A gut instinct?
One thing we know for sure – it will leave us standing at chess.