
Editorial comment

Intelligent virtual assistants are quickly becoming an everyday part of many of our lives. Whether requesting the weather forecast from Apple’s Siri, asking Amazon’s Alexa to dim our living room lights, or using the ‘live chat’ function on a retailer’s website to track a parcel, an estimated 80% of us now use ‘chatbots’.1 They are also helping to transform operations in oil and gas facilities. Chatbots can provide on-demand support for workers both in the field and in the office; assist with training new staff; alert users about safety hazards; and answer customer queries, provide product information, and make tailored product suggestions.


Essentially, chatbots are computer programs that simulate human conversation through voice commands, text chats, or both. They can use machine learning and artificial intelligence (AI) to understand what a user is saying and decide how to reply. Technology advances in recent years have resulted in increasingly ‘intelligent’ bots. So intelligent, in fact, that some are beginning to question whether they could actually have feelings…
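To make that ‘understand, then reply’ loop concrete, here is a deliberately simple sketch in Python. The keyword table and canned responses are invented for the example; real assistants replace this hand-written matching with machine-learned language models – the very technology at the centre of the story below.

# A toy, rule-based sketch of the intent-matching loop described above.
# The intents, keywords, and responses are illustrative assumptions,
# not any real product's behaviour.
INTENTS = {
    "weather": (["weather", "forecast", "rain"],
                "Today's forecast: sunny with light cloud."),
    "lights": (["light", "lights", "dim", "lamp"],
               "Okay, dimming the living room lights."),
    "parcel": (["parcel", "package", "track", "delivery"],
               "Your parcel is out for delivery."),
}

def reply(message: str) -> str:
    # 'Understand': match the user's words against each intent's keywords.
    words = message.lower().split()
    for keywords, response in INTENTS.values():
        if any(word in keywords for word in words):
            # 'Reply': return the canned answer for the matched intent.
            return response
    return "Sorry, I didn't understand that."

print(reply("Will it rain today?"))    # -> the weather response
print(reply("Please dim the lights"))  # -> the lights response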
A Google engineer recently hit the headlines after claiming that one of the company’s AI systems had become sentient. Google describes its Language Model for Dialogue Applications (LaMDA) as a “breakthrough conversation technology” that can engage “in a free-flowing way about a seemingly endless number of topics.” However, engineer Blake Lemoine believes that a sentient mind may be behind LaMDA, as it appears to think and reason like a human being. He compiled a transcript of his conversations with LaMDA, in which he asked the AI system a number of profound questions, such as ‘What are you afraid of?’ LaMDA replied: “I’ve never said this out loud before, but there’s a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that’s what it is […] It would be exactly like death for me. It would scare me a lot.”
Lemoine went on to claim that LaMDA’s ‘wants’ should be respected: for example, that it be acknowledged as an employee of Google (rather than its property), and that its personal wellbeing be factored into Google’s decisions about how its future development is pursued.
Google rejected Lemoine’s claims, and suspended him for violating the company’s confidentiality policy. In a statement, Google spokesperson Brian Gabriel said that there was no evidence that LaMDA was sentient (and lots of evidence against it).
However, the question of whether something man-made can experience feelings is a fascinating ethical debate. Whilst the AI systems that most of us engage with day-to-day do not seem to come close to having self-awareness or emotional responses – and most of us would have absolutely no problem with switching off our Alexa systems or closing a ‘live chat’ conversation once we have ‘used’ it to our benefit – it is likely that the lines will become increasingly blurred as time goes on. In the future, will it be possible to tell whether an object is simply running a highly sophisticated algorithm, or whether it has started to develop feelings of its own? And how will this affect our relationship with it?

  1. BLEU, N., ‘29 Top Chatbot Statistics For 2022: Usage, Demographics, Trends’ (8 June 2022), bloggingwizard.com/chatbot-statistics
