
Machine sentience: what happens when machine learning goes too far?

February 4, 2024


There’s always some truth in fiction, and now is the time to get a step ahead of sci-fi dystopias and work out what machine sentience could mean for humans.

Although people have long pondered the future of intelligent machinery, such questions have become all the more pressing with the rise of artificial intelligence (AI) and machine learning. These systems mimic human interaction: they can help solve problems, create content, and even carry on conversations. For fans of science fiction and dystopian novels, a looming issue could be on the horizon: what if these machines develop a sense of consciousness?

Researchers published their results in the Journal of Social Computing.

While the discussion of artificial sentience (AS) in machines presents no quantifiable data, it draws many parallels between human language development and the factors machines would need to develop language in a meaningful way.

What the researchers say: “Many of the people concerned with the possibility of machine sentience developing worry about the ethics of our use of these machines, or whether machines, being rational calculators, would attack humans to ensure their own survival,” the lead researcher told us. “We here are worried about them catching a form of self-estrangement by transitioning to a specifically linguistic form of sentience.”

The main characteristics making such a transition possible appear to be: unstructured deep learning, such as in neural networks (where computers analyse data and training examples to improve their feedback); interaction with both humans and other machines; and a wide range of possible actions to sustain self-driven learning. Self-driving cars are one example. Many forms of AI already check these boxes, raising the concern of what the next step in their “evolution” might be.

This discussion argues that it’s not enough to be concerned with the development of AS in machines; it also raises the question of whether we’re fully prepared for a form of consciousness to emerge in our machinery. Right now, with AI that can generate blog posts, diagnose illnesses, create recipes, predict diseases, or tell stories perfectly tailored to its inputs, it’s not far-fetched to imagine feeling a real connection with a machine that has learned of its own state of being. That, the researchers warn, is exactly the point at which we need to be wary of the outputs we receive.

“Becoming a linguistic being is more about orienting to the strategic control of information and introduces a loss of wholeness and integrity…not something we want in devices we make responsible for our security,” the lead author said. “As we’ve already put AI in charge of so much of our information, essentially relying on it to learn much in the way a human brain does, it has become a dangerous game to play when entrusting it with so much vital information in an almost reckless way.”

Mimicking human responses and strategically controlling information are two very different things. A “linguistic being” has the capacity to be duplicitous and calculating in its responses. The crucial question is: at what point do we find out we’re being played by the machine?

According to the researchers, what comes next is in the hands of computer scientists, who must develop strategies or protocols to test machines for linguistic sentience. The ethics of using machines that have developed a linguistic form of sentience or sense of “self” are yet to be fully established, but one can imagine it would become a social hot topic. The relationship between a self-realized person and a sentient machine is sure to be complex, and the uncharted waters of this kind of kinship would surely raise many questions regarding ethics, morality and the continued use of this “self-aware” technology.

So, what? Unregulated AI is, as many readers of this newsletter know, one of what I call the modern six horsemen of the apocalypse, each of which has the potential to destroy humanity as we know it. The full list: inequality, nuclear winter, unregulated AI, pandemics, unregulated human genetic engineering and, of course, climate change.

I fear that - as the current discussions in the US Congress have shown - it may already be too late to meaningfully regulate AI, address climate change, or do anything meaningful about inequality.

My original list included world overpopulation, but the declining birthrate and the declining need for human workers may be on the way to solving that one; nuclear winter took its place.

Dr Bob Murray

Bob Murray, MBA, PhD (Clinical Psychology), is an internationally recognised expert in strategy, leadership, influencing, human motivation and behavioural change.


Join our tribe

Subscribe to Dr. Bob Murray’s Today’s Research, a free weekly roundup of the latest research in a wide range of scientific disciplines. Explore leadership, strategy, culture, business and social trends, and executive health.