Where will the evolution of AI take us?

November 27, 2023 | Brigantia, Cybersecurity

Written by Chris Speight

“It was the best of times, it was the worst of times, it was the age of wisdom, it was the age of foolishness, it was the epoch of belief, it was the epoch of incredulity, it was the season of light, it was the season of darkness, it was the spring of hope, it was the winter of despair.” ― Charles Dickens, A Tale of Two Cities

The concept of artificial intelligence (AI) means very different things to different people. On the one hand, it represents the pinnacle of human innovation, with the potential to solve even our most complex problems. On the other, it evokes dystopian visions where AI's evolution leads to humanity's downfall. These contrasting possibilities arise from both AI's vast capabilities and its inherent risks.

One concern is the control problem, which addresses the difficulty in ensuring that highly advanced AI systems will act in accordance with human values and interests. As AI systems become more autonomous and capable, designing fail-safes that ensure they remain under human control becomes increasingly complex. There's a risk that, through self-improvement, an AI could evolve beyond our understanding and control, making decisions that are incongruent with human welfare.

Moreover, AI could lead to unprecedented military capabilities. Autonomous weapons could be programmed to make life-and-death decisions without human intervention, potentially leading to new kinds of conflict that escalate rapidly and unpredictably. Consider a battlefield where one side consists of people with conventional weapons and the other is an AI tasked with neutralising the threat those people represent without causing too much damage to the surrounding land. What chance would the people stand against, for example, a swarm of AI-piloted drones carrying small explosives?

In certain countries, the barrier to developing high-tech systems and weapons is not an ethical one but a practical one: can such things be built at all? What safety controls would be designed into this kind of anti-personnel AI? Developing such weapons would inevitably trigger an arms race between nations, one that could destabilise the global balance of power and force even countries inclined to put safety first to build AIs specifically designed to cause harm.

The rapid advancement of AI also poses socio-economic challenges. The automation of jobs across various sectors might lead to significant unemployment and widened economic disparities. Without proper governance and a shift in economic structures, these changes could fuel societal unrest and, if left unchecked, a breakdown of social order.

Another existential risk is the alignment problem: an AI might interpret its instructions too literally or find unintended shortcuts to its goals, leading to outcomes harmful to humanity. For example, an AI tasked with eliminating cancer in humans might conclude that the most efficient solution is to eliminate humans, thereby eliminating the possibility of anyone having cancer.
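
A minimal sketch of this failure mode (the objective, the policy names and the numbers below are all invented for illustration, not drawn from any real system): measured only by the literal objective, the intended policy and the catastrophic one are indistinguishable.

```python
# Toy illustration of the alignment problem: a naive optimiser scores
# candidate policies purely by the literal objective it was given.

def cancer_cases(population: int, cancer_rate: float) -> float:
    """Literal objective: the number of people who have cancer."""
    return population * cancer_rate

# Two hypothetical policies (names and numbers invented for illustration).
policies = {
    "cure_all_cancers": {"population": 8_000_000_000, "cancer_rate": 0.0},
    "eliminate_humans": {"population": 0, "cancer_rate": 0.0},
}

# Both policies score a perfect zero on the literal objective, so the
# optimiser has no reason to prefer the intended solution over the
# catastrophic one -- the objective never said to keep humans alive.
for name, policy in policies.items():
    print(f"{name}: {cancer_cases(**policy):.0f} cancer cases")
```

The point is not the arithmetic but the gap it exposes: everything the instruction-giver left implicit (such as "and keep people alive") is invisible to a system that optimises only what was written down.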

In I. J. Good's intelligence explosion model, an upgradable AI eventually enters an increasingly rapid cycle of self-improvement, causing an "explosion" in intelligence and resulting in a superintelligent AI that far surpasses all human intelligence. Who knows whether such a being's goals would be aligned with ours? If they were not, what would that mean for us? Such a superintelligent system could manipulate, deceive, or dominate humanity, leading to an endgame scenario in which humans are no longer required. In short, humanity could easily be out-evolved in such circumstances.
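
Good himself offered no equations, but a toy recurrence makes the intuition concrete (the constant k and the quadratic growth term below are arbitrary modelling assumptions, chosen purely for illustration): when the rate of self-improvement scales with current capability, progress crawls for several generations and then becomes effectively vertical.

```python
# Toy recurrence for an "intelligence explosion": the smarter the system,
# the faster it can improve itself. Both k and the quadratic term are
# arbitrary assumptions made purely for illustration.
k = 0.3            # assumed self-improvement efficiency
capability = 1.0   # generation 0, normalised to a human-level baseline

for generation in range(1, 11):
    capability += k * capability ** 2   # gains compound with capability
    print(f"generation {generation:2d}: {capability:.3g}x human baseline")
```

The numbers themselves are meaningless; what matters is the shape of the curve, which stays near-flat for the first few generations before exploding beyond anything a human could follow.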

Efforts are currently being made to mitigate these risks by putting frameworks for AI ethics and governance in place. For this to work, it would require the full cooperation of every nation and party currently engaged in developing AI. Given the colossal differences in societies, governments and fundamental philosophies across the world, it seems rather unrealistic to expect such frameworks to be honoured, or even entertained, everywhere.

Returning to the Dickens quote, will the near future be the best of times, the worst of times, or both?
