
Modern Times: Staying human in the race for artificial intelligence

As global competition for leadership in artificial intelligence emerges, Andrew Hunter asks what we can learn from the natural world, and whether this technology could be used for the greater good.

When Vladimir Putin suggested last year that “whoever becomes the leader in this sphere will become the ruler of the world”, he made abundantly clear how seriously we should take this new field of battle, one that has taken on the characteristics of an arms race.

Putin claimed that AI was the future, not just for Russia, but for all humankind. It is disarming to consider the accelerated progress of artificial intelligence when humankind has yet to come close to understanding natural intelligence, despite millennia of lived proximate experience. Schools of fish, swarms of bees and flocks of birds act with such coordination that they appear to have a collective mind.

If understood, the capacity of these species for groups of individuals to act in perfect, harmonious coordination may assist the evolution of human societies increasingly uninterested in collective action. The opportunity to learn from nature has existed since the beginning of time, yet scientists are only just starting to understand how individuals can act with such collective purpose. These lessons are important, but our attention is elsewhere.

Putin asserted that AI offers both “colossal opportunities” and dangers. The fear — that rapid technological progress has the capacity to irrevocably alter our way of life — is not new. If it is true that humankind has adapted and persevered, it is also true that not all technological advancement has been for the betterment of humanity. The speed and scale of progress in the field of artificial intelligence will change our lives beyond imagination. It may even render human beings obsolete.

Power tends to be a zero-sum game. Putin’s interest in leading the AI arms race is directly related to the power he believes it will deliver. This informs the way in which advances in AI will be deployed. Sometimes, the greatest threat exists within: a country without internal stability will struggle to assert power externally.

Those who dismiss Japan due to its relative size, for example, ignore its incredible resilience and social harmony. Those who assume China’s dominance overlook its history of fragmentation and restoration. “Long united must divide, long divided must unite,” according to the classic novel Romance of the Three Kingdoms. The future pays little attention to such linear logic. AI will help define a future that will not be defined by AI alone.

How we use AI is every bit as important as our mastery of it. Can we make the collective decision to avoid crossing a line beyond which AI will pose an irrevocable risk to humanity? Can we consider how AI could improve the lives of the masses, rather than treating it as an instrument to be deployed in pursuit of world domination? Human nature is nothing if not predictable, and it would take a substantial departure from historic norms for us not to invite disaster and welcome it with open arms.

There remains a compelling case to persist with an ancient area of learning in our modern education: philosophy. Why we do something is more important than how we do it. It is unfortunate that philosophy is increasingly considered obsolete, and that we still lack an understanding of natural intelligence despite our historic opportunity to observe and learn from nature. Advances in technology without a parallel focus on why and how to deploy these new understandings and capabilities are dangerous.

Most would argue that humanity has always coped with and adjusted to technological advances. This will remain true until it is no longer true.

@andrewhunter__
