How afraid should we be of Artificial Intelligence?

Sid MOHASSEB

--

THIS ARTICLE WAS ORIGINALLY PUBLISHED ON ABC | MARCH 4, 2021

We must not allow ethical concerns about Artificial Intelligence (AI) to distract us from the moral obligation we have to use AI to improve the lives of humanity. Rather, ethical arguments should inform the type of AI we want to see more of and the path our collective evolution should take.

AI is already saving lives — from flood prediction and cancer detection to the development of new vaccines, it is a humanitarian lifeline. Age-old concerns around new technology and our addiction to the comfort of sameness cannot be allowed to impede progress.

The critical contribution of Artificial Intelligence has never been clearer than during the pandemic: AI has been used to give individuals access to official government guidelines on COVID-19, the start-up Mantle Labs offered its cutting-edge crop-monitoring tool to retailers to help boost supply chain resilience, and machine learning systems were instrumental in identifying the outbreak of an infectious disease in Wuhan. Now, start-ups like Benevolent AI are helping to accelerate the discovery of new drugs to fight the disease.

Before the pandemic, AI and machine learning were being used to predict natural disasters, detect cancers, help monitor human rights violations, and find accommodation for displaced refugees. But objections to AI persist and fuel public misunderstanding. This distrust can damage the efforts of the scientists who are pioneering this ground-breaking technology and ultimately harm us all.

Resistance to transformational technology is not new. The term “Luddite”, referring to someone with a distrust of emergent technology, takes its origin from the early nineteenth-century weavers, artisans, and textile workers who opposed the use of mechanized looms. Yet the mechanization of the textile industry went on to bring millions of people into the middle classes. More recently, we have seen the same Luddism directed at 5G, even though this is the technology that will enable remote working and a global increase in economic opportunity. With every new technology comes old resistance.

For example, when the “horseless carriage” (that is, the car) was first invented, such concerns prompted an opinion piece entitled “The horseless carriage means trouble”, in which the author argued: “The speed of which they are capable intoxicates and bewilders the senses, and deadens them to the dangers which surround the machine, and by a sudden mishap may turn in the twinkling of an eye into a terrible engine of destruction.” Cars can kill people. But this doesn’t mean we should ban them.

But skepticism towards AI is common even among its proponents. Elon Musk (who, as the Henry Ford of this century, heads a company that builds artificially intelligent self-driving cars) has described AI as “more dangerous than nukes”. Stephen Hawking told the world, through an artificially intelligent synthesizer, that AI could “spell the end of the human race”.

Intelligent machines are increasingly being designed to self-improve. Instead of being programmed from the “top-down”, so that the original design is the finished article, AI programs are being designed from the “bottom-up”, meaning they teach themselves. The fear is that, over time, this could trigger an “intelligence explosion”, producing a kind of machine master. Alan Turing referred to a child machine that gradually develops itself, now called a “seed AI”. Such an AI would be able to accumulate knowledge and gradually improve its own information architecture. Each upgraded system would then be able to make more significant improvements to itself, creating an exponential rise in intelligence and, with it, in the capacity for further self-improvement.

Allowing a machine to manage its own development could mean that an unrecognizably intelligent machine emerges in a matter of weeks or months. Such a machine would be capable of a level of processing that a mere human mind could not comprehend, and such a level of intelligence could have unpredictable or even dangerous effects. Because humans possess more intelligence than animals, we have subjugated them: after years of domestication, they became our food, laborers, and companions. Since a sophisticated AI would, in theory, have more intelligence than humans, the argument follows that we may be subjugated in the same way.

But this argument confuses intelligence with motivation. Motivation does not necessarily follow from improved processing power. Even if we did create super-intelligent robots, there is nothing to say that such a robot would want to enslave its owners, and it would not be able to if its motivations were “hardcoded” into it.

The intelligence and reasoning power of Homo sapiens is bundled up with the more primitive motivations of charming mates, disciplining children, and challenging rivals. It is a mistake to conflate the behavior driven by the limbic circuitry of one particular primate with the general nature of intelligence; that circuitry is only one feature in our arsenal of mental tools. Contextual understanding is not an inevitable outcome of increased processing power. Understanding comes from formulating explanations and testing them against reality; running an algorithm faster does not necessarily lead to this.

AI machines learn from the past, but life and humanity’s evolutionary path are not simply an extension of history. Creativity is the heart of our evolution. While AI machines might set out to discover and create candles that last longer or burn brighter, humans conceived of electricity. Where machines might explore ways to feed and exercise horses so they can run faster or go further, humans created the car. Our collective motivation, though at times derailed by greed and fear, has always been evolution: a path long focused on our physical nature that is now entering an evolution of mindset and creativity.

Of course, an AI without motivation is useless. IBM’s Deep Blue was programmed to play chess. AI used in hospitals is programmed to identify early-stage cancers. The question is, how would a human program such a super-intelligent AI? This is what is known as the “value alignment problem”. The fear is that if we give an AI the goal of solving a problem and then stand by and observe it do so, the unforeseen consequences could be catastrophic. If, for example, we programmed a super-intelligent AI machine to maintain water levels in a dam, it might flood a town and kill unsuspecting civilians. If we programmed an AI to protect human life, it might systematically destroy all potential predators, leading to the collapse of the ecosystem. While these examples are somewhat fantastical, they point to a deeper issue: how humans would manage such an unwieldy weapon.

But the question that haunts us is whether humans could be simultaneously intelligent enough to build a super-intelligent AI, yet moronic enough to let such a system operate without first testing it in controlled environments and implementing safeguards, like brakes on a horseless carriage. Equally, would we really create an AI machine that is intelligent enough to manipulate the world around it, yet stupid enough to fall into such significant traps of misunderstanding?

Humans and all their creations are never flawless; it is impossible that we would build a machine that was at once omnipotent, malevolent, and tamper-proof. And, in any case, we can always design an “off” button. Not all innovations are perfect, but all innovations lead to more innovations.

As well as the apocalyptic, sci-fi-fueled objections, AI faces a more immediate and tangible concern: that it will, through automation, destroy jobs. The loom displaced artisans, but it created factories, managers, repairers, salesmen, and designers. Cars made horse-carriage drivers, and those who cleaned horse dung from the streets, redundant, but they created taxi drivers, mechanics, and car designers. AI will replace certain jobs, but it will also create them. Just as clothes manufacturers would gawp today if they were asked to stitch each new garment themselves, in the future we will look back in horror that so many people spent their waking hours manually calculating taxes and driving lorries.

Ultimately, ethical objections to AI are a luxury that few can afford. Villagers in Bangladesh will not ignore AI-generated flood warnings because of fears of an “intelligence explosion”. Coffee farmers in Uganda will not ignore AI-generated crop-rotation insights because they are unsure of AI’s true motivation. We have an ethical obligation to them, and to our children and grandchildren, to embrace any technology that can improve their lives. We have a moral obligation to humanity as a whole to explore the next stage of our evolution: an evolution of mind that provides the freedom for unbounded innovation.

--

Sid MOHASSEB

Sid Mohasseb is an Author, Venture Investor, Innovation Leader, Serial Entrepreneur, University Professor, Adviser, Board Member & Business Thought Provoker.