Issues Magazine

Technology’s Slippery Slope

By James Newton-Thomas

The problems we face from technology and AI are not a point on the horizon – they are starting now – and the issue is not the technologies themselves but the inability of our current systems and ideologies to adapt to them ethically.

Many years ago I took an opportunity to climb Uluru. Climbing the rock is now discouraged by the local indigenous people, partly on cultural grounds but also because it is genuinely dangerous. The rock is an undulating series of domes: very smooth, very hard, red, with precipitous sides. Being dome-like, the surface has no sudden edge; the gradient slowly increases until falling – some tourists have toppled to their deaths – becomes inevitable.

Today’s technological landscape can also be viewed as a series of undulations. They encourage us in certain directions, and our systems of government, cultural values, laws, and often our ignorance give us momentum. There may also be dangers in going too far.

Although technology has no moral valency, it does have increasing potency. This power is not just destructive, as witnessed on 9/11 when a relatively small group of ideologically minded individuals employed technology to attack the most powerful country on Earth; it is also sociologically transformative, as with Facebook and even simply television.

Technological innovation is another force with well-recognised momentum, and by consensus it is a positive thing. Will this always be the case? We do not know. As the philosopher Sir Karl Popper famously observed, “we cannot know today what we will discover tomorrow”.

One thing we do know however, is that the introduction of anything – whether it be an alien species to an ecosystem or a technological innovation to an existing marketplace – will cause some change. The greater the change, the greater the disruption and angst experienced by some, and occasionally all, of the population.

In futurist circles there is much talk about the singularity, a hypothesised point in the future at which technology runs away from us and leads us to some kind of utopia, dystopia or oblivion. Whilst the changes beyond this point may be profound and unknowable, the changes happening now – the ones that may be leading to a singularity – are very real and very knowable, if we choose to allocate time to observe them rather than simply being swept along with the flow.

Unfortunately specialisation, which has made us so successful as a species, means we now have fewer and fewer people looking at the bigger picture. Whilst the drivers of technological change – capitalism and curiosity – remain the same, the potency of the changes being wrought is vastly greater.

Tools and the Human Machine

From an engineering perspective, a human being is a remarkable machine: incredibly versatile, self-learning, self-reproducing and self-repairing. But to know what we can do is to know what we cannot. We are not omnipotent; we suffer the cold, we cannot fly, we break if we try. We recognise that even our strongest characteristics, such as intelligence, have limits.

It is the remarkable success we have had in overcoming these biological limitations that has become one of the defining characteristics of our species. That success, however, is the cause of the changes now upon us. We have become so good at compensating for these limitations that very soon we will have nothing to do.

Traditionally, compensation has taken the form of augmentation: we use tools, we work in teams (which allows specialisation), we wear protective clothing. Humans are one of a small number of tool-using species, but we are by far the most successful. If we were not born with it, we made it.

Augmentation through tool use is what has allowed us to work in hostile environments chasing the things we want, whether that be mammoths on the tundra or minerals from underground ore deposits. Most of our physical capacities have now been completely subsumed by our tools. We can now travel on land, swim, burrow and fly better than any other animal on the planet, and whilst some of us could still run down and eventually club a buffalo to death, we don’t; we drive and shoot it or, more often, drive to McDonald’s and just eat it.

Because of our excellent spatial modelling skills and dexterity, we can comfortably drive cars and other vehicles at many times the speed our brains evolved to handle. Nevertheless, we encounter problems because, like any high-performance machine, things start to go wrong when ours is pushed outside its “design” envelope. To compensate for driving cars beyond our reaction capability, we install air bags and anti-lock braking systems and enforce speed limits.

Arguably the most distinctive feature of humans is our general intelligence; other animals are smarter than us in their own domains, but for general intelligence we are unmatched. This characteristic has set us apart from all other animals and enabled the teamwork and augmentation through tools that has made us so successful.

Our brains allow us to model and communicate with each other, and to understand the environment around us to an unparalleled degree. Our anatomy and physiology, combined with our brains, constitute a supreme alpha predator. We have not only good dexterity but also the intelligence to apply it to a broad range of tasks. Almost counter-intuitively, because intelligence is the strongest of our natural abilities, it is one of the last to be subsumed by our tools; but this too is now changing.

The Power of AI

The imminence of significant change is not always obvious; the first thing to recognise is that small changes can make a big difference as invisible thresholds are crossed. The frog on the staircase analogy highlights this well. A frog that can only jump 11 cm, sitting on stairs with 12 cm risers, is always stuck on the bottom step. A frog that is only slightly stronger can climb the whole staircase.
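
To make the threshold effect concrete, here is a minimal sketch in Python. The numbers come from the analogy above; the function and its names are purely illustrative:

    # A minimal sketch of the frog-on-the-staircase threshold.
    # The 12 cm riser comes from the analogy; the rest is illustrative.
    def steps_climbed(jump_cm, riser_cm=12.0, total_steps=100):
        # A frog that cannot clear one riser never leaves the bottom step;
        # a frog that can clear it climbs them all.
        return total_steps if jump_cm >= riser_cm else 0

    for jump in (11.0, 11.9, 12.0):
        print(f"jump {jump} cm -> climbs {steps_climbed(jump)} steps")
    # jump 11.0 cm -> climbs 0 steps
    # jump 11.9 cm -> climbs 0 steps
    # jump 12.0 cm -> climbs 100 steps

A 1 cm improvement changes nothing until the riser height is reached, at which point it changes everything: the capability curve is a step, not a slope.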

In the past 12 months the current leading supercomputer, Fujitsu’s K computer in Japan, achieved 8.162 petaflops, on track for its operational goal this year of 10 petaflops. A petaflop is a measure of computational performance representing a thousand trillion floating-point operations per second. Low-end estimates of the computational capacity of the human brain put it at about the same figure. The upshot is that computer hardware is already “capable” of being intellectually one of us even though the software is lagging. In fact, if you consider that most of what our brain does is housekeeping, such as running our digestive and endocrine systems, then the raw computing power directed at outside problems has already tilted in favour of the computers. We can see this because they clearly outperform us (augment us) in their assigned roles, which include weather forecasting, protein simulations, share trading and materials analysis. Like all tools, the supercomputers are tasked to augment our weaknesses.
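
The comparison is easy to check with back-of-envelope arithmetic. A minimal Python sketch using the figures above; taking the low-end brain estimate as 10 petaflops is an assumption made here for illustration:

    # Back-of-envelope comparison using the figures quoted above.
    PETAFLOP = 1e15                    # a thousand trillion operations per second
    k_computer = 8.162 * PETAFLOP      # achieved performance
    k_goal = 10.0 * PETAFLOP           # operational goal for this year
    brain_low_end = 10.0 * PETAFLOP    # low-end human brain estimate (assumed)

    print(f"K computer vs brain estimate: {k_computer / brain_low_end:.0%}")
    # -> K computer vs brain estimate: 82%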

There is no point in racing straight to make computers “one of us” when we already have “us”. There is, however, a point in making them general so that they can do multiple tasks, as we can. It would also be nice to speak to them without having to write messy code.

Moreover, it would be nice to have them learn by themselves rather than having to tell them how to do things. This is particularly useful because many things we do and take for granted are actually very difficult to articulate. We know what we do, but not how we do it. The “how” is handled by the background-processing part of our brain, and these functions are difficult to observe and often to interpret.

These problems generally come under the domain of artificial intelligence, or AI, and it is this area that has enjoyed remarkable success of late. The work is also being buoyed by the massive background developments in the computers that AI systems run on – Moore’s law (see page 11) looks like it has another 10 years left (although this prediction has been made every year for the last 30 years), so we can expect computing power to continue doubling every year or two. These observations are, of course, simplistic. There is about a 10-year lag between supercomputer performance and desktops. There is also some truth in the quip “what Intel giveth Microsoft taketh away”, a phenomenon known in the industry as “bloatware” (see also Wirth’s law). Nevertheless, the emergence of better AI algorithms, coupled with improvements in sensors, control systems and communications, all underpinned by ever more powerful hardware and software, is inexorably taking us across a threshold. Our frog can now climb the staircase.
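
Either doubling period compounds dramatically over a decade. A small Python sketch of the arithmetic; the two-year period is the more commonly cited reading of Moore’s law, the one-year period the more aggressive simplification:

    # How a doubling assumption compounds over the "10 years left".
    def relative_power(years, doubling_period):
        # Performance multiple after `years`, doubling every `doubling_period` years.
        return 2 ** (years / doubling_period)

    print(f"doubling yearly for 10 years:        {relative_power(10, 1):,.0f}x")  # 1,024x
    print(f"doubling every 2 years for 10 years: {relative_power(10, 2):,.0f}x")  # 32x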

These disparate technologies all support and feed on each other. Better computers allow the development of more powerful AI, and more powerful AI can be used to assist chip designers as they design better computers and better fabrication systems to make computers.

The requirement for better performance, particularly in hostile environments such as warfare or underground mining, or even in intellectually hostile environments such as share trading, has recently driven a strong economic push towards automation. Modern communications mean it is a lot cheaper to move sensor data from a hostile remote location to a human eye than it is to move and protect a human eye in a hostile environment.

In the US military, more pilots are now training to fly unmanned drones than conventional aircraft. An even more telling statistic is that more drones are being built than there are pilots to fly them. This is because the remote systems are becoming autonomous and control is being distributed from the human operators to the machines themselves. One operator can supervise multiple drones.

The net result of all this development in automation is that, in the near future, machines will have the capacity to exceed our abilities in anything. This is not to say they will do so in a single package as we do, but ultimately that does not matter. That moment will be the tipping point at which humans are no longer born with the attributes required to be economically meaningful.

Augmentation or Replacement?

There is a major problem with the perception of the dangers that AI poses. The popular view is to imagine disruptive AI through a kind of Terminator scenario. In fact, the most immediate problem is the economic dislocation it will cause or, as Karl Marx put it when speaking of societal evolution, the potential for immiseration. That is not to say that a hostile AI or a nanotechnology swarm enveloping the planet (the so-called grey goo scenario) could not occur – they could – but rather that the clear and present danger we now face is the systemic inability of our socio-economic system to adjust to what is now happening.

In general, people accept that machines are stronger and faster than us; the early industrial revolution was testament to this. But until the most recent decades, machines have been relatively dumb and required the guidance of a human hand to make them useful. To this day, despite all that is happening, some people doubt that machines will ever become smarter than humans. In reality, in sub-domains they already are. AI is now widely deployed in our environment, whether in passive design or active embedded electronics. These new intelligent sub-systems are often, by design, invisible.

It is a truism of AI research that once a technique is deployed it is no longer called AI. Today’s AI algorithms are just tomorrow’s software, and all modern non-trivial software is littered with what was once the domain of AI.

At a more visible level it can be seen in the current tablet and smart appliance explosion. Superficially, an iPad or similar is just a networked computer that is easily accessible; however, as a system I am much smarter when I have my iPad with me than when I do not. The iPad augments me.

People do not feel threatened by the AI all around them in the way that they once felt threatened by steam engines in the early industrial revolution because today’s AI doesn’t directly compete with them. AI talks to people, plays games with them, shows them pictures of the family and helps them schedule appointments. People feel that AI is a small, portable extension of themselves.

Smart phones and tablets are designed to take just enough of the credit to get other people to buy them. The main economic value of this technology lies simply in ownership.

When Capitalism and Human Values Diverge

Capitalism has overseen the greatest creation of wealth in human history. It has brought more goods and services, and provided more purchasing power, to more people than any other system ever devised. Unfortunately, it is fundamentally incapable of deploying the now-emerging technologies equitably.

Why is this? First, when machines outperform humans in a given domain, capitalism will, without question, employ the machines. Those companies that hold out will eventually be outcompeted by those that do. This process has already started and has thrown up some surprising examples. Foxconn Technology Group, the Taiwanese electronics manufacturer that makes iPhone components in mainland China, has recently announced plans to deploy a million robots over the next three years to reduce labour costs. Companies that were previously worried about cheap Chinese labour now find they share their concerns with the very same cheap Chinese labourers.

Second, capitalism evolved with the assumption, borne out by experience, that humans would always be part of the value chain, but the emerging reality is that this assumption is no longer assured. There is no guaranteed mechanism within capitalism for disbursing the wealth created to people who are not themselves economically adding value. Human labour was always thought to be economically elastic; that is, if people were unemployed somewhere they could always go and work somewhere else, or they could invent something new to do and create value that way. Now that technology is exceeding human capabilities, even in inventiveness, that elasticity will no longer exist.

Picture two trains that have been running parallel for many years: the first represents capitalism, the second human ethics. So long as money flowed from the capitalist system through individuals, their intrinsic human values were able to guide its governance and disbursement, and this has allowed our society to be generous, to construct public infrastructure, to care for the disadvantaged, to educate and to heal.

Unfortunately, as human participation in the capitalist process declines, there will be less of this flow. The tracks are now diverging.

Unfortunately, government appears to be systemically incapable of dealing with what is happening. The main problem is that modern western governments have been largely subsumed by the economic forces they were meant to govern. Thus they have become effectively impotent wherever a decision would conflict with significant economic interests.

Governments deploy tools to assess these threats, but therein lies an irony: the problems we face are complicated, so the systems we deploy to face them are increasingly less human, and therefore embody less-human values.

As humans, we cope with complexity by generalisation, but unfortunately this has led to a tendency to oversimplify problems and a propensity to declare victory too early. We don’t want to keep thinking about things so we just put them behind us.

In the early stages of the industrial revolution, a movement whose members called themselves Luddites set about opposing the introduction of mechanisation. They even resorted to violence: some people were killed and many industrial weaving machines were destroyed.

Their premise, now known as the Luddite fallacy, was that mechanisation would soon destroy all jobs. At the time they were wrong: the Luddites did not foresee the elasticity then available to human labour. The early industrial revolution massively increased employment rather than decreasing it.

Now, with the new technologies being deployed, we have to face up to the fact that the Luddite fallacy is starting to look less and less like a fallacy, and that the industrial revolution did not end in the 1850s; that was just phase one. We have to recognise that our liberal-minded democracy and our ideological adherence to an amazingly successful economic system may in fact cause untold suffering in the near future. I do not proffer any answers here, but I do say that adherence to any particular ideology is wrong.

Unfortunately, unlike at Uluru, we do not have the sage advice of elders who have been there before, who know the dangers, can guide us, and warn us to be cautious.