Issues Magazine

A Rationalist Approach to Transhumanism and the Singularity

By Meredith Doig

Even if it’s technically possible, is it ethical to pursue the transhumanist agenda? Would we still be “human” in a transhumanist world?

At first meeting, the idea of transhumanism and the singularity fascinates but also invites scepticism. Machines taking over from humans? At a particular point in time? Within the next couple of decades?

My own reaction was initially one of doubt. But I call myself a rationalist, and that means keeping an open mind and looking for the evidence. And what I found was more finely balanced than a mere modern-day science fiction story.

First, a little background. What is a “rationalist”?

In classical times, rationalism was simply another term for philosophy: the sort of thing Socrates did, and his student Plato, and his student Aristotle. Since the Enlightenment, rationalism has come to be associated with the idea that truth comes from reason and logical deduction, on the model of mathematics, in contrast with empiricism, the idea that truth comes from sensory experience.

However, rationalism has also acquired another meaning: to challenge the prevailing social and political status quo, championing reason over faith, tradition and authority, particularly the authority of the established Church but also the authority of the State.

It’s this pragmatic approach to political philosophy that defines the rationalist movement today. Rationalism these days is a world view, a way of thinking in which belief in the supernatural and appeals to dogma are rejected in favour of a scientific approach to social challenges and an independent attitude of mind. In sum: “We’re in favour of science and evidence, as opposed to superstition and bigotry”.

The Rationalist Society of Australia is the oldest of the free thought movements in Australia, having been established in 1908 as an offshoot of the Rationalist Press Association in London. Philosophically we believe in the physical world (and not the spiritual world); we believe that words have clear meanings (unlike the postmodernists); we believe in cause and effect; and we believe that reason is what makes human beings distinct. Politically, we stand for a secular democracy, citizen access to information, universal human rights, evidence-based public policy and a universal education system free from religious indoctrination.

So, armed with this philosophical and political background, how does a rationalist approach the phenomenon of transhumanism and the singularity?

According to Nick Bostrom of the World Transhumanist Association, transhumanism is two things:

  • an intellectual and cultural movement that affirms the possibility and desirability of fundamentally improving the human condition through applied reason; and
  • the study of the ramifications, promises and potential dangers of technologies that will enable us to overcome fundamental human limitations, and the related study of the ethical matters involved in developing and using such technologies (http://humanityplus.org/learn/transhumanist-faq).

As rationalists, we should say this is right up our alley. The Transhumanist Declaration states that transhumanism envisions the possibility of alleviating grave suffering like poverty, disease, disability and malnutrition. All good, worthy goals.

So why was I not convinced?

I wasn’t convinced partly for technical reasons and partly for ethical reasons. Let’s look at the technical reasons first.

What occupies those who follow transhumanism? Among other things it’s the melding of the human body with machines and the uploading of the human brain into a supercomputer.


Eminent transhumanist Ray Kurzweil argues that developments like these are inevitable because of the exponential increase in the power of machines, and particularly of computers.

But is this inevitable?

A well-known example of this exponential growth is Moore’s law, named after Intel co-founder Gordon Moore. Moore’s law states that the power of computers (processing speed, memory capacity) doubles roughly every two years. This doubling has held for the past 50 years and it’s expected to continue.
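
As a rough formalisation (my own illustrative sketch, not Moore’s or Kurzweil’s notation): if computing power doubles every two years, then the power after $t$ years is

$$P(t) = P_0 \cdot 2^{t/2}$$

where $P_0$ is the starting power. Fifty years of such doubling multiplies $P_0$ by $2^{25}$, a factor of more than 30 million, which is why exponential trends so quickly outrun everyday intuition.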

But, mathematically, exponential growth is not the only curve that starts out looking like runaway growth. For example, until recently it was assumed that world population was growing exponentially, but in recent years this growth has levelled off and it’s now thought the human population is likely to flatten out at around nine billion by the middle of the 21st century. What looked for many years like exponential growth can turn out to be simply the early stage of an S curve. And S curves turn up again and again in the human-related disciplines of biology, economics and sociology.

So why does what looks initially like exponential growth sometimes flatten out and become an S curve? It’s because of some limiting factor, like the depletion of an important resource in the system.
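
The textbook model of growth under a limiting factor is the logistic function (my choice of illustration; the argument doesn’t depend on this particular form):

$$P(t) = \frac{K}{1 + e^{-r(t - t_0)}}$$

Here $K$ is the ceiling set by the limiting factor, $r$ is the growth rate and $t_0$ is the inflection point. For $t$ well before $t_0$ the curve is practically indistinguishable from an exponential; only later does it bend over and flatten towards $K$. That is why early data alone cannot tell the two stories apart.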

In the world of Moore’s law, the limiting factor to what appears to be the exponentially increasing power of technology may be size: the miniaturisation of microchips hits a hard limit as circuit features approach atomic dimensions, and transistor features are already measured in mere nanometres.

So when Ray Kurzweil predicts that we will be able to upload a human brain into a supercomputer in the foreseeable future, based on the assumption that the power of computers is growing exponentially, he may be making the wrong assumption. That’s not to say he’s wrong, just that he may not be right.

And there are other objections to Kurzweil’s predictions.

Dan Dennett is a philosopher of human consciousness, one of the “four horsemen of the New Atheism” along with Richard Dawkins, Sam Harris and Christopher Hitchens. Like Kurzweil, Dennett is interested in the possibility of creating machines with human-like intelligence and consciousness. But, unlike Kurzweil, he doesn’t think we will create machines that embody human-like consciousness. Why not?

Dennett’s objection is not technical but economic. Religionists tend to argue that human consciousness has some special quality imbued by God, beyond the capacity of humans to understand let alone recreate, and that this makes it impossible to replicate human consciousness in a machine. Dennett will have none of it. It’s not that we could not recreate human consciousness in a machine; it’s just that it would take too long and cost too much!

He points out that for a machine to carry on intelligent human conversation, we would need to upload some 10¹⁵ (one thousand trillion) different contributing subconversations. And even if we could work out the content of these subconversations (how would we list them?), it would take a few trillion years to upload them. Who would be willing to pay for such an effort?
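
To get a feel for the scale (a back-of-the-envelope illustration of my own, not Dennett’s published arithmetic): even at a rate of one subconversation per second, uploading 10¹⁵ of them would take

$$\frac{10^{15}\ \text{s}}{3.15 \times 10^{7}\ \text{s/year}} \approx 3 \times 10^{7}\ \text{years}$$

that is, about 32 million years; Dennett’s “few trillion years” would correspond to spending something like a day on each subconversation. On either reckoning, the economics are hopeless.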

Then there are those who simply think Kurzweil is a first-rate bulls**t artist.

Biologist P.Z. Myers says Kurzweil simply doesn’t understand how the brain works. He says Kurzweil’s statement that the brain is simpler than we think is “deeply flawed”. Not one to mince words, Myers calls Kurzweil’s predictions “techno-mystical crap”.

These are just some examples of technical objections to transhumanism. But what about the ethical objections?

First, some simple definitions. What is ethics? Ethics is that part of philosophy that’s about what’s right and what’s wrong, what’s good and what’s bad. Ethics provides the reasons for how humans ought to act. Just because something is scientifically and technologically possible doesn’t mean it’s ethical to do it.

It’s common to hear people talk of a lack of ethics or unethical behaviour. The problem is, there is no single ethical approach and no single answer to ethical dilemmas. There are at least five different types of ethics:

  • utilitarian approach: ethical action is what produces the greatest balance of good over harm;
  • rights approach: ethical action starts with accepting the inviolability of each human being; everyone has the right to be treated as an end, not just as a means to an end;
  • justice approach: ethical action treats all humans equally, or if not equally, then at least fairly according to some defensible standard;
  • common good approach: ethical action improves society as a whole rather than the individual; and
  • virtue approach: ethical action is consistent with virtues like honesty, courage, compassion. Virtue ethics asks: “What kind of person will I be if I do this?” and “Is this action consistent with doing my best?”

Let’s just focus on the last of these, the virtue approach. This style of ethics was originally developed by Aristotle, student of Plato and teacher of Alexander the Great. Aristotle thought the highest human good, the purpose of human life, was to live a life of eudaimonia. This is a Greek word usually translated as “happiness”, but it is more accurately rendered “flourishing” or “well-being”. So, according to Aristotle, the purpose – the “end” – of human life is to pursue happiness or flourishing.

The obvious next question is: how is this done? How to live a life of eudaimonia? Here, Aristotle argues that human beings, like everything else on earth, should live according to what makes them truly human. And that, according to Aristotle, is the use of reason. Reason is the function that sets humans apart from all other sentient creatures. (This assumption of Aristotle’s is challenged by our more recent understanding of certain animals, but it was a reasonable assumption in his day.)

He further argues that living a life of reason means acting virtuously. To be virtuous means to act, to choose, according to qualities like courage, honesty, compassion, self-control, generosity and prudence: it’s what you do that makes you who you are.

But how do we know what courage-in-action might mean in any particular situation? The answer, according to Aristotle, is to choose the golden mean – neither one extreme nor the other. For example, courage is the mean between cowardice and recklessness; generosity is the mean between wastefulness and stinginess.

But how does this relate to transhumanism? Firstly, it raises the question of what it means to be a human being. Aristotle says it’s the exercise of reason, the unique function of being a human being. Could a post-human – a human melded with the machine, beyond the singularity – fulfil this condition? The answer surprised me. Yes, post-humans could certainly think, choose and act according to reason.

So, then we can ask, could a post-human live a life of eudaimonia?

This is more difficult. Could a post-human knowingly choose virtuous action and thereby fulfil a life of eudaimonia? If Dennett is right, we would have to upload at least 10¹⁵ different possible choices for the post-human to choose from. And how could we know whether a given choice was an end in itself, rather than a means to an end? That requires understanding motivation – and it is much harder to imagine we could program motivation into a computer.

Erik Parens, a senior researcher with the US-based Hastings Center for bioethics, suggests an interesting use of Aristotle’s philosophy. Faced with the possibilities posed by the transhumanist agenda, he suggests that a virtuous way forward is to choose the Aristotelian golden mean: neither sticking with the status quo out of fear of the unknown, nor recklessly plunging into radical human enhancement with no thought for the consequences. Instead we should choose the Aristotelian mean of moderate human enhancement, with a view to maintaining experiences that give meaning to our lives as humans.


In thinking about what these and other moral philosophers might say about the transhumanist agenda, I found I moved from initial scepticism to grudging acceptance of the idea of transhumanism.

As a rationalist, I am an advocate of the human capacity to reason, particularly in the service of improving the human condition. Transhumanism does this. As rationalists, we are in favour of science and evidence, as opposed to superstition and bigotry – and transhumanism is too.

But it’s only a grudging acceptance. Why?

It’s because I am also reminded of the great philosopher Isaiah Berlin, who warned of the dangers of trusting the human predilection to seek Utopia. He said:

One belief more than any other is responsible for the slaughter of individuals on the altars of the great historical ideas... This is the belief that somewhere, in the past or in the future, in divine revelation or in the mind of an individual thinker, in the pronouncements of history or science, or in the simple heart of an uncorrupted good man, there is a final solution… But is this true?... The world we encounter in ordinary experience is one in which we are faced with choices between ends equally ultimate and claims equally absolute, the realisation of some of which must inevitably involve the sacrifice of others. It is because of this that men place such immense value on the freedom to choose… [emphasis added]
– “Two Concepts of Liberty”, lecture delivered by Isaiah Berlin, Oxford University, 1958

Not all good things are compatible. The Transhumanist Declaration seeks “respect [for] autonomy and individual rights” as well as “solidarity with and concern for the interests and dignity of all people around the globe”. It may well be that these two worthy aims are incompatible. In pursuing a Utopian world of perfect post-humans, we may irrevocably undermine what makes us human in the first place. Technically possible, perhaps, but ethically unacceptable.