Issues Magazine


By Matthew Tieu

Matthew Tieu explains the field of neuroethics, comparing the ethical questions it raises with those of the “genetic revolution”.

Since the molecular structure of DNA was discovered in 1953 by James Watson and Francis Crick, molecular bioscience has made rapid progress and contributed extensively to related fields such as medical science, reproductive medicine, agriculture and biotechnology. A more detailed understanding of the molecular and cellular basis of biological life and the realisation that DNA can be manipulated meant that biological traits could be altered, infertility circumvented and plants and animals genetically modified to suit our needs. Novel treatments for diseases, such as cell replacement therapy using differentiated adult and/or embryonic stem cells, were foreseeable. The possibilities seemed limitless and ideas ranging from organ banks (and other spare parts) to human immortality were no longer the domain of science fiction.

The notion of a “genetic revolution” eventually found its place in the public consciousness, with many imagining and embracing new possibilities. Even facetious suggestions amplified by media hype, like making broccoli taste like chocolate, would attract the interest of those wanting a positive nutritional outcome for children. Who wouldn’t want this luxury? Little imagination is required to foresee the economic and commercial potential of this revolution.

But some of these suggestions, labelled “Frankenfood” or “Frankenscience”, have made ethicists, scientists, politicians, theologians and the public more conscious of the ethical issues that arise.

From Bioethics to Neuroethics

This brief synopsis of the rise of the molecular biosciences points to the ethical issues inevitably raised with the advent of new discoveries, new knowledge and new science. So what’s new in the 21st century?

Many disciplines have emerged, but one of the most exciting and arguably the most profound is cognitive science, a multidisciplinary synthesis of philosophy, psychology, neuroscience and artificial intelligence. However, most students are unlikely to have encountered it before they enter university.

As this discipline progresses further we will have a greater understanding of the relationship between two very important domains that have historically been considered essentially distinct: mind and brain. This distinction is known as dualism and was brought to prominence by the 17th century French philosopher René Descartes.

Cognitive science, which emerged in the middle of last century, can be understood basically as the rigorous study of the mind conceived of as a natural biological system. Despite the philosophical and methodological debate concerning the nature of mind and brain (commonly referred to as the “mind/body” problem), it is clear that the phenomena of consciousness, selfhood, memory, perception, emotion, belief, judgement and decision making are all inextricably linked to brain function.

Brain function may also play an essential role in our concept of “personhood” – a concept of what it is to be a person with inalienable human rights. A very profound and intimate understanding of ourselves therefore stands to be gained from such an investigation.

In recent years, research in cognitive science has focused on neuroscience, yielding some interesting data. Cognitive neuroscientists focus on the neural basis of thought processes using techniques such as functional neuro-imaging (scanning for physiological activity in certain regions of the brain) and lesion studies (studies of patients with deficits due to specific brain damage).

Much like the molecular bioscience that transformed bioethics, 21st century cognitive science raises a plethora of ethical issues that have the potential to transform the way we view ourselves as human beings and moral agents. Although the ethical issues are in many regards analogous to those raised by molecular bioscience, in the case of cognitive neuroscience they seem more profound when our very thoughts and feelings are the subject of scientific scrutiny. Indeed, the relationship between the self and the brain is a lot closer to home than the relationship between self and genes, as philosopher Neil Levy states in Neuroethics: Challenges for the 21st Century:

Our minds are in some sense, us, so that understanding our mind, and increasing its power, gives us an unprecedented degree of control over ourselves.

In fact, we will become increasingly aware of why people behave as they do and why certain people form certain beliefs. Perhaps we might even begin to make certain modifications and/or enhancements. Such possibilities raise important ethical issues that, amongst many others, are discussed and debated in neuroethics.

Neuroethics is a broad and multidisciplinary field. Whilst it fits within the framework of bioethics, it is formally recognised as a discipline in its own right. The themes and examples covered here give some idea of the breadth and depth of the field, as well as the social and ethical implications likely to result from 21st century neuroscience.

Adina Roskies, a prominent cognitive neuroscientist and neuroethics researcher, has defined neuroethics as consisting of two streams: the ethics of neuroscience and the neuroscience of ethics.

The Ethics of Neuroscience

The first stream, the “ethics of neuroscience”, is concerned with the social and ethical implications of neuroscience and neurotechnology. It is foreseeable that we will be able to restore and significantly enhance mental function through novel neuropharmacology, neurogenetic engineering and neurostimulation techniques. For example, we may be able to develop novel treatments for depression, post-traumatic stress, attention deficit hyperactivity disorder (ADHD) and other personality disorders. We may also be able to either dampen or enhance other aspects of cognition.

Should we go beyond restoring function to enhancing all mental functions? Is there anything wrong in principle with enhancing our mental capacities? Consider, for example, whether any student should be allowed to freely use drugs like Ritalin to enhance their mental performance. In future we may also have the technology to enhance our mood and even produce neural devices designed to give pleasure.

With the rapid development of neuroimaging and brain stimulation technology, a host of possibilities emerges. For example, one can control activity in particular regions of the brain by either stimulating or suppressing it. Christopher Chambers and colleagues used a technique known as transcranial magnetic stimulation (TMS) to improve attention and visual focus (Brain Research, 2006). They showed that TMS-induced disruption of a region of the brain known as the right parietal cortex improved the perception of relevant stimuli when subjects were presented with competing visual displays.

Neuroimaging technology, which is now commonly used in both research and medical practice, raises ethical issues concerning mental privacy, predicting/controlling behaviour and diagnostics.

Brain-based lie detection is now a commercial reality, with at least two companies advertising brain-based lie detection services and claiming an accuracy of 90%. They use a neuroimaging technique known as functional magnetic resonance imaging (fMRI) and associated protocols. These procedures bypass the conscious processes of thought and tap into unconscious processes. When someone recognises a previously observed object, such as a murder weapon, patterns of brain activation that differ from those occurring for non-recognition can be detected using this technique.

There is also technology that can produce a change in people’s social behaviour. A recent study revealed that a subject’s decisions can be altered using TMS of a particular region of the brain known as the dorsolateral prefrontal cortex (DLPFC). In this study, subjects were asked to give their response to the “ultimatum game”, a traditional experiment in economics to assess people’s judgements about economic fairness. Subject A is given a certain amount of money and has the choice of sharing any amount of it with subject B. If B accepts the offer then both receive the money; however, if B refuses it then no one gets anything.

The results of the ultimatum game have generally indicated that offers of less than 20% are rejected, likely due to retaliation or spite, even though subject B has everything to gain and nothing to lose, since anything is better than nothing. The interesting point is that in a recent experiment, Turhan Canli and colleagues found that after TMS stimulation of the DLPFC, subjects were less willing to reject offers that fell under 20% (American Journal of Bioethics, 2007).
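The payoff structure of the ultimatum game described above can be made concrete in a short sketch. This is an illustrative model only: the 20% rejection threshold is an assumption drawn from the typical findings mentioned, not a rule of the game itself.

```python
# A minimal sketch of the ultimatum game's payoff logic, with a
# hypothetical responder who rejects offers below a fixed threshold.

def ultimatum_payoffs(pot, offer, accept):
    """Return (proposer_payoff, responder_payoff).

    If the responder rejects the offer, neither player receives anything.
    """
    if accept:
        return pot - offer, offer
    return 0, 0

def threshold_responder(pot, offer, threshold=0.2):
    """Reject offers below `threshold` of the pot (spite/retaliation),
    even though any positive offer beats receiving nothing."""
    return offer >= threshold * pot

pot = 100
for offer in (10, 19, 20, 50):
    accept = threshold_responder(pot, offer)
    print(offer, ultimatum_payoffs(pot, offer, accept))
```

Run as written, the loop shows that an offer of 19 out of 100 leaves both players with nothing, while an offer of 20 is accepted and paid out – the “irrational” pattern that DLPFC stimulation appears to weaken.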

In relation to diagnostics, an important question is whether a patient has the right to know (or not to know) what their brain scans reveal about themselves and possibly about their futures. Furthermore, this information could be used to discriminate against individuals whose brain scans reveal information that would otherwise remain private or unknown.

Just a few examples of the kinds of technologies and applications possible with our current knowledge of neuroscience have been given here. This stream of neuroethics raises many of the traditional issues in bioethics, concerning privacy, discrimination and manipulation. The big ethical question of therapy versus enhancement is also central.

Neuroscience of Ethics

Humans are distinct from other animals in our reasoning and in our reflections on our beliefs, values, expectations and experiences to make important judgements and decisions – we are rational. In the absence of such capacities we cannot be held responsible for our decisions and actions – we would no longer be moral agents. This second stream of neuroethics, the neuroscience of ethics, is therefore a deep enquiry into the neurological basis of moral judgements and decision-making.

Recent neuroscientific experiments have allowed us to gain some knowledge of the neurological states that correlate with moral judgements, and as a result we are presented with a genuine challenge to our deeply entrenched beliefs. The evidence, according to cognitive neuroscientist Joshua Greene (Trends in Cognitive Sciences, 2002), suggests that we are not so rational or self-reflective in forming ethical judgements. In fact, the contention is that we have a tendency to make moral judgements in accordance with our emotional inclinations rather than with our carefully thought-out rational deliberations. So if it turns out, as some philosophers and neuroscientists have claimed, that a particular moral belief is not the product of rational contemplation but a post-hoc rationalisation of an emotive judgement, or merely an attitude of disapprobation, then we must wonder how far we can trust our moral beliefs.

Plenty of evidence from cognitive neuroscience, anthropology and psychology suggests that our brains may even contain a “moral organ” responsible for making moral judgements, something that Marc Hauser has written about in Moral Minds. If so, these moral judgements are to a degree isolated from the rest of our mental faculties and therefore may not be open to critical evaluation or rational revision. Furthermore, this organ may be selectively damaged whilst leaving other aspects of mental function undamaged. Such a finding would be of great importance in cognitive neuropsychology, a field of research that aims to distinguish between different functional domains in the brain based on how damage to particular brain areas can affect our behaviour whilst leaving other domains intact.

Consider the frequently cited example of rail construction worker Phineas Gage, who in 1848 had a tamping iron thrust through his skull in an accident, damaging his prefrontal cortex and subsequently transforming his personality. Whilst his basic faculties of intelligence remained intact, he became short-tempered, unsociable, profane and unable to persevere with future plans. Because of his accident, people were reluctant to blame him for his behaviour.

Consider one of the most interesting test subjects that cognitive neuroscientists look at, the psychopath. Psychopaths in general lack empathy. They fail to distinguish between social/conventional transgressions, such as burping at the dinner table, and ethical/moral transgressions such as murder.

Cognitive neuroscientist James Blair explains the psychopath’s behaviour as a lack of the capacity to recognise the submissive cues of their victim. This is due to deficits in brain regions such as the amygdala and orbital prefrontal cortex, which underpin emotional functions such as aversion and are linked with learning to respond to submissive cues such as fearful and sad facial expressions. Without such emotional capacities it seems that psychopaths cannot inhibit the harmful behaviours they direct towards their victims. Psychopaths seem to equate morality with etiquette.

Should psychopaths be held responsible for their transgressions? Do psychopaths have diminished free will, and are they acting autonomously? Perhaps instances of moral transgression and poor decision-making may reflect some underlying neurological dysfunction, along the lines of Phineas Gage and/or the psychopath.

This leads to another important question in neuroethics: whether neuroscience can help determine when a person is making a decision based on rational free will, and whether a person is entirely responsible for his or her actions. Perhaps it can be argued in a court of law that a psychopath’s capacity for responsibility is diminished.

In the legal arena, neurological dysfunction is indeed relevant to assessing a person’s responsibility for their actions. The legal principle of mens rea states that a defendant should only be held criminally liable for events or consequences that he/she intended or knowingly risked. Therefore, any form of neurological dysfunction that bears on a defendant’s responsibility for his/her actions (i.e. on the capacities necessary for establishing mens rea) can form the basis for a claim of diminished responsibility (neuromitigation).

Understanding the neurological basis of responsibility will also play a crucial role in the treatment of drug addiction. An important debate centres on whether drug addicts are still able to make decisions and act out of free will. Are they acting autonomously? The neuroscience of decision-making might be able to help answer this question. The answer will affect the way that drug addicts are treated, also informing drug treatment policy.

Neuroethics exists due to recognition of the significance of the new sciences of the mind. An appreciation of the relevance of the cognitive sciences will enable foresight into the wide-ranging ethical and social implications that 21st century humanity will inevitably face.

Perhaps this is a prelude to what many regard as our path towards a “post-human” era.