Morality, Neuro-myths, and the Spurious Seduction of Evolutionary Ethics

Has neuroscience sold us a lie about the nature of morality?

Here’s a thought experiment. Suppose you are an aid agency providing food for children in a refugee camp. You have limited resources and could either feed all the hungry children inadequately, in which case they will soon starve, or feed a few adequately so they will survive but the others will all die. It’s a moral choice between equity and efficiency. What do you do – especially if your head is in an fMRI brain imager when you are confronted with the dilemma? According to the authors of this neuroscientific quandary, who claim to be measuring the brain correlates of distributive justice, one brain region, the insula, encodes inequity while the putamen region encodes efficiency.[i]

This typifies the belief of the new discipline of neuroethics that absolute moral values are inscribed in the brain. But how did we get here? For as long as oral traditions or written records have been available, moral injunctions have been laid…


Join the conversation

Abraham Joseph 31 August 2017

Existence, it seems, is keen on objectives other than morality. But morality comes automatically to the people and societies that follow her guiding principles.

Dzen_o's comments make a lot of sense.

Man is moral not because of, or directly from, any inherent moral sense, but from establishing his destined 'relation' with Existence: his self-realization, the proverbial goal of every human being!

I would love to share a blog post by this commentator that delves into the theme of 'how we ought to live':

Dzen_o 30 June 2017

Before speaking about morality, it is necessary first to define and rationally understand the notions/phenomena of “Matter”, “Consciousness”, and “Life”; above all, to understand that Matter and Consciousness are fundamentally different, and that human consciousness is non-material and uses the practically totally material body as a kind of stable residence.

All these notions/phenomena lie outside mainstream philosophy, and outside mainstream science as well, and can be defined, understood, and studied only within the framework of the “The Information as Absolute” conception ( ).

Outside this conception, the study of consciousness, morality, etc. has no prospects. Applying neuroscience, for example, is rather like studying what a computer with a running program does while having no understanding of what is being studied, and no access to the random-access memory or the processor; that is what neuroscience in reality does.

Returning to morality, etc. – see Sec. 2 in ; though it is useful, of course, to read the whole paper.

The URL links in IAI comments often aren’t active directly, but they work if the address is copied and pasted into the browser address line.

mlsloan 20 May 2017

Hello Steven,

Perhaps some readers will find interesting my starkly opposing view: that E. O. Wilson’s prediction of a science-based morality (or perhaps more than one) is, after 45 years or so, finally near fruition.

Of course, I would be glad to discuss my particular objections to your perspective if you have any interest in doing so.

You may agree that the biology underlying the emotions triggered by moral judgements (empathy, loyalty, gratitude, indignation, guilt, and shame) was selected for by the benefits of the cooperation these emotions motivated.

However, contrary to your implication, it is a profound error of logic to conclude that this science claims ‘justification’ for “racism, sexism and class”. Quite the opposite: racism, sexism, and class tend to reduce the overall benefits of cooperation.

Also, it appears that virtually all cultural moral norms have the same primary selection force: the benefits of cooperation in groups. Cultural norms are simply an additional substrate on which evolution can select for and encode elements of cooperation strategies. “Morality”, as defined by both our biology and our cultural moral norms, is about elements of cooperation strategies.

But it would be a second profound error of logic to conclude that this science claims “whatever increases cooperation in groups is what we morally ‘ought’ to do”. While virtually all past and present moral norms are elements of known cooperation strategies, many strategies increase the benefits of cooperation for an in-group at the expense of an out-group. For example, claiming homosexuals are imaginary ‘threats’ to the group can be effective in increasing in-group unity and cooperation (people enjoy banding together against a threat!). But well-informed modern societies would see that as shameful, and immoral, exploitation.

Science tells us what moral ‘means’ are: elements of cooperation strategies. Anyone who claims they are anything else is factually wrong – they are making a category error. They are talking about a different subject than what our moral emotions and intuitive moral judgements are about.

On the other hand, science cannot tell us what the goals for acting morally ought to be.

Possible goals for acting morally suggested by mainstream moral philosophy are 1) utilitarian goals such as the most happiness for the most people, 2) to act ‘virtuously’, 3) to act in ways that are universally moral, or 4) combinations of the above, such as achieving utilitarian goals by universally moral ‘means’. So far as I know, which goal a person or a society chooses for acting morally is a matter of subjective preference, as long as it does not conflict with what moral ‘means’ are – increasing the benefits of cooperation.

E. O. Wilson predicted that science would provide the basis for an evolution-based morality. By revealing what the evolutionary function of morality ‘is’, science has defined what the moral ‘means’ for achieving our goals are, and are not. That seems to me a large step forward.

My own preferred goal is a kind of rule-utilitarianism: achieving utilitarian goals by universally moral cooperation strategies – strategies that exploit no one.