Bioethics or Biowaffle?

According to this BBC article, scientists are reported to be developing a new super coral resistant to global warming. This involves using natural mutations (some corals are resistant to very high water temperatures) and ‘training’ coral to become more resistant to rising water temperatures and ocean acidification, problems expected to get worse as the world warms. It seems to me that the scientists concerned are starting in the right places: sourcing what natural resilience there is out there, and testing it in the lab.

Inevitably, in the interests of maintaining ‘balance’, the BBC found input from ‘bioethicists’ to cast doubt on the undertaking. The words of wisdom of these learned authorities are quoted in full:

“No-one fully understands the ecology of reefs, so by putting a genetically modified organism on it you can’t possibly know the unintended ecological implications,” says Prof Rob Sparrow, an expert in applied ethics at Monash University. “It’s foolish and unwise.”

Prof Sparrow also argues that the genetic engineering of coral distracts from a solution governments already have to prevent coral extinction.

“When people read a story about a high-tech solution, it makes us feel good in that it distracts us from what we should really be doing – moving towards a zero emissions economy,” he says.

Prof Paul Komesaroff, director of Australia’s Centre for Ethics in Medicine and Society, raises a different concern.

“The closest analogy to this would be GM [genetically modified] crops, where strains of rice are constructed to better resist pests,” he says. “And one of the outcomes of that has been that companies own the intellectual property to produce those crops.”

“I’m sure the scientists behind this are doing it for the right reasons, but they won’t be able to turn their discovery into a commercially viable option without capital investment,” Paul Komesaroff says.

“This is how biotech works in the real world; it’s driven by profit. So we would potentially be looking at a scenario where the Great Barrier Reef is owned by a company like Monsanto. And that would be a game-changer.”

Of the two wise men, we will deal with the second one first. Why the director of Australia’s Centre for Ethics in Medicine and Society is considered an authority on coral reefs is beyond me, but no matter. His observation that the scientists will need ‘capital investment’ is true enough. The project will probably need private-sector backing, since environmentalists are likely to lobby against the provision of public funds to support it.

Prof Komesaroff seems to think that Monsanto owning the Great Barrier Reef would be a bad thing. But why? He makes no effort to explain. He seems to think that invoking the words ‘Monsanto’ and ‘profit’ in the same breath suffices to spell out what his ethical objections are. If the price of saving the Reef is having Monsanto own it, then I would certainly have reservations about that. But if the alternative is the extinction of the Reef, then let Monsanto own it.

Now, let’s turn to the first one. Prof Rob Sparrow’s reservations are an obvious expression of the precautionary principle. Of course we do not know what the ‘unintended’ consequences of putting modified coral out in the wild are. How could we? That is why it is being tested in the lab first. Granted, even with extensive testing, no one can be sure that there won’t be ‘unintended consequences’, but on what basis does he assume that these will be negative? And why describe something that has not yet been done as ‘foolish and unwise’? Can he give reasons and relevant examples? To say that we cannot pursue a technological option because of unintended consequences would halt technological progress – including the development of green technology.

In fact, the precautionary principle provides no guide for action in the real world. It does not tell us what should be done, how we should go about it, or even how we can find out whether its apprehensions are well-founded. Indeed, it is a double-edged sword. Conservative climate change ‘sceptics’ can wield the same argument to demand we refrain from economic policies that might cause unforeseen harm.

Prof Sparrow says that high-tech solutions make us feel good. Yes, they do. And what is wrong with that? What is wrong with being encouraged by good news that gives us hope? Well, he would say, it distracts us from the task of moving toward a zero-emission world. But how is that to be done, if it is not going to involve technological innovation? And should the precautionary principle be applied to developing new green technology, too? And how realistic is the goal? The only way we could move to a zero-emission economy right now is to stop burning fossil fuels. This would have dreadful social and environmental consequences. Poorer people would not live in greater harmony with nature but would strip it bare in order to survive. When India blockaded Nepal last year, cutting off imports of cooking gas, a fossil fuel, people resorted to stripping forests for fuel instead.

We are going to have to wean ourselves off carbon, but even if we were to do that tomorrow, and managed to do it painlessly, existing greenhouse gases will carry on warming the world. That means we will have to cope with it, and technological palliatives are going to be necessary, and not just for dealing with the impact of climate change on coral reefs. Calling for a zero-emission economy, without having the faintest clue how this is to be done in time to avoid the harmful impacts of climate change, is not moral reasoning but posturing.

Given that Prof Sparrow’s alternative approach is not viable in the short to intermediate term, GM reefs should be considered. We are left with two options. One is to do what the scientists are doing: sourcing natural resistance to build more resistant reefs. The other is to do nothing.

The first option is vulnerable to the objection of unintended consequences, of possible harms unforeseen. The second option is vulnerable to the objection that doing nothing is certain to do harm.

Against consequences that we cannot foresee, there are the consequences that we can see. Since we know that doing nothing will do harm, there is good reason to take action. Against this, there is Prof Sparrow’s law of unintended consequences. There is no way we can refute this law a priori. Only experience can do that. But, which would Prof Sparrow prefer? Would he prefer to avoid unforeseen harms, at the expense of taking action to deal with harms that we can see? I doubt it. That is why he calls for an alternative solution. But if his alternative solution is not viable, and it is not, then he cannot duck this question.

Since we cannot refute Prof Sparrow’s dictum, we have to settle for second best: testing in the lab, followed by controlled experiments in nature. But if you still think that it is better to do nothing rather than something, even though you cannot produce evidence to show how doing nothing is better than doing something, then you are not expressing a moral position but a dogma.

Indeed, the implicit structure of the article, scientists playing with nature on one side of the ledger, ‘bioethicists’ standing in judgement of them on the other, is perhaps revealing. The presumption is that someone with the phrase ‘Professor of Applied Ethics’ in their job title is somehow morally superior or wiser. But their own moral assumptions are not beyond scrutiny or challenge. There is no reason to think that Prof Sparrow, despite the ring of sanctimony his title confers, is uncontaminated by ideological bias, a bias which affects much of the green movement. That movement accepts the authority of science when it comes to climate change but challenges it when it comes to some of its suggested solutions, especially biotech and nuclear power, for fear that these solutions might just secure industrial capitalist modernity’s future, along with the profit motive that underpins it, a motive that few intellectuals are prepared to defend.

Sure, scientists are not above criticism or scrutiny, but neither are their ‘ethical’ critics. The two examples we have discussed above have not demonstrated disinterested, dispassionate reasoning. They are ideologically motivated. They rehash traditional apprehensions about technological advance, apprehensions that are acute in the case of biotech but not in other areas of technology (an article on carbon capture technology did not attract comment from ‘geoethicists’). And, aside from that, invoking the word ‘ethical’ as the basis for your objections suggests that you do not really know what you are talking about.

Are robots going to make us pets?

If you want to talk about artificial intelligence, and you want to make a name for yourself, then make sure you paint it black. A recent BBC article epitomizes this trend – the doomsayers speak louder and more stridently than the more cautious, muted voices. The article asks whether we should fear AI. The consensus opinion among the ‘experts’ quoted is yes. To quote one representative voice:

‘Elon Musk, founder of Tesla Motors and aerospace manufacturer SpaceX, has become the figurehead of the movement, with Stephen Hawking and Steve Wozniak as honorary members. Mr. Musk, who has recently offered 10 million pounds to projects designed to control AI, has likened the technology to “summoning the demon” and claimed that humans would become nothing more than pets for the super-intelligent computers that we helped to create’.

His apprehension is nothing new. Over thirty years ago, Marvin Minsky of the Massachusetts Institute of Technology claimed that the next generation of robots would be so intelligent that we would be lucky if they decided to keep us as household pets. The quote is from John Searle’s book ‘Minds, Brains and Science’, which we will discuss shortly. Musk, I think, is just as mistaken as Minsky was. But maybe this time it’s different?

I don’t think it is different this time, and the reason is a re-reading of John Searle’s 1984 collection of essays, ‘Minds, Brains and Science’, which was also delivered as the BBC’s Reith Lectures that year. The second lecture, ‘Can Computers Think?’, could well have been written to answer the sort of nonsense that the doom peddlers are espousing today. He wrote at a time when many assumed that it was ‘only a matter of time before computer scientists design the sort of hardware and software that are the equivalent of human brains’. Pundits then assumed, as they do now, that the human brain is a kind of digital supercomputer, and that developments in IT and AI are ever closer approximations to it that will eventually surpass it.

But we are not comparing like with like. Searle writes:

‘The reason that no computer programme can ever be a mind is simply that a computer programme is only syntactical, and minds are more than syntactical. Minds are semantical … they have a content.’

To illustrate this, he imagines a machine programmed to simulate the understanding of Chinese. You feed in questions in Chinese and it gives answers in Chinese. It is programmed so well that it looks as if it actually understands and speaks Chinese. But this impression misleads. Searle asks you to imagine that you are locked in a room with a basket of Chinese symbols. Someone outside the room feeds in Chinese symbols and in return you hand them symbols, according to a rule book, but you have no idea what the symbols mean. You are just following the rule book. You are in fact answering questions in Chinese, but you neither speak nor understand Chinese. And neither does the computer. All you have, writes Searle, is ‘a formal programme for manipulating uninterpreted Chinese symbols’.

A computer has a syntax but no semantics – all form but no content:

‘Understanding a language, or indeed, having mental states at all, involves more than just having a bunch of formal symbols. It involves having an interpretation, or a meaning attached to those symbols.’
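
To make the distinction concrete, here is a minimal sketch of my own (not Searle’s example) of a purely syntactic responder: the ‘rule book’ is just a lookup table, and the Chinese phrases and their pairings are invented for illustration.

```python
# A minimal sketch of Searle's point: a purely syntactic responder.
# The "rule book" is a lookup table; the phrases and pairings below
# are invented for illustration, not taken from Searle.

RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",          # "How are you?" -> "I am fine, thanks."
    "今天天气怎么样？": "今天天气很好。",  # "How is the weather?" -> "The weather is fine."
}

def chinese_room(symbols: str) -> str:
    """Return the output symbols the rule book pairs with the input symbols.

    Nothing here represents what any symbol means; the function only
    matches shapes (strings) to other shapes: syntax without semantics.
    """
    return RULE_BOOK.get(symbols, "对不起，我不明白。")  # fallback: "Sorry, I don't understand."

if __name__ == "__main__":
    print(chinese_room("你好吗？"))  # looks like understanding, but is only lookup
```

From the outside the function appears to ‘answer’ questions; inside there is nothing but string matching, which is exactly Searle’s point.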

Let’s think of another example of our own to develop this insight further. Some have suggested that AI will even displace lawyers. Now, we know that in law the words do not speak for themselves. Take the 2nd Amendment of the United States Constitution:

‘A well regulated militia, being necessary to the security of a free state, the right of the people to keep and bear arms, shall not be infringed.’

In the United States, that phrase has spilt not only ink but blood. What is the argument about? Surely not over the formal, dictionary definition of each word in the phrase. It is about the ideas that each word expresses, either standing alone or in combination with others. Different human minds scanning those words do not ‘see’ the same ideas in phrases like a ‘free state’. They mean different things to different people.

Now, imagine trying to settle this issue by referring the matter to two robot lawyers arguing the case before a robot judge. The robots concerned could simulate the formal structure of a legal argument by ‘knowing’ the formal meanings of words, but underlying those words are mental concepts that no robot could ever know. How can you programme a robot lawyer to know what a ‘free state’ looks like? And how do you programme a robot to ‘know’ that a law for gun control (or its absence) does or does not ‘violate’ the 2nd Amendment? We are back to Searle’s distinction between syntax and semantics. A sentence like ‘this law violates the 2nd Amendment’ is an expression of semantics, not syntax.

It is not only that the law does not speak for itself and has to be interpreted. We also have to decide how it applies in particular cases. Armed robbery and shoplifting are both forms of theft. But do we apply the same rules in both cases? Of course we don’t. But why don’t we? Because we have different ideas about what constitutes a ‘just’ punishment in each case. Try programming a robot judge to ‘let the punishment fit the crime’. Again, this is a phrase that is not reducible to its component parts. You cannot explain it merely by attaching a dictionary definition to each word in the phrase. A robot could certainly be programmed to utter such words, but they would have no more content than a parrot’s mimicry. It can simulate the words but it cannot duplicate the mental states the words generate. Semantics again!
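
A similar sketch, again my own and with invented offence categories and penalties, shows what a ‘robot judge’ programmed this way actually amounts to: a table of rules it was handed, with no concept of justice anywhere in it.

```python
# A hypothetical "robot judge": the offences and penalties below are
# invented for illustration. The program applies whatever table it was
# given; nothing in it represents an idea of what a 'just' punishment is.

SENTENCING_TABLE = {
    "shoplifting": "fine of 200 pounds",
    "armed robbery": "ten years' imprisonment",
}

def robot_judge(offence: str) -> str:
    """Look up the penalty paired with the offence label.

    If the offence is not in the table, the 'judge' has nothing to say:
    it cannot reason about whether a punishment fits the crime, only
    repeat the pairings it was programmed with.
    """
    return SENTENCING_TABLE.get(offence, "no rule found")

if __name__ == "__main__":
    print(robot_judge("shoplifting"))   # "fine of 200 pounds"
    print(robot_judge("joyriding"))     # "no rule found"
```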

Much of the confusion around this issue rests on ignorance of the distinction between syntax and semantics. But it also overlooks something else. Information and communications technology is often conflated with artificial intelligence. Being able to look up information quickly, as with Google, is not the same thing as AI, with robots or machines able to gather and process their own information independently of any human intervention or influence – the way Skynet does in the Terminator films or the malevolent HAL does in 2001. Improvements in calculation speed and processing power do not necessarily lead to improvements in intelligence and sentience. Cockroaches are superior to any robot in their ability to learn from and adapt to their environment.

Good public speakers – or rabble-rousers – know that it is not the words they use that inspire other minds to act. It is the ideas that words generate that do that. Ideas move people. Robots do not have ideas because they do not have mental states. Therefore, they are not motivated to do anything. Indeed, as a great fan of the Terminator movies, I have often asked the question that the plot never answers: why do the machines bother to fight the humans? Why do they care so much? But we don’t need to ask these questions about the humans’ motivations for fighting the machines. The answers are obvious.

If you have got this far, then you may be wondering what this has to do with moral conflict and politics. Hopefully, the answer has already become clear, with the discussion of the 2nd Amendment of the US Constitution providing a clue: moral and political conflict is generated not by arguments over syntax but over semantics. The resolution of such conflicts depends on the political use of semantics to smooth them over, sometimes by using weasel words and half-truths. A robot would have no idea about the complex mental concepts involved when two warring parties lay down their weapons and ‘agree to disagree’.

___________________________________________________________________________