Mark H. Lee | Transhumanism—downloading the brain
21.05.2020
Excerpted and adapted from "How to Grow a Robot: Developing Human-Friendly, Social AI" by Mark H. Lee (The MIT Press, 2020).
It is sad that the twenty-first century has seen a general increase in anti-science and the denigration of expertise. Opinions, beliefs, and even falsehoods are sometimes promoted as facts when there is no supporting evidence at all. Thus, science fiction ideas can spread in popular culture as possible (or even likely) events. People say, “Well, anything is possible, isn’t it? Why not? Who knows?” Well, actually, a lot of things aren’t possible—electricity generation by perpetual motion machines, for example. And why not? Because we have thermodynamic theory, which explains very clearly why not. And who knows? Well, anyone and everyone who takes the time to learn (or remember) the specific bit of science that deals with that idea.
Scientists have toiled over these theories, experimented for years, and amassed piles of evidence. They know why some things are feasible now, while others will need a few years of work, and others are intractable and not worth starting. Expertise is not simply the opinion of a group of professionals—it is collective, carefully tested evidence that forms the very best knowledge we have at the present time.

There are many topics in this category that impinge on robotics and AI, some of which originate from futurologists who should know better. There is only space to mention one example: transhumanism. This is founded on the assumption that everything that makes up a person—the individual self, the life history, the sum total of experience—is contained within the brain. The brain is an electrical system that can be encoded in digital form, and therefore it should be possible, or so the story goes, to download a complete copy of a brain into a digital computer, thus capturing the entire personality of that particular person. At some future point, this “digital version” of a person could be reenergized in some kind of simulation, and then the person would “live” again.
But let’s consider this scenario logically. Suppose that a scanner could record the complete state of a brain at an instant in time. The captured digital data is a snapshot, a cumulative result of a cognitive life as it stands up to the point when the scanning was performed. But how is it to be used? A snapshot does not tell you what happens next.
Now suppose this digital “brain” is to be uploaded and restarted. As I’ve argued, humans and robots are so fundamentally different that they can never totally replace one another. So the only way transhumanism could work would be by uploading the digital brain into a human brain. This is the really difficult bit. The process would have to be very fast to set up the 20 billion cortical neurons into the exact state recorded in the downloaded snapshot. This requires some fantastic invasive technology that has not been proved even theoretically feasible. Also, the neural wiring diagram of one person is not the same as another, so how are these circuitry mismatches to be handled? All of this seems very remote; after all, even the scanner to get all this started is not yet realizable. And the process would be so unavoidably unpleasant for the recipient that hopefully it will never happen.
Like cryonics (the freezing of bodies in a kind of suspended animation), transhumanism is an attractive idea to those who wish to become semi-immortal. Such ideas are not new; George Bernard Shaw produced a play on this theme in 1921 (Back to Methuselah), but they have no scientific credibility. It should be clear by now why transhumanism simply won’t work. Modern robotics has shown how important embodiment is. Without a sensory-rich body, perception as we know it is impossible (never mind sentience). And enaction is also vital; our actions are entangled with our thoughts just as much as our feelings. The life process, the life cycle of the individual, cannot be separated from embodied cognition. This is the difference between biological brains and computer brains.
Imminent Threats
It can be harder for a journalist to cover [stories about] hope rather than fear.
—Laura Kuenssberg, BBC Radio 4, April 27, 2018
A really frightening aspect of AI technology is the damage that it could do in the hands of malevolent humans. Just as the close cooperation of humans and machines can deliver, and has delivered, tremendous benefits and advances, so that same partnership has the potential to wreak awful suffering and destruction. Consider the algorithms already working away on the internet, collecting information for news sources and feeds to social networks and media sites. The criteria for this news selection should emphasize measures like truth, accuracy, and integrity, as well as priority and relevance. But increasingly, we have seen populism driving the selection criteria, perhaps more than truth. News organizations can’t seem to resist the pull of fashionable and popular issues, no matter how trivial, misleading, or offensive they may be. This is how “fake news” becomes disseminated and established, and how a single (erroneous) report can turn people away from lifesaving preventive medicine; the scare about the triple MMR vaccine (for measles, mumps, and rubella) is a prime example.
The main social media platforms have already been hit by complaints, court cases, and substantial legal fines for their poor management and control of personal data. They have been accused of “human hacking,” otherwise known as social engineering. Although this term has recently been applied specifically to security crime (illegally extracting data from people), the original concept from the social sciences means any psychological technique that influences populations toward particular outcomes without their agreement or knowledge. In the 1950s, psychologists discovered a phenomenon known as subliminal perception. They found that if an image was flashed up for a fraction of a second during a film, the conscious mind would not register it, but it would nonetheless be seen by the subconscious mind and affect future behavior. This proved to be so insidious and effective that it has been banned or made illegal in most countries. This is an example of social engineering—controlling people’s desires, wishes, and preferences by means they are completely unaware of and cannot avoid.
But isn’t this exactly what is happening today? Social media, trading companies, and other organizations involved in harvesting personal data are no less engaged in social engineering, but apparently the law either does not apply to them or has not yet caught up. For example, it may be quite acceptable to collect data on people’s addresses and bank accounts when you’re trading with them and transactions are actively taking place. But if this data is later used, without their knowledge or agreement, to influence which products they buy, then this is a form of subliminal persuasion. It’s probably worse than we think—or want to think about!
We are all being spied on by these means: Our movements are being tracked, the places we visit are recorded, the products we buy are logged, and our preferences are being assessed by measuring the time that we linger over a particular product or event. We are being spied on without our knowledge or consent. What purpose could this data be used for? But if it can increase your profits, sway your political margins, or reduce crime and improve citizenship, isn’t that a good thing? Why should we care? The Age of Surveillance Capitalism (Zuboff, 2019) offers some convincing arguments as to why we really should care.
The real danger in social engineering is that it can become a tool for the control of populations. China is experimenting with linking benefits and rewards to good behavior. Capitalist societies already have this in a weak form, with credit ratings controlling access to mortgages and so on, but the idea of reward and punishment connected to digitally gleaned metrics of good citizenship harks back to George Orwell’s dire warnings of state control.
The only way around this problem is either some form of transparency, whereby users can see all their information everywhere it is held, or active regulation that prohibits and penalizes organizations that misuse their data. In either case, it’s a question of secrecy: the lack of access, and the need for transparency, are the important issues here. Obfuscation, by whatever means, is what enables criminal activity. Because these methods are insidiously creeping into the news and entertainment media, social media, and the advertising realm, we do need to look for better standards and principles that can be applied to our current digital, data-centric world.
Unfortunately, we cannot leave this to companies, which will cite “commercial confidentiality.” Regulators are needed to enforce legislated standards. That means governments must be involved. Furthermore, global superregulation is necessary in order to deal with worldwide digital technology. Some countries may not sign up, but, just as with other global issues, group pressure can help to encourage compliance.
Note that social engineering can also achieve positive goals. So long as influences on people’s behavior are minor and easily avoidable, usually by providing free choice, then it is often possible to reinforce positive values in a society. This small version of social engineering, known as nudge theory, has had many successes in increasing tax returns, improving reply rates to requests for information, and other efficiency gains for governments and companies (Thaler and Sunstein, 2008). The key point is that it is not secret: people can see it happening, and they still have choices.
Once again, we see that we must insist on truth and transparency—no “alternative facts,” no “confidential” sources or processes. We have the human right to know, over and above any excuses of “digital complexity.”
[...]
It's not all doom and gloom!
The late physician and statistician Hans Rosling became well known for presenting innovative documentaries on global health with titles like Don’t Panic–The Truth about Population and The Joy of Stats. He used graphs and novel dynamic displays to show, with great enthusiasm, that the world, contrary to most opinion, is becoming a better place. Rosling used reliable statistics to show that global population growth is slowing down and appears to be stabilizing. There are many fewer deaths from natural catastrophes, global health is improving dramatically in all countries, and altogether, the global standard of living is enormously improved over that of a century ago. Rosling’s argument is that we have a kind of instinctive pessimism; we notice the threats and the worries more than we do the good times. This is reinforced by the media, which struggle with uneventful, upward trends. The title of his posthumous book, written with his son and daughter-in-law, is appealing: Factfulness: Ten Reasons We’re Wrong about the World—and Why Things Are Better than You Think (Rosling, Rönnlund, and Rosling, 2018).

Another source of optimism is found in the work of Steven Pinker, who argues that violence and violent death are in decline and much less of a threat in modern life than in the past (Pinker, 2011). The basic point here is that tribal and casual violence declines as national governments gain control and enforce laws. Slavery, torture, and tribal clashes are much reduced, and, in the ideal, only the state executes war or violence. This leads Pinker to suggest we should use our talents to enter a new age of enlightenment, where the combination of reason and human values drives a progressive and peaceful age (Pinker, 2018).
These encouraging findings are not without detractors, but they are relevant to current concerns and the decisions that affect us all and require our involvement.