We’ve found you a lovely review of perspectives on AI, written by the Knife journalists. Enjoy!
“I now know why you cry, but it is something I can never do.”
– T-800, “Terminator 2: Judgment Day”
Who invented robots
Karel Čapek managed to look into the future when, in 1920, he wrote his play “R.U.R.”, also known as “Rossum’s Universal Robots”. With this invention by the Czech science fiction writer, the word “robot” entered everyday life and science fiction.
The play begins in a factory where artificial people – “robots” – are created from synthetic materials. In Czech, “robota” means “forced labor: the duty a serf must fulfill on the land of his feudal lord.” A robot is not only a fantastic machine but also something primitive and familiar to us – an elevator, say, or an ATM.
Despite the name, Čapek’s creations are closer to androids, since they look like people and are capable of independent thought. At first they happily work for the benefit of humans, but everything ends with an uprising of the machines – the first of many in fiction. Within a few years, the play was translated into thirty languages, and “robot” replaced the older “automaton” in languages around the world.
“Čapek’s robots are a consequence of the traumatic transformation of world society by the First World War and the Ford assembly line.”
– critic John Reader
An android (from the Greek for “humanlike”) is a humanoid robot or synthetic organism that looks like a human being but is not actually alive. Often the resemblance is so strong that it is almost impossible to tell a person from an android, but there are exceptions. For example, C-3PO from the Star Wars universe is a droid (a shortened form of “android” coined specifically for Star Wars). Replicants from Blade Runner and Data from Star Trek fall into the android category. Of course, this division is not spelled out in any set of rules or laws: R2-D2 from Star Wars is also called a “droid,” although this robot is really a sentient mailbox that in no way resembles a person. There is also the term “gynoid” – a feminine humanoid robot.
Where did the cyborgs come from?
A cyborg is a “cybernetic organism” made at least in part from organic tissue. Cyborgs are hybrids of biological and artificial, organic and mechanical. A person with a mechanical arm or heart is, in fact, also a cyborg.
The first cyborg in cinema is considered to be the title character of Eugène Lourié’s black-and-white film The Colossus of New York. Probably the best example of a cyborg in pop culture is RoboCop from Verhoeven’s movie, since he is built around a human body. There is a lot of controversy over the Terminator: although the films declare it a cyborg, it could rather be classified as an android, because it has no organic body parts other than the skin it wears like a coat – the Terminator can function just fine without it. In Terminator 2, Schwarzenegger’s character describes itself as “living tissue over a metal endoskeleton.” Functioning human body parts, as befits an exemplary cyborg, it does not have.
How the ethics of robotics appeared
If Čapek was the first to open the door to pop culture for robots, then Isaac Asimov pushed them through it, ruthlessly subordinating all robots, cyborgs and androids to his laws of robotics. Asimov, who along with Arthur C. Clarke and Robert Heinlein is considered one of the three main luminaries of science fiction, is responsible for many tropes of the genre that have since hardened into templates: intergalactic empires, robot companions, planet-spanning cities.
It is reliably known that Asimov read “R.U.R.” and spoke of it as follows: “Čapek’s play, in my opinion, is incredibly bad, but it is immortal thanks to that one word. It brought the word ‘robot’ into English, and through English into all the other languages in which science fiction is now written.”
Asimov’s Three Laws were created precisely to prevent a situation like the one that unfolds in “R.U.R.” and in the science fiction of the 1930s–1940s. Asimov introduced these rules in his 1942 short story “Runaround” (included in the 1950 collection I, Robot), although they had been foreshadowed in some earlier stories. The Three Laws, quoted from the “Handbook of Robotics, 56th Edition, 2058 A.D.”, are:
A robot may not injure a human being or, through inaction, allow a human being to come to harm.
A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
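The Three Laws form a strict priority ordering: each law yields to the ones above it. Purely as an illustration – the `Action` class and its flags below are invented for this sketch, and no real robot works this way – the hierarchy can be written as a short priority check in Python:

```python
from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool = False           # would this action injure a human?
    inaction_harms_human: bool = False  # would *not* acting injure a human?
    disobeys_order: bool = False        # does it violate a human order?
    destroys_robot: bool = False        # does it destroy the robot?

def permitted(action: Action) -> bool:
    """Check an action against the Three Laws in strict priority order."""
    # First Law: a robot may not injure a human or, through inaction,
    # allow a human to come to harm.
    if action.harms_human:
        return False
    if action.inaction_harms_human:
        # Refusing to act would itself violate the First Law, so the
        # action is required regardless of the lower-priority laws.
        return True
    # Second Law: obey human orders (any conflict with the First Law
    # has already been ruled out above).
    if action.disobeys_order:
        return False
    # Third Law: self-preservation, but only as the lowest priority.
    if action.destroys_robot:
        return False
    return True

# Saving a human at the cost of the robot is allowed (Law 1 outranks Law 3):
print(permitted(Action(inaction_harms_human=True, destroys_robot=True)))  # True
# Injuring a human is forbidden, whatever else is at stake:
print(permitted(Action(harms_human=True)))  # False
```

The order of the `if` statements does all the work: an action that saves a human is permitted even if it destroys the robot, because the First Law check runs before the Third.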
One of the most egregious violations of the Three Laws in pop culture is the behavior of the android Ash in the movie “Alien.” To carry out Special Order 937, Ash must sacrifice the crew of the spaceship Nostromo in order to bring alien specimens back to the corporation.
Asimov later added a Zeroth Law, which outranks the other three: a robot may not harm humanity or, through inaction, allow humanity to come to harm. It is this Zeroth Law that is the main source of conflict: the good of an individual and the good of humanity are not always the same.
Moreover, a human being is concrete, while humanity as a whole is an abstraction. A robot cannot harm a person – but what if a robot finds a way to prove that by harming one person it will help humanity?
The Laws, in one form or another, have migrated from Asimov’s books into those of Terry Pratchett, Lester del Rey, Roger MacBride Allen, Jack Williamson and others, as well as into the “Doctor Who” TV show and the Alien and RoboCop franchises. In the 1970s, Bulgarian science fiction writers Lyuben Dilov and Nikola Kesarovski added two more laws:
- A robot must always identify itself as a robot.
- A robot must know that it is a robot.
The latter two laws are demonstrated in action in the series “Westworld”, where the self-awareness of androids plays an important role and the question of what distinguishes a robot from a real person is constantly discussed. In the first season especially, it is unclear whether Bernard is aware that he is not human.
Will a robot harm humanity?
If in Čapek’s time the appearance of robots seemed a fantasy, today it is closer to reality. Some predict that within the next decade computers will surpass the performance of the human brain. There is already heated discussion of robots’ place in modern society and of the rules they must obey. After all, artificial intelligence has no moral norms imposed on it by society, parents, teachers or environment – only what its programming lays down.
In 2017, Elon Musk told the National Governors Association that “artificial intelligence is a fundamental risk to the existence of human civilization.”
Musk is a co-founder of the San Francisco-based OpenAI project. The organization is concerned about the potential risk that artificial intelligence could harm society. In 2016, OpenAI released a platform for measuring the intelligence of AI agents across games, applications and websites. Musk later left the project, citing a conflict of interest with his work on AI at Tesla, but the project continues to function.
The inspiration for founding OpenAI was the fears of Stephen Hawking and Stuart Russell, who believed that advanced AI could gain the ability to modify and improve itself, leading to the extinction of humanity. Hawking thought AI could be responsible for “the worst event in the history of civilization”; Elon Musk agreed, calling AI “the main existential threat to humanity.”
OpenAI’s representatives are far from outright reactionaries. They believe that “it is difficult to overestimate how much benefit human-level artificial intelligence could bring,” but also difficult to grasp “how much it could harm society if mistakes are made in its creation.” The project expects radical improvements in AI to come very soon and holds that preparing deterrent measures cannot wait.
Evolution, segregation, or a society of total control? What are transhumanist technologies, and where are they taking us?
According to a Pew Research Center study, 72% of respondents are concerned about the impact of artificial intelligence on the workplace. In his book “Who Owns the Future?”, Jaron Lanier argues that advances in technology will destroy millions of jobs and deepen social inequality. If only those who manage computers run society and earn a good income, what happens to ordinary blue-collar workers? A significant share of people, according to the scientist, will live in poverty while a tiny technocratic elite prospers. And do we have the right to sacrifice so much of humanity, leaving so many people overboard for the sake of progress?
According to Lanier, the middle class is excluded from the online economy. By convincing users to hand over valuable personal information in exchange for free services, firms acquire vast amounts of important data at no cost. Lanier calls these companies “Siren servers,” a reference to the Sirens of the Odyssey: instead of paying each person who contributes to a database, Siren servers concentrate wealth in the hands of the few who control the data.
A growing number of socio-political groups are concerned about the development of AI. American Democratic Congressman John Delaney and Republican Pete Olson launched the Artificial Intelligence Caucus. Its goal is to inform policymakers and public figures about the technological, economic, and social effects of AI development.
In addition to Elon Musk’s OpenAI research center, there is the Partnership on AI – a joint project of Google, Facebook, Apple, Amazon, IBM and Microsoft, companies that actively use AI to manage their own workers. Created in January 2017, this industry-wide consortium aims to “set AI best practices and educate the public.” In October, it was joined by Terah Lyons, who had worked on AI and robotics research and their integration into society during the Barack Obama administration.
Will a robot harm a person?
In September of this year, Russia and the United States blocked UN attempts to restrict the development of robotic weapons, a goal of the Campaign to Stop Killer Robots. The Independent notes the particular interest of these two countries, as well as South Korea, Israel and Australia, in creating weapons systems completely independent of humans.
So far, all such developments have run up against the problem of power supply. Nevertheless, according to “The Future Starts Today”, a report by the UK Ministry of Defence, robots and genetically modified fighters – like the “universal soldiers” from the Jean-Claude Van Damme film of the same name – will fight the wars of the future.
Robots will be used especially widely in army formations as signal troops, as well as in the air force.
DARPA, the US Defense Advanced Research Projects Agency, intends to create a new generation of artificial intelligence as close as possible to the human mind. In 2016, media reported that the US military had invested $62 million in chips for soldiers that would allow them to communicate directly with computers. If they succeed, cyborgs will become very real. Conor Walsh, professor of mechanical and biomedical engineering at Harvard, told CNN that the implant “will change everything,” adding that “in the future, exoskeletons will be controlled by implants.”
In parallel, the US Special Operations Command is developing TALOS – a military exoskeleton similar to the Iron Man suit from the Marvel universe.
A functional prototype is expected as early as this year. Even so, many doubt the project will succeed, since fully rearming the armed forces would be too expensive for too little benefit. Since development began in 2013, TALOS has been redesigned five times. Program director Colonel James Miller admitted in 2017 that, beyond power supply, the exoskeleton’s problems include developing adequate protection against enemy fire without making the suit too heavy.
In August 2018, media reported that Russia is conducting research in the same direction and has achieved some success with the Ratnik-3 exoskeleton, presented at the Army-2018 forum. What militaries around the world have dreamed of since the eighties may soon become reality.
Nevertheless, instead of a world without human armies and wars, we may get the decay of society. The army is the largest national institution, one that embodies the strength of the state and serves to integrate society. Political scientist Anthony D. Smith believes that “war myths are effective in creating the experience of connectedness and inclusion in the whole; they feed the culture.” The army is a symbol of the power of the country it serves, and the robotization of troops would strip them of their unique identity.
Paul Scharre, author of “Army of None: Autonomous Weapons and the Future of War”, told The Verge: “There are many reasons to think that reducing human control over violence is not such a good idea, but stopping the development of technology is difficult <…>. There is an old question: do we control technology, or does it control us? I don’t think there is a simple answer to that.”
Will the robots obey human orders?
The AI Now group, led by New York University researcher Kate Crawford, aims to prevent the emergence of authoritarian AI.
The project doubts that democracy can survive the era of artificial intelligence and enormous volumes of digital information. There are already concerns about the viability of existing democratic societies, and advanced dictatorships can use artificial intelligence to spy on their citizens and persecute the unwanted.
Similar processes, of the kind Crawford describes, are already under way in China, where AI is used for total surveillance and information gathering.
Do modern robots even know about ethics?
In reality, robots and artificial intelligence do not obey the Three Laws by definition – it all depends on what they are programmed to do, what their capabilities are, and who their masters are. A conventional robot vacuum cleaner has no “mind” that would help it understand whether it is breaking a law. Even the most sophisticated and advanced robots of today cannot understand and interpret the Three Laws. Yet the more robots evolve, the more pressing the question of ethical constraints becomes.
Futurist and transhumanist ideologist Hans Moravec proposed adapting the laws of robotics for “corporate minds” – corporations controlled by AI that use robots to manufacture their products. Moravec considered the emergence of such firms only a matter of time as far back as the mid-90s, but so far corporate ethics remains more a marketing tool than a way of fighting for the public good.
“AI development is a business, and business is notorious for not being interested in fundamental constraints, especially philosophical ones,” wrote Robert J. Sawyer in Science, pointing out that the US military-industrial complex is the primary source of funding for robotics research. According to the author, as long as the military is more interested than anyone in creating artificial intelligence, no ethics will penetrate the field. Perhaps only the creation of a new kind of Doomsday Machine will put the ethical question squarely on the agenda.
A human being can feel and express empathy, but an android cannot. Sounds like the plot of a science fiction novel? It really is one! Check out our author’s column on Philip K. Dick’s legacy here.