Producing personhood


During Final Jeopardy! in the episode that aired on February 16, 2011, a computer screen sat between longtime champions Brad Rutter and Ken Jennings. On it, swirling green and blue lines represented the thought patterns of IBM’s Watson, a question-answering computer system.

Watson’s hardware filled a neighboring room as he worked to process natural language and sift through 200 million pages of data to find the winning answer. The category was 19th Century novelists. Alex Trebek, the show’s host, read the clue: “William Wilkinson’s ‘An Account of the Principalities of Wallachia and Moldavia’ inspired this author’s most famous novel.”

Watson had thirty seconds to find the correct response.

“Who is Bram Stoker?” he answered in typical Jeopardy! style, and a group of IBM scientists jumped to their feet in applause.

Scientific developments, from the Copernican Revolution to Darwin, have challenged the perceptions humans hold of themselves and their place in the universe.

That narrative of advancement is best understood through the evolving relationship between humans and technology, says Jen Rowland, a Ball State University philosophy instructor. Relatively new technologies, like automobiles or the Internet, are now often considered necessities of modern life.

Artificial intelligence (AI) is still more of a luxury than a necessity. Advanced versions appear mostly in specialized scientific applications, like medical diagnosis. It hasn’t dramatically affected the average person yet, but it could evolve into something far more influential.

If programmers create a machine with a human-like mind, says Jeffrey Fry, a Ball State University philosophy professor, it might lead to a diminished sense of human superiority and uniqueness.

Technology, he says, puts humanity in its place.

Complex technological advancements force humans to answer questions about themselves that they might not have asked before, Rowland says. The debated possibility of artificial general intelligence (AGI), systems that might resemble humans in their ability to respond to varied and unpredictable environments, raises a central question: what makes humans people? Scientists debate how closely a mechanical being must meet those criteria to be considered a person as well.

Understanding Current Artificial Intelligence

Scientists don’t agree on the definition of artificial intelligence, according to a 2016 report published by the Office of Science and Technology Policy (OSTP). Some use a loose definition, counting any “computerized system that exhibits behavior that is commonly thought of as requiring intelligence.” Others believe AI must be able to rationally solve complex problems or figure out the appropriate actions to achieve goals in any situation.

But either way, according to a report by the Machine Intelligence Research Institute (MIRI), most professionals do agree that current artificial intelligence “falls short of human capabilities in some critical sense.”

To create a machine that resembles complex human functioning closely enough to be considered a person, humans must decide what elements define personhood, and then whether it is possible to create electronic versions of those elements.

Rowland says some people believe this kind of artificial general intelligence is impossible because whatever is at the heart of human intelligence cannot be programmed into a machine. Others, including Rowland, believe artificial general intelligence will eventually happen because there is no such thing as a supernatural human element.

She believes human brains are no different from complicated computers, so scientists are bound to figure out how to simulate all the mind’s processes.

But according to MIRI, they aren’t close yet. Today’s AI systems don’t resemble those of science fiction. Their behavior is mostly predictable. They can’t exactly think, the report said, but they can learn from experience.

Before AI, there were expert systems, says Sushil Sharma, associate dean of Information Systems at Ball State University. These computer systems were programmed with fixed reactions to every scenario the creators could think of. If some part of a situation wasn’t programmed, the system could not make the necessary logical connections to react at all.

Now, computerized systems with “self-adaptive” capability can use experience and patterns to make their own decisions when presented with scenarios not written into their programming. Self-driving cars use this kind of process to avoid collisions in unpredictable traffic environments.
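To make Sharma’s distinction concrete, here is a minimal sketch in Python using a hypothetical toy traffic scenario; the rules, feature values, and action names are made up for illustration and don’t come from the article or any real system. The first function looks up a fixed rule table and fails on anything its creators didn’t anticipate; the second generalizes from past examples by acting like the most similar situation it has seen.

```python
# A minimal sketch contrasting a fixed rule table ("expert system") with a
# simple self-adaptive learner. All scenarios, features, and actions here
# are hypothetical toy values.

from math import dist

# --- Expert system: every reaction is written in advance ---
RULES = {
    ("obstacle_ahead", "dry_road"): "brake",
    ("obstacle_ahead", "wet_road"): "brake_gently",
    ("clear_road", "dry_road"): "maintain_speed",
}

def expert_system(scenario):
    # If the creators never anticipated this exact scenario,
    # the system has no way to react at all.
    return RULES.get(scenario)  # None for anything unprogrammed

# --- Self-adaptive system: generalizes from past experience ---
# Each example: (distance_to_obstacle_m, road_grip_0_to_1) -> action
EXPERIENCE = [
    ((5.0, 0.9), "brake"),
    ((5.0, 0.3), "brake_gently"),
    ((80.0, 0.9), "maintain_speed"),
    ((80.0, 0.3), "slow_down"),
]

def adaptive_system(situation):
    # Nearest-neighbor: act like the most similar situation seen before,
    # even if this exact situation was never programmed.
    _, action = min(EXPERIENCE, key=lambda ex: dist(ex[0], situation))
    return action

if __name__ == "__main__":
    print(expert_system(("obstacle_ahead", "icy_road")))  # None: not programmed
    print(adaptive_system((12.0, 0.4)))                   # "brake_gently": generalized
```

Real self-adaptive systems use far richer learning methods, but the contrast is the same: the rule table only covers what was written in, while the learner produces a decision for situations outside its explicit programming.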

Nevertheless, says Sharma, this ability still depends on human programmers. Scientists have not created mechanical people. They still haven’t come close to understanding the full power of the human brain, Sharma says, so creating artificial intelligence that meets or surpasses the brain’s complexity might not be possible.

Decades of research aimed at creating artificial general intelligence have not been very successful, according to the OSTP report, and most experts agree that it will take several decades more.

Creating Something Ethical

While Sharma supports the continued development of artificial intelligence, he wouldn’t want to see it in all areas of life. He says it should be reserved for objective procedures, like surgery or space exploration, and avoided for tasks that demand the human emotion and empathy he doesn’t believe can ever be programmed into computers.

The intention shouldn’t be to create a human-like machine that can perform the assorted tasks of a human’s daily routine, says Sharma.

A person is more than a data processor. Humans have morals, emotions, and personalities. They make decisions based on more than numbers and patterns, says Sharma—sometimes it’s just a gut feeling. The challenge is creating an ethical computerized person.

Sharma and Rowland agree that ethical behavior can’t be programmed into artificial intelligences. Wisdom comes from observation of one’s own actions and the actions of others, says Sharma. It comes from experience. Decision-making is based on upbringing, value systems, and the human emotions of sympathy and empathy, so Sharma doesn’t think artificial general intelligence is possible.

But Rowland doesn’t believe in a human element. She believes AI systems can develop their own morals through learning, experience, and mistakes, just as humans do.

Creating trustworthy artificial general intelligence will require software that “thinks like a human engineer concerned about ethics, not just a simple product of ethical engineering,” according to MIRI.

Attributing Moral Status

Experts generally agree that current AI systems have no moral status, according to MIRI. This means humans can treat them as things rather than people, altering, deleting, or otherwise manipulating and using computer systems as they please. The current ethical limits concerning AI are all based on the well-being of other humans, not the well-being of the systems themselves.

The attributes of moral status remain unclear, but the MIRI report explains two often-accepted requirements. Sentience is the capacity to feel and experience the world subjectively. For example, sentient beings can feel pain and suffer. Sapience, or personhood, is associated with higher intelligence, such as self-awareness and the ability to respond to reason.

According to the MIRI report, any AI system with either of these characteristics can’t be treated like an inanimate object, but the level of moral status would vary. A sentient but not sapient AI might be treated like a living, non-human animal. But if it also had sapience like that of a human, according to the report, it would have the full moral status of a person.

Rowland agrees with this perspective. She believes that if AGI is ever developed that can think in the same way humans do, it might be immoral to program it, just as it would be immoral to brainwash a human.

Author Isaac Asimov’s first law of robotics was that “a robot may not injure a human being or, through inaction, allow a human being to come to harm.” Rowland doesn’t believe this “law” could apply to artificial general intelligence.

She says turning off an intelligence with moral status, even if it were endangering humans, would be murder. And MIRI agrees that AGI would be protected under human laws.

“It makes no moral difference whether a being is made of silicon or carbon, or whether its brain uses semiconductors or neurotransmitters,” the report said.

According to this perspective, an artificial general intelligence that met the requirements for moral status could not be created merely for human benefit. It couldn’t be used by humans for any purpose. It would need to be free.

The extent to which human biology is necessary for a mind to function like a human’s remains an unanswered question. It might not be a mystical, supernatural element that prevents AGI’s existence, Fry says, but simply some irreproducible physical aspect of human neurobiology.

Fry also suggests that the current definition of intelligence might be subjective. Maybe machines would have their own kind of intelligence. If so, their moral status couldn’t be determined simply by whether their particular kind of personhood resembles a human’s.

To create an artificial conscious being, says Rowland, humans must first solve the mystery of how their own intelligence works.
