Intelligence without the soul to balance it must of necessity be evil. —John Steinbeck

Many verbs that entail some advanced cognitive capacity are commonly used in predicates for subjects that are not human. All speakers are comfortable with sentences like these:

  • Verizon revamps mobile plans and ends 2-year contracts & subsidies.
  • Chevron has settled with homeowners over the oil spill at Red Butte in 2010.
  • Downing Street was warned about Kids Company last year.

We recognize that there is some metaphor at work in these sentences, but it’s a figurative leap that is easy to make: entities composed of humans can be understood to behave in concert, as if with one mind. The word corporation, after all, is based on the Latin term for "body," and one of its earliest senses is "a number of persons united, or regarded as united, in one body; a body of persons."

Most speakers, however, reject sentences like these:

  • Microsoft is vividly imagining a purple square.
  • Disneyworld feels extremely anxious.
  • The Pentagon experienced excruciating pain.

These sentences don’t work because their predicates imply the ability of a subject to experience what philosophers call "phenomenal consciousness," and you only get that if your consciousness arises in a single body that is connected to the world around you via sense organs. We make intuitive and (up to now) fairly easy judgments about whether entities are suitable subjects for either or both kinds of predicates above, based on our ideas about what it takes to have the capacity for sense-based stimulus and response. We seem to believe that certain kinds of cognition depend on experience that comes from having a body with sensorimotor capacities.

An experimental philosopher at Yale, Joshua Knobe, has explored the ways in which people make assessments about the capacity of other entities to experience the kind of cognition that we think of as uniquely human. His experiments show that information about physical constitution (e.g., having a body) plays an important role in our willingness to assume that certain entities can think and feel as we do.

We humans have a controlling interest in the way that verbs are interpreted, and the distinctions that I’ve noted above currently enjoy consensus. But for how much longer? I’ve been looking at a couple of the other places where representations of consciousness pop up these days, and I wonder if the boundaries of our capacity to ascribe phenomenal consciousness are beginning to expand a bit.

One place this might be happening is the Twitterverse. Current news stories show that writers ascribe a number of mental states to the Twitterverse that seem to endow it with capacities beyond those of corporations and other abstract bodies. In the last month the Twitterverse has, for example, done all of these things:

  • spoken.
  • moaned about . . .
  • wondered whether . . .
  • responded with a mixture of amusement and consternation.
  • been abuzz.
  • shone a spotlight on the World Games opening.
  • weighed in on the nonsense controversy.
  • not wasted a second when it came to . . .
  • drawn comparisons between . . .
  • howled with frustration . . .
  • had a lot of feelings about . . .
  • imagined the USA when Trump is elected . . .

Occasionally such ascriptions are made not to the Twitterverse but to Twitter itself. In such cases the context makes it obvious that what is meant is in fact the Twitter data stream—not Twitter, the company.

Granted, we all recognize in these sentences that metaphor is at work, and that the writers of these sentences are generalizing from patterns of what is "trending" on Twitter and then reporting the content as if the Twitter data stream were a single mind, when in fact it is something more like a hive mind composed of thousands if not millions of individual, embodied consciousnesses. Many speakers would reject these predicates if ascribed to some other abstract entity—a corporation or government agency, for example. But the Twitterverse is a new and unprecedented phenomenon, and it seems likely that its phenomenal consciousness (or our ability to talk about it as if it existed) is emergent.

More food for thought along these lines comes to light in the recent dystopian sci-fi thriller "Ex Machina." The plot [warning: spoilers galore ahead] concerns a kind of Turing test, in which a young programmer is lured to the heavily secured research headquarters of a genius who has created a very advanced robot. The programmer is asked to assess whether, appearance aside, the robot can think, act, and experience existence like a human. She (Ava, the robot, who is sexually attractive to the programmer) performs admirably: she convincingly uses predicates that are normally the preserve of human speakers, reporting that she feels nervous, she feels sad, she doesn’t want to make someone else feel uncomfortable. She expresses a desire for human contact, and she has an opinion on the states of mind of her human contacts. She is, in short, utterly convincing as a being capable of humanlike consciousness and experience. As her creator in the film puts it: "To escape, she would have to use imagination, sexuality, self-awareness, empathy, manipulation—and she did. If that isn’t AI, what the **** is?"

2020欧洲杯时间But then Ava violently destroys her creator without compunction and imprisons the duped programmer who assisted her before escaping alone from the research headquarters. The linguistically unsettling feature of Ava is her ability to mimic human sensibility convincingly through her speech but then to suggest through her behavior that her words do not map to expected meanings that are inextricably bound with human feelings.

In July, more than a thousand AI researchers, along with thought leaders such as Steve Wozniak, Elon Musk, and Stephen Hawking, signed an open letter about the potential for artificial intelligence to become a hostile and destructive force in the world, threatening and perhaps eventually even supplanting human life. The letter expresses fears that AI, acting without human intervention, might be able to "engage targets" and "search for and eliminate people meeting certain pre-defined criteria." It notes that autonomous, intelligent weapons (in other words, weaponized robots) would be "ideal for tasks such as assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group." Here again we see a level of discrimination and judgment ascribed to entities that possess humanlike intelligence and may enjoy some of the sensorimotor capacities of humans, but without the vulnerability that is part and parcel of the human condition: namely, a body that is subject to pain, suffering, loss, and death.

The common thread here is perhaps an unsettling of our certainty about the line between our humanity and the aspects of it that we can increasingly outsource to technology. It doesn’t matter so much that thousands of minds can converge on single emotions in the Twitter stream—but there is an understandable nervousness about putting a mind into a body without endowing the mind and the body with the qualities that distinguish and (we think) ennoble us. It is unnerving to think that we may no longer be able to depend on the genuineness of our use of the predicates that define our humanity.