“But does empathy for bots matter?”

Had a dream last night in which I received a message that 2021 would be a milestone/key point. I woke up and searched for stuff on 2021. On the Wikipedia page for 2021, I saw that “Do Androids Dream of Electric Sheep?” is set in 2021 (although earlier editions are set in 1992).

“Unlike humans, the androids possess no empathic sense. In essence, Deckard probes the existence of defining qualities that separate humans from androids.” (Wikipedia)

Empathy. I know people with serious personality disorders who lack empathy. They’re not robots, but they can fake a lot of emotions, and they do severe harm. So a lack of empathy is not a purely robot trait!

Also saw a response to the project today: “But does empathy for bots matter?”

This is where I think framing robots as a new species that humans have invented, rather than as bits of intelligent metal, will be more persuasive than dancing around the fringes of whether robots should be cared for or not.

But this is where designing games around emotions will only resonate with people who are open to their own emotions or capable of feeling. It reminds me of the Turing Test, and then the Loebner Prize (in which ALICE Bot, the conversational software that got me into interactive storytelling all those years ago, has placed).

I have Star Trek: Voyager on TV in the background while I work. It’s the episode (“Virtuoso”) in which there is a “technologically superior race”:

“Voyager encounters a slightly more technologically advanced race of people called the Qomar – a race of people who measure at a maximum of 5 feet, who are minorly injured from a reaction to Voyager’s engines. They appear to have enormous egos and dislike the Doctor simply for being a holographic entity. The Qomar become enthralled, after having initially dismissed him as a simple hologram, with the Doctor’s ability to sing since they never conceived the concept of music.” (Wikipedia)

Thinking about the disconnect between two people who each think of themselves as superior. Or even the twist when we realise our bias has led us in the wrong direction (such as in this great short story I read a week or so ago).

So, with these thoughts swimming around my head, I suddenly realised an approach I could use:

The visitors to the installation think they’re going to Robot University to teach robots, whereas the robots are there to teach humans… learn about graduating from humans… something where there is an assumption that is discovered to be incorrect. Visitors will assume they are there to train, teach, or command the robots when it may be the other way around. The way for the robots to graduate is to teach a human something…

In the pitch for the project, I had a domestic robot who does whatever the visitor wants, including turning off all the lights in the whole installation – affecting everyone. But when the visitor tries to do it again, the robot proudly shows a photo of themselves with a ribbon for “Best Autonomous Robot” (or something like that; they had shown another ribbon earlier).

This relates to one of the aesthetic issues – what is the reason for, and the relationship between, the robots, the screen, and the humans? Why is there a screen?

Also, need to keep overall goals in mind:

  • replayability (therefore it needs branching choices with wildly different outcomes). I’m dead keen on playing with choices in which all options have good and bad aspects and the choice is about what you value (e.g. family over work? friendship over family? honesty over loyalty?) – see the sketch after this list
  • variation of interaction – touch-screens (touching robot parts, dialogue or text action selections, object selection) and motion-sensing (dancing…)
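Thinking out loud about that replayability goal, here’s a minimal sketch, in Python, of how value-based branching could be modelled. Everything in it (the Option/Choice names, the example scene, the value labels) is hypothetical illustration, not anything from the actual project:

```python
from dataclasses import dataclass

@dataclass
class Option:
    """One selectable answer; every option helps some values and hurts others."""
    label: str
    effects: dict       # value name -> change, e.g. {"loyalty": +1, "honesty": -1}
    next_scene: str     # id of the scene this option branches to

@dataclass
class Choice:
    """A branching point presented to the visitor."""
    prompt: str
    options: list

# Hypothetical example choice: neither option is purely "good" or "bad".
cover_for_robot = Choice(
    prompt="Your robot classmate broke the lab printer. The professor asks who did it.",
    options=[
        Option("Tell the truth", {"honesty": +1, "loyalty": -1}, next_scene="robot_detention"),
        Option("Cover for them", {"honesty": -1, "loyalty": +1}, next_scene="shared_secret"),
    ],
)

def apply_option(values, option):
    """Fold an option's effects into the visitor's running value profile."""
    updated = dict(values)
    for value_name, delta in option.effects.items():
        updated[value_name] = updated.get(value_name, 0) + delta
    return updated

# An ending (which robot graduates, what it teaches the visitor) could then be
# picked from the accumulated profile, so replays diverge by what you valued.
profile = apply_option({}, cover_for_robot.options[1])
print(profile)  # {'honesty': -1, 'loyalty': 1}
```

The point of the structure is that no option is scored as correct; the running profile just records what the visitor valued, and that is what would drive the wildly different outcomes on replay.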
