Tweaking interactions for the big screen

UI

Earlier I wrote about my approach to a twist in the Destructo UI session. At the beginning I was worried about treating the twist as an intellectual exercise rather than an emotional one. It seemed to me that twists in films, TV shows and books are usually intellectual, in that the experience of them is a thinking response (suddenly going back over all the moments in the story that were clues).

Nevertheless, I went ahead and created an interaction that seems to lead the player in one direction (that their role is to choose targets to kill). The technique of twists demands that you create at least two believable truths: the one you want people to believe or see, and the one that is actually the case. Both have to operate side-by-side. So I have been tweaking the interface to use ambiguous language: words such as “clear”, “rebuild” and “destroy”, which could refer either to killing humans or to demolishing a building.

My early paper-and-pen playtests did show the interaction was successful: some players were shocked to discover the twist, and one who did realise what was going on still found the experience satisfying because it was so easy to do harm to people. So the twist was working because I was cleverly leading people down one path. I wanted the twist because I wanted to highlight a bias people carry: that robots and killing go together. But when I read the following paragraph in an article by Rosy Rastegar about “12 Years a Slave”, I changed my mind:

“Movies are an empathy machine,” Roger Ebert said. “Good films enlarge us, and are a civilizing medium.” To be earnestly enlarged – and civilized – by “12 Years” would mean to recognize the slavers’ pathological racism as tied to one’s own, even as it repels, and hold that difficult, internal confrontation long enough to cause a crisis within one’s sense of self and place in the world. Empathy compels us to see ourselves reflected in the very things we judge as evil in others, to implicate ourselves in society’s ills, and to rearrange our desires towards the building of a better life, for all of us.

I realised that people need to really know what they’re doing if I want to get through to them. If my goal is for them to understand themselves more deeply, then I need them to make conscious choices. It is weaker if they’re not really making decisions but instead blindly following a path they’re led down. So I can put the language back: “demolition training”, for example, instead of the ambiguous “destruction training”. This means the goal of the interaction is obvious. But the key is that I make each choice easy to make. I’m not putting up consequences and warnings, so it is easy for them to decide to be their worst. There is nothing sitting between them and their morality.

So the outcome will (I presume) be the same, but the personal connection to the outcome will be stronger. It isn’t about me tricking them (which is not what I wanted), but about showing them themselves.

In addition, here is something I have noticed with both the Destructo UI and Clunkybot dialogue interactions: the environment is quite noisy. Every medium has its affordances. The experience of a tablet is different from a phone, which is different from a theatre screen. The screens at The Cube are in a public space with frequent traffic. People often stop, play with the screens, and then continue on their journey. Some do make special trips to The Cube, but people are never alone; it is never a private interaction. So people have a lot of noise in their heads and visual distractions in their periphery, which means I need to make sure the language communicates clearly through the mental and visual noise. Once the dialogue UI is properly running, I think I will need to change the language (to take people through the steps at the beginning), and shorten the overall interaction as well.

Testing Kinect, object response, and the dialogue plugin

For our playtest yesterday there weren’t many changes to the appearance and animations, but we were able to test some of the responsive aspects of the installation. One of the best aspects of having the installation on the screens is being able to watch people interact with it. A mom pointed out the big robot to her son, so Adam set the weapons in action to show the kid.


When the kid saw the robot hands open up to fire the laser, he yelled “nippers!”. Fun. 🙂



Our latest playtest – 30th Oct

We met today for our last playtest in October. Brian (a Cube staff member) told us that the equipment stopped working yesterday, but after the staff worked on it all day and night, it is fine today. Except for one thing: sound has started crackling and popping, and isn’t playing at all in some of the areas. The team are working on that, but as a back-up I’ve arranged for us to use another zone which has the same setup as ours. That way we can still go ahead with our audio testing schedule too.

One thing I noticed with this build is that Destructo looks the right size for the environment now. We’ve been tweaking the positioning of Destructo and the environment around it for a few weeks, and today the size and position finally look as if they match. I realise now, though, that if we had designed a giant robot with smaller arms, we wouldn’t have needed to create as much of a sense of distance. One of the problems has been that the long arms raise up towards the visitor, so there needs to be enough space for that movement. If the robot were built differently, it could be closer to the visitor. But at the same time, if the robot were built differently, it would be harder to do a good pull-back weapon-firing sequence.

