Tweaking interactions for the big screen


Earlier I wrote about my approach to a twist in the Destructo UI session. At the beginning I was worried about treating the twist as an intellectual exercise rather than an emotional one. It seemed to me that twists in films, TV shows and books are usually intellectual, in that the experience of them is a thinking response (suddenly going back over all the moments in the story that were clues).

Nevertheless, I went ahead and created an interaction that seems to lead the player in one direction (that their role is to choose targets to kill). The technique of twists demands that you create at least two believable truths: the one you want people to believe or see, and the one that is actually the case. Both have to operate side-by-side. So I have been tweaking the interface to use ambiguous language: words such as “clear”, “rebuild” and “destroy”, which could refer to either killing humans or demolishing a building.

My early paper-and-pen playtests did show the interaction was successful: some testers were shocked to discover the twist, and another who did realise what was going on still found the experience satisfying because it was so easy to do harm to people. So the twist was working because I was cleverly leading people down one path. I wanted the twist because I wanted to highlight the bias operating in people: that robots and killing go together. But when I read the following paragraph in an article by Rosy Rastegar about “12 Years a Slave”, I changed my mind:

“Movies are an empathy machine,” Roger Ebert said. “Good films enlarge us, and are a civilizing medium.” To be earnestly enlarged – and civilized – by “12 Years” would mean to recognize the slavers’ pathological racism as tied to one’s own, even as it repels, and hold that difficult, internal confrontation long enough to cause a crisis within one’s sense of self and place in the world. Empathy compels us to see ourselves reflected in the very things we judge as evil in others, to implicate ourselves in society’s ills, and to rearrange our desires towards the building of a better life, for all of us.

I realised that people need to really know what they’re doing if I want to get through to them. If my goal is a deeper understanding of themselves, then I need them to make conscious choices. It is weaker if they’re not really making decisions but instead blindly following a path they’re led down. So I can put the explicit language back, language such as “demolition training” instead of the ambiguous “destruction training”. This means the goal of the interaction is obvious. But the key is that I make each choice easy to make. I’m not putting up consequences and warnings, so it is easy for them to decide to be their worst. There is nothing sitting between them and their morality.

So the outcome will (I presume) be the same, but the personal connection to the outcome will be stronger. It isn’t about me tricking them (which is not what I wanted), but about showing them themselves.

In addition, here is what I have noticed with both the Destructo UI and Clunkybot dialogue interactions: the environment is quite noisy. Every medium has its affordances. The experience of a tablet is different to that of a phone, which is different again to a theatre screen. The screens at The Cube are in a public space with frequent traffic. People often stop, play with the screens, and then continue on their journey. Some do make special trips to The Cube, but people are never alone. It is never a private interaction. So people have a lot of noise in their heads, and visual distractions in their periphery. This means I need to make sure the language communicates clearly through the mental and visual noise. Once the dialogue UI is properly running, I think I will need to change the language (to take people through the steps at the beginning, and to shorten the overall interaction as well).

Testing Kinect, object response, and the dialogue plugin

For our playtest yesterday there weren’t many changes to the appearance and animations, but we were able to test some of the responsive aspects of the installation. One of the best aspects of having the installation on the screens is being able to watch people interact with it. A mom pointed out the big robot to her son and so Adam set the weapons in action to show the kid.

[Image: happy kid]

When the kid saw the robot hands open up to fire the laser, he yelled “nippers!”. Fun. 🙂

[Image: nippers]


Working out the Dialogue System

A visitor to the Robot University installation can interact with the robots in different ways. One of the robots involves dialogue interaction. The user can select messages to send to the robot, and it responds. It is an interaction that involves the user remotely running a mission on Mars. They command the Mars rover through the clunky and sometimes anxious robot in front of them.
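
To make the shape of that interaction concrete, here is a minimal sketch of the kind of branching structure a select-a-message dialogue implies. The node names and lines below are placeholders I have made up for illustration, not the installation’s actual script or tooling.

```python
# A minimal sketch of a select-a-message dialogue, in the spirit of the
# Clunkybot interaction. All node names and text are hypothetical.

DIALOGUE = {
    "start": {
        "robot": "Rover link established. I... think it's working. What should we do?",
        "choices": [
            ("Run a systems check first.", "systems_check"),
            ("Drive towards the crater.", "drive_crater"),
        ],
    },
    "systems_check": {
        "robot": "Checking. Wheels fine. Battery fine. Me? Slightly anxious.",
        "choices": [("Okay, now head for the crater.", "drive_crater")],
    },
    "drive_crater": {
        "robot": "Rolling out. Please don't make me do anything dramatic.",
        "choices": [],  # end of this branch
    },
}

def run(node_id="start"):
    """Walk the dialogue graph: show the robot's line, offer messages to send."""
    node = DIALOGUE[node_id]
    print("ROBOT:", node["robot"])
    while node["choices"]:
        for i, (text, _target) in enumerate(node["choices"], 1):
            print(f"  {i}. {text}")
        pick = int(input("Send message: ")) - 1
        node = DIALOGUE[node["choices"][pick][1]]
        print("ROBOT:", node["robot"])

if __name__ == "__main__":
    run()
```

The point of sketching it this way is that the player only ever chooses from pre-written messages, so the writing job is really about shaping the small set of paths the visitor can take.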

I have been playtest-driven with this project, and because editing programs are not always publishing programs, I’ve had to keep changing which tools I use. For the first playtest I conducted with actors, I used the free online interactive narrative publishing tool Inkle Writer. I had begun writing the dialogue just in Word, but switched to this online service.

[Image: InkleWriter]


Wed Nov 6th Playtest: The Day Destructo Disappeared

This morning we had a long time (an hour and a half?) before all of the elements were working. So Adam and Brian kept making changes and uploading new builds to check whether the changes fixed it. For a long time, Destructo, our trusty giant robot, couldn’t be seen (see pic below). But we got it working in the end.

[Image: no Destructo]


Our first technical test!

On Wednesday 18th September, we conducted our first technical test on the screens at The Cube. With projects that have never been done before, tests not only give you an insight into what works and what doesn’t, but importantly also tell you things you didn’t know would happen. The team had been preparing for this test, getting the 3D models and code ready so they could be displayed on the screens. The process isn’t a simple click-to-display option. The screens in the space are run by different computers, so there are complex calculations that have to take into account rendering the imagery across different screens…which, it turns out, have different frame widths. So to be honest, I wasn’t sure if anything would work on this first test.
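
To give a rough sense of why the frame widths matter, here is a minimal sketch of bezel compensation across a multi-screen wall. All of the numbers, names and the physical layout are hypothetical; The Cube’s actual configuration and rendering pipeline will differ.

```python
# A minimal sketch of bezel compensation across a multi-screen wall.
# All dimensions here are made up; the real installation's values differ.

from dataclasses import dataclass


@dataclass
class Screen:
    px_width: int          # horizontal resolution of this panel
    mm_width: float        # physical width of the visible panel
    mm_bezel_right: float  # physical frame between this panel and the next


def horizontal_spans(screens, scene_mm_width):
    """Return the (start, end) fraction of the scene each panel should render.

    The scene is treated as one continuous physical strip; the slice of the
    scene "hidden" behind each frame is simply skipped, so imagery lines up
    across panels instead of stretching or repeating at the joins.
    """
    spans = []
    cursor_mm = 0.0
    for s in screens:
        start = cursor_mm / scene_mm_width
        end = (cursor_mm + s.mm_width) / scene_mm_width
        spans.append((start, end))
        cursor_mm += s.mm_width + s.mm_bezel_right  # jump over the frame
    return spans


if __name__ == "__main__":
    wall = [
        Screen(px_width=1920, mm_width=1210.0, mm_bezel_right=30.0),
        Screen(px_width=1920, mm_width=1210.0, mm_bezel_right=45.0),  # wider frame
        Screen(px_width=1920, mm_width=1210.0, mm_bezel_right=0.0),
    ]
    total_mm = sum(s.mm_width + s.mm_bezel_right for s in wall)
    for i, (s, (a, b)) in enumerate(zip(wall, horizontal_spans(wall, total_mm))):
        print(f"panel {i} ({s.px_width}px wide): render scene fraction {a:.3f}-{b:.3f}")
```

Even in this toy version you can see the problem we ran into: if the frame widths differ from panel to panel, each computer has to be told a slightly different slice of the scene, and getting any one of those slices wrong makes the imagery jump at the joins.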

[Image: Adam and Brian]


Leading with the Face

In my previous post about the embodied playtest (where we tested the first designs with actors), I commented on how I want to vary the emotional end-point of the experience. What I found during the playtest is that the testers could feel a bit bad about themselves, or at least be shocked, at the end of their session. I want to ensure there is at least one positive end-point to a session. So last night I flicked open Katherine Isbister’s book “Better Game Characters by Design: A Psychological Approach,” and funnily enough I opened the book at a section on emotional feedback.

The section is about how the design of faces can influence the player’s emotional experience. Specifically, it talks about mirroring: how people involuntarily mirror other people’s facial expressions, which then influences their own emotional state. This means the design of a face in your game can directly influence the emotional state of the player. Indeed, what stood out for me was Isbister’s comment:

Good character designers direct player emotions by using player-characters to underscore desirable feelings (such as triumph or suspense) and to minimize undesirable ones (such as fear or frustration). [page 151]

Isbister refers to the design of Link from The Legend of Zelda: The Wind Waker to illustrate the point. As you can see in this moment from The Wind Waker, the designers guide the player’s emotions through the emotions of Link, the player-character. He is about to be shot out of a cannon, but his emotions shift from fear to determination (see at 1:53).

What happened during the embodied playtest with the actors is that a robot session or “scene” I had designed to facilitate player empathy was more disturbing than intended. A large part of that was the delivery by the actor. She did a great job of feeling, and communicating with her face, the trauma the robot is experiencing. This, coupled with the text I had written, made the scene a much more negatively intense experience than I had planned. This is why I work in directing as well as writing, so I can be a part of all the elements that shape the experience.

So while I can temper the experience with my writing, I will also work with Simon (concept artist) and Paul (modeler and animator) to temper the experience via facial expression. This means it isn’t a case of the facial emotions duplicating the emotion of the dialogue (interface text), but of adding more complexity to it. And importantly, as Isbister notes, accenting the moments I want the installation visitor to be led to. The lead emotions rather than all of them.

The embodied playtest

On Tuesday 6th and Wednesday 7th August, the whole team came together for our first group onsite meeting. Jacek, Simon, and I flew from Melbourne, and Adam and Paul came from outer Brisbane. Over the two days, The Cube team gave us an induction to the site – showing us around the space and facilities. The Cube curator Lubi Thomas and UX designer Sherwin Huang shared things they’d learned from previous installations, and we went over the game plan with The Cube technical team. We also visited the QUT Robotics Department to talk about their robot research projects, and to see a Nao dance! Adam shares a video of it below:

The rest of the team was very excited about the space – they’re as keen on the possibilities as I am. We talked about the design of the project in light of seeing the space, and debated different approaches. Another important part for me was running an embodied playtest.


Meet with Dan Donahoo: “Robots@School”

[Image: Robots@School]

When “Robot University” was announced, journalist and digital education producer Dan Donahoo tweeted to me about a project he worked on: “Robots@School”.

Robots@School was a narrative-based research project to explore children’s expectations of robots and learning. Project Synthesis partnered with US-based research company Latitude and the LEGO Learning Institute to design a research framework and support the gathering and analysis of data worldwide.

The PDF report is on his Project Synthesis site. His research interested me because it shows that kids don’t have negative views of robots. This affects my design: if some of the visitors have no negative bias, then they won’t be going through a transformation…


Twists

I’ve been thinking about twists in interactive projects, and so I posted to my Facebook page:

Hey interactive storytelling mates – I’m thinking about the experience of twists. I think most of the time they become an intellectual experience: the player suddenly sees another side to what they’ve been experiencing, and it becomes an intellectual moment about the design. What I’m keen on are the times when a twist isn’t primarily an intellectual experience, but an emotional one. For instance, a reversal or reframing of your actions: “what have I done?” “oh, I did good!”. What are your thoughts on twists in interactive experiences?…
