Leading with the Face

In my previous post about the embodied playtest (where we tested the first designs with actors), I commented on wanting to vary the emotional end-point of the experience. What I found during the playtest is that the testers could feel a bit bad about themselves, or at least be shocked, at the end of their session. I want to ensure there is at least one positive end-point to a session. So last night I flicked open Katherine Isbister’s book “Better Game Characters by Design: A Psychological Approach,” and funnily enough I opened it at a section on emotional feedback.

The section is about how the design of faces can influence the player’s emotional experience. Specifically, it discusses mirroring: how people involuntarily mirror other people’s facial expressions, which then influences their own emotional state. This means the design of a face in your game can directly influence the emotional state of the player. What stood out for me was Isbister’s comment:

Good character designers direct player emotions by using player-characters to underscore desirable feelings (such as triumph or suspense) and to minimize undesirable ones (such as fear or frustration). [page 151]

Isbister refers to the design of Link from The Legend of Zelda: The Wind Waker to illustrate the point. As you can see in this moment from The Wind Waker, the designers guide the player’s emotions through the emotions of Link (the player-character). He is about to be shot out of a cannon, but his emotions shift from fear to determination (see 1:53).

What happened during the embodied playtest with the actors is that a robot session or “scene” I had designed to facilitate player-empathy was more disturbing than intended. A large part of that was the delivery by the actor. She did a great job of feeling, and communicating with her face, the trauma the robot is experiencing. This, coupled with the text I had written, made the scene a much more negatively-intense experience than I had planned. This is why I work in directing as well as writing: so I can be a part of all the elements that shape the experience. So while I can temper the experience with my writing, I will also work with Simon (concept artist) and Paul (modeler and animator) to temper the experience via facial expression. This means it isn’t a case of the facial emotions duplicating the emotion of the dialogue (interface text), but of adding more complexity to it. And importantly, as Isbister notes, accenting the moments I want the installation visitor to be led to. The lead emotions, rather than all of them.
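One way I think about this in implementation terms (purely my own sketch – nothing from Isbister, and all names here are placeholders): the robot’s face shouldn’t just duplicate the emotion of the dialogue line, but blend it with a designer-chosen “lead” emotion that we want to accent.

```python
# Hypothetical sketch: blend the dialogue line's emotion with a
# designer-chosen "lead" emotion, rather than duplicating it outright.

def blend_expression(dialogue_emotion, lead_emotion, accent=0.7):
    """Return blend weights for the face animation.

    `accent` controls how strongly the lead emotion dominates the
    dialogue emotion (0.0 = pure dialogue, 1.0 = pure lead).
    """
    if dialogue_emotion == lead_emotion:
        return {dialogue_emotion: 1.0}
    return {lead_emotion: accent, dialogue_emotion: 1.0 - accent}

# A traumatic line delivered with a face accented toward determination,
# so the moment reads as resolve rather than pure distress.
weights = blend_expression("distress", "determination", accent=0.7)
```

The design choice this encodes is exactly the one above: the face adds complexity to the dialogue rather than mirroring it, and it is the accent weight that decides which emotion leads.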

The embodied playtest

On Tuesday 6th and Wednesday 7th August, the whole team came together for our first group onsite meet. Jacek, Simon, and I flew from Melbourne; Adam and Paul came from outer Brisbane. Over the two days, The Cube team gave us an induction to the site – showing us around the space and facilities. The Cube curator Lubi Thomas and UX designer Sherwin Huang shared things they’ve learned from previous installations, and we went over the game plan with The Cube technical team. We also visited the QUT Robotics Department to talk about their robot research projects, and see a Nao dance! Adam shares a video of it below:

The rest of the team was very excited about the space – they’re as keen on the possibilities as I am. We talked about the design of the project in light of seeing the space, and debated different approaches. The important part for me was also running an embodied playtest.

Continue reading

Meet with Dan Donahoo: “Robots@School”

robotsatschool-500x370

When “Robot University” was announced, journalist and digital education producer Dan Donahoo tweeted to me about a project he worked on: “Robots@School”.

Robots@School was a narrative-based research project to explore children’s expectations of robots and learning. Project Synthesis partnered with US-based research company Latitude and the LEGO Learning Institute to design a research framework and support the gathering and analysis of data worldwide.

The PDF report is on his Project Synthesis site. His research interested me because it shows that kids don’t have negative views of robots. This affects my design: if some of the visitors have no negative bias, then they won’t be going through a transformation…

Continue reading

Twists

I’ve been thinking about twists in interactive projects, so I posted to my Facebook page:

Hey interactive storytelling mates – I’m thinking about the experience of twists. I think most of the time they become an intellectual experience. The player suddenly sees another side to what they’ve been experiencing, and it becomes an intellectual moment about design. What I’m keen on are the times when a twist isn’t primarily an intellectual experience, but an emotional one. For instance, a reversal or reframing of your actions: “what have I done?” “oh, I did good!”. What are your thoughts on twists in interactive experiences?…

Continue reading

“Is robot killing all about human killing?”

I came across the research of Julie Carpenter:

My research examines how robot design encourages or discourages human emotional attachment and affects operator decision-making. Of particular interest to me are (U.S. military) EOD personnel human-robot interactions; specifically, experiences with field robot models used every day, such as PackBot and TALON.

Continue reading

A King Kong post

Saw the King Kong musical. I was really affected by the giant puppet: the realism (spectacle emotion) and the story – a giant beast we start off being scared of, but then feel really sad for as it is treated badly and dies. It got me thinking about the giant military robots, and how I want a reversal there.

I had already come up with the idea of a military robot, and, while talking with Simon, the idea that it could have lots of gadgets the visitor can activate with a remote control – like a cable that links to the robot and can activate different areas. This would be a fun thing to do. But how to communicate a reversal without words? I have thought about the backstory, and how these military robots are shaped by humans, not robots. They are victims of their nature, a nature created by humans. But it was when I saw King Kong, standing on the Empire State Building, being shot at by military planes, that it dawned on me. Every time the visitor activates a military gadget, it makes the robot bleed or hurt in some way. Then it is up to the visitor to keep going or not. It is a difficult choice, because the action is fun. Toy killer machines are fun. It is just a robot, so I shouldn’t have to stop having fun just because it is apparently being harmed. How does this relate to my theory of choice?
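The mechanic above can be sketched very simply – this is my own illustrative pseudocode-in-Python, with hypothetical names, not the installation’s actual code: every gadget activation gives the visitor fun feedback, but each one also visibly harms the robot, and the visitor alone decides whether to keep going.

```python
# Hypothetical sketch of the military-robot gadget mechanic: each
# activation is rewarding for the visitor, but also hurts the robot.

class MilitaryRobot:
    def __init__(self, health=10):
        self.health = health

    @property
    def is_hurt(self):
        return self.health < 10

    def activate_gadget(self, gadget):
        """Fire a gadget: fun feedback for the visitor, harm for the robot."""
        if self.health <= 0:
            return "the robot lies still"
        self.health -= 1
        return f"{gadget} fires! the robot flinches ({self.health} left)"

robot = MilitaryRobot()
robot.activate_gadget("rocket arm")  # fun for the visitor, but it hurts
```

Nothing in the code forces the visitor to stop – the difficult choice is left entirely to them, which is the point.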

Continue reading

First meet with Simon Boxer

Simonmeeting

Yesterday, on the 10th of July, this digital writing residency officially started. Today I met with the visual artist on the project, Simon Boxer, to talk ideas. It was a great chat, riffing off ideas about the robots and the nature of human-robot interaction. We spoke about people’s fears about robots – being wiped out by them; Simon added being replaced at work by them.

We spoke about constraints, and how, due to the limited time, I’ve chosen to create three robot types that are repeated or cloned – so six robots in total. We can reskin the robots, and I can alter the UI elements. We could also do gender differences (how would that work with robots?). I like the idea of robot twins, not clones. “We’re twins, not clones!” So brother and sister, maybe.

There are twelve multi-touch panels, and so we spoke about the spacing across the panels. It won’t be one every second panel for instance. Originally I had planned to have the robot types separated (1,2,3, and then 1,2,3). But in order to help with the background illustration, it is better to have the robot types close to each other. That way they can have their own *space*. The background illustration is vitally important – as it needs to have strong environmental storytelling. This project is all about concentrated storytelling: getting across lots with little.

The sketch above is a photo of my own scribbly notes during our conversation (Simon drew better ones). The section on the top left is about my thoughts on utilising people’s spatial-navigation bias to support the robot experience. The *wise* robots will have bodies that look thin, old, and pathetic – not cool tech. If we put the giant military robots in the centre, they will be the visual drawcard. So we put the domestic robot, with lots of cool gadgets and arms, on the left, and the pathetic wise robots on the right (the side from which the visitor enters the space). This means they will naturally be walked past, just as they are passed over because of their appearance. Appearance and spatial positioning work together to get people thinking less of these robots than of the others.

“I’m a Cyborg and it is OK”

I'm_a_Cyborg_film_poster

Watching Park Chan-wook’s (one of my favourite directors) feature film “I’m a Cyborg and it is OK”. There is a scene where the cyborg overhears a conversation in which one character confesses to “stealing a Thursday”. The cyborg is confused by this. I would like to do an interaction where the human visitor is forced to say weird things to the robot. The robot finds us really entertainingly weird – or illogical? It can’t be illogical, because that dynamic already exists in the robot–human relationship. “Does not compute!” But weird, fun stuff, like the interactions between the player and Wheatley in Portal 2, and my fun-park interactive story.

Loving the magic realism of the film. Can’t wait to do a film with magic realism. It makes me think about the possibility of doing magic realism in the installation. We could incorporate elements from the visitor environment into the installation, and give it a slightly different world – an alternate reality that exists right beside ours. This is part of the rationale behind why there is glass there… A robot university within a human university. Who is studying whom?

 

[Image source]

I saw The Cube!

I flew up to Brisbane to finally see The Cube in person! What a great space. I was amazed at the quality of the screen resolution close up, and at how the screens don’t smudge – they are specially made so they don’t.

Cube1

The zone our installation will be in is the first screen space at the opening of the building. So good (and mixed) traffic. It isn’t a dedicated space, though. This is fine, but it means that if we get the motion-sensing happening, it will have to cope with non-intentional triggering. For instance: not all people close to the robots will be approaching the robots.
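A first-pass way to handle that non-intentional triggering – purely my own sketch, assuming nothing about The Cube’s actual sensing setup, with made-up thresholds – is to only treat proximity as intent once someone has stayed within range for a minimum dwell time:

```python
# Hypothetical sketch: a visitor counts as "approaching" a robot only
# after dwelling continuously within range, so passers-by in the shared
# foyer traffic don't trigger the robots.

DWELL_SECONDS = 2.0   # placeholder threshold
RANGE_METRES = 1.5    # placeholder threshold

def is_intentional(samples, dwell=DWELL_SECONDS, rng=RANGE_METRES):
    """samples: time-ordered (timestamp, distance_to_robot) tuples.
    True if the visitor has been within range continuously for at
    least `dwell` seconds."""
    in_range_since = None
    for t, dist in samples:
        if dist <= rng:
            if in_range_since is None:
                in_range_since = t
            if t - in_range_since >= dwell:
                return True
        else:
            in_range_since = None  # they moved away; reset the clock
    return False

# Someone walking past: briefly in range, never dwelling.
passerby = [(0.0, 1.2), (0.5, 1.4), (1.0, 2.5)]
# Someone stopping at a robot.
visitor = [(0.0, 1.0), (1.0, 0.9), (2.5, 0.8)]
```

The thresholds would need tuning on site, but the shape of the filter – dwell time plus range, with a reset when the person moves away – is the part that matters for mixed-traffic spaces.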

Watching kids playing with the screens, it is clear that height is important. There need to be things down below as well as up high. We spoke about the special bus posters that are designed so kids can see things at their own height, and how different experiences can happen at different heights.

The senior curator of The Cube, Lubi Thomas, gave some guidance: make it something that can only happen in the digital realm. People love playing with gravity in the science installation. What is something people cannot do outside the digital realm?… Lots more taken in, but these are some quick thoughts to share…

 

Robots and Zoos

Watched a video on RealTime TV of robotics pieces at ISEA2013. I see a similar goal/outcome to what I am trying to achieve in a work by Petra Gemeinboeck and Rob Saunders: ‘Accomplice’ at Artspace, Sydney, 2 May – 16 June, 2013. Petra talks about zoos, and how people think the animals are performing for them, when the animals are actually performing in response to the humans trying to get them to perform. Once again, I’m fascinated by this idea of an assumption on the visitor’s side being turned back on them. Rob talks about ‘Accomplice’ and how people see the robots punching holes in the walls, but then, when they spend time with the work, they realise the robots are actually playing with the walls.

And that switch of viewpoint is an important moment for a lot of people who come and see the work. They switch their viewpoint from how their environment is being destroyed to how the robots’ environment is being explored and being creatively or playfully experimented with. […] It’s again going from a sort of fearful or disturbing encounter to one [that is a curious one]

It would be good to give the visitors the impression that the robots are learning just like they are. I don’t mean in terms of content; I mean technologically – AI. That your interaction has changed things in some way. For instance, every time a person does something that is good for the robot, a new light or object is added to the space, and likewise for bad actions. Something that shows people not only their mark, but also what is happening to the robots and their space because of people.
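As a quick sketch of that idea – my own placeholder names, not a real design spec – the space could simply accumulate visible marks of how visitors have treated the robots:

```python
# Hypothetical sketch: the installation space accumulates persistent,
# visible marks of visitor behaviour – good interactions add lights,
# bad ones add scorch marks (both names are placeholders).

class RobotSpace:
    def __init__(self):
        self.marks = []

    def record_interaction(self, kind):
        """kind: 'good' or 'bad'. Adds a persistent mark to the scene."""
        if kind == "good":
            self.marks.append("light")
        elif kind == "bad":
            self.marks.append("scorch")

space = RobotSpace()
space.record_interaction("good")
space.record_interaction("bad")
# space.marks is now ["light", "scorch"] – the visitors' mark on the space.
```

The point is persistence: the marks outlive the individual session, so later visitors see what earlier people did to the robots and their space.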

What if one of the robots is incredibly wise? What if this is the robot people keep coming back to, like ELIZA?…