The embodied playtest

On Tuesday 6th and Wednesday 7th August, the whole team came together for our first group onsite meet. Jacek, Simon, and I flew from Melbourne; Adam and Paul came from outer Brisbane. Over the two days, The Cube team gave us an induction to the site, showing us around the space and facilities. The Cube curator Lubi Thomas and UX designer Sherwin Huang shared things they’ve learned from previous installations, and we went over the game plan with The Cube technical team. We also visited the QUT Robotics Department to talk about their robot research projects, and to see a Nao dance! Adam shares a video of it below:

The rest of the team was very excited about the space – they’re as keen on the possibilities as I am. We talked about the design of the project in light of seeing the space, and debated different approaches. The most important part for me, though, was running an embodied playtest.

What is an Embodied Playtest?
I came up with this idea early on, when I reflected on what process I wanted to use for this installation. I knew I wanted this project to be playtest-driven rather than document-driven. One thing I have learned is that everything about your project is perfect in your mind. When I put it on paper it is less perfect, but it still exists only on paper as a near-perfect vision. It is only when the project is enacted, through playtests and production, that the real obstacles emerge. Obstacles, and how you respond to them, are what shape your project. There is less friction in your mind and on paper than there is when you manifest your world. This is why, whenever I run workshops on pervasive gaming or transmedia, I get the students or industry professionals to create prototypes rather than pitches. I’ve mentored at industry residentials for years, and I believe that the outcome of a document or “pitch deck” is least helpful to writers, designers, and directors. It helps producers get finance for the project, but it doesn’t give the creatives any real insights into what does and doesn’t work.

I’ve also found, as a writer, that the earlier I get my story and characters into the mouths of actors and the hands of players, the better. An important part of the writing process is negotiating how the world you’ve imagined can be made manifest in production, and in the minds of players/watchers/listeners/etc. You certainly don’t have control over everything, but figuring out how to provide the right triggers is my aim. Writing for games, screenplays, audio drama, and so on involves your words being enacted and mediated by other people and things. So it isn’t just about writing for a delivery you can do in your own room, but encoding the writing so its essence may survive the transformation points of actors, artists, animators, and so on. So, because we’ve only got a short period of time to create this robot installation, I decided to go straight to the closest point of friction and transformation possible: a playtest.

Not even a month into the project, though, it wasn’t feasible to have a technical playtest on the huge screens (or even on the small screens). So I planned instead to have a playtest with actors. Of course, playtests with paper and pen are not uncommon in gaming. But I thought using actors to play the “robot” characters would facilitate more learning. I have been doing embodied playtests for my own social game (“5 Minute Spy Academy”), where I play the narrator/game-master/game-engine along with bits of paper for the GUI (graphical user interface) elements. So I knew it could work practically. What I hoped was that using actors would facilitate insights into the human-character and human-robot interactions. How do people respond to the actors’ expressions and movements? How does that affect their choices in the GUI? Also, what insights can the actors give us into the design of the robots? What expressions do they use? What emergent moments are there that can be used in the design? It was also important to do all of this in the space itself, to see what the experience is like in context.

Concerns?
I hadn’t used actors in a playtest before; it was the first time the team had done a playtest together; it was the first time my work had been *embodied* by someone else; and we were doing all of this in public. My writing and design weren’t polished, and it was the first time the team was seeing my work in a playtest. Also, sometimes people don’t understand that projects go through a process of being weird and ugly. So I was prepared for scrunched faces, confusion, and maybe some insecurity about the project. But I felt the embodied playtest could reap valuable insights, fast-forward the development process, and facilitate shared ownership.

[Image: Christy with a Nao robot, in the QUT Robotics Lab]

How did it go?
It went better than I expected. I did feel a bit torn between encouraging the team to contribute design insights and guiding the actors. I’m used to doing each of these things in separate sessions. I’m used to guiding actors for a recorded performance, not for a playtest. How is it different? For a playtest, it doesn’t matter as much what they do every moment. You don’t have to keep going until each line is recorded in the manner you need. I just wanted some basic physical responses to inform the modelling and animating, as well as my design. So it was recorded for those purposes. The whole process takes longer, though, because it isn’t just about getting a performance. Instead, it is about doing the performance and then discussing insights and coming up with ways to vary the experience with the actors. The team certainly came to the table with reflections on the process. But I tried to keep things short and relevant to what we could do with the actors. I didn’t want to crowd the actors’ head space with too much discussion. I know from directing performances that the less rattling on you do the better, because you don’t want the actor shifting into an analytical state of mind (which is the death of intuitive performance). Upon reflection, though, I think having improv actors who also learn the process of group playtests (where everyone is pitching in ideas and insights) works. It doesn’t need to be a sacred performance space; it can be more loose, and everyone can talk about anything.

What went better than I expected was the emotional impact of the *scenes*, or sessions, I had designed. One is a dialogue-selection interaction with a robot. This session is designed to facilitate empathy in a number of ways. The other session tested was with a giant robot and interface, and involves a twist at the end. The user (who was actually one of the actors) was totally surprised by the twist. So my high-level aims with those sessions are working. But I did also discover some things I want to change.

Changes you want to make?
At the ending moments of both sessions, the visitors didn’t necessarily feel good about themselves. While this works in terms of having an emotional experience, I don’t want all three robot interactions to leave the human visitor feeling bad about themselves. So I’ll be looking at my writing-design to make sure there are different end states.

It was also interesting to note the different responses to the dialogue interaction. At the beginning, I offer a few choices on how you can greet the robot. I originally designed the interaction to begin with the robot bowing to the visitor, so they could choose whether to bow back or respond some other way. However, because the “robot” didn’t bow, the bow option seemed too extreme for the moment and everyone chose the neutral option. So I’ll need to tweak the opening now that the robot won’t be bowing (it doesn’t fit how I’ve developed the character).

There were also differences in the motivation for the choices at the end. One visitor (a team member actually – strangers were too scared to try out this weird thing we were doing) said they chose to deny the robot’s request because they didn’t feel the robot was emotionally stable enough. This was an unanticipated response, due mainly to the traumatised robot performance the actor gave, and partly to my writing. So I want to tweak the writing and work with the animator to make sure the performance doesn’t corrupt the choice at the end. If the choice is pretty clear, then it isn’t much of a choice. There need to be fairly evenly weighted good reasons for agreeing to and for denying the robot’s request. But beyond this tweaking on my end, this outcome showed that there could be great discussions after the experience, where people can discuss their choices.

For the giant robot session, it was clear the visitor needed something to show the repercussions of their choice. Although the robot does respond by firing the device they chose, the “attack” goes off into the unknown. The visitor chose a particular thing to “attack” because they wanted to see what would happen to it. Because the activity is a simulation (within the fictional world), it doesn’t need to be actual. So what I’ll do is include the result of their choice in the GUI. This also brings the attention back from the robot to the GUI, which is where the next step is. So this is a nice attention loop guiding the visitor. Another aspect of this expectation insight is how specific the images in the GUI need to be. The visitor chose a “building”, for instance, because she wanted to see all the people running for their lives when it was bombed. As I had guessed, and has now been confirmed, this means the buildings need to be clearly decrepit in some way and, at the least, somewhat obviously vacant. I also changed the order of the GUI interaction, and loved some of the moves the actor did. He looked around, for instance, with this presence that was watching everyone in the space. He also waved goodbye to the visitor, which was kind of fun and kind of mocking.

It was great having the actors do this in front of the actual screens we’ll be using, as we could see the experience in the context of the site. Because the interactions are one-on-one intimate experiences, people will be watching other visitors quite a lot. At present, none of the other installations do this – they’re all fairly dispersed and shallow interactions. The robot installation involves personal and deep interaction (as in, you spend a couple of minutes with each robot). People watching the interactions felt great empathy for the robot and wanted to know what the visitor was doing to make it so upset. This works well for crowd performance and for drawing the next visitor in.

[Image: Quick team meeting before the playtest]

Other insights?
Generally, in terms of just being in the space, a few changes have taken place. We’ve reordered the robots now that we know the right-hand side is the high-traffic section with the most casual interaction. I had put the dialogue-driven robot there, but that interaction isn’t suited to shallow interactions. The domestic robot, with its simple touch interaction, is now there.

Now that I’m realising just how personal the interactions will be, I want to put some shallow and casual interaction in the space that anyone can do. For instance, in the physics display currently installed, a visitor can throw blocks into space. Just something with low effort and a big reward. But I want it to be something that relates to the fictional world, of course, and the message of the piece. Timing is a big issue, though, so I’m grading every idea according to whether it is essential to the main experience or not. If it isn’t, it goes into the “if we have time” list. It would be easier if we had more time; then there could be more animations to communicate things. But I also relish this chance to communicate as much as possible with restricted animations and words.

Related to that, another condition we learned about is that the installation cannot have sound as part of its critical design. The sound has to be turned off during exam times, so the whole thing needs to work without sound. This just means creating canny duplications of moments across sound and visuals, action or word. The good thing is we discovered a huge subwoofer in the interior of the installation. So we’ll be utilising that sweet thing for the launch!
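To make that duplication idea a bit more concrete, here’s a minimal sketch of the kind of structure I mean (in TypeScript, purely for illustration – the cue names, the `soundEnabled` flag, and the stub functions are all hypothetical, not our actual system): every story beat is carried by a visual cue that works on its own, with audio only ever layered on top.

```typescript
// Hypothetical sketch: each story beat has a visual cue that conveys the
// moment on its own; audio only reinforces it, so muting sound during
// exam times doesn't break the experience.
type Cue = {
  visual: string;   // animation or GUI change that carries the meaning
  audio?: string;   // optional sound layered on top
};

const cues: Record<string, Cue> = {
  robotGreets: { visual: "robot_wave", audio: "greeting_chirp.ogg" },
  attackFired: { visual: "gui_attack_result", audio: "subwoofer_rumble.ogg" },
};

// Stubs standing in for the installation's real rendering and audio systems.
function showVisual(id: string): void { console.log(`visual: ${id}`); }
function playAudio(file: string): void { console.log(`audio: ${file}`); }

function playCue(name: keyof typeof cues, soundEnabled: boolean): void {
  const cue = cues[name];
  showVisual(cue.visual);               // always fires
  if (soundEnabled && cue.audio) {
    playAudio(cue.audio);               // only when sound is allowed
  }
}

// During exams, playCue("robotGreets", false) still shows the wave.
playCue("robotGreets", true);
```

The point of the design is simply that no meaning lives in the audio channel alone – the sound is a bonus layer, which is why losing it during exams costs nothing critical.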

There were lots of other insights and changes we made over those couple of days, and so we’re busy developing quickly so we can move into the next stage of modelling and animation!
