Making-of documentary teaser

During the creation of Robot University, a group of documentary makers from QUT have been capturing footage. They’re putting together a documentary on the process, but for now they’ve released this teaser. It doesn’t include any footage of the final project state, so everything in it is work-in-progress.

The spike at the end

At the end of November, the playtests increased from once a week to twice a week, and then four times a week. We ran tests on Monday 25th and Friday 29th of November, and then on the 3rd, 4th, and 5th of December the Melbourne team members were in Brisbane. This was the first time Jacek and Simon were able to see their work on the big screens in person. We had held off finalising the lighting and sound until they could see and hear everything live, because tweaking the lighting and perspective remotely wasn’t working. We then ran more tests on Monday 9th and Wednesday 11th, and in the following week, our last onsite at The Cube, we tested on Monday 16th, Wednesday 18th, and Thursday 19th.

[Photo gallery: the final group shot, the team studying, fixing, and at their desks, a whole side view of the screens, boys playing, Clunky, the last day, new press, and parents and kids playing]

During January I tested onsite, making small tweaks in preparation for the handover…

Tweaking interactions for the big screens

UI

Earlier I wrote about my approach to a twist in the Destructo UI session. At the beginning I was worried about treating the twist as an intellectual exercise rather than an emotional one. It seemed to me that twists in films, TV shows, and books are usually intellectual, in that the experience of them is a thinking response (suddenly going back over all the moments in the story that were clues).

Nevertheless, I went ahead and created an interaction that seems to lead the player in one direction (that their role is to choose targets to kill). The technique of a twist demands you create at least two believable truths: the one you want people to believe or see, and the one that is actually the case. Both have to operate side by side. So I have been tweaking the interface to use ambiguous language: words such as “clear”, “rebuild”, and “destroy”, which could refer either to killing humans or to demolishing a building.

The early paper-and-pen playtests I conducted did show the interaction was successful: some players were shocked to discover the twist, and another who did realise what was going on still found the experience satisfying because it was so easy to do harm to people. So the twist was working because I was cleverly leading people down one path. I wanted the twist because I wanted to highlight a bias people carry: that robots and killing go together. But when I read the following paragraph in an article by Rosy Rastegar about “12 Years a Slave”, I changed my mind:

“Movies are an empathy machine,” Roger Ebert said. “Good films enlarge us, and are a civilizing medium.” To be earnestly enlarged – and civilized – by “12 Years” would mean to recognize the slavers’ pathological racism as tied to one’s own, even as it repels, and hold that difficult, internal confrontation long enough to cause a crisis within one’s sense of self and place in the world. Empathy compels us to see ourselves reflected in the very things we judge as evil in others, to implicate ourselves in society’s ills, and to rearrange our desires towards the building of a better life, for all of us.

I realised that people need to really know what they’re doing if I want to get through to them. If my goal is a deeper understanding of themselves, then I need them to make conscious choices. It is weaker if they’re not really making decisions but instead blindly going down a path they’re led down. So I can put the explicit language back: language such as “demolition training” instead of the ambiguous “destruction training”. This makes the goal of the interaction obvious. But the key is that I make each choice easy to make. I’m not putting up consequences and warnings, so it is easy for them to decide to be their worst. There is nothing sitting between them and their morality.
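As a concrete picture of the change (a hypothetical sketch of my own, not our actual build; the keys and strings are made up): the swap is essentially one strings table for another, with no confirmation step added in either case.

```python
# Hypothetical sketch of the label swap as a UI strings table.
# The ambiguous set could mean killing or demolition; the explicit
# set makes the goal plain. Neither adds warnings or confirmations,
# so each choice stays easy to make.
AMBIGUOUS = {
    "mode_title": "Destruction training",
    "action": "Clear",
}

EXPLICIT = {
    "mode_title": "Demolition training",
    "action": "Demolish building",
}

USE_EXPLICIT = True  # the change described above

def ui_text(key: str) -> str:
    """Return the current UI string for a given key."""
    return (EXPLICIT if USE_EXPLICIT else AMBIGUOUS)[key]
```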

So the outcome will (I presume) be the same, but the personal connection to that outcome will be stronger. It isn’t about me tricking them (which is not what I wanted), but about showing them themselves.

In addition, here is something I have noticed with both the Destructo UI and the Clunkybot dialogue interactions: the environment is quite noisy. Every medium has its affordances. The experience of a tablet is different to a phone, which is different to a theatre-scale screen. The screens at The Cube are in a public space with frequent traffic. People often stop, play with the screens, and then continue on their journey. Some do make special trips to The Cube, but people are never alone; it is never a private interaction. So visitors have a lot of noise in their heads and visual distractions in their periphery, which means I need to make sure the language communicates clearly through the mental and visual noise. Once the dialogue UI is properly running, I think I will need to change the language: take people through the steps at the beginning, and shorten the overall interaction as well.
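I imagine the change looking something like this (a rough sketch only; the step text, the limit, and the function names are all hypothetical, not the actual build): step people through the basics up front, then cap how long the exchange can run.

```python
# Hypothetical sketch: guide visitors through the opening steps of the
# dialogue interaction, then keep the whole exchange short so it works
# for passing foot traffic in a noisy public space.
ONBOARDING_STEPS = [
    "Touch a message to send it to the robot.",
    "The robot answers on the big screen.",
    "You're guiding a rover on Mars.",
]

MAX_EXCHANGES = 6  # cap the overall length of the interaction

def run_interaction(choose_message, robot_reply, show):
    """Step the visitor in, then run a bounded back-and-forth."""
    for step in ONBOARDING_STEPS:
        show(step)
    for _ in range(MAX_EXCHANGES):
        msg = choose_message()    # visitor picks from the offered messages
        if msg is None:           # visitor walked away
            break
        show(robot_reply(msg))    # the robot responds
```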

Testing Kinect, object response, and the dialogue plugin

For our playtest yesterday there weren’t many changes to the appearance and animations, but we were able to test some of the responsive aspects of the installation. One of the best parts of having the installation on the screens is being able to watch people interact with it. A mum pointed out the big robot to her son, so Adam set the weapons in action to show the kid.

[Image: happy kid watching the robot]

When the kid saw the robot hands open up to fire the laser, he yelled “nippers!”. Fun. 🙂

[Image: the “nippers”]


Working out the Dialogue System

A visitor to the Robot University installation can interact with the robots in different ways. One of the robots offers a dialogue interaction: the user selects messages to send to the robot, and it responds. The interaction has the user remotely running a mission on Mars, commanding the Mars rover through the clunky and sometimes anxious robot in front of them.
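For anyone curious about the shape of this kind of interaction, here is a rough sketch of how branching dialogue is commonly stored (my own illustration; the node names and lines are made up, and it isn’t the project’s actual data format): each node pairs the robot’s line with the messages the visitor can select, and each selection points at the next node.

```python
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class DialogueNode:
    """One beat of the conversation: the robot's line, plus the
    messages the visitor can select, each pointing at a next node."""
    robot_line: str
    choices: Dict[str, str] = field(default_factory=dict)  # reply text -> next node id

# A tiny, entirely made-up fragment of the Mars-mission exchange.
DIALOGUE = {
    "start": DialogueNode(
        "The rover is at the crater rim. I... I'm not sure about this slope.",
        {"Proceed down the slope": "descend",
         "Hold position and scan first": "scan"},
    ),
    "descend": DialogueNode("Okay. Descending. Slowly. Very slowly."),
    "scan": DialogueNode("Scanning. Yes. That feels safer, thank you."),
}

def advance(node_id: str, reply: str) -> str:
    """Return the next node for the visitor's selected reply."""
    return DIALOGUE[node_id].choices.get(reply, node_id)
```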

I have been playtest-driven with this project, and because the programs you write in are not always the programs you can publish from, I’ve had to keep changing which tools I use. For the first playtest I conducted with actors, I used the free online interactive narrative publishing tool Inkle Writer. I had begun writing the dialogue in Word, but switched to this online service.

[Image: the dialogue in Inkle Writer]


The swarm is alive!

For today’s playtest there was more tinkering to figure out technical problems. Thankfully we worked out what was stopping the Kinect from working, and also found the culprit behind the dwindling frame rate. Once these were sorted, we were able to enjoy the scene at close to its proper composition.
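For readers wondering how a dwindling frame rate gets tracked down, the usual first step is simply measuring frame times and flagging the spikes. A generic sketch of that idea (not our actual tooling; the function and its parameters are hypothetical):

```python
import time

def watch_frame_rate(render_frame, frames=600, budget_ms=33.3):
    """Render a run of frames and report any that blow the time budget.
    A frame time that creeps upward over the run usually points at a
    leak or an accumulating workload rather than a one-off stall."""
    worst = 0.0
    for i in range(frames):
        start = time.perf_counter()
        render_frame()
        elapsed_ms = (time.perf_counter() - start) * 1000
        worst = max(worst, elapsed_ms)
        if elapsed_ms > budget_ms:
            print(f"frame {i}: {elapsed_ms:.1f} ms (budget {budget_ms} ms)")
    print(f"worst frame: {worst:.1f} ms")
```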

[Image: the scene]

The scene is looking great. The lighting at the back of the hangar works well, and the foreground environment has gained convincing detail now that the dirtiness of living has been added to it. Some people who hadn’t seen Destructo’s new golden lighting really liked it.


Wed Nov 6th Playtest: The Day Destructo Disappeared

This morning we had a long wait (an hour and a half?) before most of the elements were working. So Adam and Brian kept making changes and uploading new builds to check whether each change fixed things. For a long time, Destructo, our trusty giant robot, couldn’t be seen (see pic below). But we finally got it working.

[Image: the scene with Destructo missing]


Our latest playtest – 30th Oct

We met today for our last playtest of October. Brian (a Cube staff member) told us that the equipment stopped working yesterday, but after he worked on it all day and night, it was fine today. Except for one thing: the sound has started crackling and popping, and isn’t playing at all in some of the areas. The team are working on that, but as a back-up I’ve arranged for us to use another zone with the same setup as ours, so we can still go ahead with our audio testing schedule.

One thing I noticed with this build is that Destructo now looks the right size for the environment. We’ve been tweaking the positioning of Destructo and the environment around it for a few weeks, and today the size and position finally match. I realise now, though, that if we had designed a giant robot with smaller arms, we wouldn’t have needed to convey as much distance. One of the problems has been that the long arms raise up towards the visitor, so there needs to be enough space for that movement. If the robot were built differently it could be closer to the visitor; but then it would also be harder to do a good pull-back weapon-firing sequence.
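To put rough numbers on that trade-off (the figures here are illustrative only, not our actual measurements): the depth the robot needs in front of it is set by how far the arm tips reach toward the visitor as the arms sweep up.

```python
import math

def forward_clearance(arm_length_m: float, start_deg: float, end_deg: float) -> float:
    """Horizontal clearance needed in front of the shoulder while the arm
    sweeps from start_deg to end_deg (0 = pointing straight at the visitor,
    90 = straight up). The worst case is the angle closest to horizontal."""
    if start_deg <= 0 <= end_deg:
        closest = 0.0  # the sweep passes through horizontal: full reach
    else:
        closest = min(abs(start_deg), abs(end_deg))
    return arm_length_m * math.cos(math.radians(closest))

# Illustrative figures: a 6 m arm sweeping through horizontal needs the
# full 6 m of depth, while a 3 m arm would let the robot stand twice as
# close to the visitor.
print(forward_clearance(6.0, -30, 80))  # 6.0
print(forward_clearance(3.0, -30, 80))  # 3.0
```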

[Image: Robot Uni playtest, 30/10/13, middle panel]


It’s Alive

Hi, my name is Paul. I am responsible for the robot models and animation on the Robot Uni Project. I am lucky enough to be working with a group of talented folks, and have mostly been going back and forth with Simon, who has been working on the robot designs and helping me translate them into 3D models. I think we are in a good place with the Destructo model for now. Here is a quick comparison image of where the model is so far.

[Image: Destructo model comparison]

Firepower

Here are some model sheets for the weapon arms of our beastly demolition mech, Destructo.

I’ll be texturing these (and possibly sculpting them), so I focussed on communicating the base forms to our 3D modeller, Paul.

[Images: Destructo laser arm 1 with grenade launcher; Destructo laser arm 2]