We met today for our last playtest in October. Brian (a Cube staff member) told us that the equipment stopped working yesterday. But after working on it all day and night, it was fine today. Except for one thing: the sound has started crackling and popping, and isn’t playing at all in some of the areas. So the team are working on that. As a back-up, I’ve arranged for us to use another zone which has the same setup as ours. That way we can still go ahead with our audio testing schedule.
One thing I noticed with this build is that Destructo looks the right size for the environment now. We’ve been tweaking the positioning of Destructo and the environment around it for a few weeks. Today Destructo does look as if the size and position match. I realise now, though, that if we had designed a giant robot with smaller arms, we wouldn’t have needed to convey the sense of distance as much. One of the problems has been that the long arms raise up towards the visitor, so there needs to be enough space for that movement. If the robot were built differently, it could be closer to the visitor. But at the same time, if the robot were built differently, it would be harder to do a good pull-back weapon-firing sequence.
I also invited Andrew Sorenson (the creator of the Physics Playroom installation), and Marcus (co-creator of the Virtual Reef installation) dropped in too. Both said things I already knew, but the fact they thought it was important spurred me to solve the problems straight away. One is that there needs to be some activity in the top part of the screen (the projector space). It is quite static at the moment. We have already planned to have little robot lights flying around there, so that should be fine. If needed, we’ll also include Destructo’s *weapon flex* sequence during the idle state. The other issue was having a shallow activity for people to do with the screens.
Currently, the installations at The Cube have quite dispersed activities: a group of 40 school kids can come up and tap all over the screens and there is stuff for them to do. (Indeed, the reason they do is because the design invites that use.) The Virtual Reef does have some *deep* or *z-axis* activity, in that visitors can spend time with a fish, reading information about it. But “Robot University” takes this approach further, offering quite deep, narrative-based interaction. A visitor spends time with a character. While the earlier playtests with actors revealed that people do enjoy watching the interactions individuals have, and feel enticed to try it for themselves, I do feel the need to have some shallow, quick activity that anyone can do with the screens as they pass by.
There are problems with this, though: there isn’t much screen real estate to play with. There are the places where the robots are situated, as well as the UIs. So this doesn’t leave much space to spend time on the screens. Added to this is the fact that I don’t want people spending time in front of the robots, obscuring the experience for those interacting with them. It also needs to be an activity that fits with the fictional world.
Lubi Thomas and I started brainstorming about what we could do with the Kinects that will be installed shortly. The Kinects will also allow us to use motion sensing: people don’t have to touch the screen to get a response from the fictional world; they can just approach it. What I had already come up with in the last playtest was having little flying robots zap along the screen beside a person as they walk past. This is an attempt to “convert” them into a user of the installation, but also to show how the world is responsive to people in the environment.
One of the technical restrictions is having the network recognise the user as they pass from one node (one computer which controls two screens) to the next. So then I thought about having some type of flying object that works like a relay. They have an in-world (diegetic) reason for the behaviour. Then I realised they could be like a swarm. They follow you around their area, but if you come towards them they scatter. So they chase people, but run away when the person approaches. It becomes a fun little game where people try to figure out how to get close (by trying to corner them, for instance). Then I was thinking about how we could depict these tiny robots. Perhaps they are made of steel? Perhaps they have little helicopter wings? Then Rachael suggested they could be lights. And then it suddenly dawned on me: what I can do is have baby flying robot lights in the foreground level, with adult robot lights at the top of the hanger. We will need some response from the robots in the scene, and so ButlerCat can perhaps try and swipe at them every now and then.
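The chase-and-scatter behaviour can be sketched in a few lines. This is just a minimal illustration of the idea, assuming a one-dimensional screen position for each light; all names, speeds, and thresholds here are placeholders, not our actual implementation:

```python
import random

# Illustrative constants -- the real values would be tuned in playtests.
FOLLOW_SPEED = 2.0      # pixels per tick when trailing a visitor
SCATTER_SPEED = 8.0     # pixels per tick when fleeing
SCATTER_RADIUS = 120.0  # a visitor closer than this makes a light flee

class LightBot:
    """One baby flying-light robot, tracked by its horizontal position."""

    def __init__(self, x):
        self.x = x

    def update(self, visitor_x):
        distance = self.x - visitor_x
        if abs(distance) < SCATTER_RADIUS:
            # Too close: dart away from the visitor, with some jitter
            # so the swarm breaks apart rather than moving as one.
            direction = 1.0 if distance >= 0 else -1.0
            self.x += direction * (SCATTER_SPEED + random.uniform(0, 2))
        else:
            # Far enough away: drift back toward the visitor to "chase" them.
            direction = -1.0 if distance > 0 else 1.0
            self.x += direction * FOLLOW_SPEED
```

Because each light jitters independently when fleeing, a group of them naturally scatters in slightly different directions, which is what makes cornering them into a little game.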
So here is a shallow activity that could work. It will change the lighting somewhat, so we’ll need to implement these changes when we can. Adam said he could do some testing on his own Kinect at home, which means we don’t have to wait for the Kinects to be installed before implementing this idea. Related to this shallow activity are the buttons on the current backgrounds. At present, there are some buttons in the background of ClunkyBot on the left. When people walk by the installation they immediately try to click on them. It is disappointing for these to seem like active elements but not do anything. When I saw this previously, I mentioned it to Simon and kept the buttons in as a possible shallow activity. But now that we’ve decided on a shallow activity (one that seems to entail less work than the other shallow-activity plans), we’ll have to make sure these elements are out of reach of visitors.
Another issue that came up was how the scene is interpreted. The environment is meant to be a Robot University. It is a training space, with somewhat themed training zones. I wasn’t sure about having a Robot University banner, and so I had previously come up with the idea of room or zone numbering. The numbers and letters are written so that they don’t suggest a hierarchy or ordering of the robots, but instead trigger thoughts of a school, zoned facility, or company area. Andrew mentioned that he would actually love to see some university signage in the scene, as he didn’t get the logic of the space from the way it is now. He would love to see a metal logo of the university on the wall somewhere.
Regarding the animations: in the last couple of builds, the movement of the arm weapon from the touchscreen to the projector space has been staggered and lagging. In this build the action was smooth – so relief there. At present, the UI is of course linked to Destructo’s animations. But because Destructo’s animations aren’t complete, the sequence moves too quickly. I tried to get people to test the interaction, but the sequence was too quick: people were taking in the clunkiness of it and unable to connect with the moment.
Some of the things that were developed from the previous builds:
* Destructo size and positioning
* Destructo animation from touchscreens to projector space resolved
* More Destructo animations
* Foreground flooring positioning and appearance
* UI for Destructo (including usage statistics)
* Sense of depth in the hanger
* ButlerCat environment
* ButlerCat model
Things we’re still working on:
* fix the pixelated poles
* temper the lighting so it is a bit darker
* keep playing with the lighting around the bridge
* continue animations for the Destructo actions
* ClunkyBot UI: dialogue plugin (more about this in another post)
* flying light robots (adult and baby) design & code
* university sign, zone signs
* move switches out of reach
* UI art
* Destructo UI – tweaking to interaction (to include “rationale” title and “training failure/success” and “step away” messages)
* ButlerCat modelling, rigging, and animation
* sound design & effects
In November we will be doing playtests every week. By my calculations of the schedule and tasks, we are as of the last build behind. The environment and modelling have taken longer than expected. The process of figuring out the differences in how to build for this peculiar space has taken longer than expected. But we do pretty much know how it all works now, and so we will be making big strides over the next few weeks. Jacek and Simon will be coming up to Brisbane as well, and so they will be able to see first hand what we’ve been trying to relay remotely.
What is great is that every person that walks by the installation is entranced by it, and they want to play. So I can’t wait until we have a playable build so they can do that! Destructo, ButlerCat, and ClunkyBot will soon be interacting with people without us around. I love the thought of that.