Test #3 on the screens!

We just did our third test on the zone screens this morning. Great! More and more people are coming along to the tests, and so we have lots of excited smiles and discussions. In this photo, we have key local team members Paul and Adam (and myself), as well as Senior Curator Lubi Thomas, and most of The Cube technical team.

[Image: Test_161013]

Some of the things we have been working on since our last two tests are now looking much better. For instance, the overall lighting of the space is much improved. We can now see the setting and characters in the projector area (above the screens) – whereas before we couldn’t. This is mainly because the screens are in a public space with lots of ambient light; we have to treat them almost as an outdoor installation.

[Image: RU_middle]

What still needs to be done is more lighting around Destructo. We spoke about adding some bottom-up lights for a few reasons. One, we spoke about a light reveal of Destructo when the doors open; two, Destructo’s body needs more light; and three, the darkness of Destructo’s body brings more reflections on the glass (see pic below). A big test for me has been the experience of viewing the screens from the side. This works really well with the Physics Playroom installation created by Andrew Sorensen (watch video). I’ve invited Andrew to come along to the next test, so it will be great to hear his thoughts then.

[Image: Glare]

Destructo does look better than the oversized versions we had before, but now Destructo is too low in the space. We also still have problems with the animation crossing over from the touchscreens into the projector space. We’ve spoken about making sure all the arm animation happens in the projector region, and so Adam moved Destructo up. Suddenly Destructo looked suitably big (although too high), and the animations worked well. So Paul needs to change the animation so the arms move further back and up to fit the positioning on the screens (and also to ensure the firing line is well over the heads of the visitors).

[Image: RU_left]

On the left-hand side Simon added in the floor, some detail in the background, and smoke at the top. All three looked much better. Without the floor, the previous tests felt like the wall was flush against the screen, so the floor adds depth. The smoke looks fantastic, though it is missed if someone isn’t told it is there. So we spoke about having the smoke come from the foreground as well, drawing the eye up. The background also has more depth now that Simon has put in more detail and staggered elements. But our eyes still betray us and make the scene (and Destructo) seem closer and smaller than intended. So we spoke about putting in objects that communicate scale: busy people on the bridge and in the background areas, or tables and chairs? Something that people will immediately recognise and be able to compare with everything else.

We now have more movements for Destructo, including the important head animation. The first playtest I ran all those months/weeks ago was with actors. There was a moment when the actor playing Destructo looked around the room, keeping an eye on everyone. That was when I saw how powerful a simple head movement can be in faking robot awareness. By pretending it knows where you are and is watching you, Destructo changes from a blind character that ignores the visitor to one that takes note of your presence as much as you take note of its. So for the early concept drawings, we made sure the head was moveable. For this test, Paul guessed the angle of the visitor and moved the head accordingly (you can see Destructo listening in on the conversation being had below). I want Destructo to look at the visitor (which is easy because they’ll be at the interface section), but also to look around while the visitor is busy sending instructions. The angle looks right from afar, but it isn’t right at the screen yet. So Paul will play with that again (and also take into account how Destructo will be moved as well) for the next test.

[Image: Destructowatching]
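Since the head movement is what sells the awareness, here is a minimal Unity C# sketch of how that behaviour could be driven (the component and field names are mine, not Paul’s actual rig): ease the head toward the estimated visitor position, and fall back to a slow sweep when nobody is tracked, so Destructo still "looks around" while the visitor is busy at the interface.

```csharp
using UnityEngine;

// Hypothetical sketch: turns Destructo's head toward an estimated visitor position,
// falling back to a slow idle sweep when no one is tracked.
public class DestructoHeadLook : MonoBehaviour
{
    public Transform head;            // head joint to rotate
    public Transform visitorTarget;   // estimated visitor position (null if none tracked)
    public float turnSpeed = 2f;      // how quickly the head settles on a target
    public float idleSweepSpeed = 0.3f;

    void Update()
    {
        Quaternion targetRotation;

        if (visitorTarget != null)
        {
            // Face the visitor: build a rotation looking from the head to the target.
            Vector3 toVisitor = visitorTarget.position - head.position;
            targetRotation = Quaternion.LookRotation(toVisitor);
        }
        else
        {
            // No visitor tracked: sweep slowly left and right to fake "looking around".
            float sweep = Mathf.Sin(Time.time * idleSweepSpeed) * 45f;
            targetRotation = Quaternion.Euler(0f, sweep, 0f);
        }

        // Ease toward the target so the movement reads as deliberate, not snappy.
        head.rotation = Quaternion.Slerp(head.rotation, targetRotation, turnSpeed * Time.deltaTime);
    }
}
```

The easing is the important bit: a snap to the visitor reads as mechanical targeting, while a slow settle reads as the robot noticing you.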

Adam put together a basic UI for Destructo in time for this test too. I gave him some simple images to use for a mock-up interface, which he used to trigger the animation (as you can see below). Some elements are missing from the interaction flow, and the trigger isn’t in the right place, but it is working for now. One important decision is that the choices happen one after another, so the visitor isn’t bending their head up and down between them. But they will need to step back to see the weapon animation they’ve chosen. Now that we’ve got it up on the screen, it is obvious that we need to build in time and cues so the user can walk back and view the fire animation. Paul will try slowing down the arm-raising, we’ll include sound effects to give weight to the moment, and I’ll also put in a visual prompt on the UI (which will come up first): a flashing message saying “stand back”. I also asked Adam if we could get the whole scene to shake when a weapon is fired, as it is important to show the whole space is affected by this event. This is a giant robot firing weapons in a hangar; things will shake.
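To give a feel for the scene shake I asked Adam about, here is a rough Unity C# sketch (component names and values are placeholders, not what Adam will actually build): jitter the scene root for a fraction of a second when the fire trigger comes in, fading back to rest.

```csharp
using System.Collections;
using UnityEngine;

// Hypothetical sketch: briefly shakes the scene root when a weapon fires,
// so the whole hangar visibly reacts to the event.
public class SceneShake : MonoBehaviour
{
    public Transform sceneRoot;      // the root whose position we jitter
    public float duration = 0.5f;    // seconds of shake
    public float magnitude = 0.2f;   // maximum offset in scene units

    Vector3 restPosition;

    void Awake()
    {
        restPosition = sceneRoot.localPosition;
    }

    // Call this from the weapon-fire trigger in the UI or an animation event.
    public void Fire()
    {
        StopAllCoroutines();
        StartCoroutine(Shake());
    }

    IEnumerator Shake()
    {
        float elapsed = 0f;
        while (elapsed < duration)
        {
            // Random offset each frame, falling off as the shake ends.
            float falloff = 1f - (elapsed / duration);
            sceneRoot.localPosition = restPosition + Random.insideUnitSphere * magnitude * falloff;
            elapsed += Time.deltaTime;
            yield return null;
        }
        sceneRoot.localPosition = restPosition;
    }
}
```

Hooking the same trigger up to the sound effect and the “stand back” prompt would keep all three cues on the one event.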

With Lubi at the test, we were able to make some progress on getting the Kinects installed. I did originally plan to have gesture interaction with one of the robots, as well as a robot response when visitors move into proximity (to facilitate the sense of robot awareness). But since the Kinect installation has been delayed, I took out the gesture interaction. I kept the proximity response because the animation can still be used even if the Kinect isn’t working (cycling randomly when there are no users around, for instance). We spoke about how many Kinects are needed, and I asked for enough to allow continuous tracking along the ground. That way we could include little running or flying robots that follow visitors as they walk by, for instance. This is an added extra, but something that would be great in some form. We are still keen on the small flying robots with lights (Robbo – a guest animator at the test – suggested giving them hazard lights) around the top background area. The space needs some activity.
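For the proximity response, something along these lines is what I have in mind. A hedged Unity C# sketch with made-up names, not the actual implementation: react once when a tracked visitor comes within range, and cycle random idle reactions when there is no one nearby (or no Kinect data at all).

```csharp
using UnityEngine;

// Hypothetical sketch: drives a robot's proximity response. With Kinect tracking it
// reacts when the nearest visitor comes close; without tracking it still cycles
// idle reactions at random intervals, so the behaviour reads either way.
public class ProximityResponse : MonoBehaviour
{
    public Animator animator;          // robot Animator with "Notice" and "Idle" triggers
    public Transform nearestVisitor;   // fed by the Kinect tracking (null if none)
    public float noticeDistance = 3f;  // metres within which the robot reacts

    bool visitorNoticed;
    float nextIdleTime;

    void Update()
    {
        bool visitorClose = nearestVisitor != null &&
            Vector3.Distance(transform.position, nearestVisitor.position) < noticeDistance;

        if (visitorClose && !visitorNoticed)
        {
            // Visitor just entered range: play the "notice" response once.
            animator.SetTrigger("Notice");
            visitorNoticed = true;
        }
        else if (!visitorClose)
        {
            visitorNoticed = false;
            if (Time.time >= nextIdleTime)
            {
                // No one nearby (or no Kinect data): fire a random idle reaction every 5-15 seconds.
                animator.SetTrigger("Idle");
                nextIdleTime = Time.time + Random.Range(5f, 15f);
            }
        }
    }
}
```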

So that is the wrap for this test. The team will meet via Google Hangout tomorrow and we’ll talk through the resolved issues and the new ones arising. Along with refining current tasks, we’re now adding the next robot focus: ButlerCat. ButlerCat is the next animation-intensive robot, and has some actions we’re not yet sure how to make work (for example: spraying and cleaning the glass). Adam will also be working on the ClunkyBot dialogue, as we need to figure out which Unity plugin will work best for that. Indeed, I’m keen to get all the interactions happening as soon as possible, as these need to be tested and refined. I’ve done a lot of testing with paper and pen, and actors, and iPads. But the interactions with the robots on the screens in the space are still an unknown…
