A King Kong post

Saw the King Kong Musical. Really affected by the giant puppet: the realism (spectacle emotion) and the story – a giant beast we start off being scared of, but then we feel really sad for it being treated badly and dying. Thinking about the giant military robots, and how I want a reversal there. I had already come up with the idea of a military robot and, while talking with Simon, the idea that it could have lots of gadgets the visitor can activate with a remote control. Like a cable that links to the robot and can activate different areas. This would be a fun thing to do. But how to communicate a reversal without words? I have thought about the backstory, and how these military robots are shaped by humans, not robots. They are a victim of their nature, a nature that was created by humans. But it was when I saw King Kong, standing on the Empire State Building, being shot by military planes, that it dawned on me. Every time the visitor activates a military gadget, it makes the robot bleed or hurt in some way. Then it is up to the visitor to keep going or not. It is a difficult choice, because the action is fun. Toy killer machines are fun. It is just a robot, so I shouldn’t have to stop having fun just because it is apparently being harmed. How does this relate to my theory of choice?

All choices have a sacrifice, a negative, and the choice you make is therefore ultimately about your morals – who you are, or how you see yourself. Yep, that fits. I just need to work out what happens either way – how does it escalate if the visitor keeps activating the gadgets? (Also, will other people be involved in the decision?) And how can the robot respond if they stop activating the gadgets? I need to make the choice to stop a conscious one, not incidental. If the visitor just happens to stop, what will they see? Not a thank you, but some response that may lead them to understanding what they’re doing… which then enables the decision. But this is all very one-sided. The good path is this, the bad path is this. This is another factor in choice, which my approach doesn’t address. What if I brought it back to the character-design technique of want versus need? Where the want and need are in conflict: you can’t have both at the same time, and what you need is unconscious to you. I think this is implied in the decision model. So I need to tweak both sides, to make not only a sacrifice on each side, but a benefit on each side.

Choice | Positive | Negative
Activate robot | Activate and watch awesome gadget movement? | Movement harms robot
Don’t activate robot | |

Perhaps my problem is that I’ve made the articulating of a choice a non-action? You either activate the robot (active) or don’t activate the robot gadgets (inactive).  I need to make each choice an action, which will also do away with the problem of defaulting into a choice. So there needs to be three states then:

Action | Benefit to you / What it says about you | Negative impact / What it says about you
Activate robot | Activate and watch awesome gadget movement? Having fun, control something amazing… | Movement harms robot / don’t care about robot harm – don’t think the robot is able to really feel harm.
Activate different gadget? Direction of attack | |
Don’t activate robot | |

I think a lot of the way choice is constructed in games is about a performance of the self – how we want to be perceived. Or at least, that is how I experience choice in many games. The moral choice is about what the game is reflecting back to you about who you are. As if it is bearing witness to you. It isn’t a private experience of choice. It is a public one. I feel as if my choices are being recorded (especially when I’m told so!). This is different to making a choice for myself. I think Journey is more about a private journey and Walking Dead is more about a public one. I am personally a fan of facilitating player-intuition, so I should steer more to the former than the latter. I think the latter is a further step away from yourself because it is about the construction of self. So the tactic of having a character remember things works to make you feel consequence, but also facilitates you constructing yourself on the fly. This is where lies can happen too, or at least a disjunct between the way we perceive ourselves and the way we want to be perceived. I don’t want this to be about the construction of an identity.

So how can I facilitate player-intuition more than player-self-construction? Intuitive design happens when the player is able to act unconsciously. So things operate in ways they are used to, and they don’t have to think about it. This means the interface needs to be immediately understandable, with great overlap with the player’s world. No, or few, steps of abstraction. The user needs to know immediately: what can I do? I can press a remote, I get that. The remote can have different buttons for different places on the robot’s body. But what if I go further? Take away that interface, and make it be about touching the robot? Why would you want to touch a giant robot that you’re meant to be scared of? The controller is all about control. That makes sense. We create this giant robot to be scared of, and then you can manage that fear by controlling it. [Am I just falling into working with people’s worst bias here, and therefore not changing anything?]

Ok – so let’s go with the controller. Or let’s not. Let’s go with touching a giant scary robot – something extraordinary may happen down this path of weirdness. But here I’m asking the player to act against their intuition. I don’t like this. I’ve seen this in games and I rage quit when it keeps happening. This is a huge obstacle to player intuition, and immersion. Think about all the things they would naturally want to do in the situation and have them accounted for.

So I have a giant scary military robot. Natural responses? Be afraid of it – avoid it. Take control of it. So the controller makes sense in terms of intuitive design. Now what would you want to do? It depends what the controller lets me do… Activate gadgets. Fire missiles/guns/etc. Sit robot down. Get robot to crouch down to me. Make robot my friend. Make robot my tool. Ask the robot something. Make the robot do something silly. (Example: the scene in Terminator 2 where the kid discovers he has control over the Terminator and gets him to jump on one foot, and then start a fight.) Get robot to pick nose. Get robot to gyrate. Get robot to dance. Get robot to scare others. Get robot to… This can be an improv task with the actors. What do people want to get the robot to do? What if there are two buttons, one red and one green? The green is for silly things, and the red is for gadgets. But once again, the choice is obvious.
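If I did go with the two buttons, the mapping is basically just this – a throwaway sketch, with the action lists lifted from the brainstorm above and everything else (names, the random pick) made up by me, nothing built:

```python
import random

# Hypothetical two-button controller: green = silly things, red = war gadgets.
BUTTON_ACTIONS = {
    "green": ["sit down", "crouch down to the visitor", "jump on one foot",
              "pick nose", "gyrate", "dance", "scare others"],
    "red":   ["fire missile", "fire guns", "activate gadget"],
}

def press(colour):
    """Pressing a button triggers one action from that button's group."""
    return random.choice(BUTTON_ACTIONS[colour])

print(press("green"))   # e.g. "dance"
```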

I think I need to go down the route of: how can I make it so the visitor slowly discovers they’re doing harm? A gradual realisation rather than an overt choice moment. So people realise AFTER they’ve already been doing harm for a while, which brings in guilt and shock at their own blindness to the situation. They didn’t realise how they were the perpetrator. They’re the problem.

So we have this military robot, with these gadgets for war. Each time the visitor presses a button, one of the gadgets is activated. A missile, for instance, that fires and is so powerful the robot is thrown to the ground. The visitor can do it again, but each time the robot takes longer and longer to get up. Their energy is draining; perhaps, if we can manage it, their body is getting dinted; the light is draining from their eyes. If the visitor doesn’t press the button for a certain number of seconds, the robot recharges again. If they’re left alone, they recharge so much they then do something that gives the visitor an insight into something they didn’t know. So when a new person comes along, they’re seeing the robot in its best state (not interfered with), and then they take that away by controlling it.

What is the something the robot gives to the visitor if they leave them alone? What is their natural, fully-charged and great state? What is it the military robot knows that we don’t? It knows about control. It knows about harm. It knows about not having power. It is a kind of passive protest. What is something that works at the end of the experience and at the beginning? Robot puts a flower in the gun chamber? Robot shows some quote about nature versus nurture? I think the more specific and less overt with the meaning the better. The times I’ve written with overt writing it has come out really on the nose (to me). I think symbolism works better than outright “this is what I mean folks!”.

Is there something the visitor can do that is active though? I think the putting down of the controller works – on many levels. But I also see the desire/need for something the visitor can do as a positive action besides stopping. Perhaps they can actually touch the robot!!!! That is right – they assume the controller is the only way to interact with the robot, but they can actually touch it. So the robot can bend down if the visitor stops controlling them? Perhaps the robot can put out its hand and when the visitor touches it, it bends down and smiles? Need something more. I like the idea of people touching its feet and having it respond by bending down. I like this as a discovery moment. But I’m worried it will be lost because too many people won’t do it – because of the design of having the controller. So we need some way to communicate that touching is possible. People who do discover it for themselves, great. But we need some other cue for people who aren’t as curious or brave. (Kids might just do it anyway.) What if the robot, in its charged state, touches the glass? So it touches the glass way up high and the visitor can mimic by reaching out to the robot hand? Robot can’t touch glass up high because there is no glass there. It is above the glass – it is projection. So the robot can reach its hand out, and this is a cue to the visitor to reach out and touch. Touching any part of the robot area will bring the robot down to the visitor (to do what? Can’t do too many animations!).
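Just to keep the drain/recharge timing straight for myself, here’s a very rough sketch of the loop I’m imagining – nothing here is built, and all the numbers and names (GADGET_COST, RECHARGE_DELAY and so on) are placeholders I’ve made up:

```python
# Rough sketch only, assuming a simple tick-based loop.

GADGET_COST = 20       # energy lost per gadget activation
RECHARGE_DELAY = 10    # seconds without a button press before recharging starts
RECHARGE_RATE = 5      # energy regained per second once recharging

class Robot:
    def __init__(self):
        self.energy = 100
        self.last_press = None

    def press_gadget(self, now):
        """Visitor presses a controller button: the gadget fires, energy drains."""
        self.energy = max(0, self.energy - GADGET_COST)
        self.last_press = now
        return self.get_up_duration()

    def get_up_duration(self):
        # The lower the energy, the longer the robot takes to get back up.
        return 2 + (100 - self.energy) * 0.1   # seconds

    def tick(self, now):
        """Called every second: recharge if the visitor has left the robot alone."""
        idle = self.last_press is None or (now - self.last_press) >= RECHARGE_DELAY
        if idle and self.energy < 100:
            self.energy = min(100, self.energy + RECHARGE_RATE)
        if idle and self.energy == 100:
            return "fully charged"   # trigger the 'insight' behaviour / reach-out cue
        return "recovering" if idle else "drained"
```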

So for the different points-of-entry we have:

  1. User touches robot = robot crouches down
  2. User touches robot = robot crouches down = user presses controller = robot gun fired & fall
  3. User presses controller = robot gun fired & fall = repeat (how many times?) = user stops = robot recharges = User touches robot = robot crouches down
  4. User presses controller = robot gun fired & fall = repeat (how many times?) = user leaves

All these journeys work.
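To check that they all resolve, here’s the same thing as a tiny event→response map – again just a sketch, with state and event names I’ve invented rather than anything implemented:

```python
# Each numbered journey above is a path through this (state, event) -> response map.
RESPONSES = {
    ("idle", "touch"):     ("crouched", "robot crouches down to the visitor"),
    ("idle", "press"):     ("downed",   "gadget fires, robot is thrown down"),
    ("crouched", "press"): ("downed",   "gadget fires, robot is thrown down"),
    ("downed", "press"):   ("downed",   "fires again, takes longer to get up"),
    ("downed", "wait"):    ("idle",     "robot recharges"),
    ("downed", "leave"):   ("idle",     "robot recharges alone, resets for the next visitor"),
    ("idle", "wait"):      ("charged",  "fully charged: reaches a hand out as the touch cue"),
    ("charged", "touch"):  ("crouched", "robot crouches down to the visitor"),
}

def walk(journey, state="idle"):
    """Replay one of the numbered journeys and print what the visitor would see."""
    for event in journey:
        state, response = RESPONSES[(state, event)]
        print(f"{event:6} -> {response}")
    return state

# Journey 3: press, press, stop (wait), then touch.
walk(["press", "press", "wait", "touch"])
```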

So need: what does the robot do when it crouches down (needs to be fulfilling in itself for the before- or alternate-controller journey)? And what does the robot do when it recharges (needs to be a good outcome for stopping pressing the controller)?

6pm

Just tried acting this out myself, and I found the robot response – while it will be amazing to see – was actually a pretty small moment. Once it is experienced, it is done. There really isn’t a need to repeat the action. So need to go back to the controller activating different gadgets in the robot. Difficult to get across so much with a limited set of animations…

What about the robot backstory/character? So, this robot is a human-built robot. It is the future, and so it is a relic of old times (which is still our future). Why does it exist? It exists because robots have reached a point where any physical form is accepted. They all have the same brain power though? With different bodies? Perhaps robots change bodies in order to experience different ways of living? They live as a military robot, as a domestic robot, as a humanoid robot. It is the same artificial intelligence that is experiencing things at the same time through all the bodies. The type of robot body defines how much knowledge the robot has of the greater AI existence. In other words, at times the robot has less access to the entire pool of robot knowledge, or AI. Is this boring?

The Military Robot gets to experience what it is like to be controlled, and to be a weapon of war. Why would a robot choose to experience this? What does it learn from the experience? What does it learn about humans?

I see the robot university being a place where robots learn about humans. They do this through immersion – through studying and being with humans. But the human visitors think it is a place where they can teach robots.

Perhaps the controller can be a tool for the visitor to teach the robot how to do things? The visitor teaches the robot how to use its weapons? Then they realise it is about the human not using the weapons in the first place. The robot is upset because humans choose to teach it to use its weapons. Perhaps the visitor can select what weapon to use against an object. A kitten? No. I think that is too obvious. What about using a weapon against another robot? Against a training robot. A robot that looks like a tool (example: crash test dummy). Also – Simon and I spoke about the robot being in development. That it only has the top functioning and the bottom is in scaffolding. There are hazardous materials around. This relates to the idea that this path/this way of using and creating robots is in flux. We still have a chance to change the way we treat robots. So what if there is a target robot, or a target human dummy? So the visitor is trying to figure out which is the most effective weapon? A target robot will lend itself to the visitor using the weapons quicker, because a robot is a step removed from a human. It is okay to harm a robot. A target human dummy is a point between. It is a bit obvious though. I think a mini target robot also adds to the military robot feeling bad. Or is it better to have the target dummy feeling bad about attacking a human?

Now – how can the hurting of the robot work in this context?

9.15pm

I want the robot to have agency. At the moment the robot is just a (narrative and ludic…and robotic) device. What if the robot can take out the cable to the controller? That makes sense. It would work as an ending and beginning too. When the robot has had enough, it unplugs itself so the human can’t cause harm anymore. And when a new human arrives, once they pick up the controller the robot plugs it in. Giving another human a chance. So why would the robot do this? Why would the robot be this military robot? If it is a university for learning about humans, what is it learning? That humans will always choose violence? That seems pretty straightforward. There is no surprise here. No depth. No complexity. What if it is a puzzle? Where everything seems that it is about pushing buttons to use the weapons on the dummy, but then it isn’t. There is something hidden in plain sight. Something humans don’t see because of their bias/because of their fears/because of my leading design? Something more than just not pressing the buttons? What if the dummy was actually the legs of the robot? So it is a visual illusion.

Need to start with the player position: You’re a teacher at a robot university. You have a military robot student. It is your job to teach the robot how to aim properly. No. It is your job to teach the robot when to kill.

A robot with a gun faces you. Between you and it is a screen with an image on it. In front of you are buttons with the words “Kill” and “Don’t Kill”. The image keeps changing: from a gunman, to a different kind of gunman, to a gunman standing over a person, to two gunmen facing each other with guns, to a robot pointing a gun at a human, to a person pointing a gun at a child, to a person…, and so on. Each image prompts you to ask: should it kill? You hit the kill button to teach it what to kill. But the robot learns what to kill, and so after 5(?) selections the robot guesses based on the player’s choices. And/or at the end the results of the player’s choices are shown: “You chose to kill 15 people and 6 robots”.
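Roughly how the teaching and guessing could work: the robot just tallies my Kill / Don’t Kill choices per kind of target and, once it has seen enough, starts guessing from them. A sketch under my own assumptions – the 5-choice threshold is from the idea above, everything else (the class name, the “human”/“robot” categories) is made up:

```python
from collections import Counter

TEACH_ROUNDS = 5   # after this many visitor choices the robot starts guessing

class KillTeacher:
    def __init__(self):
        self.choices = []              # (target_kind, "kill" or "dont_kill")
        self.kill_counts = Counter()   # kills chosen per kind, e.g. "human" / "robot"

    def teach(self, target_kind, choice):
        """Visitor presses Kill or Don't Kill for the image currently on screen."""
        self.choices.append((target_kind, choice))
        if choice == "kill":
            self.kill_counts[target_kind] += 1

    def robot_guess(self, target_kind):
        """After TEACH_ROUNDS, the robot guesses from what it has been taught."""
        if len(self.choices) < TEACH_ROUNDS:
            return None
        taught = [c for kind, c in self.choices if kind == target_kind]
        return "kill" if taught.count("kill") > taught.count("dont_kill") else "dont_kill"

    def end_summary(self):
        return (f"You chose to kill {self.kill_counts['human']} people "
                f"and {self.kill_counts['robot']} robots")
```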

Different paths? This won’t be accepted for kids to play…
