My research examines how robot design encourages or discourages human emotional attachment, and how this affects operator decision-making. I am particularly interested in the human-robot interactions of (U.S. military) EOD personnel; specifically, their experiences with field robots used every day, such as the PackBot and TALON.
Saw the King Kong musical. Really affected by the giant puppet: the realism (the spectacle emotion) and the story – a giant beast we start off being scared of, but then we feel really sad for it as it is mistreated and dies. Thinking about the giant military robots, and how I want a reversal there. I had already come up with the idea of a military robot, and while talking with Simon we discussed how it could have lots of gadgets the visitor can activate with a remote control – like a cable that links to the robot and can activate different areas. This would be a fun thing to do. But how to communicate a reversal without words? I have thought about the backstory, and how these military robots are shaped by humans, not robots. They are victims of their nature, a nature created by humans. But it was when I saw King Kong standing on the Empire State Building, being shot at by military planes, that it dawned on me. Every time the visitor activates a military gadget, it makes the robot bleed or hurt in some way. Then it is up to the visitor to keep going or not. It is a difficult choice, because the action is fun. Toy killer machines are fun. It is just a robot, so I shouldn’t have to stop having fun just because it is apparently being harmed. How does this relate to my theory of choice?
Watching Park Chan-wook’s feature film “I’m a Cyborg, But That’s OK” (he is one of my favourite directors). There is a scene where the cyborg overhears a conversation in which one character confesses to “stealing a Thursday”. The cyborg is confused by this. I would like to do an interaction where the human visitor is forced to say weird things to the robot. The robot finds us really entertainingly weird – or illogical? It can’t be illogical, because that dynamic already exists in the robot-human relationship. “Does not compute!” But weird fun stuff, like the interactions between the player and Wheatley in Portal 2, and my fun-park interactive story.
Loving the magic realism of the film. Can’t wait to do a film with magic realism. Makes me think about the possibility of doing magic realism in the installation. We could incorporate elements from the visitor environment into the installation and give it a slightly different world – an alternate reality existing right beside ours. This is part of the rationale behind why there is glass there… A robot university within a human university. Who is studying who?
I flew up to Brisbane to finally see The Cube in person! What a great space. Amazed at the quality of the screen resolution close up, and that the screens are specially made so they don’t smudge!
The zone our installation will be in is the first screen space at the opening of the building, so it gets good (and mixed) traffic. It isn’t a dedicated space, though. This is fine, but it means that if we get the motion-sensing happening, it will have to cope with non-intentional triggering: not everyone who is close to the robots will actually be approaching the robots.
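A minimal sketch of how that filtering might work, assuming the sensor can track a person over time and report their distance from the screens. The thresholds and the tracking interface here are my own assumptions, nothing decided yet:

```python
# Hypothetical sketch: separate visitors who are actually approaching the
# robots from passers-by, assuming the motion sensor reports a series of
# (timestamp, distance) samples for each tracked person.

DWELL_SECONDS = 2.0    # assumed threshold: how long someone must linger
APPROACH_DELTA = 0.3   # assumed threshold: metres they must close in by

def is_intentional(samples, dwell=DWELL_SECONDS, delta=APPROACH_DELTA):
    """Return True if the tracked person both lingers and moves closer.

    `samples` is a list of (timestamp_seconds, distance_metres) tuples,
    oldest first. A passer-by crossing the space fails one or both tests.
    """
    if len(samples) < 2:
        return False
    duration = samples[-1][0] - samples[0][0]
    closed_in = samples[0][1] - samples[-1][1]
    return duration >= dwell and closed_in >= delta

# Example: someone drifting past vs. someone walking up to the screen.
passer_by = [(0.0, 3.0), (0.5, 3.1), (1.0, 3.2)]
approacher = [(0.0, 4.0), (1.0, 3.2), (2.5, 1.8)]
print(is_intentional(passer_by))   # False
print(is_intentional(approacher))  # True
```

The idea is that a passer-by fails the dwell test even if they come physically close, so the robots would only react to people who actually stop and approach.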
The senior curator of The Cube, Lubi Thomas, gave some guidance: make it something that can only happen in the digital realm. People love playing with gravity in the science installation. What is something people cannot do outside the digital realm?… Lots more taken in, but these are some quick thoughts to share…
Watched a video on RealTime TV of robotics pieces at ISEA2013. I see a goal/outcome similar to what I am trying to do in a work by Petra Gemeinboeck and Rob Saunders: ‘Accomplice’ at Artspace, Sydney, 2 May – 16 June 2013. Petra talks about zoos, and how people think the animals are performing for them, when really the animals are performing in response to the humans trying to get them to perform. Once again, I’m fascinated by this idea of an assumption on the visitor’s side that gets turned back on them. Rob talks about ‘Accomplice’ and how people see the robots punching holes in the walls, but when they spend time with the work, they realise the robots are actually playing with the walls.
And that switch of viewpoint is an important moment for a lot of people who come and see the work. They switch their viewpoint from how their environment is being destroyed to how the robots’ environment is being explored and creatively or playfully experimented with. […] It’s again going from a sort of fearful or disturbing encounter to one [that is a curious one]
It would be good to give the visitors the impression that the robots are learning just like they are. I don’t mean in terms of content, I mean technologically. AI. That your interaction has changed things in some way. For instance, every time a person does something that is good for the robot, a new light or object is added to the space, and something else is added for bad actions. Something that shows people not only their mark, but also what is happening to the robots and their space because of people.
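A rough sketch of that mechanic, just to think it through (the names and structure are entirely hypothetical): the space keeps a persistent record of every visitor action, so the scene itself becomes the visitors’ collective mark.

```python
# Hypothetical sketch: the robots' space accumulates a persistent object for
# each visitor action. "Good" interactions add a light; "bad" ones add a scar.
# The scene state survives across visitors, so people see everyone's mark.

from dataclasses import dataclass, field

@dataclass
class RobotSpace:
    lights: list = field(default_factory=list)  # grows with kind actions
    scars: list = field(default_factory=list)   # grows with harmful actions

    def record(self, action: str, position: tuple):
        """Add a persistent object to the scene for a visitor action."""
        if action == "good":
            self.lights.append(position)
        elif action == "bad":
            self.scars.append(position)

space = RobotSpace()
space.record("good", (120, 80))
space.record("bad", (300, 40))
print(len(space.lights), len(space.scars))  # 1 1
```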
What if one of the robots is incredibly wise? What if this is the robot people keep coming back to, like ELIZA?…
Artist Mari Velonaki says “people are interested in behaviours rather than appearance when they deal with technological others”.
What about a child robot? They use the child dynamic in The Last of Us and The Walking Dead games – it works for sympathy and responsibility.
Still like the idea of a big military robot having great wisdom.
Robots can choose their careers, despite what they’ve been made to do?
“Robots today can perceive more than humans can. They have sensors that allow them to detect things like x-rays or ultraviolet or GPS, and robots can collaborate in ways that we cannot. So they can share perceptions directly, not just across a room but across the planet.” – Mary-Anne Williams
What sort of things can the robots do that make them seem superhuman (seeing around the corner)?…
Decided to do improvisation with actors during the writing process. This project will be about creating via doing – getting off the page and out of my head as early as possible. This is in response to my experience with AUTHENTIC. I worked on that for so long, and the greatest leaps forward came when the script was being performed. It is easier to see and hear the holes than when you’re reading and thinking it. Perfection has an inverse relationship with manifestation: the less manifestation, the more perfect something is. When a project is in my head, it is perfect. Every step of manifestation during the development and production process takes it further away from that perfection. This is where the skill in directing lies – riding this chaotic process to produce something good. Something that can never be the perfect thing, a perfect thing with only one audience (your mind).
In the future, there will be universities where robots go to be educated about themselves, about aliens, and about humans. They graduate when they’re able to be autonomous robots. Not slave robots. They’re the educated class… who look down on slave robots?… Or they have better universities than humans – they learn about living on other planets! They travel to other planets more than we do!
The important constraint for me is that this project is about concentrated storytelling. The experience needs to start at the title; then, as soon as visitors see the installation, the story/world has already begun – their associations and assumptions about what is happening are already in play. For instance, the game Journey uses an environment of desert, sandy hills, flying, etc. These play on people’s biases (like the savannah preference) and the beauty of such movements. This is an important thing I learned when studying and playing with chatbots.
I think I actually discovered this insight in Brenda Laurel’s “Computers as Theatre” (which I notice is about to be released in a new edition). You can’t program a chatbot to handle every possible input a user may enter in an appropriate way. So while you work hard to make the software as good as possible – capturing as many keywords as you can and coming up with clever responses – what you also need to do is encourage users to limit themselves. If you can’t answer everything they could possibly enter, then you try to reduce what they could possibly enter. The way games do this is by giving you pre-selected inputs (choose one of the three or four options we’ve designed for you). But if you have a natural-language interface (where the user can enter anything), then it helps to come up with ways in which the user will self-limit. If you created a salesperson chatbot, for instance, people would immediately limit their conversation to sales. But if the bot had no defined character or setting, people would input anything, and because the system couldn’t cover every input, it would stop making sense.
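To make the principle concrete, here is a minimal toy sketch of my own (not anything from Laurel’s book): a keyword-matching bot whose salesperson framing does the self-limiting work, with a fallback that stays in character rather than breaking into “Does not compute!”:

```python
# Hypothetical sketch: a naive keyword-matching salesperson bot. The defined
# character encourages users to stay in a domain the keyword list can cover;
# the fallback redirects off-domain input without admitting the limits.

RESPONSES = [
    (("price", "cost", "how much"), "It's on special this week. Shall I ring it up?"),
    (("refund", "return"), "Returns are easy with a receipt. What are you returning?"),
    (("hello", "hi"), "Welcome in! Looking for anything in particular today?"),
]

# In-character fallback: instead of "Does not compute!", steer back to sales.
FALLBACK = "Interesting! But let's talk about what I can actually sell you."

def reply(user_input: str) -> str:
    text = user_input.lower()
    for keywords, response in RESPONSES:
        # Naive substring matching stands in for the keyword-capture work.
        if any(k in text for k in keywords):
            return response
    return FALLBACK

print(reply("How much is this?"))       # keyword hit: sales talk
print(reply("Do you dream of sheep?"))  # off-domain: in-character redirect
```

The defined character matters more than the keyword list: users who believe they are talking to a salesperson mostly produce inputs the keyword list can cover.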
This is why I’m thinking of titles like “One of these robots will kill you”. I could set up the environment so it looks ominous – the robots are BIG and sharp-edged, with quick actions, and they have red eyes that follow you. There could even be a banner saying “Robot Kill School”. Perhaps the point then becomes: who is killing who?
The thing also in the back of my mind as a concern is going with a title like “One of these robots will kill you” or “Robot Kill School”. It works dramatically. But even though I’m creating a piece that aims to turn people’s assumptions around, I’m activating people’s negative biases. Their stupid biases. This is why advertising works: it targets our stupid biases. I don’t want to keep activating those biases; I want people to move beyond their biases. I want to talk to people who have moved beyond their biases. Negative biases are the lowest common denominator.
What if I went with ‘One of these robots will save you’? That is passive though. What about ‘You will save one of these robots’?… I wonder if it is better to have an action imperative that begins with what someone will do to you rather than what you will do to them? My instinct is that it is better to begin with what someone will do to you, because it triggers your instinct to act, to respond, to do what you need to do. Whereas starting with telling you what you will do triggers blocks: no I’m not! I won’t do that! You can’t tell me what to do! Etc. It stops the motivation to act before you’ve begun. Although if handled correctly, it could open up curiosity. Will this work really make me want to destroy a robot?
Had a dream last night in which I received a message that 2021 will be a milestone/key point. I woke up and searched for stuff on 2021. According to the Wikipedia page, “Do Androids Dream of Electric Sheep?” is set in 2021 (although earlier editions are set in 1992).
“Unlike humans, the androids possess no empathic sense. In essence, Deckard probes the existence of defining qualities that separate humans from androids.” (Wikipedia)