Artificial Life and Believable Characters

“Art + Life: Life made by Man rather than by Nature.” (Langton)

The AI approach tries to choose emotions depending on certain factors. It can use fuzzy logic to determine which emotion to display given some fuzzy input. There are then models for expressing those emotions, such as the facial animation system in Half-Life 2 (Valve 2004). It works well, but it maps one expression to one emotion, which is very simplified, and the matching expressions are pre-set by designers. This can cause problems. For example, affirming by nodding the head forwards and back is what we are used to in the West, but in some Eastern cultures affirmation is signalled by a movement from side to side. So these definitions of which expression corresponds to which emotion should not be generalised and fixed in advance; if they are, you have to account for the conventions of every society. Expression should be unique to the character itself. In the best case the character would learn which expressions cause which reactions. That way, characters in the same society would all end up using the same expressions, while in a different society they would learn different ones. I think this is the basis for language as well, because language is the expression of certain facts about the environment. The word for “chair” is different in every language because we each learn it from our social environment. I don’t know if Chomsky and company would agree, but I think the same concepts develop everywhere (because they exist in our shared reality), while language depends on where we are, what surrounds us and whom we want to affect in what way.
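To make this concrete, here is a minimal sketch of the kind of fuzzy-logic emotion selection I mean; the input names, membership thresholds and the expression table are all invented for illustration, not taken from any shipped system. Note that the lookup table at the end is exactly the kind of designer-fixed binding I am arguing against, and the whole pipeline is deterministic: the same input always yields the same expression.

```python
# Hypothetical sketch of fuzzy emotion selection; all names and numbers
# are illustrative assumptions, not any real engine's values.

def tri(x, a, b, c):
    """Triangular membership function: 0 outside (a, c), peaking at 1 when x == b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def choose_emotion(threat, familiarity):
    """Fuzzy inputs in 0..1 -> the single most activated emotion label."""
    activations = {
        "fear":    tri(threat, 0.5, 1.0, 1.5) * tri(familiarity, -0.5, 0.0, 0.5),
        "anger":   tri(threat, 0.5, 1.0, 1.5) * tri(familiarity, 0.5, 1.0, 1.5),
        "joy":     tri(threat, -0.5, 0.0, 0.5) * tri(familiarity, 0.5, 1.0, 1.5),
        "neutral": tri(threat, 0.0, 0.5, 1.0),
    }
    return max(activations, key=activations.get)

# One pre-set expression per emotion: the designer-fixed binding in question.
EXPRESSIONS = {"fear": "eyes_wide", "anger": "brow_down", "joy": "smile", "neutral": "idle"}

print(EXPRESSIONS[choose_emotion(threat=0.9, familiarity=0.1)])  # always "eyes_wide"
```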

That is why I think these approaches of assigning fixed expressions to fixed causes are wrong. The Oz project, described by Michael Mateas, had a similar result. They applied Ortony, Clore and Collins’s binding of expressions to certain emotional situations, displayed by little animated creatures called Woggles. They found that the Woggles with binding errors were deemed to have the most personality by observers: users reported that the Woggle that repeatedly banged its head against the ground was the most interesting, describing it as neurotic, angry or upset. Observers had all these connotations about the crazy Woggle, while for the Woggles with “correct” or plausible bindings, such as “get angry when hit” or “be sad when alone”, they simply noted that the creatures were programmed to do so. The interesting thing is that the most believable behaviour was due to a programming error in the simulation, not deliberate design. This shows that the expected bindings are often not the most interesting ones: the individuality of a reaction matters more than the reaction corresponding to some social convention.

This was using classic AI systems, in the sense of defining specifically, symbolically or logically which input gives which output. You could use fuzzy logic, but it is still deterministic: a given input produces a given output unless you insert a random variable somewhere, and I don’t like random variables. Random just means “I don’t know”, in the same way that 50:50 or 50% means “I don’t know”. So essentially what I’m looking for is a system which allows a creature to perceive emotional expressions in its environment and then to emulate those expressions on the output end of its own … feeling. Basically we want to create a connection between the expression of an emotion that we see externally and something that we feel internally. Once that connection is made, when I feel that emotion I will express something similar to what I saw before. Through this “adaptation” process (you could call it adaptation, synchronicity, copying or pattern matching; it is all in the same vein) we would get the individual expression that really forms the character. It is interesting to consider a character or child growing up in different cultures at the same time: if it spends half its life in India and the other half in the US or Europe, the expressions it uses would be very mixed, just as its language would be.
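As a toy illustration of that adaptation process (a sketch under my own assumptions; the state labels and the simple additive learning rule are invented, not a finished model), the creature could strengthen an association between each expression it observes and whatever it happens to be feeling at the time, then replay the strongest association when it feels that state itself:

```python
from collections import defaultdict

class ExpressionLearner:
    """Binds observed expressions to internal states, Hebbian-style."""

    def __init__(self, rate=0.1):
        self.rate = rate
        # weights[internal_state][expression] -> association strength
        self.weights = defaultdict(lambda: defaultdict(float))

    def observe(self, expression, own_state):
        # What I see while feeling X becomes my way of expressing X.
        self.weights[own_state][expression] += self.rate

    def express(self, own_state):
        learned = self.weights[own_state]
        if not learned:
            return "idle"  # nothing learned yet for this state
        return max(learned, key=learned.get)

agent = ExpressionLearner()
for _ in range(10):                   # raised mostly in "culture A"...
    agent.observe("nod_forward", own_state="affirm")
for _ in range(4):                    # ...with some exposure to "culture B"
    agent.observe("shake_side_to_side", own_state="affirm")
print(agent.express("affirm"))        # -> "nod_forward": the dominant culture wins
```

A mixed upbringing simply shifts the weights, so the same mechanism yields a mixed style of expression, like the Denglish described next, without any culture table being hard-coded.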

I even speak Denglish (German English) with my old friends; we went to an international school in Germany where the teaching language was English. For us it makes no difference whether we use German or English words to express what we mean, and we often don’t even notice when we use verschiedene Sprachen (different languages). To somebody who is not from the same (sub)culture, with either a German or an English background but not both, it would sound very odd.

This is where the aspect of subculture comes in: mixing different known stereotypes forms a new culture. Trying to define how many cultures there are is therefore futile, because every mixture creates a new subculture. This is one of the reasons why I don’t agree with symbolic AI: you get this symbol overflow. You can take a few base symbols and enumerate an exhaustive list of combinations, but you will still find things you cannot express with that set of symbols and will have to introduce a new one. So how many symbols do you need before you can express everything around us, that is, have a truly exhaustive set? I don’t think this is achievable, although it is debated, and maybe a non-symbolic approach can never fathom every possible concept either. I am basically more interested in a procedural, process-based approach to generating and correlating experience. And I want to use the actual sensory information, the perception of an emotion, to create the emotion and to act it out.

This is where Braitenberg and other Artificial Life based architectures come in, because they look at biology for inspiration. They found that the brain and nervous system do a lot of this kind of pattern matching. Some go even further (Hawkins) and claim that there is a general processing algorithm handling temporal pattern matching in every region of the brain, one that can be found in us all. Although I agree with that to some degree, I am more intrigued by Braitenberg’s views, because he comes from a purely biological perspective. He had been analysing the neural structures of real brains and nervous systems for 40 years before writing his seminal book “Vehicles: Experiments in Synthetic Psychology”, which was based on an essay he had written 20 years earlier titled “Taxis, Kinesis and Decussation”. Taxis is reacting to a stimulus, kinesis is reacting to the strength of the stimulus, and decussation is the way cross-connections exist in our brains (left hemisphere to right side of the body for arms, eyes and so on, with the nose as an exception). Essentially, for the biology I am relying on Braitenberg because of his experience.
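Here is a minimal sketch of how all three terms can show up in one wheeled vehicle, in the spirit of Braitenberg’s crossed-excitation vehicle (the point-light world, sensor placement and gains are my own illustrative assumptions, not his exact formulation):

```python
import math

def light_intensity(sensor_pos, light_pos):
    """Stimulus strength falls off with squared distance to a point light."""
    dx, dy = light_pos[0] - sensor_pos[0], light_pos[1] - sensor_pos[1]
    return 1.0 / (1.0 + dx * dx + dy * dy)

def step(x, y, heading, light_pos, dt=0.1):
    """One update of a two-sensor, two-motor vehicle."""
    # Two sensors mounted one unit out, at +/- 30 degrees off the heading.
    left_dir = heading + math.radians(30)
    right_dir = heading - math.radians(30)
    left = light_intensity((x + math.cos(left_dir), y + math.sin(left_dir)), light_pos)
    right = light_intensity((x + math.cos(right_dir), y + math.sin(right_dir)), light_pos)
    # Decussation: crossed wiring, the left sensor drives the right motor and
    # vice versa, so the vehicle turns towards the stronger stimulus (taxis)...
    left_motor, right_motor = right, left
    heading += (right_motor - left_motor) * 5.0 * dt
    # ...and moves faster the stronger the overall stimulus is (kinesis).
    speed = (left_motor + right_motor) / 2.0
    return x + math.cos(heading) * speed * dt, y + math.sin(heading) * speed * dt, heading

x, y, h = 0.0, 0.0, 0.0
for _ in range(500):
    x, y, h = step(x, y, h, light_pos=(3.0, 3.0))
print(round(x, 2), round(y, 2))  # the vehicle has turned and crept towards the light
```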

My idea is that real believability in these kinds of agents becomes possible if we build on the low-level pattern recognition that Braitenberg suggests. I want to build an AI controller that interfaces with, or in some cases replaces, systems such as Euphoria and Massive. Euphoria was based mostly on human-like bipeds, which is very useful for games since they usually feature human characters. My system should eventually handle this as well, but I am focusing on little vehicles with wheels, so there is no established notion of “realistic motion” to match, as long as we model the necessary environmental effects such as friction, inertia, gravity and slip. But there is no reason why the kind of technology we are employing cannot achieve the kind of result NaturalMotion’s has, as their system was biologically inspired as well.

We haven’t really mentioned other Artificial Life projects yet:

1. Planning

2. Low level behaviour

3. Cell behaviour – autopoiesis (extreme self-sustaining or self-rebuilding behaviour), by Varela

When I was talking about Disney, I said that the visible thought process is the main thing that makes animated characters believable in non-interactive scenarios. I wanted to add that a thought process is pretty difficult to tweak into believability with normal AI systems. You could incorporate delays into the decision-making process or into the animation, but that delay would have to be determined by something: a random variable, a fixed constant, or another process. You could end up with a huge hierarchy of sub-processes (deliberation) that each contribute some time value, and that would be possible, but what values do you give them? My thinking is that if you use an Artificial Life or biologically based system, which incorporates time delays at the very low level, then we get across the impression that a thought has to travel through the brain, and that this takes time. The result is that no action is performed the instant it is selected. In a sense it is procedural: a process might cause the character to tentatively begin an action and then become convinced that it is the correct one while performing it. This “testing” happens in real time, during the action. That is the kind of thing that is very difficult to model with a classic AI system, because time usually has no intrinsic value in such systems.
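A sketch of what I mean, with the chain length, gain and threshold as arbitrary illustrative choices: route the stimulus through a chain of leaky units, and the reaction latency emerges from the propagation itself instead of being a tuned constant. A weak or ambiguous stimulus takes visibly longer to commit to, or never commits at all.

```python
def reaction_latency(stimulus, chain_length=5, gain=0.9, threshold=0.5):
    """Ticks until the last unit in a chain of leaky integrators fires."""
    activations = [0.0] * chain_length
    for tick in range(1, 200):
        prev = stimulus
        for i in range(chain_length):
            # Each unit leaks towards gain * its predecessor's activation.
            activations[i] += (gain * prev - activations[i]) * 0.3
            prev = activations[i]
        if activations[-1] >= threshold:
            return tick
    return None  # the thought never makes it through: no action is committed

print(reaction_latency(stimulus=2.0))  # strong stimulus -> short latency
print(reaction_latency(stimulus=1.0))  # weaker stimulus -> noticeably longer
print(reaction_latency(stimulus=0.5))  # too weak -> None, the action never starts
```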

Essentially I am treating the Disney animation criteria as a benchmark. What we are trying to do is create an artificial animator: an artificial animator with a personality that can generate personalities. The technique of writing characters with the aspects considered there, and the aspects that go into the actual animation (movement), as Euphoria has done, flow together to form an individual, believable character that can then be used in interactive scenarios. That is the important point I wanted to tie back to: the Disney/Pixar animation techniques are so successful that they gave the robot WALL-E so much character that we were able to identify with him. So we can treat that as our benchmark and ask what goes into each frame and into the time delays displayed by the character. Are they calculated? Are they pre-decided? Are they human? Are they what the animator felt themselves? Are they recorded in some cognitive time-delay chart (say, 2 ms for fear, 3 ms for love)? I don’t think such criteria can be pinned down as constants (it depends on how much you love or fear something). All of these are incredibly hard to model with an AI system, but they would develop and emerge from an AL-based or biologically inspired system that includes time delays. If somebody is unsure about something, the thought is propelled through the brain for longer before a distinct action is performed. Some actions may be performed half-way, with the character flipping between them, like the bunny in front of a car’s headlights: not sure whether to run or stay, because there are so many possible outcomes (and actions) and they are all being activated. I don’t think even fuzzy logic can do this. It would be equivalent in the decision made (the bunny runs or it doesn’t), but the deliberation between the different options, while the decision is being made, is something that only the Artificial Life model displays.
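A toy version of that bunny, under my own assumed constants: two action units, “run” and “freeze”, each driven by its evidence, leaking, and inhibiting its rival. With one-sided evidence the winner commits quickly; with a near tie the two units hover half-activated for a long time before one escapes; with a perfect tie the model simply locks up. That visible wavering is the behaviour a rule-based selector never shows, because it only ever outputs the final winner.

```python
def deliberate(evidence_run, evidence_freeze, commit=0.8, dt=0.05):
    """Mutual-inhibition race between two actions; returns (winner, ticks of hesitation)."""
    run, freeze = 0.0, 0.0
    for tick in range(4000):
        d_run = evidence_run - 1.0 * run - 1.5 * freeze      # leak + rival's inhibition
        d_freeze = evidence_freeze - 1.0 * freeze - 1.5 * run
        run = max(0.0, run + d_run * dt)
        freeze = max(0.0, freeze + d_freeze * dt)
        if run >= commit or freeze >= commit:
            return ("run" if run > freeze else "freeze"), tick
    return "paralysed", 4000  # perfectly balanced: caught in the headlights

print(deliberate(1.0, 0.3))    # clear evidence -> quick, decisive commitment
print(deliberate(1.0, 0.98))   # near tie -> long, visible hesitation first
print(deliberate(1.0, 1.0))    # exact tie -> ("paralysed", 4000)
```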

Returned from Munich. Now write!

Now that my week-long stint in Munich for my birthday (yes, yesterday!) is over, I am back at the office and need to get my head around a few ideas that have been on my mind.

Firstly, there is the role of anthropomorphism in my project, and more specifically in synthetic psychology (SP). Surely SP is an anthropomorphic view of robotics and dynamic systems… Braitenberg uses psychological terminology to describe the behaviour of his vehicles.

The question is whether this is common in ethology and other fields that study behaviour. The ALife connection to ethology is strong, but in traditional robotics, attributing notions of “feelings”, “taste”, “experience” and so on is surely less well regarded (correct me if I am wrong).

The example that came to mind regarding my project is a comparison between a human and a dog. If a human is disgusted, the common facial expression is to pinch the eyes and close the mouth, so as not to let any of the disgusting substance enter the body. We have come to associate different “practical” reactions with stimuli such as fear, anger, disgust, love, boredom and so on, and we take them for granted. Similarly, we have learnt to understand the body language of some of the animals closest to us, such as dogs, cats and horses.

The interesting bit is where we attribute an emotional state to a specific behaviour. A dog wagging its tail is most likely excited or happy, while tucking its tail between its legs is a sure sign of discontent or fear. Dogs don’t have the same facial expressions as we do, yet we attribute our emotions to their behaviour. That behaviour is not similar to the behaviour exhibited by humans; it is grounded in the experience that this species has accumulated throughout its evolution. Of course, as we know especially from humans, imitation plays a role, such that some reactions have been abstracted over time and become less connected to the original reaction (I’m guessing laughter is one example).

The crux is that if we were to apply the human reaction to an emotional stimulus, such as an angry or happy face, to a robot, we would be making the mistake of failing to “ground” that matching of reaction to situation in the previous experience of the “species”. The reaction would not be specific to that species and would seem uncanny.

We shouldn’t attribute too much HUMAN emotion to our creatures; instead we should strive to find new emotions from the CREATURE’S perspective. We can still call them love, hate, disgust, joy or whatever afterwards…