When I think of this issue, and of those in either camp of how an A.I. should work, it brings back a childhood memory. I was around 11 years old when my younger brother bought a pet snake; I don’t remember the exact type. He was told he should feed live mice to the snake, so he bought a bag of live mice for it. We watched in morbid fascination as the snake devoured the mice. During one of these feedings something fascinating happened. A poor mouse was dropped into the snake’s aquarium as usual; it quickly noticed the snake and stood up, trembling with fear. The snake stared at the mouse, and the two animals locked eyes. The mouse began to wobble, as if hypnotized by the snake’s stare. Then, in the blink of an eye, the snake pounced on the poor mouse, mouth open and ready to swallow it. But in milliseconds the mouse woke from its hypnotic state and, at the last moment, jumped out of the way of the oncoming strike. You would think this quick move would save the mouse, at least from the initial attack, but the snake instantly wrapped its body around it and squeezed it to death. The poor mouse’s eyes bulged, and then it died.
So what happened? The snake assumed the mouse would simply not move, or at least not in time to escape its mouth. That is usually the case, but it proved wrong in this instance! The snake then reacted to the new situation and wrapped its entire body around the mouse. Here is a perfect example of how prediction is likely to be wrong in an environment that perpetually changes, and of how reacting to change proves to be a better capability than prediction.
With that said: artificially intelligent systems can be wrong in their assumptions as they interact with the environment, and they must be capable of novel reactions to change at a moment’s notice. This is what natural selection has learned, and it is why the snake proves to be a very adept animal.
Emotions are an enigma, and no one has really set in stone what emotions are. Certain chemical signatures, such as oxytocin, noradrenaline, endorphins, dopamine, vasopressin, and serotonin, are associated with emotions, but fundamental questions about emotions remain unanswered. For one, an emotion doesn’t arise until after a stimulus is processed and interpreted, yet those very same chemical signatures exist in animals as well; since humanity evolved from ape-like animals, that process presumably exists in apes and other mammals too. Emotions are a matter of interpretation, and other animals probably do not interpret such neural signaling the way a human does. It gets more complicated still, because cultural influence and personal experience can also affect the interpretation of the neural signaling. So emotional experiences are not the same from person to person; there are differences, yet we all believe, or argue, that emotional interpretation is universal to all humanity. To understand emotions we need to monitor neural activity on a connection-by-connection basis.
One method of doing so is optogenetics. The process is very exotic and surprisingly effective: the genetics of an animal are modified so that its neurons not only release the chemical transmitters needed to function but also emit light as they fire. With that in place, one can build interfaces of fiber-optic probes into an animal’s brain and listen to the neural activity. Not only that, but the neurons can also be affected by light emitted from the probes. This will lead to a much more detailed understanding of the neural code of brains, and with it an understanding of emotions.
However, psychologists have done much work on the concept of emotions, and several models organize them into three tiers. There has also been work on how humans feel emotions; one such study produced an almost universal body map of where we feel them. With that said, can we model emotions?
Plutchik’s wheel of emotions gives us a concept of tiered emotions: emotions have core origins, starting with 8 primary emotions that then extend to secondary and tertiary emotions. However, others have argued that Plutchik’s wheel doesn’t capture all human emotions. The Parrott and Shaver et al. model is the one I decided to use.
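To make the tiered idea concrete, here is a minimal sketch of a small fragment of the Parrott/Shaver hierarchy as nested Python dictionaries. The groupings shown follow the published tree only approximately, and the data structure itself is purely illustrative; it is not the OODM representation discussed below.

```python
# A small fragment of the Parrott/Shaver tiered emotion hierarchy,
# expressed as nested dicts: primary -> secondary -> tertiary.
# Only a few branches are shown; the full tree has six primary emotions
# (love, joy, surprise, anger, sadness, fear).
PARROTT_FRAGMENT = {
    "joy": {
        "cheerfulness": ["amusement", "delight", "elation"],
        "contentment": ["pleasure"],
        "optimism": ["eagerness", "hope"],
    },
    "fear": {
        "horror": ["alarm", "shock", "terror", "panic"],
        "nervousness": ["anxiety", "worry", "dread"],
    },
}

def tier_of(emotion, tree=PARROTT_FRAGMENT):
    """Return 1, 2, or 3 for the tier an emotion sits in, or None if absent."""
    for primary, secondaries in tree.items():
        if emotion == primary:
            return 1
        for secondary, tertiaries in secondaries.items():
            if emotion == secondary:
                return 2
            if emotion in tertiaries:
                return 3
    return None

print(tier_of("hope"))  # -> 3
print(tier_of("fear"))  # -> 1
```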
Using the OODM Descriptor model we can actually model emotions! Because of OODM’s inheritance ability, the relationships between the tiers of emotions, and how they are derived from one another, can be described.
The image above shows how emotions can be described with Descriptors. Note that only MicroDescriptors are used and also notice that inheritance is used.
The entire Emotions Wheel is structured into classes that all inherit from a base class, Emotion. The base class contains the common descriptors used for all emotions: the MicroDescriptors facial expression, secondary, none (I will explain this later), and bodymap are ProtoVector SubTypes whose attributes hold data that are actually vectors.
The base class Emotion contains the three basic MicroDescriptors shared by all emotions, as shown in the image above.
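OODM is its own descriptor system, so the sketch below only approximates the idea in plain Python: a base Emotion class carries the shared MicroDescriptors (collapsed here to facial expression, arousal, and body map, each backed by a vector), and tiered emotions are derived from it by inheritance, so the hierarchy falls out of the class structure itself. All class and attribute names are illustrative, not the actual OODM API.

```python
import numpy as np

class Emotion:
    """Base class: the MicroDescriptors common to every emotion.
    Each descriptor's data is a plain vector (here a NumPy array)."""
    def __init__(self, facial_expression=None, arousal=None, bodymap=None):
        # Facial-muscle activation vector (e.g. one value per muscle group).
        self.facial_expression = facial_expression if facial_expression is not None else np.zeros(10)
        # Arousal group: None means "no arousal state has been set yet".
        self.arousal = arousal
        # Body-map vector: where the emotion is felt on the body.
        self.bodymap = bodymap if bodymap is not None else np.zeros(20)

class Joy(Emotion):
    """A primary (tier-1) emotion inherits all base descriptors."""
    pass

class Cheerfulness(Joy):
    """A secondary (tier-2) emotion inherits from its primary,
    so the tier relationship is encoded by inheritance."""
    pass

class Delight(Cheerfulness):
    """A tertiary (tier-3) emotion."""
    pass

# The hierarchy of any emotion can be read back from its inheritance chain.
print([cls.__name__ for cls in Delight.__mro__[:-2]])
# -> ['Delight', 'Cheerfulness', 'Joy']
```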
The MicroDescriptor “none” (do note that all MicroDescriptors are listed by their SubClass Types in this viewer) is actually the arousal Group ProtoVector, as shown in the image above. This group was created specifically for modeling emotions, and the arousal state can be set to any of the listed subclass types. Each subclass type has vectors associated with it, so the state “none” means no vector state has been set yet.
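A rough sketch of the arousal group, using hypothetical subtype names and values: each subclass type carries its own vector, and the descriptor starts out in the “none” state until one of them is assigned.

```python
import numpy as np

# Hypothetical arousal-group subclass types, each with an associated vector;
# the real OODM ProtoVector subtypes and their values live in the model itself.
AROUSAL_SUBTYPES = {
    "calm":     np.array([0.1, 0.2]),
    "alert":    np.array([0.6, 0.5]),
    "agitated": np.array([0.9, 0.8]),
}

class ArousalDescriptor:
    def __init__(self):
        self.state = None      # "none": no vector state has been set yet
        self.vector = None

    def set_state(self, name):
        """Select one of the listed subclass types, pulling in its vector."""
        self.state = name
        self.vector = AROUSAL_SUBTYPES[name]

a = ArousalDescriptor()
print(a.state)          # None -> the "none" state
a.set_state("alert")
print(a.state, a.vector)
```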
Once all emotions have been described, we can build charts of the data, as shown in the image above. The list of all emotions is on the left-hand side. Select an emotion and the charts describe its hierarchy, including any higher tiers the emotion derives from. If you then select one of those higher tiers on the right-hand side, the charts below describe the chemical-signature vectors, the arousal and valence vectors, and the facial-muscle activations associated with the emotion, as well as the body map of where humans feel it. You’ll also notice sliders on the lower left-hand side of the panel where the vectors can be adjusted.
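The viewer itself is a GUI, but the lookup it performs can be sketched as below: given a selected emotion, walk up its hierarchy and chart one of its descriptor vectors. The emotion names, parent links, and numbers here are placeholders, not the model’s actual data.

```python
import matplotlib.pyplot as plt

# Placeholder data: each emotion maps to its parent tier and a chemical-signature vector.
EMOTIONS = {
    "delight":      {"parent": "cheerfulness",
                     "chemical": {"dopamine": 0.8, "serotonin": 0.6, "oxytocin": 0.4}},
    "cheerfulness": {"parent": "joy",
                     "chemical": {"dopamine": 0.7, "serotonin": 0.6, "oxytocin": 0.3}},
    "joy":          {"parent": None,
                     "chemical": {"dopamine": 0.6, "serotonin": 0.5, "oxytocin": 0.3}},
}

def hierarchy(name):
    """Walk up the parent links to recover the emotion's tier chain."""
    chain = [name]
    while EMOTIONS[name]["parent"]:
        name = EMOTIONS[name]["parent"]
        chain.append(name)
    return chain

selected = "delight"
print(hierarchy(selected))  # ['delight', 'cheerfulness', 'joy']

sig = EMOTIONS[selected]["chemical"]
plt.bar(list(sig.keys()), list(sig.values()))  # chart the chemical-signature vector
plt.title(f"Chemical signature: {selected}")
plt.show()
```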
OODM proved adequate for modeling, or describing, emotions, so that an emotion is more than a word or a state: it becomes a set of concepts that give it meaning. The interpretation of stimuli into an emotion would be handled by a separate algorithm, which could very well be a neural network!
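How that stimulus-to-emotion interpretation would work is left open here; as one hedged illustration, a small feed-forward network could take a stimulus feature vector and output a probability over the primary emotions. The sketch below is a toy forward pass in NumPy with random, untrained weights, purely to show the shape of the idea, not a working interpreter.

```python
import numpy as np

PRIMARY = ["love", "joy", "surprise", "anger", "sadness", "fear"]

rng = np.random.default_rng(0)
# Toy two-layer network: 16 stimulus features -> 12 hidden units -> 6 primary emotions.
W1, b1 = rng.normal(size=(16, 12)), np.zeros(12)
W2, b2 = rng.normal(size=(12, 6)),  np.zeros(6)

def interpret(stimulus):
    """Map a stimulus feature vector to a probability over primary emotions."""
    h = np.tanh(stimulus @ W1 + b1)
    logits = h @ W2 + b2
    p = np.exp(logits - logits.max())   # softmax over the primary emotions
    return p / p.sum()

stimulus = rng.normal(size=16)          # a stand-in for processed sensory input
probs = interpret(stimulus)
print(dict(zip(PRIMARY, np.round(probs, 2))))
```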