Blog

Should Robots Dream?

I have posted on this blog about why robots or AI should dream, but considering some of the plausible consequences of that ability in humans, I’ve come to question the issue. I don’t know how many times I’ve had the experience of waking from a dream and seeing some object, like a cup or a tool or even a person, and as I reach out, or even walk over to approach the imagined person, it vanishes into thin air. What if a person can be in a state where not all of their cognitive circuits have awakened, or, worse, they interpret the episodic events happening to them as a dream when they actually are not? There are conditions known as parasomnia and somnambulism in which sleepwalking-like states become violent; here is a very good paper on the subject. When researching the neural correlates of dreaming, I found that REM sleep has activations similar to wakefulness! The image below shows which areas of the brain increase in activity during REM sleep and which decrease.

There is another phenomenon called dream-reality confusion, which can align with psychotic symptoms! But I don’t think all dream-reality confusion derives from a disorder. As I mentioned earlier, there could be cognitive circuits left in a condition where the brain is in a dream state and the person hasn’t fully awakened. Actions that would otherwise be inhibited or corrected by those circuits aren’t, and the person acts out ridiculous actions that could be harmful to themselves or others. This could explain how some mass shootings happen. Imagine acting out by shooting people at random because of some paranoid fear or anger; your brain would normally prevent such an action, but those circuits are shut down as if in a dream. Then imagine that those circuits eventually turn back on: you are now fully cognizant of what you’ve done, you realize there’s no way to turn things around, you’re confused as to how you could have even committed such an act, and you take the quick way out and kill yourself!

I then realized a danger in giving an AGI the ability to dream. Because such an AI would have many computational components operating asynchronously and in parallel, ensuring that the AGI doesn’t become ambulatory while dreaming could get tricky, and it could have similar problems distinguishing dreams from reality, which could become very dangerous. Think of a bodyguard or military soldier bot that needs to optimize or fine-tune its defense strategies: it engages its dream state, killing anyone or anything that could be a threat. Or think of a companion bot working through some conflict or reprimand it experienced. Those scenarios can involve violent behaviors that the bot acts out in dreams but would never perform in its wakeful state, at least if it knows it’s awake.

It is believed that dreams in animals, humans included, serve to optimize and update memories. But how does that process end up as the narratives we experience in dreams? I’ve thought about how memory updates should be handled by an AGI using a time-chunking approach. Below is a diagram of the time-chunking implementation I’m using, and here’s a link that explains it:

The actual updating of the memory, whether it’s an ANN, a symbolic process, or a simple function, does not create the narrative alone. I’m thinking of using event triggers from long-term memory updates that post to the time chunker. Those events then drive processes that interpret the updated memory. This causes processes to use the newly updated information, which associates to ideas that create narratives. It’s a good way to integrate the memory updates from dreaming into other memories. This makes sense for human brains as well, where the modified information is posted to the hippocampus and motivates other cortical components to use the now-optimal memories in a simulation that tests their efficacy. Updated memories would thus shape dream narratives much as external impetuses like sounds or indigestion do.
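To make the event-trigger idea concrete, here is a minimal sketch in Python (the names and structure are illustrative, not lifted from my actual implementation):

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class MemoryUpdateEvent:
    """Posted whenever a long-term memory record is rewritten."""
    memory_key: str
    payload: Dict

@dataclass
class TimeChunker:
    """Receives memory-update events and fans them out to interpreter
    processes that re-associate the updated memory."""
    listeners: List[Callable[[MemoryUpdateEvent], None]] = field(default_factory=list)

    def subscribe(self, listener: Callable[[MemoryUpdateEvent], None]) -> None:
        self.listeners.append(listener)

    def post(self, event: MemoryUpdateEvent) -> None:
        # Each interpreter reads the updated memory and builds new
        # associations from it -- the seed of a dream narrative.
        for listener in self.listeners:
            listener(event)

def narrative_interpreter(event: MemoryUpdateEvent) -> None:
    print(f"re-associating '{event.memory_key}' into a narrative fragment")

chunker = TimeChunker()
chunker.subscribe(narrative_interpreter)
chunker.post(MemoryUpdateEvent("route_home", {"optimized": True}))
```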

Below is an AGI model designed with characteristics that allow for human-like decision-making, where choices are arbitrated by emotional states that serve as a sense of qualia. Emotions and hormonal signaling are done with vector representations of chemistries found in biology. The diagram depicts rhythmic hormonal influences; in an animal body those hormones are distributed through the bloodstream, while in this model the distribution is done through a message sink that various software components can listen to. This is what allows such an AGI to stage itself into modes that bias decision-making. This approach can also place the AGI in a dream mode that signals components like the “Situational Awareness”, “Action Sequencer”, and “Motor Control Unit” to inhibit motor functions.
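As a rough sketch of the message-sink idea (the component names and the “dream mode” signaling convention here are illustrative):

```python
from typing import Callable, Dict, List

HormoneVector = Dict[str, float]

class MessageSink:
    """Stand-in for the bloodstream: components subscribe and receive the
    current hormone vector whenever it is broadcast."""
    def __init__(self) -> None:
        self.subscribers: List[Callable[[HormoneVector], None]] = []

    def subscribe(self, component: Callable[[HormoneVector], None]) -> None:
        self.subscribers.append(component)

    def broadcast(self, hormones: HormoneVector) -> None:
        for component in self.subscribers:
            component(hormones)

def motor_control_unit(hormones: HormoneVector) -> None:
    # Hypothetical convention: a high "dream_mode" level gates motor
    # output, mirroring the muscle atonia of REM sleep.
    if hormones.get("dream_mode", 0.0) > 0.5:
        print("Motor Control Unit: outputs inhibited (dreaming)")
    else:
        print("Motor Control Unit: outputs enabled (awake)")

sink = MessageSink()
sink.subscribe(motor_control_unit)
sink.broadcast({"cortisol": 0.2, "dream_mode": 0.9})  # enter dream state
sink.broadcast({"cortisol": 0.2, "dream_mode": 0.0})  # wake up
```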

Notice that in a dream state, risk avoidance and/or inhibitions are lowered so that subject matter that would otherwise be avoided, or whose emotional states are normally well regulated, can be experienced without such regulation in a dream narrative. As long as everything works correctly in terms of inhibiting the bot’s motor controls, there is no danger of acting out. But if for whatever reason the inhibition fails, then because the bot is in a more risk-tolerant mode, the safety precautions for such situations will not engage. This bot can suffer from dream-reality confusion.

This raises the question of whether a robot should dream, considering that such a stage or state could be dangerous. There may be a need to set standards for how such issues are addressed, perhaps including certifications for robots that meet those standards for preventing dream-reality confusion.

I highly recommend reading the links I’ve placed in this post; you won’t be disappointed.

Brains work using gate-like functions

Effectively, we can show that neurons are gate-like processors whose functions operate differently from digital gates, but one can find the logical equivalent of neural processing in digital processes, inclusive even of learning!

There are various hypotheses about neural spike codes, whose complexity comes from describing a neuron’s ion channels and how various spiking modes are produced. I am going to describe a much simpler and more generalized coding scheme for neurons based on anatomical observations, in particular the auditory wiring of mammalian brains. Neurons exhibit firing rates that directly relate to the degree of input they receive relative to some desensitizing of the neuron’s dendritic membrane. If we examine the cochlea and how its hairs stimulate neurons, we can see how nature transforms mechanical energy into electrical signaling. A vital data point of the brain’s auditory system is the positioning of the neurons stimulated by the cochlear hairs, where the position of a neuron represents the auditory wavelength at which a cochlear hair vibrates. The second piece of information we can glean from the anatomy of the brain’s auditory circuits is that a vibrating hair controls the degree or rate of neuron firing. So mammalian brains codify sound by:

1. The positioning of a neuron along the cochlea, which indicates a particular wavelength.

2. The firing rate of the neuron, which indicates the energy or loudness of the auditory signal being heard.

The number of neurons associated with wavelengths along the cochlea improves sensitivity to those wavelengths: the more neurons applied to an auditory range, the more sensitive an animal’s hearing becomes. But the point being conveyed is that neurons can encode by superimposing two pieces of information: one, symbolic representation by physical positioning, and two, the degree of signal strength of that symbolic representation. In other words, neurons produce fuzzy bits! The output of a neuron is more than true or false; it is a degree of truth represented by its firing rate. The truth table for a neuron depends on its resistance to inputs, a weighted barrier, as shown in the truth table below:
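To make the fuzzy-bit idea concrete alongside the table, here is a minimal sketch in code (the rates, weights, and barrier values are illustrative assumptions):

```python
def neuron(input_rates, weights, barrier):
    """A rate-coded 'fuzzy bit': silent until the weighted drive exceeds
    the barrier, then firing at a rate proportional to the excess."""
    drive = sum(rate * w for rate, w in zip(input_rates, weights))
    return max(0.0, drive - barrier)

# Position encodes *what* (which wavelength along the cochlea) while
# rate encodes *how much* (loudness): same symbol, different degree of truth.
quiet = neuron([3.0], [1.0], barrier=2.0)   # low input rate  -> weak output
loud  = neuron([9.0], [1.0], barrier=2.0)   # high input rate -> strong output
print(quiet, loud)  # 1.0 7.0
```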

The combined output of the two input neurons A and B has to exceed the weighted barrier of neuron C’s dendritic membrane. With that said, the question becomes: can one build logic circuits similar to Boolean methods? Below is a diagram of a full adder using the neuron spike-coding scheme. You’ll note that there are five neurons in the circuit; the truth table for the circuit is below:

Inhibitory neurons usually have 5 times the influence that excitatory neurons have. Where the D output is greater than 0 it is equivalent to binary 1, and where the E output is greater than zero it is also equivalent to binary 1. So the binary inputs are effectively represented through neurons A and B, but their outputs take effect only at certain firing rates! At minimum, the output from neurons A and B has to exceed neuron D’s weighted barrier of 5. Neuron E depends on neuron C, which, although inhibitory, actually acts as an excitatory transmitter for neuron E; both neurons A and B must exceed neuron C’s weighted barrier of 9 to force neuron E to output.

From the truth table we can see how neural logic can work: the combinations where the neurons operate as Boolean adders occur only at certain firing rates of the neural circuits. Unlike discrete logic elements, which operate consistently over all input ranges (0 or 1), neural logic operates only at critical firing rates that can overcome the weighted barriers. From a mere engineering perspective, neural logic might seem useless, since there are states where the neurons just don’t work out the logic at all! But neurons have to operate under conditions where discrete logic could not work at all: a very noisy environment.

You’ll notice that the diagram depicts glial cells, and it has been observed that glial cells do discharge. One of the functions of glial cells is to act like little vacuum cleaners, sopping up the excess neurotransmitters or ions of firing neurons. This absorption builds up potential within the glial cells and eventually causes a discharge of the electrical potential within the cell. At first this may look like simple noise, but if you realize that glial cells absorb excess neurotransmitters in direct proportion to the firing rates of neighboring neurons, then we can see a very interesting effect: glial cells can provide feedback, in the form of a learning impetus, that forces neurons to adjust their weights based on their own and surrounding neurons’ activity!

With that said: since the circuits in the neural logic example operate only under certain firing-rate conditions, this implies that neurons learn to ignore noise. That is advantageous, since it doesn’t behoove a neuron to fire upon any input given to it. From the perspective of a neuron evolving to survive in a noisy environment, firing randomly in response to noise would expend energy needlessly. A neuron adapting to ignore noise conserves energy and benefits its host by being energy-parsimonious. The additional benefit of ignoring noise is that it forces the neuron to fire at signal strengths that improve the fidelity of information exchange between neurons. Beyond energy conservation, each neuron has its own set of inputs along with feedback from glial discharges, and therefore uniquely derives its weighted barrier in response to those influences. Realize that there really isn’t any way for a neuron to distinguish information from noise, so one could view the individual neuron either as learning to ignore noise or as learning the information patterns from its inputs; I am asserting that they are one and the same.
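To see the firing-rate criticality in code, here is a minimal sketch of the five-neuron circuit. My reading of the diagram, which is an assumption, is that C inhibits D with the 5x influence while exciting E:

```python
def neuron(drive, barrier):
    """Rate-coded threshold unit: fires at (drive - barrier) when positive."""
    return max(0.0, drive - barrier)

def adder_circuit(a_rate, b_rate):
    """D acts as the sum bit and E as the carry bit, but only at firing
    rates that clear the weighted barriers described above."""
    c = neuron(a_rate + b_rate, barrier=9.0)            # fires only when A AND B are strong
    d = neuron(a_rate + b_rate - 5.0 * c, barrier=5.0)  # C inhibits D: XOR-like sum
    e = neuron(c, barrier=0.0)                          # C excites E: AND-like carry
    return int(d > 0), int(e > 0)

for a, b in [(0, 0), (6, 0), (0, 6), (6, 6)]:
    print((a > 0, b > 0), "->", adder_circuit(a, b))
# Weak inputs (e.g. 2 and 2) clear no barrier at all: instead of producing
# a wrong answer, the logic simply stays silent.
print("weak inputs:", adder_circuit(2, 2))
```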

With that said, we can conclude that consciousness, or mind, is a product of emergence from the functional cortical systems of the brain, and, as fMRI research indicates, those systems need not be operating simultaneously. Below is an attempt at modeling an AGI on the premise of logical equivalency with human-like cortical processing:

Concept model of consciousness.

Components like the “Memory Stream”, “Associations”, and “Emotions” have been implemented in code. The software’s ability to free-associate is achieved through a partial feature fit of the data, which can be done with neural networks or other digital approaches.

While I’m not claiming that the model is complete or that all components have been developed it is a starting point. So far it is proving to be a daunting task as you can see from the diagram the memory stream is a very critical component and it is proving to take up a lot of resources. And there is a ton of preprocessing for all the sensors, which currently are just virtual sensors within a virtual body or pseudo bot.

Time


Image from the article cited in the post.

Time is a very interesting topic in the understanding of human consciousness. For one, we have a sense of the moment that has some kind of temporal depth. Here’s an article that cites a paper exploring this phenomenon. The revelation from this research is that brains break time into chunks organized in hierarchical tiers. The advantage of such a chunking scheme is the ability to correlate multiple temporal events as collections of stimuli and to provide a hierarchical order, a sense that allows for the concept of presence, where there is a link from the past to the present. That paper inspired a machine implementation; below is a concept diagram:

Machine implementation of a time-chunking scheme. Each tier T1 to T4 represents a time or event segment, where T1 is the longest and T4 the shortest time segment. Each higher event segment is the parent of a lower-level segment.

The chunking model breaks event segments into 4 tiers, where T1 is the longest and T4 the shortest period. You’ll note that T4 is where all the input data is captured. Input data is not just information from sensors such as visual, audio, olfactory, tactile, and taste, but also internal states of the machine, inclusive of stimuli interpreted as emotions. The T4 level allows for as much input capture as possible within its period. Additionally, the T4 level only stores inputs if they have changed. One of the benefits of this structure is the ability to correlate input transitions and also to correlate across input types, i.e., inter-input-type correlation. This allows for inferencing across temporal events, input types, and input transitions.
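Here is a minimal sketch of the chunking structure (tier capacities and the promotion rule are simplified for illustration):

```python
from collections import deque

class EventSegment:
    """One tier of the hierarchy. T4, the leaf, samples inputs and stores
    them only on change; a full chunk is promoted to the parent tier."""
    def __init__(self, name, capacity, parent=None):
        self.name = name
        self.capacity = capacity      # child entries per chunk
        self.parent = parent
        self.buffer = deque()
        self.last_seen = {}

    def capture(self, channel, value):
        # Change-only storage: identical consecutive readings are dropped.
        if self.last_seen.get(channel) == value:
            return
        self.last_seen[channel] = value
        self.buffer.append((channel, value))
        if len(self.buffer) >= self.capacity and self.parent:
            # Promote the completed chunk up the hierarchy (T4 -> T3 -> ...).
            self.parent.capture(self.name, tuple(self.buffer))
            self.buffer.clear()

# Illustrative capacities; real periods would be tuned per application.
t1 = EventSegment("T1", 4)
t2 = EventSegment("T2", 4, parent=t1)
t3 = EventSegment("T3", 4, parent=t2)
t4 = EventSegment("T4", 4, parent=t3)

for reading in [("audio", 0.1), ("audio", 0.1), ("vision", "door"), ("emotion", "calm")]:
    t4.capture(*reading)   # the duplicate audio sample is ignored
print(list(t4.buffer))     # [('audio', 0.1), ('vision', 'door'), ('emotion', 'calm')]
```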

This diagram depicts the generic architecture for event-segment processing. The example is the lowest-level T4 processing, which continually captures data.

As mentioned before, each tier processes inter-correlations across event segments. This would give a machine a perspective of time equivalent to a human being’s: a concept of structure to its experience of stimuli! The approach provides that sense of a past that correlates to the present and could be used to contemplate or predict future outcomes.

Since the paper describes tiered hierarchies that build higher-level cognitive synergies, machines could apply approaches identical to biological systems. The inference processing, as well as the higher-level event evaluations, can be heuristics, ANNs, or both.

Symbolic Concept Modeling

Source: MIT-IBM Watson AI Lab

Today AI is dominated by convolutional and adversarial neural networks. Among the most robust and popular today are the GPT tools developed by OpenAI. While neural-network approaches can build relationships between concepts or words, they don’t actually assign meaning to or understand the concepts or words; they just develop a statistical relationship between words. Another problem, as discussed before, is that neural networks take a lot of examples to learn. Because neural networks build statistical maps of words and don’t really understand concepts, they can often fall into the uncanny valley, and even with the GPT tools the outputs are not always coherent. With a symbolic approach, the machine can learn from one example, and it actually develops or understands the real meaning of ideas or words!

However, the symbolic approach was actually the first approach AI scientists pursued. One tool for symbolic AI is object-oriented programming (OOP). With OOP one can define classes with properties and functions. OOP allows for hierarchies where inheritance can be implemented. This allows, for instance, a class defined as “Animal” that has the properties and functions that almost all animals have. Now if you want to define a “Cat” class, you can have it inherit from the Animal class, so that it has all the properties and functions of the Animal class. Not only that, but OOP allows functions to be customized for a particular implementation. So the “Cat” class can use some of the functions of the “Animal” class but also override other functions with its own customized functions. A problem with OOP is that it is a language and requires a compiler or interpreter to apply its rules.
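For example, in Python:

```python
class Animal:
    """Properties and functions that almost all animals share."""
    def __init__(self, legs=4):
        self.legs = legs

    def breathe(self):
        return "inhales and exhales"

    def speak(self):
        return "makes a generic animal sound"

class Cat(Animal):
    """Inherits everything from Animal but overrides speak()."""
    def speak(self):
        return "meows"

cat = Cat()
print(cat.legs, "legs;", cat.breathe(), "and", cat.speak())
```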

With symbolic AI one can define or describe concepts or words with various attributes or classifications that can even represent states. The problem, however, is that OOP may not be the best approach; an object-oriented data model (OODM) might be a better choice. With an OODM there is no recompiling for changes in data structures; it all happens in real time!

The image above depicts an approach of grouping concepts or classes and having subclasses within each category, or what we call a “ProtoVector”. This approach is not new. All concepts and objects can be described in a general way, and if a concept or object has some attribute that isn’t part of the database of concepts, new categories can be added. In fact, the machine can be told to add a new concept or, if it deems it appropriate, create a new category. Each category and its subclasses are enumerated. The software manages the enumeration, so all that needs to be done is to add a category or subclass. However, this is not the crux of the OODM; it is only an attribute used by the OODM.

An OODM solution must be able to apply OO concepts to establish relationships between concepts.

Above is an image of the data model editor and viewer where OODM objects or concepts are listed. The model is composed of a class object called a “RootDescriptor”, which is composed of “MicroDescriptors” and other RootDescriptors. The RootDescriptor “Animal” is highlighted in the image above and expanded. The Animal descriptor is composed of the RootDescriptors Head, Neck, Torso, Leg, Tail, and Hair.

RootDescriptors always represent complex concepts or objects, whereas MicroDescriptors are always one of the ProtoVectors and can be assigned other types of data such as audio or video data, vectors, library functions, or processes; pretty much any kind of data can be associated with a MicroDescriptor.

The image above shows the RootDescriptor “Head” for the parent RootDescriptor “Animal”, where the class is the Group ProtoVector “features” and the Generalized Type is the SubClass ProtoVector “part”. A head is a part of an Animal.

Also, note that the RootDescriptor for Animal comprises many RootDescriptors; there is no limit to how deeply nested a RootDescriptor can be, as the example of the RootDescriptor for “Head” shows above.

The image above shows the MicroDescriptor “quantity” highlighted. The class is the ProtoVector “features” and the Generalized Type is the SubClass ProtoVector “quantity”. An Animal has one head:

The MicroDescriptor “quantity” has an attribute value of 1.

So this is what comprises the OODM: RootDescriptors and MicroDescriptors. However, there is also inheritance:

The image above has Dog expanded and depicts, in the Basic Attributes Panel, that it inherits from Animal.

Here is another example of inheritance from Animal. Human inherits from Animal, but it changes the quantity of Legs to 2 and adds Arms. So OODM allows one to change attributes and add new Descriptors to a RootDescriptor that inherits from another RootDescriptor.
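Here is a highly simplified sketch of those mechanics, not the actual OODM classes, just the inheritance-and-override behavior:

```python
import copy

class MicroDescriptor:
    """Leaf attribute: a ProtoVector class/subtype plus an attribute value."""
    def __init__(self, proto_class, sub_type, value=None):
        self.proto_class, self.sub_type, self.value = proto_class, sub_type, value

class RootDescriptor:
    """Complex concept: named children that are Root- or MicroDescriptors."""
    def __init__(self, name, inherits=None):
        self.name = name
        # Inheritance starts from a deep copy of the parent's children,
        # so overrides never touch the parent model.
        self.children = copy.deepcopy(inherits.children) if inherits else {}

    def add(self, name, descriptor):
        self.children[name] = descriptor   # also overrides an inherited part

animal = RootDescriptor("Animal")
leg = RootDescriptor("Leg")
leg.add("quantity", MicroDescriptor("features", "quantity", 4))
animal.add("Leg", leg)
animal.add("Head", RootDescriptor("Head"))

human = RootDescriptor("Human", inherits=animal)       # gets Head, Leg, ...
human.children["Leg"].children["quantity"].value = 2   # override: two legs
human.add("Arm", RootDescriptor("Arm"))                # new descriptor

print(animal.children["Leg"].children["quantity"].value)  # still 4
print(human.children["Leg"].children["quantity"].value)   # 2
```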

Here’s an example of adding data to a Microdescriptor. The vector object was created using a vector editor, shown below, where the final object is dragged and dropped onto the attributes grid. All that inherit from the Base Class Emotion get a copy of the vectors.
The vector editor can create various types of vector structures. Above is the body map vector structure or object of sensory nerves of the human body. The nerve labels were copied to notepad from various resources that list the nerves and their ranges were setup. From there that listing is pasted right onto the chart where then the vector object is created.

This particular implementation of OODM uses fairly simple OOP classes and is stored using an object-oriented or NoSQL database. Its simplicity is its strength: while the basic OOP class is simple, RootDescriptors can get very complex and have no limit in describing concepts or objects. This approach effectively gives words meaning. Because of the generalization from the ProtoVectors, this approach can compare concepts and detect their similarities and differences, no matter the disparities between the words or ideas. Also, because any kind of data can be associated with the MicroDescriptors, this approach to symbolic AI doesn’t suffer from the brittleness of past approaches, which had to code new rules for nuances or differences in newly introduced concepts.

Predicting vs Reacting to an environment

When I think of this issue and those in either camp of how an A.I. should work, it brings back a childhood memory. I was around 11 years old when my younger brother bought a pet snake; I don’t remember the exact type. My brother was told that he should feed live mice to the snake, so he bought a bag of live mice. We watched in morbid fascination as the snake feasted on the mice. On one of these feedings something fascinating happened. A poor mouse was thrown into the snake’s aquarium as usual, and the mouse quickly took notice of the snake and stood up, trembling with fear. The snake stared at the mouse, and the two animals locked eyes. The mouse started to wobble as if hypnotized by the snake’s stare. Then, in the blink of an eye, the snake pounced on the poor mouse, opening its mouth, ready to swallow it. But in milliseconds the poor mouse woke from its hypnotic state and at the last moment jumped out of the way of the oncoming snake. You would think this quick move would save the mouse, at least from the initial attack, but the snake instantly wrapped its body around the mouse and squeezed it to death. The poor mouse’s eyes bulged, and then it died.

So what happened? The snake assumed the mouse would simply not move, or at least not in time to escape its mouth, and that was the usual case, but it proved wrong in this instance! The snake then reacted to the new situation and wrapped its entire body around the mouse. Here is a perfect example of how prediction is often wrong in an environment that perpetually changes, and how reacting to change proves to be the better capability than prediction.

With that said: artificially intelligent systems can be wrong in their assumptions as they interact with the environment and must be capable of novel reactions to change at a moment’s notice. This is what natural selection has learned, and why the snake proves to be a very adept animal.

Why AI should have the ability to dream

What are dreams? According to Wikipedia:

“The content and purpose of dreams are not fully understood, although they have been a topic of scientific, philosophical and religious interest throughout recorded history.”

And:


“Dreams mainly occur in the rapid-eye movement (REM) stage of sleep—when brain activity is high and resembles that of being awake. REM sleep is revealed by continuous movements of the eyes during sleep. At times, dreams may occur during other stages of sleep. However, these dreams tend to be much less vivid or memorable.”

Brain scans of mice during REM sleep reveal some interesting aspects of what dreaming really is: scientists see 3D grid maps of the mice’s past spatial experiences. Now you may ask, how do they know it was an experience the mouse lived? Well, the 3D spatial grids actually show neurons firing in patterns that resemble the mazes the mice had walked through earlier! So it would appear, at least for mice, that dreaming is actually reliving past experiences. So what is the benefit of reliving past experiences?

Recall the description of the snake and mouse in the “Predicting vs Reacting” post, where the snake shifted its mode of attack instantly by reacting to the change in the mouse’s behavior from that of other mice. Mammals have more sophisticated brains than snakes, and it would appear that even mice can emulate a virtual reality of sorts to learn new adaptations or reactions to their environment. By reliving events, some animals can learn novel adaptations to their environment.

Human brains also have 3D spatial maps like the mice’s. So our dreaming brings about a virtual reality that can experiment with ideas and past experiences. This is where human dreaming differs from that of animals like mice. Humans use abstract ideas, where such notions, while never experienced, can prompt a virtual experience in our imaginations or dreams. This explains why not all dreams in human beings are relived experiences but can be novel concoctions of worlds or scenarios not even possible in the real world! So what is the advantage of dreaming in humans?

One could argue that it is very advantageous to have the ability to work out scenarios never experienced. It allows ideas to be explored in a way that feels real and therefore solicits the kind of reaction, the prompting of the brain’s resources to cope with the imagined, just as if it were real. In other words, dreams allow us to gain experience with issues we haven’t literally lived through but could apply to our real lives!

So too could an A.I. benefit from a means to reenact past and imagined experiences and learn in virtual environments, just as it can learn from real experiences.

Mimicking Arousal

What is arousal? According to the APA dictionary of psychology:

1. a state of physiological activation or cortical responsiveness, associated with sensory stimulation and activation of fibers from the reticular activating system.

2. a state of excitement or energy expenditure linked to an emotion. Usually, arousal is closely related to a person’s appraisal of the significance of an event or to the physical intensity of a stimulus. Arousal can either facilitate or debilitate performance. See also catastrophe theory—arouse vb.

The key component for arousal is the reticular activating system (RAS), which is responsible for alertness and focus in mammals. Here is another feature of brain activity responsible for real-time adaptation to the environment. For a machine meant to be relatable to people, the RAS is also critical. Imagine how much more empathetic or anthropomorphic a machine becomes when it conveys something all humans experience: feeling sleepy, tired, and/or feeling very active with energy!

Mimicking the RAS involves signaling or processing that captures things such as battery levels, time of day, and feelings of exhaustion. These signals have to be integrated into the machine’s information processing in such a way that they affect its choices and its interpretations of information, both external and from its internal states.
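A rough sketch of what that integration might look like (the weights and the circadian curve are illustrative, not calibrated):

```python
import math

def arousal_level(battery, hour, exhaustion):
    """Mix internal signals into one arousal scalar in [0, 1]."""
    # Simple day/night rhythm peaking at midday.
    circadian = 0.5 + 0.5 * math.sin((hour - 6) / 24 * 2 * math.pi)
    raw = 0.4 * battery + 0.4 * circadian - 0.2 * exhaustion
    return max(0.0, min(1.0, raw))

def bias_choice(options, arousal):
    """Low arousal biases toward low-effort options, high arousal the opposite."""
    if arousal < 0.4:
        return min(options, key=lambda o: o["effort"])
    return max(options, key=lambda o: o["effort"])

options = [{"name": "rest", "effort": 1}, {"name": "patrol", "effort": 5}]
a = arousal_level(battery=0.9, hour=13, exhaustion=0.1)
print(f"arousal={a:.2f} ->", bias_choice(options, a)["name"])  # high arousal -> patrol
b = arousal_level(battery=0.2, hour=3, exhaustion=0.8)
print(f"arousal={b:.2f} ->", bias_choice(options, b)["name"])  # low arousal -> rest
```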

The Power of Free Association

The human mind’s ability to associate information is a critical feature of its creative abilities. Why? While many anthropologists believe human brain size was driven by tool building, the fact is that socialization proves to be a greater stimulus for creativity. If we look at the rate of innovation before agriculture and city-states, tools did not change much over tens of thousands of years! So toolmaking is not the driver for bigger brains.

Nature selects for free association in humans because of the social demands of the species, something Dr. Richard Leakey has hinted at. Free association of information allows us to invent interesting topics of conversation and, in fact, would reinforce the ability to sensationalize! The more interesting you can make a topic, or the better you can invent a myth or personal experience, the more attention you can get from the troop or tribe. Such attention can then lead to greater influence within a group, allowing such individuals to solicit more mates and collaboration from peers. It is only recently that the social creative impetus of humans has been applied to sophisticated toolmaking.

No one has looked at the human brain’s ability to freely associate information as a product of information processing and data structuring. Trying to engineer the equivalent in current software-development paradigms seems impossible. Or is it?

Below is a software tool that can analyze sentences, paragraphs, pages, and even books, relating them to topics of information through an ontological framework. It effectively allows the software to have an impression!

Figure depicts first and second levels of Roget’s classification scheme.
Figure depicts levels two and three of Roget’s Classification scheme.

The ontological framework is Roget’s Thesaurus, and it has been formatted into a form that allows for highly parallel O(1) searches. It allows a machine to gain an impression by relating to information as if it were Roget himself, well, almost. Roget did not document some of his critical personal views in the thesaurus. But for the most part the software relates to information from the perspective of a 19th-century mindset!

Depicts Roget's ontological framework in his thesaurus.
The center block represents a core high-level class that has satellites, which have satellites, which have satellites, and so on. The figure above displays Roget’s “Words Expressing Abstract Relations” class.
This is another Roget class: “Words Relating To The Sentient And Moral Powers”.

So how does this relate to or explain the human brain’s power of free association? The software operates on a data structure that is self-similar, so we can do something very interesting: a partial feature match. By doing this we can solicit data that otherwise, because of its subject matter, would not be addressed. So what good is this?

By indexing partially related information, the machine can move a conversation or a problem-solving session to other disciplines that otherwise would not be addressed. We can see the efficacy of this kind of data querying or retrieval in human socialization. Conversations can roam in ways that seem chaotic: say two people start talking about “Star Wars”, but the conversation later turns to digging ditches in the backyard. With software like that described above, topics can be landed on that do not completely or directly relate to the current topic of conversation. So we can get conversations with machines similar to those we have with humans, where what we start with is not how the conversation ends.
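A minimal sketch of partial feature matching over a tiered ontology (the toy ontology and the overlap measure are my own simplifications):

```python
def features(topic, ontology):
    """Collect a topic's feature path by walking up the tiers."""
    path = set()
    while topic:
        path.add(topic)
        topic = ontology.get(topic)   # parent category, or None at the root
    return path

def partial_match(current, candidates, ontology, low=0.1):
    """Return topics whose overlap with the current topic is partial:
    enough to be reachable, not enough to be the same subject."""
    cur = features(current, ontology)
    hits = []
    for cand in candidates:
        f = features(cand, ontology)
        overlap = len(cur & f) / len(cur | f)   # Jaccard similarity
        if low <= overlap < 1.0:
            hits.append((cand, round(overlap, 2)))
    return sorted(hits, key=lambda h: -h[1])

# child -> parent, Roget-style tiers
ontology = {"star_wars": "fiction", "fiction": "ideas", "ideas": None,
            "ditch_digging": "labor", "gardening": "labor",
            "labor": "action", "action": "ideas"}
# A chat about "Star Wars" can drift to ditch digging via a shared ancestor.
print(partial_match("star_wars", ["ditch_digging", "gardening"], ontology))
```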

This type of partial feature matching can also explain creativity, as in that Zen-like experience where one sees a water drop hanging from a flower petal, and that leads to the theory of relativity! This concept, again, hinges on the principle of partial feature matching. Something in the water drop or flower petal, along with chemical brain states, solicited or queried information that partially matched that moment in time and space, revealing concepts that could be evaluated and formalized into a new and novel idea.

So the tools and technology to make machines creative and even social are in place, but they are not currently applied to a social machine. Remember that to be social, the machine or human must be creative. As Dr. Richard Leakey stated: “Humans have the highest social demands of any other animal.” For A.I. to be relatable to human beings, it must meet the social expectations of humanity.

The consequences of an Emotional Artificial Intelligence

The most shocking truth of human reality is that we are a form of biology and as such are driven by the physics that makes the chemistry happen. What can and does make us unpredictable in certain ways is how that chemistry can be altered; we’re talking about influences as subtle as thermal currents in a brain cell! When we realize that the qualia of experiences relate to emotional reactions, we see how important this feature of our brains is. Giving a machine something similar, making decisions based on the emotional reaction it anticipates or has experienced, might give some pause about creating such a machine. After all, who wants a car that doesn’t feel like driving you to work on a particular morning?

So what’s the point of building an emotional machine? The notion goes towards the objective of our goal and that is to build a machine that can relate to human beings. To relate to humans the machine should have emotions or at least mimic the signal patterns that emotions respond to as we interact in the environment. Currently, those software tools or applications that try to recognize human facial expressions and associate them to words that identify or describe emotions can not relate to humans! They are no different than your PC where you type on your keyboard and it responds according to its programming with a specific response. Some think that’s all machines need to do and that humans will then anthropomorphize how the machine responds. But such strategies quickly fall into the uncanny valley as the response becomes very repetitive and many times incoherent.

Ultimately, emotional machines will be much more relatable to people, perhaps to a degree that’s not comfortable, a flip side of the uncanny valley if you will, where because the machine is so human-like it causes an adverse reaction. On the other hand, they could be integrated into the family or be guardians of the elderly, providing emotional support similarly to pets.

What are Emotions?

Emotions are an enigma, and no one has really set in stone what emotions are. There are certain chemical signatures, such as oxytocin, noradrenaline, endorphins, dopamine, vasopressin, and serotonin, that are associated with emotions, but there are fundamental questions about emotions that haven’t been answered. For one, an emotion doesn’t come about until after a stimulus is processed and interpreted, yet those very same chemical signatures exist in animals as well; since humanity evolved from ape-like animals, that process would seem to exist in apes as well as other mammals. Emotions are a matter of interpretation, and other animals probably do not interpret such neural signaling the way a human does. But this gets much more complicated, since cultural influence and personal experience can affect the interpretation of the neural signaling as well. So emotional experiences are not the same from person to person; there are differences, yet we all believe or argue that emotional interpretation is universal to all humanity. To understand emotions we need to monitor neural activity on a connection-by-connection basis.

One method of doing so is optogenetics. This process is very exotic and surprisingly effective. Optogenetics involves modifying the genetics of an animal so that its neurons not only output the chemical transmitters needed to function but also emit light as they fire! With that, one can build interfaces of fiber-optic probes into an animal’s brain and listen to the neural activity. Not only that, but the neurons can be affected by light emitted by the probes as well. This will lead to a much more detailed understanding of the neural code of brains, and an understanding of emotions.

People drew maps of body locations where they feel basic emotions (top row) and more complex ones (bottom row). Hot colors show regions that people say are stimulated during the emotion. Cool colors indicate deactivated areas.

However, there has been much work on the concept of emotions by psychologists, where there are three tiers of emotions. There has also been work on how humans feel emotions, and one such study demonstrated an almost universal body map of how we feel emotions. With that said, can we model emotions?

Plutchik’s wheel of emotions gives us a concept of tiered emotions: emotions have core origins starting with 8 primary emotions that then extend to secondary and tertiary emotions. However, others have argued that Plutchik’s wheel doesn’t capture all human emotions. The model from Parrott, Shaver, et al. is the one I decided to use.

Example of an Emotion Wheel that starts with 6 primary emotions

Using the OODM Descriptor model, we can actually model emotions! Because of OODM’s inheritance ability, the relationships between the tiers of emotions and how they are derived can be described.

The image above shows how emotions can be described with Descriptors. Note that only MicroDescriptors are used, and also notice that inheritance is used.

The entire emotion wheel is structured into classes that all inherit from a “Base Class Emotion”. The base class emotion contains the common descriptors used for all emotions. The MicroDescriptors facial expression, secondary, none (I will explain this later), and bodymap ProtoVector SubTypes have attributes whose data are actually vectors.

The base Class Emotion contains the basic three MicroDescriptors for all emotions, as shown in the image above.

The MicroDescriptor “none” (do note that all MicroDescriptors are listed by their SubClass Types in this viewer) is actually the arousal Group ProtoVector, as shown in the image above. This group, along with the emotions, was created to model emotions, and the arousal state can be set to any of the listed subclass types. Each subclass type has vectors associated with it. So the state of “none” means no vector state has been set yet.
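Here is a simplified sketch of that inheritance scheme; the descriptor names follow the model above, but all vector values are placeholders, not measured data:

```python
import copy

class Emotion:
    """Stand-in for the 'Base Class Emotion' RootDescriptor: every emotion
    inherits these vector-valued MicroDescriptors and may override them."""
    base_descriptors = {
        "facial_expression": [0.0] * 5,   # facial-muscle activation vector
        "bodymap":           [0.0] * 8,   # where the emotion is felt
        "arousal":           "none",      # 'none' until a vector state is set
    }

    def __init__(self, name, tier, parent=None, **overrides):
        self.name, self.tier, self.parent = name, tier, parent
        source = parent.descriptors if parent else Emotion.base_descriptors
        self.descriptors = copy.deepcopy(source)
        self.descriptors.update(overrides)   # per-emotion specialization

love = Emotion("love", tier=1, arousal=0.6)
adoration = Emotion("adoration", tier=2, parent=love,
                    bodymap=[0.9, 0.7, 0.2, 0.0, 0.0, 0.0, 0.0, 0.4])
print(adoration.parent.name, adoration.descriptors["arousal"])   # love 0.6
```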

Once all emotions have been described, we can build charts of the data, as shown in the image above. The list of all emotions is on the left-hand side. Select an emotion and the charts will describe its hierarchy and any higher tiers it represents. If you select one of the higher tiers on the right-hand side, the charts below describe the chemical-signature vectors, the arousal and valence vectors, and the facial-muscle activations associated with the emotion, as well as the body map of where humans feel the emotion. You’ll also notice sliders on the lower left-hand side of the panel where the vectors can be adjusted.

OODM proved adequate for modeling or describing emotions, so they are more than a word or a state: they are a set of concepts that give the emotion meaning. The need to interpret stimuli into an emotion would be handled by a separate algorithm, which could very well be a neural network!