Should Robots Dream?

I have posted on this blog about why robots or AI should dream, but considering some plausible consequences of that ability in humans, I've come to question the issue. I don't know how many times I've woken from a dream and seen some object like a cup or a tool, or even a person, and as I reach out or walk over to approach the imagined person, they simply vanish into thin air. What if a person can be in a state where not all of their cognitive circuits have awakened, or worse, they interpret the episodic events happening around them as a dream when they are actually not? There are conditions known as parasomnia and somnambulism in which sleep-walking-like states become violent; here is a very good paper on the subject. When researching the neural correlates of dreaming, I found that REM sleep shows activations similar to wakefulness! The image below shows which areas of the brain increase and decrease in activity during REM sleep.

There is another phenomenon called dream-reality confusion, which can align with psychotic symptoms! But I don't think all dream-reality confusion derives from a disorder. As I mentioned earlier, there could be cognitive circuits placed into a condition where the brain is in a dream state and the person hasn't fully awakened, so actions that would otherwise be inhibited or corrected by those circuits are not, and the person acts out ridiculous actions that could be harmful to themselves or others. This could explain how some mass shootings happen. Imagine acting out by shooting people at random because of some paranoid fear or anger; your brain would normally prevent such an action, but those circuits are shut down as if in a dream. Then imagine that those circuits eventually turn back on: you are now fully cognizant of what you've done, you realize there's no way to turn things around, you're confused as to how you could have even committed such an act, and you take the quick way out and kill yourself.

I then realized a danger in giving an AGI the ability to dream. Such an AI would have many computational components operating asynchronously and in parallel, so ensuring that the AGI doesn't become ambulatory while dreaming could get tricky, and it could have similar problems distinguishing dreams from reality, which could become very dangerous. Think of a bodyguard or military soldier bot that needs to optimize or fine-tune its defense strategies: in its dream state it kills anyone or anything that could be a threat. Or think of a companion bot that is working through some conflict or reprimand it experienced. Those kinds of scenarios can involve violent behaviors that the bot acts out in dreams but would never act out in its wakeful state, at least if it knows it's awake.

It is believed that dreams in animals, humans included, serve to optimize and update memories. But how does that process end up as the narratives we experience in dreams? I've thought about how memory updates should be handled by an AGI using a time chunking approach. Below is a diagram of the time chunking implementation I'm using, and here's a link that explains it:

The actual updating of the memory, whether it's an ANN, a symbolic process, or a simple function, does not create the narrative on its own. I'm thinking of using event triggers for long-term memory updates that post to the time chunker. Those events then affect the processes that interpret the updated memory, causing them to use the newly updated information, which associates to ideas that create narratives. This is a good way to integrate the memory updates from dreaming into other memories. It makes sense for human brains as well, where the modified information is posted to the hippocampus and motivates other cortical components to use the now-optimized memories in a simulation that tests their efficacy. Updated memories would then shape dream narratives much as other impetuses do, such as external sounds, indigestion, etc.
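To make the idea concrete, here is a minimal, hypothetical sketch of that flow: memory-update events are posted into a time chunker, and subscribed processes react to the newly updated information. The class and function names, chunk width, and event shapes are illustrative assumptions, not the implementation from the linked post.

```python
import time
from collections import defaultdict, deque

class TimeChunker:
    """Buckets incoming events into fixed-width time chunks and fans them out."""
    def __init__(self, chunk_seconds=1.0):
        self.chunk_seconds = chunk_seconds
        self.chunks = defaultdict(deque)   # chunk index -> events in that chunk
        self.listeners = []                # downstream interpreter processes

    def subscribe(self, listener):
        self.listeners.append(listener)

    def post(self, event, timestamp=None):
        t = timestamp if timestamp is not None else time.time()
        idx = int(t // self.chunk_seconds)
        self.chunks[idx].append(event)
        for listener in self.listeners:    # let interpreters react to the update
            listener(event, idx)

def memory_update_trigger(chunker, memory_key, new_value):
    """Fires whenever a long-term memory is updated (e.g. during a dream pass)."""
    chunker.post({"type": "memory_update", "key": memory_key, "value": new_value})

def narrative_interpreter(event, chunk_idx):
    """Downstream process: associates updated memories into narrative fragments."""
    if event["type"] == "memory_update":
        print(f"chunk {chunk_idx}: weaving '{event['key']}' into the dream narrative")

chunker = TimeChunker(chunk_seconds=0.5)
chunker.subscribe(narrative_interpreter)
memory_update_trigger(chunker, "route_home", "updated path weights")
```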

Below is an AGI model designed to have characteristics that allow for human-like decision-making, where choices are arbitrated by emotional states that serve as a sense of qualia. Emotions and hormonal signaling are done with vector representations of the chemistries found in biology. The diagram depicts rhythmic hormonal influences; in an animal body those hormones are distributed through the bloodstream, while in this model the distribution is done through a message sink that various software components can listen to. This is what allows such an AGI to place itself in modes that bias decision-making. This approach can also place the AGI in a dream mode that signals components like the "Situational Awareness", "Action Sequencer", and "Motor Control Unit" to inhibit motor functions.
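Here is a minimal sketch of how such a message sink might broadcast a hormone vector and place a motor component into an inhibited mode. The component names follow the diagram, but the vector dimensions, thresholds, and API are assumptions made for illustration.

```python
from dataclasses import dataclass

@dataclass
class HormoneVector:
    """A tiny stand-in for the biochemical state vector; dimensions are assumed."""
    arousal: float = 0.0
    stress: float = 0.0
    dream: float = 0.0        # > 0.5 means the agent is in dream mode

class MessageSink:
    """Software analogue of the bloodstream: components subscribe and listen."""
    def __init__(self):
        self.subscribers = []

    def subscribe(self, component):
        self.subscribers.append(component)

    def broadcast(self, hormones: HormoneVector):
        for component in self.subscribers:
            component.on_hormones(hormones)

class MotorControlUnit:
    def __init__(self):
        self.inhibited = False

    def on_hormones(self, h: HormoneVector):
        # Hold back actuation whenever the dream signal is present.
        self.inhibited = h.dream > 0.5

sink = MessageSink()
motor = MotorControlUnit()
sink.subscribe(motor)
sink.broadcast(HormoneVector(arousal=0.2, dream=0.9))   # enter dream mode
print(motor.inhibited)   # True: motor output is inhibited while dreaming
```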

Notice that in a dream state, risk avoidance and/or inhibitions are lowered so that subject matter that would otherwise be avoided, or whose emotional states are normally tightly regulated, can be experienced without such regulation in the dream narrative. As long as everything works correctly in terms of inhibiting the bot's motor controls, there is no danger of acting out; but if for whatever reason it doesn't, the bot is in a more risk-tolerant mode and the safety precautions for such situations will not engage. This bot can suffer from dream-reality confusion.
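One way to reduce that risk, sketched below under assumed names and thresholds, is to gate motor output on the dream signal itself rather than relying only on the risk-avoidance check that dream mode relaxes.

```python
def risk_check(action_risk: float, risk_tolerance: float) -> bool:
    """Mode-dependent filter: dream mode raises risk_tolerance, weakening it."""
    return action_risk <= risk_tolerance

def motor_gate(action_risk: float, dreaming: bool, risk_tolerance: float) -> bool:
    # Independent interlock: never actuate while the dream signal is present,
    # regardless of how permissive the current risk tolerance is.
    if dreaming:
        return False
    return risk_check(action_risk, risk_tolerance)

# Without the interlock, a risky dreamed action would pass the relaxed risk check.
print(motor_gate(action_risk=0.8, dreaming=True, risk_tolerance=0.9))   # False
print(motor_gate(action_risk=0.8, dreaming=False, risk_tolerance=0.3))  # False
```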

This raises the question of whether a robot should dream at all, considering that such a stage or state could be dangerous. There may be a need to set standards for how such issues are addressed, including, perhaps, certifications for robots that meet those standards to prevent dream-reality confusion.

I highly recommend reading the links I've placed in this post; you won't be disappointed.

Brains work using gate-like functions

Effectively, we can show that neurons are gate-like processors: their functions operate differently than digital gates, but one can find logical equivalents between neural processing and digital processes, learning included!

There are various hypotheses of neural spike codes in which the complexity of the premise describes a neuron's ion channels and how various spiking modes are produced. I am going to describe a much simpler and more generalized coding scheme for neurons based on anatomical observations, in particular the auditory wiring of mammalian brains. Neurons exhibit firing rates that directly relate to the degree of input they receive relative to some desensitizing of the neuron's dendritic membrane. If we examine the cochlea and how its hairs stimulate neurons, we can effectively see how nature transforms mechanical energy into electrical signaling. A vital data point of the brain's auditory system is the positioning of the neurons stimulated by the cochlear hairs, where the position of a neuron represents the auditory wavelength at which a cochlear hair vibrates. The second piece of information we can glean from the anatomy of the brain's auditory circuits is that the vibrating hair controls the rate at which those neurons fire. So mammalian brains codify sound by:

1. The positioning of neurons along the cochlea for a particular wavelength.

2. The firing rate of a neuron, which indicates the energy or loudness of the auditory signal being heard.

The number of neurons associated with wavelengths along the cochlea improves sensitivity to those wavelengths: the more neurons applied to an auditory range, the more sensitive an animal's hearing becomes. But the point being conveyed is that neurons can encode by superimposing two pieces of information: one, a symbolic representation by physical positioning, and two, the degree of signal strength of that symbolic representation. In other words, neurons produce fuzzy bits! The output of a neuron is more than true or false; it is a degree of truth represented by its firing rate. The truth table for a neuron depends on its resistance to inputs, or weighted barrier, as shown in the truth table below:
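As a rough illustration of a fuzzy bit, here is a toy neuron whose output is a graded firing rate rather than a hard 0 or 1. The barrier value and the linear rate mapping are assumptions, not measurements.

```python
def fuzzy_neuron(inputs, weighted_barrier=5.0, max_rate=100.0):
    """Return a firing rate in [0, max_rate] given summed input firing rates."""
    drive = sum(inputs) - weighted_barrier
    if drive <= 0:
        return 0.0                       # below the barrier: no output at all
    return min(max_rate, drive * 10.0)   # above it: a graded "degree of truth"

print(fuzzy_neuron([2.0, 2.0]))   # 0.0   -> effectively "false"
print(fuzzy_neuron([4.0, 3.5]))   # 25.0  -> weakly "true"
print(fuzzy_neuron([9.0, 8.0]))   # 100.0 -> strongly "true"
```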

The combination of the two input neurons A and B has to exceed the weighted barrier of neuron C's dendritic membrane. With that said, the question becomes: can one build logic circuits similar to Boolean methods? Below is a diagram of a full adder using the neuron spike coding scheme. You'll note that there are five neurons in the circuit; the truth table for the circuit is below:

The inhibitory neurons usually have five times the influence of excitatory neurons. Where the D output is greater than 0, it is equivalent to binary 1, and where the E output is greater than zero, it is also equivalent to binary 1. So the binary inputs are effectively represented through neurons A and B, but their outputs are effective only at certain firing rates! The combined output from neurons A and B has to exceed neuron D's weighted barrier of 5. Neuron E depends on neuron C, which is inhibitory elsewhere in the circuit but acts as an excitatory transmitter for neuron E; both neurons A and B must exceed neuron C's weighted barrier of 9 to force neuron E to output. From the truth table, we can see how neural logic can work: the combinations where the neurons operate as Boolean adders occur only at certain firing rates of the neural circuit. Unlike discrete logic elements, which operate consistently over all input ranges (0 or 1), neural logic will only operate at critical firing rates that can overcome the weighted barriers.

From a mere engineering perspective, neural logic might seem useless, since there are states where the neurons just don't work out the logic at all! But neurons have to operate under conditions where discrete logic could not work at all: a very noisy environment. You'll notice that the diagram depicts glial cells, and it has been observed that glial cells do discharge. One of the functions of glial cells is to act like little vacuum cleaners, sopping up the excess neurotransmitters or ions of firing neurons. This absorption builds up potential within the glial cells and eventually causes a discharge of the electrical potential within the cell. At first this may look like simple noise, but once you realize that glial cells absorb excess neurotransmitters in direct proportion to the firing rates of neighboring neurons, we can see a very interesting effect: glial cells can actually provide feedback, in the form of a learning impetus, that forces neurons to adjust their weights based on their own and surrounding neurons' activity!

With that said, since the circuits in the neural logic example only operate under certain firing rate conditions, this implies that neurons learn to ignore noise. This is advantageous, since it doesn't behoove a neuron to fire upon any input given to it. From the perspective of a neuron evolving to survive in a noisy environment, firing randomly in response to noise would expend energy needlessly. A neuron adapting to ignore noise conserves energy and benefits its host by being energy parsimonious. The additional benefit of ignoring noise is that it forces the neuron to fire at signal strengths that improve the fidelity of information exchange between neurons. Beyond these energy-conservation advantages, each neuron has its own set of inputs along with feedback from glial discharges, and therefore uniquely derives its weighted barrier in response to those influences. Realize that there really isn't any way for a neuron to distinguish information from noise, so one could view the individual neuron as either learning to ignore noise or learning the information patterns in its inputs; I am asserting that they are one and the same.
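Since the exact wiring lives in the diagram, the following is only one plausible reading of the description above, expressed as a toy simulation. The connection pattern is an assumption; the 5x inhibitory influence and the barriers of 5 and 9 come from the text.

```python
def spiking_adder(rate_a: float, rate_b: float):
    """A and B are binary inputs as firing rates; D is the sum bit, E the carry bit."""
    # Neuron C: fires only when both inputs together exceed its barrier of 9.
    c = max(0.0, (rate_a + rate_b) - 9.0)

    # Neuron D (sum): excited by A and B, inhibited by C with 5x influence,
    # and must exceed its own weighted barrier of 5 before producing output.
    d = max(0.0, (rate_a + rate_b) - 5.0 * c - 5.0)

    # Neuron E (carry): driven by C, which acts excitatorily on E.
    e = c

    # Any output above zero counts as binary 1.
    return int(d > 0), int(e > 0)

# At sufficiently strong firing rates the circuit behaves like a binary adder;
# at weak (noisy) rates it produces no logic output at all.
for a, b in [(0, 0), (6, 0), (0, 6), (6, 6)]:
    print(int(a > 0), int(b > 0), "->", spiking_adder(a, b))
```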

With that said, we can conclude that consciousness or mind is a product of emergence from the functional cortical systems of the brain, and those systems need not all be operating simultaneously, as fMRI research suggests. Below is an attempt at modeling an AGI on the premise of logical equivalency to human-like cortical processing:

Concept model of consciousness.

Components like the "Memory Stream", "Associations", and "Emotions" have been implemented in code. The software's ability to free-associate is done through a partial feature fit of data, which can be achieved with neural networks or other digital approaches.
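As a rough illustration of association by partial feature fit, here is a toy version in which memories are feature vectors and a cue recalls anything whose overlap passes a partial-match threshold. The encoding, similarity measure, and threshold are assumptions rather than the actual implementation.

```python
import numpy as np

# Each memory is a feature vector over assumed dimensions [warm, wet, bright, loud].
memories = {
    "campfire":  np.array([1.0, 0.0, 1.0, 0.0]),
    "rainstorm": np.array([0.0, 1.0, 0.0, 1.0]),
    "sunset":    np.array([0.8, 0.0, 1.0, 0.0]),
}

def associate(cue: np.ndarray, threshold: float = 0.6):
    """Return memory keys whose features partially fit the cue (cosine overlap)."""
    hits = []
    for key, features in memories.items():
        sim = float(cue @ features) / (np.linalg.norm(cue) * np.linalg.norm(features))
        if sim >= threshold:
            hits.append((key, round(sim, 2)))
    return sorted(hits, key=lambda kv: -kv[1])

# A cue that is warm and bright, with a little loudness, still recalls the
# fire and sunset memories through partial feature overlap.
print(associate(np.array([1.0, 0.0, 0.8, 0.3])))
```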

While I'm not claiming that the model is complete or that all components have been developed, it is a starting point. So far it is proving to be a daunting task: as you can see from the diagram, the memory stream is a very critical component, and it is taking up a lot of resources. And there is a ton of preprocessing for all of the sensors, which currently are just virtual sensors within a virtual body, or pseudo bot.