
Rants on Neuro and Tech for Sci-Fi AI (Updated)


New text is denoted with the :new:.  There are two tag-ons to old sections and two completely new sections: one on a hormonal pathway for controlling memory salience, and the other my overview of consciousness (sorry for being hard on Orch-OR... but everyone kinda is, and I agree with the majority in this case).

Foreword:


This is more or less me thinking.  I'm trying to keep the sections in some sort of coherent organization, but the parts inside each section only make as much sense as my mad conceptual mind does on a raw draft.  Let me state my wild opinions, based on my body of knowledge, up here: whatever consciousness and intelligence are, I see no reason why we cannot (theoretically) replicate these phenomena in machines if we ever do understand them well enough.  If this can be done, then there is no reason why machines should be excluded from human faculties such as pattern recognition, executive decision making and even emotions.  That said, making such a perfect machine replica of a human, though theoretically possible, would come at such a cost that it makes no logical sense to do so.  Besides, it would have most of the flaws of the organism.  It's way cheaper to hire an organism that already exists, can potentially be bred cheaply, self-repairs, and is disposable and replaceable.  This would be especially true if the machinery required to replicate certain aspects of organisms has to be larger than the organisms themselves, possibly precluding actually replacing them.

My suggestion is that all AI should be created with a limited and specific set of tasks in mind (e.g. there are purely social AIs, weapon AIs, job-specific AIs, etc.) and be specialized for that task.  In this case, though they may be of invaluable use and worth the cost and effort to create, their optimization for their task and their limitations mean that you still need organisms (the adaptive jacks-of-all-trades) to assist them in actually accomplishing anything in the long run.  Coexistence achieved, Matrix avoided.

Further sections will be added over time, possibly in multiple documents if I surpass the document limits.  I will also have to go back and reread this at some point, because I am certain it is riddled with typos, but I cannot deal with them right after writing this (5K words in one morning and early afternoon, cheers).

Major Differences:

I'll keep updating this as I go along
:bulletblack:Flexibility-Organic memories are subject to change over time, machine "memories" are not.  Artificial networks struggle to learn contradictory tasks, but likely learn initial tasks faster
:bulletblack:Forgetting-Organic memories that are deemed irrelevant are discarded.  No such filter has been developed for AI, and if they are recording "memories," they record everything.  This leads to a huge difference in the data load each carries.
:bulletblack:Differential Organization-AI that records everything stores all information as a chronological stream of recall, not as discrete memories.  This makes keyword searching next to impossible.  It also prevents certain memories from becoming stronger than others: all moments in time have the same weight.
:bulletblack:Energy Utilization-There is a negligible difference in energy use between a human brain at rest and one in the midst of intense computational tasks.  In contrast, the more complex the task a computer is given, the more energy it will require to complete it.
:bulletblack:Processing Method-Computers do tasks one at a time.  This is why they almost never err.  Humans, though terrible at multitasking, can, within one task, account for multiple concepts at the same time.  This increases the error rate, but decreases the time required. *Caveat below with newer circuitry methods (not yet written)
:bulletblack:Renewal and Repair-Organisms can continue to create new neurons (in a limited number in limited regions) and can repair themselves as well as move functions from one area to another in response to damage.  Machines do not have this ability.


Brief Neuron Overview:


You may have already heard this a million times (I certainly have), but it is always worth repeating.  I'm also oversimplifying, but I'm more than happy to be more direct in explaining any concepts you want to know more about.  I don't want to force too much jargon or redundant info on you, but I do think understanding the basic mechanism by which neurons work gives you a better idea of how the more advanced ideas work.

Neurons (also called nerve cells) have three major structures: the soma, the dendrites and the axon.  The soma is the "cell body."  This is where all your typical cell machinery resides (e.g. the nucleus with the DNA).  Off of the soma, you have a series of thin, tube-like projections (neurites).  One neurite is the axon, and the rest are the dendrites.  Dendrites receive signals (from cells or external stimuli), while axons send signals.

Axons are more or less straightforward.  When the cell accumulates enough "go" signals in a short period of time, an action potential (AP) is created at the start of the axon where it connects to the soma.  An AP is basically just an electrical current.  Once generated, it is propagated ("sent," in a sense) down the axon.  The end of the axon branches into multiple termini/buttons/boutons (I'll use bouton), and these will all be affected by the AP and told to send a chemical signal (neurotransmitters) to the dendrites.  The neurotransmitters that the axon releases are critical for determining how a neuron behaves in a circuit.
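
If it helps to see the "accumulate enough go signals quickly enough, then fire" idea in code, here is a minimal sketch using the standard leaky integrate-and-fire abstraction.  All of the numbers (threshold, leak rate) are illustrative placeholders, not biological measurements.

# Minimal leaky integrate-and-fire sketch of "enough go signals in a short
# window -> action potential".  Parameter values are illustrative only.

def simulate_lif(inputs, threshold=1.0, leak=0.9, reset=0.0):
    """inputs: summed synaptic drive per time step (+ excitatory, - inhibitory)."""
    v = 0.0                     # membrane potential (arbitrary units)
    spikes = []
    for t, drive in enumerate(inputs):
        v = v * leak + drive    # old charge leaks away, new input adds on
        if v >= threshold:      # enough "go" signal accumulated quickly enough
            spikes.append(t)    # fire an action potential...
            v = reset           # ...and reset the membrane potential
    return spikes

# Closely spaced excitatory input fires; the same total input spread out does not.
print(simulate_lif([0.4, 0.4, 0.4, 0.0, 0.0]))                # -> [2]
print(simulate_lif([0.4, 0.0, 0.0, 0.4, 0.0, 0.0, 0.4]))      # -> []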

Dendrites are more complicated, and we know less about them (this is what I study).  While axons more or less look similar in all neurons, dendrites are strikingly different across different subtypes of neurons.  Axons only branch at their terminus, but dendrites branch across the entire length of the dendritic shaft (the original neurite).  Furthermore, the branches can also branch and so on.  Additionally, the dendrites can have very small structures studded across the shaft and the branches known as dendritic spines.  Spines are critical sites for excitatory synapses (go signals), though inhibitory synapses (stop signals) can also be present on spines.  Branches are believed to be important for information processing of the signals generated in spines and directly on the branches.  It is likely that the structure of the entire dendritic arbor (the branches and spines and shaft synapses) is also critical for determining a neuron's role in a neural circuit.

As for the synapse itself, this is a location where an axonal bouton and some part of the dendrite that has a post-synaptic density (PSD: a protein-rich structure that is able to sense and respond to neurotransmitters) get extremely close together, but do not actually touch.  The space between them is the synaptic cleft.  When the bouton is stimulated by an AP, this triggers the release of neurotransmitters (I'll save you the mechanism, unless you are interested).  Technically, I said that neurotransmitters (NTs) were chemicals before, which is true of many NTs (glutamate, GABA, dopamine, serotonin), but some are also very small proteins (or peptides), such as neuropeptide Y.  The kings of the neurotransmitters are GABA (gamma-aminobutyric acid) and glutamate.  Glutamate is excitatory, GABA is inhibitory.  The rest of the neurotransmitters act as signal modulators (ramp it up, or ramp it down).  This is not to say that they aren't important, they certainly are, but you're one dead organism without glutamate and GABA.  So the NTs are in the synaptic cleft.  The PSD on the dendrite senses these through proteins known as "receptors."  A receptor can bind temporarily to the NT it is tuned for (e.g. AMPA receptors bind glutamate, but not GABA), and then the receptor leads to the generation of a new electrical signal in the dendrite (also saving you this mechanism).  This is an extremely rapid process, and the NTs are very quickly cleared from the synaptic cleft by the axon and any neighboring glial cells "reuptaking" the NTs for degradation or recycling them for further use.

Other Cells in the Nervous System:


They are all thrown into the junk box otherwise known as "glia."  I'll just bullet them off, because I may be mentioning some of them along the way, and they make a difference in your AI-versus-organics paradigm.

-Astroglia: Neuron feeders. Maintain the blood/brain barrier.  Involved in repair. Same original cell lineage as neurons.
-Radial glia: Neural progenitor cells (stem cells-ish, depending on your definition of a stem cell).  They also give rise to many (if not most, I would have to check) of the glia subtypes.  They are most critical in development, but do persist into adulthood.  In humans they are critical for the hippocampus (involved in forming short term memories, more to follow below).  It's still controversial whether the subventricular zone (critical for creating new olfactory neurons for one's sense of smell) continues to generate new neurons in adult humans, but it certainly does in rodents.  Same original lineage as neurons.
-Oligodendrocytes/Schwann cells:  I neglected to mention above that the axons of many mammalian neurons are covered by a fatty substance called myelin.  Myelin insulates axons to reduce electrical loss in APs and to speed the rate of travel.  In nonmyelinated neurons, the signal has to be propagated along every nanometer of the axon.  In myelinated cells, the current "jumps" (saltatory conduction).  This is because myelin is put down in a tiled fashion, where you have one section wrapped in myelin, a tiny space (a node of Ranvier), and then the next section wrapped in myelin.  The AP jumps from node of Ranvier to node of Ranvier.  Oligodendrocytes and Schwann cells produce myelin.  Oligos do it in the brain and spinal cord, while Schwanns do it in the peripheral nervous system (think about the axons that go from your brain down to your toes).  Same original lineage as neurons.
-Microglia: These are macrophages (immune cells) that reside in the brain.  They may be critical for forgetting, as newer research suggests, and they also remove dead cells and debris from the nervous system.  NOT the same lineage as neurons.  Again, these are immune cells, and thus of hematopoietic origin (bone marrow).

There are certainly more subtypes, but these are the big players you hear about.

The Basics of Network Theory:


Not my area, but I do have a basic overview from trying to understand the connectome paper I presented in a journal club last year.  Long story short, there are three types or "layers" of nodes in networks.  There is the input layer, the hidden layer and the output layer.  I think the input and output layers are pretty self-explanatory: the input is the layer that first receives information, and the output layer is what spits out that information after it has been processed.  Thus, the hidden layer (or layers, more likely) is what does the information processing.  We have NO idea how the hidden layers work, not when we apply network theory to the brain, and not when we apply it to artificial computer networks.  We do have some measures of node importance, though.  For example, you can define important nodes as being the ones that are connected to the most other nodes.  In another scenario, maybe you're interested in nodes that are critical information conduits.  So if you have two separate networks that are connected by one node, the connector node is critical for transporting information across the two networks.  There are other measures of course, but the point is, we can figure out which nodes are important, and perhaps hazard a guess as to why they matter to the network as a whole, but as for what each node actually does?  No idea.
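
To make those two importance measures concrete, here is a tiny sketch on a made-up graph using the networkx library: degree centrality counts a node's connections, while betweenness centrality picks out connector nodes that bridge otherwise separate clusters.  The graph itself is purely illustrative.

# Toy illustration of the two "node importance" measures mentioned above:
# degree (how many neighbours) and betweenness (how often a node sits on the
# shortest paths between others, i.e. a connector between sub-networks).
import networkx as nx

G = nx.Graph()
# two small clusters...
G.add_edges_from([("a1", "a2"), ("a2", "a3"), ("a1", "a3")])
G.add_edges_from([("b1", "b2"), ("b2", "b3"), ("b1", "b3")])
# ...joined by a single connector node
G.add_edges_from([("a3", "hub"), ("hub", "b1")])

print(nx.degree_centrality(G))       # "hub" has fewer connections than a3 or b1...
print(nx.betweenness_centrality(G))  # ...but the highest betweenness: all traffic
                                     # between the two clusters has to pass through it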

Computational neuroscientists are hard at work creating artificial neurons, trying to distill out the more important aspects of what makes a neuron work.  Here's what I've picked up from mostly skimming what I can understand of these networks and one or two reviews: These networks can learn.  They can be trained to perform, for example, a discrimination task and get very good at it.  The problem is with retraining.

So in organisms, if you teach rats to associate a bell with a foot shock, they can learn to anticipate a shock when they hear the bell. That's called conditioning.  Let's say you take these conditioned rats and now start playing the bell, but no foot shock occurs.  Over a short period of time, the rats are going to learn that the bell no longer precedes a foot shock, and they will no longer fear the bell. This is called extinction.  Despite the name, it's not actually a process of forgetting or destroying the old memory, but rather learning a new one.  If you then take the rats that no longer fear the bell and start shocking them after they hear the bell again, these rats will relearn that the bell is associated with a shock far faster than a naive rat (one that has never heard the bell or been shocked) will learn the association.  In other words, organisms are excellent at both learning and relearning; they can switch between two contradictory memories (bell means shock, bell means nothing) very quickly.

Artificial networks don't do this.  Once optimized for one task, it is very difficult for them to learn a new, contradictory task.  They end up becoming less proficient at the new task than they were at the old task.  I don't know what happens if you then give them the original task again, as the review I read didn't cover this.  I'm not sure if they can relearn the original task with equal efficiency as before, or if learning the second task continues to interfere.:bulletred::bulletred::bulletred:  Anyway, the point is that artificial neurons do not change their programming as well as real, organic neurons do (points for organics), but the artificial ones may learn the initial task faster and do it more efficiently.  That, again, I am unsure of, but it would not surprise me (potential points for AI?).
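
As a toy demonstration of the interference problem (not the specific networks from that review, just a minimal sketch I'm making up): a single logistic unit has one set of weights that must serve every mapping it is taught, so training it on a contradictory rule overwrites the first one.  Real catastrophic-forgetting studies use larger networks and partially overlapping tasks, but the flavor is the same.

# Toy demonstration of interference in an artificial network: one set of
# weights has to serve every task, so training on a contradictory mapping
# overwrites the first one.  Purely illustrative.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 2))
task_a = (X[:, 0] > 0).astype(float)   # task A: respond when x0 is positive
task_b = 1.0 - task_a                  # task B: the contradictory rule

def train(X, y, w, b, steps=500, lr=0.5):
    for _ in range(steps):
        p = 1 / (1 + np.exp(-(X @ w + b)))   # logistic unit
        grad = p - y                         # cross-entropy gradient
        w -= lr * (X.T @ grad) / len(y)
        b -= lr * grad.mean()
    return w, b

def accuracy(X, y, w, b):
    return (((X @ w + b) > 0) == y.astype(bool)).mean()

w, b = np.zeros(2), 0.0
w, b = train(X, task_a, w, b)
print("task A after learning A:", accuracy(X, task_a, w, b))   # ~1.0
w, b = train(X, task_b, w, b)
print("task A after learning B:", accuracy(X, task_a, w, b))   # near 0.0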

Memories: Writing, Storing, Recalling:


Are memories stored in dendritic spines (just "spines" from here on out)?  Perhaps.  As far as I am aware, no one has tested this directly, because it would require (in a live animal) finding which spines changed before and after learning a task, on which neurons in which brain regions, hyper-specifically destroying just that spine or those spines, and then testing recall.  This may be an outright impossible task.  Plus, what about the shaft synapses?  Changes there must also be important, but we don't study them.  Furthermore, in some cases, we see a WEAKENING of synapses in learning (if I am remembering correctly, a study looking at cocaine addiction found that synapses were weakened in D(1 or 2?  I don't remember) neurons after cocaine exposure).  I would imagine that memories are not so much a concrete structure, but rather a pattern of activation, but that's me speaking for myself.  I'm not sure what the field thinks at large.

Okay, so what do we know?  Changes in spines absolutely correlate with learning.  Most of our research is done in cortical (long term memory) and hippocampal (short term memory) neurons, and in these cases, spines and synapses look larger and more "mature" (more mushroom shaped, less long and thin) after learning.  To broach a little bit on Orch-OR and MTs, cytoskeletal changes are critical for spine maturation.  Actin is the primary cytoskeletal element in spines, and mature spines have more of it to make them larger.  Microtubules (MTs), though not a stable structure in spines, seem to also be important.  When you block dynamic entry of MTs into spines using MT drugs, spines shrink or disappear, but recover as soon as the drugs are removed (Hoogenraad did the foundational work on this subject).  So what other evidence is there for the importance of spine shape in cognition?  Getting back to the overall structure of spines, in autism and Fragile-X, we find that the number of spines is decreased and that spines are long and immature, respectively.  There's a loss of dendritic spines in schizophrenia as well.  Naturally, you see spine loss in neurodegenerative disorders such as Alzheimer's disease.  So spines are undoubtedly important for memory formation, but refer to the caveats in the first paragraph.  Furthermore, you also see changes in branching in these conditions, usually a loss of complexity, so spines and how they integrate on branches may be part of the story.

Let's talk more about the basic information flow from the psychology perspective.  In order to form a memory, you first need to experience something.  No duh, right?  I won't go into details, but before information reaches your conscious, working memory (whether it's a sight, a sound, a smell, whatever), it undergoes an IMMENSE amount of unconscious processing to increase the "signal to noise" ratio (remove unnecessary information, and make the important parts clearer).  In vision, this includes sorting the information from both eyes, sensing motion, creating sharp edges and outlines, applying color, filling in the retinal blind spot, object recognition, etc.  It's wildly computation-heavy, and is something that robotics currently struggles with.  Anyway, the information then pops into our consciousness in working memory.  Fun fact here: the working memory limit in humans is 7 items with a standard deviation of 2, so roughly 68% of people can hold 5-9 pieces of information at once.  I'm on the low end and can only hold 3-4 pieces. :<  That said, the size of a piece of information varies.  So four random numbers is four pieces of information, but the numbers 1-10 is only one piece of information, because it's one pattern.  This is called "chunking."  I can't hold many pieces of information, but I can chunk like nobody's business.
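
Just to illustrate the counting, here's a throwaway sketch of the idea that capacity is measured in chunks rather than raw items.  The "familiar pattern" detector here (a consecutive run of numbers) is obviously a stand-in for whatever patterns a real brain recognizes.

# Toy illustration of "chunking": working-memory capacity is counted in chunks,
# not raw items, so a recognisable pattern costs one slot while the same number
# of unrelated digits costs one slot each.  Purely illustrative.
def chunk(items):
    # if the items form one familiar pattern (here: a consecutive run),
    # store them as a single chunk; otherwise each item is its own chunk
    if list(items) == list(range(items[0], items[0] + len(items))):
        return [f"run {items[0]}-{items[-1]}"]
    return [str(i) for i in items]

print(len(chunk([7, 2, 9, 4])))          # 4 chunks: four random numbers
print(len(chunk(list(range(1, 11)))))    # 1 chunk: "the numbers 1-10"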

Anyway, back to the story: information in conscious working memory.  Once you're finished using the information, you can either store it or throw it away.  Storage requires protein synthesis, so your hippocampal neurons definitely change when you are creating memories.  From short term memory, information is easily called back into working memory with little effort (recall).  The final move is from short term to long term memory, which can take years in humans.  This seems to be a result of memories being "handed off" from the hippocampus into the cortex.  How this occurs, I'm not sure.  I don't think hippocampal neurons are actually migrating into the cortex, but rather the cortical neurons are somehow recreating the memory in themselves, and then I'm also not sure what happens to the hippocampal neuron.  Since you need neurogenesis to continue to form new memories, I would imagine that the hippocampal neuron dies once the cortical neurons take hold of its memories, but I'm not sure.  :new: I didn't find the specific answer yet, but I did find a review that went over how the prefrontal cortex and the hippocampus interact; the cortical cells display a unique firing pattern in sequence with the hippocampal cells they are connected to.  Based on this, I imagine memories are "handed off" by that firing pattern in the cortex stabilizing such that it doesn't need the hippocampus for storage anymore, and won't talk to it again until you try to recall that memory stored in the cortex.  That said, it is more difficult to retrieve memories from long term memory back into working memory.

I'm not really sure how artificial networks handle working, short term and long term memory, but I'm pretty sure they only have long term memory and working memory.  That would mean it is harder to retrieve recent information (the beauty of short term memory is that it's smaller and there's less area to search).  As for working memory, it's probably only one piece of information at a time (with the way computers are designed now), but the chunk size is probably infinite, so with the right cues, they can bring far more detailed pieces of information into working memory.

Malleability and Forgetting:


When a computer stores a piece of information, a file, it's permanent in a sense.  You can command it to be deleted, but otherwise (if it doesn't get corrupted) that piece of data never changes.  You can also bring the file up for use (working memory, in a sense) and modify it all you want, but unless you hit save (or have it autosave), the original document does not change.  So "forgetting" and "modifying" are processes that must be commanded.  If we think more in terms of networks: if your artificial neurons are pulling a past memory into working memory, you probably don't want that old memory to change, so that it can always be recovered perfectly as it was originally recorded, so you will disable the ability to save over it.  It will also save all original, unique memories (maybe in 5-minute chunks?) unless told not to.  So how do organisms handle the same situations?

We do not record the vast majority of what we experience.  How is a memory selected for storage?  Maybe conscious effort and repetition (studying) or accidental repetition (telling stories about what happened) saves a memory.  Maybe it's so emotional that it triggers a hormonal response, so you have to record it (flashbulb memories; I know the pathway activated, if you're interested).  Maybe it's something else.  The point is, there are a lot of filters deciding what gets saved and what doesn't, and we don't know what most of them are, nor do we know how a distinctive unit of time becomes the memory, while the five minutes before or after are forgotten or become part of another memory.  So what gets saved and how much is an unknown, but a lot of things get saved without us consciously meaning to, and we can decide to "save" information by engaging in behavior to make it happen (repetition).  If your AI can remember everything, it lacks these filters: everything it records will be of flashbulb-memory clarity, with no more importance in one second than in another.  This also brings up the point of varying strength of memories.  Memories are not all created the same in organisms.  Some memories are so intense you cannot block them out (PTSD), some you may not be able to access when you want to for whatever reason (tip-of-the-tongue phenomenon), and some you may never recall until the right trigger brings them back.  Unless you are delineating individual memories and placing emotional and contextual importance on them, you lose this ability to make some memories stronger than others.

Another difference is in "saving over" the old memory.  In your AI, you don't want that.  You want all memories to be perfect and pristine and unchanged.  This allows it to always be correct.  We don't follow that system.  Let me use the example my college PI gave us in class once.  Imagine you're at a high school reunion.  You see the nasty bully who tormented everyone in high school.  Those memories of the bully are now active and in working memory.  Let's say he comes over and starts talking to you.  It turns out that the bully has really grown up and is a pretty nice guy now.  Not only are you recording a new memory of how the bully is now, but you are modifying the old memories of him.  What he did before won't seem so heinous and bad, because look who he is now.  So the next time you recall how he was in high school, that memory will be altered from what it was before the reunion.  The new information is interfering with the old information.  This is why eyewitness testimony is so fraught with errors in the courts, and why police and detectives will do anything to get the statement from the witness as soon as possible.  They don't care what you think happened after you heard about the incident on the news 4 times, they want to know what you think happened immediately after it happened.  From what I can tell, we tend to give the newest information precedence (possibly this is an evolutionary survival advantage, because it doesn't matter what the circumstances were yesterday if they have changed today), but I don't have any research to back that up.  Just speculation.  There are indeed times when old information will instead interfere with new information.

Forgetting.  Here's something we all wish we did less of, but it also exists for a critical biological reason.  The mechanisms are largely unknown.  Some research suggests that microglia can eat synapses to potentially destroy memories.  When a memory is recalled, not only can it be changed, as stated above, but it must go through the storage process again.  If it fails some filter or something goes wrong, the memory will not be re-stored, thus weakening it (a mechanism some research suggests we can take advantage of in treating PTSD).  Simple cell death could lead to memory loss.  There are certainly other mechanisms I am not aware of, and that the scientific community is not yet aware of.  Why do we forget?  A lack of space to store all of that information?  Seems possible.  We certainly have to filter out vast amounts of it before it even reaches conscious perception.  This shouldn't be a problem for a machine with a sufficiently large amount of memory space.  But I think the internet makes the problem clear.  Sometimes it's next to impossible to find the information you want, because you are bombarded with so much useless crap that the good stuff is all but lost.  If you do not know EXACTLY what to search for, you may never find it.  Human memory would have the exact same problem if we recorded everything.  Furthermore, there is a case study I am aware of about a man (I believe a Russian) who remembers literally everything.  He cannot forget, and everything that reaches his conscious perception is stored.  You might think he would have a major advantage over the rest of us, but he can barely deal with his life.  Ask him when an event in his life happened, and he doesn't know the answer.  Now, if you know the date of that event, you can ask him what happened on that date.  He can tell you everything he did that day, including the event you asked for earlier.  Ask him again when the event occurred, and he still can't tell you the answer.  In other words, he accesses his memories in a linear, chronological fashion, but cannot access memories by keywords.  I would imagine (speculation alert) this goes back to what I mentioned before about delineating the boundaries of a memory.  His remembering is infinitely continuous, with no boundaries.  His experience would be what an AI that remembers everything goes through.

So how does this come together for the point of interest?  AI, if it remembers everything perfectly and without change, will struggle to recall information in any manner besides chronologically.  It cannot change past memories.  So when a new variant of a situation arises, it will likely continue to make decisions based on the original version of the situation, until the bulk of the data is about the new variant, and then it will adapt.  This may be next to impossible if it has too many memories of the old situation or if the situation is continually changing.  This is not a problem for organisms, which learn and relearn easily, can modify old memories to be better in line with the current world, and can delineate memories by incident and event, allowing for a "keyword" search rather than the laborious chronological organization of a continuously recorded memory.  I think you can imagine on your own the circumstances in which this type of AI would be far superior to an organic mind, and others where it would be far inferior.
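
If it helps, here is a minimal data-structure sketch of that organizational difference: an AI that records everything gets one chronological stream (easy to replay, hard to search), while delineated memories with tags and a salience weight allow something like keyword recall.  All of the class and field names here are hypothetical illustrations, not any real memory architecture.

# Sketch of the organisational difference discussed above.  Names are made up.
from dataclasses import dataclass, field

@dataclass
class ChronologicalLog:
    frames: list = field(default_factory=list)
    def record(self, frame):                # every moment, equal weight
        self.frames.append(frame)
    def recall(self, start, end):           # only time-based access
        return self.frames[start:end]

@dataclass
class EpisodicStore:
    episodes: dict = field(default_factory=dict)   # tag -> [(salience, content)]
    def record(self, tags, content, salience=1.0):
        for tag in tags:                    # one memory, many retrieval cues
            self.episodes.setdefault(tag, []).append((salience, content))
    def recall(self, tag):                  # strongest memories come back first
        return sorted(self.episodes.get(tag, []), reverse=True)

memory = EpisodicStore()
memory.record(["bear", "forest"], "chased by a bear", salience=9.0)
memory.record(["forest"], "quiet walk", salience=1.0)
print(memory.recall("forest"))   # the salient bear memory outranks the walk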

:new:

Example of Hormonal Regulation:

This is probably one of the better known pathways, and certainly one we focused on a lot in my undergrad research.  Cortisol is a hormone related to maintaining homeostasis (baseline) as well as regulating stress responses.  The example my PI always used was: you're out in the forest and there's a bear.  The first thing to respond in our pathway of interest is the hypothalamus.  This bit of brain tissue regulates part of the pituitary gland (at the base of the brain), which in turn releases hormones into the blood.  The hypothalamus sends out CRH (corticotropin releasing hormone) into a very small set of capillaries that go to the pituitary.  The pituitary senses this and then sends out ACTH (adrenocorticotropic hormone) into the blood stream.  As the name implies, this hormone acts on the adrenal glands (you can probably guess one thing the adrenals release based on the name).  The adrenal glands are what ultimately release cortisol.  Why did I mention all of these steps?  Regulation.  Biological pathways tend to have a lot of steps to allow for fine-tuning of responses and to make sure things only happen when you really want them to.

Okay, so cortisol is what we care about (corticosterone in rats and mice, if you want to look into the research at any point).  Cortisol is transported through the blood stream back to the brain.  In the brain, there are two protein receptors for cortisol: the mineralocorticoid receptors and the glucocorticoid receptors.  The mineralocorticoids have a much higher affinity for cortisol.  Basically, this means that cortisol, at low concentrations, is far more likely to bind to the mineralocorticoids.  So when cortisol is low, it's probably more important for regulating glucose than it is in memory.  Now, in the bear scenario, cortisol levels are up, so you're going to saturate the mineralocorticoids (all of those receptors will be binding cortisol), and there's enough cortisol left over free that it will start binding to the glucocorticoids.  Now, from what I remember, the mechanism gets a little sketchier here (or it was back in 2010-2013).  If I am remembering this all correctly, the rise in binding to the mineralocorticoids signals for the cells to start making a stronger memory.  But once you start really ramping up the glucocorticoids, this actually prevents memory formation.  The end result is that at the onset of stress, the first few minutes, you form very powerful (salient) memories, but then, if the stress is prolonged, the remainder of the stressful event will not be as well recorded, nor will memories after the stressor is removed be well recorded until you recover to baseline.  This creates a very strong memory within a short temporal boundary, but makes memories after this point weak.  Evolutionarily speaking, the system probably works this way so that you can devote your memory to the stressful event (and how you survived it), which may be very important later, while the memories afterwards probably just aren't as important, so you don't save any protein machinery for that later time point.

An interesting side note: PTSD is characterized by extremely strong and intrusive memories and difficulties forming new memories.
We cannot do causative studies, because it would be highly unethical, but there is a well established link between PTSD and cortisol, where the levels of cortisol are abnormal (I've typically seen studies where they find low cortisol, but the research is mixed with high and low results, and apparently you can even find altered levels of the receptors in the brains of PTSD patients). 
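
To make the dual-receptor logic above concrete, here is a toy model (my own sketch, not a published formula): memory salience rises as the high-affinity mineralocorticoid receptors fill up, then collapses once the low-affinity glucocorticoid receptors start ramping up.  Binding is modeled with simple one-site saturation curves, and every constant is an illustrative placeholder, not a measured value.

# Toy model of the dual-receptor idea: MR occupancy promotes memory encoding,
# heavy GR occupancy suppresses it, so salience peaks at moderately elevated
# cortisol and falls off when stress is prolonged or extreme.

def occupancy(cortisol, kd):
    """Fraction of receptors bound, simple one-site binding curve."""
    return cortisol / (cortisol + kd)

def memory_salience(cortisol, kd_mr=1.0, kd_gr=20.0, gr_penalty=1.5):
    mr = occupancy(cortisol, kd_mr)   # high affinity: fills up first
    gr = occupancy(cortisol, kd_gr)   # low affinity: only at high cortisol
    return max(0.0, mr - gr_penalty * gr)

for c in (0.5, 2, 5, 20, 80):         # arbitrary cortisol "levels"
    print(c, round(memory_salience(c), 2))
# salience rises as MR saturates, then collapses once GR binding ramps up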

 

Connectomes:


The idea behind a connectome is to map all of the neurons' positions in 3D space and which neuron is connected to which.  The paper that I read was this lovely little thing published in Cell: dx.doi.org/10.1016/j.cell.2012… Much of what I'm going to say here will be in relation to this paper and, as usual, my speculations to push into the sci-fi realm.  Let's start with a quick summary.

Bumbarger wanted to see if you can find differences at the connectome level between two related species of nematode (also referred to as worms from here on out).  Species one is our typical, run-of-the-mill research C. elegans, which eats bacteria.  Species two is a more recently discovered relative that can eat both bacteria and other worms (pacificus).  The group hypothesized that since these worms have different eating habits, maybe the neurons controlling their mouths (the pharyngeal neurons) are connected differently.  So they took specimens from both species (3 each), froze them, and cut just the pharynx into 3000 tiny sections.  Nematodes are microscopic to begin with, so imagine how tiny these things are.  Having done that, they viewed every single section on an electron microscope (EM).  Do the math: 3000 x 3 x 2 = 18,000 sections.  How many neurons did this account for?  20 neurons.  And again, this is from 6 dead worms.  Imagine the challenge of trying to map an entire unique connectome for one individual human brain (which, as things are now, must be dead) containing an average of 86 billion neurons (based on the brain soup method.  That is a fabulous name).  Hopefully the challenges with putting an individualized connectome into a machine are readily apparent.  Moving forward, based on the images they took from the worms, they created 3D reconstructions of the average connections in elegans and in pacificus.  They then simplified this into 2D maps that showed what was connected to what.  Please note: in these maps, all contributions of branching to the system are lost.  It represents neurons as a ball with connections going directly in and out.  No additional processing mentioned.  Cue lots of cool data analysis and some interesting conclusions and stuff.

What did we know from data before this paper?  Over the course of evolutionary time, you can change animal behavior WITHOUT changing the connections in a neural circuit.  How?  By changing the NTs the neurons are using.  There's also some potential that gap junctions (physical connections between neurons) are important.  Certainly other mechanisms may be at play.  What did this paper show?  That connection changes can also potentially lead to behavioral changes.  How?  We have NO idea.  In this case it looked like changing the information flow to focus more on a different set of muscles did it, but they haven't been able to functionally test it without ripping the live worms to tiny(er) pieces yet.

The big point is that while the connections are important, they are not sufficient to define an individual's behavior.  You need to know both the structural "what is connected to what and where" and the functional "what NTs are they using."  I don't think a connectome exists with all of this information yet, though it would certainly be possible to do it in worms, which only have 302 neurons total.  Even with all of that, there are probably some things we're still missing that are critical for defining a functional connectome.  Sure, there's the human connectome project going on right now, but the point of that is to get the average circuit for all humans.  The time and effort required to make an individual connectome makes it completely infeasible. :new: Update since I've started researching super-resolution: it is theoretically possible to image an entire brain's synapses in a high-throughput manner.  There are groups that are developing this right now.  The paper is out on how far they've optimized it (Sigal 2015 in Cell).  That said, we're talking about the mouse retina in this case.  An extremely tiny and thin piece of tissue.  I still stick to my guns about imaging the entirety of the human brain, with all synapses labeled for the right receptors, being beyond feasibility, even with a high-throughput technique and some computational magic.  And that's not even getting into reconstructing it in the machine.  I can't imagine how one does that short of getting into nanobots, which you're already iffy about without some serious limitations on them.
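
To illustrate the "structural plus functional" point in the simplest possible terms, here is a made-up toy record of a few connections: the same wiring diagram means very different things depending on which NT each connection uses.  Neuron names and NT assignments are invented for illustration, not taken from the paper.

# A useful connectome record needs both structure ("what connects to what")
# and function ("what neurotransmitter the connection uses"), since the same
# wiring can behave very differently with different NTs.
connectome = {
    # presynaptic -> list of (postsynaptic, neurotransmitter)
    "I1": [("M1", "glutamate")],          # excitatory drive onto a motor neuron
    "I2": [("M1", "GABA")],               # inhibitory input onto the same target
    "M1": [("muscle_pharynx", "ACh")],
}

def inputs_to(target, connectome):
    """All connections onto one neuron, with their functional label."""
    return [(pre, nt) for pre, posts in connectome.items()
            for post, nt in posts if post == target]

print(inputs_to("M1", connectome))   # same wiring diagram, opposite effects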


I'm going to write these sections at some point.

Energy and size limitations of the Brain:


Biological Circuits in Computers:


:new:

Concepts of Consciousness:


1-Orch-OR: Okay, let me be honest.  I cannot get through this review.  Every time I try, it makes me angry for so many reasons.  This theory is not mainstream.  The majority of its proponents are not trained in physics (which makes no sense considering it is meant to be based on quantum mechanics).  The theory is contrary to the majority of biology and psychology.  The theory is by and large untestable, and thus not scientific.  This theory is largely considered to be pseudoscience and quackery.  Finally, it makes no logical sense to me from a neuroscience background.  I do not understand why they were, at one point, Nobel prize candidates.  I certainly know why they did not win.  Thus it's very hard for me to be unbiased about this, but I can be straightforward on what I read.

The theory first requires that consciousness, not cognitive consciousness as we know it, but something else, exists as a physical force like gravity (this is untestable and sounds a hell of a lot like god to me and a lot of the scientific community).  Organisms thus evolved to make use of consciousness, rather than consciousness being a product of evolution, and this is what grants us free will (free will being a philosophical debate in and of itself).  The way we sense the force of non-cognitive consciousness is via microtubules (MTs) (no sensing mechanism is proposed, nor is any mechanism by which the cell interprets this MT signal).  They also say something bizarre about MTs being used to store long term memories that made no sense to me, and I hope I misinterpreted it.  That's the theory as far as I can tell: MTs magically sense an unknown force at the quantum level or something, and this is consciousness.

Now, I can totally see a place for quantum scariness in the brain.  That's the only reason this theory survives, I think, because the idea of quantum mechanics altering how the brain works does seem feasible and intriguing.  But no one believes Orch-OR itself.  If the theory is true, then here's the start and the end of the process: the force of consciousness gives off a signal that can be interpreted (though what it is, how it works, or whether it controls us [thus removing free will again] is unknown), and this has GOT to change how neurons fire.  Neuron firing is what, in the end, allows us to think and behave.  So they say MTs are sensing the quantum magics (how? we have no idea, stop questioning it).  This needs to in turn modulate the ion concentration in cells in order to alter neuronal firing.  I am aware of no mechanism by which you can sense quantum changes in MTs (indeed there are no known mechanisms), and then somehow convert this into a change in ion concentration.  I'm not even aware of microtubule-binding proteins that can also change ion concentrations.  This is what ion channels are for, and MAPs (MT-associated proteins) don't control ion channels, as far as I am aware.

In contrast, the research looking at the role of quantum mechanics in bird migration navigation identified the proteins involved in the process first, and that's how they came to the quantum model, which they could test.  In this case, we're starting with "Quantum consciousness!" and then forcing it onto biology, rather than starting with biology and trying to explain it.  That is not how you science.  Again, I can see the quantum world playing in this game, but you can't just try to shove it in.  We need to instead look at the biology, and let it inform us of where the quantum world makes its mark.  Besides my dislike of it, the rebuttals of Orch-OR are scathing.  They say the math doesn't work out, that it doesn't fit the biology, etc.  I would get the hell away from this and probably stay away.  This is "quantum mysticism" according to the majority.  There is no real evidence to back this theory up right now.  Maybe one day we'll all turn out to be wrong, but for now, don't be putting your money on it.  The theory will probably die with Penrose and Hameroff.

2-Okay, that out of the way, let's talk about the accepted science/theories.  What exactly is consciousness?  Is it self-awareness?  The P+H paper states that some people consider only humans to be "conscious."  I can only assume they are focusing on the "self-aware" aspect, which may or may not be what you're interested in.  That said, dolphins and primates, among other species I do not remember off hand, are self-aware by our definition.  So saying only humans are conscious, even by the self-aware definition, is silly, and I can only assume it is a religious argument.  Let's not look at those since we're talking science.  A more biological and psychological concept of consciousness is that of an organism that is alert, aware (though not necessarily self-aware) and able to behave.  This gets more at the idea behind consciousness versus unconsciousness.  Those who are unconscious are not alert, aware or able to behave on their own (think comas or simply being asleep).  After this we start getting into the network realm of consciousness, which is cool, but insane.  This is the idea that any network with a sufficient number of nodes and a sufficiently complex set of connections is conscious.  So something like the internet would be conscious (computers and/or servers all connected by links).  Society (where humans are the nodes and their relationships are the connections) would be conscious.  This, you said, is not what you're interested in, because it is not cognitive consciousness as we know it.  I'm certain there are more theories in existence, but these are the big categories I am aware of that scientists look into, and not just philosophers (last I checked, no one was studying Descartes' dualism in biology, physics, psychology or any of the computational offshoots).

Do you need consciousness to achieve complex ends?  Certainly not.  Our computers prove this.  Single-celled organisms show it quite well also.  They exhibit impressive responses to the stimuli they encounter.  So when do tasks actually require consciousness?  I'm not sure.  Planning probably requires consciousness.  Making decisions based on abstract criteria should.  I would imagine anything that requires memory, or a response in the absence of a discrete, immediate physical cue (think about our senses: touch, taste, hearing, vision and olfaction), requires consciousness.  What about self-aware consciousness?  Animals can certainly plan out their actions, or at least vertebrates can.  Any predator demonstrates this.  So you should ask yourself, does your AI need self-awareness, or merely consciousness?  Self-awareness is even more ethereal than consciousness, and the cutoff between tasks requiring self-awareness and those that don't is fuzzier.  It certainly seems like self-aware animals are the smartest, but that may just be a human-perspective bias.

3-Where does consciousness in vertebrates come from, though?  This is the kind of consciousness we are aware of and understand and wish to utilize for AI, rather than, say, arthropod consciousness.  It looks like we have a brain structure that is responsible for consciousness.  Your new keyword is claustrum.  The claustrum was originally suggested to be important for consciousness by Francis Crick (of the DNA structure duo Watson and Crick).  Why?  Blindness.

The vision pathway is extremely complex, and bugs in any part of the circuitry can cause blindness.  Retinal blindness is what people typically think of, where the problem is at the start of the pathway in the eye.  But there is also cortical blindness, where the processing stage is perturbed.  In these cases, it has been reported that the blind can respond to visual stimuli (e.g. you throw a ball at them and they catch it).  This is called blindsight.  They don't consciously see the stimuli, but they can respond.  This suggested to Crick and Christof Koch that there must be a brain region important for conscious awareness that was not receiving the visual stimuli in blindsight.  The body could still unconsciously respond, but the person could not, because their consciousness brain region never got the message.  Fascinating, but what could the brain region be?  They proposed the claustrum as the "conductor of consciousness."

The claustrum has a bunch of fascinating features, such as its integration of multiple sensory modalities, the fact that the claustrum in one hemisphere receives input from both hemispheres but then only outputs to one hemisphere, and its high density of opioid receptors (think about altered states of consciousness in drug use).  But until recently, this was just a fascinating idea.  In the last few years, some compelling evidence has been rising to support this theory.  The piece that I think is most interesting is the case of a woman with epilepsy.  In order to treat her, the doctors needed to identify the origin of the seizures and were using electrodes to do this.  They stuck an electrode into her claustrum (which has apparently not been done before, or at least not intentionally) to stimulate it, and voila: altered state of consciousness.  The woman is awake, but unaware, and has no memory of the event after the fact.  This happened every time they stimulated the area.  If she was actively behaving before and during stimulation, her activity did not suddenly halt, but rather it came to a slow and languid stop.  (See the case study: Koubeissi 2014 in Epilepsy and Behavior.)  Clearly more research needs to be done (I know of a collaboration at Vanderbilt that has a scheme to do just that; I'm only aware of the claustrum because I rotated through one of the labs and was working a bit on that project, though it was more of a back burner thing, sadly), but this is biology informing us of a fascinating new aspect of consciousness, rather than us forcing a theory onto biology.  That's why I like this theory better.  The evidence supports it in vivo.

So how does all of this get back to AI?  Let's say Orch-OR is real.  Now you need to get microtubules, a protein, into your machines.  That makes them low-grade cyborgs that have to maintain biological components.  Ick.  What if the claustrum is what creates conscious awareness?  Figure out how the organ works, make a machine component that acts the same way.  Problem solved.  No MTs needed.  Sure, you can't explain the mechanism yet, because science can't.  But saying that conscious AI have a module based on the workings of the human claustrum is an easy idea to propose, very believable, and much easier to maintain than the MT problem (you could even avoid the word claustrum to be on the safe side).  If SciFi already likes the idea of copying the connectome into robots to recreate people, then that scheme would create consciousness, no extra pieces needed.

Otherwise, you're stuck with getting into the non-cognitive types of consciousness that you're not interested in.  Unless you have specific questions, this is probably the limit of my knowledge on the subject.  If you would like a way more in-depth ("encyclopedic," as it has been called) rundown, the PDF of the book "A Cognitive Theory of Consciousness" can be found via Google for free, or bought used for $50 on Amazon.  It has over 3000 citations, so it seems that people think it is a good resource. ;p  A short review thereof here: www.theassc.org/files/assc/234…

Brain Regions Related to "Intelligence" and Consciousness:


Nanobot Capabilities:


**will require more research.  What I do know is that much of sci fi has them doing things that are beyond their capabilities.

So a sci-fi-interested deviant space-commander read my beta-reader profile and was intrigued by my neuro background, because he's trying to set up an AI and intelligence paradigm for his project :iconone-planet-at-a-time:.  His primary interest was in consciousness, which I will get to, but I decided to start from the more concrete and less philosophical angle of what neurons really do versus artificial networks, what I imagine one would want of AI, and the strengths and limitations of each.  It's more or less meant to be readable for anyone, since I try to make everything simple and clear and avoid using jargon I don't define in the work itself, but I'm pretty deep in this stuff.  Sometimes it's hard to know what other people already know or don't.  Either way, the information is by and large accurate or else stated as my speculation and thoughts.  Anything hyper-specific that I know the authors for, I cite, but most of this is just textbook information thus far, or things I learned second hand as examples and do not know where to find the original source.  That said, if you see any inaccuracies, certainly correct me.  I do oversimplify in places, but I'm not perfect, don't remember perfectly, and don't like to distribute misinformation.

I might add a few drawings later.

Otherwise, questions welcome.