Cell Assemblies and Concept Formation (10/19/2016)

Topic: How are concepts represented by the brain?

Computers can represent concepts as programs in a chosen programming language. But what about brains?

Questions:

1. How is a concept represented?
2. How does a directed, weighted, dynamic graph represent/form a memory/concept?
3. What is a memory?
• interrelated
• reproducible (with stimulus or without stimulus)
• distinguishable
• corresponds to a sampleable distribution
• hierarchical

Concrete requirements for memory

1. Similar response for similar stimuli
2. Distinguishable
3. Hierarchical (concepts of concepts)

Simple Connectome Model

Neurons are modeled as a directed, weighted and dynamic graph:

$W_{ij}$ = strength of edge (i, j)
$X_i$ = activation level of a neuron i

Suppose W is fixed and only X changes with time t. Then,

$W_{ij}$ = edge weight
$X_i (t) =$ activation level i at time t
$X_j (t+1) = \sigma (\sum\limits_{i} W_{ij} X_i (t) )$
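As a sketch, the update rule above can be simulated on a toy graph. Here $\sigma$ is taken to be a hard threshold (fire iff the weighted input reaches $\theta$); the particular weights and threshold are illustrative assumptions, not values from the notes.

```python
# One step of X_j(t+1) = sigma(sum_i W_ij * X_i(t)), with sigma a
# hard threshold: neuron j fires iff its weighted input >= theta.
# W and theta below are illustrative choices.

def step(W, x, theta=1.0):
    """W[i][j] = strength of edge (i, j); x[i] = activation (0/1) of neuron i."""
    n = len(x)
    nxt = []
    for j in range(n):
        total = sum(W[i][j] * x[i] for i in range(n))
        nxt.append(1 if total >= theta else 0)
    return nxt

W = [[0.0, 0.7, 0.5],
     [0.6, 0.0, 0.5],
     [0.4, 0.6, 0.0]]
x0 = [1, 1, 0]      # external stimulus activates neurons 0 and 1
x1 = step(W, x0)    # only neuron 2 receives total input >= 1, so x1 = [0, 0, 1]
```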

If we take $\sigma$ to be linear (say, the identity):

$X(t+1) = W^T X(t) = (W^T)^2 X(t-1) = \cdots = (W^T)^{t+1} X(0)$

At equilibrium, $X$ satisfies $W^T X = \lambda X$; that is, $X$ converges (in direction) to the top eigenvector of $W^T$.

This model (with a linear $\sigma$) is not satisfactory because no matter which stimulus we give, it converges to the same $X$: the principal eigenvector of $W^T$!
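This failure is easy to see numerically: iterating $X \mapsto W^T X$ (with normalization) is just power iteration. The matrix below is an illustrative positive matrix, so the iteration converges to its Perron eigenvector from any nonnegative stimulus:

```python
# Power iteration sketch: with linear sigma, X(t) = (W^T)^t X(0) aligns
# with the top eigenvector of W^T regardless of the stimulus X(0).
# W is an arbitrary positive matrix chosen for illustration.

def wt_times(W, x):
    """Compute W^T x, i.e. (W^T x)_j = sum_i W_ij * x_i."""
    n = len(x)
    return [sum(W[i][j] * x[i] for i in range(n)) for j in range(n)]

def normalize(x):
    s = sum(v * v for v in x) ** 0.5
    return [v / s for v in x]

W = [[0.2, 0.6, 0.3],
     [0.5, 0.1, 0.4],
     [0.3, 0.7, 0.2]]

def iterate(x, steps=60):
    for _ in range(steps):
        x = normalize(wt_times(W, x))
    return x

a = iterate([1.0, 0.0, 0.0])   # stimulus at neuron 0
b = iterate([0.0, 0.0, 1.0])   # stimulus at neuron 2
# a and b agree: different stimuli, same fixed point
```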

There are 2 detailed models in the literature:

1. Neuroidal Model (Valiant)
2. Cell Assemblies (Hebb)

Neuroidal Model

Concept is stored as an “item” (a subset of neurons).

Each concept is memorized as a subset of neurons of size r, and if k out of r neurons fire, that concept is recalled.

Using this model, we can memorize up to $\binom{N}{r}$ concepts.

Since each concept should be distinguishable, overlaps between subsets should be small!
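To get a feel for the numbers: two independent random $r$-subsets of $N$ neurons overlap in about $r^2/N$ neurons in expectation, so for $r \ll N$ random subsets are nearly disjoint while the capacity $\binom{N}{r}$ is astronomical. The values of $N$ and $r$ below are illustrative assumptions:

```python
# Capacity vs. overlap sketch for the neuroidal model.
# N, r are illustrative; the notes do not fix particular values.
from math import comb

N, r = 10**6, 1000
capacity = comb(N, r)          # number of possible r-subsets: astronomically large
expected_overlap = r * r / N   # expected overlap of two random r-subsets
# Here expected_overlap = 1.0: typical subsets share about one neuron.
```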

Assumption:

1. Base graph is random (Erdős–Rényi $G_{n,p}$, or a directed analogue $D_{n,p}$) and its support is fixed (only the weights change).
2. Output is random

How do we represent hierarchical concepts in neuroidal model?

If $C_3$ is a concept that is composed of $C_1$ and $C_2$, we want $C_3$ to fire when $C_1$ and $C_2$ both fire.

1. Use union:
• $C_3$ is simply a union of $C_1$ and $C_2$
• Problem: concept size doubles for every union (not stable)!
2. Create another subset of size r:
• We want to set up $C_3$ so that if k neurons in $C_1$ fire and k neurons in $C_2$ fire, then k neurons in $C_3$ also fire.
• Pick (“recruit”) neurons that are connected to both $C_1$ and $C_2$.
• P(neuron $\ell$ fires when $C_1$ fires) = P(at least $k$ heads in $r$ tosses of a $p$-biased coin) $= q(r, p, k)$. Since this must happen for both $C_1$ and $C_2$, each neuron is recruited with probability $q^2$. To recruit about $r$ of the $N$ candidate neurons in expectation, we want $q^2 = \frac{r}{N}$.
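The quantity $q(r, p, k)$ is just a binomial upper tail, so we can compute it directly and find the smallest firing threshold $k$ at which $N \cdot q^2 \leq r$. The values of $r$, $p$, and $N$ below are illustrative assumptions:

```python
# Recruitment-probability sketch for the neuroidal model.
# q(r, p, k) = P(at least k heads in r tosses of a p-biased coin).
# r, p, N are illustrative choices, not values from the notes.
from math import comb

def q(r, p, k):
    return sum(comb(r, i) * p**i * (1 - p)**(r - i) for i in range(k, r + 1))

r, p, N = 50, 0.1, 10**6

# Expected number of recruited neurons is N * q^2; we want it to be about r.
for k in range(1, r + 1):
    if q(r, p, k) ** 2 * N <= r:
        break
# k is now the smallest threshold with N * q(r, p, k)^2 <= r
```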

Cell Assemblies

A concept is stored as an “assembly” of highly interconnected neurons. Because of such high interconnectivity, some assembly member neurons can activate the entire assembly.

Hypothesis:

1. Rules (neural syntax)
2. “Synapsembles”: weights are dynamically changing all the time

Model:

Suppose an external stimulus X(0) is given. Then,

$X(t+1) \propto X(0) + \alpha W(t)^T X(t)$

How should we change weights W? We should strengthen the connection between two neurons if both keep firing:

$W(t+1) \propto (I + \beta X(t) X(t)^T ) W(t)$
$W(t+1)_{i,j} \propto W(t)_{i,j} + \beta X(t)_i (W(t)^T X(t))_j$

Also, we normalize the pre-synaptic weights at each neuron by keeping the sum of all incoming weights at 1.

Note that the change in $W(t)_{i,j}$ depends on both $X(t)_i$ and $X(t)_j$: the connection is strengthened when both neurons fire.
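The entrywise update and the normalization above can be sketched in a few lines. The graph size, $\beta$, and stimulus below are illustrative assumptions:

```python
# One Hebbian step W(t+1) ∝ (I + beta * X X^T) W(t), entrywise
# W'(i,j) = W(i,j) + beta * X_i * (W^T X)_j, followed by normalizing
# the incoming weights of each neuron to sum to 1.
# The weights, beta, and stimulus are illustrative choices.

def hebbian_step(W, x, beta=0.5):
    n = len(x)
    wtx = [sum(W[k][j] * x[k] for k in range(n)) for j in range(n)]  # (W^T x)_j
    Wn = [[W[i][j] + beta * x[i] * wtx[j] for j in range(n)] for i in range(n)]
    # Normalize so that incoming weights at each neuron j sum to 1.
    for j in range(n):
        col = sum(Wn[i][j] for i in range(n))
        for i in range(n):
            Wn[i][j] /= col
    return Wn

W = [[0.0, 0.5, 0.5],
     [0.5, 0.0, 0.5],
     [0.5, 0.5, 0.0]]
x = [1, 1, 0]              # neurons 0 and 1 fire together; neuron 2 is silent
W1 = hebbian_step(W, x)
# The edge from co-firing neuron 0 into neuron 1 now outweighs the
# edge from silent neuron 2 into neuron 1 (fire together, wire together).
```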

Georgia Tech