Connectomics and Nonrandom Features (10/26/2016)

Connectome

  1. Structure:
    • Directed, weighted, dynamic graph
    • Simple model: a random directed graph D_{n,p}
  2. Dynamics:
    • Edge weights change over time
    • New connections form and existing connections break

Simple random model

A random directed graph D_{n,p}, where n = # of neurons and p = the probability that a connection exists between any two neurons (each directed pair independently).

[Figure: connectome.png]
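
As a quick illustration (a minimal sketch; the function name, seed, and parameter values are my own choices, not from the notes), D_{n,p} can be sampled by flipping an independent p-coin for every ordered pair of neurons:

```python
import numpy as np

def sample_directed_gnp(n, p, seed=None):
    """Sample an adjacency matrix of D_{n,p}: every ordered pair (i, j), i != j,
    is an edge independently with probability p."""
    rng = np.random.default_rng(seed)
    A = (rng.random((n, n)) < p).astype(int)
    np.fill_diagonal(A, 0)  # no self-loops
    return A

A = sample_directed_gnp(n=1000, p=0.01, seed=0)
print(A.sum(), "edges; expected about", round(1000 * 999 * 0.01))
```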

Question: what can we predict using this model?

  1. What is the probability of having a path of length 2 between a and b without a direct connection?
    -> \Pr (having a length-2 path but no direct connection) = (1-p)(1-(p(1-p) + (1-p)p + (1-p)^2)^{n-2}) = (1-p)(1-(1-p^2)^{n-2})
  2. What is the expected number of bidirectional connections (2-cycles)?
    -> \mathbf{E} (# of bidirectional connections) = {{n}\choose{2}}p^2
  3. What is the expected number of paths of length k?
    -> \mathbf{E} (# of paths of length k) = {{n}\choose{k+1}}(k+1)!\,p^k \approx n^{k+1}p^k = n(np)^k (for k \ll n)
  4. When does the graph become connected?
    -> \mathbf{E} (degree(i)) = p(n-1). When p > \log(n)/n, the graph quickly becomes connected.
    Every monotone property (e.g. connectivity, perfect matching, Hamiltonian cycle) has such a sharp transition point. (The predictions above are checked numerically in the simulation sketch after this list.)
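
The closed-form answers above can be sanity-checked by Monte Carlo simulation. The sketch below (the helper names and the choices n = 200, p = 0.05 are mine, not from the notes) estimates the quantities in questions 1 and 2 and compares them with the formulas:

```python
import numpy as np

def two_path_no_direct(A, a=0, b=1):
    """True if there is a length-2 path a -> c -> b but no direct edge a -> b."""
    return A[a, b] == 0 and np.any(A[a, :] & A[:, b])

def count_bidirectional_pairs(A):
    """Number of unordered pairs {i, j} with both i -> j and j -> i."""
    return int(np.sum(A & A.T) // 2)

n, p, trials = 200, 0.05, 2000
rng = np.random.default_rng(1)

hits, bidir = 0, 0.0
for _ in range(trials):
    A = (rng.random((n, n)) < p).astype(int)  # sample D_{n,p}
    np.fill_diagonal(A, 0)
    hits += two_path_no_direct(A)
    bidir += count_bidirectional_pairs(A)

print("Pr(2-path, no direct edge):", hits / trials,
      "vs", (1 - p) * (1 - (1 - p**2) ** (n - 2)))
print("E[# bidirectional pairs]:", bidir / trials,
      "vs", n * (n - 1) / 2 * p**2)
```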

Nonrandom Features

How different is a human brain from a random network?

A study of the rat visual cortex (Song et al., 2005) revealed several nonrandom features in synaptic connectivity: bidirectional connections and certain three-neuron connectivity patterns (“motifs”) are more common than would be expected in a random network.

[Figure: nonrandom.png]
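
One simple way to quantify such an overrepresentation (a sketch under my own naming and normalization, not the statistic used by Song et al.) is to compare the number of reciprocal pairs in an observed connectivity matrix with the D_{n,p} expectation at the same overall connection probability:

```python
import numpy as np

def reciprocity_vs_random(A):
    """Ratio of observed bidirectional pairs to the D_{n,p} expectation,
    with p matched to the observed connection probability."""
    A = (np.asarray(A) != 0).astype(int)
    np.fill_diagonal(A, 0)
    n = A.shape[0]
    p_hat = A.sum() / (n * (n - 1))        # empirical connection probability
    observed = np.sum(A & A.T) // 2        # bidirectional pairs
    expected = n * (n - 1) / 2 * p_hat**2  # C(n,2) * p^2
    return observed / expected

# Toy usage: a hypothetical 4-neuron network with two reciprocal pairs.
A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 0],
              [0, 0, 0, 1],
              [0, 0, 1, 0]])
print(reciprocity_vs_random(A))  # > 1 indicates more reciprocity than the random model
```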

The study also suggests that the strong connections are more clustered than the weak ones, which can be viewed as “a skeleton of stronger connections in a sea of weaker ones”.

Connectomes also change dynamically:

  1. Connections that are used more are strengthened
  2. Connections that are used less weaken over time
  3. If there is a connection A -> B, then a new connection B -> A tends to form (reciprocity)
  4. If there is a connection A -> B -> C, then a new connection A -> C tends to form (transitivity)
  5. If two neurons keep firing together, the connection between them is strengthened over time (a simulation sketch of this update follows the list):
    X(t+1) = \frac{X(0) + \alpha W(t)^T X(t)}{\lVert X(0) + \alpha W(t)^T X(t) \rVert}
    W(t+1) = \frac{1}{1+\beta} \left( I + \beta X(t) X(t)^T \right) W(t)
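
A minimal sketch of iterating this coupled firing/weight update (the network size, α, β, number of steps, and initial values below are arbitrary choices for illustration, not from the notes):

```python
import numpy as np

def firing_and_weight_update(X0, W0, alpha=0.1, beta=0.05, steps=50):
    """Iterate the coupled update from the notes:
    X(t+1) = (X(0) + alpha * W(t)^T X(t)) / ||X(0) + alpha * W(t)^T X(t)||
    W(t+1) = (1 / (1 + beta)) * (I + beta * X(t) X(t)^T) * W(t)
    Neurons that keep firing together end up more strongly connected."""
    X, W = X0.copy(), W0.copy()
    I = np.eye(len(X0))
    for _ in range(steps):
        numer = X0 + alpha * W.T @ X              # uses the old W(t), X(t)
        X_next = numer / np.linalg.norm(numer)
        W = (I + beta * np.outer(X, X)) @ W / (1 + beta)  # uses the old X(t)
        X = X_next
    return X, W

# Toy usage with a hypothetical 5-neuron network.
rng = np.random.default_rng(0)
n = 5
X0 = rng.random(n)
X0 /= np.linalg.norm(X0)
W0 = rng.random((n, n))
X, W = firing_and_weight_update(X0, W0)
print(np.round(W, 3))
```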

Author: Suk Hwan Hong

Georgia Tech
