20160719

Wilson-Cowan neural fields in WebGL

Update: Dropbox no longer serves live pages from their public folders, so hosting has moved to Github. Basic WebGL examples have moved to the "webgpgpu" repository, including the psychedelic example pictured to the left. Additional neural field models are hosted in the "neuralfield" repository.

WebGL offers a simple way to write portable general-purpose GPU (GPGPU) code with visualization. The Github repository WebGPGPU walks through a series of examples culminating in a port of the Wilson-Cowan neural field equations to WebGL.

The Wilson-Cowan equations are a mean-field approximation of neural activity, derived under the assumptions that spiking is asynchronous and that observations are averaged over a large number of uncorrelated neurons. Wilson-Cowan neural fields have been used to describe visual hallucinations, flicker phosphenes, and many other wave phenomena in the brain.
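
For reference, and glossing over details (such as refractory terms) that vary between papers, the firing-rate form of the equations for coupled excitatory and inhibitory populations looks roughly like

    tau_e dE/dt = -E + f( W_ee*E - W_ie*I + H_e )
    tau_i dI/dt = -I + f( W_ei*E - W_ii*I + H_i )

where f is a saturating nonlinearity (e.g. a sigmoid) that keeps the firing rates E and I between 0 and 1. In the neural field setting, E and I are functions of space and the W terms act as convolutions with distance-dependent coupling kernels; the exact parameterization used in the demo may differ.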


The Wilson-Cowan simulation demo is rudimentary at the moment, but it contains a couple of presets, and rich and diverse dynamics are possible by varying the parameters. For entertainment, there is also an example of a full-screen pattern-forming system mapped to (approximate) retinal coordinates using the log-polar transform.

Other cool WebGL implementations of pattern-forming systems include the Gray-Scott reaction-diffusion equations by pnmeila and Felix Woitzel's reaction-diffusion and fluid dynamics simulations. Robin Houston's reaction-diffusion implementation is also a good reference example for learning WebGL coding, and Robert Muth's smoothlife implementation is mesmerizing. The hope is that the examples in the WebGPGPU project provide a walk-through for building similar simulations in WebGL, something that is not immediately obvious from existing WebGL tutorials.

Technical notes


I've avoided using extensions like floating point textures that aren't universally supported. Hopefully the examples in WebGPGPU will run on most systems, but compatibility issues likely remain.

Most of the examples store floating point values in 8-bit integers. This works for the firing-rate version of the Wilson-Cowan equations because the firing rate is bounded between 0 and 1, so fixed precision is adequate. There are some caveats here. For example, how floating point operations are implemented, and how floats are rounded when being stored in 8-bit values, are implementation dependent. These simulations are quite sensitive to parameters and rounding errors, so different implementations can lead to different emergent dynamics. A partial workaround is to manually control how floats are quantized into 8-bit values. Floats can also be stored in 16 or 32 bit fixed-point precision, at the cost of performance. My understanding is that the default precision for fast floating point operations on the GPU is fairly low, such that 16 bits can capture all of the available precision in the [0,1] range.
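
As a rough illustration of what 8-bit storage does to a state variable, here is the quantization step emulated in NumPy (the GPU's own rounding behavior may differ):

import numpy as np

def quantize8(x):
    # Simulate storing a [0,1] float in an 8-bit normalized texture channel
    return np.round(np.clip(x, 0.0, 1.0) * 255.0) / 255.0

x = np.random.rand(5)
print(np.max(np.abs(x - quantize8(x))))  # worst-case rounding error, about 0.002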

The quantization of state variables can lead to numerical errors if the integration step size is too small relative to the timescales of the system. In short, if states are changing too slowly, their increments become smaller than 1/256, and numerical accuracy degrades because state changes are rounded out. Moving to 16-bit precision lessens this issue, but at the cost of an approximately 2-fold slowdown, and so is not used except in one example.
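
Reusing the quantize8 helper from the snippet above, an update much smaller than one quantization step is simply lost when the state is written back to the texture:

x, dx = 0.5, 1e-4                         # increment well below one 8-bit step (~1/256)
print(quantize8(x + dx) == quantize8(x))  # True: the change is rounded away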

Working with the WebGL library feels quite clumsy, because in essence the API exposes the GPU as a state machine. WebGPGPU includes some libraries that hide a lot of the boilerplate and expose a more functional interface for GPU shaders (kernels). For simulations on a grid, there is additional boilerplate to gain access to single pixels: one must tile the viewport with polygons, provide a default fragment shader, and set the viewport and canvas sizes to match the on-screen size. WebGPGPU hides this difficulty, but also walks through the process in the early examples to make clear what is really happening.

Passing parameters to shaders involves a fair amount of overhead in Javascript, which becomes prohibitive for shaders with a large number of parameters. However, compiling shaders has relatively little Javascript overhead. In practice it is faster to include scalar parameters as #defines rather than pass them as uniforms. WebGPGPU contains some routines to phrase shader compilation as a sort of 'partial evaluation'.
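
For illustration only (WebGPGPU itself is Javascript, and the names below are invented), the idea amounts to a string-level specialization that splices scalar parameters into the shader source as #defines before compiling:

def bake_parameters(shader_source, **params):
    # Prepend one #define per scalar parameter, so the values are baked in
    # at compile time instead of being uploaded as uniforms every frame.
    defines = '\n'.join('#define %s (%s)' % (name, float(value))
                        for name, value in sorted(params.items()))
    return defines + '\n' + shader_source

# A toy (hypothetical) fragment shader body to specialize:
shader_body = '''
void main() {
    gl_FragColor = vec4(DT, TAU_E / 100.0, TAU_I / 100.0, 1.0);
}
'''
print(bake_parameters(shader_body, DT=0.1, TAU_E=10.0, TAU_I=20.0))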

This is a work in progress.

20160708

Numerically stable Viterbi algorithm in Python for hidden Markov model state inference

The Python demonstration of the Viterbi algorithm on Wikipedia is numerically unstable for moderate to large problem sizes. This implementation is phrased in terms of log probabilities, and so has better numerical properties. Please reuse, creative commons.

import numpy as np

def hmm_viterbi(Y,logP,logA,logB):
    '''
    See https://en.wikipedia.org/wiki/Viterbi_algorithm

    Parameters
    ----------
    Y : 1D array
        Observations (integer states)
    logP : array shape = (nStates ,)
        1D array of priors for initial state
        given in log probability
    logA : array (nStates,nStates)
        State transition matrix given in log probability
    logB : array (nStates, nObservations)
        Conditional probability matrix:
        log probability of each observation given each state
    '''
    K = len(logP)         # Number of states
    T = len(Y)            # Number of observations
    N = np.shape(logB)[1] # Number of possible observation symbols
    Y = np.int32(Y)

    assert np.shape(logA)==(K,K)
    assert np.shape(logB)==(K,N)

    # The path score for each initial state is the log prior for that
    # state plus the log probability of the first observation given
    # that state (addition in log space corresponds to multiplication
    # of probabilities).
    logT1 = np.zeros((K,T),'float') # Store probability of most likely path
    logT1[:,0] = logP + logB[:,Y[0]]

    # Store back-pointers: the most likely previous state for each state
    T2 = np.zeros((K,T),'int')

    # iterate over all observations from left to right
    for i in range(1,T):
        # iterate over states 1..K (or 0..K-1 with zero-indexing)
        for s in range(K):
            # For each possible previous state, the score of reaching state
            # s at step i is that state's path score, plus the log transition
            # probability into s, plus the log likelihood of the current
            # observation given s
            log_filtered_likelihood = logT1[:,i-1] + logA[:,s] + logB[s,Y[i]]
            best = np.argmax(log_filtered_likelihood)
            logT1[s,i] = log_filtered_likelihood[best]
            # Remember which previous state was the most likely
            T2[s,i] = best

    # At the end, choose the most likely final state, then iterate
    # backwards over the data, following the back-pointers to recover
    # the most likely state sequence
    X     = np.zeros((T,) ,'int'  ) # Store our inferred hidden states
    X[-1] = np.argmax(logT1[:,-1])
    for i in reversed(range(1,T)):
        X[i-1] = T2[X[i],i]
    return X
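
A quick sanity check on a made-up two-state problem (all numbers below are arbitrary, purely for illustration):

import numpy as np

logP = np.log([0.5, 0.5])            # flat log prior over the two hidden states
logA = np.log([[0.8, 0.2],           # log transition probabilities
               [0.2, 0.8]])
logB = np.log([[0.9, 0.1],           # log P(observation | state)
               [0.3, 0.7]])
Y = [0, 0, 1, 1, 1, 0]               # observed symbols
print(hmm_viterbi(Y, logP, logA, logB))  # most likely hidden state sequence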