Every now and then, I get confused by the conversion of spike trains to continuous signals. A lot of existing signal processing algorithms have assumptions that make them unsuitable for spike-like data. To circumvent this, we usually smooth or bin the spike trains in some way. This conversion is always arbitrary, and I still haven't wrapped my head around what it means when we do it.

In an effort to learn more about this, I asked myself (and some friends) three questions:

Given a continuous signal and a spike source encoding part of said input, how do you optimally reconstruct the continuous signal from the spikes?

Given the same as above, how might you optimally reconstruct the spikes from the signal?

Given a single-channel analog data stream, how might you best encode it in a neural spike train?

While it's possible to write down mathematical expressions for all of the above, we don't really have algorithms to optimize or solve the raw expressions.

We can do a couple of things instead:

Given a continuous signal and a spike source encoding part of said input, how do you optimally reconstruct the continuous signal from the spikes?

- Reverse correlation can give you a reconstruction of a continuous stimulus or response based on spikes. Similar but more advanced algorithms also exist for this conversion. Of the three questions, I believe this one is the closest to solved.
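As a minimal sketch of the idea (with a made-up toy encoder; the threshold rule, lag window, and signal statistics here are all assumptions, not part of any real experiment): convolve the spike train with a decoding kernel obtained by reverse correlation, i.e. the spike-triggered average of the stimulus.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: a smooth stimulus drives a crude threshold encoder, and we
# reconstruct the stimulus by convolving the resulting spike train with
# a kernel learned by reverse correlation (spike-triggered averaging).
n = 5000
stimulus = np.convolve(rng.standard_normal(n), np.ones(20) / 20, mode="same")

# Hypothetical encoder: spike whenever the stimulus exceeds a threshold.
spikes = (stimulus > 0.1).astype(float)

# Reverse correlation: average the stimulus in a window around each spike.
lag = 25
kernel = np.zeros(2 * lag + 1)
spike_times = np.nonzero(spikes)[0]
spike_times = spike_times[(spike_times >= lag) & (spike_times < n - lag)]
for t in spike_times:
    kernel += stimulus[t - lag : t + lag + 1]
kernel /= len(spike_times)

# Decode: convolve spikes with the kernel to estimate the stimulus.
reconstruction = np.convolve(spikes, kernel, mode="same")

# How well did we do? (The symmetric kernel keeps the alignment honest.)
r = np.corrcoef(stimulus, reconstruction)[0, 1]
print(round(r, 2))
```

The spike-triggered average is only the optimal linear decoder under restrictive assumptions (e.g. white stimulus statistics); the more advanced algorithms mentioned above relax those.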

Given the same as above, how might you optimally reconstruct the spikes from the signal?

- Rather than model spikes directly, model the conditional intensity function (the variable rate in a Poisson process). This has the benefit of capturing the variability of the spiking response. Individual spike trains can then be drawn from the conditional intensity function. If your spike trains are highly reliable, the conditional intensity will become obviously spike-like, effectively reconstructing spike times. If your spike trains are unreliable, the conditional intensity will look more like a smoothly fluctuating rate. There is a lot of flexibility in how you fit this model, and algorithms for point-process modeling are a subject of current research.
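A minimal sketch of this round trip, under assumed parameters (the sinusoidal rate, 1 ms bins, and Gaussian smoothing width are all choices I made for illustration): estimate the conditional intensity by smoothing observed spikes, then draw fresh spike trains from it as an inhomogeneous Poisson process via per-bin Bernoulli sampling.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed ground truth: a slowly varying firing rate in spikes/s.
dt = 0.001                                    # 1 ms bins
t = np.arange(0, 2.0, dt)
true_rate = 20 + 15 * np.sin(2 * np.pi * t)   # always positive

# "Observed" spikes drawn from the true rate (Bernoulli per small bin).
observed = rng.random(t.size) < true_rate * dt

# Estimate the conditional intensity by Gaussian smoothing of the spikes.
sigma = 0.05                                  # 50 ms smoothing width
k = np.arange(-4 * sigma, 4 * sigma + dt, dt)
kernel = np.exp(-0.5 * (k / sigma) ** 2)
kernel /= kernel.sum() * dt                   # normalize to spikes/s
estimated_rate = np.convolve(observed.astype(float), kernel, mode="same")

# Draw a new spike train from the estimated intensity.
sampled = rng.random(t.size) < estimated_rate * dt

print(observed.sum(), sampled.sum())
```

With a smoothing width this wide, the estimate stays rate-like; shrink `sigma` toward zero and the estimated intensity collapses onto the observed spike times, which is exactly the reliable-spiking limit described above.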

Given a single-channel analog data stream, how might you best encode it in a neural spike train?

- Perform a decomposition with a sparseness constraint in both space and time. This will give you a basis whose functions are sparsely and briefly activated. Conceivably, you could convert a time course in the new basis to spikes with less error than, say, assuming rate coding. This line of research sits closer to machine learning; Olshausen, in particular, has done work in this area.
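A minimal sketch of the sparse-decomposition step (the random dictionary, the 3-atom signal, and the ISTA solver here are all illustrative assumptions; real work in this area learns the dictionary from data): solve for a code that reconstructs the signal while an L1 penalty keeps most coefficients exactly zero, so each surviving coefficient marks a brief, localized event — a natural precursor to a spike.

```python
import numpy as np

rng = np.random.default_rng(2)

# Fixed random dictionary with unit-norm atoms (a stand-in for a
# learned sparse basis).
n, m = 64, 128                   # signal length, number of atoms
D = rng.standard_normal((n, m))
D /= np.linalg.norm(D, axis=0)

# Build a signal from 3 atoms, so a genuinely sparse code exists.
true_code = np.zeros(m)
true_code[[5, 40, 90]] = [1.5, -2.0, 1.0]
x = D @ true_code

# ISTA: gradient step on ||x - D a||^2, then soft-threshold (L1 prox).
lam = 0.05
step = 1.0 / np.linalg.norm(D, 2) ** 2   # 1 / Lipschitz constant
a = np.zeros(m)
for _ in range(500):
    grad = D.T @ (D @ a - x)
    a = a - step * grad
    a = np.sign(a) * np.maximum(np.abs(a) - step * lam, 0.0)

active = np.nonzero(np.abs(a) > 0.1)[0]
print(active, round(float(np.linalg.norm(x - D @ a)), 3))
```

The few active coefficients, each localized in the basis, are what you would then map to spike times; rate coding, by contrast, would spread the same information over many spikes.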