http://www.mitpressjournals.org/doi/abs/10.1162/NECO_a_00408
Ever since I started my PhD program, I have wanted to do a weekly review post on a science paper to keep myself up to date and focused and to refine my scientific knowledge, reading, and writing skills.
Well, that never happened.
But this time I did write something up. I hope I can find enough motivation to do this again sometime. My review/summary of the paper is as follows.
=====
Long-term effects on synapses are currently a hot topic in the neuroscience community, as they are associated with learning and memory. Two of the major forms of long-term plasticity, known as Long Term Potentiation (LTP) and Long Term Depression (LTD), are studied extensively to understand their underlying mechanisms. In computational neuroscience, we try to use models to further our understanding of LTP and LTD; this paper is just one example of how one may go about this.
The premise of this paper seems rather simple: conduct experiments that induce LTD, then use System Identification to model it. Nevertheless, the work required for such a paper is extensive. This model in particular focuses on mGluR-dependent LTD, in contrast to NMDA-dependent LTD (as are most other models currently available). For the experimental portion of this paper they use a chemical known as dihydroxyphenylglycine (DHPG) to induce mGluR-dependent LTD... I'll assume the references support that this is indeed mGluR-dependent.
For modeling they use a System Identification approach, a top-down method in which they attempt to extract the dynamics of LTD using transfer functions. They used four different datasets with differing DHPG concentrations, durations of DHPG application, and sampling rates; the input was defined as the DHPG concentration, while the output was the percent change in the fEPSP (field excitatory postsynaptic potential) slope resulting from the application.
The transfer function used is of particular interest to me, as it is more abstract and thus harder to grasp than the experimental portion. From the paper, the focus of the transfer function seems to be the polynomial expansion using a "backward shift operator" z^-1. The basic definition of a backward shift operator is:
z^-1 * Y(t) = Y(t-1)
Essentially, then, applying z^-1 replaces the current value of a signal with the previous one, so a polynomial in z^-1 expresses the current output in terms of past inputs and outputs. The polynomial coefficients are then estimated over a range of orders (z^-n represents nth order) and time delays (1-10). The best model was determined through three statistical criteria (R^2, Akaike, Young), each of which has its own criterion and reason for use.
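To make the estimation procedure concrete for myself, here is a rough sketch in Python (my own toy code, not the authors') of how a difference-equation model in the backward shift operator could be fit by least squares, with the Akaike criterion used to compare orders and delays. The data, coefficient values, and order/delay ranges below are all made up for illustration.

import numpy as np

def fit_arx(u, y, na, nb, delay):
    # Least-squares fit of:
    #   y(t) = -a1*y(t-1) - ... - a_na*y(t-na)
    #          + b1*u(t-delay) + ... + b_nb*u(t-delay-nb+1) + e(t)
    # u: input sequence (e.g. DHPG concentration per sample)
    # y: output sequence (e.g. percent change in fEPSP slope)
    start = max(na, delay + nb - 1)
    rows = []
    for t in range(start, len(y)):
        past_y = [-y[t - i] for i in range(1, na + 1)]
        past_u = [u[t - delay - j] for j in range(nb)]
        rows.append(past_y + past_u)
    Phi = np.array(rows)
    theta, *_ = np.linalg.lstsq(Phi, y[start:], rcond=None)
    resid = y[start:] - Phi @ theta
    return theta, resid

def aic(resid, n_params):
    # Akaike information criterion: rewards fit, penalizes extra parameters
    n = len(resid)
    return n * np.log(np.sum(resid ** 2) / n) + 2 * n_params

# Toy data: a 15 uM "pulse" of DHPG and an invented, noisy, integrator-like response
rng = np.random.default_rng(0)
u = np.concatenate([np.zeros(10), 15 * np.ones(5), np.zeros(45)])
y = np.cumsum(-0.2 * u) + rng.normal(0, 0.5, u.size)

# Sweep orders and delays (here orders 1-3, delays 1-10) and keep the lowest AIC
best = min(
    ((na, nb, d, aic(fit_arx(u, y, na, nb, d)[1], na + nb))
     for na in (1, 2, 3) for nb in (1, 2) for d in range(1, 11)),
    key=lambda item: item[-1],
)
print("best (na, nb, delay, AIC):", best)

The paper also uses R^2 and the Young criterion for model selection; I have only sketched the Akaike term here.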
The results of this paper are simple yet significant. At the lower sampling rate (0.0033 Hz), applying DHPG at 15 uM for 5 minutes gives about a 20% reduction in slope; application for 15 minutes gives about a 30% reduction; and applying 30 uM DHPG for 25 minutes also gives, on average, a 30% reduction, but the effects take longer to stabilize. The authors state that the models approximate an integrator, meaning that the duration of the input (5, 15, or 25 minutes of application) changes the response of the system.
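To get a feel for what "approximates an integrator" means here, below is a toy second-order difference equation (with invented coefficients, not the ones identified in the paper) that has one pole close to z = 1, so the size of the depression depends on how long the input is left on.

import numpy as np

def simulate_ltd(u, a1=1.6, a2=-0.605, b=0.05):
    # Toy second-order difference equation:
    #   y(t) = a1*y(t-1) + a2*y(t-2) - b*u(t-1)
    # One pole sits near z = 1, so the output keeps accumulating
    # while the input is on, much like an integrator would.
    y = np.zeros(len(u))
    for t in range(2, len(u)):
        y[t] = a1 * y[t - 1] + a2 * y[t - 2] - b * u[t - 1]
    return y

# Hypothetical input pulses: 15 uM "applied" for 5 vs. 15 samples
n = 120
short_pulse = np.zeros(n); short_pulse[10:15] = 15.0
long_pulse = np.zeros(n); long_pulse[10:25] = 15.0

for name, pulse in [("5-sample pulse", short_pulse), ("15-sample pulse", long_pulse)]:
    print(name, "-> peak response:", round(simulate_ltd(pulse).min(), 2))

The longer pulse produces a larger peak depression, which is the qualitative behavior the authors describe.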
The paper then discusses the results at the higher sampling rate (0.033 Hz). It gives reasoning for why a higher sampling rate should be used, but this raises the question of why they did not use the higher sampling rate for the previous experiments as well, instead of keeping the higher-sampling-rate results separate from the others.
In all their results, the order with the best fit turns out to be second order. Judging from how the pattern of LTD looks, this seems plausible, since there do not appear to be complex nonlinearities in LTD. The paper goes on to describe LTD in terms of subprocesses (serial, parallel, feedback); second order means there are two subprocesses, one with fast time constants and one with slow time constants. The results they give suggest there could be parallel and/or feedback structures in their model.
In terms of the subprocesses, I still lack a complete understanding of how the coefficients and time constants relate to them, so I may need to figure out how to work with them better.
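One way I could start to get a handle on this: the denominator coefficients of the transfer function determine its poles, and each real pole inside the unit circle corresponds to an exponential with a time constant set by the sampling interval. A small sketch with made-up coefficients (not the paper's values):

import numpy as np

def time_constants(a1, a2, dt):
    # For a second-order discrete transfer function with denominator
    #   1 + a1*z^-1 + a2*z^-2,
    # the poles are the roots of  z^2 + a1*z + a2 = 0.
    # A real pole p with 0 < p < 1 decays like p**t, i.e. with an
    # equivalent continuous time constant  tau = -dt / ln(p),
    # where dt is the sampling interval in seconds.
    poles = np.roots([1.0, a1, a2])
    taus = []
    for p in poles:
        if np.isreal(p) and 0 < p.real < 1:
            taus.append(-dt / np.log(p.real))
        else:
            taus.append(None)  # complex or unstable pole: no single time constant
    return poles, taus

# Made-up coefficients giving one slow and one fast subprocess,
# at the lower sampling rate (0.0033 Hz, so dt is about 300 s)
poles, taus = time_constants(a1=-1.6, a2=0.605, dt=300.0)
print("poles:", poles)
print("time constants (s):", [None if t is None else round(t) for t in taus])

Whether the two poles then correspond to serial, parallel, or feedback subprocesses is exactly the interpretive step I still need to work through.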
As the experiment itself was conceptually simple, the discussion doesn't seem to add much more to what is already known. The dynamics of the system are briefly mentioned. Additionally, they present evidence supporting the experimental protocols they used and explain a bit more about how the modeling results compare with the currently known physiological processes.
Overall, the paper is a useful reference for seeing how System Identification can be used to model physiological data and how to interpret the resulting model. As for its significance and usage, it seems a bit too rudimentary to place such a model in a larger-scale modeling scheme such as EONS. The conditions are too specific, and there is not enough experimental data to support the model. Nevertheless, such a model can prove useful: not only for analyzing the dynamics, but also, as more is learned about mGluR-dependent LTD, as one of the stepping stones toward bringing the entire process of synaptic plasticity together, which would then be of use to EONS in the future.