
Tutorial 4: Synaptic neuron not matching expected results #320

Open
jeshraghian opened this issue Apr 25, 2024 · Discussed in #319 · 3 comments
Comments

@jeshraghian
Owner

Discussed in #319

Originally posted by prntechnologies April 25, 2024
I've been going through all of the tutorials and I'm having an issue with Tutorial #4. I'm not getting the same results with the Synaptic neuron model as what is described in the tutorial.

Anyone else having issues following along, or is it just me?

@MoerAI

MoerAI commented May 6, 2024

I can also confirm the problem. The output of the "Synaptic Conductance-based Neuron Model With Input Spikes" section looks wrong. If you modify the hyperparameters alpha and beta defined in markdown cell 1.2, you can get the expected spike values. I'm working on translating the snnTorch tutorials into Korean, and would appreciate it if anyone could let me know here when this issue is resolved.
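A quick way to test this suggestion is to sweep a few decay rates and count the output spikes. A minimal sketch follows; the (alpha, beta) candidates and the input weighting are illustrative assumptions, not the values from the tutorial's cell 1.2:

import torch
import snntorch as snn

num_steps = 200
spk_in = torch.zeros(num_steps)
spk_in[::10] = 0.2  # weighted input spike train (placeholder weight)

# Candidate decay rates to try -- illustrative values only
for alpha, beta in [(0.9, 0.8), (0.8, 0.7), (0.5, 0.5)]:
    lif = snn.Synaptic(alpha=alpha, beta=beta)
    syn, mem = lif.init_synaptic()
    n_spikes = 0
    for step in range(num_steps):
        spk, syn, mem = lif(spk_in[step], syn, mem)
        n_spikes += int(spk.item())
    print(f"alpha={alpha}, beta={beta}: {n_spikes} spikes, final mem={float(mem):.3f}")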

@mpgl
Copy link

mpgl commented Jul 24, 2024

Hi,

I am also having the same issue with Tutorial 4. When I manually implement the synaptic LIF model as described in the tutorial, it works as expected. But when I try to reproduce the same simulation with snn.Synaptic, the membrane voltage for some reason goes to infinity. I also noticed that the input spikes are not converted into synaptic current. Are there any updates regarding this issue?

Thank you!

snntorch version: 0.9.1
pytorch version: 2.3.1+cu121
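For reference, the update rule that the manual implementation below follows (using U for the membrane voltage, I_syn for the synaptic current, and U_thr for the threshold; these symbol names are mine, matching the code's voltage, i_synaptic, and threshold) is:

$$S[t] = \begin{cases} 1 & \text{if } U[t] > U_{\rm thr} \\ 0 & \text{otherwise} \end{cases}, \qquad I_{\rm syn}[t+1] = \alpha\, I_{\rm syn}[t] + W\, X[t], \qquad U[t+1] = \beta\, U[t] + I_{\rm syn}[t+1] - S[t]\, U_{\rm thr}$$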

Here are the two versions of my code and the simulation results:

import torch
import matplotlib.pyplot as plt

def LIF_second_order(voltage, i_synaptic, X, w, alpha, beta, threshold=1):
    spike = (voltage > threshold)                               # spike if membrane crossed threshold on the previous step
    i_synaptic = alpha * i_synaptic + X * w                     # decay synaptic current and add the weighted input
    voltage = beta * voltage + i_synaptic - spike * threshold   # decay membrane, integrate current, reset by subtraction
    return spike, i_synaptic, voltage

# Simulation parameters
num_steps = 200
alpha = 0.9
beta = 0.8
w = 0.2

# Create input and monitor lists
X = torch.zeros(num_steps)
X[::10] = 1
v_monitor = []
spike_monitor = []
isyn_monitor = []

# Run simulation
voltage = torch.zeros(1) # Initialize voltage
isynaptic = torch.zeros(1) # Initialize synaptic current
for step in range(num_steps):
    spike, isynaptic, voltage = LIF_second_order(voltage, isynaptic, X[step], w, alpha, beta)
    spike_monitor.append(spike)
    v_monitor.append(voltage)
    isyn_monitor.append(isynaptic)

# Plot results
fig, axes = plt.subplots(3, 1, sharex=True)
ax0, ax1, ax2 = axes

ax0.plot(torch.nonzero(X), torch.ones_like(torch.nonzero(X)), 'k|')
ax0.set_yticks([])
ax0.set_ylabel('Input Spikes')

ax1.plot(isyn_monitor)
ax1.set_ylabel('Synaptic Current')

ax2.plot(v_monitor)
ax2.set_ylabel('Membrane Voltage')
ax2.set_xlabel('Time')
plt.show()

[Figure: input spike train, synaptic current, and membrane voltage from the manual LIF_second_order implementation, which behaves as expected]

import snntorch as snn

# Simulation parameters
num_steps = 200
alpha = 0.9
beta = 0.8
w = 0.2

# Create input and monitor lists
X = torch.zeros(num_steps)
X[::10] = w
v_monitor = []
spike_monitor = []
isyn_monitor = []

# Create synaptic LIF neuron
synaptic_lif = snn.Synaptic(alpha=alpha, beta=beta)

# Run simulation
isynaptic, voltage = synaptic_lif.init_synaptic()
for step in range(num_steps):
    spike, isynaptic, voltage = synaptic_lif(X[step], isynaptic, voltage)
    spike_monitor.append(spike)
    v_monitor.append(voltage)
    isyn_monitor.append(isynaptic)

# Plot results
fig, axes = plt.subplots(3, 1, sharex=True)
ax0, ax1, ax2 = axes

ax0.plot(torch.nonzero(X), torch.ones_like(torch.nonzero(X)), 'k|')
ax0.set_yticks([])
ax0.set_ylabel('Input Spikes')

ax1.plot(isyn_monitor)
ax1.set_ylabel('Synaptic Current')

ax2.plot(v_monitor)
ax2.set_ylabel('Membrane Voltage')
ax2.set_xlabel('Time')
plt.show()

[Figure: input spike train, synaptic current, and membrane voltage from the snn.Synaptic version; the input spikes are not converted into synaptic current and the membrane voltage diverges]

@Roiwa

Roiwa commented Oct 21, 2024

I have the same issue while working through Tutorial 4. The problem is in this call:
spk_out, syn, mem = lif1(spk_in[step], syn, mem)

There is a bug in the first statement of the forward function of the Synaptic neuron: it overwrites the syn value with the mem one:

if not syn == None:
    self.syn = mem

Nevertheless, I think the correct way to use Synaptic is to instantiate the neuron:
lif1 = snn.Synaptic(alpha=alpha, beta=beta)
then initialize it:
syn, mem = lif1.init_synaptic()
(in current versions we have to use syn, mem = lif1.reset_mem() instead),
and call the neuron with:
spk_out, syn, mem = lif1(spk_in[step])
rather than spk_out, syn, mem = lif1(spk_in[step], syn, mem), since the neuron keeps the state in its syn and mem attributes (self.syn and self.mem).
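For reference, a minimal sketch of the calling pattern described above, assuming snn.Synaptic tracks its state internally and that reset_mem() returns both states as reported; the decay rates and the input weighting are illustrative values taken from the earlier comment:

import torch
import snntorch as snn

alpha, beta = 0.9, 0.8
num_steps = 200

spk_in = torch.zeros(num_steps)
spk_in[::10] = 0.2  # weighted input spike train, one spike every 10 steps

lif1 = snn.Synaptic(alpha=alpha, beta=beta)
syn, mem = lif1.reset_mem()  # init_synaptic() in older releases

spk_rec, syn_rec, mem_rec = [], [], []
for step in range(num_steps):
    # State is not passed back in; the neuron uses self.syn and self.mem
    spk_out, syn, mem = lif1(spk_in[step])
    spk_rec.append(spk_out)
    syn_rec.append(syn)
    mem_rec.append(mem)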
