Currently, when performing a Wilcoxon analysis, the spike_analysis script's n = the number of events, since it just takes the average firing rate pre-event and during-event and runs a Wilcoxon test on those pairs. As far as I can tell (textbook source), the minimum sample size required for a Wilcoxon signed-rank test to reach significance is n = 6: with n = 5 the smallest possible two-sided p-value is 2/2^5 = 0.0625. scipy.stats.wilcoxon will technically accept fewer samples, but the smallest possible p-value is still above 0.05.
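This floor can be checked directly. The sketch below feeds scipy.stats.wilcoxon the most extreme possible rank pattern (every difference positive) at n = 5 and n = 6 — made-up numbers, not real firing rates:

```python
from scipy.stats import wilcoxon

# Most extreme possible data: every difference is positive, so the
# signed-rank statistic takes its minimum value and p is as small
# as it can possibly get for that n (exact test, two-sided).
p_n5 = wilcoxon([1, 2, 3, 4, 5]).pvalue
p_n6 = wilcoxon([1, 2, 3, 4, 5, 6]).pvalue

print(p_n5)  # 0.0625  -- 2/2**5, can never clear 0.05
print(p_n6)  # 0.03125 -- 2/2**6, first n where p < 0.05 is reachable
```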

This partly explains the occurrence × significance relation: notice that nothing with fewer than 6 occurrences is significant.

[figure: occurrence vs. significance]

Because the Wilcoxon test works on paired comparisons, I tried running it on long arrays of firing rates: one entry for every occurrence of an event, paired with the same occurrence's 10 s baseline. It worked, but almost every neuron came out significant for almost every event (~89%), with roughly half of the p-values on the order of 10^-100.
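A minimal sketch of that pooled test, with simulated firing rates standing in for the real recordings (the 5 Hz baseline, 0.3 Hz event shift, and 500 occurrences are made-up parameters):

```python
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(0)

# Hypothetical per-occurrence firing rates for one neuron: a small
# but consistent elevation during events relative to the 10 s baseline.
n_occurrences = 500
baseline = rng.normal(5.0, 1.0, n_occurrences)           # baseline rate per occurrence
during = baseline + rng.normal(0.3, 1.0, n_occurrences)  # modest shift during the event

# One large paired test over every occurrence: the huge n drives even
# a modest, consistent shift down to an extremely small p-value.
stat, p_pooled = wilcoxon(during, baseline)
print(p_pooled)
```

This is the mechanism behind the ~89% figure: pooling occurrences multiplies n, and the signed-rank p-value falls roughly exponentially in n for a fixed effect size.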

So I then tried combining the p-values of separate Wilcoxon tests with Fisher's method, instead of performing one large Wilcoxon, but that gave me even smaller p-values:
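For reference, scipy.stats.combine_pvalues implements Fisher's method. The sketch below uses made-up per-test p-values to show how aggressively it aggregates: ten individually borderline tests combine into a far smaller p-value.

```python
from scipy.stats import combine_pvalues

# Hypothetical per-occurrence p-values: each one only just under 0.05,
# but Fisher's method sums -2*ln(p) across tests (chi-squared, df = 2k),
# so consistent weak evidence compounds into strong combined evidence.
pvals = [0.04] * 10

stat, p_combined = combine_pvalues(pvals, method='fisher')
print(p_combined)  # orders of magnitude below any individual p-value
```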

The actual correlation is lower, but I still can't rule out that the Wilcoxon test biases toward significance at higher sample numbers.
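One way to probe that suspicion: hold the (simulated, made-up) effect size fixed and vary only n. The p-value shrinks with sample size even though the underlying effect is unchanged, which is exactly the confound between occurrence count and significance.

```python
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(1)

# Same hypothetical effect (shift 0.5, sd 1.0) at two sample sizes.
# Only n differs between the two tests.
shift, sd = 0.5, 1.0
p_small = wilcoxon(rng.normal(shift, sd, 10)).pvalue   # n = 10
p_large = wilcoxon(rng.normal(shift, sd, 200)).pvalue  # n = 200

print(p_small, p_large)  # the larger sample yields the smaller p-value
```

This isn't a bias in the test so much as the definition of power: with more samples, the same effect size is easier to distinguish from zero, so events that occur more often will look "more significant" for that reason alone.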

[figure: occurrence vs. significance, Fisher's method]