Spiking Neural Network (SNN) simulation is essential for studying brain function and validating neuroscientific hypotheses, and it also has applications in artificial intelligence. Recently, GPU-based simulators have been developed to support real-time simulation of SNNs. However, the performance and scale of these simulators are severely limited by random memory access patterns and global communication between devices. We therefore propose an efficient distributed heterogeneous SNN simulator based on the Sunway accelerators (including SW26010 and SW26010pro), named SWsnn, which supports accurate simulation with a small time step (1/16 ms), random per-synapse delays, and larger-scale networks. Compared with existing GPUs, the Local Dynamic Memory (LDM) (similar to a cache) in Sunway is much larger (4 MB or 16 MB in each core group). To improve simulation performance, we redesign the network data storage structure and the synaptic plasticity workflow so that most random accesses occur in LDM. By separating the general SNN workflow, SWsnn hides Message Passing Interface (MPI) operations to reduce communication cost. Moreover, SWsnn relies on the parallel Compute Processing Elements (CPEs) rather than the serial Management Processing Element (MPE) to control the communication buffers, using Register-Level Communication (RLC) and Direct Memory Access (DMA). SWsnn is further optimized with vectorization and DMA-hiding techniques. Experimental results show that SWsnn runs 1.4−2.2 times faster than the state-of-the-art GPU-based SNN simulator GPU-enhanced Neuronal Networks (GeNN), and supports real-time simulation at much larger scales.
J. Jordan, T. Ippen, M. Helias, I. Kitayama, M. Sato, J. Igarashi, M. Diesmann, and S. Kunkel, Extremely scalable spiking neuronal network simulation code: From laptops to exascale computers, Front. Neuroinform., vol. 12, p. 2, 2018.
S. Kunkel, M. Schmidt, J. M. Eppler, H. E. Plesser, G. Masumoto, J. Igarashi, S. Ishii, T. Fukai, A. Morrison, M. Diesmann et al., Spiking network simulation code for petascale computers, Front. Neuroinform., vol. 8, p. 78, 2014.
S. Kunkel, T. C. Potjans, J. M. Eppler, H. E. Plesser, A. Morrison, and M. Diesmann, Meeting the memory challenges of brain-scale network simulation, Front. Neuroinform., vol. 5, p. 35, 2012.
J. Igarashi, O. Shouno, T. Fukai, and H. Tsujino, Real-time simulation of a spiking neural network model of the basal Ganglia circuitry using general purpose computing on graphics processing units, Neural Netw., vol. 24, no. 9, pp. 950–960, 2011.
M. Migliore, C. Cannia, W. W. Lytton, H. Markram, and M. L. Hines, Parallel network simulations with NEURON, J. Comput. Neurosci., vol. 21, no. 2, pp. 119–129, 2006.
M. Stimberg, D. F. M. Goodman, and T. Nowotny, Brian2GeNN: Accelerating spiking neural network simulations with graphics hardware, Sci. Rep., vol. 10, no. 1, p. 410, 2020.
E. Yavuz, J. Turner, and T. Nowotny, GeNN: A code generation framework for accelerated brain simulations, Sci. Rep., vol. 6, no. 1, p. 18854, 2016.
R. Brette and D. F. M. Goodman, Simulating spiking neural networks on GPU, Netw. Comput. Neural Syst., vol. 23, no. 4, pp. 167–182, 2012.
S. Henker, J. Partzsch, and R. Schüffny, Accuracy evaluation of numerical methods used in state-of-the-art simulators for spiking neural networks, J. Comput. Neurosci., vol. 32, no. 2, pp. 309–326, 2012.
N. Imam and T. A. Cleland, Rapid online learning and robust recall in a neuromorphic olfactory circuit, Nat. Mach. Intell., vol. 2, no. 3, pp. 181–191, 2020.
J. Pei, L. Deng, S. Song, M. Zhao, Y. Zhang, S. Wu, G. Wang, Z. Zou, Z. Wu, W. He et al., Towards artificial general intelligence with hybrid Tianjic chip architecture, Nature, vol. 572, no. 7767, pp. 106–111, 2019.
P. Qu, Y. Zhang, X. Fei, and W. Zheng, High performance simulation of spiking neural network on GPGPUs, IEEE Trans. Parallel Distrib. Syst., vol. 31, no. 11, pp. 2510–2523, 2020.
J. Lin, Z. Xu, L. Cai, A. Nukada, and S. Matsuoka, Evaluating the SW26010 many-core processor with a micro-benchmark suite for performance optimizations, Parallel Comput., vol. 77, pp. 128–143, 2018.
H. Markram, W. Gerstner, and P. J. Sjöström, Spike-timing-dependent plasticity: A comprehensive overview, Front. Synaptic Neurosci., vol. 4, p. 2, 2012.
A. Morrison, A. Aertsen, and M. Diesmann, Spike-timing-dependent plasticity in balanced random networks, Neural Comput., vol. 19, no. 6, pp. 1437–1467, 2007.
W. Gerstner, R. Kempter, J. L. van Hemmen, and H. Wagner, A neuronal learning rule for sub-millisecond temporal coding, Nature, vol. 383, no. 6595, pp. 76–81, 1996.
H. Fu, J. Liao, J. Yang, L. Wang, Z. Song, X. Huang, C. Yang, W. Xue, F. Liu, F. Qiao et al., The Sunway TaihuLight supercomputer: System and applications, Sci. China Inf. Sci., vol. 59, no. 7, p. 072001, 2016.
X. Li, W. Wang, F. Xue, and Y. Song, Computational modeling of spiking neural network with learning rules from STDP and intrinsic plasticity, Phys. A Stat. Mech. Appl., vol. 491, pp. 716–728, 2018.
M. J. Shelley and L. Tao, Efficient and accurate time-stepping schemes for integrate-and-fire neuronal networks, J. Comput. Neurosci., vol. 11, no. 2, pp. 111–119, 2001.
R. D. Stewart and W. Bair, Spiking neural network simulation: Numerical integration with the Parker-Sochacki method, J. Comput. Neurosci., vol. 27, no. 1, pp. 115–133, 2009.
E. M. Izhikevich and G. M. Edelman, Large-scale model of mammalian thalamocortical systems, Proc. Natl. Acad. Sci. U. S. A., vol. 105, no. 9, pp. 3593–3598, 2008.
M. J. Skocik and L. N. Long, On the capabilities and computational costs of neuron models, IEEE Trans. Neural Netw. Learn. Syst., vol. 25, no. 8, pp. 1474–1483, 2014.
T. C. Potjans and M. Diesmann, The cell-type specific cortical microcircuit: Relating structure and activity in a full-scale spiking network model, Cereb. Cortex, vol. 24, no. 3, pp. 785–806, 2014.
The articles published in this open access journal are distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).