Regular Paper

SIES: A Novel Implementation of Spiking Convolutional Neural Network Inference Engine on Field-Programmable Gate Array

College of Computer Science and Technology, National University of Defense Technology, Changsha 430041, China


Neuromorphic computing is considered to be the future of machine learning, and it provides a new way of performing cognitive computing. Inspired by the excellent performance of spiking neural networks (SNNs) in the fields of low-power consumption and parallel computing, many groups have tried to implement SNNs on hardware platforms. However, the efficiency of training SNNs with neuromorphic algorithms is still unsatisfactory. To address this, Michael et al. proposed a method that solves the problem with the help of a deep neural network (DNN): a well-trained DNN can be easily converted into a spiking convolutional neural network (SCNN). So far, little work has focused on hardware acceleration for SCNNs. The motivation of this paper is to design an SNN processor that accelerates SNN inference for SCNNs obtained by this DNN-to-SNN method. We propose SIES (Spiking neural network Inference Engine for SCNN accelerating). It uses a systolic array to compute membrane potential increments, and it integrates an optional hardware max-pooling module to reduce data movement between the host and the SIES. We also design a hardware data setup mechanism for the convolutional layers of the SIES with which we can minimize the time spent preparing input spikes. We implement the SIES on the FPGA XCVU440. It supports up to 4,000 neurons and 256,000 synapses, runs at a working frequency of 200 MHz, and achieves a peak performance of 1.5625 TOPS.
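The per-timestep computation sketched in the abstract (membrane potential increments produced as a weight-spike product by the systolic array, integrate-and-fire thresholding, and spike-count max pooling) can be illustrated in NumPy roughly as follows. This is a minimal sketch of the general DNN-to-SNN inference scheme, not the authors' actual hardware interface; all names (`sies_step`, `spike_max_pool`, `v_thresh`) are assumptions for illustration.

```python
import numpy as np

def sies_step(weights, in_spikes, potentials, v_thresh=1.0):
    """One inference timestep for a layer of integrate-and-fire neurons.

    weights:    (n_out, n_in) synaptic weight matrix
    in_spikes:  (n_in,) binary spike vector for this timestep
    potentials: (n_out,) membrane potentials carried over from last step
    """
    # Membrane potential increments: the matrix-vector product that the
    # systolic array computes in hardware.
    increments = weights @ in_spikes
    potentials = potentials + increments
    # A neuron fires once its potential crosses the threshold.
    out_spikes = (potentials >= v_thresh).astype(np.float64)
    # Reset by subtraction, as commonly used in DNN-to-SNN conversion.
    potentials = potentials - out_spikes * v_thresh
    return out_spikes, potentials

def spike_max_pool(spike_counts, k=2):
    """Spike-based max pooling over non-overlapping k x k windows,
    keeping the unit with the largest accumulated spike count."""
    h, w = spike_counts.shape
    return spike_counts.reshape(h // k, k, w // k, k).max(axis=(1, 3))
```

Performing max pooling on accumulated spike counts on-chip, as the optional module does, avoids shipping intermediate feature maps back to the host between layers.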

Electronic Supplementary Material

Download File(s)
jcst-35-2-475-Highlights.pdf (310.2 KB)



Akopyan F, Sawada J, Cassidy A, Alvarez-Icaza R, Arthur J, Merolla P, Imam N, Nakamura Y, Datta P, Nam G J et al. TrueNorth: Design and tool flow of a 65 mW 1 million neuron programmable neurosynaptic chip. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 2015, 34(10): 1537-1557.

Geddes J, Lloyd S, Simpson A C et al. NeuroGrid: Using grid technology to advance neuroscience. In Proc. the 18th IEEE Symposium on Computer-Based Medical Systems, June 2005, pp.570-572.
Schemmel J, Grübl A, Hartmann S et al. Live demonstration: A scaled-down version of the BrainScaleS wafer-scale neuromorphic system. In Proc. the 2012 IEEE International Symposium on Circuits and Systems, May 2012, p.702.

Furber S B, Lester D R, Plana L A, Garside J D, Painkras E, Temple S, Brown A D. Overview of the SpiNNaker system architecture. IEEE Transactions on Computers, 2013, 62(12): 2454-2467.


Davies M, Jain S, Liao Y et al. Loihi: A neuromorphic manycore processor with on-chip learning. IEEE Micro, 2018, 38(1): 82-99.

Diehl P U, Neil D, Binas J, Cook M, Liu S C, Pfeiffer M. Fast-classifying, high-accuracy spiking deep networks through weight and threshold balancing. In Proc. the 2015 International Joint Conference on Neural Networks, July 2015.

Rueckauer B, Lungu I A, Hu Y, Pfeiffer M, Liu S C. Conversion of continuous-valued deep networks to efficient event-driven networks for image classification. Frontiers in Neuroscience, 2017, 11: Article No. 682.

Rueckauer B, Lungu I A, Hu Y, Pfeiffer M. Theory and tools for the conversion of analog to spiking convolutional neural networks. arXiv: 1612.04052, 2016.
Du Z D, Fasthuber R, Chen T S, Ienne P, Li L, Luo T, Feng X B, Chen Y J, Temam O. ShiDianNao: Shifting vision processing closer to the sensor. In Proc. the 42nd ACM/IEEE International Symposium on Computer Architecture, June 2015, pp.92-104.
Guan Y J, Yuan Z H, Sun G Y, Cong J. FPGA-based accelerator for long short-term memory recurrent neural networks. In Proc. the 22nd Asia and South Pacific Design Automation Conference, January 2017, pp.629-634.
Zhou Y M, Jiang J F. An FPGA-based accelerator implementation for deep convolutional neural networks. In Proc. the 4th International Conference on Computer Science and Network Technology, December 2016, pp.829-832.

Neil D, Liu S C. Minitaur, an event-driven FPGA-based spiking network accelerator. IEEE Transactions on Very Large Scale Integration Systems, 2014, 22(12): 2621-2628.


Wang R, Thakur C S, Cohen G, Hamilton T J, Tapson J, van Schaik A. Neuromorphic hardware architecture using the neural engineering framework for pattern recognition. IEEE Transactions on Biomedical Circuits and Systems, 2017, 11(3): 574-584.

Glackin B, Mcginnity T M, Maguire L P, Wu Q X, Belatreche A. A novel approach for the implementation of large-scale spiking neural networks on FPGA hardware. In Lecture Notes in Computer Science 3512, Cabestany J, Prieto A, Sandoval F (eds.), Springer, 2005, pp.552-563.
Cheung K, Schultz S R, Luk W. A large-scale spiking neural network accelerator for FPGA systems. In Proc. the 22nd International Conference on Artificial Neural Networks, September 2012, pp.113-130.

Benton A L. Foundations of physiological psychology. Neurology, 1968, 18(6): 609-612.


Hodgkin A L, Huxley A F, Katz B. Measurement of current-voltage relations in the membrane of the giant axon of Loligo. J. Physiol., 1952, 116(4): 424-448.


Izhikevich E M. Simple model of spiking neurons. IEEE Transactions on Neural Networks, 2003, 14(6): 1569-1572.


Brunel N, van Rossum M C W. Lapicque’s 1907 paper: From frogs to integrate-and-fire. Biological Cybernetics, 2007, 97(5/6): 337-339.


Liu Y H, Wang X J. Spike-frequency adaptation of a generalized leaky integrate-and-fire model neuron. Journal of Computational Neuroscience, 2001, 10(1): 25-45.


Brette R, Gerstner W. Adaptive exponential integrate-and-fire model as an effective description of neuronal activity. Journal of Neurophysiology, 2005, 94(5): 3637-3642.


Paninski L, Pillow J W, Simoncelli E P. Maximum likelihood estimation of a stochastic integrate-and-fire neural encoding model. Neural Computation, 2004, 16(12): 2533-2561.


Tsumoto K, Kitajima H, Yoshinaga T, Aihara K, Kawakami H. Bifurcations in Morris-Lecar neuron model. Neurocomputing, 2006, 69(4-6): 293-316.


Linares-Barranco B, Sanchez-Sinencio E, Rodriguez-Vazquez A, Huertas J L. A CMOS implementation of the Fitzhugh-Nagumo neuron model. IEEE Journal of Solid-State Circuits, 1991, 26(7): 956-965.


Yadav R N, Kalra P K, John J. Time series prediction with single multiplicative neuron model. Applied Soft Computing, 2007, 7(4): 1157-1163.


Maguire L P, Mcginnity T M, Glackin B, Ghani A, Belatreche A, Harkin J. Challenges for large-scale implementations of spiking neural networks on FPGAs. Neurocomputing, 2007, 71(1): 13-29.

Gerstner W, Kistler W. Spiking Neuron Models: Single Neurons, Populations, Plasticity (1st edition). Cambridge University Press, 2002.
Gerstner W. Spiking neuron models. In Encyclopedia of Neuroscience, Squire L R (ed.), Academic Press, 2009, pp.277-280.

Lopresti D P. P-NAC: A systolic array for comparing nucleic acid sequences. Computer, 1987, 20(7): 98-99.


Samajdar A, Zhu Y, Whatmough P, Mattina M, Krishna T. SCALE-Sim: Systolic CNN accelerator simulator. arXiv: 1811.02883, 2018.

Jouppi N P, Young C, Patil N et al. In-datacenter performance analysis of a tensor processing unit. In Proc. the 44th International Symposium on Computer Architecture, June 2017, pp.1-12.
Simonyan K, Zisserman A. Very deep convolutional networks for large-scale image recognition. In Proc. the 3rd International Conference on Learning Representations, May 2015, Article No. 4.

Shen J C, Ma D, Gu Z H, Zhang M, Zhu X L, Xu X Q, Xu Q, Shen Y J, Pan G. Darwin: A neuromorphic hardware co-processor based on spiking neural networks. Science China Information Sciences, 2016, 59(2): Article No. 023401.

Kousanakis E, Dollas A, Sotiriades E et al. An architecture for the acceleration of a hybrid leaky integrate and fire SNN on the Convey HC-2ex FPGA-based processor. In Proc. the 25th IEEE International Symposium on Field-Programmable Custom Computing Machines, April 2017, pp.56-63.
Fang H, Shrestha A, Ma D et al. Scalable NoC-based neuromorphic hardware learning and inference. arXiv: 1810.09233, 2018.

Cheung K, Schultz S R, Luk W. NeuroFlow: A general purpose spiking neural network simulation platform using customizable processors. Frontiers in Neuroscience, 2015, 9: Article No. 516.


Albericio J, Judd P, Hetherington T et al. Cnvlutin: Ineffectual-neuron-free deep neural network computing. ACM SIGARCH Computer Architecture News, 2016, 44(3): 1-13.


Guo S, Wang L, Chen B, Dou Q. An overhead-free max-pooling method for SNN. IEEE Embedded Systems Letters. doi:10.1109/LES.2019.2919244.

Sengupta A, Ye Y T, Wang R, Liu C, Roy K. Going deeper in spiking neural networks: VGG and residual architectures. arXiv: 1802.02627, 2018.

LeCun Y, Bottou L, Bengio Y, Haffner P. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 1998, 86(11): 2278-2324.

Netzer Y, Wang T, Coates A, Bissacco A, Wu B, Ng A Y. Reading digits in natural images with unsupervised feature learning. In Proc. the NIPS Workshop on Deep Learning and Unsupervised Feature Learning, December 2011.
Krizhevsky A. Learning multiple layers of features from tiny images. Technical Report, University of Toronto, 2009.
Journal of Computer Science and Technology
Pages 475-489
Cite this article:
Wang S-Q, Wang L, Deng Y, et al. SIES: A Novel Implementation of Spiking Convolutional Neural Network Inference Engine on Field-Programmable Gate Array. Journal of Computer Science and Technology, 2020, 35(2): 475-489.












Received: 01 May 2019
Revised: 13 February 2020
Published: 27 March 2020
©Institute of Computing Technology, Chinese Academy of Sciences 2020