
Hardware Implementation of Spiking Neural Networks on FPGA

Jianhui Han, Zhaolin Li, Weimin Zheng, and Youhui Zhang
Institute of Microelectronics, Tsinghua University, Beijing 100084, China.
Research Institute of Information Technology, Tsinghua University, Beijing 100084, China.
Department of Computer Science and Technology, Tsinghua University, Beijing 100084, China.

Abstract

Inspired by real biological neural models, Spiking Neural Networks (SNNs) process information with discrete spikes and show great potential for building low-power neural network systems. This paper proposes a hardware implementation of SNNs based on Field-Programmable Gate Arrays (FPGAs). It features a hybrid updating algorithm, which combines the advantages of existing algorithms to simplify hardware design and improve performance. The proposed design supports up to 16 384 neurons and 16.8 million synapses, yet requires minimal hardware resources and achieves a very low power consumption of 0.477 W. A test platform is built based on the proposed design using a Xilinx FPGA evaluation board, upon which we deploy a classification task on the MNIST dataset. The evaluation results show an accuracy of 97.06% and a frame rate of 161 frames per second.
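The hybrid updating algorithm is only summarized above, so the sketch below is illustrative rather than the authors' actual design. It assumes one common way of hybridizing the two standard SNN update strategies: event-driven synaptic accumulation (synapse memory is touched only when an input spike actually arrives) combined with time-driven leak and threshold checks (performed once per timestep for every neuron). The neuron model, names, and parameters (hybrid_step, leak, v_th, leaky integrate-and-fire dynamics) are assumptions for illustration, not taken from the paper.

```python
import numpy as np

def hybrid_step(v, weights, spikes_in, leak=0.9, v_th=1.0):
    """One timestep of a hybrid time-/event-driven LIF layer (illustrative sketch).

    v         -- membrane potentials, shape (n_out,)
    weights   -- synaptic weights, shape (n_in, n_out)
    spikes_in -- boolean input spike vector, shape (n_in,)
    """
    # Event-driven part: fetch and accumulate weights only for inputs
    # that fired, mirroring hardware that reads synapse memory per spike.
    for pre in np.flatnonzero(spikes_in):
        v += weights[pre]

    # Time-driven part: leak and threshold every neuron once per
    # timestep, which keeps the control logic simple and regular.
    v *= leak
    spikes_out = v >= v_th
    v[spikes_out] = 0.0  # reset fired neurons
    return v, spikes_out

# Toy usage: 4 inputs driving 3 neurons for one timestep.
rng = np.random.default_rng(0)
v = np.zeros(3)
w = rng.normal(0.0, 0.5, size=(4, 3))
v, out = hybrid_step(v, w, np.array([True, False, True, False]))
```

The appeal of such a split is that per-spike work scales with actual spike traffic, which is sparse in SNNs, while the per-timestep pass keeps neuron-state updates regular and easy to pipeline in hardware.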

Keywords: Spiking Neural Network (SNN), Field-Programmable Gate Arrays (FPGA), digital circuit, low-power, MNIST


Publication history

Received: 26 March 2019
Revised: 25 April 2019
Accepted: 05 May 2019
Published: 13 January 2020
Issue date: August 2020

Copyright

© The author(s) 2020

Acknowledgements

This work was supported in part by the Beijing Innovation Center for Future Chip, Tsinghua University, in part by the Science and Technology Innovation Special Zone project, China, and in part by the Tsinghua University Initiative Scientific Research Program (No. 2018Z05JDX005).

Rights and permissions

The articles published in this open access journal are distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).
