Brain-inspired computing is an emerging technology that draws on the principles of brain science and is oriented toward the efficient development of artificial general intelligence (AGI); a brain-inspired computing system is a hierarchical system composed of neuromorphic chips, basic software and hardware, and algorithms/applications that embody this technology. While the field is developing rapidly, it faces various challenges and opportunities brought by interdisciplinary research, including the issue of software and hardware fragmentation. This paper analyzes the status quo of brain-inspired computing systems. Enlightened by the design principles and methodology of general-purpose computers, it proposes constructing “general-purpose” brain-inspired computing systems. A general-purpose brain-inspired computing system is a brain-inspired computing hierarchy built on the design philosophy of decoupling software from hardware, which can flexibly support various brain-inspired computing applications and neuromorphic chips with different architectures. Further, this paper introduces our recent work in these aspects, including ANN (artificial neural network)/SNN (spiking neural network) development tools, a hardware-agnostic compilation infrastructure, and a chip micro-architecture offering both high programming flexibility and high performance. These studies show that such a “general-purpose” system can remarkably improve the efficiency of application development and enhance the productivity of basic software, thereby being conducive to accelerating the advancement of various brain-inspired algorithms and applications. We believe that this is the key to collaborative research and development, and to the co-evolution of applications, basic software, and chips in this field, and that it is conducive to building a favorable software/hardware ecosystem for brain-inspired computing.
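To make the SNN side of the ANN/SNN distinction concrete, the following is a minimal sketch of a leaky integrate-and-fire (LIF) neuron, the basic unit of most spiking neural networks. It is an illustrative toy, not code from the paper or its toolchain; the function name and all parameter values (threshold, reset potential, leak factor) are assumptions chosen for clarity.

```python
def simulate_lif(input_current, v_th=1.0, v_reset=0.0, leak=0.9):
    """Simulate one LIF neuron over discrete time steps.

    input_current: sequence of input values, one per time step.
    Returns a binary spike train (1 = the neuron fired that step).
    """
    v = 0.0  # membrane potential
    spikes = []
    for i in input_current:
        v = leak * v + i      # leaky integration of input
        if v >= v_th:         # threshold crossing -> emit a spike
            spikes.append(1)
            v = v_reset       # reset membrane potential after firing
        else:
            spikes.append(0)
    return spikes

print(simulate_lif([0.6, 0.6, 0.6, 0.0, 0.6, 0.6]))  # [0, 1, 0, 0, 1, 0]
```

Unlike an ANN activation, the output is a sparse, stateful spike train: information is carried in spike timing, which is what neuromorphic chips exploit for event-driven, low-power execution.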