Open Access

Towards "General Purpose" Brain-Inspired Computing System

Department of Computer Science and Technology, Tsinghua University, Beijing 100084, China

Abstract

Brain-inspired computing refers to computational models, methods, and systems that are mainly inspired by the processing mode or structure of the brain. A recent study proposed the concept of "neuromorphic completeness" and a corresponding system hierarchy, which help determine the capability boundary of brain-inspired computing systems and judge whether brain-inspired hardware and software are compatible with each other. As a position paper, this article analyzes the design characteristics of existing brain-inspired chips and the current so-called "general purpose" application development frameworks for brain-inspired computing, and introduces the background and potential of this proposal. Further, some key features of this concept are presented through comparison with Turing completeness and approximate computation, and through analysis of its relationship to "general purpose" brain-inspired computing systems (i.e., computing systems that can support all computable applications). Finally, a promising technical approach to realizing such computing systems is introduced, together with our ongoing research and its foundations. We believe that this work is conducive to the design of extensible, neuromorphic-complete hardware primitives and the corresponding chips. On this basis, we expect to gradually realize a "general purpose" brain-inspired computing system that takes into account both functional completeness and application efficiency.
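To make the notion of a neuromorphic hardware primitive concrete, the following is a minimal illustrative sketch (not taken from the paper) of the kind of basic component such systems build on: a leaky integrate-and-fire (LIF) neuron, the common elementary unit of spiking neural networks. All names and parameter values here are illustrative assumptions, not the authors' design.

```python
def lif_step(v, i_in, tau=20.0, v_rest=0.0, v_th=1.0, dt=1.0):
    """One Euler step of a leaky integrate-and-fire neuron.

    Returns (new_membrane_potential, spiked).
    """
    # Leak toward the resting potential while integrating the input current.
    v = v + (dt / tau) * (v_rest - v + i_in)
    if v >= v_th:
        # Threshold crossing emits a spike; the potential resets.
        return v_rest, True
    return v, False


def run(currents):
    """Drive the neuron with a sequence of input currents; count output spikes."""
    v, spikes = 0.0, 0
    for i_in in currents:
        v, spiked = lif_step(v, i_in)
        spikes += spiked
    return spikes
```

With these parameters, a sustained sub-threshold current (e.g., 0.2) produces no spikes, while a strong current (e.g., 2.0) produces a regular spike train; information is carried in spike timing and rate rather than in continuous values, which is the processing mode the abstract refers to.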

Tsinghua Science and Technology
Pages 664-673
Cite this article:
Zhang Y, Qu P, Zheng W. Towards "General Purpose" Brain-Inspired Computing System. Tsinghua Science and Technology, 2021, 26(5): 664-673. https://doi.org/10.26599/TST.2021.9010010


Received: 18 January 2021
Accepted: 04 February 2021
Published: 20 April 2021
© The author(s) 2021. The articles published in this open access journal are distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).
