[1] N. Stephenson, Snow Crash, New York, NY, USA: Spectra, 2000.
[7] J. Deng, W. Dong, R. Socher, L. J. Li, K. Li, and F. Li, ImageNet: A large-scale hierarchical image database, in Proc. 2009 IEEE Conf. on Computer Vision and Pattern Recognition, Miami, FL, USA, 2009, pp. 248–255.
[8] X. Wang, X. Zhang, Y. Zhu, Y. Guo, X. Yuan, L. Xiang, Z. Wang, G. Ding, D. Brady, Q. Dai, et al., PANDA: A gigapixel-level human-centric video dataset, in Proc. 2020 IEEE/CVF Conf. on Computer Vision and Pattern Recognition, Seattle, WA, USA, 2020, pp. 3268–3278.
[10] X. Ding, Y. Guo, G. Ding, and J. Han, ACNet: Strengthening the kernel skeletons for powerful CNN via asymmetric convolution blocks, in Proc. 2019 IEEE/CVF Int. Conf. on Computer Vision, Seoul, Republic of Korea, 2019, pp. 1911–1920.
[11] X. Ding, T. Hao, J. Tan, J. Liu, J. Han, Y. Guo, and G. Ding, ResRep: Lossless CNN pruning via decoupling remembering and forgetting, in Proc. 2021 IEEE/CVF Int. Conf. on Computer Vision, Montreal, Canada, 2021, pp. 4510–4520.
[15] T. Yu, Z. Zheng, K. Guo, P. Liu, Q. Dai, and Y. Liu, Function4D: Real-time human volumetric capture from very sparse consumer RGBD sensors, in Proc. 2021 IEEE/CVF Conf. on Computer Vision and Pattern Recognition, Nashville, TN, USA, 2021, pp. 5746–5756.
[16] Z. Zheng, T. Yu, Q. Dai, and Y. Liu, Deep implicit templates for 3D shape representation, in Proc. 2021 IEEE/CVF Conf. on Computer Vision and Pattern Recognition, Nashville, TN, USA, 2021, pp. 1429–1439.
[18] Y. Hitomi, J. Gu, M. Gupta, T. Mitsunaga, and S. K. Nayar, Video from a single coded exposure photograph using a learned over-complete dictionary, in Proc. 2011 Int. Conf. on Computer Vision, Barcelona, Spain, 2011, pp. 287–294.
[24] J. Ma, X. Liu, Z. Shou, and X. Yuan, Deep tensor ADMM-Net for snapshot compressive imaging, in Proc. 2019 IEEE/CVF Int. Conf. on Computer Vision, Seoul, Republic of Korea, 2019, pp. 10222–10231.
[26] X. Miao, X. Yuan, Y. Pu, and V. Athitsos, Lambda-net: Reconstruct hyperspectral images from a snapshot measurement, in Proc. 2019 IEEE/CVF Int. Conf. on Computer Vision, Seoul, Republic of Korea, 2019, pp. 4058–4068.
[27] Z. Meng, J. Ma, and X. Yuan, End-to-end low cost compressive spectral imaging with spatial-spectral self-attention, in Proc. 16th European Conf. on Computer Vision, Glasgow, UK, 2020, pp. 187–204.
[28] T. Huang, W. Dong, X. Yuan, J. Wu, and G. Shi, Deep Gaussian scale mixture prior for spectral compressive imaging, in Proc. 2021 IEEE Conf. on Computer Vision and Pattern Recognition, Nashville, TN, USA, 2021, pp. 16211–16220.
[29] Y. Li, M. Qi, R. Gulve, M. Wei, R. Genov, K. N. Kutulakos, and W. Heidrich, End-to-end video compressive sensing using Anderson-accelerated unrolled networks, in Proc. 2020 IEEE Int. Conf. on Computational Photography, St. Louis, MO, USA, 2020, pp. 1–12.
[30] Z. Cheng, R. Lu, Z. Wang, H. Zhang, B. Chen, Z. Meng, and X. Yuan, BIRNAT: Bidirectional recurrent neural networks with adversarial training for video snapshot compressive imaging, in Proc. 16th European Conf. on Computer Vision, Glasgow, UK, 2020, pp. 258–275.
[33] Z. Meng, Z. Yu, K. Xu, and X. Yuan, Self-supervised neural networks for spectral snapshot compressive imaging, in Proc. 2021 IEEE/CVF Int. Conf. on Computer Vision, Montreal, Canada, 2021, pp. 2602–2611.
[34] Z. Cheng, B. Chen, G. Liu, H. Zhang, R. Lu, Z. Wang, and X. Yuan, Memory-efficient network for large-scale video compressive sensing, in Proc. 2021 IEEE/CVF Conf. on Computer Vision and Pattern Recognition, Nashville, TN, USA, 2021, pp. 16241–16250.
[35] Z. Wang, H. Zhang, Z. Cheng, B. Chen, and X. Yuan, MetaSCI: Scalable and adaptive reconstruction for video compressive sensing, in Proc. 2021 IEEE/CVF Conf. on Computer Vision and Pattern Recognition, Nashville, TN, USA, 2021, pp. 2083–2092.
[36] J. Chang and G. Wetzstein, Deep optics for monocular depth estimation and 3D object detection, in Proc. 2019 IEEE/CVF Int. Conf. on Computer Vision, Seoul, Republic of Korea, 2019, pp. 10193–10202.
[39] Y. Inagaki, Y. Kobayashi, K. Takahashi, T. Fujii, and H. Nagahara, Learning to capture light fields through a coded aperture camera, in Proc. 15th European Conf. on Computer Vision, Munich, Germany, 2018, pp. 418–434.
[40] U. Akpinar, E. Sahin, and A. Gotchev, Learning optimal phase-coded aperture for depth of field extension, in Proc. 2019 IEEE Int. Conf. on Image Processing, Taipei, China, 2019, pp. 4315–4319.
[43] P. A. Shedligeri, S. Mohan, and K. Mitra, Data driven coded aperture design for depth recovery, in Proc. 2017 IEEE Int. Conf. on Image Processing, Beijing, China, 2017, pp. 56–60.
[44] M. Gupta, A. Agrawal, A. Veeraraghavan, and S. G. Narasimhan, Flexible voxels for motion-aware videography, in Proc. 11th European Conf. on Computer Vision, Heraklion, Greece, 2010, pp. 100–114.
[48] S. Rao, K. Y. Ni, and Y. Owechko, Context and task-aware knowledge-enhanced compressive imaging, in Proc. SPIE 8877, Unconventional Imaging and Wavefront Sensing 2013, San Diego, CA, USA, 2013, p. 88770E.
[53] Z. Du, R. Fasthuber, T. Chen, P. Ienne, L. Li, T. Luo, X. Feng, Y. Chen, and O. Temam, ShiDianNao: Shifting vision processing closer to the sensor, in Proc. ACM/IEEE 42nd Annu. Int. Symp. on Computer Architecture (ISCA), Portland, OR, USA, 2015, pp. 92–104.
[54] R. LiKamWa, Y. Hou, Y. Gao, M. Polansky, and L. Zhong, RedEye: Analog convnet image sensor architecture for continuous mobile vision, in Proc. ACM/IEEE 43rd Annu. Int. Symp. on Computer Architecture (ISCA), Seoul, Republic of Korea, 2016, pp. 255–266.
[58] H. M. Said, I. El Emary, B. A. Alyoubi, and A. A. Alyoubi, Application of intelligent data mining approach in securing the cloud computing, Int. J. Adv. Comput. Sci. Appl., vol. 7, no. 9, 2016, doi: 10.14569/IJACSA.2016.070921.
[59] X. Yuan, C. Li, and X. Li, DeepDefense: Identifying DDoS attack via deep learning, in Proc. 2017 IEEE Int. Conf. on Smart Computing (SMARTCOMP), Hong Kong, China, 2017, pp. 1–8.
[62] J. Gao, Machine learning applications for data center optimization, https://static.googleusercontent.com/media/research.google.com/zh-CN//pubs/archive/42542.pdf, 2014.
[63] C. Coleman, D. Narayanan, D. Kang, T. Zhao, J. Zhang, L. Nardi, P. Bailis, K. Olukotun, C. Ré, and M. A. Zaharia, DAWNBench: An end-to-end deep learning benchmark and competition, https://cs.stanford.edu/~deepakn/assets/papers/dawnbench-sysml18.pdf, 2017.
[64] J. Lin, R. Men, A. Yang, C. Zhou, M. Ding, Y. Zhang, P. Wang, A. Wang, L. Jiang, X. Jia, et al., M6: A Chinese multimodal pretrainer, arXiv preprint arXiv: 2103.00823, 2021.
[65] J. Devlin, M. W. Chang, K. Lee, and K. Toutanova, BERT: Pre-training of deep bidirectional transformers for language understanding, arXiv preprint arXiv: 1810.04805, 2019.
[70] Y. Huang, Z. Song, K. Li, and S. Arora, InstaHide: Instance-hiding schemes for private distributed learning, in Proc. 37th Int. Conf. on Machine Learning, 2020, pp. 4507–4518.
[75] S. Srinivas and R. V. Babu, Data-free parameter pruning for deep neural networks, in Proc. 2015 British Machine Vision Conf., Swansea, UK, 2015, pp. 31.1–31.12.
[76] S. Han, J. Pool, J. Tran, and W. J. Dally, Learning both weights and connections for efficient neural networks, in Proc. 28th Int. Conf. on Neural Information Processing Systems, Montreal, Canada, 2015, pp. 1135–1143.
[77] W. Chen, J. T. Wilson, S. Tyree, K. Q. Weinberger, and Y. Chen, Compressing neural networks with the hashing trick, in Proc. 32nd Int. Conf. on Machine Learning, Lille, France, 2015, pp. 2285–2294.
[78] A. Rasmus, H. Valpola, M. Honkala, M. Berglund, and T. Raiko, Semi-supervised learning with ladder networks, in Proc. 28th Int. Conf. on Neural Information Processing Systems, Montreal, Canada, 2015, pp. 3546–3554.
[79] Y. Gong, L. Liu, M. Yang, and L. Bourdev, Compressing deep convolutional networks using vector quantization, arXiv preprint arXiv: 1412.6115, 2014.
[80] J. Wu, C. Leng, Y. Wang, Q. Hu, and J. Cheng, Quantized convolutional neural networks for mobile devices, in Proc. 2016 IEEE Conf. on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 2016, pp. 4820–4828.
[81] V. Vanhoucke, A. Senior, and M. Z. Mao, Improving the speed of neural networks on CPUs, http://static.googleusercontent.com/media/research.google.com/en//pubs/archive/37631.pdf, 2011.
[82] R. Rigamonti, A. Sironi, V. Lepetit, and P. Fua, Learning separable filters, in Proc. 2013 IEEE Conf. on Computer Vision and Pattern Recognition, Portland, OR, USA, 2013, pp. 2754–2761.
[83] E. Denton, W. Zaremba, J. Bruna, Y. LeCun, and R. Fergus, Exploiting linear structure within convolutional networks for efficient evaluation, in Proc. 27th Int. Conf. on Neural Information Processing Systems, Montreal, Canada, 2014, pp. 1269–1277.
[84] V. Lebedev, Y. Ganin, M. Rakhuba, I. Oseledets, and V. Lempitsky, Speeding-up convolutional neural networks using fine-tuned CP-decomposition, in Proc. 3rd Int. Conf. on Learning Representations, San Diego, CA, USA, 2015.
[85] S. Zhai, Y. Cheng, W. Lu, and Z. Zhang, Doubly convolutional neural networks, in Proc. 30th Int. Conf. on Neural Information Processing Systems, Barcelona, Spain, 2016, pp. 1090–1098.
[86] W. Shang, K. Sohn, D. Almeida, and H. Lee, Understanding and improving convolutional neural networks via concatenated rectified linear units, in Proc. 33rd Int. Conf. on Machine Learning, New York, NY, USA, 2016, pp. 2217–2225.
[87] T. Cohen and M. Welling, Group equivariant convolutional networks, in Proc. 33rd Int. Conf. on Machine Learning, New York, NY, USA, 2016, pp. 2990–2999.
[88] L. J. Ba and R. Caruana, Do deep nets really need to be deep? in Proc. 27th Int. Conf. on Neural Information Processing Systems, Montreal, Canada, 2014, pp. 2654–2662.
[89] G. Hinton, O. Vinyals, and J. Dean, Distilling the knowledge in a neural network, arXiv preprint arXiv: 1503.02531, 2015.
[90] A. Romero, N. Ballas, S. E. Kahou, A. Chassang, C. Gatta, and Y. Bengio, FitNets: Hints for thin deep nets, in Proc. 3rd Int. Conf. on Learning Representations, San Diego, CA, USA, 2015.
[91] X. Jiang, H. Wang, Y. Chen, Z. Wu, L. Wang, B. Zou, Y. Yang, Z. Cui, Y. Cai, T. Yu, et al., MNN: A universal and efficient inference engine, in Proc. Machine Learning and Systems 2020, Austin, TX, USA, 2020.
[92] Tencent/ncnn, https://github.com/Tencent/ncnn, 2017.
[93] T. Chen, T. Moreau, Z. Jiang, L. Zheng, E. Yan, H. Shen, M. Cowan, L. Wang, Y. Hu, L. Ceze, et al., TVM: An automated end-to-end optimizing compiler for deep learning, in Proc. 13th USENIX Symp. on Operating Systems Design and Implementation, Carlsbad, CA, USA, 2018, pp. 578–594.
[95] K. He, X. Zhang, S. Ren, and J. Sun, Deep residual learning for image recognition, in Proc. 2016 IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 2016, pp. 770–778.
[97] A. Gholami, Z. Yao, S. Kim, M. W. Mahoney, and K. Keutzer, AI and memory wall, https://medium.com/riselab/ai-and-memory-wall-2cb4265cb0b8, 2021.
[98] Y. Ma, Y. Du, L. Du, J. Lin, and Z. Wang, In-memory computing: The next-generation AI computing paradigm, in Proc. 2020 Great Lakes Symp. on VLSI, China, 2020, pp. 265–270.
[102] T. Yoo, H. Kim, Q. Chen, T. T. H. Kim, and B. Kim, A logic compatible 4T dual embedded DRAM array for in-memory computation of deep neural networks, in Proc. 2019 IEEE/ACM Int. Symp. on Low Power Electronics and Design (ISLPED), Lausanne, Switzerland, 2019, pp. 1–6.
[103] F. Merrikh Bayat, X. Guo, M. Klachko, N. Do, K. Likharev, and D. Strukov, Model-based high-precision tuning of NOR flash memory cells for analog computing applications, in Proc. 74th Annu. Device Research Conf. (DRC), Newark, DE, USA, 2016, pp. 1–2.
[104] J. F. Kang, P. Huang, R. Z. Han, Y. C. Xiang, X. L. Cui, and X. Y. Liu, Flash-based computing in-memory scheme for IoT, in Proc. IEEE 13th Int. Conf. on ASIC (ASICON), Chongqing, China, 2019, pp. 1–4.
[107] T. S. Moise, S. R. Summerfelt, H. McAdams, S. Aggarwal, K. R. Udayakumar, F. G. Celii, J. S. Martin, G. Xing, L. Hall, K. J. Taylor, et al., Demonstration of a 4 Mb, high density ferroelectric memory embedded within a 130 nm, 5 LM Cu/FSG logic process, in Proc. 2002 Int. Electron Devices Meeting, San Francisco, CA, USA, 2002, pp. 535–538.
[108] S. J. Ahn, Y. J. Song, C. W. Jeong, J. M. Shin, Y. Fai, Y. N. Hwang, S. H. Lee, K. C. Ryoo, S. Y. Lee, J. H. Park, et al., Highly manufacturable high density phase change memory of 64 Mb and beyond, in Proc. 2004 IEEE Int. Electron Devices Meeting, San Francisco, CA, USA, 2004, pp. 907–910.
[110] P. Chi, S. Li, C. Xu, T. Zhang, J. Zhao, Y. Liu, Y. Wang, and Y. Xie, PRIME: A novel processing-in-memory architecture for neural network computation in ReRAM-based main memory, in Proc. ACM/IEEE 43rd Annu. Int. Symp. on Computer Architecture, Seoul, Republic of Korea, 2016, pp. 27–39.
[117] M. Grieves, Digital twin: Manufacturing excellence through virtual factory replication, white paper, https://www.researchgate.net/publication/275211047_Digital_Twin_Manufacturing_Excellence_through_Virtual_Factory_Replication, 2014.
[118] E. Glaessgen and D. Stargel, The digital twin paradigm for future NASA and U.S. Air Force vehicles, in Proc. 53rd AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics and Materials Conf., Honolulu, HI, USA, 2012, p. 2012-1818.
[123] M. Dahnert, J. Hou, M. Nießner, and A. Dai, Panoptic 3D scene reconstruction from a single RGB image, in Proc. 35th Conf. on Neural Information Processing Systems, 2021, pp. 8282–8293.
[126] B. Curless and M. Levoy, A volumetric method for building complex models from range images, in Proc. 23rd Annu. Conf. on Computer Graphics and Interactive Techniques, New Orleans, LA, USA, 1996, pp. 303–312.
[127] R. A. Newcombe, S. Izadi, O. Hilliges, D. Molyneaux, D. Kim, A. J. Davison, P. Kohli, J. Shotton, S. Hodges, and A. Fitzgibbon, KinectFusion: Real-time dense surface mapping and tracking, in Proc. 10th IEEE Int. Symp. on Mixed and Augmented Reality, Basel, Switzerland, 2011, pp. 127–136.
[128] J. J. Park, P. Florence, J. Straub, R. Newcombe, and S. Lovegrove, DeepSDF: Learning continuous signed distance functions for shape representation, in Proc. 2019 IEEE/CVF Conf. on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 2019, pp. 165–174.
[129] L. Mescheder, M. Oechsle, M. Niemeyer, S. Nowozin, and A. Geiger, Occupancy networks: Learning 3D reconstruction in function space, in Proc. 2019 IEEE/CVF Conf. on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 2019, pp. 4455–4465.
[131] S. Peng, M. Niemeyer, L. M. Mescheder, M. Pollefeys, and A. Geiger, Convolutional occupancy networks, in Proc. 16th European Conf. on Computer Vision (ECCV), Glasgow, UK, 2020, pp. 523–540.
[134] 2022 Global networking trends report, https://www.cisco.com/c/en/us/solutions/enterprise-networks/2022-networking-report-preview.html, 2021.
[139] D. Bega, M. Gramaglia, M. Fiore, A. Banchs, and X. Costa-Perez, DeepCog: Cognitive network management in sliced 5G networks with deep learning, in Proc. 2019 IEEE Conf. on Computer Communications, Paris, France, 2019, pp. 280–288.
[145] R. Q. Shaddad, E. M. Saif, H. M. Saif, Z. Y. Mohammed, and A. H. Farhan, Channel estimation for intelligent reflecting surface in 6G wireless network via deep learning technique, in Proc. 1st Int. Conf. on Emerging Smart Technologies and Applications (eSmarTA), Sana'a, Yemen, 2021, pp. 1–5.
[146] T. Gruber, S. Cammerer, J. Hoydis, and S. ten Brink, On deep learning-based channel decoding, in Proc. 51st Annu. Conf. on Information Sciences and Systems (CISS), Baltimore, MD, USA, 2017, pp. 1–6.
[150] S. Nakamoto, Bitcoin: A peer-to-peer electronic cash system, https://bitcoin.org/bitcoin.pdf, 2008.
[158] H. Bao, H. He, Z. Liu, and Z. Liu, Research on information security situation awareness system based on big data and artificial intelligence technology, in Proc. 2019 Int. Conf. on Robots & Intelligent System (ICRIS), Haikou, China, 2019, pp. 318–322.
[163] N. M. N. Leite, E. T. Pereira, E. C. Gurjão, and L. R. Veloso, Deep convolutional autoencoder for EEG noise filtering, in Proc. 2018 IEEE Int. Conf. on Bioinformatics and Biomedicine (BIBM), Madrid, Spain, 2018, pp. 2605–2612.
[168] O. Rudovic, M. Zhang, B. Schuller, and R. W. Picard, Multi-modal active learning from human data: A deep reinforcement learning approach, in Proc. 2019 Int. Conf. on Multimodal Interaction, Suzhou, China, 2019, pp. 6–15.