References (75)
[1]
D. Pandey, H. Wang, X. Yin, K. Wang, Y. Zhang, and J. Shen, Automatic breast lesion segmentation in phase preserved DCE-MRIs, Health Information Science and Systems, vol. 10, p. 9, 2022.
[2]
F. Zhang, Y. Wang, S. Liu, and H. Wang, Decision-based evasion attacks on tree ensemble classifiers, World Wide Web, vol. 23, no. 5, pp. 2957–2977, 2020.
[3]
M. Peng, J. Zhu, H. Wang, X. Li, Y. Zhang, X. Zhang, and G. Tian, Mining event-oriented topics in microblog stream with unsupervised multi-view hierarchical embedding, ACM Transactions on Knowledge Discovery from Data, vol. 12, no. 3, pp. 1–26, 2018.
[4]
J. Y. Li, Z. H. Zhan, H. Wang, and J. Zhang, Data-driven evolutionary algorithm with perturbation-based ensemble surrogates, IEEE Transactions on Cybernetics, vol. 51, no. 8, pp. 3925–3937, 2021.
[5]
T. Huang, Y. J. Gong, S. Kwong, H. Wang, and J. Zhang, A niching memetic algorithm for multi-solution traveling salesman problem, IEEE Transactions on Evolutionary Computation, vol. 24, no. 3, pp. 508–522, 2019.
[6]
Y. H. Zhang, Y. J. Gong, Y. Gao, H. Wang, and J. Zhang, Parameter-free Voronoi neighborhood for evolutionary multimodal optimization, IEEE Transactions on Evolutionary Computation, vol. 24, no. 2, pp. 335–349, 2020.
[7]
S. Siuly, O. Alcin, E. Kabir, A. Sengur, H. Wang, Y. Zhang, and F. Whittaker, A new framework for automatic detection of patients with mild cognitive impairment using resting-state EEG signals, IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 28, no. 9, pp. 1966–1976, 2020.
[8]
J. Y. Li, K. J. Du, Z. H. Zhan, H. Wang, and J. Zhang, Distributed differential evolution with adaptive resource allocation, IEEE Transactions on Cybernetics.
[9]
J. Yin, M. J. Tang, J. Cao, H. Wang, M. You, and Y. Lin, Vulnerability exploitation time prediction: An integrated framework for dynamic imbalanced learning, World Wide Web, vol. 25, pp. 401–423, 2021.
[10]
H. Wang, Y. Wang, T. Taleb, and X. Jiang, Editorial: Special issue on security and privacy in network computing, World Wide Web, vol. 23, pp. 951–957, 2019.
[11]
H. Wang, L. Sun, and E. Bertino, Building access control policy model for privacy preserving and testing policy conflicting problems, Journal of Computer and System Sciences, vol. 80, no. 8, pp. 1493–1503, 2014.
[12]
X. Sun, H. Wang, J. Li, and Y. Zhang, Injecting purpose and trust into data anonymisation, Computers & Security, vol. 30, no. 5, pp. 332–345, 2011.
[13]
B. McMahan, E. Moore, D. Ramage, S. Hampson, and B. A. y Arcas, Communication-efficient learning of deep networks from decentralized data, in Proc. 20th International Conference on Artificial Intelligence and Statistics, Fort Lauderdale, FL, USA, 2017, pp. 1273–1282.
[14]
J. Yin, M. Tang, J. Cao, M. You, H. Wang, and M. Alazab, Knowledge-driven cybersecurity intelligence: Software vulnerability co-exploitation behavior discovery, IEEE Transactions on Industrial Informatics.
[15]
X. Sun, H. Wang, J. Li, and J. Pei, Publishing anonymous survey rating data, Data Mining and Knowledge Discovery, vol. 23, pp. 379–406, 2011.
[16]
Y. F. Ge, M. Orlowska, J. Cao, H. Wang, and Y. Zhang, MDDE: Multitasking distributed differential evolution for privacy-preserving database fragmentation, The VLDB Journal, vol. 31, pp. 957–975, 2022.
[17]
Y. Qu, S. R. Pokhrel, S. Garg, L. Gao, and Y. Xiang, A blockchained federated learning framework for cognitive computing in industry 4.0 networks, IEEE Transactions on Industrial Informatics, vol. 17, no. 4, pp. 2964–2973, 2021.
[18]
Y. Qu, L. Gao, T. H. Luan, Y. Xiang, S. Yu, B. Li, and G. Zheng, Decentralized privacy using blockchain-enabled federated learning in fog computing, IEEE Internet of Things Journal, vol. 7, no. 6, pp. 5171–5183, 2020.
[19]
L. Sun, J. Ma, H. Wang, Y. Zhang, and J. Yong, Cloud service description model: An extension of USDL for cloud services, IEEE Transactions on Services Computing, vol. 11, no. 2, pp. 354–368, 2015.
[20]
K. Cheng, L. Wang, Y. Shen, H. Wang, Y. Wang, X. Jiang, and H. Zhong, Secure k-NN query on encrypted cloud data with multiple keys, IEEE Transactions on Big Data, vol. 7, no. 4, pp. 689–702, 2017.
[21]
Y. Ye, S. Li, F. Liu, Y. Tang, and W. Hu, EdgeFed: Optimized federated learning based on edge computing, IEEE Access, vol. 8, pp. 209191–209198, 2020.
[22]
X. Mo and J. Xu, Energy-efficient federated edge learning with joint communication and computation design, Journal of Communications and Information Networks, vol. 6, no. 2, pp. 110–124, 2021.
[23]
T. Zhou, X. Li, C. Pan, M. Zhou, and Y. Yao, Multi-server federated edge learning for low power consumption wireless resource allocation based on user QoE, Journal of Communications and Networks, vol. 23, no. 6, pp. 463–472, 2021.
[24]
J. Mills, J. Hu, and G. Min, Multi-task federated learning for personalised deep neural networks in edge computing, IEEE Transactions on Parallel and Distributed Systems, vol. 33, no. 3, pp. 630–641, 2022.
[25]
S. Yu, X. Chen, Z. Zhou, X. Gong, and D. Wu, When deep reinforcement learning meets federated learning: Intelligent multitimescale resource management for multiaccess edge computing in 5G ultradense network, IEEE Internet of Things Journal, vol. 8, no. 4, pp. 2238–2251, 2021.
[26]
Z. Zhu, S. Wan, P. Fan, and K. B. Letaief, Federated multiagent actor–critic learning for age sensitive mobile-edge computing, IEEE Internet of Things Journal, vol. 9, no. 2, pp. 1053–1067, 2022.
[27]
S. R. Pandey, M. N. H. Nguyen, T. N. Dang, N. H. Tran, K. Thar, Z. Han, and C. S. Hong, Edge-assisted democratized learning toward federated analytics, IEEE Internet of Things Journal, vol. 9, no. 1, pp. 572–588, 2022.
[28]
W. Y. B. Lim, N. C. Luong, D. T. Hoang, Y. Jiao, Y. -C. Liang, Q. Yang, D. Niyato, and C. Miao, Federated learning in mobile edge networks: A comprehensive survey, IEEE Communications Surveys & Tutorials, vol. 22, no. 3, pp. 2031–2063, 2020.
[29]
L. Gao, T. H. Luan, B. Gu, Y. Qu, and Y. Xiang, An introduction to edge computing, in Privacy-Preserving in Edge Computing, L. Gao, T. H. Luan, B. Gu, Y. Qu, and Y. Xiang, eds. Singapore: Springer, 2021, pp. 1–14.
[30]
L. Gao, T. H. Luan, B. Gu, Y. Qu, and Y. Xiang, Blockchain based decentralized privacy preserving in edge computing, in Privacy-Preserving in Edge Computing, L. Gao, T. H. Luan, B. Gu, Y. Qu, and Y. Xiang, eds. Singapore: Springer, 2021, pp. 83–109.
[31]
S. Wang, T. Tuor, T. Salonidis, K. K. Leung, C. Makaya, T. He, and K. Chan, Adaptive federated learning in resource constrained edge computing systems, IEEE Journal on Selected Areas in Communications, vol. 37, no. 6, pp. 1205–1221, 2019.
[32]
M. E. Kabir, H. Wang, and E. Bertino, A role-involved purpose-based access control model, Information Systems Frontiers, vol. 14, pp. 809–822, 2012.
[33]
H. Wang, Y. Zhang, J. Cao, and V. Varadharajan, Achieving secure and flexible M-services through tickets, IEEE Transactions on Systems, Man, and Cybernetics—Part A: Systems and Humans, vol. 33, no. 6, pp. 697–708, 2003.
[34]
M. E. Kabir, A. N. Mahmood, H. Wang, and A. K. Mustafa, Microaggregation sorting framework for k-anonymity statistical disclosure control in cloud computing, IEEE Transactions on Cloud Computing, vol. 8, no. 2, pp. 408–417, 2015.
[35]
B. Gu, L. Gao, X. Wang, Y. Qu, J. Jin, and S. Yu, Privacy on the edge: Customizable privacy-preserving context sharing in hierarchical edge computing, IEEE Transactions on Network Science and Engineering, vol. 7, no. 4, pp. 2298–2309, 2020.
[36]
M. M. A. Aziz, M. M. Anjum, N. Mohammed, and X. Jiang, Generalized genomic data sharing for differentially private federated learning, Journal of Biomedical Informatics, vol. 132, p. 104113, 2022.
[37]
H. Wang, Z. Kaplan, D. Niu, and B. Li, Optimizing federated learning on non-IID data with reinforcement learning, in Proc. IEEE INFOCOM 2020 - IEEE Conference on Computer Communications, Toronto, Canada, 2020, pp. 1698–1707.
[38]
M. A. Al-Garadi, A. Mohamed, A. K. Al-Ali, X. Du, I. Ali, and M. Guizani, A survey of machine and deep learning methods for internet of things (IoT) security, IEEE Communications Surveys & Tutorials, vol. 22, no. 3, pp. 1646–1685, 2020.
[39]
Z. Tang, H. Hu, and C. Xu, A federated learning method for network intrusion detection, Concurrency and Computation: Practice and Experience, vol. 34, no. 10, p. e6812, 2022.
[40]
J. Mills, J. Hu, and G. Min, Client-side optimisation strategies for communication-efficient federated learning, IEEE Communications Magazine, vol. 60, no. 7, pp. 60–66, 2022.
[41]
Q. Qi and X. Chen, Robust design of federated learning for edge-intelligent networks, IEEE Transactions on Communications, vol. 70, no. 7, pp. 4469–4481, 2022.
[42]
P. Tian, W. Liao, W. Yu, and E. Blasch, WSCC: A weight similarity based client clustering approach for non-IID federated learning, IEEE Internet of Things Journal, vol. 9, no. 20, pp. 20243–20256, 2022.
[43]
Z. Ji, L. Chen, N. Zhao, Y. Chen, G. Wei, and F. R. Yu, Computation offloading for edge-assisted federated learning, IEEE Transactions on Vehicular Technology, vol. 70, no. 9, pp. 9330–9344, 2021.
[44]
H. Liu, X. Yuan, and Y. J. A. Zhang, Reconfigurable intelligent surface enabled federated learning: A unified communication-learning design approach, IEEE Transactions on Wireless Communications, vol. 20, no. 11, pp. 7595–7609, 2021.
[45]
Z. Lin, H. Liu, and Y. J. A. Zhang, Relay-assisted cooperative federated learning, IEEE Transactions on Wireless Communications, vol. 21, no. 9, pp. 7148–7164, 2022.
[46]
Z. Zhou, Y. Li, X. Ren, and S. Yang, Towards efficient and stable k-asynchronous federated learning with unbounded stale gradients on non-IID data, IEEE Transactions on Parallel and Distributed Systems, vol. 33, no. 12, pp. 3291–3305, 2022.
[47]
T. Nishio and R. Yonetani, Client selection for federated learning with heterogeneous resources in mobile edge, in Proc. 2019 IEEE International Conference on Communications (ICC), Shanghai, China, 2019, pp. 1–7.
[48]
T. Li, A. K. Sahu, M. Zaheer, M. Sanjabi, A. Talwalkar, and V. Smith, Federated optimization in heterogeneous networks, arXiv preprint arXiv: 1812.06127, 2018.
[49]
Y. -S. Jeon, M. M. Amiri, J. Li, and H. V. Poor, A compressive sensing approach for federated learning over massive MIMO communication systems, IEEE Transactions on Wireless Communications, vol. 20, no. 3, pp. 1990–2004, 2021.
[50]
Z. Chai, A. Ali, S. Zawad, S. Truex, A. Anwar, N. Baracaldo, Y. Zhou, H. Ludwig, F. Yan, and Y. Cheng, TiFL: A tier-based federated learning system, in Proc. 29th International Symposium on High-Performance Parallel and Distributed Computing, Stockholm, Sweden, 2020, pp. 125–136.
[51]
Y. Liu, Y. Qu, C. Xu, Z. Hao, and B. Gu, Blockchain-enabled asynchronous federated learning in edge computing, Sensors, vol. 21, no. 10, p. 3335, 2021.
[52]
J. Chen, X. Pan, R. Monga, S. Bengio, and R. Jozefowicz, Revisiting distributed synchronous SGD, arXiv preprint arXiv: 1604.00981, 2016.
[53]
N. Ferdinand, H. Al-Lawati, S. C. Draper, and M. Nokleby, Anytime minibatch: Exploiting stragglers in online distributed optimization, arXiv preprint arXiv: 2006.05752, 2020.
[54]
R. Tandon, Q. Lei, A. G. Dimakis, and N. Karampatziakis, Gradient coding: Avoiding stragglers in distributed learning, in Proc. 34th International Conference on Machine Learning, Sydney, Australia, 2017, pp. 3368–3376.
[55]
E. Yang, D. K. Kang, and C. H. Youn, BOA: Batch orchestration algorithm for straggler mitigation of distributed DL training in heterogeneous GPU cluster, The Journal of Supercomputing, vol. 76, pp. 47–67, 2020.
[56]
Q. Zhou, S. Guo, H. Lu, L. Li, M. Guo, Y. Sun, and K. Wang, Falcon: Addressing stragglers in heterogeneous parameter server via multiple parallelism, IEEE Transactions on Computers, vol. 70, no. 1, pp. 139–155, 2020.
[57]
R. Bitar, M. Wootters, and S. E. Rouayheb, Stochastic gradient coding for straggler mitigation in distributed learning, IEEE Journal on Selected Areas in Information Theory, vol. 1, no. 1, pp. 277–291, 2020.
[58]
S. Prakash, S. Dhakal, M. R. Akdeniz, Y. Yona, S. Talwar, S. Avestimehr, and N. Himayat, Coded computing for low-latency federated learning over wireless edge networks, IEEE Journal on Selected Areas in Communications, vol. 39, no. 1, pp. 233–250, 2020.
[59]
W. Wu, L. He, W. Lin, and R. Mao, Accelerating federated learning over reliability-agnostic clients in mobile edge computing systems, IEEE Transactions on Parallel and Distributed Systems, vol. 32, no. 7, pp. 1539–1551, 2020.
[60]
Z. Li, H. Zhou, T. Zhou, H. Yu, Z. Xu, and G. Sun, ESync: Accelerating intra-domain federated learning in heterogeneous data centers, IEEE Transactions on Services Computing, vol. 15, no. 4, pp. 2261–2274, 2020.
[61]
J. He, S. Guo, M. Li, and Y. Zhu, AceFL: Federated learning accelerating in 6G-enabled mobile edge computing networks, IEEE Transactions on Network Science and Engineering.
[62]
C. You, D. Feng, K. Guo, H. H. Yang, and T. Q. S. Quek, Semi-synchronous personalized federated learning over mobile edge networks, arXiv preprint arXiv: 2209.13115, 2022.
[63]
T. Q. Dinh, D. N. Nguyen, D. T. Hoang, T. V. Pham, and E. Dutkiewicz, In-network computation for large-scale federated learning over wireless edge networks, IEEE Transactions on Mobile Computing.
[64]
S. Singh, R. Sulthana, T. Shewale, V. Chamola, A. Benslimane, and B. Sikdar, Machine-learning-assisted security and privacy provisioning for edge computing: A survey, IEEE Internet of Things Journal, vol. 9, no. 1, pp. 236–260, 2022.
[65]
X. Wang, Y. Han, V. C. M. Leung, D. Niyato, X. Yan, and X. Chen, Convergence of edge computing and deep learning: A comprehensive survey, IEEE Communications Surveys & Tutorials, vol. 22, no. 2, pp. 869–904, 2020.
[66]
J. Konečnỳ, H. B. McMahan, F. X. Yu, P. Richtárik, A. T. Suresh, and D. Bacon, Federated learning: Strategies for improving communication efficiency, arXiv preprint arXiv: 1610.05492, 2016.
[67]
Y. Zhang, H. Qu, D. Metaxas, and C. Chen, Local regularizer improves generalization, Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, no. 4, pp. 6861–6868, 2020.
[68]
T. Li, A. K. Sahu, A. Talwalkar, and V. Smith, Federated learning: Challenges, methods, and future directions, IEEE Signal Processing Magazine, vol. 37, no. 3, pp. 50–60, 2020.
[69]
D. Yin, A. Pananjady, M. Lam, D. Papailiopoulos, K. Ramchandran, and P. Bartlett, Gradient diversity: A key ingredient for scalable distributed learning, in Proc. Twenty-First International Conference on Artificial Intelligence and Statistics, Playa Blanca, Spain, 2018, pp. 1998–2007.
[70]
H. T. Nguyen, V. Sehwag, S. Hosseinalipour, C. G. Brinton, M. Chiang, and H. V. Poor, Fast-convergent federated learning, IEEE Journal on Selected Areas in Communications, vol. 39, no. 1, pp. 201–218, 2020.
[71]
S. Wang, T. Tuor, T. Salonidis, K. K. Leung, C. Makaya, T. He, and K. Chan, Adaptive federated learning in resource constrained edge computing systems, IEEE Journal on Selected Areas in Communications, vol. 37, no. 6, pp. 1205–1221, 2019.
[72]
Y. Zhao, M. Li, L. Lai, N. Suda, D. Civin, and V. Chandra, Federated learning with non-IID data, arXiv preprint arXiv: 1806.00582, 2018.
[73]
G. Qu and N. Li, Accelerated distributed Nesterov gradient descent, IEEE Transactions on Automatic Control, vol. 65, no. 6, pp. 2566–2581, 2019.
[74]
R. J. Tibshirani, The lasso problem and uniqueness, Electronic Journal of Statistics, vol. 7, pp. 1456–1490, 2013.
[75]
A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. Lin, N. Gimelshein, L. Antiga, et al., PyTorch: An imperative style, high-performance deep learning library, in Proc. 33rd International Conference on Neural Information Processing Systems, Vancouver, Canada, 2019, pp. 8026–8037.