The existing literature on device-to-device (D2D) architecture suffers from a dearth of analysis under imperfect channel conditions. Rigorous analyses of policy improvement and of network performance evaluation are needed. Accordingly, a two-stage transmit power control approach (named QSPCA) is proposed: first, a reinforcement Q-learning based power control technique; and second, a supervised learning based support vector machine (SVM) model. This model replaces the unified communication model of the conventional D2D setup with a distributed one, thereby requiring fewer resources than existing algorithms in terms of D2D throughput, transmit power, and signal-to-interference-plus-noise ratio. Results confirm that the QSPCA technique outperforms existing models in throughput by at least 15.31% and 19.5% compared with the SVM and Q-learning techniques, respectively. The customizability of the QSPCA technique opens up multiple avenues for industrial communication technologies in 5G networks, such as factory automation.
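The first stage of the proposed approach, Q-learning based transmit power control, can be sketched as follows. This is a minimal illustrative toy, not the paper's implementation: the SINR model, power levels, bin count, reward shaping (Shannon throughput minus a power penalty), and all hyperparameters are assumptions chosen only to show the tabular Q-learning structure.

```python
import math
import random

# Hypothetical sketch of stage one of QSPCA: tabular Q-learning over a
# discrete set of transmit power levels. States are quantized SINR bins.
# All constants below are illustrative assumptions, not values from the paper.

POWER_LEVELS = [0.05, 0.1, 0.2, 0.4]   # candidate transmit powers (W), assumed
N_STATES = 8                            # number of quantized SINR bins, assumed
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1       # learning rate, discount, exploration

def sinr(p_tx, gain=1.0, interference=0.05, noise=0.01):
    """Toy SINR model: received power over interference plus noise."""
    return p_tx * gain / (interference + noise)

def state_of(s):
    """Quantize an SINR value into one of N_STATES bins."""
    return min(N_STATES - 1, int(s))

def reward(p_tx, s):
    """Shannon throughput minus a transmit-power penalty (assumed shaping)."""
    return math.log2(1.0 + s) - 0.5 * p_tx

def train(episodes=2000, seed=0):
    rng = random.Random(seed)
    q = [[0.0] * len(POWER_LEVELS) for _ in range(N_STATES)]
    state = 0
    for _ in range(episodes):
        # Epsilon-greedy action selection over power levels.
        if rng.random() < EPS:
            a = rng.randrange(len(POWER_LEVELS))
        else:
            a = max(range(len(POWER_LEVELS)), key=lambda i: q[state][i])
        p = POWER_LEVELS[a]
        # Randomly perturbed interference stands in for imperfect channels.
        s = sinr(p, interference=0.05 + 0.02 * rng.random())
        r = reward(p, s)
        nxt = state_of(s)
        # Standard Q-learning update.
        q[state][a] += ALPHA * (r + GAMMA * max(q[nxt]) - q[state][a])
        state = nxt
    return q

q_table = train()
best_action = max(range(len(POWER_LEVELS)), key=lambda i: q_table[0][i])
```

In the full QSPCA pipeline as described in the abstract, the state-action experience gathered by such a learner would then feed the second, supervised SVM stage; that stage is omitted here.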
This work was supported by the Science and Engineering Research Board (SERB-DST), Govt. of India (No. EEQ/2019/000010).
This work is available under the CC BY-NC-ND 3.0 IGO license: https://creativecommons.org/licenses/by-nc-nd/3.0/igo/