QSPCA: A two-stage efficient power control approach in D2D communication for 5G networks

Saurabh Chandra¹, Prateek¹, Rohit Sharma¹, Rajeev Arya¹ (corresponding author), and Korhan Cengiz²
¹ Wireless Sensor Network Lab, Department of Electronics and Communication Engineering, National Institute of Technology Patna, Ashok Rajpath, Patna 800005, India
² Department of Electrical-Electronics Engineering, Trakya University, Edirne 22030, Turkey

Abstract

The existing literature on device-to-device (D2D) architecture suffers from a dearth of analysis under imperfect channel conditions, and rigorous analyses of policy improvement and network performance evaluation are needed. Accordingly, a two-stage transmit power control approach (named QSPCA) is proposed: first, a reinforcement Q-learning based power control technique; second, a supervised learning based support vector machine (SVM) model. This model replaces the unified communication model of the conventional D2D setup with a distributed one, thereby requiring fewer resources; performance is compared with existing algorithms in terms of D2D throughput, transmit power, and signal-to-interference-plus-noise ratio (SINR). Results confirm that the QSPCA technique outperforms existing models, improving throughput by at least 15.31% and 19.5% over standalone SVM and Q-learning techniques, respectively. The customizability of the QSPCA technique opens up multiple avenues for industrial communication technologies in 5G networks, such as factory automation.
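
To make the two-stage pipeline described above concrete, a minimal Python sketch follows. It is illustrative only: the toy channel model, reward shape, state quantization, and every parameter are assumptions for demonstration, not the authors' QSPCA implementation. Stage 1 runs tabular Q-learning to learn a transmit power policy; Stage 2 fits an SVM to the learned policy so that power decisions reduce to fast supervised inference.

import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

POWER_LEVELS = np.array([0.05, 0.1, 0.2, 0.4, 0.8])  # candidate transmit powers (W), assumed
N_STATES = 10                                         # quantized SINR states, assumed
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1                     # Q-learning hyperparameters, assumed

def step(state, action):
    # Toy channel: reward trades throughput, log2(1 + SINR), against a power
    # cost that grows with the congestion state. Purely illustrative.
    p = POWER_LEVELS[action]
    sinr = p / (0.1 + 0.05 * rng.random())
    reward = np.log2(1.0 + sinr) - 0.5 * p * (state + 1)
    next_state = min(int(sinr), N_STATES - 1)
    return next_state, reward

# Stage 1: tabular Q-learning over (SINR state, power level) pairs.
Q = np.zeros((N_STATES, len(POWER_LEVELS)))
state = 0
for _ in range(5000):
    if rng.random() < EPS:
        action = int(rng.integers(len(POWER_LEVELS)))  # explore
    else:
        action = int(Q[state].argmax())                # exploit
    next_state, reward = step(state, action)
    Q[state, action] += ALPHA * (reward + GAMMA * Q[next_state].max() - Q[state, action])
    state = next_state

# Stage 2: fit an SVM mapping each state to the power level Q-learning
# found best, yielding a fast supervised policy for deployment.
X = np.arange(N_STATES, dtype=float).reshape(-1, 1)
y = Q.argmax(axis=1)
policy = SVC(kernel="rbf").fit(X, y)
print("SVM-selected power level index per state:", policy.predict(X))

Here the SVM simply distills the Q-table into a classifier over quantized channel states; in the paper's setting, the supervised stage would instead be trained on data generated under the full D2D interference model.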

Keywords: machine learning, Internet of Things (IoT), power control, 5G, support vector machine (SVM), interference, device-to-device (D2D), Q-learning



Publication history

Received: 08 June 2021
Revised: 08 September 2021
Accepted: 29 September 2021
Published: 30 December 2021
Issue date: December 2021

Copyright

© All articles included in the journal are copyrighted by ITU and TUP.

Acknowledgements

This work was supported by the Science and Engineering Research Board (SERB-DST), Govt. of India (No. EEQ/2019/000010).

Rights and permissions

This work is available under the CC BY-NC-ND 3.0 IGO license: https://creativecommons.org/licenses/by-nc-nd/3.0/igo/
