Energy-Efficient and QoS-Aware Routing in Wireless Sensor Networks Using Deep Q-Learning With Dynamic Clustering

Authors

Corresponding author email:

thept@hcmute.edu.vn

DOI:

https://doi.org/10.54644/jte.2026.2068

Keywords:

Wireless Sensor Networks, Deep Reinforcement Learning, DQN, QoS Routing, Congestion Control, Clustering

Abstract

Wireless Sensor Networks (WSNs) face significant challenges in balancing limited energy resources against strict Quality of Service (QoS) requirements, especially in dense deployments with dynamic traffic patterns. Traditional routing protocols rely on static heuristics that cannot adapt to evolving network conditions such as heterogeneous energy distribution, traffic fluctuations, and topology changes. This paper presents PSR-DRL+, an adaptive routing protocol that combines Deep Q-Networks (DQN) with dynamic clustering based on node energy states and spatial distribution. The protocol employs a multi-objective reward function that simultaneously optimizes energy consumption, end-to-end delay, queue occupancy, and routing distance, enabling learning agents to balance network lifetime against QoS guarantees. Simulations conducted in MATLAB on a 100-node scenario show that PSR-DRL+ extends the time until the first node dies to 2,171 seconds, a 73.6% improvement over RLBEEP. Additionally, it maintains a packet delivery ratio above 95% even under heavy traffic loads. These results validate congestion-aware deep reinforcement learning as a viable framework for next-generation energy-constrained IoT deployments.
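The multi-objective reward described above can be sketched as a weighted combination of the four routing criteria. This is a hypothetical illustration only: the weights, normalization, and field names below are assumptions, not the authors' actual PSR-DRL+ formulation.

```python
# Hypothetical sketch of a multi-objective routing reward combining
# energy, delay, queue occupancy, and distance, as described in the
# abstract. Weights and state fields are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class LinkState:
    residual_energy: float  # next-hop remaining energy, normalized to [0, 1]
    delay: float            # estimated per-hop delay, normalized to [0, 1]
    queue_occupancy: float  # next-hop buffer fill level in [0, 1]
    distance: float         # hop distance normalized by radio range, in [0, 1]


def reward(s: LinkState,
           w_energy: float = 0.4, w_delay: float = 0.2,
           w_queue: float = 0.2, w_dist: float = 0.2) -> float:
    """Reward energy-rich next hops; penalize delay, congestion, and distance."""
    return (w_energy * s.residual_energy
            - w_delay * s.delay
            - w_queue * s.queue_occupancy
            - w_dist * s.distance)


# An energy-rich, lightly loaded neighbor scores higher than a
# depleted, congested one, steering the DQN agent toward hops that
# preserve lifetime while respecting QoS.
good = LinkState(residual_energy=0.9, delay=0.1, queue_occupancy=0.1, distance=0.3)
bad = LinkState(residual_energy=0.2, delay=0.6, queue_occupancy=0.8, distance=0.9)
print(reward(good) > reward(bad))  # True
```

In a DQN setting, a scalar reward of this shape lets a single Q-network trade the four objectives off against each other, with the weights controlling how aggressively the agent favors lifetime over latency.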


Author Biographies

Nguyen Phuong Thinh, Ho Chi Minh City University of Technology and Engineering, Vietnam

Nguyen Phuong Thinh received the B.Eng. degree in Electronics and Telecommunications Engineering from Ton Duc Thang University, Vietnam, in 2024. Since 2025, he has been pursuing the M.Sc. degree at the Ho Chi Minh City University of Technology and Engineering (HCM-UTE) (formerly Ho Chi Minh City University of Technology and Education), Vietnam. His research interests include wireless sensor networks (WSNs), energy-efficient routing, deep reinforcement learning, and intelligent network optimization.

Email: 2531313@student.hcmute.edu.vn ORCID:  https://orcid.org/0009-0006-4836-8156

Phan Thi The, Ho Chi Minh City University of Technology and Engineering, Vietnam

Phan Thi The was born in Vietnam in 1982. She received the M.Sc. degree in Data Transmission and Networks from the Posts and Telecommunications Institute of Technology (PTIT), Vietnam, in 2012, and the Ph.D. degree in Information Systems from the same institute in 2022. She is currently a lecturer in the Faculty of Information Technology, Ho Chi Minh City University of Technology and Engineering (formerly Ho Chi Minh City University of Technology and Education), Vietnam. Her research interests include WSNs, artificial intelligence, machine learning, and data mining.

Email: thept@hcmute.edu.vn. ORCID:  https://orcid.org/0009-0004-0251-5152

Nguyen Thanh Son, Ho Chi Minh City University of Technology and Engineering, Vietnam

Nguyen Thanh Son is a lecturer in the Information Systems Division, Faculty of Information Technology, Ho Chi Minh City University of Technology and Engineering (formerly Ho Chi Minh City University of Technology and Education). He received the Ph.D. degree from the University of Technology, Ho Chi Minh City, Vietnam. His research interests include artificial intelligence, machine learning, data mining, and time series analysis.

Email: sonnt@hcmute.edu.vn. ORCID:  https://orcid.org/0000-0001-9414-3456

References

I. F. Akyildiz, W. Su, Y. Sankarasubramaniam, and E. Cayirci, “Wireless sensor networks: A survey,” Computer Networks, vol. 38, no. 4, pp. 393–422, Mar. 2002. DOI: https://doi.org/10.1016/S1389-1286(01)00302-4

K. Sohraby, D. Minoli, and T. Znati, Wireless Sensor Networks: Technology, Protocols, and Applications. Hoboken, NJ, USA: John Wiley & Sons, 2007. DOI: https://doi.org/10.1002/047011276X

W. R. Heinzelman, A. Chandrakasan, and H. Balakrishnan, “Energy-efficient communication protocol for wireless microsensor networks,” in Proc. 33rd Annu. Hawaii Int. Conf. System Sciences (HICSS), Maui, HI, USA, 2000, pp. 1–10. DOI: https://doi.org/10.1109/HICSS.2000.926982

O. Younis and S. Fahmy, “HEED: A hybrid, energy-efficient, distributed clustering approach for ad hoc sensor networks,” IEEE Trans. Mobile Comput., vol. 3, no. 4, pp. 366–379, Oct.–Dec. 2004. DOI: https://doi.org/10.1109/TMC.2004.41

K. Arulkumaran, M. P. Deisenroth, M. Brundage, and A. A. Bharath, “Deep reinforcement learning: A brief survey,” IEEE Signal Process. Mag., vol. 34, no. 6, pp. 26–38, Nov. 2017. DOI: https://doi.org/10.1109/MSP.2017.2743240

R. S. Sutton and A. G. Barto, Reinforcement Learning: An Introduction, 2nd ed. Cambridge, MA, USA: MIT Press, 2018.

V. Mnih et al., “Playing Atari with deep reinforcement learning,” arXiv:1312.5602, 2013.

M. F. I. Sezar and M. Rashedunnabi, “Power saving routing for wireless sensor network using deep reinforcement learning,” 2025.

A. Abadi et al., “RLBEEP: Reinforcement-learning-based energy-efficient control and routing protocol for wireless sensor networks,” Wireless Pers. Commun., 2022. DOI: https://doi.org/10.1109/ACCESS.2022.3167058

P. M. Mutombo, “EER-RL: Energy-efficient routing protocol using reinforcement learning for WSN,” Int. J. Eng. Res. Technol. (IJERT), 2021. DOI: https://doi.org/10.1155/2021/5589145

W. Guo, B. Zhang, G. Chen, and X. Wang, “Reinforcement learning-based routing for energy harvesting wireless sensor networks,” IEEE Access, vol. 7, pp. 169600–169613, 2019.

T. S. Pradeep and S. P. Kumar, “A survey on sleep schedule in wireless sensor networks,” Int. J. Eng. Res. Technol. (IJERT), 2013.

J. Zheng, “A novel sleep scheduling algorithm for wireless sensor networks,” 2014.

S. Buzura, B. Iancu, and V. Dadarlat, “Optimizations for energy efficiency in software-defined wireless sensor networks,” Sensors, vol. 20, no. 17, art. no. 4779, 2020. DOI: https://doi.org/10.3390/s20174779

N. Kapileswar, J. Simon, and P. Sankaranarayanan, “Energy-efficient routing in wireless sensor networks using deep Q-learning and adaptive threshold-based clustering,” in Proc. 7th Int. Conf. Intelligent Sustainable Systems (ICISS), 2025. DOI: https://doi.org/10.1109/ICISS63372.2025.11076402

N. Pantazis and D. D. Vergados, “A survey on power control issues in wireless sensor networks,” IEEE Commun. Surveys Tuts., vol. 9, no. 4, pp. 86–107, 2007. DOI: https://doi.org/10.1109/COMST.2007.4444752

T. P. Lillicrap et al., “Continuous control with deep reinforcement learning,” in Proc. Int. Conf. Learning Representations (ICLR), 2016.

X. Liu, J. Zhang, and X. Zhang, “Deep reinforcement learning-based dynamic routing for wireless sensor networks,” Ad Hoc Netw., vol. 106, art. no. 102213, 2020.

M. A. Al-Kababji and A. E. Al-Fayoumi, “An energy-efficient routing protocol for wireless sensor networks based on deep reinforcement learning,” IEEE Access, vol. 9, pp. 136773–136787, 2021.

S. Sharma and R. Kumar, “Q-learning-based energy-efficient routing protocols for wireless sensor networks: A survey,” Wireless Pers. Commun., vol. 116, pp. 2963–2989, 2021.

Z. Sun, M. Wei, Z. Zhang, and G. Qu, “Self-adaptive routing for wireless sensor networks based on deep reinforcement learning,” IEEE Sensors J., vol. 20, no. 19, pp. 11667–11677, Oct. 2020. DOI: https://doi.org/10.1109/JSEN.2019.2925719

F. Tang, H. Zhang, and L. T. Yang, “Multipath routing for congestion control in wireless sensor networks based on deep reinforcement learning,” IEEE Internet Things J., vol. 8, no. 20, pp. 15306–15316, Oct. 2021.

Y. Sun, L. Liu, and G. Wang, “Delay-constrained energy-efficient routing in wireless sensor networks using reinforcement learning,” Computer Networks, vol. 183, art. no. 107530, 2020.

P. Singh and V. K. Sharma, “Multi-objective optimization for energy-efficient routing in WSNs using hybrid approach,” Wireless Netw., vol. 30, no. 1, pp. 45–62, 2024. DOI: https://doi.org/10.1007/s11276-024-03686-5

K. Dev, P. K. Singh, and S. Tanwar, “Deep reinforcement learning for QoS-aware routing in software-defined wireless sensor networks,” IEEE Trans. Netw. Service Manag., vol. 19, no. 2, pp. 1437–1449, Jun. 2022.

T. M. Behera, S. K. Mohapatra, and U. C. Samal, “I-LEACH: An improved LEACH protocol for WSNs,” Procedia Comput. Sci., vol. 171, pp. 1651–1660, 2020.

A. Rovetta, X. Masip-Bruin, and G. J. Navaridas, “A comprehensive survey on reinforcement learning for routing in IoT networks,” Computer Networks, vol. 199, art. no. 108463, 2021. DOI: https://doi.org/10.1016/j.comnet.2021.108463

L. Q. Jing and Y. D. Zhang, “Review of routing protocols for wireless sensor networks based on machine learning,” Artif. Intell. Rev., vol. 56, pp. 123–165, 2023.

S. B. Othman, A. Bahri, and A. Yahya, “Energy-efficient clustering algorithm for WSN based on reinforcement learning,” IET Commun., vol. 16, no. 5, pp. 504–514, 2022.

H. Zhang, X. Li, and J. Yan, “Digital twin-driven deep reinforcement learning for routing in industrial wireless sensor networks,” IEEE Trans. Ind. Informat., vol. 19, no. 2, pp. 1324–1334, Feb. 2023.


Published

2026-02-28

How to Cite

[1]
Nguyen Phuong Thinh, Phan Thi The, and Nguyen Thanh Son, “Energy-Efficient and QoS-Aware Routing in Wireless Sensor Networks Using Deep Q-Learning With Dynamic Clustering”, JTE, vol. 21, no. 01, pp. 58–70, Feb. 2026.

Issue

Section

Research Articles
