Enhancing Workplace Safety: Personal Protective Equipment Detection
Corresponding author email: nkchienster@gmail.com

DOI: https://doi.org/10.54644/jte.2025.1637

Keywords: Convolutional Neural Networks (CNN), Deep Learning Architecture, Personal Protective Equipment (PPE), You Only Look Once (YOLO), Machine Learning

Abstract
Industries such as construction, cold food processing, and the chemical sector are particularly vulnerable to a range of potential hazards. Personal Protective Equipment (PPE) plays a critical role in safeguarding workers in these high-risk environments. However, ensuring the consistent use of PPE and adherence to established safety protocols is a complex task. This complexity arises from factors such as human error, negligence, and inadequate supervision. Traditional methods of monitoring PPE compliance typically involve manual inspections, which are not only labor-intensive but also have demonstrated limited effectiveness in ensuring consistent PPE use. To address these challenges, this study proposes using the YOLOv8 algorithm to achieve improved accuracy and suitability for a broader range of real-world working environments. In support of this approach, we have developed a new dataset named PPE-AYN, which includes five distinct classes (person, head, hat, glasses, and glove) and comprises 2,980 images. The YOLOv8 algorithm represents the latest advancement in the YOLO family of object detection models and is renowned for its rapid and precise detection capabilities. These characteristics make YOLOv8 particularly well-suited to PPE detection, offering a promising solution to enhance safety compliance in various industrial settings. By leveraging this technology, we aim to significantly improve the monitoring and enforcement of PPE usage, thereby reducing the risk of accidents and injuries in hazardous work environments.
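The compliance-monitoring idea in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation: it assumes a detector such as YOLOv8 has already produced bounding boxes for the head and hat classes, and the function names (`iou`, `flag_missing_hats`) and the 0.5 overlap threshold are hypothetical choices for illustration. A head with no sufficiently overlapping hat detection is flagged as a potential violation.

```python
def iou(a, b):
    """Intersection over Union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def flag_missing_hats(head_boxes, hat_boxes, thresh=0.5):
    """Return head boxes that have no hat detection overlapping above `thresh`."""
    return [h for h in head_boxes
            if all(iou(h, hat) < thresh for hat in hat_boxes)]

# Example: two detected heads, only one covered by a hat detection
heads = [(0, 0, 10, 10), (20, 0, 30, 10)]
hats = [(1, 1, 10, 10)]
violations = flag_missing_hats(heads, hats)  # the second head is flagged
```

In practice the boxes would come from the detector's per-frame output, and a real system would also smooth decisions over several frames to suppress spurious single-frame misses.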
References
“IIF Latest Numbers : U.S. Bureau of Labor Statistics.” Accessed: Oct. 31, 2023. [Online]. Available: https://www.bls.gov/iif/latest-numbers.htm
“Construction Statistics | NIOSH | CDC.” Accessed: Oct. 31, 2023. [Online]. Available: https://www.cdc.gov/niosh/construction/statistics.html
“Workers suffered 18,510 eye-related injuries and illnesses in 2020 : The Economics Daily: U.S. Bureau of Labor Statistics.” Accessed: Oct. 31, 2023. [Online]. Available: https://www.bls.gov/opub/ted/2023/workers-suffered-18510-eye-related-injuries-and-illnesses-in-2020.htm
D. Hardison, A. Dickerson, B. Sylcott, and K. Lee, “Evaluating the Effectiveness of Worker Safety Vests on Drivers’ Visual Attention,” in Construction Research Congress 2020, pp. 105–113, doi: 10.1061/9780784482872.012.
A. Hulme, N. Mills, and A. Gilchrist, “Industrial head injuries and the performance of helmets,” Sep. 1995.
M. Ferdous and S. M. M. Ahsan, “PPE detector: a YOLO-based architecture to detect personal protective equipment (PPE) for construction sites,” PeerJ Comput. Sci., vol. 8, p. e999, Jun. 2022, doi: 10.7717/peerj-cs.999.
Z. Xie, H. Liu, Z. Li, and Y. He, “A convolutional neural network based approach towards real-time hard hat detection,” in 2018 IEEE International Conference on Progress in Informatics and Computing (PIC), 2018, pp. 430–434, doi: 10.1109/PIC.2018.8706269.
M. I. B. Ahmed et al., “Personal Protective Equipment Detection: A Deep-Learning-Based Sustainable Approach,” Sustainability, vol. 15, no. 18, Art. no. 18, Jan. 2023, doi: 10.3390/su151813990.
Z. Wang, Y. Wu, L. Yang, A. Thirunavukarasu, C. Evison, and Y. Zhao, “Fast Personal Protective Equipment Detection for Real Construction Sites Using Deep Learning Approaches,” Sensors, vol. 21, no. 10, p. 3478, 2021, doi: 10.3390/s21103478.
J. Redmon, S. K. Divvala, R. B. Girshick, and A. Farhadi, “You Only Look Once: Unified, Real-Time Object Detection,” CoRR, vol. abs/1506.02640, 2015, [Online]. Available: http://arxiv.org/abs/1506.02640
J. Redmon and A. Farhadi, “YOLO9000: Better, Faster, Stronger,” in 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Jul. 2017, pp. 6517–6525, doi: 10.1109/CVPR.2017.690.
J. Redmon and A. Farhadi, “YOLOv3: An Incremental Improvement,” CoRR, vol. abs/1804.02767, 2018, Accessed: Oct. 31, 2023. [Online]. Available: http://arxiv.org/abs/1804.02767
A. Bochkovskiy, C. Y. Wang, and H. Y. M. Liao, “YOLOv4: Optimal Speed and Accuracy of Object Detection,” CoRR, vol. abs/2004.10934, 2020, Accessed: Oct. 31, 2023. [Online]. Available: https://arxiv.org/abs/2004.10934
“Comprehensive Guide to Ultralytics YOLOv5 - Ultralytics YOLOv8 Docs.” Accessed: Oct. 31, 2023. [Online]. Available: https://docs.ultralytics.com/yolov5/#citation
C. Li et al., “YOLOv6: A Single-Stage Object Detection Framework for Industrial Applications,” Sep. 07, 2022, doi: 10.48550/arXiv.2209.02976.
C. Y. Wang, A. Bochkovskiy, and H. Y. M. Liao, “YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors,” Jul. 06, 2022, doi: 10.48550/arXiv.2207.02696; published version doi: 10.1109/CVPR52729.2023.00721.
G. Jocher, A. Chaurasia, and J. Qiu, YOLO by Ultralytics. (Jan. 2023). Python. Accessed: Oct. 31, 2023. [Online]. Available: https://github.com/ultralytics/ultralytics
G. Wang, Y. Chen, P. An, H. Hong, J. Hu, and T. Huang, “UAV-YOLOv8: A Small-Object-Detection Model Based on Improved YOLOv8 for UAV Aerial Photography Scenarios,” Sensors, vol. 23, p. 7190, 2023, doi: 10.3390/s23167190.
K. He, X. Zhang, S. Ren, and J. Sun, “Spatial Pyramid Pooling in Deep Convolutional Networks for Visual Recognition,” CoRR, vol. abs/1406.4729, 2014, Accessed: Oct. 31, 2023. [Online]. Available: http://arxiv.org/abs/1406.4729
S. Liu, L. Qi, H. Qin, J. Shi, and J. Jia, “Path Aggregation Network for Instance Segmentation,” CoRR, vol. abs/1803.01534, 2018, Accessed: Oct. 31, 2023. [Online]. Available: http://arxiv.org/abs/1803.01534
X. Li et al., “Generalized Focal Loss: Learning Qualified and Distributed Bounding Boxes for Dense Object Detection,” CoRR, vol. abs/2006.04388, 2020, Accessed: Oct. 31, 2023. [Online]. Available: https://arxiv.org/abs/2006.04388
C. Feng, Y. Zhong, Y. Gao, M. R. Scott, and W. Huang, “TOOD: Task-aligned One-stage Object Detection,” in 2021 IEEE/CVF International Conference on Computer Vision (ICCV), Oct. 2021, pp. 3490–3499, doi: 10.1109/ICCV48922.2021.00349.
R. Padilla, W. L. Passos, T. L. B. Dias, S. L. Netto, and E. A. B. da Silva, “A Comparative Analysis of Object Detection Metrics with a Companion Open-Source Toolkit,” Electronics, vol. 10, no. 3, 2021, doi: 10.3390/electronics10030279.
H. Rezatofighi, N. Tsoi, J. Gwak, A. Sadeghian, I. Reid, and S. Savarese, “Generalized Intersection Over Union: A Metric and a Loss for Bounding Box Regression,” in 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Jun. 2019, pp. 658–666, doi: 10.1109/CVPR.2019.00075.
License

Copyright (c) 2025 Tạp chí Khoa học Giáo dục Kỹ Thuật (Journal of Technical Education Science)
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.
Copyright belongs to JTE.


