https://www.jte.edu.vn/index.php/jte/issue/feed Journal of Technical Education Science 2026-03-02T09:50:42+07:00 Journal Secretariat jte@hcmute.edu.vn Open Journal Systems <div class="row"> <div class="col-4"> <table style="width: 100%; border-collapse: collapse; height: 128px;"> <tbody> <tr> <td style="width: 30%; vertical-align: top;"> <p><img style="border: solid 1px black;" src="https://jte.edu.vn/public/journals/1/journalThumbnail_en_US.jpg" alt="" width="238" height="333" /></p> </td> <td style="width: 2.57732%;"> </td> <td style="width: 2.63852%;"> </td> <td style="width: 70%; vertical-align: top;"> <p><span style="font-weight: 400;"><strong>Journal of Technical Education Science (JTE), </strong>under Ho Chi Minh City University of Technology and Engineering, is a trimonthly, double-blind peer-reviewed, open-access, multidisciplinary journal dedicated to publishing high-quality original research articles and review articles in all areas of the fundamental, educational, technological, and engineering sciences. Papers published by the journal aim to represent important advances of significance to specialists within each field. </span><span style="font-weight: 400;">JTE published its first volume in August 2006. </span><span style="font-weight: 400;">Since 2021, all issues have been registered in the CrossRef system with Digital Object Identifier (DOI) prefix 10.54644. 
(<a href="https://jte.edu.vn/index.php/jte/about">More info here</a>)</span></p> <p><strong>P-ISSN: <a title="2615-9740" href="https://portal.issn.org/resource/ISSN/2615-9740" target="_blank" rel="noopener">2615-9740</a> </strong>(English version)<br /><strong>P-ISSN: <a title="1859-1272" href="https://portal.issn.org/resource/ISSN/1859-1272" target="_blank" rel="noopener">1859-1272</a> </strong>(Vietnamese version)<br /><strong>DOI: 10.54644/jte.2026.xxxx</strong></p> </td> </tr> </tbody> </table> <h2>Aims and scope</h2> <p>The Journal of Technical Education Science (JTE) strives to disseminate scientific research conducted in the fields of science and engineering at both national and international levels to scientists and the public. We welcome original research articles across various disciplines, including the fundamental, educational, technological, and engineering sciences. These articles should present theoretical and experimental research outputs and must not have been previously published in other journals.</p> <p>The JTE publishes articles with the focus and scope of the fields of Maths; Physics; Chemistry; Mechanics; Civil and Construction Engineering; Mechanical Engineering; Vehicle Engineering; Energy Engineering and Technology; Information Technology; Electrical and Electronics Engineering; Automation and Control Engineering; Food Science and Technology; Chemical Engineering and Technology; Environmental Science and Technology; Psychology; Educational Management; Teaching Methods; Vocational Education.</p> <h2>Publication Frequency</h2> <p>Starting from May 2025, the JTE publishes its online versions trimonthly (8 issues per year: 4 Vietnamese issues and 4 English issues) at the end of February, May, August, and November. Additionally, the journal may consider publishing special issues (SIs) during these periods to attract articles on emerging or trending topics. 
Articles that have been accepted for publication may be published online as soon as the copyediting, typesetting, and proofreading processes have been completed. These articles are final and fully citable.</p> <h2>Article processing charge </h2> <p>The journal does not charge submission fees. Only accepted articles are subject to a publication fee of 1,000,000 VND (40 USD) per article for disciplines recognized by the Vietnamese National Council for Professorship Titles, or 500,000 VND (20 USD) per article for other disciplines. For more detailed information on the publication fee and payment, please see <a href="https://jte.edu.vn/index.php/jte/publication-fee"><strong>HERE</strong></a>.</p> </div> </div> https://www.jte.edu.vn/index.php/jte/article/view/1758 An Embedded System With YOLOv5 for Automated Drug Delivery System 2025-09-17T11:22:05+07:00 Truc-Ly Le lethitrucly2806@gmail.com Phuc-Hau Nguyen hauflo2003@gmail.com Thien-Nhan Mai maithiennhan29@gmail.com Quoc-Kien Lam lamquockien.2805@gmail.com Song-Toan Tran tstoan1512@tvu.edu.vn <p>The integration of technology into pharmaceutical operations has led to the development of automated drug delivery systems, bringing numerous benefits such as reducing medication errors and improving patient satisfaction. With continuing advancements in technology, automated drug delivery systems have huge growth potential. Their deployment can significantly improve healthcare services and drive the development of the pharmaceutical industry. In this study, an embedded system on a Raspberry Pi integrated with the YOLOv5 deep learning model and a hardware system controlled by a Mitsubishi FX5U Programmable Logic Controller (PLC) is proposed for a drug dispensing system. Drug vials are collected and their images analyzed by YOLOv5, and a proposed cutting-line position determination algorithm identifies the necessary cutting positions. These positions are communicated to the PLC, which controls the cutting system accordingly. 
The trained YOLOv5 model achieved an accuracy of over 99% for basic drug types. The cutting-path determination algorithm supplies the correct cutting positions to the cutting system via the PLC. The research results contribute to the construction and development of automated drug dispensing devices and systems.</p> 2026-02-28T00:00:00+07:00 Copyright (c) 2025 Journal of Technical Education Science https://www.jte.edu.vn/index.php/jte/article/view/1756 Optimizing Binary Neural Network for Resource-Constrained Edge Devices in IoT Applications 2025-10-01T09:57:50+07:00 Van Minh Nguyen minhngv@hcmute.edu.vn Tien Tu Ngo ttn.twenty.oh.two@gmail.com Tien Dung Tran dungjan28th@gmail.com Minh Tam Nguyen tamnm@hcmute.edu.vn Minh Huan Vo huanvm@hcmute.edu.vn <p>The implementation of artificial intelligence models on edge devices is increasingly popular, bringing many benefits: reduced latency, effective use of bandwidth, improved data security, enhanced privacy, and lower costs for users. However, this work poses many challenges in terms of accuracy, processing speed, hardware resources, and model size on devices with limited hardware. The Binary Neural Network (BNN) is a potential solution that reduces resource requirements by using only 1 bit for quantization. In this study, the BNN is optimized by binary-quantizing both weights and activations and replacing multiplication with XNOR-popcount operations. The results show that the optimized BNN has a smaller memory footprint when deployed on hardware with limited computational resources and requires less computation time than a conventional BNN, executing faster as the network architecture becomes less complex, while maintaining acceptable accuracy on the MNIST and Fashion-MNIST datasets. 
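The XNOR-popcount multiplication mentioned above can be sketched as follows. This is a minimal plain-Python illustration of the general technique, not the authors' implementation: with weights and activations constrained to {-1, +1} and stored as bits, a dot product reduces to an XNOR followed by a population count.

```python
# Sketch of XNOR-popcount multiplication for binarized vectors (illustration
# only). Values in {-1, +1} are encoded as bits {0, 1}.

def binarize(x):
    """Map a real-valued vector to bits: 1 if v >= 0 else 0 (sign encoding)."""
    return [1 if v >= 0 else 0 for v in x]

def xnor_popcount_dot(a_bits, w_bits):
    """Dot product of two {-1, +1} vectors via XNOR + popcount.
    With n bits, dot = 2 * popcount(XNOR(a, w)) - n."""
    n = len(a_bits)
    matches = sum(1 for a, w in zip(a_bits, w_bits) if a == w)  # popcount of XNOR
    return 2 * matches - n

# The bitwise result equals the ordinary dot product of the +/-1 vectors:
a = [0.5, -1.2, 0.3, -0.7]
w = [1.0, 0.9, -0.4, -0.2]
a_pm = [1 if v >= 0 else -1 for v in a]
w_pm = [1 if v >= 0 else -1 for v in w]
assert xnor_popcount_dot(binarize(a), binarize(w)) == sum(x * y for x, y in zip(a_pm, w_pm))
```

On real hardware the bit vectors are packed into machine words, so one XNOR plus one popcount instruction replaces dozens of multiply-accumulates, which is the source of the speedup the abstract reports.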
The proposed BNN model can be deployed on edge devices for IoT applications.</p> 2026-02-28T00:00:00+07:00 Copyright (c) 2025 Journal of Technical Education Science https://www.jte.edu.vn/index.php/jte/article/view/1969 Design of a Telemedicine System for Classification of Breast Cancer Images 2025-09-12T08:35:57+07:00 Thanh-Tam Nguyen tamnt.ncs@hcmute.edu.vn Thanh-Hai Nguyen nthai@hcmute.edu.vn Tin-Trung Nguyen nguyentintrung.dr@gmail.com <p>Breast cancer is one of the most complex breast lesions. Accurate diagnosis, both to determine whether cancer is present and to identify its stage, is therefore a challenge for most doctors. This article proposes a telemedicine system for diagnosing breast cancer using an EfficientNet-B7 AI model, in which three image sets (Benign, Malignant, and Normal) are used. The telemedicine system is designed so that a DICOM image can be transmitted from the acquisition site to a server for classification and diagnosis; the protocols and storage components of the system are carefully selected and tested for efficiency. Furthermore, the layers and coefficients of the EfficientNet-B7 model are calculated and selected to increase classification performance. The overall system achieved an accuracy of about 89.58%, a significant result for such a complex and challenging task. 
The system can be improved in the future by enhancing the image sets, updating the deep learning network appropriately, and configuring a sufficiently powerful server system.</p> 2026-02-28T00:00:00+07:00 Copyright (c) 2025 Journal of Technical Education Science https://www.jte.edu.vn/index.php/jte/article/view/1890 SepU-Net MRI Segmentation Algorithm Using Depthwise Separable Convolution and Pointwise Convolution Integrated U-Net 2026-03-02T09:50:42+07:00 Phong Mai Hong 20119192@student.hcmute.edu.vn Lam Mai Thanh 20119137@student.hcmute.edu.vn Lam Nguyen Ngo lamnn@hcmute.edu.vn <p>Accurate segmentation of brain tumors in MRI remains challenging due to the computational demands of conventional deep learning models. We present SepU-Net, a lightweight convolutional neural network that employs depthwise separable convolutions and efficient channel attention to reduce model complexity by 69.3% compared to the standard U-Net (2.39M vs. 7.76M parameters). Evaluated on both BraTS2020 and BraTS2021, SepU-Net achieves high accuracy (0.9938 and 0.994), mean IoU (0.842 and 0.8318), and Dice coefficients (0.846 and 0.8325), with only minor declines on the more heterogeneous BraTS2021 dataset. Notably, SepU-Net delivers a 32.34% improvement in Tumor Core segmentation and a 10.14% gain in Enhancing Tumor segmentation over U-Net, while maintaining strong precision (0.994/0.9942) and sensitivity (0.9921/0.9915) across datasets. SepU-Net requires only 8.3 GFLOPs per inference, 65% fewer than U-Net, and operates efficiently on embedded devices with a memory footprint of 1.4 GB. These results validate its ability to balance accuracy and efficiency, enabling real-time segmentation in clinical settings. 
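The parameter savings behind a depthwise separable design of this kind can be seen from a simple count: a standard convolution needs one k x k filter per input-output channel pair, while the separable version needs one k x k filter per input channel plus a 1x1 pointwise mixing step. A minimal sketch, with illustrative channel sizes (not SepU-Net's actual layer configuration; biases omitted):

```python
# Parameter-count comparison between a standard convolution and a
# depthwise separable convolution (depthwise k x k + pointwise 1x1).

def standard_conv_params(c_in, c_out, k):
    return c_in * c_out * k * k

def separable_conv_params(c_in, c_out, k):
    depthwise = c_in * k * k   # one k x k filter per input channel
    pointwise = c_in * c_out   # 1x1 conv mixing channels
    return depthwise + pointwise

# Illustrative layer: 64 -> 128 channels, 3x3 kernels
c_in, c_out, k = 64, 128, 3
std = standard_conv_params(c_in, c_out, k)    # 73728
sep = separable_conv_params(c_in, c_out, k)   # 576 + 8192 = 8768
print(f"standard: {std}, separable: {sep}, ratio: {sep / std:.3f}")
```

Repeated across every encoder and decoder stage, per-layer reductions of roughly an order of magnitude are consistent with the overall 69.3% cut the abstract reports.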
Future work will integrate attention mechanisms and extend the architecture to 3D for enhanced spatial context.</p> 2026-02-28T00:00:00+07:00 Copyright (c) 2026 Journal of Technical Education Science https://www.jte.edu.vn/index.php/jte/article/view/1971 A Framework for Automated and Visualized Penetration Testing 2026-03-02T09:50:40+07:00 Thang Loi Nguyen 22162023@student.hcmute.edu.vn Thanh Van Nguyen vanntth@hcmute.edu.vn Luu Gia Bao Nguyen 22162005@student.hcmute.edu.vn <p>The fragmentation of command-line tools in penetration testing creates inefficiencies, additional manual work, and inconsistent results, all of which make workflows problematic for complex security testing scenarios. This paper presents EzPentest, a framework designed to automate and visualize penetration testing through a single web interface. EzPentest's novelty is its YAML-based workflows, which support conditional logic, looping, and parallelization to create flexible and repeatable testing processes. Central to EzPentest is its parser engine, which converts the output of different tools into a standardized JSON format; this transformation unifies vulnerability analysis and reporting. Alongside its parser, EzPentest takes a modular approach that allows the community to enhance and share workflows connecting various tools into holistic penetration testing scenarios. In experiments with benchmark applications such as DVWA and bWAPP, EzPentest achieves the highest detection rate of 89.39%. 
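A YAML workflow of the kind described might look like the following. This is a hypothetical sketch only: the schema, keys, and expression syntax are assumptions for illustration, not EzPentest's documented format.

```yaml
# Hypothetical workflow sketch (schema and keys are illustrative assumptions)
workflow: web-recon
steps:
  - name: port-scan
    tool: nmap
    args: ["-sV", "{{ target }}"]
  - name: dir-brute
    tool: gobuster
    when: "steps.port-scan.open_ports contains 80"   # conditional logic
    parallel: true                                   # parallelization
  - name: sql-test
    tool: sqlmap
    for_each: "steps.dir-brute.found_urls"           # looping over results
outputs:
  format: json   # parser engine normalizes each tool's output to JSON
```

The value of such a declarative format is that a workflow becomes a shareable, versionable artifact rather than a shell history, which is what enables the community reuse the abstract emphasizes.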
As demonstrated, EzPentest is more than a solution for scalable, accessible, and collaborative penetration testing: it is an open community resource that is particularly beneficial in educational institutions, as it makes the advanced field of software vulnerability assessment and security testing easier to understand, and it enables small-to-medium enterprises to automate their penetration testing.</p> 2026-02-28T00:00:00+07:00 Copyright (c) 2026 Journal of Technical Education Science https://www.jte.edu.vn/index.php/jte/article/view/2068 Energy-Efficient and QoS-Aware Routing in Wireless Sensor Networks Using Deep Q-Learning With Dynamic Clustering 2026-03-02T09:50:34+07:00 Nguyen Phuong Thinh 2531313@student.hcmute.edu.vn Phan Thi The thept@hcmute.edu.vn Nguyen Thanh Son sonnt@hcmute.edu.vn <p>Wireless Sensor Networks (WSNs) encounter significant challenges in balancing limited energy resources with strict Quality of Service (QoS) requirements, especially in dense deployments with dynamic traffic patterns. Traditional routing protocols rely on static heuristics that are unable to adapt to evolving network conditions such as heterogeneous energy distribution, traffic fluctuations, and topology changes. This paper presents PSR-DRL+, an adaptive routing protocol that combines Deep Q-Networks (DQN) with dynamic clustering based on node energy states and spatial distribution. The protocol utilizes a multi-objective reward function that simultaneously optimizes energy consumption, end-to-end delay, queue occupancy, and routing distance. This enables learning agents to balance network lifetime with QoS guarantees. Simulations conducted in MATLAB on a scenario with 100 nodes demonstrate that PSR-DRL+ extends the time until the first node dies to 2,171 seconds, representing a 73.6% improvement over RLBEEP. Additionally, it maintains a packet delivery ratio above 95% even under heavy traffic loads. 
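A multi-objective reward of the kind the abstract describes can be sketched as a weighted combination of the four terms it names. The weights and normalization below are illustrative assumptions, not PSR-DRL+'s actual values:

```python
# Sketch of a multi-objective routing reward combining energy, delay,
# queue occupancy, and distance (weights are illustrative assumptions).

def routing_reward(residual_energy, delay, queue_occupancy, distance,
                   w_e=0.4, w_d=0.3, w_q=0.2, w_dist=0.1):
    """All inputs assumed normalized to [0, 1]. Higher residual energy is
    rewarded; higher delay, queue occupancy, and distance are penalized."""
    return (w_e * residual_energy
            - w_d * delay
            - w_q * queue_occupancy
            - w_dist * distance)

# A next hop with ample energy and low congestion scores higher:
good = routing_reward(0.9, 0.1, 0.1, 0.2)   # ~0.29
poor = routing_reward(0.2, 0.8, 0.9, 0.9)   # ~-0.43
assert good > poor
```

Because the DQN maximizes the discounted sum of this scalar, the weight ratios directly express the trade-off between network lifetime and QoS that the abstract highlights.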
These results validate that congestion-aware deep reinforcement learning provides a viable framework for next-generation energy-constrained IoT deployments.</p> 2026-02-28T00:00:00+07:00 Copyright (c) 2026 Journal of Technical Education Science https://www.jte.edu.vn/index.php/jte/article/view/2069 A Contextual-Enhanced LightGCN for Movie Recommendation Systems 2026-03-02T09:50:32+07:00 Dinh-Quoc-Hoa Pham 2531307@student.hcmute.edu.vn Huyen-Trang Phan trangpth@hcmute.edu.vn <p>In the context of the digital information explosion, recommender systems have been widely deployed to mitigate information overload through personalized information filtering. Traditional methods, such as collaborative filtering and content-based filtering, established the foundation for this field. Recently, advances in deep learning, particularly Graph Convolutional Network-based models such as LightGCN, have demonstrated superior effectiveness in learning user and item representations from high-order interaction graph structures. However, LightGCN relies on interaction data alone and ignores contextual information about users and items. To alleviate this limitation, this paper proposes a recommendation method titled Contextual-enhanced LightGCN. This approach enhances the LightGCN model by simultaneously leveraging movie content features and user demographic information to aggregate information during the training process. Our ablation study further clarifies that while item content features enhance recommendation quality, the simple integration of user demographics introduces noise and degrades performance. 
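LightGCN's propagation rule, which the method above builds on, is simple neighborhood averaging over the user-item graph with symmetric normalization and no feature transforms or nonlinearities; the final embedding is the average over layers. A minimal NumPy sketch with a toy interaction matrix (sizes and data are illustrative assumptions):

```python
import numpy as np

# Minimal sketch of LightGCN propagation: E^(k+1) = D^-1/2 A D^-1/2 E^(k),
# final embeddings = mean over layers 0..K (toy data, illustration only).

def lightgcn_embeddings(R, dim=4, n_layers=2, seed=0):
    """R: user-item interaction matrix (n_users x n_items), entries 0/1."""
    n_users, n_items = R.shape
    n = n_users + n_items
    # Bipartite adjacency A = [[0, R], [R^T, 0]]
    A = np.zeros((n, n))
    A[:n_users, n_users:] = R
    A[n_users:, :n_users] = R.T
    deg = A.sum(axis=1)
    d_inv_sqrt = np.zeros_like(deg)
    nz = deg > 0
    d_inv_sqrt[nz] = deg[nz] ** -0.5
    A_hat = d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]  # D^-1/2 A D^-1/2
    rng = np.random.default_rng(seed)
    E = rng.standard_normal((n, dim))          # initial ID embeddings
    layers = [E]
    for _ in range(n_layers):
        layers.append(A_hat @ layers[-1])      # parameter-free propagation
    return np.mean(layers, axis=0)             # layer-averaged final embedding

R = np.array([[1, 0, 1], [0, 1, 1]], dtype=float)  # 2 users, 3 items
E_final = lightgcn_embeddings(R)
print(E_final.shape)  # (5, 4)
```

The contextual enhancement the paper proposes would inject item content (and, with care, user demographic) features into this otherwise ID-only propagation.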
Comprehensive experiments on the MovieLens 100K and MovieLens 1M datasets, averaged over three independent runs, indicate that the proposed CF-LightGCN consistently outperforms the LightGCN baseline, achieving a Recall@20 improvement of up to 1.5%.</p> 2026-02-28T00:00:00+07:00 Copyright (c) 2026 Journal of Technical Education Science https://www.jte.edu.vn/index.php/jte/article/view/2077 An Integrated Approach for Multi-Object Detection and Tracking in Traffic Monitoring Using YOLOv9c and ByteTrack 2026-03-02T09:50:31+07:00 Nguyen Thi Thanh Thuy thuyntt@huit.edu.vn Ho Van Luc hovanluc@gmail.com Nguyen Thi Thai An thaiantl@gmail.com Phung The Bao baopt@huit.edu.vn Nguyen Thi Dinh dinhnt@huit.edu.vn <p>This paper proposes an integrated method for object detection and tracking in congested traffic environments, based on a combination of the YOLOv9c object detection model and the ByteTrack multi-object tracking algorithm. In the proposed method, the YOLOv9c model is trained and fine-tuned to enhance vehicle detection performance in complex conditions. Simultaneously, the ByteTrack algorithm links objects across extracted video frames by leveraging both high- and low-confidence bounding boxes. This approach reduces identity loss and increases the stability of object tracking in traffic, especially in conditions with high object density and severe occlusion. To implement this method, the object detection model was trained and refined on the BDD100K dataset, combined with the Vietnam Traffic Dataset, with a focus on common vehicle classes, including bicycles, motorbikes, cars, buses, and trucks. Experimental results showed that the model achieved a Precision of 89.8% and a Recall of 72.7% in the daytime traffic congestion scenario, and a recall rate of 90.1% in nighttime conditions. 
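ByteTrack's core idea, using low-confidence boxes to recover tracks that high-confidence matching left unmatched, can be sketched as a two-stage association. Greedy IoU matching below stands in for the Hungarian assignment used in practice, and the thresholds are illustrative assumptions:

```python
# Two-stage association sketch: match high-confidence detections to tracks
# first, then recover remaining tracks with low-confidence detections.

def iou(a, b):
    """IoU of axis-aligned boxes (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def associate(tracks, detections, score_thr=0.6, iou_thr=0.3):
    """tracks: {track_id: box}; detections: [(box, score)] -> {track_id: box}."""
    high = [d for d in detections if d[1] >= score_thr]
    low = [d for d in detections if d[1] < score_thr]
    matched, unmatched = {}, dict(tracks)
    for group in (high, low):            # stage 1: high, stage 2: low
        for box, _ in group:
            if not unmatched:
                break
            tid = max(unmatched, key=lambda t: iou(unmatched[t], box))
            if iou(unmatched[tid], box) >= iou_thr:
                matched[tid] = box       # greedy best-IoU assignment
                del unmatched[tid]
    return matched

tracks = {1: (0, 0, 10, 10), 2: (20, 20, 30, 30)}
dets = [((1, 1, 11, 11), 0.9),       # high score: matched in stage 1
        ((21, 21, 31, 31), 0.4)]     # low score: still recovers track 2
print(associate(tracks, dets))        # {1: (1, 1, 11, 11), 2: (21, 21, 31, 31)}
```

Keeping low-score boxes in play is exactly what preserves identities through partial occlusion, which is why the approach helps in the dense traffic scenes described above.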
For the multi-object tracking problem, the system achieved an IDF1 of 84.3%, demonstrating its ability to maintain stable object identification even in the presence of obstructions, and achieved an MOTA of 69.9% under favorable observation conditions. These results confirm that the proposed method is highly effective in detecting and tracking traffic objects and has potential applications in intelligent traffic monitoring systems and real-time video analysis.</p> 2026-02-28T00:00:00+07:00 Copyright (c) 2026 Journal of Technical Education Science https://www.jte.edu.vn/index.php/jte/article/view/2062 Retinal Diseases Classification From OCT Images Using Pretrained Dual-Encoder Architecture 2026-03-02T09:50:36+07:00 Ngo Quang Huy nqhuy@hcmiu.edu.vn Hoang Thai Xuan Khoa khoahtx@hcmute.edu.vn Le Van Vinh vinhlv@hcmute.edu.vn <p>Retinal diseases, such as age-related macular degeneration (AMD), diabetic retinopathy (DR), and glaucoma, are leading causes of irreversible vision loss, making early and accurate diagnosis essential for effective treatment. Optical coherence tomography (OCT) provides high-resolution cross-sectional retinal images that support disease assessment; however, many challenges still remain due to noise and artifacts in images or complex retinal structures. In this study, we propose a dual-encoder framework for retinal disease classification from OCT B-scan images by jointly leveraging two pretrained foundation models: RETFound and MIRAGE. Following standardized preprocessing and resampling, high-quality features extracted from the encoders are combined for the final classification tasks. To mitigate overfitting on limited medical data, the RETFound encoder is frozen during training to preserve general visual features, whereas the MIRAGE encoder is fine-tuned to adapt to specific classification objectives. 
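The dual-encoder fusion described above can be sketched as follows: features from a frozen encoder and a fine-tuned encoder are combined (here by concatenation, one plausible fusion choice) before a classification head. The encoder functions below are toy stand-ins, not RETFound or MIRAGE themselves, and all sizes are illustrative assumptions:

```python
import numpy as np

# Sketch of dual-encoder feature fusion for classification (toy stand-ins
# for the two pretrained encoders; concatenation as the fusion step).

def frozen_encoder(image):       # stands in for the frozen encoder
    return image.mean(axis=0)    # pooled per-column feature, illustrative only

def tuned_encoder(image):        # stands in for the fine-tuned encoder
    return image.max(axis=0)

def classify(image, head_weights):
    # Concatenate the two feature vectors, then apply a linear head.
    feat = np.concatenate([frozen_encoder(image), tuned_encoder(image)])
    logits = head_weights @ feat
    return int(np.argmax(logits))

rng = np.random.default_rng(0)
image = rng.standard_normal((8, 8))   # toy "B-scan": 8x8 array
head = rng.standard_normal((3, 16))   # 3 classes, 8 + 8 fused features
print(classify(image, head))          # predicted class index in {0, 1, 2}
```

Freezing one branch, as the abstract describes, keeps its general-purpose features intact while only the other branch and the head adapt to the target task, a common recipe against overfitting on small medical datasets.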
Extensive experiments conducted on seven public OCT benchmark datasets demonstrate that the proposed method outperforms single-encoder baselines on the majority of benchmarks. The framework achieved an average balanced accuracy (BAcc) of 89.8%, an F1-score of 90.7%, and a Matthews Correlation Coefficient (MCC) of 83.9%. These results confirm the effectiveness of combining complementary pretrained encoders for robust and generalizable retinal disease classification in clinical settings.</p> 2026-02-28T00:00:00+07:00 Copyright (c) 2026 Journal of Technical Education Science https://www.jte.edu.vn/index.php/jte/article/view/2047 Multilingual Neural Machine Translation for Asian Language Treebank 2026-03-02T09:50:38+07:00 Hong Buu Long Nguyen nhblong@fit.hcmus.edu.vn Thanh Tung Vu thanhtungvu727@gmail.com <p>This study examines multilingual neural machine translation (MNMT) for a diverse group of low-resource Asian languages (Bengali, Filipino, Indonesian, Japanese, Khmer, Malay, and Vietnamese) which differ substantially in language family, writing system, and typology. This paper evaluates state-of-the-art MNMT systems and introduces a Compact &amp; Language-Sensitive MNMT model designed to improve translation performance while reducing computational cost. The proposed approach shares parameters through a compact multilingual representation and enhances language discrimination using language-sensitive embeddings, a language-sensitive discriminator, and an adaptive cross-attention mechanism that selects attention parameters based on the specific language pair. Integrated with a multi-stage fine-tuning strategy, this model effectively strengthens cross-lingual transfer while maintaining robust language-specific representations. Experiments on the ALT multi-parallel corpus and the KFTT English-Japanese dataset demonstrate that multilingual models significantly outperform single-language NMT baselines. 
Despite its smaller size, the proposed Compact &amp; Language-Sensitive MNMT achieves competitive or superior BLEU scores compared to Google’s MNMT, confirming the effectiveness of guided parameter sharing and language-sensitive training. These results highlight the value of compact multilingual architectures and multi-parallel datasets for advancing low-resource Asian machine translation.</p> 2026-02-28T00:00:00+07:00 Copyright (c) 2026 Journal of Technical Education Science