Journal of Computing & Biomedical Informatics
https://jcbi.org/index.php/Main
<p style="text-align: justify;"><strong>Journal of Computing & Biomedical Informatics (JCBI) </strong>is a peer-reviewed open-access journal recognised by the Higher Education Commission (H.E.C.) Pakistan. JCBI publishes high-quality scholarly articles reporting substantive results on a wide range of learning methods applied to a variety of learning problems. All submitted articles must report original, previously unpublished experimental or theoretical research results, and must not be under consideration for publication elsewhere. Manuscripts should follow the style of the journal and are subject to both review and editing. JCBI encourages authors of original research papers to describe work such as the following:</p> <ul> <li>Articles in the areas of computational approaches, artificial intelligence, big data, software engineering, cybersecurity, the Internet of Things, and data analysis.</li> <li>Articles reporting substantive results on a wide range of learning methods applied to a variety of learning problems.</li> <li>Articles providing solid support via empirical studies, theoretical analysis, or comparison to psychological phenomena.</li> <li>Articles that respond to a need in medicine, or apply novel methods to the analysis of rare data.</li> <li>Articles involving healthcare professionals, for which motivation for the work and evaluation results are usually necessary.</li> <li>Articles showing how to apply learning methods to solve important application problems.</li> </ul> <p style="text-align: justify;">Journal of Computing & Biomedical Informatics (JCBI) welcomes work from the interdisciplinary field that studies and pursues the effective use of computational and biomedical data, information, and knowledge for scientific inquiry, problem-solving, and decision-making, motivated by efforts to improve human health. 
Novel high-performance computing methods, big data analysis, and artificial intelligence that advance material technologies are especially welcome.</p>
Journal of Computing & Biomedical Informatics (ISSN 2710-1606)
<p>This is an Open Access article published by Research Center of Computing & Biomedical Informatics (RCBI), Lahore, Pakistan under a <a href="http://creativecommons.org/licenses/by/4.0">CC BY 4.0 International License</a></p>
IoT Based Garden Managing System with Wireless Control Using ESP-8266 Controller
https://jcbi.org/index.php/Main/article/view/1334
<p>The Internet of Things (IoT) is changing agricultural practices and human life. The main goal of this study was to develop and statistically test an IoT-based module for home gardening, with the primary purpose of increasing production. It was hypothesized that digital devices may be interconnected to automate real-life systems in agriculture. This research also addressed the lack of expertise for maintaining homegrown gardens in urban areas by developing an IoT-based smart garden monitoring system. An ESP-8266 microcontroller was modified and extended to collect and test statistics such as temperature, moisture, humidity, light intensity, and plant growth levels in gardens. Through its Android application, the ESP-8266 microcontroller provided data analysis to verify real-time updates and display environmental profiles. This system helps users optimize gardening practices and promote plant growth, achieving a 65% reduction in water usage and a 45% increase in plant growth. Using this IoT-based system, home gardeners in urban areas can tackle the challenges of maintaining gardens, eliminating the need for external gardening services. It is recommended to implement this IoT-based system in both urban and rural areas.</p>Muhammad AliSyeda Aasma BibiMuhammad Rehan FaheemSyed Asim Ali ShahHannan AdeelMuhammad Tayyab
Copyright (c) 2026 Journal of Computing & Biomedical Informatics
2026-03-01
2026-03-01
A Machine Learning-Based Firewall Model for Effective Attack Detection Using Dragonfly and Bat Algorithms
https://jcbi.org/index.php/Main/article/view/1342
<p>This work proposes a new machine learning (ML)-based firewall model for attack detection in modern networks. The proposed model is integrated with an advanced feature selection method that uses the Dragonfly Algorithm (DA) and Bat Algorithm (BA) to optimize the significant features and improve detection accuracy. Logistic Regression (LR) and Gradient Boosting Trees (GBT) were used for classification. The model was tested and validated on the UNSW-NB15 dataset because this dataset represents comprehensive modern network activity. The GBT classifier achieved an accuracy of 100%, demonstrating its capability in handling the selected features and detecting the attacks. LR also attained a very high accuracy of 99.84%. These outcomes highlight the efficiency of the proposed model in detecting attacks with minimal false positives. The integration of DA and BA for feature selection and the use of robust classifiers make the proposed ML-based firewall a promising solution for safeguarding modern networks.</p>Hani Al-MimiAli Al DahoudAhmad Al Dahoud
Copyright (c) 2026 Journal of Computing & Biomedical Informatics
2026-03-01
2026-03-01
Analyzing the Impact of Pretrained Language Models on Low-Resource Languages
https://jcbi.org/index.php/Main/article/view/1332
<p>The rapid advancement of natural language processing has predominantly benefited high-resource languages such as English, Chinese, and Spanish, leaving thousands of languages underserved. This digital language divide limits equitable access to technology and threatens global linguistic diversity. This paper presents a systematic evaluation of eight pretrained language models across seven low-resource languages representing five distinct language families. Through extensive experiments on sentiment analysis, named entity recognition, and machine translation tasks, we demonstrate that multilingual BERT achieves the highest average accuracy of 74.5%. We further propose a novel adaptation framework combining vocabulary augmentation, continual pretraining, task-adaptive fine-tuning, and knowledge distillation that improves performance by up to 18.7%. Our analysis identifies vocabulary overlap as the strongest predictor of cross-lingual transfer success, explaining 76.3% of performance variance. These findings provide evidence-based guidelines for researchers and practitioners developing inclusive NLP technologies for underserved language communities. Limitations of this study include the focus on seven languages (generalizability to other low-resource languages requires further validation), computational constraints that prevented evaluation of models exceeding 300M parameters, and potential biases introduced by dataset availability and quality across languages.</p>Muhammad Irshad HussainShafiq HussainAleena JamilAdeen AmjadSajid Iqbal
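The paper identifies vocabulary overlap as the strongest predictor of cross-lingual transfer, but does not publish its code; as an illustration only, a Jaccard-style ratio is one common way to measure overlap between two tokenizer vocabularies (the metric choice here is an assumption, not the authors' stated method):

```python
def vocabulary_overlap(vocab_a: set[str], vocab_b: set[str]) -> float:
    """Jaccard-style overlap between two tokenizer vocabularies.

    A value near 1.0 means the tokenizers share most subword units,
    which the study links to stronger cross-lingual transfer.
    """
    if not vocab_a and not vocab_b:
        return 0.0
    return len(vocab_a & vocab_b) / len(vocab_a | vocab_b)
```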
Copyright (c) 2026 Journal of Computing & Biomedical Informatics
2026-04-18
2026-04-18
10.56979/1101/2026/1332
Multi-Class Classification of Alzheimer's Impairment and Dementia Stages using an EfficientNet-B0 Deep Learning Framework
https://jcbi.org/index.php/Main/article/view/1202
<p>Alzheimer’s disease is a progressive neurodegenerative condition that leads to deterioration of memory and cognitive function, where early detection is critical as therapeutic interventions are most effective prior to extensive neuronal loss. Many existing deep learning approaches focus on binary or ternary classification schemes, which inadequately reflect the full clinical spectrum of cognitive impairment and dementia, thereby limiting their practical diagnostic relevance. This study proposes a lightweight deep learning framework based on EfficientNet-B0 for fine-grained eight-class Alzheimer’s staging, encompassing four levels of cognitive impairment and four levels of dementia. A structured preprocessing pipeline, including normalization, contrast enhancement, resizing, and targeted data augmentation, is employed to improve MRI consistency and enhance discriminative feature learning. Under a strict patient-level evaluation protocol, the proposed model achieves a peak testing accuracy of 99.78% and demonstrates strong and stable performance when compared with commonly used 2D convolutional neural network baselines. Owing to its low computational complexity and consistent performance across classes, the framework represents a promising research-stage tool for automated Alzheimer’s disease staging and warrants further validation, including comparison with transformer-based and volumetric approaches, before clinical deployment.</p>Makki Riaz KhanMuhammad Masood Ul Rahman UsmaniMuzamil MehboobFaheem AbbasShahnaz RafiqueKishwar Bibi
Copyright (c) 2026 Journal of Computing & Biomedical Informatics
2026-04-18
2026-04-18
10.56979/1101/2026/1202
Intelligent Healthcare Chatbot to Enhance Patient Satisfaction and Engagement with Implementation of Advance HCI Techniques
https://jcbi.org/index.php/Main/article/view/1186
<p>Effective communication is crucial to the quality of healthcare delivery, yet communication breakdowns between patients and healthcare providers remain an ongoing issue. AI chatbots have proven to be a potential solution for enhancing patient interaction and satisfaction through availability and customization. The purpose of this research was to develop and evaluate an AI-based healthcare chatbot, created in accordance with the principles of advanced HCI, to improve patient engagement and satisfaction, and to compare its performance with existing healthcare chatbots. The chatbot was built on publicly available healthcare datasets from Kaggle, with preprocessing that included cleaning, partitioning of the data, and correction of spelling mistakes. HCI principles such as usability, accessibility, error handling, and user feedback were taken into consideration, and performance was determined by usability testing with 150 users, evaluation of engagement metrics, and deep-learning testing using accuracy, precision, and recall. Usability was very high (88% of tasks completed in 12.5 seconds on average, with an 8% error rate), user satisfaction averaged 4.6/5, and 85% of users found the interface easy to use. The engagement metrics showed an average session time of 4.5 minutes, a 78% task completion rate, and a 62% retention rate. The deep learning model achieved 91.3% accuracy, 89.7% precision, and 87.5% recall in interpreting patient requests. Altogether, the HCI-based AI chatbot considerably enhanced patient engagement and satisfaction through usability, accessibility, responsive interactions, error management, individualized communication, and a multimodal interface, which supported effective patient-provider communication. 
Future work should incorporate multilingual support, voice-based interaction, and connectivity with electronic health records to increase its efficacy.</p>Muhammad KhalidMuhammad YousafMudasar Ahmed SoomroMuhammad Imtiaz YousufNasreen Jawaid
Copyright (c) 2026 Journal of Computing & Biomedical Informatics
2026-04-18
2026-04-18
10.56979/1101/2026/1186
A Hybrid HHO-BA Feature Selection Framework for High-Accuracy Malicious URL Detection Using LightGBM
https://jcbi.org/index.php/Main/article/view/1341
<p>Malicious URLs are frequently used as delivery channels for malware and continue to represent a challenge in cybersecurity. This paper proposes a malicious URL detection framework that combines the Harris Hawks Optimizer (HHO) and the Bat Algorithm (BA) through a union feature selection strategy. The goal is to build an informative and diverse subset of features from those chosen by two complementary metaheuristic search methods. LightGBM and Naive Bayes classifiers are used to evaluate the selected features on the ISCX-URL2016 dataset. Experiments show that LightGBM achieves a higher accuracy of 99.52% while Naive Bayes reaches 81.12%, indicating a distinct difference in their capability to model structured URL features. The proposed framework is competitive with, or more accurate than, several past studies. The results demonstrate that the HHO-BA union approach successfully minimizes feature redundancy while maintaining discriminative information, which results in higher learning performance and stable classification. The suggested approach is thus a solid solution for malicious URL detection.</p>Motasem M. ElshourbagyAhmed G. MabroukSalahudin AyubiAbdullah T. ElgammalMohamed Ghetas
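The union feature selection strategy described in this abstract reduces to a simple set operation; the sketch below is a minimal illustration of that step only, not the authors' implementation, and it assumes each optimizer returns a list of selected feature indices:

```python
from typing import Sequence

def union_feature_selection(hho_selected: Sequence[int],
                            ba_selected: Sequence[int]) -> list[int]:
    # Union strategy: retain any feature index chosen by either
    # metaheuristic, so the two complementary searches broaden the
    # informative subset passed to the downstream classifier.
    return sorted(set(hho_selected) | set(ba_selected))
```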
Copyright (c) 2026 Journal of Computing & Biomedical Informatics
2026-03-01
2026-03-01
Cybersecurity Architecture for Medical Live Monitoring Systems Using HTTP Protocols and SHA-256 Encryption
https://jcbi.org/index.php/Main/article/view/1252
<p>Live health monitoring systems are revolutionizing patient care, yet the security and integrity of sensitive medical data remain a critical challenge. To address the vulnerabilities of centralized databases and basic authentication in traditional systems, we propose a secure framework that leverages a decentralized architecture. This proposed system uses standard HTTPS protocols for communication, with each client request authenticated using JSON Web Tokens (JWT) signed with HMAC-SHA256. Patient records are encrypted using AES-256-GCM before storage, with cryptographic hashes of these records written to a private Hyperledger Fabric blockchain for tamper-proof auditability. A smart contract enforces decentralized, role-based access control to ensure only authorized personnel can view sensitive information. A predictive analytics module processes secured vital data to forecast potential health deterioration, triggering automated alerts to the responsible physician. In a simulation handling 1,000 patient records over 30 days, the system demonstrated 99.98% data integrity, processed an average of 4,820 authenticated requests per minute with a mean latency of 185ms, and achieved 87.3% accuracy in predicting critical health events with a 30-minute lead time, confirming its efficacy as a robust and secure solution for remote patient monitoring.</p>J. Dafni RoseSunitha TSuma TCinthuja KMohanaprakash T AJustindhas Y
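The token-authentication step this abstract describes (JWTs signed with HMAC-SHA256) can be sketched with the Python standard library; this is a minimal illustration of the mechanism, not the authors' code, and the payload fields and secret are invented:

```python
import base64
import hashlib
import hmac
import json

def _b64url(data: bytes) -> str:
    # Unpadded base64url encoding, as used in compact JWS (RFC 7515).
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_jwt_hs256(payload: dict, secret: bytes) -> str:
    # Compact form: base64url(header).base64url(payload).base64url(sig)
    header = {"alg": "HS256", "typ": "JWT"}
    signing_input = (_b64url(json.dumps(header).encode()) + "." +
                     _b64url(json.dumps(payload).encode()))
    sig = hmac.new(secret, signing_input.encode(), hashlib.sha256).digest()
    return signing_input + "." + _b64url(sig)

def verify_jwt_hs256(token: str, secret: bytes) -> bool:
    # Recompute the MAC and compare in constant time.
    signing_input, _, sig = token.rpartition(".")
    expected = hmac.new(secret, signing_input.encode(), hashlib.sha256).digest()
    return hmac.compare_digest(_b64url(expected), sig)
```

A production system would use a vetted JWT library and handle expiry and key rotation; the sketch only shows how HMAC-SHA256 binds a token to the server's secret key.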
Copyright (c) 2026 Journal of Computing & Biomedical Informatics
2026-03-01
2026-03-01
IoT-Based Rain Detector System Using Arduino ESP-32
https://jcbi.org/index.php/Main/article/view/1336
<p>Is it feasible to automate a rain detector using an Arduino ESP-32 IoT device to enhance its performance? The Internet of Things (IoT) is changing weather-related practices and human life, and researchers are increasingly applying IoT to them. The development and statistical testing of an IoT-based rain detection module using the Arduino ESP-32 microcontroller was the main focus of this study, with the aim of enhancing weather-based automation in smart environments. It was hypothesized that digital sensors and controllers could be interconnected to automate real-life systems, such as detecting rainfall and responding accordingly. This research aimed to provide a solution for urban areas where real-time rain monitoring is essential, especially for smart irrigation or safety systems. By implementing an IoT-based rain detector system using the ESP-32, the study offers a reliable, low-cost, and automated method to detect rain, trigger alerts, and integrate with other smart devices. An ESP-32 microcontroller was adapted and tested to collect real-time environmental data, specifically rainfall, humidity, temperature, and atmospheric conditions. Through its integrated Android application, the ESP-32 microcontroller enables users to monitor weather conditions remotely, providing real-time environmental updates and data analysis. This system supports better decision-making in weather-sensitive areas, automating responses such as halting irrigation or activating covers during rainfall. It has the potential to reduce water wastage by 30% and minimize weather-related damage by providing timely alerts through automatic rain detection. Using this IoT-based system, users, particularly in urban and rural areas, can manage outdoor systems more efficiently without relying on manual monitoring. 
It is recommended to implement this IoT-based rain-detection solution across both urban and rural agricultural environments to enhance weather management and disaster preparedness. </p>Syed Asim Ali ShahImran HaiderSyeda Aasma BibiAhmad ShalaldehMishal KhalidAyoub Alsarhan
Copyright (c) 2026 Journal of Computing & Biomedical Informatics
2026-03-01
2026-03-01
Quantum-Safe Wireless Sensor Networks: A Post-Quantum Cryptography Framework with Adaptive Security Optimization
https://jcbi.org/index.php/Main/article/view/1253
<p>Quantum computing poses a significant threat to Wireless Sensor Networks (WSNs) by undermining traditional cryptographic algorithms such as RSA and Elliptic Curve Cryptography (ECC). This work proposes a quantum-safe architecture for WSNs that integrates post-quantum cryptography (PQC) with lightweight IoT protocols to ensure long-term confidentiality, authenticity, and resilience. The framework leverages ML-KEM for key encapsulation and ML-DSA/SLH-DSA for signatures, and seamlessly integrates with EDHOC, OSCORE, and COSE. A novel Efficient Adaptive Parameter Selection (EAPS) mechanism dynamically adjusts cryptographic strength to balance security, energy consumption, and latency under varying network conditions. Experimental evaluation demonstrates that ciphertext fragmentation (8–27 pieces) results in manageable completion times of 4–10 seconds, and that join operations consume only 0.005–0.03 J per cryptographic handshake event, whereas system-level energy consumption including network overhead is higher (~1.0–1.4 J per join cycle depending on hop count and retransmissions). Battery lifetime projections under steady-state sensing workloads range from approximately 115–280 months for low-duty-cycle operation, whereas realistic deployment conditions with periodic communication yield effective lifetimes of 11.5–24 months. Security analysis confirms robust resistance against downgrade attacks and Harvest-Now-Decrypt-Later (HNDL) threats. Moreover, EAPS reduces risk scores by over 50% compared to fixed schemes with less than 10% additional energy overhead, and batch signature verification improves scalability by increasing throughput from ~1,600 to ~2,800 verifications per second. Overall, the proposed framework demonstrates that WSNs can achieve quantum-resistant security with minimal performance trade-offs, ensuring readiness for the post-quantum era.</p>Siri DJanardhan MRaja Sekhar VJaya Prakash PSushama CPramodh Krishna DKranthi Kumar Lella
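The EAPS mechanism is described only at a high level in this abstract; a purely hypothetical policy of the kind it suggests, with invented thresholds and parameter-set names taken from the ML-KEM family the abstract mentions, might look like:

```python
def select_kem_parameter_set(risk_score: float, battery_frac: float) -> str:
    # Hypothetical EAPS-style rule (thresholds are illustrative, not the
    # paper's): escalate the ML-KEM security level when assessed risk is
    # high and energy permits; fall back to the lightest parameter set
    # when both risk and remaining battery are low.
    if risk_score > 0.7 and battery_frac > 0.2:
        return "ML-KEM-1024"
    if risk_score > 0.3 or battery_frac > 0.5:
        return "ML-KEM-768"
    return "ML-KEM-512"
```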
Copyright (c) 2026 Journal of Computing & Biomedical Informatics
2026-03-01
2026-03-01
A Hybrid Trust Evaluation Framework for Cloud Service Providers in Cybersecurity-Critical Environments
https://jcbi.org/index.php/Main/article/view/1313
<p>The rapid expansion of cloud computing has transformed digital service delivery, but establishing the trustworthiness of Cloud Service Providers (CSPs) remains a significant challenge, especially for sensitive workloads. This study presents a novel hybrid evaluation framework aimed at quantifying CSP trustworthiness from multiple perspectives. The framework combines a deterministic Multi-Attribute Trust Model (MATM) with a sophisticated Fuzzy Inference System (FIS). The MATM assesses CSPs based on three key attributes: Cost Efficiency (CE), Performance (P), and Reputation & Trustworthiness (RT). It utilizes a weighted normalization function to compute a transparent Trustworthiness Level (TL), offering a clear quantitative benchmark. To handle the inherent uncertainty and subjectivity in trust evaluation, the framework incorporates an FIS that uses fuzzy logic to interpret linguistic variables and capture the complex relationships among attributes. This dual approach provides both a straightforward, scalable assessment method and a more nuanced, human-centric evaluation. The framework’s effectiveness was validated through scenario analysis. As an initial proof of concept, the evaluation utilized constructed provider scenarios representing typical market profiles. The results demonstrate the model’s ability to clearly differentiate between service offerings, confirming its utility as a comprehensive decision-support tool.</p>Mamoon ObiedatAhmad AlkhatibQais Al-Na’amnehAyoub AlsarhanMahmoud AljawarnehRahaf HazaymiHussein Al-Ofeishat
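The MATM's weighted normalization can be sketched as a convex combination of the three normalized attributes; the weights below are illustrative placeholders, not the paper's calibrated values:

```python
def trust_level(ce: float, p: float, rt: float,
                weights: tuple[float, float, float] = (0.3, 0.3, 0.4)) -> float:
    # TL = w_ce*CE + w_p*P + w_rt*RT, with each attribute normalized
    # to [0, 1] and the weights summing to 1 so TL stays in [0, 1].
    w_ce, w_p, w_rt = weights
    assert abs(w_ce + w_p + w_rt - 1.0) < 1e-9, "weights must sum to 1"
    return w_ce * ce + w_p * p + w_rt * rt
```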
Copyright (c) 2026 Journal of Computing & Biomedical Informatics
2026-03-01
2026-03-01
Reinforcement Squeeze-and-Excitation Learning for Automated Gleason Grading in Prostate Cancer Histopathology
https://jcbi.org/index.php/Main/article/view/1277
<p>Accurate Gleason grading of prostate cancer from histopathological whole-slide images is essential for proper diagnosis and treatment planning, yet manual assessment is subject to inter-observer variability. This paper proposes a new Reinforcement Squeeze-and-Excitation Learning-based Convolutional Neural Network (RIL-SE-CNN) for automated Gleason grading. The framework combines reinforcement learning and squeeze-and-excitation blocks to dynamically recalibrate channel-wise features and focus on diagnostically relevant glandular patterns. Reinforcement feedback strengthens the attention mechanism by rewarding discriminative feature representations in heterogeneous tissue regions. The proposed model is tested on two benchmark datasets, PANDA and DiagSet-A, using standard performance indicators. On the PANDA dataset, the model achieves an accuracy of 0.9646, precision of 0.9642, recall of 0.9657, F1-score of 0.9642, and Matthews correlation coefficient (MCC) of 0.9634. On the DiagSet-A dataset, it achieves even better results: accuracy of 0.9958, precision of 0.9954, recall of 0.9957, F1-score of 0.9955, and MCC of 0.9965. These findings indicate the strength and generalization power of the proposed system in automated Gleason grading.</p>Maulika PatelParag SanghaniNiraj Shah
Copyright (c) 2026 Journal of Computing & Biomedical Informatics
2026-03-01
2026-03-01
Computational Assessment of nsSNPs Associated with FGFR2 Gene
https://jcbi.org/index.php/Main/article/view/1067
<p>The fibroblast growth factor receptor 2 (FGFR2) controls cell proliferation, differentiation, angiogenesis, and wound healing. Single nucleotide polymorphisms (SNPs) of the FGFR2 gene are linked with Pfeiffer syndrome, Crouzon syndrome, and Jackson-Weiss syndrome. Missense mutations in FGFR2 have also been reported in breast, gastric, and lung cancer. Objective: This study aimed to systematically analyze missense SNPs (nsSNPs) in FGFR2 using an integrative computational pipeline to identify variants with the strongest predicted pathogenic impact. Methods: FGFR2 gene data and protein sequence were retrieved from the NCBI dbSNP [build 153 (March 2020)] and UniProt database [release 2020_02; UniProt ID: P21802]. A total of eleven bioinformatics tools—SIFT, PolyPhen-2, PROVEAN, SNAP2, SNPs&GO, PANTHER, PhD-SNP, PMut, I-Mutant3.0, ConSurf server, and Project HOPE—were used to examine the deleterious potential of nsSNPs. The interaction of FGFR2 with different genes was analyzed using GeneMANIA. Results: We investigated 28,027 total SNPs from dbSNP, of which 701 were coding variants: 294 synonymous and 407 non-synonymous. Among the latter, 393 were missense mutations, 7 frameshift mutations, and 7 nonsense mutations. Only missense nsSNPs were retained for further analysis. Stepwise filtering identified 90 consensus-deleterious variants (≥ 6/8 predictors), 54 extremely damaging variants (all 8 predictors), and 38 unstable variants (ΔΔG ≤ −0.5 kcal/mol). A final set of 24 highly conserved and damaging variants (ConSurf score ≥7) was prioritized. Conclusion: A total of 24 nsSNPs (L757S, G690R, P666S, R664W, D644N, Y616C, L572F, L550P, L550V, I547M, V514M, G502E, G502R, E489K, R450C, P286S, C278Y, G271R, P263L, P256S, D225E, S224P, E219G, and Y105C) were predicted to have the most damaging and disease-causing effects on FGFR2 protein function and structure. Thus, the early prediction of FGFR2 gene functions could aid in disease prognosis. 
The results of our study provide beneficial information for devising early diagnostic and therapeutic measures.</p>Hira SaleemAnum Munir
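The stepwise filtering this abstract reports reduces to a consensus-and-stability filter; the thresholds below match those stated (deleterious calls from at least 6 of 8 predictors, ΔΔG ≤ −0.5 kcal/mol), while the data structures and variant data are illustrative, not the study's pipeline code:

```python
def consensus_deleterious(votes: dict[str, list[bool]],
                          ddg: dict[str, float],
                          min_votes: int = 6,
                          ddg_cutoff: float = -0.5) -> list[str]:
    # Keep variants called deleterious by >= min_votes predictors AND
    # predicted to destabilize the protein (ddG <= cutoff, in kcal/mol).
    return sorted(v for v, calls in votes.items()
                  if sum(calls) >= min_votes and ddg.get(v, 0.0) <= ddg_cutoff)
```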
Copyright (c) 2026 Journal of Computing & Biomedical Informatics
2026-04-18
2026-04-18
10.56979/1101/2026/1067