https://jcbi.org/index.php/Main/issue/feedJournal of Computing & Biomedical Informatics2025-06-23T10:30:03+00:00Journal of Computing & Biomedical Informaticseditor@jcbi.orgOpen Journal Systems<p style="text-align: justify;"><strong>Journal of Computing & Biomedical Informatics (JCBI) </strong>is a peer-reviewed open-access journal recognised by the Higher Education Commission (H.E.C.) Pakistan. JCBI publishes high-quality scholarly articles reporting substantive results on a wide range of learning methods applied to a variety of learning problems. All submitted articles should report original, previously unpublished research results, experimental or theoretical. Articles submitted to the journal must meet these criteria and must not be under consideration for publication elsewhere. Manuscripts should follow the style of the journal and are subject to both review and editing. JCBI encourages authors of original research papers to describe work such as the following:</p> <ul> <li>Articles in the areas of computational approaches, artificial intelligence, big data, software engineering, cybersecurity, the Internet of Things, and data analysis.</li> <li>Articles reporting substantive results on a wide range of learning methods applied to a variety of learning problems.</li> <li>Articles that provide solid support via empirical studies, theoretical analysis, or comparison to psychological phenomena.</li> <li>Articles that respond to a need in medicine, or analyse rare data with novel methods.</li> <li>Articles that involve healthcare professionals; motivation for the work and evaluation results are usually necessary.</li> <li>Articles that show how to apply learning methods to solve important application problems.</li> </ul> <p style="text-align: justify;">Journal of Computing & Biomedical Informatics (JCBI) welcomes work from the interdisciplinary field that studies and pursues the effective use of computational and biomedical data, information, and knowledge for scientific inquiry, 
problem-solving, and decision making, motivated by efforts to improve human health. Novel high-performance computing methods, big data analysis, and artificial intelligence methods that advance medical technologies are especially welcome.</p>https://jcbi.org/index.php/Main/article/view/970Systematic Literature Review on Application of Naive Bayes Algorithm for Large Audio Data Classification2025-05-12T19:31:49+00:00Rabia Tehseenrabia.tehseen@ucp.edu.pkShazia Saqibshazia.saqib@iac.edu.pkUzma Omeruzma.omer@ue.edu.pkAnam Mustaqeemanam.mustaqeem@ucp.edu.pkMaham Mehrmaham.mehr@ucp.edu.pkShahan Yamin Siddiquidrshahan@niit.edu.pk<p>The increasing volume of audio data in areas such as speech recognition, music genre classification, and environmental sound analysis has created a need for more effective and scalable classification algorithms. This systematic literature review focuses on the use of the Naive Bayes algorithm for large-scale audio data classification and evaluates 39 peer-reviewed articles published in the last five years. The review analyses how Naive Bayes has been applied to challenges such as feature extraction, model training, and real-time classification of audio signals, given its simplicity and computational efficiency. We assess its performance against more sophisticated machine learning techniques and its flexibility with pre-processing, ensemble models, and cross-layer control algorithms. Findings demonstrate that although Naive Bayes does not always outperform deep learning algorithms, it remains a strong option when low latency, explainability, and minimal resource use are required. 
The review also points out gaps in existing research and discusses potential approaches to improve the algorithm's performance in audio-based tasks.</p>2025-06-01T00:00:00+00:00Copyright (c) 2025 Journal of Computing & Biomedical Informaticshttps://jcbi.org/index.php/Main/article/view/989Phishing Website URL Detection Using a Hybrid Machine Learning Approach2025-06-09T20:00:45+00:00Muhammad Usman Javeedusmanjavveed@gmail.comShafqat Maria Aslamshafqatmaria34@gmail.comHafiza Ayesha Sadiqaayeshasadiqawarraich@gmail.comAli Razaalirazamscs@gmail.comMuhammad Munawar Iqbalmunwar.iq@uettaxila.edu.pkMisbah Akrammisbahakram399@gmail.com<p>In a relatively short time, the internet has grown and progressed tremendously. With more users and advancements in web development, the internet today supports a large portion of the corporate world. With it, the number of cyber-attacks and threats has skyrocketed, resulting in monetary losses, data breaches, identity theft, brand reputation damage, and a loss of customer trust in online shopping and banking. Phishing is a type of cyber threat in which an attacker impersonates a genuine and trustworthy organization in order to obtain sensitive and private information from a victim. Phishing has been a problem for many years and has cost the global economy billions of dollars. In this study, we examine techniques for addressing the issue of phishing, particularly phishing via websites, and design a solution based on machine learning algorithms to identify phishing websites. To understand the basis of the models' decisions and to examine which attributes are most useful for classifying a website as legitimate or phishing, we also conducted a feature-importance analysis using the provided dataset and solution. In this study we utilized Decision Tree, Random Forest, C-Support Vector Classification, and AdaBoost algorithms for the detection of phishing URLs. 
Random Forest consistently outperformed the other models across all key metrics, demonstrating optimal classification performance by achieving the highest accuracy (97.7%), precision (99%), and F1-score (97%).</p>2025-06-01T00:00:00+00:00Copyright (c) 2025 Journal of Computing & Biomedical Informaticshttps://jcbi.org/index.php/Main/article/view/978A Deep Learning Approach for Modeling Air Quality Dynamics using Historical Environmental Data2025-05-26T05:23:52+00:00Urooj Akramurooj.akram@iub.edu.pkSaima Noreen Khosasaimakhosa@yahoo.comMuhammad Faheem Mushtaqfaheem.mushtaq@iub.edu.pkMuhammad Rizwanrizwan2phd@gmail.comWajahat Hussainjamwajahat@gmail.comRida Fatimafatima.rida55@gmail.comSamrina Shafiqfaheem.mushtaq@iub.edu.pk<p>Deep learning (DL) has emerged as a powerful tool for air quality forecasting, yet achieving consistently high predictive accuracy remains a significant challenge. To improve forecasts of air quality and its index, a number of models have been used, some adopting hybridization approaches. In this research, the capabilities of Convolutional Neural Networks (CNNs) and Artificial Neural Networks (ANNs) are combined to present a deep learning-based ensemble technique for modeling air quality dynamics using historical environmental data. The CNN component enhances the effectiveness of the ensemble model by extracting spatial information that can raise the accuracy of air quality predictions. Furthermore, the model's capacity to recognize intricate spatial-temporal patterns in environmental data is improved when CNNs are combined with ANNs. An air quality dataset is used to train the proposed model and other deep learning models, including CNNs, ANNs, Recurrent Neural Networks (RNNs), and Long Short-Term Memory (LSTM) networks, to predict air quality. 
Pre-processing techniques, such as removal of missing values, categorical value handling, and label encoding, are applied to the dataset to enhance input quality. The proposed ensemble method is compared with the other deep learning models and shows substantially better performance, with accuracy, precision, recall, and F1-score of 0.9985, 0.9988, 0.9984, and 0.9986, respectively. These validated deep learning-based models for air quality prediction can provide valuable insights for authorities and environmental organizations, supporting the development of data-driven pollution control measures and public health initiatives.</p>2025-06-01T00:00:00+00:00Copyright (c) 2025 Journal of Computing & Biomedical Informaticshttps://jcbi.org/index.php/Main/article/view/996Leveraging Improved YOLOv10 for High-Performance License Plate Recognition in Challenging Mass Transit Environments2025-06-23T10:30:03+00:00Faheem Mazharfaheemmazharbalouch@gmail.comNaeem Aslamnaeemaslam@nfciet.edu.pkMuhammad Sajidmsajid@aumc.edu.pkAhmad Naeemahmad.naeem@nfciet.edu.pkMuhammad Fuzailfuzail@nfciet.edu.pk<p>Contemporary intelligent transportation systems depend significantly on automatic license plate identification for law enforcement, surveillance, and toll-collecting functions. Automatic license plate recognition is essential for vehicle management, yet existing systems face challenges such as considerable angle deviations, subpar image quality, environmental disruptions, and obstructions that make precise and rapid recognition difficult in real-world mass transit environments. Although current algorithms may identify license plates under optimal settings, their efficacy frequently diminishes in more intricate scenarios. This study presents a license plate recognition model that utilizes an enhanced YOLOv10 for plate detection and a distinct CNN for character recognition. The improved YOLOv10 algorithm greatly enhances the model’s capacity to extract features and detect license plates efficiently. 
This research incorporates BiFPN, SEAM, and GCNet modules into the augmented YOLOv10 model. Additionally, we utilize extensive data augmentation to enhance the recognition of partially obscured plates in simulated adverse settings, increasing the model’s robustness in real-world applications. Experimental findings from our proprietary AOLP dataset indicate that our methodology attains a precision of 94.89%, a recall of 92.79%, an F1 score of 93.83%, and a mean average precision (mAP) of 94.10% for number plate detection, surpassing the baseline YOLOv10 by 3.22% in precision, 4.01% in recall, 3.83% in F1 score, and 0.56% in mAP. The R² value is 96.15%, accompanied by an RMSE of 4.01. The character recognition module of our suggested model attains an accuracy of 97.87% with a processing duration of 2.1 ms.</p>2025-06-01T00:00:00+00:00Copyright (c) 2025 Journal of Computing & Biomedical Informaticshttps://jcbi.org/index.php/Main/article/view/917Regulating the Future: A Holistic Model for Safe and Ethical Deployment of Robotic Systems in Emerging Industries2025-03-23T10:42:02+00:00Muhammad Bilal bilaltiw.91@gmail.comShahid Ameershahidameer.khan@gmail.comAnam Safdar Awannargesshahbaz20137@gmail.comAyesha Mumtazayeshanouman031@gmail.comSehar Gulsehar.gul@iba-suk.edu.pkNarges Shahbaznargesshahbaz20137@gmail.com<p>The rapid advancement of robotics and automation has created unprecedented opportunities and challenges, necessitating the development of structured frameworks for automated and regulated robotic systems. This study proposes a comprehensive framework designed to address automation, regulatory compliance, ethical considerations, and scalability. Through a combination of literature review, expert interviews, and case studies in industries such as healthcare, manufacturing, and autonomous vehicles, the framework was validated for practical applicability. 
The findings demonstrate significant improvements in compliance efficiency, time-to-market reduction, and public trust. By comparing the framework to existing studies, this research highlights its adaptability and dynamic nature, addressing critical gaps in global regulatory alignment and ethical oversight. The study concludes with implications for practice and theory and offers recommendations for future research to enhance robotic systems' sustainability and global integration.</p>2025-06-01T00:00:00+00:00Copyright (c) 2025 Journal of Computing & Biomedical Informaticshttps://jcbi.org/index.php/Main/article/view/954Bone Tumor Detection in X-ray Images Using Transfer Learning with EfficientNet-B5 2025-04-28T09:27:31+00:00Muhateer Muhammads2019266071@post.umt.edu.pk<p>Bone tumor diagnosis through X-ray imaging is often complex and time-sensitive, with misdiagnoses potentially leading to severe outcomes. In recent years, deep learning has emerged as a powerful tool to assist radiologists in improving diagnostic accuracy and speed. This paper evaluates EfficientNet-B5, a pre-trained convolutional neural network (CNN), for detecting bone tumors in X-ray images. Bone tumors present critical diagnostic challenges, where delayed identification severely impacts patient outcomes. Leveraging transfer learning, we fine-tuned EfficientNet-B5 on a clinical dataset of 170,000 annotated X-ray images (89,200 tumor-positive, 80,800 tumor-negative) to optimize feature extraction for osseous abnormalities. The model achieved 97% accuracy (sensitivity: 96.2%, specificity: 97.8%) on a holdout test set, outperforming ResNet-50 (92%) and DenseNet-201 (94%) under identical training conditions. Cross-dataset validation on the public OsteoSarcoma-2024 corpus confirmed robustness, with 95.3% accuracy. 
Results demonstrate that pre-trained CNNs like EfficientNet-B5 greatly reduce resource-intensive training while maintaining diagnostic precision, offering a scalable solution for early bone tumor detection. This work provides empirical support for integrating lightweight, pre-optimized architectures into clinical imaging pipelines.</p>2025-06-01T00:00:00+00:00Copyright (c) 2025 Journal of Computing & Biomedical Informaticshttps://jcbi.org/index.php/Main/article/view/967A Multidimensional Comparison of Search Engines and Chatgpt in Terms of Information Needs2025-05-11T07:03:05+00:00Umar Khattabumarkhattab.0635689@gmail.comSarah Bukharisarah.bukhari@bzu.edu.pkAyesha Khanayeshakhanum618@gmail.comAbubakar Kamalabubakarkamal543@gmail.com<p>Search engines are an important tool for accessing information today. However, it is not easy to understand how they process different types of information needs, such as general, navigational, and transactional queries, or how relevant their results are. This research compares four widely used search engines (Google, Bing, DuckDuckGo, and Brave) with the popular AI tool ChatGPT. The primary objective is to evaluate how effectively these platforms fulfil various information needs. Each search engine and ChatGPT was tested using the same questions so that their performance could be compared on relevance, accuracy, speed, precision, recall, and user privacy. Traditional search engines like Google and Bing are keyword-based, looking for the exact words the user types, whereas ChatGPT is an AI tool that understands natural language and gives answers based on the meaning and context of the question, not just the exact words. The results from all platforms were divided into two categories, relevant and irrelevant, and showed that each search engine and AI tool has strengths and weaknesses. 
Google and Bing provided accurate answers, Brave and DuckDuckGo delivered faster results, while DuckDuckGo and ChatGPT offered better user privacy protection. This research also highlights the challenge of understanding how various platforms address different information needs. This study helps improve search technologies to increase user satisfaction, improve accuracy, and make information retrieval more efficient.</p>2025-06-01T00:00:00+00:00Copyright (c) 2025 Journal of Computing & Biomedical Informaticshttps://jcbi.org/index.php/Main/article/view/992AI-Driven Edge Computing for IoT: Revolutionizing Phishing Detection and Mitigation2025-06-15T17:33:28+00:00Abdulrehman Arifkhanabdulrehman026@gmail.comMuhammad Zeeshan Haider Alizohairhaider67@gmail.comSyed Zohair Quain Haiderkhanabdulrehman026@gmail.comQasim Niazkhanabdulrehman026@gmail.com<p>The huge and accelerating growth in Internet of Things (IoT) devices has driven the convergence of artificial intelligence (AI) and edge computing, making intelligent, decentralized data processing possible for many kinds of applications. The present work critically assesses the progress of AI-based edge computing in IoT ecosystems between 2020 and 2025, with a specific focus on phishing detection and mitigation. Through a review of peer-reviewed literature, we consider not only state-of-the-art AI technologies such as deep learning, federated learning, and reinforcement learning, but also the architectural innovations relevant to smart cities, healthcare, and industrial IoT. These developments enable real-time data mining, high scalability, and energy-efficient operation, which contribute immensely to the performance of IoT systems. 
Notably, edge AI can act as an enabler of strong phishing detection, as it allows threats to be detected and responded to locally and with low latency, an essential requirement for a safe IoT network in highly sensitive areas. There are, however, challenges: security vulnerabilities, interoperability problems, and the strict latency and resource constraints that characterize edge devices. The review points out missing links such as the absence of universal guidelines for AI model implementation and scant protective measures against advanced phishing attacks; these gaps hinder smooth integration and scaling in heterogeneous IoT environments. We suggest future research directions, which consist of building AI models that adapt to dynamic threat landscapes, standardizing interoperability, and designing lightweight cryptographic solutions well suited to resource-constrained devices. The paper is a complete synthesis of existing scenarios, opportunities, and challenges, and gives research and practice perspectives for making AI-based edge computing more secure and efficient in IoT applications. By addressing these challenges, it would be possible to bring out the full potential of AI at the edge and convert IoT ecosystems into smart, robust cyber networks able to counter new forms of cyberattacks, such as phishing.</p>2025-06-01T00:00:00+00:00Copyright (c) 2025 Journal of Computing & Biomedical Informaticshttps://jcbi.org/index.php/Main/article/view/977Cloudscape Protection: Securing AMI Meters with Multi-Cloud Strategies in Pakistan2025-05-25T19:31:33+00:00Ansar Alam Khananser.nutqani7262@gmail.comAhmad Naeemahmad.naeem@nfciet.edu.pkNaeem Aslamnaeemaslam@nfciet.edu.pkMuhammad Fuzailfuzail@nfciet.edu.pkSahrish Bashirsahrushjannat@gmail.comMuhammad Huzaifa Rashidhuzaifarashid6447@yahoo.com<p>Advanced Metering Infrastructure (AMI) is transforming the relationship between the grid and consumers within the energy sector of Pakistan. 
However, the increased dependence on cloud services also comes with significant security challenges, including data leaks and breaches, unauthorized access, and service outages. To this end, this work advocates a multi-cloud approach as a powerful solution for improving the security, resilience, and performance of AMI systems. By spreading infrastructure across multiple cloud providers, the approach reduces the risks associated with reliance on a single cloud provider, including vendor lock-in and single points of failure. It uses a real AMI dataset and various machine learning techniques, such as Decision Trees, Random Forests, and Support Vector Machines, to detect anomalies and possible intrusions. Of these, Decision Trees, with accuracy as high as 92.5% and good precision and recall, proved the most effective for detecting threats in real time. The paper also discusses the technical, operational, and legal challenges faced in implementing multi-cloud infrastructures in Pakistan, providing perspective on cost-effectiveness and implementation issues. This study adds to the emerging body of knowledge on smart grid security by providing an empirically validated, AI-empowered multi-cloud security framework for developing countries. 
It offers practical value to power utilities, policymakers, and researchers interested in developing scalable, secure smart grid infrastructure.</p>2025-06-01T00:00:00+00:00Copyright (c) 2025 Journal of Computing & Biomedical Informaticshttps://jcbi.org/index.php/Main/article/view/951The Digital Evolution of the Maritime Industry: Unleashing the Power of IoT and Cloud Computing2025-04-27T16:58:13+00:00Abdul Basit Iqbalfa-24-msds-007@lgu.edu.pkFarwa Tariqfa-24-msds-007@lgu.edu.pkIrshad Ahmed Sumrairshadahmed@lgu.edu.pkKhurram Rasheedfa-24-msds-007@lgu.edu.pk<p>The integration of Internet of Things (IoT) and cloud computing is transforming the maritime industry by enhancing efficiency, safety, and environmental sustainability in global shipping operations. Real-time vessel monitoring, predictive maintenance, and intelligent data analytics contribute to reduced operational downtime and improved fleet performance. Cloud-based platforms facilitate seamless communication between ships and ports, streamlining logistics while strengthening cybersecurity protocols. Despite these advancements, heightened connectivity introduces new vulnerabilities to cyber threats, posing potential risks to global supply chain continuity. This study investigates the transformative impact of digital technologies on maritime logistics, using the Port of Rotterdam as a case study, where IoT sensors and digital twin technology are employed to optimize traffic flow and maintenance. Additionally, blockchain solutions such as TradeLens are explored for their role in enhancing cargo visibility and transparency across the supply chain. While these innovations offer significant benefits, they also present challenges, including cybersecurity threats and evolving regulatory landscapes. 
The findings underscore the critical role of advanced digital infrastructure in shaping a secure, efficient, and resilient maritime future.</p>2025-06-01T00:00:00+00:00Copyright (c) 2025 Journal of Computing & Biomedical Informaticshttps://jcbi.org/index.php/Main/article/view/960Content-Based Image Retrieval Using Image Features and Database Signature Indexing2025-05-03T09:37:20+00:00Yosha Jawadyosha.jawad@gmail.comHaider Akashakashalvii512@gmail.comMuhammad Imranmuhammadyasir@uet.edu.pkMuhammad Munawar Iqbalmunwariq@gmail.comMuhammad Yasir muhammadyasir@uet.edu.pkZeeshan Zafarzeeshanuet27@gmail.com<p>Billions of images across the internet offer simple access to visual information and rich texture details. Social media users, business professionals, and researchers often need to find related pictures. Methods for searching images are therefore very important for users and applications. Content-Based Image Retrieval (CBIR) techniques (Content Based Image Retrieval Algorithm Using Colour Models, March 2013) are used to find and retrieve images from databases. In this paper, we introduce a CBIR technique that uses texture, color, and morphological features of images as input queries in order to search for related images in a particular dataset. The image features are stored separately in a feature database in the form of a matrix. The proposed system is not only fast and efficient but also requires little storage. The databases are categorized and indexed with stored images’ signatures for greater accuracy and efficiency. Histogram intersection and Euclidean distance are used to compare the distance of YCbCr color features. 
Our proposed method achieves a recall rate of less than 0.041, which is an exceptional result for the approach.</p>2025-06-01T00:00:00+00:00Copyright (c) 2025 Journal of Computing & Biomedical Informaticshttps://jcbi.org/index.php/Main/article/view/974Cyber Sentry: Strengthening Security Infrastructure for Industrial Cyber-Physical Systems Using Federated Deep Learning2025-05-18T17:32:28+00:00Muhammad Huzaifa Rashidhuzaifarashid6447@yahoo.comAhmad Naeemahmad.naeem@nfciet.edu.pkNaeem Aslamnaeemaslam@nfciet.edu.pkMuhammad Fuzailfuzail@nfciet.edu.pkFaheem Mazharfaheemmazharbalouch@gmail.comMuhammad Umarumaryounus000@gmail.com<p>Industrial Cyber-Physical Systems (ICPS) form the backbone of applications in manufacturing, energy, and healthcare, but they are also under threat from advanced cyberattacks such as zero-day exploits and data leaks. Centralized intrusion detection systems (IDS) have known shortcomings in data security, privacy preservation, and efficiency. To address this challenge, we propose Cyber Sentry, a federated deep learning (DL) framework that enhances the security of ICPS by allowing decentralized, collaborative model training without access to sensitive data. The RT-IoT2022 dataset is vertically partitioned and dynamically pre-processed to train deep neural networks, such as CNNs and LSTMs, locally at edge devices. These local models are aggregated into a strong global model for anomaly detection. The framework is validated experimentally, achieving 92.5% detection accuracy with a negligible false-positive rate while preserving data privacy through encryption mechanisms. The framework also enhances security at the edge layer by leveraging edge computing and blockchain-based security systems to achieve improved scalability and defense capabilities against cyber-attacks. It also shows advantages in terms of reduced communication cost and increased operational availability. 
We offer future research directions from an academic perspective and implications for industry on the adoption of federated learning for ICPS cybersecurity. Future work includes enhancing resistance to adversarial attacks, integrating federated learning with blockchain networks, and applying explainable AI to make the model more interpretable. The quality of the experimental results demonstrates the value of federated deep learning for protecting industrial infrastructures in a connected world.</p>2025-06-01T00:00:00+00:00Copyright (c) 2025 Journal of Computing & Biomedical Informaticshttps://jcbi.org/index.php/Main/article/view/973Hybrid Machine Learning Models for Optimizing Retail Market and Inventory Forecasting2025-05-17T06:03:48+00:00Umair Ismailumairhammad911@gmail.comSaima Noreen Khosasaimakhosa@yahoo.comSaba Tahirsaba.tahir@iub.edu.pkMuhammad Altaf Ahmadmuhammadaltaf.ahmad@iub.edu.pkWajahat Hussainfaheem.mushtaq@iub.edu.pkUrooj Akramurooj.akram@iub.edu.pkMuhammad Faheem Mushtaqfaheem.mushtaq@iub.edu.pk<p>Effective inventory management is critical for retail operations, relying heavily on accurate sales data analysis to optimize stock levels, forecast demand, and minimize supply chain inefficiencies. Traditional machine learning models such as Random Forest (RF), K-Nearest Neighbors (KNN), Logistic Regression (LR), Multinomial Naïve Bayes (MNB), and Support Vector Machine (SVM) have been used to classify sales data, but their limitations in handling complex retail datasets often result in suboptimal performance. In this regard, this research proposes a novel hybrid model based on K-Nearest Neighbors and Support Vector Machine that strategically combines KNN's capability for local pattern recognition with SVM's strength in high-dimensional classification for optimizing retail market and inventory forecasting. 
Through this research on real-world sales data, the proposed hybrid model shows superior performance, achieving an accuracy of 0.9999, a precision of 0.9808, a recall of 0.9808, and an F1-score of 0.9991, demonstrating enhanced performance in comparison to the baseline models. These findings support the effectiveness of hybrid machine learning algorithms in retail analytics, which provide significant gains in sales classification accuracy and inventory prediction reliability. As a result, retailers can make more informed stocking decisions, reduce waste, and enhance overall operational efficiency through data-driven insights.</p>2025-06-01T00:00:00+00:00Copyright (c) 2025 Journal of Computing & Biomedical Informaticshttps://jcbi.org/index.php/Main/article/view/990Integration of Genomics and Bioinformatics for Personalized Medicine: Predicting Drug Responses and Optimizing Treatment Plans2025-06-11T12:08:33+00:00Shankar Laldrshanker_jesni@yahoo.comSaeed Ahmedsaeed.maitlo@lumhs.edu.pkTabeel Tariq Bashir tabeel.aryan@gmail.comAvesh Kumar darziaveshkumar@gmail.comMuhammad Furquanmuhammad.furqan@lumhs.edu.pk<p>The integration of genomics and bioinformatics has revolutionized personalized medicine by enabling precise prediction of drug responses and optimization of personalized treatment plans. This paper reviews current methodologies in genomic data acquisition, bioinformatics pipelines, and multi-omics data integration to identify clinically actionable biomarkers. We discuss machine learning models that leverage high-dimensional genomic and clinical data to predict therapeutic efficacy and adverse drug reactions with high accuracy. Clinical applications across oncology, psychiatry, and cardiovascular medicine demonstrate significant improvements in patient outcomes when treatment decisions are guided by genomic insights. 
We also address critical ethical, legal, and social implications, emphasizing the importance of data privacy, informed consent, and equitable access to genomic technologies. Challenges such as data heterogeneity, limited population diversity, and barriers to clinical implementation are analyzed, alongside future directions including real-world data integration, advanced single-cell genomics, and AI interpretability. This comprehensive approach underscores the transformative potential of genomics and bioinformatics in advancing personalized healthcare and improving treatment efficacy while promoting responsible and equitable use.</p>2025-06-01T00:00:00+00:00Copyright (c) 2025 Journal of Computing & Biomedical Informaticshttps://jcbi.org/index.php/Main/article/view/972Webpage Classification for Search Engine Optimization using Machine Learning2025-05-17T06:00:49+00:00Khurram Zeshan Haiderkhurram.zeeshan@gcuf.edu.pkRimsha Zafarrimshazafar1996@gmail.comQamas Gul Khan Safiqamas.gul@uettaxila.edu.pkMuhammad Awaismuhammadawais@gcuf.edu.pkMuhammad Munwar Iqbalmunwariq@gmail.com<p>Webpage classification for SEO is an essential area of study where machine learning, especially Deep Neural Networks (DNNs), plays a crucial role. This paper aims to develop an accurate malicious and benign page classifier using DNNs for webpage classification in SEO. The research approach covers data collection, feature selection, model construction, training and evaluation, handling of imbalanced data, and practical implementation considerations. The dataset contains features such as raw webpage content, geographical location, JavaScript length, and obfuscated JavaScript code of the webpage. The dataset has about 1.5 million web pages: 300,000 are used for testing, while 1.2 million are used for training. 
The dataset is highly skewed: 98.35% of the webpages are benign and 2.27% are malicious, with a training set totaling 401,806 instances, consisting of 25,770 benign webpages (6.41%) and 9,472 malicious webpages (2.35%). Our model is trained rigorously to identify patterns indicative of malicious intent, and it demonstrates robust classification on a test dataset of 398,125 instances, including 23,298 benign webpages (5.8%) and 9,344 malicious webpages (2.34%). Choosing the evaluation metrics carefully is essential, as accuracy alone does not give a correct evaluation on such imbalanced data; we therefore report an F1-score of 97.73%, a recall of 95.2%, and a precision of 96%, along with a confusion matrix. As a result, this paper addresses the challenge of accurately differentiating between malicious and benign websites. The outcomes of this research contribute to webpage classification in SEO by leveraging DNNs to accurately classify malicious and benign webpages.</p>2025-06-01T00:00:00+00:00Copyright (c) 2025 Journal of Computing & Biomedical Informaticshttps://jcbi.org/index.php/Main/article/view/981Meta-Sharding: A Novel Approach to Scaling Byzantine Consensus in High-Frequency Trading Blockchains2025-05-29T09:42:08+00:00Tayyaba Akhtartayyabaakhtar013@gmail.comTehzeen Faisaltehzeenfaisal04@gmail.comKhushbu Khalid Buttdrkhushbukhalid@lgu.edu.pkMehak Kausarmehak.kausar@ucp.edu.pkNazish Umar AwanNazishumar@lgu.edu.pk<p>Byzantine fault tolerance (BFT) consensus protocols remain a main bottleneck for large-scale blockchain deployments because of their inherent scalability limitations. This paper presents Meta-Sharding, a new consensus protocol that addresses the O(n²) communication complexity of standard PBFT through the use of sharding. The method splits the network into parallel processing shards under the control of a meta-committee, allowing near-linear throughput scalability while preserving Byzantine fault tolerance guarantees. 
In simulations with network sizes between 50 and 1,000 nodes, experimental results show that Meta-Sharding achieves roughly 23,000 transactions per second (TPS) at 1,000 nodes, as opposed to 400-600 TPS for standard PBFT. Although Meta-Sharding incurs higher latency (175 ms compared to 30 ms), its processing efficiency (expressed as TPS/latency) rises sharply, reaching 130 TPS/ms at network sizes where conventional PBFT stays below 20 TPS/ms. The design includes resilient fault-tolerance features such as view changes and cross-shard transaction coordination via a two-stage commit protocol. The envisioned architecture has significant implications for blockchain applications that demand both high transaction throughput and Byzantine fault tolerance at scale.</p>2025-06-01T00:00:00+00:00Copyright (c) 2025 Journal of Computing & Biomedical Informaticshttps://jcbi.org/index.php/Main/article/view/956Predict the Outbreak of Sudden Heart Failure in Dialysis Patients using Machine Learning2025-04-29T18:49:44+00:00Noman Khannoman.khan@students.uettaxila.edu.pkMuhammad Javed Iqbaljaved.iqbal@uettaxila.edu.pkMuhammad Munwar Iqbalmunwariq@gmail.comQama Gul Khan Safiqamas.gul@uettaxila.edu.pkZeeshan Saleemzsaleem009@gmail.com<p>Technological advancements in IoT, Artificial Intelligence, and Machine Learning enable modern and convenient patient monitoring opportunities for paramedics. The integration of machine learning into contemporary medical diagnosis platforms enables healthcare practitioners to diagnose potential heart failure in dialysis patients at an early stage. The medical condition of dialysis patients, together with the treatment itself, elevates the probability of cardiac events, yet accurate prediction of these events remains a difficult task for the paramedical staff who provide care during dialysis sessions.
Dialysis brings numerous complications to patients, characterized by blood pressure changes and heartbeat abnormalities, as well as temperature fluctuations and psychological challenges. To address this problem, this research uses machine learning techniques to predict sudden heart failure early during the dialysis period, collecting data from dialysis patients and passing it to machine learning models. We utilize Logistic Regression, KNN, Naïve Bayes, Decision Tree, Support Vector Machine, Artificial Neural Network, and XGBoost models to predict sudden cardiac arrest, and measure each model's accuracy, precision, recall, and F-score. The results indicate 73.9% accuracy for Logistic Regression, 88.9% for KNN, 94.1% for Decision Tree, 70.9% for Naïve Bayes, 83.9% for SVM, 95.6% for XGBoost, and 89.4% for ANN. According to the final analysis, the XGBoost model therefore best predicts sudden heart failure in dialysis patients.</p>2025-06-01T00:00:00+00:00Copyright (c) 2025 Journal of Computing & Biomedical Informaticshttps://jcbi.org/index.php/Main/article/view/969Enhanced Liver Disease Prediction using Multi-Dataset Integration and CNN Optimized with Pelican Algorithm2025-05-11T06:02:30+00:00Najjia Karimnajjiakarim815@gmail.comNaeem Aslamnaeemaslam@nfciet.edu.pkMuhammad Fuzailfuzail@nfciet.edu.pkAhmad Naeemahmad.naeem@nfciet.edu.pk<p>Several prediction techniques for liver disorders have been developed; however, they are costly and sophisticated. This work aims to develop an effective approach for detecting liver disorders in their early stages, describing a convolutional neural network (CNN) infrastructure for hepatic failure forecasting.
The Pelican Optimisation Algorithm (POA) balances bounding-box regression and branching training losses for the CNN model. The liver disease characteristics were taken from three datasets: Indian Liver Patient Records (ILPR), Hepatitis C, and the Cirrhosis Prediction dataset. The POA-modified CNN model chiefly identifies relationships between various laboratory values and diagnoses. The proposed model outperforms SOTA methods, including the Opposition-based Laplacian Equilibrium Optimiser, Adaptive Hybridised Deep CNN, SVM, and tree-based classifiers, in terms of accuracy, precision, recall, F-measure, and Matthews Correlation Coefficient (MCC). The proposed model achieves an MCC of 94.8945%, accuracy of 98.6743%, precision of 96.2436%, F1-measure of 97.5524%, and recall of 95.7887%. The findings show that the suggested strategy effectively predicts liver illness early on through automated screening, reducing strain on caregivers.</p>2025-06-01T00:00:00+00:00Copyright (c) 2025 Journal of Computing & Biomedical Informaticshttps://jcbi.org/index.php/Main/article/view/979Healthcare and Management Strategies to Improve Balance and Proprioception of Motor Control in Neuro Patients: A Narrative Review2025-05-26T10:44:46+00:00Fatima Ejazfatimaijazpt@gmail.comSaleh shahsalehshah83@gmail.comM. Naveed Baburnaveed.babur@superior.edu.pk<p>Neurological disorders such as stroke, Parkinson’s disease, multiple sclerosis, cerebral palsy, and traumatic brain injury often result in significant impairments in balance and proprioception, leading to compromised motor control, increased fall risk, reduced mobility, and decreased quality of life. These motor deficits stem from disruptions in sensory-motor integration due to damage within the central and peripheral nervous systems.
This narrative review synthesizes evidence from 30 to 40 peer-reviewed studies published between 2020 and 2025, focusing on healthcare and management strategies aimed at enhancing balance and proprioception in neuro patients. The review covers neurophysiological mechanisms, including the role of the cerebellum, basal ganglia, somatosensory cortex, and proprioceptors in motor control. It highlights the clinical efficacy of task-oriented training, proprioceptive neuromuscular facilitation (PNF), sensory reeducation, and complementary techniques such as Tai Chi and aquatic therapy. Furthermore, the review evaluates advanced rehabilitation technologies such as robotic-assisted therapy, virtual reality systems, wearable sensors, and biofeedback mechanisms that enable high-intensity, repetitive, and task-specific motor training. Non-invasive neuromodulation techniques (e.g., tDCS, TMS) and pharmacological interventions are also discussed for their supportive roles in enhancing rehabilitation outcomes. Multidisciplinary rehabilitation programs are emphasized for their holistic, team-based approach to restoring function and independence. Emerging strategies such as personalized rehabilitation through machine learning, stem cell therapy, and regenerative medicine offer promising future directions. This review provides clinicians, rehabilitation professionals, and policymakers with a comprehensive, evidence-based framework to optimize neurorehabilitation outcomes. 
Early intervention and integrative strategies tailored to individual needs remain crucial for maximizing recovery and improving quality of life in neuro patients.</p>2025-06-01T00:00:00+00:00Copyright (c) 2025 Journal of Computing & Biomedical Informaticshttps://jcbi.org/index.php/Main/article/view/855Transforming Agriculture with IoT and Deep Learning: A Smart Approach to Precision Farming and Sustainability2025-02-07T11:07:56+00:00Nida Batoolbatoolnida053@gmail.comIrshad Ahmed Sumrairshadahmed@lgu.edu.pkHamza Shahab Awanbatoolnida053@gmail.comAli Razabatoolnida053@gmail.com<p>The agricultural sector faces tremendous challenges, such as climate change, resource depletion, and rapidly rising demand for food across the world. Traditional farming methods are not efficient and sustainable enough, demanding the integration of advanced technologies. The combination of the Internet of Things (IoT) and Deep Learning (DL) offers a transformative solution to modern agriculture problems. In this paper, three vital areas where this technology can contribute are explored, namely precision irrigation, crop disease detection, and resource optimization, employing deep learning and IoT. Smart farming can use Long Short-Term Memory (LSTM) networks for time-series predictions and Convolutional Neural Networks (CNNs) for disease diagnosis from imaging to increase efficiency, reduce resource wastage, and increase crop yield. By combining real-time sensor data with predictive analytics, farmers gain data-driven decision-making power, enabling them to reduce water consumption, pesticide overuse, and operational costs. While data integration, high implementation costs, and connectivity constraints hinder IoT adoption, developments in IoT infrastructure and AI models are making such systems more scalable and accessible.
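The time-series prediction role that LSTM networks play above can be illustrated in miniature with a far simpler smoother; the following is a hypothetical stand-in for forecasting a sensor reading (e.g. soil moisture), not the paper's model:

```python
def exp_smooth_forecast(readings, alpha=0.5):
    """One-step-ahead forecast of a sensor series by exponential smoothing.

    A deliberately simple stand-in for LSTM-style time-series models:
    the smoothed level after the last reading serves as the forecast
    for the next one. alpha weights recent readings more heavily.
    """
    level = readings[0]
    for r in readings[1:]:
        level = alpha * r + (1 - alpha) * level
    return level
```

A constant series forecasts itself, and a jump from 0 to 10 with alpha = 0.5 forecasts 5.0; a real deployment would replace this with a learned sequence model fed by the IoT sensor stream.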
This paper highlights the potential of IoT and deep learning to enable a more productive, sustainable, and intelligent agricultural ecosystem.</p>2025-06-01T00:00:00+00:00Copyright (c) 2025 Journal of Computing & Biomedical Informaticshttps://jcbi.org/index.php/Main/article/view/965A Deep Instance-Based Learning Model for Content-Based Image Retrieval2025-05-06T08:23:46+00:00Ayesha Bibiayeshariaz028@gmail.comHumayun Salahuddinhumayun.salahuddin@riphahsahiwal.edu.pkKhawaja Tehseen Ahmedhumayun.salahuddin@riphahsahiwal.edu.pkSayyam Zahrahumayun.salahuddin@riphahsahiwal.edu.pkMamoona Shafiquemamoonashafique72@gmail.com<p>For over a decade, content-based image retrieval (CBIR) has been an active field. It is not possible to compare the performance of two such systems by objective means; consequently, finding successful or promising ways forward is very challenging, which delays progress in the field. Determining whether a CBIR application is of good quality is difficult, which limits how well such systems can be commercialized: a serious application cannot be developed or grown commercially unless its reliability can be proven. The TREC metric is frequently used for operations within text documents, and TPC is usually used for database processing. Because of the framework in place, systems can now be checked against small, publicly available test databases. This work sets out to build an image retrieval system that uses deep learning to capture the similarity of images belonging to certain classes through learnable features and a similarity measure, supported by Inception-v3 CNN technology.
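At query time, retrieval with learned features and a similarity measure reduces to ranking gallery feature vectors against the query's. A minimal cosine-similarity sketch (the feature vectors and names here are invented placeholders for CNN embeddings):

```python
import math

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def retrieve(query_vec, gallery, top_k=3):
    """Rank gallery feature vectors by similarity to the query.

    gallery: dict mapping image name -> feature vector.
    Returns the top_k most similar image names.
    """
    ranked = sorted(gallery.items(),
                    key=lambda kv: cosine(query_vec, kv[1]),
                    reverse=True)
    return [name for name, _ in ranked[:top_k]]
```

In a Siamese setup the embedding itself is trained so that same-class images land close under this measure; the ranking step stays the same.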
To achieve simplicity, strong retrieval performance, and efficiency, the CNN features are combined with a Siamese design.</p>2025-06-01T00:00:00+00:00Copyright (c) 2025 Journal of Computing & Biomedical Informaticshttps://jcbi.org/index.php/Main/article/view/952An Image Processing System for the Detection of Cotton Crop Diseases2025-04-27T17:12:55+00:00Benish Khalidirshadahmed@lgu.edu.pkKhushbu Khalid Buttirshadahmed@lgu.edu.pkHamza Shahab Awanirshadahmed@lgu.edu.pkIrshad Ahmed Sumrairshadahmed@lgu.edu.pk<p>The cotton sector is a significant part of Pakistan's industrial economy, substantially contributing to the nation's GDP. However, sustaining cotton yields has become increasingly difficult due to diseases exacerbated by climate change, which threaten both export revenue and local livelihoods. Traditional methods for identifying these diseases are often inaccurate and inefficient, resulting in significant crop losses and delayed responses. To address this critical issue, this study proposes an AI-driven system that leverages digital image processing and machine learning to provide a scalable, real-time solution for cotton leaf disease detection. By enabling early and accurate identification of diseases, the system facilitates proactive crop management and helps farmers make timely decisions. Integrating a mobile application with cloud-based analytics supports policymakers in effectively tracking disease outbreaks and allocating resources. While promising, future improvements should focus on enhancing data diversity and computational efficiency to ensure widespread adoption.
This study offers a valuable contribution to improving agricultural sustainability and productivity in regions heavily reliant on cotton cultivation.</p>2025-06-01T00:00:00+00:00Copyright (c) 2025 Journal of Computing & Biomedical Informaticshttps://jcbi.org/index.php/Main/article/view/987Sentiment Analysis of Urdu Language Using Machine Learning Models2025-06-04T06:01:09+00:00Haroon Yousafimrankhalil@uetpeshawar.edu.pkM. Imran Khan Khalilimrankhalil@uetpeshawar.edu.pkZia Ur Rahmanimrankhalil@uetpeshawar.edu.pkFirdous Ayubimrankhalil@uetpeshawar.edu.pkYousaf Khanimrankhalil@uetpeshawar.edu.pkAsif Nawazimrankhalil@uetpeshawar.edu.pkZeeshan Najamimrankhalil@uetpeshawar.edu.pkSheeraz Ahmedimrankhalil@uetpeshawar.edu.pk<p>The purpose of this study was to develop a sentiment analysis tool and system for the Urdu language using machine learning models and deep learning algorithms. In the modern age of social media communication and interaction, analyzing the opinions of users and customers about business products, political discourses, and religious and cultural critiques and debates is much needed to explore their sentiments about services, products, scenarios, and stories. The researchers created a dataset from tweets and posts in Urdu on social media platforms, then applied pre-processing and lemmatization to make the data suitable for training and testing with machine learning algorithms such as Support Vector Machine, Naïve Bayes, BiLSTM, and other related models. The sentiment analyzer categorized sentiments in Urdu text into nine categories, and the resulting tool was tested for effectiveness and efficiency, achieving more than 50% accuracy with a high level of validity.
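A bag-of-words Naïve Bayes classifier of the kind used alongside SVM and BiLSTM above can be sketched as follows (a generic illustration with invented tokens and labels, not the study's system; real Urdu text would first pass through the pre-processing and lemmatization steps described):

```python
import math
from collections import Counter, defaultdict

def train_nb(docs):
    """docs: list of (tokens, label) pairs.

    Returns, per label: log-prior, Laplace-smoothed log-likelihoods,
    and a fallback log-probability for unseen words.
    """
    label_counts = Counter(label for _, label in docs)
    word_counts = defaultdict(Counter)
    vocab = set()
    for tokens, label in docs:
        word_counts[label].update(tokens)
        vocab.update(tokens)
    model = {}
    for label in label_counts:
        total_words = sum(word_counts[label].values())
        denom = total_words + len(vocab)  # add-one smoothing
        model[label] = (
            math.log(label_counts[label] / len(docs)),
            {w: math.log((word_counts[label][w] + 1) / denom) for w in vocab},
            math.log(1 / denom),
        )
    return model

def predict_nb(model, tokens):
    """Pick the label maximizing log-prior + summed log-likelihoods."""
    def score(entry):
        prior, likelihood, unseen = entry
        return prior + sum(likelihood.get(w, unseen) for w in tokens)
    return max(model, key=lambda label: score(model[label]))
```

The same interface extends naturally from binary labels to the nine sentiment categories mentioned above.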
Thus, a sentiment analysis system for the Urdu language was developed using machine learning models to bridge the gap in Urdu sentiment analysis in Pakistan.</p>2025-06-01T00:00:00+00:00Copyright (c) 2025 Journal of Computing & Biomedical Informaticshttps://jcbi.org/index.php/Main/article/view/975Secure and Interpretable Intrusion Detection through Federated and Ensemble Machine Learning with XAI 2025-05-19T09:32:31+00:00Sikander Javedsikanderjaved1986@gmail.comNaveed Mukhtarnaveedmukhtar000@gmail.comShahid Iqbaldrshahan@niit.edu.pkSyed Asad Ali Naqvisyedasad.alinaqvi@superior.edu.pkAmna Ishtiaqamna.ishtiaq@giu.edu.pkShahan Yamin Siddiquidrshahan@niit.edu.pkMuhammad Ammarammardev921@gmail.com<p>In today’s digital era, with the expansion of internet-connected systems, the security of network systems is becoming increasingly critical, along with the risk of sophisticated cyber-attacks. An Intrusion Detection System (IDS) is required that can identify unauthorized and harmful attacks while protecting the network environment. Despite this capability, machine learning (ML) based IDSs raise concerns related to data privacy, generalizability, scalability, and transparency. To address these challenges, this study proposes a novel framework combining ML and explainable artificial intelligence (XAI). Federated learning (FL) is a machine learning technique that enhances security and data privacy in network systems; it is integrated in this study along with ensemble learning in the IDS. FL ensures data privacy while training models locally at distributed nodes without sharing raw data, meeting regulatory requirements. A powerful ensemble algorithm is incorporated to enhance accuracy in predicting attacks across diverse patterns and types.
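The federated training step described above is commonly realized with the FedAvg aggregation rule: each node trains locally and only model weights, never raw traffic data, travel to the aggregator. A generic sketch (not the study's implementation):

```python
def fed_avg(client_weights, client_sizes):
    """Federated averaging of locally trained weight vectors.

    client_weights: one flat weight vector per client node.
    client_sizes: local dataset size per client, used to weight
    each contribution, so raw data never leaves the clients.
    """
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(dim)
    ]
```

The aggregated vector is broadcast back to the nodes for the next local training round; the privacy benefit comes from exchanging only these aggregates.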
Moreover, Explainable AI provides explanations of model predictions; Shapley Additive Explanations (SHAP) are incorporated in this study to interpret the model’s predictions. SHAP highlights the contribution of each individual feature, thereby enabling better human understanding and ensuring trust in AI-based models. The FL-based ensemble learning model is evaluated on the NID dataset, a widely accepted benchmark for intrusion detection, thereby providing validation. Superior performance is achieved in terms of accuracy, precision, recall, F1-score, and AUROC. A powerful solution for security and privacy preservation is developed by combining FL, ensemble ML, and XAI. Thus, the proposed framework contributes significantly to the advancement of AI in cybersecurity and in environments where data sensitivity is crucial.</p>2025-06-01T00:00:00+00:00Copyright (c) 2025 Journal of Computing & Biomedical Informaticshttps://jcbi.org/index.php/Main/article/view/976Unveiling Hidden Themes of Gender Specific Social Anxiety through Linguistic Exploratory Analysis Using LLM and Topic Modeling Techniques2025-05-24T08:59:01+00:00Muhammad Rizwanrizwan2phd@gmail.comSaima Noreen Khosasaimakhosa@yahoo.comMaryam Rafiqrepunzal.marry6@gmail.comRida Fatimafatima.rida55@gmail.com<p>This article aims to unveil hidden themes related to social anxiety in a gender-specific manner. For this purpose, a dataset of over 12,000 Reddit posts related to social anxiety was used. Traditional preprocessing steps, including lemmatization, were applied to clean the data. Initially, Llama 3 was employed for zero-shot gender classification, using an appropriate prompt to label the posts by gender. The zero-shot classification was then evaluated against human judgment and baseline algorithms. Top2Vec was fine-tuned to identify prevalent linguistic traits and topics within the female and male groups.
Various embedding methods were evaluated, with coherence scores used as the metric for selecting the embedding that yields the most coherent topics; Doc2Vec gave the best coherence score. The optimal settings generated topic vectors with relevant keywords for each gender, highlighting key social anxiety themes. A method was devised to identify the most similar and dissimilar topics for both genders. The analysis revealed significant similarities between male and female social anxiety posts in themes of social interaction, mental health, daily activities, dating, and professional communication. Conversely, the least similar topics in female social anxiety posts compared to male posts centered on issues such as appearance, facial expressions, school interactions, and strategies for overcoming social anxiety. This analysis underscores the diverse contexts of social anxiety experiences across genders.</p>2025-06-01T00:00:00+00:00Copyright (c) 2025 Journal of Computing & Biomedical Informaticshttps://jcbi.org/index.php/Main/article/view/980Color Image Segmentation Optimization: Threshold Edge Detection with Harmonic and Wiener Filter Enhancements2025-05-27T08:18:04+00:00Muhammad Yousifmyousif.cs@mul.edu.pkFaiza Rehmanfaizanaseer.cs@mul.edu.pkArfan Ali Nagraarfanalinagra@lgu.edu.pkSohaib Saleemshoaibsaleem.cs@mul.edu.pkAnas Abdulwahidanasabdulwahid.labengr-cs@mul.edu.pkMuhammad Sarfrazsarfrazmoor@gmail.com<p>Digital photos can be segmented to find objects, borders, and other relevant information. Segmentation can be done in a variety of ways, including Watershed Segmentation, Region-Based Segmentation, Edge-Based Segmentation, Threshold-Based Segmentation, and Cluster-Based Segmentation. These methods produce a segmented image, a partition covering every pixel of the image; pixels represent the color, texture, and other elements of an image, and the image is divided into separate objects or areas.
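An edge-based, threshold-driven step of the kind listed above can be sketched with the Sobel operator: compute a gradient magnitude per pixel and mark it as an edge when it exceeds a threshold. This is a simplified illustration (no harmonic/Wiener denoising, border pixels skipped):

```python
def sobel_edges(img, threshold):
    """Threshold-based edge detection with the Sobel operator.

    img: 2-D list of gray-scale intensities.
    Returns a binary edge map: 1 where gradient magnitude > threshold.
    """
    gx_k = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # horizontal gradient kernel
    gy_k = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # vertical gradient kernel
    h, w = len(img), len(img[0])
    edges = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(gx_k[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(gy_k[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            if (gx * gx + gy * gy) ** 0.5 > threshold:
                edges[y][x] = 1
    return edges
```

On a vertical step image the interior columns straddling the step are flagged, while a flat image produces no edges; the choice of threshold controls this sensitivity.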
This work optimizes color image segmentation using threshold-based edge detection, improved through harmonic and Wiener filters to reduce noise. The method simplifies and changes the representation of an image into something (a line or curve drawing) that highlights intensity changes, which is more meaningful and easier to analyze. The approach converts a color image into a gray-scale image and applies different filters (Roberts, Prewitt, Sobel, LoG, and Canny) with edge detection techniques. The applied edge detection technique is also useful with a filter that gives sharp and exact results. This research uses a threshold value of 0.67 for color images to check for better performance, implemented in MATLAB. Our technique helps to improve edge detection through the combination of additional filters, namely Harmonic and Wiener, to eliminate noise from the image.</p>2025-06-01T00:00:00+00:00Copyright (c) 2025 Journal of Computing & Biomedical Informaticshttps://jcbi.org/index.php/Main/article/view/955 Voice-Based Gender Identification Using Machine Learning2025-04-28T09:45:06+00:00Umair Ijazstevejadav1998@gmail.comMuhammad Munwar Iqbalmunwariq@gmail.comZe Shan Alizeshan.ali3@students.uettaxila.edu.pkAnees Tariqanees.t321@gmail.comRomail Khanromailk818@gmail.com<p>Automatic gender classification (AGC) based on voice signals plays a crucial role in biometric authentication, speech analytics, and human-computer interaction. This study proposes a hybrid machine learning framework that integrates Mel-Frequency Cepstral Coefficients (MFCC) for feature extraction, Principal Component Analysis (PCA) for dimensionality reduction, and a Convolutional Neural Network (CNN) for classification. The model was trained on a curated dataset of 3,497 Urdu-language voice samples collected from publicly available YouTube recordings and processed for gender classification tasks, encompassing speakers of varying genders and dialects.
Addressing limitations in prior approaches, the proposed method combines traditional spectral features with deep learning techniques to enhance classification performance. The system achieved an accuracy of 98.4%, along with strong precision, recall, and F1-score metrics, outperforming baseline models such as Support Vector Machines (SVM) and k-Nearest Neighbours (KNN). These findings support the model’s applicability in real-world use cases, including virtual assistants, automated call routing, and emotion-aware computing systems.</p>2025-06-01T00:00:00+00:00Copyright (c) 2025 Journal of Computing & Biomedical Informatics
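The PCA dimensionality-reduction stage in the MFCC-PCA-CNN pipeline above can be sketched with a standard SVD-based projection. This is a generic illustration, not the study's code; treating the rows of X as per-sample feature vectors (e.g. stacked MFCCs) is an assumption:

```python
import numpy as np

def pca_reduce(X, n_components):
    """Project feature vectors onto their top principal components.

    X: (n_samples, n_features) array, one feature vector per row.
    Returns the (n_samples, n_components) scores in the reduced space.
    """
    Xc = X - X.mean(axis=0)                  # center each feature
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ vt[:n_components].T          # project onto top components
```

The reduced vectors would then feed the CNN classifier; keeping only the top components discards low-variance directions that mostly carry noise.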