Journal of Computing & Biomedical Informatics
https://jcbi.org/index.php/Main
<p style="text-align: justify;"><strong>Journal of Computing & Biomedical Informatics (JCBI) </strong>is a peer-reviewed open-access journal recognised by the Higher Education Commission (HEC), Pakistan. JCBI publishes high-quality scholarly articles reporting substantive results on a wide range of learning methods applied to a variety of learning problems. All submitted articles should report original, previously unpublished research results, whether experimental or theoretical. Articles submitted to the journal should meet these criteria and must not be under consideration for publication elsewhere. Manuscripts should follow the style of the journal and are subject to both review and editing. JCBI encourages authors of original research papers to describe work such as the following:</p> <ul> <li>Articles in the areas of computational approaches, artificial intelligence, big data, software engineering, cybersecurity, the Internet of Things, and data analysis.</li> <li>Articles reporting substantive results on a wide range of learning methods applied to a variety of learning problems.</li> <li>Articles providing solid support via empirical studies, theoretical analysis, or comparison to psychological phenomena.</li> <li>Articles that respond to a need in medicine, or analyse rare data with novel methods.</li> <li>Articles involving healthcare professionals; motivation for the work and evaluation results are usually necessary.</li> <li>Articles showing how to apply learning methods to solve important application problems.</li> </ul> <p style="text-align: justify;">The Journal of Computing & Biomedical Informatics (JCBI) embraces the interdisciplinary field that studies and pursues the effective use of computational and biomedical data, information, and knowledge for scientific inquiry, problem-solving, and decision-making, motivated by efforts to improve human health. 
Novel high-performance computing methods, big data analysis, and artificial intelligence methods that advance these fields are especially welcome.</p>
Journal of Computing & Biomedical Informatics
en-US
2710-1606
<p>This is an open-access article published by the Research Center of Computing & Biomedical Informatics (RCBI), Lahore, Pakistan under the <a href="http://creativecommons.org/licenses/by/4.0">CC BY 4.0 International License</a></p>
Evaluating Analysis of Different Machine Learning Models for Identification of Fake News
https://jcbi.org/index.php/Main/article/view/558
<p>Fake news is rapidly becoming a major threat to society. This research investigates different machine learning (ML) techniques for classifying fake news, in an attempt to overcome the deficits of previous approaches. Historically its effects posed only a relatively moderate threat; however, it has proven to be a dynamic phenomenon that demands more effective methods to combat it efficiently. The study uses synthetic and real news articles in their entirety, with enhanced preprocessing techniques to ensure data credibility. We evaluated a variety of models, including conventional ones such as Naive Bayes and linear SVM as well as state-of-the-art neural network models, including LSTMs, GRUs, and more complex multi-layer architectures. The work delivers a substantial improvement over classical methods, reaching an accuracy of more than 96% with custom models based on a bidirectional LSTM and an attention mechanism. This study contributes to the field by demonstrating the applicability and effectiveness of deep learning approaches to fake news detection and offers a basis for further studies to achieve better outcomes.</p>Misbah Abid, Muhammad Ashraf, Nazia Nazir, Jawad Ahmad, Abid Ali Hashmi
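A baseline like the conventional Naive Bayes model mentioned in the abstract can be sketched in a few lines of pure Python (an illustrative toy, not the authors' bidirectional LSTM model; the example headlines and labels below are invented):

```python
import math
from collections import Counter

def train_nb(docs):
    """Train a bag-of-words multinomial Naive Bayes model with add-one smoothing.
    docs: list of (text, label) pairs."""
    word_counts = {"fake": Counter(), "real": Counter()}
    label_counts = Counter()
    vocab = set()
    for text, label in docs:
        tokens = text.lower().split()
        word_counts[label].update(tokens)
        label_counts[label] += 1
        vocab.update(tokens)
    return word_counts, label_counts, vocab

def predict_nb(model, text):
    word_counts, label_counts, vocab = model
    total_docs = sum(label_counts.values())
    best_label, best_score = None, float("-inf")
    for label in label_counts:
        # log prior + sum of smoothed log likelihoods
        score = math.log(label_counts[label] / total_docs)
        total = sum(word_counts[label].values())
        for tok in text.lower().split():
            score += math.log((word_counts[label][tok] + 1) / (total + len(vocab)))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

docs = [
    ("shocking miracle cure doctors hate", "fake"),
    ("you won a free prize click now", "fake"),
    ("parliament passes annual budget bill", "real"),
    ("central bank holds interest rates steady", "real"),
]
model = train_nb(docs)
print(predict_nb(model, "miracle prize click now"))  # fake
print(predict_nb(model, "bank passes budget"))       # real
```

In practice such a baseline is what the neural models are measured against; the paper's reported gain over 96% accuracy is relative to classifiers of roughly this family.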
Copyright (c) 2024 Journal of Computing & Biomedical Informatics
2024-09-01 2024-09-01 702
Spatial Inequalities in Education: An Analysis of Infrastructure, Teacher Quality, Parental Involvement, and Technology Integration
https://jcbi.org/index.php/Main/article/view/583
<p>This literature review addresses a leading question in educational research: what are the major factors that influence educational outcomes? It synthesizes previous studies on spatial inequalities, infrastructure, teacher quality, parental involvement, and technology integration in shaping educational performance. The analysis shows how differences in these variables translate into different educational outcomes across regions and demographic groups. In that regard, it captures the challenges faced by rural and economically disadvantaged areas, where limited resources mostly lead to low-quality education. School infrastructure, teacher qualification levels, and parental involvement are major leading indicators of student achievement. The review further indicates that the role of technology in education presents both opportunities and challenges. The results reflect that many interwoven variables affect educational outcomes, calling for comprehensive strategies that address a number of interrelated issues.</p>Tayyaba Abbas, Muhammad Azam, Momina Ayesha
Copyright (c) 2024 Journal of Computing & Biomedical Informatics
2024-09-01 2024-09-01 702
Cognitive Architectures for Enhancing Self-Directed Learning in Humanoid Robots
https://jcbi.org/index.php/Main/article/view/578
<p>The feature of "ongoing development" in robots refers to the ability to continually build on what the system already knows, through an ongoing process that acquires new skills and knowledge and achieves more sophisticated levels of behavior. Human infants are presumably the best-known demonstrators of this ability, and developmental psychology has documented extensively what infants can and cannot do at different ages. On the robotics side, creating a computational system that displays ongoing development remains an unresolved problem. Open questions include: which features can be covered by cognitive robotics development, and which features are beyond the reach of robots? In this research, a comparative study of human infants and robots is used to determine which features are necessary for robots and why, how they can be useful to us, and by which technologies they can be achieved. We also identify which features of infants remain beyond the reach of robots.</p>Muhammad Kamran, Furrakh Shahzad, Binish Raza, Kiran Naz, M. Rashid
Copyright (c) 2024 Journal of Computing & Biomedical Informatics
2024-09-01 2024-09-01 702
A Comparative Analysis of AI Chatbot Performance in IoT Environments
https://jcbi.org/index.php/Main/article/view/552
<p>This research undertakes a comprehensive comparative analysis of AI chatbots to identify strengths, weaknesses, and potential areas for improvement. By scrutinizing key performance metrics such as natural language understanding, response generation, dialogue management, and task completion, this study aims to contribute to the comparison of chatbot technologies. A rigorous evaluation of existing chatbots provides valuable insights into the underlying algorithms, architectures, and training data that influence their performance. Furthermore, by benchmarking chatbots across diverse domains and applications, this research seeks to establish a deep learning approach for assessing chatbot capabilities. The findings will inform the development of more sophisticated and effective chatbots, benefiting both researchers and industry practitioners, and contribute to the broader field of computer science by advancing the state of the art in natural language processing, machine learning, and human-computer interaction. Conversational agents (CAs), or chatbots, powered by Artificial Intelligence (AI) have emerged as a promising solution; however, selecting the optimal chatbot platform for a specific connected environment can be challenging. This paper proposes a novel approach utilizing Machine Learning (ML) techniques to compare and analyze the functionalities and user experience (UX) of leading AI chatbot platforms. By leveraging user reviews, technical specifications, and user-testing data, our ML-driven framework ranks and categorizes chatbot platforms against pre-defined criteria, empowering users to make informed decisions for their specific IoT needs.</p>Mehran Shafique, Gohar Mumtaz, Saleem Zubair Ahmad, Sajid Iqbal
Copyright (c) 2024 Journal of Computing & Biomedical Informatics
2024-09-01 2024-09-01 702
Dynamic Load Balancing and Task Scheduling Optimization in Hadoop Clusters
https://jcbi.org/index.php/Main/article/view/571
<p>Hadoop is a widely utilized distributed file system and processing framework for handling large-scale data. Nonetheless, the inherent load balancing and task scheduling mechanisms in Hadoop exhibit inefficiencies that may result in performance bottlenecks. In this paper, we propose a novel dynamic load-balancing algorithm designed specifically for Hadoop clusters. Our algorithm continuously monitors the performance indicators of nodes and dynamically adjusts task-node allocations to ensure equitable load distribution within the cluster. Furthermore, we consider the execution states of tasks to optimize resource allocation effectively. The primary contribution of this study resides in the analysis and resolution of load balancing and scheduling issues within Hadoop. In addition, our proposed dynamic scheduling algorithm also accounts for task execution states, thereby facilitating optimized resource allocation. We validate our algorithm across various workloads, demonstrating that it surpasses existing methods in job completion time, scalability, and resource utilization. The findings indicate that the proposed algorithm efficiently balances cluster loads, expedites task completion, and reduces both costs and resource consumption.</p>Haiqa Mansoor, Bilal Aslam, Usman Akhtar
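The equitable-load idea described above can be illustrated with a minimal greedy scheduler (a sketch only; the paper's algorithm additionally monitors live node performance indicators and task execution states, which this toy omits):

```python
import heapq

def assign_tasks(node_loads, task_costs):
    """Greedily assign each task to the currently least-loaded node.
    node_loads: dict node -> current load; task_costs: list of task costs.
    Returns (assignment: task index -> node, final per-node loads)."""
    heap = [(load, node) for node, load in node_loads.items()]
    heapq.heapify(heap)
    assignment = {}
    # Place the largest tasks first (LPT heuristic): this tightens the balance.
    for i, cost in sorted(enumerate(task_costs), key=lambda p: -p[1]):
        load, node = heapq.heappop(heap)       # least-loaded node
        assignment[i] = node
        heapq.heappush(heap, (load + cost, node))
    final = {node: load for load, node in heap}
    return assignment, final

assignment, loads = assign_tasks({"n1": 0.0, "n2": 0.0, "n3": 0.0},
                                 [5, 3, 8, 2, 4, 6])
print(loads)  # total work 28 split roughly evenly across the three nodes
```

A dynamic version would re-run this placement whenever the monitored loads drift, instead of only at submission time.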
Copyright (c) 2024 Journal of Computing & Biomedical Informatics
2024-09-01 2024-09-01 702
Mobility Computing Based on Cloud Computing: A Survey
https://jcbi.org/index.php/Main/article/view/242
<p>A new era in computing has begun with mobility on cloud computing, where users of the cloud are drawn to a variety of services via the internet. Mobile cloud computing offers an innovative, adaptable, and economical platform for service delivery that takes advantage of the internet to deliver services to mobile cloud users. By eliminating obstacles to the performance of mobile devices, it integrates the concept of cloud computing into a wireless networking environment. This paper provides a comprehensive survey of the architecture of mobile cloud computing and its types and models, and discusses some problems and their solutions. The research also examines new trends and issues in mobile cloud computing and analyzes the relevant research on mobility in the cloud computing environment.</p>Benish Khalid, Sunny ul Hassan, Irshad Ahmed Sumra, Awais Khan
Copyright (c) 2024 Journal of Computing & Biomedical Informatics
2024-09-01 2024-09-01 702
Usability Design and Evaluation of Smartphone for Elderly Users
https://jcbi.org/index.php/Main/article/view/545
<p>This study looks into the usability issues older people have when using smartphone interfaces, concentrating on pinpointing particular obstacles to efficient use. We evaluated the functions currently available on smartphones by conducting usability tests with senior citizens to gauge how accessible and user-friendly these interfaces are. The study identifies important areas where current smartphone designs do not meet older adults' needs, such as small touch targets, intricate navigation, and a lack of customization options. Based on these findings, the research makes specific design recommendations to enhance the usability and inclusivity of smartphones for senior citizens. These suggestions act as a manual for designers and developers to produce smartphone interfaces that are easier to use and more accessible, thereby improving elderly users' digital experience.</p>Yasmin Saifullah, Muhammad Ikram Ul Haq, Muhammad Tahir, Misbah Noor, Khowla Khaliq, Muhammad Waseem Iqbal
Copyright (c) 2024 Journal of Computing & Biomedical Informatics
2024-09-01 2024-09-01 702
Securing DynamoDB: In-Depth Exploration of Approaches, Overcoming Challenges, and Implementing Best Practices for Robust Data Protection
https://jcbi.org/index.php/Main/article/view/549
<p>Protecting data and system integrity is critical for any database, and especially so for flexible, distributed systems like DynamoDB. DynamoDB is a highly flexible, distributed database system created specifically to cater to the requirements of contemporary web-scale applications. As with any database system, ensuring security is paramount in order to protect sensitive data and uphold the integrity of the system. This paper explores a range of methods for enhancing the security of DynamoDB, covering access control policies, encryption methods, auditing and monitoring systems, data integrity measures, and authentication mechanisms.</p>Rimsha Sajid, Gohar Mumtaz, Hijab Zehra Zaidi, Zeeshan Mubeen
Copyright (c) 2024 Journal of Computing & Biomedical Informatics
2024-09-01 2024-09-01 702
ML-Powered ICU Mortality Prediction for Diabetic Patients
https://jcbi.org/index.php/Main/article/view/610
<p>Diabetes mellitus is one of the most important causes of mortality globally, particularly for critically ill patients undergoing treatment in ICUs. This study aims to enhance mortality prediction among diabetic ICU patients using advanced machine learning (ML) models. We tested several ML algorithms using a comprehensive dataset from the MIMIC III database, including Logistic Regression, Decision Tree, Random Forest, Support Vector Machine, and Multilayer Perceptron and compared their performances. The Random Forest model achieved the highest performance, with an AUC of 0.98, proving its effectiveness in managing complex datasets. Our models incorporate novel features such as patient demographics, lab results, and comorbidity indices, offering superior predictive power. This study highlights the critical role of ML in improving patient care by enabling timely interventions for high-risk ICU patients. Future research will focus on integrating real-time clinical data and refining the models to further enhance predictive accuracy.</p>Zaheer Alam, Ahsen Khalid, Muhammad Haroon, Danish Irfan, Fawad Nasim
Copyright (c) 2024 Journal of Computing & Biomedical Informatics
2024-09-01 2024-09-01 702
Optimizing Malicious Website Detection with the XGBoost Machine Learning Approach
https://jcbi.org/index.php/Main/article/view/722
<p>The rising threat of malicious websites demands advanced detection methods for robust cybersecurity. Traditional approaches, such as rule-based systems and machine learning models like Random Forest and Support Vector Machine (SVM), often struggle to balance precision and recall. This research introduces an innovative methodology using the XGBoost algorithm to detect malicious URLs. The study follows a four-step approach: (1) Dataset Acquisition—utilizing the "Malicious Website URLs" dataset from Kaggle; (2) Data Preprocessing—including data cleaning, feature selection, and transformation to optimize model training; (3) Model Implementation—applying XGBoost, an ensemble learning algorithm known for its superior performance, to train the model on the preprocessed dataset; and (4) Model Evaluation—assessing performance through metrics such as accuracy, precision, recall, and F1-score. The results show that XGBoost achieves 88.89% precision and 86.6% accuracy, outperforming conventional methods and offering a balanced trade-off between precision and recall. This research highlights the significance of precise feature selection and model optimization, reducing human intervention and enhancing cybersecurity defenses. The findings demonstrate XGBoost's effectiveness in minimizing false positives and negatives, making it a valuable addition to existing cybersecurity frameworks. This study underscores the critical role of advanced machine learning techniques and accurate feature selection in strengthening defenses against evolving cyber threats.</p>Fazal Malik, Muhammad Suliman, Muhammad Qasim Khan, Noor Rahman, Khairullah Khan, Muhammad Khan
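Step 2 of the pipeline, turning raw URLs into model features, can be sketched as follows. The feature set here is a common lexical one chosen for illustration, since the abstract does not list the paper's exact features:

```python
import re
from urllib.parse import urlparse

# Hypothetical token list; real systems learn or curate such indicators.
SUSPICIOUS_TOKENS = ("login", "verify", "update", "free", "secure", "account")

def url_features(url):
    """Extract simple lexical features from a URL for a downstream classifier
    such as XGBoost (illustrative feature set, not the paper's)."""
    parsed = urlparse(url)
    host = parsed.netloc
    return {
        "url_length": len(url),
        "num_digits": sum(ch.isdigit() for ch in url),
        "num_subdomains": max(host.count(".") - 1, 0),
        # Raw IP addresses as hosts are a classic phishing signal.
        "has_ip_host": int(bool(re.fullmatch(r"\d{1,3}(\.\d{1,3}){3}", host))),
        "uses_https": int(parsed.scheme == "https"),
        "suspicious_words": sum(tok in url.lower() for tok in SUSPICIOUS_TOKENS),
    }

feats = url_features("http://192.168.10.5/secure-login/verify-account")
print(feats)
```

A table of such feature vectors, one row per labelled URL, is what the gradient-boosted trees in step 3 would be trained on.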
Copyright (c) 2024 Journal of Computing & Biomedical Informatics
2024-09-01 2024-09-01 702
A Rule-Based Approach for Automatic Generation of Class Diagram from Functional Requirements Using Natural Language Processing and Machine Learning
https://jcbi.org/index.php/Main/article/view/546
<p>Requirement analysis is the initial and most crucial phase of the software development life cycle (SDLC). In this phase, the requirements gathered from the user and different stakeholders are evaluated and an abstraction is created in the form of a model. Generating UML class diagrams from requirements is a very time-consuming task and hence demands automation of the process. Researchers have proposed a number of tools and methods for the transformation of natural language requirements to UML class diagrams over the last few years. Different approaches, such as Natural Language Processing (NLP) and rule-based approaches, have been used for this purpose, but they have certain limitations. Moreover, these approaches do not extract all the relationship types of class diagrams. To resolve this issue, machine-learning-based approaches have been used in recent years. Machine learning requires large and precise datasets to train models. In this research, a new model is proposed to generate class diagrams from requirements written in natural language more accurately, using Natural Language Processing as well as a machine learning approach. NLP is used to extract the classes, attributes, and methods, while machine learning is used to extract the class relationships. To implement the machine learning models, we created a dataset containing class names and relationship types, i.e., aggregation, association, composition, and inheritance. The effectiveness of the models is analyzed by comparing the results using accuracy metrics.</p>Muhammad Ramzan, Ghunwa Saeed Sadiqi, Muhammad Salman Bashir, Summair Raza, Asma Batool
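The rule-based side of such a pipeline can be illustrated with a few regular-expression patterns for relationship cues (a deliberate simplification; the paper combines NLP for classes, attributes, and methods with a trained model for relationships, not bare regexes):

```python
import re

# Simplified lexical patterns mapping phrase cues to UML relationship types.
RULES = [
    (re.compile(r"(\w+) is a kind of (\w+)|(\w+) is a (\w+)"), "inheritance"),
    (re.compile(r"(\w+) is composed of (\w+)"), "composition"),
    (re.compile(r"(\w+) has a (\w+)|(\w+) contains (\w+)"), "aggregation"),
    (re.compile(r"(\w+) uses (\w+)|(\w+) interacts with (\w+)"), "association"),
]

def extract_relationships(sentence):
    """Return (source class, relationship, target class) triples found in one
    requirement sentence."""
    sentence = sentence.lower().rstrip(".")
    found = []
    for pattern, rel in RULES:
        m = pattern.search(sentence)
        if m:
            groups = [g for g in m.groups() if g]
            found.append((groups[0].capitalize(), rel, groups[1].capitalize()))
    return found

print(extract_relationships("A Car has a Engine."))
print(extract_relationships("A Manager is a Employee."))
```

The brittleness of patterns like these against free-form phrasing is exactly the limitation that motivates the paper's machine-learning relationship extractor.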
Copyright (c) 2024 Journal of Computing & Biomedical Informatics
2024-09-01 2024-09-01 702
Enhanced Lung Cancer Detection and Classification with mRMR-Based Hybrid Deep Learning Model
https://jcbi.org/index.php/Main/article/view/518
<p>Lung cancer (LC) is among the most common and fatal malignant tumors worldwide, and it generally has a poorer five-year survival rate than many other well-known tumors. Pulmonary nodules in the lung are an indicator of the most deadly and lethal kind of lung cancer. Improving a patient's chances of survival requires early detection and evaluation. In recent years, the broad use of deep machine learning techniques in the lung cancer field has led to notable progress towards high performance in early diagnosis and prognostic prediction. This study presents a novel hybrid deep learning model that applies two pre-trained deep architectures, ResNet101 and SqueezeNet, to obtain feature mappings from the CT images in the dataset. These two were chosen from the seven deep-learning CNN architectures tested, based on their superior performance. Minimum Redundancy Maximum Relevance (mRMR) is used to extract the best features from both models in order to improve the computational efficiency and performance of the proposed technique; as a result, characteristics that have little bearing on accuracy are removed. All features are ranked to create a new set of feature maps, and feature concatenation is then applied. The best feature map is obtained and classified using two machine learning (ML) classifiers, SVM and KNN. The accuracy of the presented hybrid model was 99.09% with SVM. The experimental results show that the suggested hybrid model performed exceptionally well in terms of accuracy on the IQ-OTH/NCCD dataset.</p>Shahroz Zafar, Jawad Ahmad, Zeeshan Mubeen, Gohar Mumtaz
Copyright (c) 2024 Journal of Computing & Biomedical Informatics
2024-09-01 2024-09-01 702
Improving Software Maintenance Offshore Outsourcing Quality Assurance Using Mixed-Methods Analysis
https://jcbi.org/index.php/Main/article/view/592
<p>The growing dependence on offshore software maintenance outsourcing (OSMO) highlights the necessity of efficient quality assurance systems to guarantee better delivery results. Present quality control frameworks have no connection with contemporary technology such as machine learning, even though outsourcing techniques have advanced. This paper fills gaps in the existing literature by utilizing machine learning approaches to assess customer proposals and reduce risks, and it creates a new quality assurance framework for OSMO. Our mixed-methods approach comprised qualitative case studies, industry expert interviews, and quantitative analysis. The project employed supervised learning models to assess customer bids and guide decision-making procedures. Information collected from several sources was progressively coded and verified using Grounded Theory. The suggested framework greatly increases the accuracy and dependability of OSMO's job selection and deployment procedures. Among the most important conclusions are the significance of strong risk management, cultural flexibility, and efficient communication. By highlighting the challenges brought on by linguistic and geographical limitations, the research shows how machine learning may improve decision-making and project outcomes. The case studies also show how resolving cultural and time-zone disparities may enhance collaboration and productivity. Putting the suggested quality assurance framework into practice will improve OSMO procedures dramatically, reducing task completion costs and elevating client satisfaction. The paper offers insightful analysis and useful recommendations for enhancing quality control in offshore software maintenance.</p>Muhammad Haseeb Iqbal, Syed Irtaqa Naqi Naqvi, Khurram Shahzad, Syed Asad Ali Naqvi
Copyright (c) 2024 Journal of Computing & Biomedical Informatics
2024-09-01 2024-09-01 702
A Comprehensive Review on Challenges and Prevention of Cybercrimes in Social Media
https://jcbi.org/index.php/Main/article/view/565
<p>Social media is no doubt a great source of communication in today's modern world, but it has various advantages and disadvantages. Because of its widespread usage, cyber-attacks on social media are becoming a major risk to data security. Attackers use new technologies such as AI and other tools to perform attacks on social media, leading to more cybercrimes. Therefore, understanding cybercrimes and their counter-techniques is very important to address this challenge. The goal of this paper is to analyze the causes of cybercrimes and their preventive techniques, and to propose new countermeasures. The paper discusses the major reasons for and challenges of these cybercrimes on social media, including lack of information, malicious users, and weak privacy measures. It also explores major cybercrime types, including targeted, modern, and usual attacks; evaluates how suitable and effective the prevention techniques are in combating cyber-attacks; and proposes new preventive measures, such as authentication and privacy settings, considered important against attacks. For effective prevention, more effective strategies must be developed to minimize the increasing number of cybercrimes on social media, and user awareness is a necessary factor.</p>Rana Adeel, Syed Asad Ali Naqvi, Gohar Mumtaz
Copyright (c) 2024 Journal of Computing & Biomedical Informatics
2024-09-01 2024-09-01 702
Exploring Phishing Attacks in the AI Age: A Comprehensive Literature Review
https://jcbi.org/index.php/Main/article/view/567
<p>Over the years, phishing attacks have evolved into more difficult types such as spear-phishing and clone-phishing. By nature, these attacks target not only human but also technical vulnerabilities, resulting in massive financial losses and data exposure. As these methods become more complex, alongside threats such as cyberbullying and packet sniffing, new cybersecurity protocols are needed to counter them. We conducted this study using the Kitchenham Systematic Literature Review (SLR) framework, which consists of three phases: planning, conducting, and reporting. These studies were reviewed because of the increased use of AI in operations such as financial phishing attacks, which presents new difficulties for detection systems. The study undertook extensive searches of databases such as IEEE Access, ResearchGate, and Google Scholar, which are rich in recent scientific studies. A two-phase screening process rigorously identified 20 high-quality articles out of an initial pool of 250 studies for further analysis. The results demonstrate that AI-driven phishing campaigns continue to evolve in complexity and in many cases are more difficult to spot. Moreover, current AI-based detection systems cannot claim to be fully secure, as they can easily be tricked through adversarial attacks and may need to be updated or refined repeatedly. The research indicates that combining contextual and behavioral investigation may improve the ability to detect threats as they take place. In addition, it is advisable to deploy a multi-layered security approach that combines traditional AI methodologies with human oversight through machine learning for more effective threat detection and prevention. The research highlights the need for preventative security strategies and continued detection innovation against ever more sophisticated phishing campaigns.</p>Muhammad Saeed Liaqat, Gohar Mumtaz, Nazish Rasheed, Zeeshan Mubeen
Copyright (c) 2024 Journal of Computing & Biomedical Informatics
2024-09-01 2024-09-01 702
A Machine Learning Sentiment Analysis Approach on News Headlines to Evaluate the Performance of the Pakistani Government
https://jcbi.org/index.php/Main/article/view/524
<p>The growing amount of unstructured online data presents challenges in efficiently organizing and summarizing relevant information, hindering knowledge development and opinion-building on various topics. Sentiment analysis is a key technique for understanding public views, as news significantly influences people's perceptions and emotions on various subjects, including politics, economics, and art. The study assesses the Pakistani government's performance using machine learning sentiment analysis of news headlines scraped from Dawn news, focusing on the PMLN and PTI political party regimes, which have held government authority over the last ten years. This study uses machine learning and pre-trained models for textual representation, capturing term context and semantics, and incorporates feature reduction to enhance sentiment analysis accuracy by selecting useful features and applying labels. The SVM and sentiment intensity analyser models performed well in experiments on two news headline datasets, with the pre-trained sentiment intensity analyser achieving the highest accuracy on the Dawn news dataset. The system evaluates government efficacy using the predicted labelled news, displaying sentiment scores for headlines from the regimes, ranking them, and assessing their impact.</p>Hajira Noor, Jawad Ahmad, Ammar Haider, Fawad Nasim, Arfan Jaffar
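The idea behind a sentiment intensity analyser can be illustrated with a toy lexicon-based scorer (the word lists and example headlines below are invented; the study used a pre-trained model, not this sketch):

```python
# Toy valence lexicons: each word carries a signed intensity.
POSITIVE = {"wins": 2, "growth": 2, "approves": 1, "success": 2, "improves": 1}
NEGATIVE = {"crisis": -2, "corruption": -3, "fails": -2, "protest": -1, "deficit": -1}

def headline_score(headline):
    """Sum word valences over a headline and map the total to a label."""
    score = 0
    for tok in headline.lower().split():
        tok = tok.strip(".,!?")
        score += POSITIVE.get(tok, 0) + NEGATIVE.get(tok, 0)
    label = "positive" if score > 0 else "negative" if score < 0 else "neutral"
    return score, label

print(headline_score("Government approves record growth plan"))
print(headline_score("Opposition protest over corruption crisis"))
```

Aggregating such scores per regime over a scraped headline corpus is, in spirit, how the paper's ranking of government periods is produced.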
Copyright (c) 2024 Journal of Computing & Biomedical Informatics
2024-09-01 2024-09-01 702
Enhanced Dimensionality Reduction in Time-Domain Optimization through PCA and Eigenvector Integration
https://jcbi.org/index.php/Main/article/view/523
<p>This study integrates Principal Component Analysis (PCA) with eigenvector integration techniques to provide a novel method for dimensionality reduction in time-domain optimization. Effective dimensionality reduction, which is essential for raising computing effectiveness and boosting model performance, is increasingly hampered by the complexity of the data. PCA is a significant tool in machine learning and data processing and is especially useful for high-resolution data. This study investigates the impact of PCA on the performance and accuracy of three classification algorithms for medical image classification: Support Vector Machine (SVM), Random Forest (RF), and Convolutional Neural Network (CNN). Using images of melanoma and eczema, Visual Geometry Group 16 (VGG16) was used for feature extraction and PCA was then used to reduce dimensionality. The results show that PCA improves processing time and does not notably affect accuracy or other performance measures. The accuracy of the training models on PCA-reduced data is 99% (SVM), 98.75% (RF), and 98.75% (CNN), respectively, while the accuracy on the non-reduced data is 99.75%, 99.25%, and 99.75%, respectively. Additionally, the role of PCA in accelerating the training process without compromising performance by shortening the training time is emphasized. This work highlights the importance of PCA as a first step in ensuring fast and effective training of machine learning models: it handles high-dimensional data with minimal effect on accuracy and other performance measures while improving time complexity considerably.</p>Mehak Ali, Muhammad Azam, Muhammad Ashraf, Abid Ali Hashmi
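For intuition, PCA's projection-and-variance bookkeeping can be shown on 2-D data, where the leading eigenvector of the covariance matrix has a closed form (a didactic sketch with made-up points; the study applies PCA to high-dimensional VGG16 features):

```python
import math

def pca_first_component(points):
    """Project 2-D points onto their first principal component, using the
    closed-form largest eigenpair of the 2x2 covariance matrix."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    centred = [(x - mx, y - my) for x, y in points]
    sxx = sum(x * x for x, y in centred) / n
    syy = sum(y * y for x, y in centred) / n
    sxy = sum(x * y for x, y in centred) / n
    # Largest eigenvalue of [[sxx, sxy], [sxy, syy]]
    tr, det = sxx + syy, sxx * syy - sxy * sxy
    lam = tr / 2 + math.sqrt(tr * tr / 4 - det)
    # Corresponding unit eigenvector (valid when sxy != 0)
    vx, vy = lam - syy, sxy
    norm = math.hypot(vx, vy)
    vx, vy = vx / norm, vy / norm
    scores = [x * vx + y * vy for x, y in centred]
    explained = lam / tr  # fraction of total variance retained
    return scores, explained

points = [(1, 1), (2, 2.1), (3, 2.9), (4, 4.2), (5, 5.0)]
scores, explained = pca_first_component(points)
print(round(explained, 3))  # close to 1.0 for near-collinear data
```

The `explained` ratio is the quantity that justifies dropping the remaining components: when it stays near 1, accuracy is largely preserved while the representation shrinks, which mirrors the paper's finding.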
Copyright (c) 2024 Journal of Computing & Biomedical Informatics
2024-09-01 2024-09-01 702
Real-Time Vehicle Detection with Advanced Machine Learning Algorithms
https://jcbi.org/index.php/Main/article/view/594
<p>Vehicle detection not only increases road safety but also helps regulate traffic flow and improves the functioning of intelligent transportation systems. This paper proposes a real-time vehicle detection system using a Convolutional Neural Network (CNN) model. The chosen model was extensively trained and tuned to an accuracy of 99%, a marked improvement over the detection accuracies of 85% to 88% usually reported in the literature. The "Vehicle Detection" dataset consisted of 14,208 training images, 1,776 validation images, and 1,776 test images of size 1024x1024 pixels each, annotated with 20,000 bounding boxes. The model is a 23-layer deep convolutional neural network with 14.7 million parameters using ReLU and softmax as the activation functions, trained for 40 epochs using the Adamax optimiser and categorical cross-entropy loss. A custom callback was used for hyperparameter tuning, arriving at an initial learning rate of 0.001 and a batch size of 40. The model performed very well, with an accuracy of 99.4% on training, 99.3% on validation, and 99.4% on testing, with precision and recall of 99.3% and 99.4%, respectively.</p>Syeda Saliha Kazmi, Shahbaz Shamshad, Fawad Nasim, Saira Hashim, Nisha Azam, Bushra Ambar
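The basic building block of such a CNN, a convolution filter followed by a ReLU activation, can be sketched in pure Python (illustrative only; the paper's 23-layer model learns its filters, whereas this kernel is hand-picked to respond to vertical edges):

```python
def conv2d(image, kernel):
    """Valid-mode 2-D convolution (really cross-correlation, as in most
    deep-learning frameworks) over a single-channel image."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = [[0.0] * out_w for _ in range(out_h)]
    for i in range(out_h):
        for j in range(out_w):
            out[i][j] = sum(image[i + di][j + dj] * kernel[di][dj]
                            for di in range(kh) for dj in range(kw))
    return out

def relu(feature_map):
    # Zero out negative responses, as the paper's ReLU layers do.
    return [[max(0.0, v) for v in row] for row in feature_map]

# A vertical-edge kernel applied to an image whose right half is bright:
image = [[0, 0, 9, 9],
         [0, 0, 9, 9],
         [0, 0, 9, 9],
         [0, 0, 9, 9]]
kernel = [[-1, 1],
          [-1, 1]]
fmap = relu(conv2d(image, kernel))
print(fmap)  # strong response only along the dark-to-bright boundary
```

Stacking many such filtered maps, interleaved with pooling and ending in a softmax, is the structure the 23-layer model scales up.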
Copyright (c) 2024 Journal of Computing & Biomedical Informatics
2024-09-01 2024-09-01 702
Navigating Side-Channel Attacks: A Comprehensive Overview of Cryptographic System Vulnerabilities
https://jcbi.org/index.php/Main/article/view/626
<p>Side-channel attacks have become increasingly important in the dynamic field of cybersecurity, posing a challenge to the security paradigms of cryptographic systems and implementations. The goal of this review paper is to provide a comprehensive understanding of side-channel attacks by examining the various kinds, subtypes, and difficulties they present for modern security solutions. Because these attacks target a wide range of vulnerabilities, from power consumption patterns to acoustic emanations, understanding the taxonomy of side-channel vulnerabilities is essential for creating effective defense methods.</p>Muhammad Kaleem, Muhammad Azhar Mushtaq, Sadaqat Ali Ramay, Aamir Mahmood, Tahir Abbas Khan, Sayyid Kamran Hussain, Adeel Anwar, Hassnain Abdullah Bhatti
Copyright (c) 2024 Journal of Computing & Biomedical Informatics
2024-09-01 | Vol. 7 No. 02
A Review Paper on AI-Based User Privacy & Security Concerns in Smart Wearable IOT Devices
https://jcbi.org/index.php/Main/article/view/615
<p>Smart technologies, including wearables, have emerged as the bridge between artificial intelligence (AI) and the Internet of Things (IoT). On the one hand, they have changed how people think about efficiency and utility provision. On the other hand, the deep interconnection of these devices has raised serious privacy and security concerns that now require reliable solutions from both scholars and practitioners. This review surveys the state of the art of privacy and security issues arising from AI in smart wearable IoT devices, along with existing and emerging AI technologies that remain underdeveloped for this purpose. The results were aggregated from ten studies, each posing related problems and solutions regarding the security of smart wearable IoT devices. A comprehensive table covering the challenges, solutions, AI methods, datasets, and limitations of the studies is also provided.</p>Muhammad Faheem Younas, Muhammad Muzammal Farooq, Gohar Mumtaz, Zeeshan Mubeen
Copyright (c) 2024 Journal of Computing & Biomedical Informatics
2024-09-01 | Vol. 7 No. 02
Spelling Variation of Roman Urdu Using Machine Learning
https://jcbi.org/index.php/Main/article/view/529
<p>Spelling variations are common in languages without standardized orthography, such as Roman Urdu (RU), where no established spelling criteria exist. For example, "2mro" is a nonstandard spelling of "tomorrow." In South Asia, Roman Urdu is widely used, especially on social media and in online product reviews, leading to a proliferation of user-generated spellings. This research compiles a dataset of Roman Urdu words (RUWs) with their spelling variations, collecting 5,244 distinct RUWs, each with one to five variants. To validate this dataset, we apply six machine learning (ML) classifiers: Support Vector Machine (SVM), Logistic Regression (LR), Decision Tree (DT), Naïve Bayes (NB), K-Nearest Neighbors (KNN), and Random Forest (RF). Among these, the SVM classifier performs best, achieving an accuracy of 99.96% and surpassing all other algorithms.</p>Mudasar Ahmed Soomro, Rafia Naz Memon, Asghar Ali Chandio, Mehwish Leghari, Muhammad Khalid
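Spelling variants of this kind are often compared by character n-gram overlap before classification. The snippet below is a hypothetical illustration of that idea (the paper's actual feature extraction is not specified in the abstract); it scores "tomorow" as closer to "tomorrow" than the heavily abbreviated "2mro":

```python
def char_ngrams(word, n=2):
    """Return the set of character n-grams of a word."""
    return {word[i:i + n] for i in range(len(word) - n + 1)}

def jaccard(a, b, n=2):
    """Jaccard similarity between the character n-gram sets of two spellings."""
    ga, gb = char_ngrams(a, n), char_ngrams(b, n)
    if not ga and not gb:
        return 1.0
    return len(ga & gb) / len(ga | gb)
```

A classifier such as the paper's SVM would consume richer features, but even this raw overlap separates close variants from strongly abbreviated ones.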
Copyright (c) 2024 Journal of Computing & Biomedical Informatics
2024-09-01 | Vol. 7 No. 02
Dark Data in Accident Prediction: Using AdaBoost and Random Forest for Improved Accuracy
https://jcbi.org/index.php/Main/article/view/531
<p>Dark data, the unused information generated by routine activities, poses significant hurdles in the era of data-driven decision-making because of its volume and complexity. The goal of this publication is to increase the accuracy of accident prediction by proposing an efficient procedure for dark data extraction and analysis. Data extraction, classifier implementation, and performance evaluation are carried out methodically using AdaBoost and Random Forest classifiers. According to the results, the Random Forest classifier outperforms AdaBoost, achieving an accuracy of 89.50% compared to AdaBoost's 78.4%. These results highlight the potential of dark data to yield insightful information by demonstrating how well these classifiers improve accident prediction models. In addition to emphasizing the value of dark data for decision-makers and urban planners looking to improve prediction models and access hidden information, the study offers a methodology for using it. Our research highlights the increasing significance of dark data in enhancing decision-making procedures and forecast precision as data volumes grow.</p>Masroor Shah, Fazal Malik, Muhammad Suliman, Noor Rahman, Irfan Ullah, Sana Ullah, Romaan Khan, Salman Alam
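The shape of the classifier comparison the abstract describes can be sketched with scikit-learn on synthetic data. The authors' dark-data features and exact hyperparameters are not given, so everything below is a stand-in, not the study's experiment:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the accident dataset (real features are unknown to us)
X, y = make_classification(n_samples=600, n_features=12, n_informative=6,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
ada = AdaBoostClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)

# Accuracy on the held-out split, mirroring the paper's evaluation step
acc_rf = accuracy_score(y_te, rf.predict(X_te))
acc_ada = accuracy_score(y_te, ada.predict(X_te))
```

On real dark data the relative ranking would depend on feature quality and tuning; the point is only the methodical train/evaluate/compare loop.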
Copyright (c) 2024 Journal of Computing & Biomedical Informatics
2024-09-01 | Vol. 7 No. 02
Multi-Model Machine Learning Analysis of Environmental Risk Factors for Lung Cancer
https://jcbi.org/index.php/Main/article/view/599
<p>Lung cancer (LC) is a malignant growth that attacks the lungs and is one of the leading causes of mortality in the modern world. The smoke created by the incomplete combustion of biomass fuels contains various harmful chemical compounds that can be extremely hazardous to human health. Around 25% of lung cancer cases globally are not linked to tobacco use. The genomic landscape of lung cancer also includes alterations to DNA repair pathways, hereditary genetic risk factors, and variation in gene expression. Environmental factors such as air pollution, toxins, and tobacco smoke greatly elevate the risk of lung cancer, even while hereditary factors remain a major influence on susceptibility and progression. There is no credible research that provides information about Pakistan's current diagnostic strategies. The prediction and early identification of lung cancer can save countless lives; accordingly, robust machine learning algorithms are needed to detect LC in its early stages. Recognizing the diverse nature of lung cancer etiology, this study focuses on various environmental causes, including air pollution, tobacco smoke, radiation exposure, and hereditary predisposition. Models were developed using ML algorithms such as SVM, KNN, and NB, among others. The highest accuracy, 99.67%, was obtained by the DT classifier. In our review, we also attempted to reveal relationships between the different features in the dataset using traditional machine learning approaches. Clinical specialists can use this model in their facilities as a decision support system.</p>Naeem Abbas, Muhammad Azam, Muazzam Ali, Mafia Malik, M U Hashmi, Abdul Manan
Copyright (c) 2024 Journal of Computing & Biomedical Informatics
2024-09-01 | Vol. 7 No. 02
CPU vs. GPU: Performance comparison of OpenCL Applications on a Heterogeneous Architecture
https://jcbi.org/index.php/Main/article/view/612
<p>The objective of researchers and developers has always been to attain superior performance for their computing applications. In this regard, the use of the Graphics Processing Unit (GPU) is very common; initially it was used to accelerate the performance of graphics applications. The success of the GPU has attracted researchers, who have shown keen interest in using GPU acceleration for regular applications. However, many recent studies claim that even when an application is well suited for parallelism, it is not guaranteed to run faster on the GPU. In this paper we compare the performance of commonly used OpenCL applications on both CPU and GPU platforms. We measure the execution time of each application on both platforms and investigate why an application performed better on a particular platform. To that end, we analyze the source code of each application and identify the program features that contribute to better performance on a particular platform. The study finds that loop unrolling and data dimensionality are crucial program features that can be leveraged to utilize the parallel processing capabilities of a GPU platform. When maximum loop unrolling is used with two-dimensional input data, the 2D Convolution application executes around 20 times faster on the GPU. Similarly, as the level of loop unrolling decreases, the performance gain on the GPU also decreases. Ultimately, in the absence of loop unrolling and with single-dimensional input data, the CPU performs better: the ATAX application executes around 9x faster on the CPU than on the GPU.</p>Muhammad Nadeem Nadir, Muhammad Siraj Rathore, Asad Hayat, Junaid Abdullah Mansoor
Copyright (c) 2024 Journal of Computing & Biomedical Informatics
2024-09-01 | Vol. 7 No. 02
Improving DeepFake Detection: A Comprehensive Review of Adversarial Robustness, Real-Time Processing and Evaluation Metrics
https://jcbi.org/index.php/Main/article/view/572
<p>This review analyzes 30 studies on deepfake detection, focusing on three areas: adversarial robustness, real-time processing, and evaluation metrics. Deepfake technology is making rapid progress and poses serious threats to digital security, so strong, efficient detection models are needed. The review identifies three key factors that strengthen detection systems: adversarial training, GAN-based methods, and lightweight designs, which improve both resilience and efficiency. Challenges remain, however: real-time processing and standardized tests that capture the nuances of deepfake detection are still lacking. The findings show a need for further research addressing new threats, improving detection models, and setting real-world benchmarks. The study stresses the need to improve deepfake detectors by integrating advanced training, optimizing methods, and refining metrics so that they are robust, accurate, and adaptable to new digital threats.</p>Najaf Saeed, Gohar Mumtaz, Muqaddas Yaqub, Muhammad Haroon Ahmad
Copyright (c) 2024 Journal of Computing & Biomedical Informatics
2024-09-01 | Vol. 7 No. 02
Enhancing National Cybersecurity and Operational Efficiency through Legacy IT Modernization and Cloud Migration: A US Perspective
https://jcbi.org/index.php/Main/article/view/536
<p>This review paper discusses IT modernization in the US, emphasizing the shift from centralized legacy systems to cloud-based frameworks and its impact on improving the nation's cybersecurity, operational effectiveness, and environmental footprint. The paper discusses the key aspects of cloud migration, such as security features that minimize risks, optimization of resource utilization to cut costs, and energy-saving measures that decrease consumption and the US contribution to global carbon emissions. These points are supported by actual cases from federal agencies and private sector organizations, which illustrate modernization projects and the implications of their success. The paper also examines the risk factors of migrating to the cloud and possible solutions to concerns about security and data integrity, potential downtime, and cost implications. Policy prescriptions for the USA are offered based on survey findings on the government's role in funding and promoting IT modernization through greater financial incentives and investment partnerships, covering both its own projects and others carried out in the country. Lastly, emerging trends in cloud computing and IT infrastructure, such as hybrid cloud, artificial intelligence, and edge computing, are elaborated. The paper emphasizes the significance of IT modernization for the nation in protecting, improving, and sustainably utilizing data in a world that is rapidly going digital.</p>Hassan Nawaz, Muhammad Suhaib Sethi, Syed Shoaib Nazir, Uzair Jamil
Copyright (c) 2024 Journal of Computing & Biomedical Informatics
2024-09-01 | Vol. 7 No. 02
Revolutionizing Schizophrenia Diagnosis: A Transfer Learning Approach to Accurate Classification
https://jcbi.org/index.php/Main/article/view/591
<p>Schizophrenia is one of the most serious mental disorders, both widely researched and widely feared. It is among the mental illnesses that remain relatively little understood, yet its effects on the people living with it are undoubtedly profound. Identifying schizophrenia is of vital importance because it has become a critical and challenging problem. Over the past decade, various artificial intelligence techniques have been introduced to assist mental health providers, but no satisfactory results have been obtained for identifying schizophrenia, and there is no front-end application through which doctors can use these techniques without programming knowledge. Furthermore, neuroimaging techniques such as functional magnetic resonance imaging (fMRI) cannot perform adequate temporal sampling due to the slow BOLD response. In this research work, predefined fMRI regions specifically related to schizophrenia are used for mapping, and the disease is classified with the transfer learning algorithm AlexNet. To evaluate the algorithm's results, we used the standard measures of accuracy, sensitivity, specificity, prevalence, positive likelihood ratio, and negative likelihood ratio. The project provides a front-end window form, designed with the Tkinter Python module, where doctors upload a patient's details and a 4D fMRI image; the form first converts the volume into 2D images, then takes a specific slice to classify the image as positive or negative and stores the result in a database.</p>Muhammad Imran Khan Khalil, Hamza Umer, Kaleem Ullah, Saif Ullah Khamosh, Syed Fahad Murtaza Naqvi, Asif Nawaz, Sheeraz Ahmed
Copyright (c) 2024 Journal of Computing & Biomedical Informatics
2024-09-01 | Vol. 7 No. 02
Comprehensive Biological Risk Assessment of Genetically Modified Organisms: Evaluating Human Health and Environmental Impacts
https://jcbi.org/index.php/Main/article/view/581
<p>This study presents a comprehensive biological risk assessment of genetically modified organisms (GMOs), focusing on their potential impact on human health and the environment. Utilizing a multi-faceted approach, we analyze existing literature, conduct data-driven case studies, and perform experimental evaluations. The health assessment examines short- and long-term risks, including allergenicity, toxicity, and nutritional alterations in GMOs. The environmental assessment addresses concerns such as gene flow, pest resistance, and non-target species effects. Preliminary findings emphasize the need for context-specific, robust risk assessment frameworks that consider both direct and indirect impacts within ecological systems. This study advocates a science-based regulatory approach to GMOs, balancing their potential benefits with associated risks. Future research will focus on developing predictive models to support proactive risk management and inform policy-making.</p>Zuhaib Nishtar, Ilman Khan, Talat Iqbal, Ayisha Bibi, Wasfa Sana, Kalim Ullah
Copyright (c) 2024 Journal of Computing & Biomedical Informatics
2024-09-01 | Vol. 7 No. 02
Security Issues in Internet of Things (IoT): Challenges and Solutions
https://jcbi.org/index.php/Main/article/view/561
<p>The Internet of Things (IoT) is a broad research topic. IoT is quite popular and has become a vital part of our lives: it is crucial to self-sufficiency, smart environments, health care, smart cities, and micro-grid systems. The Internet of Things is defined as a paradigm involving objects equipped with actuators, sensors, and processors that communicate with one another to serve meaningful purposes. Sensing, processing, and data transmission are its three key components. IoT technology has improved greatly in recent years, but some problems still require attention, and one of the main ones is security. Security is crucial for IoT devices because sensors generate and share personal data while blending the physical and digital realms, so the implementation of encryption methods is essential for IoT systems. The main objective of this article is to pinpoint the security challenges and problems that arise in IoT environments. In this paper we explain the security problems, challenges, and solutions of the Internet of Things.</p>Syeda Hijab Zahra, Mudassar Rehman, Gohar Mumtaz, Sajid Iqbal
Copyright (c) 2024 Journal of Computing & Biomedical Informatics
2024-09-01 | Vol. 7 No. 02
IDD-Net: A Deep Learning Approach for Early Detection of Dental Diseases Using X-Ray Imaging
https://jcbi.org/index.php/Main/article/view/624
<p>Early detection of dental diseases such as cavities, periodontitis, and periapical infections is crucial for effective management and prevention, as these conditions can lead to severe complications if left untreated. However, traditional diagnostic methods are often manual, time-consuming, and heavily reliant on expert judgment, which can introduce variability and delay in diagnosis. To address these critical challenges, we propose IDD-Net (Identification of Dental Disease Network), a novel deep learning-based model designed for the automatic detection of dental diseases using panoramic X-ray images. The proposed framework leverages Convolutional Neural Networks (CNN) to enhance the accuracy and efficiency of dental condition classification, thereby significantly improving the diagnostic process. In our comprehensive evaluation, IDD-Net’s performance is rigorously compared to four state-of-the-art deep learning models: AlexNet, InceptionResNet-V2, Xception, and MobileNet-V2. To tackle the issue of class imbalance, we employ the Synthetic Minority Over-sampling Technique with Tomek links (SMOTE Tomek), ensuring a balanced sample distribution that enhances model training. Experimental results showcase IDD-Net’s exceptional performance, achieving a 99.97% AUC, 98.99% accuracy, 98.24% recall, 98.99% precision, and a 98.97% F1-score, thus outperforming benchmark classifiers. These findings underscore the transformative potential of IDD-Net as a reliable and efficient tool for assisting dental and medical professionals in the early detection of dental diseases. By streamlining the diagnostic process, IDD-Net not only improves patient outcomes but also has the potential to reshape standard practices in dental care, paving the way for more proactive and preventive approaches in oral health management.</p>Muhammad Adnan Hasnain, Zeeshan Ali, Khalil Ul Rehman, Muhammad Ehtsham, Muhammad Sajid Maqbool
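The recall, precision, and F1-score figures quoted above follow directly from confusion-matrix counts. A small helper makes the relationship explicit (these are the standard definitions, not code tied to IDD-Net's outputs):

```python
def classification_metrics(tp, fp, fn):
    """Precision, recall, and F1 from true positive, false positive,
    and false negative counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1
```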
Copyright (c) 2024 Journal of Computing & Biomedical Informatics
2024-09-01 | Vol. 7 No. 02
Integrative QSPR and VIKOR Multi-Criteria Decision Analysis for Optimizing Anti-Parkinson Drug Candidates
https://jcbi.org/index.php/Main/article/view/516
<p>Developing efficient anti-Parkinson medications poses a considerable challenge in the field of pharmacology, necessitating sophisticated techniques for assessing and refining potential therapeutic agents. This research presents a unified method that merges Quantitative Structure-Property Relationship (QSPR) analysis with VIKOR Multi-Criteria decision-making (MCDM) to enhance the selection and refinement of anti-Parkinson drug candidates. QSPR analysis aims to elucidate the connection between molecular descriptors and the pharmacological characteristics of different anti-Parkinson compounds. By pinpointing essential molecular elements that influence both drug efficacy and safety, QSPR models yield predictive insights that direct the design and choice of new drug candidates. Subsequently, the VIKOR method is utilized to prioritize and choose the most promising drug candidates according to their anticipated performance. This method incorporates a range of pharmacological and safety considerations, enabling a balanced evaluation that weighs therapeutic advantages against potential risks. The collaborative QSPR-VIKOR approach facilitates a thorough assessment of drug candidates, reconciling conflicting goals and offering a definitive ranking system for decision-making. By integrating the benefits of both strategies, this study seeks to identify ideal anti-Parkinson drug candidates with improved efficacy and safety profiles. The results offer a solid groundwork for the systematic assessment and enhancement of new therapeutic agents, potentially hastening the creation of more effective treatments for Parkinson’s disease and enhancing patient outcomes and their quality of life.</p>Fatima SaeedNazeran Idrees
Copyright (c) 2024 Journal of Computing & Biomedical Informatics
2024-09-01 | Vol. 7 No. 02
A Security Framework for Data Migration over the Cloud
https://jcbi.org/index.php/Main/article/view/602
<p>The adoption of cloud services has ushered in a new era of business efficiency. However, major organizations' migration of critical software and data has encountered unanticipated obstacles, chiefly driven by concerns regarding data privacy and security. This article highlights the profound ramifications of migration delays, emphasising their potential to disrupt the uninterrupted flow of information, particularly in public and hybrid cloud environments. Our study reveals a well-organized framework that highlights essential security measures, such as the use of SSL/TLS protocols, which provide a secure channel of communication over the internet, data confidentiality and integrity (transmitted between a user's web browser and a website's server), encryption, authentication, building trust, and supporting data transmission integrity. In addition, we support the thoughtful application of restricted migration tickets, which effectively control access privileges and thwart unauthorized access. An innovative addition to this framework is the incorporation of Prediction-Based Encryption (PBE), a cutting-edge methodology uniquely suited to the intricacies of the healthcare and e-commerce sectors. PBE inherently segregates sensitive data, isolating it for separate storage, thereby mitigating the risk of data breaches during migration. It also refers to a theory wherein encryption techniques combine models or prediction algorithms to improve security. This could entail anticipating possible security risks or modifying encryption settings in response to expected shifts in the security environment. In conclusion, by embracing these meticulously devised security measures, organisations can surmount the challenges posed by migration delays and fortify their data protection strategies in the digital age.</p>Muhammad AzamFawad NasimJawad AhmadSohail Masood Bhatti
Copyright (c) 2024 Journal of Computing & Biomedical Informatics
2024-09-01 | Vol. 7 No. 02
Evaluating Quantum Cybersecurity: A Comparative Study of Advanced Encryption Methods
https://jcbi.org/index.php/Main/article/view/557
<p>The emergence of quantum computing presents unprecedented challenges to cybersecurity, particularly in encryption. With digital platforms crucial for communication, commerce, and data storage, the rise of sophisticated cyber threats underscores the need for robust encryption. Traditional methods like RSA, AES, and ECC are now vulnerable to quantum algorithms such as Shor's algorithm, which can solve certain complex mathematical problems exponentially faster than classical algorithms. This study delves into the principles of quantum computing (qubits, superposition, and entanglement) and their implications for current encryption standards. It evaluates advanced solutions like Quantum Key Distribution (QKD) and Post-Quantum Cryptography (PQC), assessing their potential to secure digital communications against quantum threats. Through a detailed literature review and comparative analysis, the study highlights the critical need for quantum-resistant cryptographic methods and explores the challenges of their implementation. The findings emphasize the importance of future research in developing more efficient quantum cryptographic protocols, overcoming technical and practical hurdles, and fostering international cooperation for global standardization. These efforts are vital to ensuring secure digital communications in the rapidly evolving quantum landscape.</p>Ezzah Fatima, Ahmad Naeem Akhtar, Muhammad Arslan
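Shor's algorithm factors an integer by finding the period of a^x mod N; the quantum speedup lies entirely in the period-finding step, which can be brute-forced classically only for tiny numbers. A toy sketch for N = 15 (illustrating the arithmetic that breaks RSA-style moduli, not a quantum execution):

```python
from math import gcd

def find_period(a, n):
    # Smallest r > 0 with a^r = 1 (mod n), found by brute force.
    # This is the step a quantum computer performs exponentially faster.
    x, r = a % n, 1
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_classical(a, n):
    """Derive nontrivial factors of n from the period of a mod n
    (works when the period r is even and a^(r/2) != -1 mod n)."""
    r = find_period(a, n)
    if r % 2:
        return None
    half = pow(a, r // 2, n)
    return sorted({gcd(half - 1, n), gcd(half + 1, n)})

factors = shor_classical(7, 15)  # the period of 7 mod 15 is 4
```

The classical loop is exponential in the bit length of n, which is exactly why 2048-bit RSA is safe today and why a fault-tolerant quantum computer running the same arithmetic would not leave it so.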
Copyright (c) 2024 Journal of Computing & Biomedical Informatics
2024-09-01 | Vol. 7 No. 02
Word Computational Screening of Green Color Colonies from Petri Dish through Python Tool GD.A1.
https://jcbi.org/index.php/Main/article/view/600
<p>Digital image processing and deep learning techniques are widely applied in the field of microbiology, where they hold considerable promise. Counting microorganism colonies in Petri dishes by the naked eye is a slow, time-consuming process, so a colony enumeration system that counts all colonies while also discriminating green colonies is a key concern. Since counting only green-coloured colonies through image processing (as in CellProfiler) is an area of wide interest in microbiological estimation, a simple Python algorithm was developed to aid visual discrimination. It can therefore be concluded that enumeration of microorganisms can be performed automatically and reliably, and becomes easier with the developed GD.A1 software; the results were acceptable at a 95.1 percent confidence interval.</p>Muhammad Aziz, Nazima Yousaf Khan
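The core of such a green-colony counter is a colour threshold followed by connected-component counting. The stdlib-only sketch below is a guess at the kind of logic GD.A1 performs; the actual tool, thresholds, and preprocessing are the authors':

```python
from collections import deque

def is_green(pixel, min_g=100):
    # A pixel counts as "green colony" when green dominates red and blue
    r, g, b = pixel
    return g >= min_g and g > r and g > b

def count_green_colonies(image):
    """Count 4-connected regions of green pixels in an RGB image
    given as a list of rows of (r, g, b) tuples."""
    h, w = len(image), len(image[0])
    seen = [[False] * w for _ in range(h)]
    colonies = 0
    for y in range(h):
        for x in range(w):
            if seen[y][x] or not is_green(image[y][x]):
                continue
            colonies += 1                      # new colony found; flood-fill it
            queue = deque([(y, x)])
            seen[y][x] = True
            while queue:
                cy, cx = queue.popleft()
                for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                    if (0 <= ny < h and 0 <= nx < w and not seen[ny][nx]
                            and is_green(image[ny][nx])):
                        seen[ny][nx] = True
                        queue.append((ny, nx))
    return colonies
```

A production counter would add illumination correction and size filtering, but the threshold-then-label structure is the same.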
Copyright (c) 2024 Journal of Computing & Biomedical Informatics
2024-09-01 | Vol. 7 No. 02
CNN and Gaussian Pyramid-Based Approach For Enhance Multi-Focus Image Fusion
https://jcbi.org/index.php/Main/article/view/539
<p>Achieving sharp focus in both foreground and background areas simultaneously in digital photography is challenging due to the limited depth of field (DOF) in DSLR cameras. This often results in blurred images with isolated details, hindering the capture of clear, high-quality photographs. Existing multi-focus image fusion models struggle with issues related to image quality and with managing input images taken from varying angles. To address these challenges, we introduce a novel multi-focus image fusion method that integrates Convolutional Neural Networks (CNNs) with Gaussian pyramid techniques. Our approach follows a four-step process: visual search, initial segmentation, compression analysis, and final synthesis. The Gaussian pyramid technique enhances edge detection by progressively reducing the image size and facilitating automatic object identification. By leveraging these combined techniques, our method ensures the generation of uniformly focused images. We rigorously evaluate our model using metrics such as the Perceptual Image Quality Evaluator (PIQE), Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index (SSIM), and connectivity metrics. Our enhanced model demonstrates superior performance, achieving a 4.70% improvement in PIQE compared to previous CNN-based methods. This confirms the effectiveness of our approach in producing clear, high-quality fused images.</p>Kahkisha Ayub, Muhammad Ahmad, Fawad Nasim, Shameen Noor, Kinza Pervaiz
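A Gaussian pyramid level is built by blurring the image with a small binomial kernel and then downsampling by two. A numpy sketch of one pyramid step (illustrative of the progressive size reduction the abstract mentions; the paper's exact kernel and pyramid depth are not stated):

```python
import numpy as np

# 5-tap binomial approximation of a Gaussian kernel, normalized to sum to 1
KERNEL = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0

def blur_axis(img, axis):
    # Separable Gaussian blur: 1-D convolution along one axis at a time
    return np.apply_along_axis(
        lambda row: np.convolve(row, KERNEL, mode="same"), axis, img)

def pyramid_down(img):
    """One Gaussian pyramid step: blur, then drop every other row and column."""
    blurred = blur_axis(blur_axis(img, 0), 1)
    return blurred[::2, ::2]

level0 = np.ones((8, 8))   # stand-in for a grayscale image
level1 = pyramid_down(level0)
```

Repeating `pyramid_down` yields the coarser levels on which focus measures and edges are compared.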
Copyright (c) 2024 Journal of Computing & Biomedical Informatics
2024-09-01 | Vol. 7 No. 02
Improving Stroke Prediction Accuracy through Machine Learning and Synthetic Minority Over-sampling
https://jcbi.org/index.php/Main/article/view/566
<p>Strokes are a leading cause of death and disability worldwide. Accurate prediction and early intervention can significantly improve patient outcomes. The objective of this study is to develop a model that effectively predicts stroke events by applying machine learning methods to a Harvard Dataverse Repository dataset containing 43,400 samples with 10 features. The dataset was imbalanced, with 42,617 non-stroke cases versus 783 stroke cases; hence, SMOTE was applied to balance it. Models were evaluated using accuracy, precision, recall, F1-score, and ROC AUC. The ML models included logistic regression, decision tree, random forest, gradient boosting, AdaBoost, XGBoost, support vector machine, k-nearest neighbors, Naive Bayes, a bagging classifier, and a voting classifier. The best model was the Bagging Classifier, with an accuracy of 98.3%, precision of 98.7%, recall of 98.0%, an F1-score of 98.3%, and a ROC AUC of 99.5%, demonstrating its robustness and reliability. This research demonstrates the power of SMOTE in addressing class imbalance and underlines the possible role of advanced machine learning techniques in building feasible predictive tools for detecting stroke incidents at an early stage. Improvements such as these may significantly better patient outcomes and reduce the burden on healthcare. Moreover, the implementation of such predictive models within clinical workflows could enable timely medical interventions, improving the quality of care for people at risk of stroke. The work also opens up a variety of possibilities for deep learning and other sophisticated machine-learning techniques in healthcare, underlining that further innovation and development in this area is necessary.</p>Muhammad Abdullah Aish, Amina Abdul Ghafoor, Fawad Nasim, Kiran Irfan Ali, Shamim Akhter, Sumbul Azeem
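SMOTE balances classes by interpolating between a minority sample and one of its minority-class neighbours. A compact numpy sketch of that core mechanism (the study presumably used a library implementation such as imbalanced-learn; this only shows the idea):

```python
import numpy as np

def smote_oversample(X_min, n_new, k=3, seed=0):
    """Generate n_new synthetic minority samples by interpolating each
    chosen sample toward one of its k nearest minority neighbours."""
    rng = np.random.default_rng(seed)
    X_min = np.asarray(X_min, dtype=float)
    # Pairwise distances to locate each sample's k nearest minority neighbours
    d = np.linalg.norm(X_min[:, None, :] - X_min[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    neighbours = np.argsort(d, axis=1)[:, :k]
    synthetic = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))        # pick a random minority sample
        j = rng.choice(neighbours[i])       # and one of its neighbours
        u = rng.random()                    # interpolation factor in [0, 1)
        synthetic.append(X_min[i] + u * (X_min[j] - X_min[i]))
    return np.array(synthetic)
```

Appending the synthetic rows to the minority class is what equalizes an imbalance like the 42,617 / 783 split described in the abstract before training.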
Copyright (c) 2024 Journal of Computing & Biomedical Informatics
2024-09-01 | Vol. 7 No. 02
Reducing Carbon Emissions and Costs of Electricity with Solar PV Systems in QUEST Nawabshah, Pakistan Administration Building
https://jcbi.org/index.php/Main/article/view/590
<p>The ongoing electricity crises in Pakistan have led to significant uncertainty and increased costs associated with fossil fuels. This circumstance substantially challenges the country in meeting its escalating electricity demands. This study focuses on the Techno-Economic Feasibility of implementing Solar Photovoltaic (PV) systems for the administrative building of Quaid-e-Awam University of Engineering, Science, and Technology (QUEST), Nawabshah in Pakistan. The primary objectives of this research include evaluating the design feasibility of solar PV systems, analysing the building's electricity consumption sourced from grid and diesel generators, and conducting a comprehensive cost-benefit analysis. The study calculates the levelized cost of electricity (LCOE) and introduces the concept of net metering to assess its potential benefits. Findings from this research indicate that the annual carbon emissions from the building due to grid electricity usage amount to 220,614 kg of CO<sub>2</sub>. In contrast, adopting solar PV systems would result in zero carbon emissions. The LCOE for the solar PV system is determined to be 33.7 Rs/kWh, providing a fixed cost of solar-generated electricity over the lifespan of the solar plant. The analysis also highlights the effectiveness of net metering in promoting the adoption of solar PV systems by enabling surplus energy to be fed back into the grid, thereby offering financial incentives. Results of this study underscore the viability and benefits of investing in solar energy, particularly in the context of rising energy demands and environmental concerns. The findings advocate the transition to solar power as a sustainable and economically feasible solution for reducing reliance on fossil fuels and mitigating carbon emissions in university administrative buildings.</p>Gordhan Das Walasai, Stefano Landini, Muhammad Nawaz Chandio, Abdul Azeem Kumbhar, Shoaib Ali Bhatti, Sheeraz Ahmed
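The levelized cost of electricity divides lifetime discounted costs by lifetime discounted generation. A small sketch of the calculation (the generic formula with hypothetical numbers, not the study's actual cash flows or its 33.7 Rs/kWh inputs):

```python
def lcoe(capex, annual_cost, annual_energy_kwh, years, discount_rate):
    """Levelized cost of electricity: discounted lifetime costs divided
    by discounted lifetime energy production."""
    costs = capex + sum(annual_cost / (1 + discount_rate) ** t
                        for t in range(1, years + 1))
    energy = sum(annual_energy_kwh / (1 + discount_rate) ** t
                 for t in range(1, years + 1))
    return costs / energy

# Hypothetical example: with a zero discount rate, LCOE reduces to
# total lifetime cost per kWh generated.
example = lcoe(capex=1000.0, annual_cost=100.0,
               annual_energy_kwh=500.0, years=2, discount_rate=0.0)
```

Because the energy term is fixed once the plant is sized, the formula makes clear why LCOE yields a stable per-kWh cost over the plant's lifespan.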
Copyright (c) 2024 Journal of Computing & Biomedical Informatics
2024-09-01 | Vol. 7 No. 02
Real-Time Intrusion Detection with Deep Learning: Analyzing the UNR Intrusion Detection Dataset
https://jcbi.org/index.php/Main/article/view/554
<p>In recent years, the escalation of cyber threats has underscored the need for advanced intrusion detection systems (IDS). This study explores the application of deep learning (DL) strategies to enhance IDS capabilities, utilizing the University of Nevada Reno Intrusion Detection Dataset (UNR IDD) as the benchmark. The UNR IDD dataset, known for its diverse set of network traffic patterns, gives a rich foundation for training and comparing deep learning models. We investigated numerous deep learning architectures, including Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), and Artificial Neural Networks (ANNs), as well as a hybrid model combining CNNs and RNNs. Our models were evaluated based on detection accuracy, false positive rates, and computational performance. Results show that deep learning techniques, particularly the hybrid model, offer significant improvements over traditional techniques, attaining a detection accuracy of up to 96.2% and a false positive rate as low as 1.5%. This work contributes to the field by showcasing the efficacy of advanced neural network techniques in real-world intrusion detection scenarios, paving the way for more robust and adaptive security solutions.</p>Fakhra Parveen, Sajid Iqbal, Gohar Mumtaz, Muqaddas Salahuddin
Copyright (c) 2024 Journal of Computing & Biomedical Informatics
2024-09-012024-09-01702Comparative Risk Analysis and Price Prediction of Corporate Shares Using Deep Learning Models like LSTM and Machine Learning Models
https://jcbi.org/index.php/Main/article/view/604
<p>The prediction of share prices and risk analysis have always posed significant challenges for investors due to the influence of various economic, financial, and political factors. Inaccurate price predictions can lead to severe financial losses, particularly for investors with limited financial market knowledge. Recent research and advancements in Artificial Intelligence, Machine Learning, and Deep Learning models have greatly improved the accuracy of stock price predictions. This research focuses on applying the Long Short-Term Memory model, a specialized Deep Learning technique, to predict the closing prices of Tech Industry stocks. The study calculates the Root Mean Squared Error (RMSE) and Mean Absolute Percentage Error (MAPE) to evaluate the model’s performance. Additionally, the results are compared with other models, including Feed Forward Neural Networks, Recurrent Neural Networks, and Machine Learning models like Support Vector Machine and Gradient Boosting, using the Weighted Average metric. The Long Short-Term Memory model showed the lowest weighted average error value of 0.0115, establishing it as one of the most effective models for predicting stock prices. The findings have significant implications for investors and risk analysts, particularly in the Tech Industry, offering a robust tool for improving stock price prediction accuracy.</p>Muhammad MehdiFawad NasimMuhammad Qasim Munir
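RMSE and MAPE, the two evaluation metrics named above, are standard and can be sketched in a few lines of plain Python (the price series below are hypothetical placeholders, not the study's data):

```python
import math

def rmse(actual, predicted):
    # Root Mean Squared Error: square root of the mean squared residual
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual))

def mape(actual, predicted):
    # Mean Absolute Percentage Error, expressed as a percentage
    return 100 * sum(abs((a - p) / a) for a, p in zip(actual, predicted)) / len(actual)

# Hypothetical closing prices vs. model predictions
actual = [150.0, 152.5, 149.8, 153.2]
predicted = [149.5, 152.0, 150.5, 152.8]
print(rmse(actual, predicted), mape(actual, predicted))
```

A lower value is better for both; the weighted-average comparison in the study aggregates such per-stock errors across models.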
Copyright (c) 2024 Journal of Computing & Biomedical Informatics
2024-09-012024-09-01702Embedded Descriptor Carriers Computation using Multi-Layer Neural Networks on Large Datasets
https://jcbi.org/index.php/Main/article/view/520
<p>Intelligent and effective visual feature extraction from extensive datasets is an unavoidable need today. In content-based image retrieval (CBIR), raw image labels must correspond to visual characteristics in order to retrieve images by content. CBIR is a widely used method progressively applied in retrieval systems, and the primary job of a CNN here is to retrieve authentic and useful images. Various methods have been employed to enhance the effectiveness and reliability of image exploration, such as filename-based searches and image tagging; however, these techniques have not proven successful in real-world applications. To effectively categorize images and function as a filter, the feature vector must include comprehensive visual information, encompassing elements such as color, shape, objects, and different types of spatial data. By incorporating these details, the feature vector can more accurately define the image's category and improve the overall efficiency of image exploration. The proposed method excels at detecting, describing, recognizing, and correlating image signatures that accurately reflect the true content of an image. It achieves this by categorizing semantic groupings of nearly identical images, and it is particularly effective during image retrieval and feature detection. The methodology details a convolutional neural network (CNN) based method for colorizing grayscale images. The approach begins with defining the area of interest and potentially converting the image to grayscale. Pixel intensities are compared, and patterns within the image are identified using techniques such as concentric, retinal, or log-polar sampling. Selective pixel sampling, differentiation to analyze neighboring pixels, and smoothing to reduce noise are all employed. Convolution and pooling operations further refine the data, after which activation functions such as ReLU or SoftMax are applied. 
Finally, fully connected layers within a neural network come into play. The later stages involve sample collection, redundancy measures, distance calculations, and techniques such as Bag-of-Words (BoW) and K-Nearest Neighbors (KNN) for image classification: a feature vector (FV) is aggregated, and BoW and KNN results are generated. Extensive experiments are conducted on well-recognized datasets such as ALOT-250 and 17-Flowers, combined with ResNet-50, Inception, and VGG-19. The results indicate that the presented technique achieves high precision and recall rates for large image groups drawn from challenging datasets.</p>Khawaja Tehseen AhmedSidra SarwarAiza ShabbirSyed Burhan ud Din TahirNaila SattarNosheen SaeedAdeeba Rashid KhanSobia Sarfraz
Copyright (c) 2024 Journal of Computing & Biomedical Informatics
2024-09-012024-09-01702A Deep Learning Tool for Early Detection and Control of Lumpy Skin Disease Using Convolutional Neural Networks
https://jcbi.org/index.php/Main/article/view/547
<p>Lumpy skin disease (LSD), a highly contagious viral disease of cattle, continues to pose a significant threat to animal welfare and global economic stability. Early detection and intervention are crucial for mitigating its impact. This research explored the potential of convolutional neural networks (CNNs) for automated LSD classification based on clinical and laboratory data. We compared two prominent CNN architectures, Inception and Xception, in their ability to identify patterns and predict LSD occurrence. Both models were trained on a large dataset of labeled images, effectively learning to distinguish LSD-infected animals from healthy ones. However, Xception emerged as the superior technique, achieving a remarkable 98.8% accuracy compared to Inception's 94%. This 4.8% improvement in accuracy demonstrates the potential of Xception for more precise and reliable LSD detection. These findings suggest that CNNs, particularly Xception, can be valuable tools for early LSD diagnosis, enabling prompt veterinary intervention and reducing disease spread. Integrating this technology into veterinary practices can significantly improve animal health management and disease control efforts, ultimately minimizing LSD's global impact on cattle populations.</p>Muhammad Zain ShakeelNusratullah TauheedMuhammad Toseef JavaidTayyaba AslamMuhammad UbaidullahNabeela YaqoobMuhammad Ayaz Zafar
Copyright (c) 2024 Journal of Computing & Biomedical Informatics
2024-09-012024-09-01702Enhancing Database Security through AI-Based Intrusion Detection System
https://jcbi.org/index.php/Main/article/view/563
<p>Cybersecurity attacks on network database systems are becoming widespread, causing many problems for individuals and organizations. To improve intrusion detection for database security, this study proposes the use of cognition-based models. Artificial intelligence algorithms are used as a first step to determine the most important features of the network data, and database security is improved through these advanced techniques. Four classification algorithms are used for intrusion detection: K-nearest neighbor (KNN), support vector machine (SVM), decision tree (DT), and convolutional neural network (CNN). The performance of the intrusion detection model is demonstrated and analyzed on the NSL-KDD dataset. According to the empirical results, the proposed method improves detection, and the proposed model outperforms the original model. This study uses the four classification algorithms to identify four types of network attacks, namely DoS attacks, U2R attacks, R2L attacks, and packet sniffing attacks; the CNN classifier achieves higher accuracy than the other classifiers, at 98.4%.</p>Rafeeq AhmadHumayun SalahuddinAttique Ur RehmanAbdul RehmanMuhammad Umar ShafiqM Asif TahirMuhammad Sohail Afzal
Copyright (c) 2024 Journal of Computing & Biomedical Informatics
2024-09-012024-09-01702Graphical and Computational Analysis of Cayley-Dickson Algebra Elements over Zp
https://jcbi.org/index.php/Main/article/view/617
<p>In order to determine the number of idempotent, nilpotent, and zero-divisor elements inside this algebraic structure, this research work gives a thorough study of Cayley-Dickson algebra over Zp. We also look at how many unit elements there are in the algebra and offer a basic method for determining how many units are typically present. We provide pseudo code and techniques that may be used to locate these algebraic elements in order to make practical implementation easier. Furthermore, we create a graphical depiction of the idempotent, nilpotent, zero-divisor, and unit elements using MATLAB. By utilizing these discoveries, we expand knowledge in the area of Cayley-Dickson algebras and offer an important resource for future study and application in related fields. The computed results are useful in time-like, light-like, and space-like contexts.</p>Saba ZahidMuazzam AliM Usman HashmiAbdul MananAffan Ahmad
Copyright (c) 2024 Journal of Computing & Biomedical Informatics
2024-09-012024-09-01702Optimized XGBoost-Based Model for Accurate Detection and Classification of COVID-19 Pneumonia
https://jcbi.org/index.php/Main/article/view/723
<p>The accurate diagnosis of COVID-19 pneumonia is a critical global health challenge, particularly for vulnerable populations. Existing diagnostic methods often lack precision due to limited algorithm sophistication and insufficient dataset validation. This study addresses these issues by introducing a customized XGBoost algorithm for classifying COVID-19 pneumonia. The methodology follows a four-phase approach: (1) data acquisition from a comprehensive GitHub dataset, (2) data preprocessing with augmentation and normalization, (3) model training using XGBoost, and (4) evaluation against existing models. The model achieves an average accuracy of 87.35%, demonstrating superior performance in accuracy and diagnostic precision compared to current methods. The findings of this research provide a systematic framework for improving pneumonia classification and set the stage for future AI-driven healthcare advancements in respiratory diseases.</p>Fazal MalikMuhammad SulimanMuhammad Qasim KhanNoor RahmanMohammad Khan
Copyright (c) 2024 Journal of Computing & Biomedical Informatics
2024-09-012024-09-01702Analyzing Breast Cancer Detection Using Machine Learning & Deep Learning Techniques
https://jcbi.org/index.php/Main/article/view/542
<p>The most recent statistics show that of all cancers, cancer of the breast is the most common, killing about 900,000 individuals annually. Finding the disease early and diagnosing it correctly can increase the chances of a good outcome, which lowers the death rate; early diagnosis can, in fact, stop the disease from spreading before it claims premature victims. In this work, a comparison is made between advanced deep learning techniques and traditional machine learning for the analysis of breast cancer. We evaluated a deep learning model based on neural networks against traditional machine learning approaches such as Support Vector Classifier (SVC), Decision Tree, and Random Forest. The diverse dataset of this investigation included several demographic and clinical features: age, family history, genetic mutation, hormone therapy, mammogram results, breast pain, menopausal status, BMI, alcohol consumption, physical activity, smoking status, breast cancer diagnosis, frequency of screening, awareness source, symptom awareness, screening preference, and geographical location. SVC obtained an accuracy of 86.36%, Decision Tree 86.18%, and Random Forest 86.00%. The deep learning model, a neural network, outperformed these results with the highest accuracy of 93%. This study thus compares deep learning algorithms with more traditional machine learning methods to evaluate their diagnostic usefulness for breast cancer analysis.</p>Aiman FatimaAiman Shabbir Jamshaid Iqbal JanjuaSadaqat Ali RamayRizwan Abid BhattyMuhammad IrfanTahir Abbas
Copyright (c) 2024 Journal of Computing & Biomedical Informatics
2024-09-012024-09-01702RapidMiner-based Clustering Techniques for Enhancing Intrusion Detection System (IDS) Performance
https://jcbi.org/index.php/Main/article/view/521
<p>Cybersecurity is the process of protecting networks, computers, servers, mobile devices, electronic systems, and data against hostile intrusions, and protection from the latest cyber-attacks is the need of the hour. By examining traffic, Intrusion Detection Systems (IDS) help identify possible dangers, unauthorized access, and unusual activity, and notify administrators to take appropriate action. Machine Learning (ML) clustering techniques are widely used to improve IDS. In this research study, the efficiency of clustering and classification techniques, such as Support Vector Machines (SVM), Boosting Naïve Bayes (BNB), K-Means, and K-Medoids, is examined. Further, we divided our research study into cyber-attack prediction and cyber-attack detection categories. We used the SVM and BNB classification approaches for cyber-attack prediction and compared the results, while the K-Means and K-Medoids clustering approaches were used for cyber-attack detection and their results compared. Finally, we concluded that SVM is the better approach for cyber-attack prediction and K-Medoids is the better approach for cyber-attack detection.</p>Johar MumtazSyed Asad Ali NaqviMuhammad Haroon AhmadMudassar RehmanGohar Mumtaz
Copyright (c) 2024 Journal of Computing & Biomedical Informatics
2024-09-012024-09-01702Sentiment Analysis of Social Media Data: Understanding Public Perception
https://jcbi.org/index.php/Main/article/view/585
<p>This paper provides a sentiment analysis model that combines dimensionality reduction, part-of-speech tagging, and natural language processing (NLP) for social media data. The model uses machine learning methods (Naive Bayes, Support Vector Machine, and K-Nearest Neighbor) to properly categorize sentiment as positive, negative, or neutral. The model's performance was assessed using two datasets and compared to existing sentiment analysis algorithms. The outcomes show improved performance and offer insight into the public's thoughts on various topics. This work addresses the problem of language-specific models and advances the creation of accurate sentiment analysis models. In contrast to conventional polls, the study's conclusions present a novel viewpoint on public opinion and offer suggestions for improving the platform so that users can access additional options and conveniences. The proposed model has potential applications in social media monitoring, market research, and political analysis. Future work can extend the model to accommodate multiple languages and explore the use of deep learning techniques. By providing a more accurate and efficient sentiment analysis tool, this research contributes to the growing field of social media analytics and its practical applications.</p>Shumaila MughalArfan JaffarM Waleed Arif
Copyright (c) 2024 Journal of Computing & Biomedical Informatics
2024-09-012024-09-01702Comparative Analysis of Machine Learning Algorithms for Breast Cancer Classification
https://jcbi.org/index.php/Main/article/view/564
<p>Breast cancer is one of the most fatal diseases among women; hence, there is a need to develop reliable diagnostic tools to detect breast cancer early for treatment. Machine learning is a powerful approach to breast cancer classification. This systematic review aims to provide a comprehensive comparison of the various machine learning algorithms used for the classification of breast cancer. The key algorithms, including support vector machines (SVM), decision tree (DT), random forest (RF), K-nearest neighbor (KNN), logistic regression (LR), and convolutional neural network (CNN), are evaluated in terms of performance metrics and model accuracy. We analyze the data reported in numerous papers and identify the strengths and limitations of each algorithm in different scenarios. In addition, we discuss the impact of data preprocessing techniques, feature selection methods, and the role of ensemble learning on classification performance. We found that no single algorithm consistently outperforms the others across all metrics, suggesting that a hybrid approach may offer the most robust solution.</p>Muhammad Shaharyar RamzanGohar MumtazNazish RasheedZeeshan Mubeen
Copyright (c) 2024 Journal of Computing & Biomedical Informatics
2024-09-012024-09-01702Enhancing Breast Cancer Diagnosis with Integrated Dimensionality Reduction and Machine Learning Techniques
https://jcbi.org/index.php/Main/article/view/573
<p>Breast cancer remains a significant cause of cancer-related mortality worldwide, highlighting the critical need for advancements in diagnostic techniques. Recent diagnostic methods, while effective, often face limitations in accuracy and efficiency. This paper aims to differentiate between tumorous (malignant) and non-tumorous (benign) cases of breast cancer using three publicly available datasets: Wisconsin Breast Cancer (WBC), Wisconsin Diagnostic Breast Cancer (WDBC), and Wisconsin Prognostic Breast Cancer (WPBC) datasets. We applied popular supervised machine learning classifiers, including Multi-layer Perceptron (MLP), Support Vector Machine (SVM), Random Forest (RF), Decision Tree (DT), Naïve Bayes (NB), and K-Nearest Neighbor (KNN), in combination with dimensionality reduction techniques such as Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA), and Factor Analysis (FA). The classifiers were evaluated based on accuracy, precision, recall and F1 score. The results show that, due to FA's emphasis on feature selection and noise reduction, the SVM with FA achieved the highest accuracy of 98.64% on the WBC dataset. MLP without any dimensionality reduction performed best with an accuracy of 98.26% on WDBC. Conversely, MLP and SVM with LDA yield an accuracy of 89.80% on the more complex and noisy WPBC dataset.</p>Aqeel Ahmed KhanMuhammad Abu Bakr
Copyright (c) 2024 Journal of Computing & Biomedical Informatics
2024-09-012024-09-01702IoT Based Smart Vehicular Identification using Machine Learning Techniques
https://jcbi.org/index.php/Main/article/view/540
<p>Technology is growing fast, so a secure way of life and travel is in high demand in the community. A massive growth in traffic is caused by an excessively growing world population on modern, hi-tech roads, with more and more cars and trucks on the road. It is the demand of the time to identify vehicles and measure traffic congestion on roads. Vehicle identification is crucial for control and monitoring systems. Number plates, which are a specific pairing of numbers and letters, are used to identify vehicles. However, manually reading the license plate of every single parked or moving car is a difficult and time-consuming task. Due to the tremendous daily growth of the vehicle industry, tracking individual vehicles has become a difficult undertaking. In the majority of applications involving the movement of vehicles, the identification and detection of a vehicle Number Plate (NP) are crucial techniques; in the field of image processing, this is also a hotly debated and ongoing research topic. Numerous techniques, methods, and algorithms have been created for discovering and identifying vehicle NPs. Here, the You Only Look Once (YOLO) algorithm is applied to detect as well as classify the vehicle NP. The proposed model shows higher accuracy than previous models.</p>Abdul AzizUzair AhmadShoaib SaleemMunahil KhursheedArslan MunirManal AhmadMuhammad Ahsan Jamil
Copyright (c) 2024 Journal of Computing & Biomedical Informatics
2024-09-012024-09-01702Moderating Role of Hope in Relationship between Anhedonia and Emptiness among Schizophrenic Individuals
https://jcbi.org/index.php/Main/article/view/555
<p>The present study examined the moderating role of hope between anhedonia and emptiness in schizophrenic patients. The term "anhedonia" refers to the schizophrenic patient’s lessened capacity to experience pleasant emotions and satisfaction in response to events and activities that normally trigger these sensations. Hope is thought to play a role in the recovery and rehabilitation process following mental illness and to be a major contributor to subjective well-being. The study was based on a cross-sectional survey research design. Participants comprised schizophrenic patients (<em>N</em> = 300) from hospitals situated in Multan and Sargodha; both men and women participated in the study. Data were collected using a purposive sampling technique. Four self-report measures, including a demographic data sheet, the Subjective Emptiness Scale, the Hope in Schizophrenia Scale, and the Anhedonia in Adolescents Scale, were used for data collection. Moderation analysis was applied to test the hypotheses that as hope increases in schizophrenic individuals, the degree of anhedonia and emptiness among them decreases, and that hope acts as a moderator between anhedonia and emptiness. The findings revealed that a high level of hope in schizophrenic patients decreased the effect of anhedonia on emptiness, empirically establishing that a high level of hope can be used to prevent the effects of anhedonia on emptiness in schizophrenic patients.</p>Syeda MuskanNeelam ZafarAimen Fatima
Copyright (c) 2024 Journal of Computing & Biomedical Informatics
2024-09-012024-09-01702Evaluating CNN Effectiveness in SQL Injection Attack Detection
https://jcbi.org/index.php/Main/article/view/550
<p>SQL injection attacks are among the most prominent threats to Web application security, intended to illegitimately access sensitive information by exploiting related vulnerabilities. Their detection with traditional rule-based approaches is futile in view of the evolving nature and complexity of SQL Injection Attacks (SQLIA). This paper proposes a new approach to detecting SQLIA using Convolutional Neural Networks (CNNs), a deep learning technique well known for its capability to automatically learn intricate patterns and representations from large-scale datasets. We leverage this strength of CNNs on the structure and semantics of SQL queries to help differentiate malicious from benign inputs. We describe a detailed methodology that includes data preprocessing, feature extraction, model training, and evaluation. The proposed CNN model is trained and tested on a large dataset containing 109,520 SQL queries, achieving an accuracy of 97.41%. Further, we tested the efficiency of the model using precision, recall, and F1-score, and it proved effective for the identification and classification of SQLIA. The model showed a high precision of 96.50% and a high recall of 99.00%, giving it the capability to reduce false positives and false negatives, and the balanced F1-score of 97.00% confirms that the model performs well in detecting and classifying SQLIAs. These results indicate that deep learning techniques, and particularly CNNs, have the potential to be very useful in enhancing web application security by providing a robust, adaptive solution for mitigating the risks caused by SQL injection attacks.</p>Muhammad ShahbazGohar MumtazSaleem ZubairMudassar Rehman
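The precision, recall, and F1 figures quoted above follow the standard confusion-matrix definitions; a minimal sketch (the counts here are hypothetical, not the paper's):

```python
def prf1(tp, fp, fn):
    # Precision: share of queries flagged as malicious that really are
    precision = tp / (tp + fp)
    # Recall: share of malicious queries that were flagged
    recall = tp / (tp + fn)
    # F1: harmonic mean of precision and recall
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Hypothetical confusion counts for a held-out test split
precision, recall, f1 = prf1(tp=990, fp=36, fn=10)
print(f"P={precision:.4f} R={recall:.4f} F1={f1:.4f}")
```

Because F1 is the harmonic mean, it is pulled toward the lower of precision and recall, which is why a detector must keep both false positives and false negatives down to score well.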
Copyright (c) 2024 Journal of Computing & Biomedical Informatics
2024-09-012024-09-01702Identification of Skin Cancer Using Machine Learning
https://jcbi.org/index.php/Main/article/view/627
<p>Skin cancer, characterized as a chronic disease, demands time-consuming and costly medical tests for accurate detection, thereby introducing risks associated with treatment delays. Acknowledging the critical need for efficient skin cancer detection, this thesis endeavors to make a significant contribution by proposing an advanced deep learning methodology. The innovative approach involves enhancing the ResNet model with SE modules and integrating a maximum pooling layer within the ResBlock shortcut connection. In comparison to established models (ResNet-50, SENet, DenseNet, and GoogleNet), the proposed method surpasses them in accuracy, parameter efficiency, and computation speed, achieving an impressive average recognition accuracy of 97.48% on a comprehensive 2142-image dataset. This transformative solution aspires to not only revolutionize skin cancer detection but also elevate the standard of patient care in this critical domain.</p>Iqra IbraheemSadaqat Ali RamayTahir AbbasRizwan ul HassanShouzab Khan
Copyright (c) 2024 Journal of Computing & Biomedical Informatics
2024-09-012024-09-01702Evaluating Rainfall Prediction Models on Time Series Data from Bangladesh and Pakistan: A Comparative Approach
https://jcbi.org/index.php/Main/article/view/718
<p>Predicting rainfall is a challenging task due to the multitude of factors and elements that influence climate conditions. Accurate rainfall forecasts are extremely important, especially for the agriculture industry, which depends heavily on timely and adequate rainfall for crop growth and yield. The need for precise rainfall forecasts is further highlighted by the economic contribution that agriculture makes. Around the world, a variety of techniques have been employed to forecast rainfall patterns. This paper presents a comparative analysis of various rainfall prediction models, evaluating both statistical and machine learning approaches on time series data. We evaluate the performance of ARIMA, Random Forest, Linear Regression, Gradient Boosting, and SVM models on distinct datasets from Bangladesh and Pakistan, covering different geographical regions and climatic conditions. The ARIMA(3, 2, 1) model is applied to both datasets, demonstrating consistent performance on the Pakistan dataset with minimal differences between in-sample and out-of-sample results, indicating reliable forecasting ability. In contrast, the Bangladesh dataset shows a noticeable drop in performance from in-sample to out-of-sample, suggesting potential overfitting. Additionally, machine learning models such as Gradient Boosting (GB), Random Forest (RF), Support Vector Regressor (SVR), and Linear Regression (LR) are utilized. For the Bangladesh dataset, Gradient Boosting outperformed the others with the lowest error values (MSE: 6435.975, RMSE: 80.225, MAE: 51.064) and the highest R² score (0.842). 
On the Pakistan dataset, Linear Regression produced the best results with the lowest MSE (270.138), RMSE (16.436), and MAE (11.572). Our findings highlight the strengths and limitations of each model, offering insights into their applicability for accurate rainfall forecasting.</p>Hira farmanNoman HasanyHamna SalmanGovarishankarAlisha Farman
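For readers unfamiliar with the ARIMA(3, 2, 1) notation used above, the middle parameter denotes second-order differencing of the series before the autoregressive (3) and moving-average (1) terms are fit; a minimal illustration with made-up rainfall values:

```python
def difference(series, order=1):
    # Apply first-order differencing `order` times:
    # each pass replaces the series with its consecutive deltas
    for _ in range(order):
        series = [b - a for a, b in zip(series, series[1:])]
    return series

# Hypothetical monthly rainfall (mm); not the study's data
rainfall = [120.0, 95.0, 140.0, 160.0, 110.0]
print(difference(rainfall, order=2))  # → [70.0, -25.0, -70.0]
```

Differencing twice removes linear trends from the series, which is the usual reason for choosing d = 2 when the raw series is non-stationary.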
Copyright (c) 2024 Journal of Computing & Biomedical Informatics
2024-09-012024-09-01702Predictive Modeling for Early Detection and Risk Assessment of Cardiovascular Diseases Using the Ensemble Stacked Neural Network Model
https://jcbi.org/index.php/Main/article/view/603
<p>Medical experts face difficulty in making the right decision about cardiac arrest. Early detection of cardiovascular disease means a better chance of survival for the patient; otherwise it can lead to death. This study proposes a state-of-the-art model named Ensemble Stacked Neural Network (ESNN) that combines both machine learning (ML) and deep learning (DL) techniques for early detection and risk assessment of cardiac disease. Our model pools multiple widely recognized cardiac disease datasets (Cleveland, Hungarian, Switzerland, Long Beach VA, and Statlog) to create a comprehensive dataset for training and evaluation. The ESNN model begins with extensive data pre-processing, including the elimination of null values, outliers, and duplicates, followed by Z-score normalization to standardize the feature scales. We address class imbalance using the randomized oversampling technique, and Principal Component Analysis (PCA) is applied for feature reduction, ensuring the most informative components are retained. We employed a diverse set of nine ML algorithms, including XGBoost and naïve Bayes, achieving individual accuracies ranging from 69% to 89%. The ESNN model integrates a neural network that is trained and validated on the processed data, producing predictions that serve as additional features for the subsequent ML models. The nine chosen classifiers are used as base models and are tuned using GridSearchCV for optimal hyperparameters; the base models consist of DT, RF, LR, gradient boosting, XGBoost, AdaBoost, SVM, KNN, and NB, with RF used as the meta-model in a stacking ensemble framework. The proposed model (ESNN) was carefully trained and tested and achieved an accuracy of 95%. This integration of machine learning and deep neural network methods within the ESNN framework establishes strong predictive modelling and provides a reliable tool for cardiologists to diagnose heart disease risk.</p>Muhammad ImranSadaqat Ali RamayTahir Abbas
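Among the pre-processing steps described above, Z-score normalization is the most mechanical; a pure-Python sketch (illustrative values only, not the pooled cardiac dataset):

```python
import math

def zscore(column):
    # Standardize a feature column to zero mean and unit variance
    mean = sum(column) / len(column)
    std = math.sqrt(sum((x - mean) ** 2 for x in column) / len(column))
    return [(x - mean) / std for x in column]

# Hypothetical patient ages before feeding the stacked ensemble
print(zscore([29.0, 45.0, 61.0, 53.0]))
```

Standardizing each feature this way keeps distance- and gradient-based base models (KNN, SVM, the neural network) from being dominated by features measured on larger scales.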
Copyright (c) 2024 Journal of Computing & Biomedical Informatics
2024-09-012024-09-01702Deep Learning-Based Brain Tumor Detection
https://jcbi.org/index.php/Main/article/view/553
<p>Brain tumors are one of the main causes of death in the world. Medical imaging has recently advanced significantly in both methodologies and applications, enhancing its effectiveness in healthcare management. The brain tumor and pancreatic tumor databases yield the most accurate and comprehensive results and are crucial resources in medical research. In terms of efficiency, precision, creativity, and other factors, these strategies improved performance. The dataset was preprocessed before being used to assess how well deep learning models identify and categorize brain cancers. Low-grade gliomas, categorized as grades I and II, are often treatable through complete surgical removal; conversely, high-grade gliomas of grades III and IV usually necessitate additional treatment with radiation. The proposed model yields highly effective results, achieving a performance of 96% accuracy. The tumor is then classified using an enhanced thresholding method informed by the binomial mean, variance, and standard deviation. To highlight the performance of the suggested framework, the novelty of the method is rigorously contrasted with accepted techniques. In addition, both geometric features and four texture attributes are extracted; these features are combined using a step-by-step process, and the optimal features are selected using a Genetic Algorithm (GA).</p>Shaz Mumtaz KhanFawad NasimJawad AhmadSohail Masood
Copyright (c) 2024 Journal of Computing & Biomedical Informatics
2024-09-012024-09-01702Abstractive Text Summarization for Urdu Language
https://jcbi.org/index.php/Main/article/view/596
<p>The quantity of textual data in the online realm is increasing in the blink of an eye, and it has become very difficult to extract useful information from such an enormous bundle of information. Automated text summarization, an area of natural language processing, is responsible for producing gists, abstracts, and summaries of written text in a variety of human languages; these are of high quality and contain the relevant information. Extractive and abstractive summarization are the two methods that can be used to summarize information, and a great deal of research is being conducted especially on extractive summarization. For the Urdu language, mainly spoken in South Asia, there have been no research efforts in abstractive summarization up till now, so research in this domain is much needed. In the proposed research work we use an amalgam of extractive and abstractive algorithms to generate summaries. Sentence weight, TF-IDF, and word frequency algorithms are used for extractive summaries, and a hybrid technique is utilized so that the findings of the extractive summaries can be improved. The abstractive summaries are produced once the summaries provided by the hybrid approach have been processed using the BERT model. In order to analyze the summaries that were automatically created by the system, we enlisted the assistance of Urdu language experts.</p>Asif RazaMuhammad Hanif SoomroSalahuddinInzamam ShahzadSaima Batool
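Of the extractive algorithms listed above, TF-IDF sentence scoring can be sketched as follows (English placeholder sentences and whitespace tokenization stand in for a proper Urdu tokenizer, and the length normalization is an assumption, not the paper's exact weighting):

```python
import math
from collections import Counter

def tfidf_sentence_scores(sentences):
    # Tokenize each sentence; a real system would use an Urdu tokenizer
    docs = [s.lower().split() for s in sentences]
    n = len(docs)
    # Document frequency: number of sentences containing each word
    df = Counter(w for doc in docs for w in set(doc))
    scores = []
    for doc in docs:
        tf = Counter(doc)
        # Sentence score = sum of TF-IDF of its words, normalized by length
        score = sum((tf[w] / len(doc)) * math.log(n / df[w]) for w in tf)
        scores.append(score)
    return scores

sentences = [
    "the cat sat on the mat",
    "the dog barked at the cat",
    "summarization selects the most informative sentences",
]
print(tfidf_sentence_scores(sentences))
```

The highest-scoring sentences form the extractive summary, which the hybrid pipeline then feeds to the BERT-based abstractive stage.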
Copyright (c) 2024 Journal of Computing & Biomedical Informatics
2024-09-012024-09-01702Analyzing the Limitations and Efficiency of Configuration Strategies in Hybrid Cloud Environments
https://jcbi.org/index.php/Main/article/view/562
<p>The growth in complexity and size of today's computing requirements has prompted the creation and widespread adoption of cloud computing as an accepted model for data processing and storage. The hybrid cloud architecture, which combines private and public cloud structures, is very appealing to organizations that want both the security benefits of private clouds and the scalability of public clouds. Numerous approaches to setting up and overseeing resources have been created in order to get the most out of the hybrid cloud environment. These tactics seek to strike a balance between competing goals such as security, performance optimization, cost effectiveness, and regulatory compliance. This observational research aimed to assess the effectiveness and drawbacks of various configuration approaches in hybrid cloud settings. The results show that every strategy offers unique benefits. Policy-based resource management has several benefits, including increased resource efficiency and automated governance procedures, which reduce costs. Through intelligent traffic routing, cross-cloud load sharing improves performance and raises availability of services. By centralizing control, the hybrid cloud service mesh makes cross-service connectivity secure and effective. One noteworthy aspect of container orchestration across clouds is its capacity to streamline application migrations between various cloud environments. Log management and analytics enable real-time monitoring for prompt threat detection and regulatory compliance. On the other hand, policy-based resource management can be rigid and complicated. The additional expense of data transport across several cloud providers is a disadvantage of cross-cloud load sharing. Latency problems arise in hybrid cloud service mesh topologies when there are extra network hops. Cross-cloud container orchestration may put the system at risk of security issues if it is set up improperly. Lastly, substantial storage and sophisticated analytical skills are needed for log management and analytics.</p>Waddiat U ZahraMuhammad Talha AmjadAnam AhsanGohar Mumtaz
Copyright (c) 2024 Journal of Computing & Biomedical Informatics
2024-09-012024-09-01702Development of OWL Structure for Recommending Database Management Systems (DBMS)
https://jcbi.org/index.php/Main/article/view/618
<p>This research focuses on the development of an OWL (Web Ontology Language) structure designed specifically for recommending Database Management Systems (DBMS). The proliferation of various types of DBMSs and their diverse features pose a challenge for users seeking optimal choices based on specific requirements. OWL provides a standardized framework for representing knowledge and semantics, making it suitable for modeling the complex relationships and characteristics of DBMSs. The methodology involves defining OWL classes and properties to capture essential attributes such as data model, scalability, performance, security features, and compatibility with different operating systems and programming languages. Additionally, the ontology incorporates user preferences and requirements as input to refine the recommendations. The implementation includes the creation of an OWL ontology populated with information about existing DBMSs, their capabilities, and user reviews. Reasoning mechanisms are employed to infer relationships and derive recommendations based on the user's specified criteria. Evaluation of the OWL structure involves testing its effectiveness in accurately recommending suitable DBMSs compared to traditional methods. Metrics such as recommendation accuracy, coverage of DBMS features, and user satisfaction will be used to assess the performance of the ontology-based recommendation system. The outcomes of this research are expected to provide valuable insights into the application of semantic technologies, specifically OWL, in enhancing the selection process of DBMSs. By leveraging structured knowledge representation, the developed OWL structure aims to facilitate informed decision-making and improve the efficiency of DBMS selection for diverse applications and user requirements.</p>Salahuddin Abdul Manan RazzaqSyed Shahid AbbasMohsin IkhlaqPrince Hamza ShafiqueInzimam Shahzad
Copyright (c) 2024 Journal of Computing & Biomedical Informatics
2024-09-012024-09-01702Human Gene Characterization and Pedigree Analysis for Genetic Disease Prediction Using Machine Learning
https://jcbi.org/index.php/Main/article/view/541
<p>A disorder is a disease that disrupts the regular operation of some part of the human body, and some gene mutations lead to genetic disorders. Autosomal dominant and autosomal recessive disorders are two forms of genetic disorder. This study categorized genetic disorders into single-gene inheritance disorders, multifactor genetic disorders, and mitochondrial genetic disorders. In single-gene inheritance, the mother or the father is affected, and their genome carries a genetic mutation that causes a specific genetic condition. The interaction of environmental factors, such as radiation, pollution, medicines, and smoke exposure, with mutated genes can result in a multifactor genetic disorder. Almost 1 in 213 children is affected by a gene mutation. The increasing prevalence of genetic diseases demands proper measures for early identification, which could help tackle and lessen the damage. Traditional identification is time-consuming and expensive, so it is easy to miss the signs of early disease, and interpretations of the same disease can differ between geneticists. There is therefore a critical need for early-stage detection of genetic diseases. Researchers have deployed many AI, ML, and DL methods for early classification and identification of genetic diseases, which have proved cost-effective and time-saving. These algorithms are intended to reveal relevant information from data that could assist in clinical decision-making. Keeping the problems mentioned above in view, this study automated the classification and detection of major multi-genetic diseases such as Leigh syndrome, Tay-Sachs, CS, diabetes, LHON, hemochromatosis, and mitochondrial myopathy using six different machine learning algorithms: KNN, LR, DT, RFC, Multinomial Naïve Bayes, and Gaussian Naïve Bayes, with the aim of improving accuracy. The proposed models achieve accuracies of 98%, 26%, 70%, 97%, 27%, and 25%, respectively. The proposed system will further help geneticists make diagnostic decisions.</p>Tayyaba AslamToseef JavaidShahan Yamin SiddiquiUmer FarooqNusratullah TauheedMuhammad Zain ShakeelUnaiza Rehman
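As one illustration of the classifiers listed, a minimal Gaussian Naïve Bayes can be written from scratch: fit per-class feature means and variances, then pick the class with the highest log-likelihood. The two-feature "marker level" vectors below are invented toy data, not the study's dataset:

```python
import math
from collections import defaultdict

class GaussianNB:
    """Minimal Gaussian Naive Bayes: per-class feature means and variances."""

    def fit(self, X, y):
        groups = defaultdict(list)
        for xi, yi in zip(X, y):
            groups[yi].append(xi)
        self.stats = {}
        self.priors = {}
        for label, rows in groups.items():
            cols = list(zip(*rows))
            means = [sum(c) / len(c) for c in cols]
            # Small epsilon keeps the variance strictly positive
            vars_ = [sum((v - m) ** 2 for v in c) / len(c) + 1e-9
                     for c, m in zip(cols, means)]
            self.stats[label] = (means, vars_)
            self.priors[label] = len(rows) / len(X)
        return self

    def _log_likelihood(self, x, label):
        means, vars_ = self.stats[label]
        ll = math.log(self.priors[label])
        for v, m, var in zip(x, means, vars_):
            ll += -0.5 * math.log(2 * math.pi * var) - (v - m) ** 2 / (2 * var)
        return ll

    def predict(self, x):
        return max(self.stats, key=lambda lbl: self._log_likelihood(x, lbl))

# Toy "gene marker" vectors: class 1 has elevated marker levels.
X = [[0.1, 0.2], [0.2, 0.1], [0.9, 1.0], [1.1, 0.8]]
y = [0, 0, 1, 1]
model = GaussianNB().fit(X, y)
print(model.predict([1.0, 0.9]))  # close to the class-1 training points
```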
Copyright (c) 2024 Journal of Computing & Biomedical Informatics
2024-09-012024-09-01702A Literature Analysis for the Prediction of Chronic Kidney Diseases
https://jcbi.org/index.php/Main/article/view/586
<p>Chronic Kidney Disease (CKD) is a global health issue that affects millions of people and severely diminishes their quality of life. Many patients progress from short-term to long-term kidney failure, which brings complications such as anaemia, osteoporosis, cardiovascular disease, and end-stage renal disease. This makes early identification and management of CKD paramount, especially for slowing the progression of the disease and improving patient outcomes. The current research focuses on the capability of trained machine learning models to use community, clinical, and laboratory variables to estimate the likelihood of CKD at initial identification. Demographic factors, including age, sex, race, and economic status, are fundamental in determining the possibility of contracting CKD. Proteinuria, serum creatinine, and estimated glomerular filtration rate (eGFR) are the essential markers of renal function and CKD progression. Machine learning algorithms such as decision trees, random forests, and neural networks are used to find patterns that may identify asymptomatic high-risk patients and in turn help predict the risk of developing CKD. This holistic approach improves identification and management of risk factors, potentially delaying or preventing the early development of CKD.</p>Ahmad AbdullahAsif RazaQaisar RasoolUmair RashidMuhammad Minam AzizSaad Rasool
Copyright (c) 2024 Journal of Computing & Biomedical Informatics
2024-09-012024-09-017023-Channel Motor Imagery Classification using Conventional Classifiers and Deep Learning Models
https://jcbi.org/index.php/Main/article/view/560
<p>Brain-computer interfaces (BCIs) are one of the important applications of motor imagery classification using EEG signals, and are designed to help patients afflicted with motor disabilities. The purpose of this study is to assess how well various conventional machine learning and deep learning models classify motor imagery tasks from EEG data recorded on the three channels C3, C4, and Cz. A comprehensive methodology was employed, including preprocessing of raw EEG signals, multi-feature extraction in the time, frequency, and time-frequency domains, and classification with conventional models (Decision Tree, SVM, Random Forest) as well as deep learning architectures such as CNN, RNN, and TSFFnet. The results indicate that Random Forest consistently performed well across the different domains, achieving high accuracy and the lowest mean absolute error among the conventional classification models. Among the deep learning models, TSFFnet reached an accuracy of 99.75%, with the highest precision, recall values close behind, and a minimal mean absolute error of 0.0038. These results reveal that deep learning models, especially TSFFnet, excel at motor imagery classification tasks.</p>Muhammad RehmanImran Kamal MirzaFawad NasimM.Arfan Jaffar
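The multi-feature extraction step can be sketched for the time domain. The sinusoidal signals standing in for the C3, C4, and Cz channels are synthetic, and the specific features (mean, variance, RMS, zero crossings) are common time-domain choices rather than the paper's exact set:

```python
import math

def time_domain_features(signal):
    """Basic per-channel time-domain features often computed before classification."""
    n = len(signal)
    mean = sum(signal) / n
    var = sum((v - mean) ** 2 for v in signal) / n
    rms = math.sqrt(sum(v * v for v in signal) / n)
    # Count sign changes between consecutive samples
    zero_crossings = sum(1 for a, b in zip(signal, signal[1:]) if a * b < 0)
    return {"mean": mean, "variance": var, "rms": rms, "zero_crossings": zero_crossings}

# Synthetic stand-ins for the C3, C4, and Cz channels (real data would come from EEG).
channels = {
    "C3": [math.sin(0.3 * t) for t in range(200)],
    "C4": [math.sin(0.7 * t) for t in range(200)],
    "Cz": [0.5 * math.sin(0.3 * t) for t in range(200)],
}
feature_vector = [v for name in ("C3", "C4", "Cz")
                  for v in time_domain_features(channels[name]).values()]
print(len(feature_vector))  # 4 features x 3 channels = 12
```

The concatenated per-channel vector is what a conventional classifier such as Random Forest would consume.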
Copyright (c) 2024 Journal of Computing & Biomedical Informatics
2024-09-012024-09-01702Internet of Medical Things (IoMT) Enabled Intelligent System for Chronic Disease Prediction Using Deep Machine Learning in Healthcare 5.0
https://jcbi.org/index.php/Main/article/view/628
<p>Accurately diagnosing human diseases is still challenging, even with the advances of Healthcare 5.0, especially for chronic diseases. The Internet of Medical Things (IoMT) has grown quickly worldwide, from tiny wearables to extensive applications in many different industries. Osteoarthritis (OA) is a prevalent chronic joint disease that contributes significantly to morbidity and disability globally and has a major negative influence on quality of life, especially in older people. Knee osteoarthritis (KOA), the most frequent form of OA, significantly diminishes the quality of life of those affected and presents an increasing challenge to public health systems as the world's population ages and life expectancy rises. Even though the precise causes of OA are as yet unknown, this complicated condition frequently affects joints that experience heavy weight and repetitive action; the knee joint is particularly susceptible because of its intricate structure and its function as a weight-bearing joint. Our work investigates the use of deep machine learning techniques, including transfer learning, with X-ray image datasets to predict KOA in an IoMT-enabled system. This strategy is in line with the ideas of Healthcare 5.0, which places a strong emphasis on using cutting-edge technology to provide individualized, patient-centered care. The method makes use of the ResNet-18 architecture and transfer learning to produce precise and effective predictions from knee X-ray images. Our system's integration with the IoMT facilitates real-time data gathering and processing, improving the predictive model's usability and accessibility in clinical settings. Using a dataset of knee X-ray images, the suggested model was trained and validated, yielding a high training accuracy of 98.26% and a validation accuracy of 95.79%. These findings support the model's efficacy in correctly diagnosing osteoarthritis, facilitating prompt diagnosis and treatment. Our results highlight the potential of IoMT and deep machine learning to improve personalized healthcare, especially for the treatment of long-term conditions like osteoarthritis. By incorporating these technologies into Healthcare 5.0 frameworks, it may be possible to improve outcomes and lessen the burden of chronic disease while providing more focused, patient-centered care. We seek to improve OA early diagnosis and customized therapy by utilizing IoMT and state-of-the-art deep machine learning algorithms on X-ray image data for improved healthcare results.</p>Rabia JavedTahir AbbasMuhammad Uzair BaqirSadaqat Ali RamayHina Batool
Copyright (c) 2024 Journal of Computing & Biomedical Informatics
2024-09-012024-09-01702A Comparative Study of CAINE Linux: A Digital Forensics Distribution
https://jcbi.org/index.php/Main/article/view/614
<p>The globalization of crime has made digital forensics an increasingly important factor in solving criminal incidents, and effective tooling is needed to support it. One of the best-known digital forensics distributions is CAINE (Computer Aided Investigative Environment) Linux, which contains tools developed to assist investigators in recovering, analyzing, and preserving data and digital evidence. This review article's purpose is to describe the major characteristics of CAINE Linux as a digital forensics tool and to assess its performance and usefulness. The work also contrasts CAINE Linux with other digital forensics distributions to show how CAINE works and where its limitations lie in various forensic scenarios. As part of the analysis, the authors examine how effective CAINE Linux is as a forensic tool based on factors such as its operational capability, its integration of tools, and how it keeps pace with new threats. It is anticipated that the findings of this study will enable practitioners to select the forensic distribution that best meets the needs of an investigation, thereby enhancing efficiency and efficacy in cybercrime investigation.</p>Talha SaleemZata Ul NataqainMueed SaleemGohar Mumtaz
Copyright (c) 2024 Journal of Computing & Biomedical Informatics
2024-09-012024-09-01702 Evaluating the impact of COVID-19 on Educational system: Challenges, Modifications, and Future Directives
https://jcbi.org/index.php/Main/article/view/544
<p>The World Health Organization proclaimed the COVID-19 pandemic in December 2019, and it has severely disrupted many industries globally, including education. This study shows how the outbreak of COVID-19 affected the education system along with other areas of life. Indefinite holidays molded the system into a new form that was unfamiliar or unacceptable to the majority, and the technical difficulties encountered in online classes particularly affected students and teachers. In this research, we utilized six supervised machine learning techniques to forecast the impact on the annual system of education, using qualitative data obtained from a sample of 1,280 respondents in rural and urban areas of Punjab. The Random Forest algorithm had the highest accuracy and execution-time efficiency both with and without Principal Component Analysis (PCA). The study also identified associations between educational disruption and categorical factors such as course completion, delivery mode, technology use, and disparities.</p>Umar RaufMuhammad RashadLaraib NoorMuhammad Azam
Copyright (c) 2024 Journal of Computing & Biomedical Informatics
2024-09-012024-09-01702PFed-TG: A Personalized Federated Learning Framework for Text Generation
https://jcbi.org/index.php/Main/article/view/601
<p>In recent years, advancements in deep learning and machine learning have spurred the development of various text generation models, particularly through Python programming. This paper introduces PFed-TG, a novel personalized federated learning (PFL) framework for text generation that integrates personalized model training with federated learning principles, leveraging Python's natural language processing (NLP) tools, including the Hugging Face Transformers library. The framework's efficacy is evaluated on the Shakespeare dataset, demonstrating consistent production of contextually relevant text. Performance is assessed using metrics such as ASL, ROUGE-L, BLEU, METEOR, and perplexity, focusing on readability, coherence, and alignment. Results indicate that PFed-TG enhances efficiency and offers insights into optimizing personalized FL models for practical applications across diverse domains like healthcare, finance, and education. This research comprehensively evaluates PFed-TG's methodology, highlighting its potential to advance the field of NLP through innovative FL approaches.</p>Shameen NoorMuhammad AzamFahad SabahFawad NasimKahkisha AyubKinza Parvaiz
Copyright (c) 2024 Journal of Computing & Biomedical Informatics
2024-09-012024-09-01702Anomaly Detection using Clustering (K-Means with DBSCAN) and SMO
https://jcbi.org/index.php/Main/article/view/598
<p>In recent times, AI has become a useful tool for describing the properties of information because it can support the data mining (DM) procedure by analysing data to identify patterns or routines. Anomaly detection is of vital importance in DM: it helps discover hidden behaviour within the most vulnerable data and aids in the detection of network intrusions. This research proposes a model for detecting anomalies using machine learning (ML) techniques. By leveraging ML, the model can achieve higher detection rates and reduce the number of false positives, resulting in an overall improvement in intrusion classification. The study evaluated the proposed hybrid ML technique using the Network Security Knowledge and Data Discovery dataset. K-means and the density-based clustering algorithm DBSCAN were used for clustering, and Sequential Minimal Optimization (SMO) for classification. Testing the suggested anomaly-detection method demonstrates that this hybrid model can increase the positive detection rate and anomaly-detection accuracy while decreasing the rate of false positives. The proposed algorithm showed superior performance compared with recent closely related studies using similar variables and environments, achieving a lower false alarm probability (FAP) and high accuracy. This is due to the hybrid approach producing an optimal number of detectors that exhibit high accuracy and low FAP. For a given small false alarm probability, the required pre-processing and processing time also decreases compared with other algorithms.</p>Umair RashidMuhammad Faheem SaleemSaad RasoolAhmad AbdullahHira MustafaAiman Iqbal
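The K-means half of the clustering stage can be illustrated with a plain implementation on toy 2-D points. The "normal" versus "anomalous" blobs below are invented; the actual study clusters network-traffic features:

```python
def kmeans(points, k, iters=20):
    """Plain k-means on 2-D points; the first k points seed the centroids."""
    centroids = [list(p) for p in points[:k]]
    assign = [0] * len(points)
    for _ in range(iters):
        # Assignment step: nearest centroid by squared Euclidean distance
        for i, p in enumerate(points):
            assign[i] = min(range(k),
                            key=lambda c: (p[0] - centroids[c][0]) ** 2
                                        + (p[1] - centroids[c][1]) ** 2)
        # Update step: each centroid becomes the mean of its members
        for c in range(k):
            members = [p for p, a in zip(points, assign) if a == c]
            if members:
                centroids[c] = [sum(m[0] for m in members) / len(members),
                                sum(m[1] for m in members) / len(members)]
    return assign, centroids

# Two well-separated blobs: "normal" traffic near (0,0), "anomalous" near (5,5)
pts = [(0, 0), (0.2, 0.1), (0.1, 0.3), (5, 5), (5.1, 4.9), (4.8, 5.2)]
labels, centers = kmeans(pts, k=2)
print(labels)
```

In the hybrid pipeline described above, cluster membership (here combined with DBSCAN's density view) would feed the SMO-trained classifier rather than being the final verdict.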
Copyright (c) 2024 Journal of Computing & Biomedical Informatics
2024-09-012024-09-01702An Optimized VGG-19 Architecture Integrated with Support Vector Machines for Lung Cancer Detection and Classification
https://jcbi.org/index.php/Main/article/view/575
<p>Lung cancer is an especially serious and deadly disease, normally signaled by small growths in the lungs called nodules, which arise when cells in the lung start multiplying uncontrollably. Finding these lung nodules, often by means of CT scans, is important for detecting lung cancer, and catching the disease early significantly improves the chances of effective treatment and survival. To improve lung cancer detection, this study introduces an automated method for finding nodules in CT images, called Enhanced Visual Geometry Group with SVM (EVGG-SVM). The method uses an improved version of the well-known VGG19 neural network combined with a Support Vector Machine (SVM) to classify nodules as either benign or cancerous. The proposed model was evaluated on the well-known LUNA16 dataset and demonstrated high levels of accuracy, sensitivity, and specificity. In comparison with other current techniques, the EVGG-SVM model demonstrated remarkable performance.</p>Zeeshan MubeenNazish RasheedMudassar RehmanSajid IqbalShehroz Zafar
Copyright (c) 2024 Journal of Computing & Biomedical Informatics
2024-09-012024-09-01702Enhancing User Experience: Cross-Platform Usability in Digital Banking Apps
https://jcbi.org/index.php/Main/article/view/543
<p>The world is a global village, and banking services play a vital role in managing trade in the modern world, yet intercommunication between different banking services remains a problem. In digital banking, ensuring uniform user experiences (UX) across diverse platforms is also a significant challenge. Through a review of existing literature on UX design principles, platform-specific considerations, and measurement methodologies, a structured framework is synthesized to assess the consistency and seamlessness of interactions within digital banking apps across web browsers, mobile devices, and desktop applications. The proposed Cross-Platform Usability Measurement (CPUM) model is a comprehensive measurement model tailored to evaluate cross-platform uniformity in digital banking applications. It includes both qualitative and quantitative metrics, focusing on key dimensions such as visual consistency, navigation fluidity, feature parity, and responsiveness. Leveraging principles from usability engineering, human-computer interaction, and cognitive psychology, the model provides a robust methodology for evaluating design strategies aimed at harmonizing UX across diverse platforms. Additionally, we offer practical guidelines and best practices for digital banking app developers and designers, highlighting how optimizing cross-platform consistency can enhance user satisfaction, trust, and engagement. By establishing a systematic approach to measuring and improving cross-platform uniformity, this work aims to empower stakeholders in the digital banking industry to deliver seamless interactions, ultimately fostering greater user loyalty and retention in a competitive market.</p>Hamza AbbasSehrish MunirMuhammad TahirMisbah NoorQaisra HoneyM. Ameer HamzaMuhammad Waseem Iqbal
Copyright (c) 2024 Journal of Computing & Biomedical Informatics
2024-09-012024-09-01702Enhancing Heart Disease Detection in Echocardiogram Images Using Optimized EfficientNetB3 Architecture
https://jcbi.org/index.php/Main/article/view/568
<p>Despite the progress in treatment options available for heart disease, it remains one of the most common causes of death worldwide, emphasizing the necessity of informative and effective diagnostic protocols. This paper proposes an efficient deep learning model, EfficientNetB3, to detect Critical Congenital Heart Disease (CCHD) from echocardiogram images and provide accurate predictions. EfficientNetB3 is fine-tuned specifically for heart disease diagnosis in echocardiogram images using techniques such as dropout, momentum, and batch normalization to improve the overall accuracy of the model. To refine the model architecture and hyperparameters, we carried out a series of experiments. The first experiment used a baseline model with pre-trained weights, achieving a test accuracy of 91.80%. Later improvements, such as the addition of dropout and momentum, increased this accuracy to 94.14%. Finally, the best model configuration, with fine-tuned dropout rates and dense-layer tweaks, yielded a test accuracy of 94.53%. Accuracy metrics, confusion matrices, and F1 scores were used to assess the performance of the model, which outperformed current implementations. This research suggests that the optimized EfficientNetB3 model can classify different heart diseases with high accuracy, offering a bright outlook for early and accurate diagnosis. The work is a significant addition to the field of computer-aided diagnostic systems for cardiology and promises to improve clinical practice by offering robust, timely diagnostic assistance in cases where expert echocardiogram interpretation resources may not be readily available. Moving forward, additional deep learning architectures and multi-modal data will be integrated to enhance diagnostic proficiency.</p>Muhammad MahtabZainab SadiqMuhammad RaoofSohail Masood Bhatti
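Of the regularization techniques mentioned, dropout is easy to show in isolation. The sketch below implements inverted dropout on a plain list of activations; the 0.5 rate is illustrative, not the paper's tuned value:

```python
import random

random.seed(42)

def dropout(activations, rate=0.5, training=True):
    """Inverted dropout: zero each unit with probability `rate` and scale the
    survivors by 1/(1-rate) so the expected activation is unchanged; at
    inference time the layer is the identity."""
    if not training or rate == 0.0:
        return list(activations)
    keep = 1.0 - rate
    return [a / keep if random.random() < keep else 0.0 for a in activations]

acts = [0.5, 1.2, 0.3, 0.9, 0.7, 1.1]
print(dropout(acts, rate=0.5))                   # roughly half zeroed, survivors doubled
print(dropout(acts, rate=0.5, training=False))   # unchanged at inference
```

Framework layers (e.g. in Keras or PyTorch) apply exactly this train/inference distinction automatically.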
Copyright (c) 2024 Journal of Computing & Biomedical Informatics
2024-09-012024-09-01702Diabetic Retinopathy Detection Using Machine Learning
https://jcbi.org/index.php/Main/article/view/611
<p>Diabetic retinopathy is a disease of the retina brought on by poorly controlled diabetes. In addition to damaging the retina, it causes irreversible damage to the human eye, so it must be detected early to prevent permanent vision loss. In this work, we consider the five stages of diabetic retinopathy, including a healthy retina, and use the APTOS 2019 Blindness Detection dataset to train our model. We implement multiple machine-learning approaches for forecasting the five stages. The proposed model predicts the stage of diabetic retinopathy and works best for South Asian people, because retinal images vary across the geographical locations of patients. The proposed system identifies only diabetic retinopathy and is not applicable to other retinal diseases. This project aims to classify the diabetic retinopathy stage using images of any size of the patient's retinal fundus: the patient provides the retinal fundus image, and the program converts the image into the required size and predicts the stage of diabetic retinopathy. Other researchers can also inspect the predictions of the different models used in this project.</p>Muhammad AizazSohail MasoodFawad Nasim
Copyright (c) 2024 Journal of Computing & Biomedical Informatics
2024-09-012024-09-01702Comprehensive Review on U-Net Architectures for Skin Lesion Segmentation and Its Variants
https://jcbi.org/index.php/Main/article/view/608
<p>Skin lesions, ranging from benign growths to malignant tumors like melanoma, pose significant diagnostic challenges because of their diverse morphological features. Early and accurate segmentation of these lesions from medical images is critical for effective diagnosis and treatment. Traditional image processing approaches, such as thresholding and edge detection, often fail to capture the complexity and variability of skin lesions. In contrast, deep learning methods, particularly Convolutional Neural Networks (CNNs), have modernized this field by providing robust solutions. The U-Net's encoder-decoder architecture with skip connections, for example, has shown significant success in delineating lesion boundaries accurately. This review systematically examines the current state of skin lesion segmentation using U-Net and its variants. It also highlights the performance of different models using metrics such as the Dice coefficient and the Jaccard index, and addresses challenges such as the need for extensive annotated datasets. Our findings suggest that deep learning models like U-Net significantly enhance the segmentation of skin lesions, improving diagnostic accuracy and clinical outcomes.</p>Nimra TanveerNimra TariqArooj AkramFaiza Shabbir
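The two evaluation metrics named in the review, the Dice coefficient and the Jaccard index, are straightforward to compute on binary masks; the flattened toy masks below are illustrative:

```python
def dice_coefficient(pred, target):
    """Dice = 2|A∩B| / (|A| + |B|) on flat binary masks."""
    inter = sum(p * t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    return 2 * inter / total if total else 1.0

def jaccard_index(pred, target):
    """Jaccard (IoU) = |A∩B| / |A∪B|; related to Dice by J = D / (2 - D)."""
    inter = sum(p * t for p, t in zip(pred, target))
    union = sum(p or t for p, t in zip(pred, target))
    return inter / union if union else 1.0

# 1 marks lesion pixels, 0 background (flattened toy masks)
pred   = [1, 1, 0, 1, 0, 0]
target = [1, 0, 0, 1, 1, 0]
print(dice_coefficient(pred, target))  # 2*2 / (3+3) = 0.666...
print(jaccard_index(pred, target))     # 2 / 4 = 0.5
```

Because the two metrics are monotonically related, papers often report both only for comparability with prior work.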
Copyright (c) 2024 Journal of Computing & Biomedical Informatics
2024-09-012024-09-01702Forensic Strategies for Revealing Memory Artifacts in IoT Devices
https://jcbi.org/index.php/Main/article/view/574
<p>RAM forensics plays an important role in the field of digital forensics, examining memory to identify signs of unauthorized or unusual activity within computer systems. The area has gained significant attention because it allows for the recovery of fleeting data that typically disappears when a system is powered down, thus helping investigators piece together the sequence of events that led to a security breach. Recent developments in memory forensics have focused on improving the methods used for acquiring and analyzing memory. This paper assesses the effectiveness of different memory forensic tools and techniques, particularly as applied to malware detection and the extraction of evidence. It concludes by proposing a framework aimed at enhancing memory forensic practices, addressing current shortcomings in the field, and outlining potential research avenues to strengthen memory analysis in increasingly complex digital landscapes.</p>Hafiz Ahmad MujtabaGohar MumtazMuhammad Haroon AhmadMudassar Rehman
Copyright (c) 2024 Journal of Computing & Biomedical Informatics
2024-09-012024-09-01702Optimizing Skin Disease Classification: Evaluating the Effectiveness of Hybrid CNN with Batch Normalization and L2 Regularization for Enhanced Accuracy
https://jcbi.org/index.php/Main/article/view/519
<p>Skin disease classification remains a significant challenge in medical image analysis, demanding robust models for accurate diagnosis. In this research paper, we meticulously explore and compare the performance of various neural network architectures on a curated skin disease dataset. Our study includes traditional models such as Multi-Layer Perceptrons (MLPs) with different numbers of hidden layers, a powerful deep architecture (ResNet), and the proposed hybrid Convolutional Neural Network (CNN) with batch normalization and L2 regularization. MLP classifiers with one and two hidden layers demonstrate promising F1 scores, sensitivity, and specificity, showcasing their effectiveness in skin disease classification tasks. The sequential neural network, a versatile architecture, exhibits commendable accuracy, precision, recall, and F1 score, highlighting its suitability for medical image analysis. Our standout performer is the proposed hybrid CNN model with batch normalization and L2 regularization, achieving an impressive accuracy of 97%. This hybrid architecture outclasses the other classifiers, emphasizing the impact of advanced techniques on model performance. Using various assessment criteria and diagrams, the paper offers crucial information about the advantages and disadvantages of each model, helping researchers and practitioners choose the optimal architecture for skin disease classification. These findings augment the ongoing study of medical image analysis using deep learning techniques and reemphasize the importance of model selection in diagnosing diseases accurately and comprehensively.</p>Abdul RasheedMuhammad AzamHafiz Muhammad AfzaalMuhammad AshrafAbid Ali Hashmi
Copyright (c) 2024 Journal of Computing & Biomedical Informatics
2024-09-012024-09-01702Cross-Domain Sentiment Analysis: A Multi-Task Learning Approach with Shared Representations
https://jcbi.org/index.php/Main/article/view/605
<p>This research evaluates the use of Multi-Task Learning (MTL) for sentiment analysis across various domains. The primary limitation of dominant sentiment classification models is that most of them are domain-specific: because of varied styles of language use, context sensitivity, and user behaviour, they fail to work across different domains. Since knowledge transfer and feature learning are the key steps in carrying learnt models from one domain to another, this study adopts an MTL-based approach to learn a shared representation that fosters better generalization of the model across domains. In this way, the proposed model establishes a mixture of domain-specific and common features for sentiment analysis across contextual domains, with only minor tuning of the model on each of them. Experiments show that the accuracy, precision, recall, and F1 scores of the MTL-shared model are superior to those of traditional single-domain and domain-adaptation models. The model also performs well cross-domain, as seen in evaluation on unseen domains, and hence can be a useful method in real-world circumstances where sentiment analysis must be conducted across diverse domains.</p>Kinza ParvaizMuhammad AzamFawad NasimShameen NoorKahkisha Ayub
Copyright (c) 2024 Journal of Computing & Biomedical Informatics
2024-09-01 2024-09-01 702 A Systematic Literature Review on Performance Evaluation of SQL and NoSQL Database Architectures
https://jcbi.org/index.php/Main/article/view/548
<p>Maintaining large volumes of data in SQL and NoSQL databases depends on the programming architecture. NoSQL databases excel at horizontal scalability and handling unstructured data, while SQL databases are designed for structured data organization and typically scale vertically. Choosing between SQL and NoSQL databases can be challenging due to differences in database design and hierarchical needs. NoSQL databases use a mixed-model strategy, which complicates data movement between cloud storage providers, as different platforms experiment with various concepts. This systematic literature review (SLR) analyzes papers on NoSQL and SQL database designs, interoperability, and cloud data portability. Recent research comparing Oracle RDBMS and MongoDB suggests that NoSQL databases are better suited to big data analytics because of their customized architectures, whereas SQL databases are more suitable for online transaction processing (OLTP).</p>Muqaddas Salahuddin, Samra Majeed, Sammia Hira, Gohar Mumtaz
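The relational/document trade-off the review discusses can be made concrete with Python's standard library alone: `sqlite3` as a stand-in for a fixed-schema SQL store, and plain dictionaries standing in for the schema-less documents a NoSQL store holds. The table, field names, and records below are invented for the sketch:

```python
import sqlite3

# Relational side: a fixed schema is declared up front, and queries use SQL.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE patients (id INTEGER PRIMARY KEY, name TEXT, age INTEGER)")
conn.executemany("INSERT INTO patients (name, age) VALUES (?, ?)",
                 [("Ali", 34), ("Sara", 29)])
rows = conn.execute("SELECT name FROM patients WHERE age > 30").fetchall()
conn.close()

# Document side: schema-less records; one document can carry fields the
# others lack, which is what makes unstructured data easy to store.
documents = [
    {"name": "Ali", "age": 34, "allergies": ["penicillin"]},  # extra field is fine
    {"name": "Sara", "age": 29},
]
doc_hits = [d["name"] for d in documents if d["age"] > 30]
```

Both sides answer the same query, but the relational store enforces the schema (and supports transactions, the OLTP strength), while the document store tolerates heterogeneous records, the flexibility that suits analytics over unstructured data.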
Copyright (c) 2024 Journal of Computing & Biomedical Informatics
2024-09-01 2024-09-01 702 Water Quality Assessment Through Predictive Modeling Employing Machine Learning Methods
https://jcbi.org/index.php/Main/article/view/607
<p>Declining water quality poses serious problems that demand creative methods for proper monitoring. To gather and evaluate water quality data in real time, this study presents the Water Quality Measurement Application, which integrates cutting-edge sensors with artificial intelligence (AI). The goal is to improve on existing techniques, which depend on labor-intensive and less accurate manual sampling and analysis, by creating a tool that is easily accessible to environmentalists and scientists. Current sensor-based systems are often limited in accuracy and unable to reliably forecast water quality problems. To address this, the application uses machine learning (ML) techniques to assess and forecast water quality conditions, while IoT sensors are integrated for continuous data collection. The Blynk IoT framework moves data safely to a cloud platform, guaranteeing accessibility and security. In identifying water quality characteristics, the Random Forest (RF) approach outperformed other machine learning models, including K-Nearest Neighbors (KNN), Gaussian Naive Bayes (GNB), and Multinomial Naive Bayes (MNB). Compared to conventional techniques, this yields more trustworthy forecasts. Future work will focus on improving the application by extending the range of observable metrics and adding user input, and ongoing research into ML algorithms also targets better accuracy. By providing a more sophisticated, automated method of understanding and regulating water health, this solution benefits environmental professionals as well as laboratories.</p>Mehak Afzal, Shujaat Ali, Hafiz Burhan Ul Haq, Rabia Younis, Hamid Ali, Amna Kosar, Hafiz Muneeb Akhtar
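A Random Forest is normally used through a library such as scikit-learn; as a self-contained illustration of the bagging idea behind it, here is a toy ensemble of one-level decision trees ("stumps"), each fit on a bootstrap sample and combined by majority vote. The synthetic data, features, and thresholds are invented for this sketch and have no connection to the paper's sensor data:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic readings: feature 0 is a contaminant level, feature 1 is unrelated
# noise; water is labeled "safe" (1) when contamination < 3 (invented rule).
X = np.column_stack([rng.uniform(0, 8, 200), rng.normal(size=200)])
y = (X[:, 0] < 3).astype(int)

def fit_stump(Xb, yb):
    """One-level decision tree: best (feature, threshold) split by training
    accuracy, with thresholds drawn from a few quantile candidates."""
    best = None
    for f in range(Xb.shape[1]):
        for t in np.quantile(Xb[:, f], [0.25, 0.5, 0.75]):
            left = Xb[:, f] <= t
            ll = int(yb[left].mean() > 0.5) if left.any() else 0
            rl = int(yb[~left].mean() > 0.5) if (~left).any() else 0
            acc = (np.where(left, ll, rl) == yb).mean()
            if best is None or acc > best[0]:
                best = (acc, f, t, ll, rl)
    return best[1:]

def predict_stump(stump, X):
    f, t, ll, rl = stump
    return np.where(X[:, f] <= t, ll, rl)

# Bagging: each stump sees its own bootstrap sample; majority vote combines them.
forest = []
for _ in range(25):
    idx = rng.integers(0, len(X), len(X))
    forest.append(fit_stump(X[idx], y[idx]))

votes = np.mean([predict_stump(s, X) for s in forest], axis=0)
accuracy = ((votes > 0.5).astype(int) == y).mean()
```

Real Random Forests grow full trees over random feature subsets, but the mechanism is the same: many weak, decorrelated learners averaged into one robust prediction, which is why RF tends to beat single models like KNN or Naive Bayes on noisy sensor data.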
Copyright (c) 2024 Journal of Computing & Biomedical Informatics
2024-09-01 2024-09-01 702 The Intelligence Spectrum: Unraveling the Path from ANI to ASI
https://jcbi.org/index.php/Main/article/view/779
<p>The transformation of natural intelligence into artificial intelligence (AI) has generated significant excitement regarding its current and future impact. AI’s integration into various domains has established its presence in nearly all aspects of life, raising expectations that span narrow AI, general AI, and even super AI. This paper provides a comprehensive analysis of AI’s current achievements and future prospects by examining the journey from natural intelligence to AI. It explores the foundational principles of natural intelligence, focusing on human information processing, reasoning, and learning. The paper then traces the development of AI models, i.e., machine learning models, neural networks, and advanced algorithms that emulate and enhance human cognitive abilities. Looking ahead, it speculates on the future of intelligence, particularly the potential emergence of Artificial General Intelligence (AGI) with human-like cognitive capabilities across diverse domains. The synthesis of natural and artificial intelligence presents both opportunities and challenges, necessitating careful consideration, ethical deliberation, and collaborative effort to ensure that intelligent systems benefit humanity responsibly. The author also proposes hypotheses based on philosophical and religious beliefs to enhance AI performance. Ultimately, the paper envisions a limited but expanding role for AI within a defined scope.</p>Sajid Iqbal
Copyright (c) 2024 Journal of Computing & Biomedical Informatics
2024-09-01 2024-09-01 702