Journal of Computing & Biomedical Informatics (JCBI)
https://jcbi.org/index.php/Main
Language: en-US | ISSN: 2710-1606

Journal of Computing & Biomedical Informatics (JCBI) is a peer-reviewed, open-access journal recognised by the Higher Education Commission (HEC), Pakistan. JCBI publishes high-quality scholarly articles reporting substantive results on a wide range of learning methods applied to a variety of learning problems. All submitted articles should report original, previously unpublished research results, experimental or theoretical. Articles submitted to the journal should meet these criteria and must not be under consideration for publication elsewhere. Manuscripts should follow the style of the journal and are subject to both review and editing. JCBI encourages authors of original research papers to describe work such as the following:

- Articles in the areas of computational approaches, artificial intelligence, big data, software engineering, cybersecurity, the Internet of Things, and data analysis.
- Articles reporting substantive results on a wide range of learning methods applied to a variety of learning problems.
- Articles providing solid support via empirical studies, theoretical analysis, or comparison to psychological phenomena.
- Articles responding to a need in medicine, or analysing rare data with novel methods.
- Articles involving healthcare, for which a description of the professionals' motivation for the work and evaluation results is usually necessary.
- Articles showing how to apply learning methods to solve important application problems.

JCBI also welcomes submissions from the interdisciplinary field that studies and pursues the effective use of computational and biomedical data, information, and knowledge for scientific inquiry, problem-solving, and decision-making, motivated by efforts to improve human health. Novel high-performance computing methods, big-data analysis, and artificial intelligence that advance material technologies are especially welcome.

Articles are published as open access by the Research Center of Computing & Biomedical Informatics (RCBI), Lahore, Pakistan, under the CC BY 4.0 International License (http://creativecommons.org/licenses/by/4.0).

Confidence-Calibrated Dual-Branch Detection of Oral Cancer from Tongue and Lips Images
https://jcbi.org/index.php/Main/article/view/1211

This study introduces a confidence-calibrated, dual-branch framework for detecting oral cancer early from photographs of the tongue and lips. A lightweight texture branch (MLBP/HOG) preserves micro-texture, a global CNN encodes colour and shape context, and an attention gate fuses the two branches per image. Since pixel-level annotations are unavailable, we guide the model's attention using CAM-consistency regularization to improve lesion localization under weakly supervised training. Cross-site robustness is improved through domain-adversarial alignment, and probability outputs are calibrated through temperature scaling. Under stratified evaluation on the Oral Cancer (Lips & Tongue) dataset, the model achieves a Brier score of 0.092, Accuracy of 0.892, Macro-F1 of 0.883, AUROC of 0.912, and AUPRC of 0.884, with ECE reduced from 0.067 to 0.031 after calibration. Low post-calibration ECE (0.029/0.033) and high site-wise performance (Lips AUROC 0.922; Tongue 0.902) are maintained. Ablation shows cumulative benefits from combining the texture branch, CAM-consistency, and domain alignment: the full model performs best with minimal compute overhead compared to a baseline CNN (AUROC 0.872; AUPRC 0.834; ECE 0.050). In terms of clinical utility, a threshold of θ* = 0.50 yields Sensitivity of 0.892, Specificity of 0.852, PPV of 0.846, NPV of 0.897, Coverage of 87.2%, and a Referral rate of 12.8%. Calibrated probabilities and CAM overlays support trustworthy triage, and robustness to site variability encourages real-world deployment on cloud or mobile platforms. The results indicate that practical, reliable photo-based oral-cancer screening relies on complementary features, targeted regularization, and explicit calibration.

V. Gokula Krishnan, Arvind Kumar Tiwari, M. Sumithra, G. Mahalakshmi, N. Subhash Chandra, M. Ganesan | Copyright (c) 2026 Journal of Computing & Biomedical Informatics | Published: 2026-02-18 | Vol. 10, No. 02 | DOI: 10.56979/1002/2026/1211
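The abstract above reports ECE dropping from 0.067 to 0.031 after temperature scaling. As a rough illustration of that calibration step (not the authors' implementation; `val_logits` and `val_labels` are hypothetical held-out validation tensors), a single temperature T can be fitted by minimizing negative log-likelihood on validation data and then applied to test-time logits:

```python
# Minimal sketch of post-hoc temperature scaling; an assumed reconstruction,
# not the authors' code. `val_logits` / `val_labels` are hypothetical validation data.
import torch
import torch.nn.functional as F

def fit_temperature(val_logits: torch.Tensor, val_labels: torch.Tensor) -> float:
    """Find the scalar T > 0 minimizing NLL of softmax(logits / T) on validation data."""
    log_t = torch.zeros(1, requires_grad=True)  # optimize log(T) so T stays positive
    optimizer = torch.optim.LBFGS([log_t], lr=0.1, max_iter=50)

    def closure():
        optimizer.zero_grad()
        loss = F.cross_entropy(val_logits / log_t.exp(), val_labels)
        loss.backward()
        return loss

    optimizer.step(closure)
    return log_t.exp().item()

# Usage: divide test-time logits by the fitted temperature before softmax.
# T = fit_temperature(val_logits, val_labels)
# calibrated_probs = F.softmax(test_logits / T, dim=1)
```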
A Robust Explainable Deep Learning Ensemble for Early Skin Cancer Diagnosis
https://jcbi.org/index.php/Main/article/view/1214

Skin cancer is one of the most common malignancies worldwide, and detecting it at an early stage is crucial for improving patient outcomes. This study introduces a hybrid deep learning framework that combines self-supervised pretraining, multi-architecture ensemble learning, and explainable AI to enable accurate and interpretable skin cancer diagnosis. The framework uses SimCLR-based contrastive learning to generate powerful feature representations from large sets of unlabeled dermatoscopic images before applying supervised fine-tuning and feature-level fusion across three architectures (EfficientNetV2-L, Swin Transformer, and ConvNeXt). A meta-learning classifier based on LightGBM is built on the features derived from the different architectures, and explainability is provided through the Grad-CAM and SHAP methods. Experiments on benchmark datasets (ISIC and HAM10000) show that the proposed method outperforms previously established baselines by a wide margin, achieving 94.5% accuracy, 92.55% precision, and 93.26% recall, evidence of its robustness, high sensitivity, and reliability for the early detection of skin cancer.

Hammad Ali, Muhammad Rizwan, Rahsid Rana, Abdul Sami | Copyright (c) 2026 Journal of Computing & Biomedical Informatics | Published: 2026-02-18 | Vol. 10, No. 02 | DOI: 10.56979/1002/2026/1214
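The stacking stage described in this abstract (feature-level fusion followed by a LightGBM meta-classifier) could be sketched roughly as below. This is an assumed illustration, not the authors' code; `feats_effnet`, `feats_swin`, and `feats_convnext` are hypothetical arrays of pooled embeddings already extracted from the three fine-tuned backbones, and the hyperparameters are placeholders.

```python
# Illustrative sketch of feature-level fusion + LightGBM meta-classification
# (an assumed reconstruction, not the authors' implementation).
import numpy as np
from lightgbm import LGBMClassifier

def fuse_and_classify(feats_effnet, feats_swin, feats_convnext, labels):
    # Feature-level fusion: concatenate per-image embeddings from each backbone.
    fused = np.concatenate([feats_effnet, feats_swin, feats_convnext], axis=1)
    meta = LGBMClassifier(n_estimators=500, learning_rate=0.05)  # hyperparameters assumed
    meta.fit(fused, labels)
    return meta

# SHAP explanations for the meta-classifier could then be obtained with
# shap.TreeExplainer(meta), alongside Grad-CAM maps from the CNN backbones.
```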
Primary User Detection in Cognitive Radios: Challenges, Techniques, and Emerging Solutions
https://jcbi.org/index.php/Main/article/view/1195

Cognitive Radio Networks (CRNs) address spectrum scarcity through intelligent spectrum management, enabling dynamic spectrum access for secondary users. However, traditional spectrum sensing techniques struggle with noise sensitivity and unstable Primary User (PU) dynamics, particularly in low Signal-to-Noise Ratio (SNR) environments. This paper proposes an Attention-based Deep Cognitive Network (ADCN) to improve PU detection under noisy and dynamic conditions. The architecture integrates convolutional layers for spatial feature extraction, Long Short-Term Memory (LSTM) networks for temporal dependency modeling, and a self-attention mechanism that dynamically prioritizes critical time-frequency characteristics. The model is trained and tested on the CSRD2025 dataset at SNR levels ranging from -20 dB to 10 dB. Experimental results show that ADCN attains a bit error rate of 0.12 at -20 dB, considerably better than Energy Detection (0.60) and Matched Filter Detection (0.30). The model also yields lower false-alarm rates and higher detection rates, and adapts to varying patterns of PU activity. These results indicate that ADCN is a powerful and efficient solution for next-generation CRNs, supporting spectrum optimization and operation in low-SNR settings.

Shraddha Nitin Magdum, Tanuja Satish Dhope Shendkar | Copyright (c) 2026 Journal of Computing & Biomedical Informatics | Published: 2026-02-18 | Vol. 10, No. 02 | DOI: 10.56979/1002/2026/1195
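The convolution-then-LSTM-then-self-attention layering named in the ADCN abstract above can be outlined as follows. This is a minimal assumed sketch for illustration, not the authors' architecture; the input shape, layer sizes, and two-class head are placeholders.

```python
# Sketch of an ADCN-style detector: conv front-end -> LSTM -> self-attention -> classifier.
# An assumed reconstruction for illustration only.
import torch
import torch.nn as nn

class ADCNSketch(nn.Module):
    def __init__(self, in_channels=2, hidden=64):
        super().__init__()
        # Convolutional front-end: local spectral/spatial feature extraction.
        self.conv = nn.Sequential(
            nn.Conv1d(in_channels, hidden, kernel_size=7, padding=3), nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=5, padding=2), nn.ReLU(),
        )
        # LSTM: temporal dependency modeling over the convolutional features.
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
        # Self-attention: re-weight time steps by their importance.
        self.attn = nn.MultiheadAttention(hidden, num_heads=4, batch_first=True)
        self.head = nn.Linear(hidden, 2)  # PU present / absent

    def forward(self, x):                     # x: (batch, channels, time), e.g. I/Q samples
        h = self.conv(x).transpose(1, 2)      # -> (batch, time, hidden)
        h, _ = self.lstm(h)
        h, _ = self.attn(h, h, h)
        return self.head(h.mean(dim=1))       # pooled logits over the time axis
```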
Synergistic Fusion of Clinical Interview EEG and Video for Depression Detection: A Cross-Modal Attention Approach
https://jcbi.org/index.php/Main/article/view/1222

Objective quantification of Major Depressive Disorder (MDD) remains a substantial clinical challenge due to the inherent subjectivity of traditional diagnostic interviews. This paper presents a novel multimodal deep learning framework that synergistically integrates neurophysiological signals and behavioural cues for automated depression detection. Utilizing the Multi-modal Open Dataset for Mental-disorder Analysis (MODMA), we analyze synchronized 128-channel EEG and video recordings obtained during professional clinical assessments. Our architecture employs a dual-stream approach: a Graph Convolutional Network (GCN) combined with a Long Short-Term Memory (LSTM) network to capture the spatiotemporal dynamics of brain activity, and a 3D Convolutional Neural Network (3D-CNN) with a temporal attention mechanism to extract behavioral markers from facial expressions. A cross-modal attention module fuses these modalities, allowing the model to learn the complex interdependencies between neural states and overt behavior (an illustrative sketch of such a fusion block appears at the end of this listing). To ensure clinical generalizability and prevent data leakage, the framework was evaluated using a strict subject-independent 10-fold cross-validation scheme. Experimental results demonstrate state-of-the-art performance, achieving an Accuracy of 92.1% and an F1-Score of 92.5%. These findings suggest that the proposed multimodal integration offers a powerful and objective tool for mental health screening, enhancing diagnostic precision through the fusion of brain and behavioral biomarkers.

Janaswami Hymavathi, Chokka Anuradha | Copyright (c) 2026 Journal of Computing & Biomedical Informatics | Published: 2026-02-20 | Vol. 10, No. 02 | DOI: 10.56979/1002/2026/1222

Systematic Literature Review on Computational Models Used For Sign Language Recognition
https://jcbi.org/index.php/Main/article/view/1140

Sign Language Recognition (SLR) is a popular research area, yet it remains under-explored because of its complex nature and resource limitations. This review studies approaches to developing SLR in which automatic sign-language recognition systems have been proposed, providing a comprehensive survey of studies and working models from 2015 to 2025. In total, 60 studies with differing methodologies are reviewed in this systematic literature review. American Sign Language (ASL) is found to be one of the most commonly used datasets across studies. The MediaPipe Holistic model, Long Short-Term Memory (LSTM), Convolutional Neural Network (CNN), Artificial Neural Network (ANN), and Support Vector Machine (SVM) are among the techniques most frequently emphasized. Our work presents a comprehensive taxonomy of approaches and establishes a timeline of the approaches emphasized in the literature, guiding suggestions for which approaches to follow in future work. We also identify the datasets most frequently used and processed in the literature, as well as the regions of focus. As a contribution to SLR, this systematic literature review presents a state-of-the-art survey exploring multiple dimensions of the field and should serve future research.

Mohsin Sami, Rabia Tehseen, Uzma Omer, Muhammad Farrukh Khan, Shahan Yamin Siddiqui, Nabeel Sabir Khan, Danish Ali Khan | Copyright (c) 2026 Journal of Computing & Biomedical Informatics | Published: 2026-02-18 | Vol. 10, No. 02 | DOI: 10.56979/1002/2026/1140
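As referenced in the depression-detection abstract above, cross-modal attention lets each modality query the other's features before classification. The following is a minimal assumed sketch of such a fusion block, not the authors' model; the feature dimensions, mean pooling, and two-class head are placeholders, and the EEG/video inputs are presumed to come from GCN-LSTM and 3D-CNN encoders respectively.

```python
# Illustrative cross-modal attention fusion: EEG features attend to video features
# and vice versa, then pooled representations are concatenated for classification.
# An assumed reconstruction for illustration only.
import torch
import torch.nn as nn

class CrossModalFusion(nn.Module):
    def __init__(self, dim=128, heads=4):
        super().__init__()
        self.eeg_to_video = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.video_to_eeg = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.classifier = nn.Linear(2 * dim, 2)  # depressed / control

    def forward(self, eeg_feats, video_feats):
        # eeg_feats:   (batch, T_eeg, dim), e.g. from a GCN-LSTM encoder
        # video_feats: (batch, T_vid, dim), e.g. from a 3D-CNN encoder
        eeg_ctx, _ = self.eeg_to_video(eeg_feats, video_feats, video_feats)
        vid_ctx, _ = self.video_to_eeg(video_feats, eeg_feats, eeg_feats)
        fused = torch.cat([eeg_ctx.mean(dim=1), vid_ctx.mean(dim=1)], dim=-1)
        return self.classifier(fused)
```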