Global Journal of Multidisciplinary and Applied Sciences http://gjmas.com/index.php/gjmas <p>The Global Journal of Multidisciplinary and Applied Sciences (ISSN: 2313-6685), a peer-reviewed, open-access international scientific journal, is dedicated to the monthly publication of high-quality research and review articles encompassing a variety of topics related to applied science – a discipline which makes vital contributions to technology development. The journal publishes research papers in the fields of science and technology such as Astronomy and astrophysics, Chemistry, Earth and atmospheric sciences, Physics, Biology in general, Agriculture, Biophysics and biochemistry, Botany, Environmental Science, Forestry, Genetics, Horticulture, Husbandry, Neuroscience, Zoology, Computer science, Engineering, Robotics and Automation, Materials science, Mathematics, Mechanics, Statistics, Health Care &amp; Public Health, Nutrition and Food Science, Pharmaceutical Sciences, and so on.</p> Science Central Publications en-US Global Journal of Multidisciplinary and Applied Sciences 2313-6685 Transfer Learning and Generative Modeling for Low-Resource Language Processing: Recent Advances http://gjmas.com/index.php/gjmas/article/view/120 <p>The rapid evolution of natural language processing has predominantly benefited a small subset of the world's languages, leaving the vast majority underrepresented in the digital era. This paper provides a comprehensive analysis of recent advancements in addressing this linguistic inequality through the dual lenses of transfer learning and generative modeling. We systematically explore how cross-lingual transfer mechanisms enable the projection of learned representations from resource-rich domains to low-resource targets, mitigating the fundamental challenge of data sparsity.
Furthermore, we investigate the paradigm shift introduced by large generative models, which possess unprecedented capabilities for synthetic data augmentation, zero-shot inference, and few-shot adaptation. By synthesizing theoretical frameworks and empirical observations, we evaluate the efficacy of parameter-efficient fine-tuning techniques, typologically informed transfer strategies, and prompt-based learning methodologies. Our analysis highlights the intersection of linguistic typology and machine learning architectures, demonstrating that structural similarities between source and target languages significantly dictate the success of representation alignment. Finally, we address the critical limitations inherent in current approaches, including the amplification of algorithmic bias, the phenomenon of negative transfer, and the challenges associated with the subword tokenization of morphologically rich languages. The insights presented herein aim to guide future research toward more equitable and robust multilingual systems.</p> Jonathan A. Smith, Emily R. Davis Copyright (c) 2026 Global Journal of Multidisciplinary and Applied Sciences https://creativecommons.org/licenses/by/4.0 2026-04-15 2026-04-15 4 01 1 13 An Integrated Framework for Branch Detection and Depth Estimation in UAV Stereo Vision for Forestry Pruning http://gjmas.com/index.php/gjmas/article/view/126 <p>The automation of forestry management practices, particularly selective branch pruning, represents a significant challenge in modern silviculture. Manual pruning is labor-intensive, time-consuming, and presents considerable safety risks to human operators. While Unmanned Aerial Vehicles have been extensively deployed for passive remote sensing and canopy analysis, their application in active physical interaction tasks such as pruning remains limited by the complexities of aerial manipulation in unstructured environments.
A critical prerequisite for autonomous aerial pruning is the precise visual identification and spatial localization of target branches. This paper proposes a comprehensive and integrated framework that seamlessly combines deep learning-based semantic segmentation for robust branch detection with binocular stereo vision for high-accuracy depth estimation. The proposed system is designed to operate onboard a resource-constrained Unmanned Aerial Vehicle, processing complex canopy imagery to output isolated three-dimensional branch coordinates suitable for guiding a robotic pruning effector. By integrating a lightweight convolutional neural network with a highly optimized semi-global stereo matching algorithm, the framework addresses the inherent challenges of dynamic lighting, heavy visual occlusion, and background clutter characteristic of forest environments. Extensive field experiments and mock-up trials demonstrate the efficacy of the proposed pipeline. The semantic segmentation module achieves high pixel-wise accuracy in isolating branch structures from surrounding foliage, while the stereo vision component provides reliable depth maps with a minimal margin of error. The synthesized spatial data allows for the accurate extraction of branch cutting points.
This research contributes a crucial foundational technology toward the realization of fully autonomous aerial forestry tools, bridging the gap between passive observation and active robotic intervention in complex natural landscapes.</p> Sofia Keller, Sofia Novak, Theodore Abernathy Copyright (c) 2026 Global Journal of Multidisciplinary and Applied Sciences 2026-04-01 2026-04-01 4 01 60 70 Transformer-Based Spatial-Temporal Models for Comprehensive Scene Understanding, Object Tracking, and Autonomous Decision Support http://gjmas.com/index.php/gjmas/article/view/124 <p>The integration of scene understanding, object tracking, and decision support into a singular computational framework remains a formidable challenge in autonomous systems. Traditional approaches have relied on disjointed pipelines where convolutional neural networks process spatial features, recursive algorithms manage temporal tracking, and isolated heuristic models handle downstream decision making. Such fragmentation inherently introduces cascading errors, latency, and suboptimal context sharing. In this paper, we propose a unified Transformer-based architecture designed to concurrently process spatial-temporal representations for holistic scene understanding, continuous target tracking, and proactive decision support. By leveraging self-attention mechanisms across both spatial dimensions and temporal frames, the proposed model efficiently constructs global contextual dependencies without the restricted receptive fields characteristic of conventional convolutions. Our methodology incorporates a multi-head prediction module that projects shared latent embeddings into semantic segmentation masks, object bounding boxes, and action policy probabilities.
We conduct extensive empirical evaluations on standard large-scale driving datasets, demonstrating that our integrated spatiotemporal Transformer significantly reduces inference latency while achieving superior quantitative metrics across all three domains compared to state-of-the-art disjointed architectures. The findings underscore the efficacy of global representation learning in complex dynamic environments and provide a robust foundation for the next generation of autonomous robotic and vehicular control systems.</p> Amelia Paredes, Eleanor Sterling Copyright (c) 2026 Global Journal of Multidisciplinary and Applied Sciences 2026-04-05 2026-04-05 4 01 37 48 Methods for Enhancing Factuality of Large Language Models via Retrieval-Augmented Mechanisms http://gjmas.com/index.php/gjmas/article/view/122 <p>The rapid proliferation of large language models has fundamentally transformed the landscape of natural language processing, enabling unprecedented capabilities in text generation, summarization, and interactive dialogue. However, a persistent and critical limitation of these generative architectures is their propensity to produce factually incorrect or unverified information, a phenomenon widely characterized as hallucination. This paper presents a comprehensive investigation into methods for mitigating hallucinatory behaviors and enhancing the factuality of large language models through the implementation of advanced retrieval-augmented mechanisms. By dynamically decoupling the parametric memory of the neural network from a non-parametric, externally updatable knowledge base, retrieval-augmented generation paradigms offer a robust solution to the limitations of static pre-training. We provide a deep architectural analysis of the integration between dense passage retrieval systems and autoregressive generation processes. Furthermore, we propose a novel contextual attention mechanism designed to optimize the semantic fusion of retrieved documents with user prompts. 
Through extensive empirical evaluations on standard knowledge-intensive datasets, we demonstrate that our refined retrieval-augmented framework significantly outperforms conventional parametric baselines and standard heuristic retrieval approaches. The results indicate substantial improvements in exact match metrics and a dramatic reduction in hallucination rates. This research elucidates the theoretical underpinnings of factuality in generative models and establishes a scalable, algorithmically efficient framework for deploying highly reliable artificial intelligence systems in mission-critical applications.</p> Julian Sterling, Amelia Bennett, Clara Westwood Copyright (c) 2026 Global Journal of Multidisciplinary and Applied Sciences 2026-04-10 2026-04-10 4 01 14 25 Multi-Source Data Fusion for Perception in Agricultural and Forestry Scenarios: A Comprehensive Analysis http://gjmas.com/index.php/gjmas/article/view/125 <p>The automation of agricultural and forestry operations relies fundamentally on the capacity of autonomous systems to perceive and interpret highly unstructured, dynamic, and complex environments. Traditional perception systems relying on single-modality sensors, such as standalone optical cameras or isolated light detection and ranging systems, frequently encounter severe performance degradation when subjected to the harsh realities of these domains. These challenges include variable illumination, severe occlusion by dense foliage, atmospheric disturbances like dust and fog, and irregular terrain topologies. This paper provides a comprehensive analysis of multi-source data fusion methodologies tailored specifically for agricultural and forestry scenarios. By synergistically integrating data from vision sensors, light detection and ranging, and millimeter-wave radar, autonomous platforms can achieve a level of robust situational awareness previously unattainable. 
The research explores the underlying principles of spatial and temporal calibration across heterogeneous sensor suites and details advanced preprocessing techniques necessary for aligning disparate data modalities. Furthermore, the study evaluates hierarchical fusion architectures, encompassing data-level, feature-level, and decision-level integration strategies. The findings indicate that feature-level fusion, particularly when facilitated by deep learning frameworks such as cross-modality attention mechanisms, yields significant improvements in obstacle detection, terrain mapping, and crop phenotyping accuracy under degraded environmental conditions. Ultimately, this comprehensive review and analysis aim to establish a foundational framework for future developments in resilient autonomous perception systems across complex biological terrains.</p> Noah Rossi, Sofia Bennett Copyright (c) 2026 Global Journal of Multidisciplinary and Applied Sciences https://creativecommons.org/licenses/by/4.0 2026-04-15 2026-04-15 4 01 49 59 A Unified Framework for Deep Reconstruction Enhancement and Anomaly Detection http://gjmas.com/index.php/gjmas/article/view/123 <p>Anomaly detection in high-dimensional data streams remains a fundamental challenge in computer science, particularly when deploying robust machine learning systems in unpredictable real-world environments. Traditional unsupervised methods often struggle with a pervasive trade-off between accurately reconstructing normal data patterns and inadvertently over-reconstructing anomalous instances, which fundamentally degrades the distinctiveness of the anomaly score. In this paper, we propose a comprehensive unified framework for deep reconstruction enhancement and anomaly detection that mitigates these pathological memorization effects while preserving high fidelity for in-distribution representations.
Our architecture introduces a novel dual-pathway feature enhancement module integrated with a multi-scale autoencoding backbone, which structurally constrains the latent space manifold to isolate and amplify reconstruction errors specifically for anomalous perturbations. By explicitly formulating a joint optimization objective that simultaneously maximizes representation quality for normal instances and enforces tight bounding around the nominal manifold, our approach achieves exceptional discriminative power. We conduct extensive empirical evaluations across multiple complex domains, demonstrating superior performance in standard metrics such as the area under the receiver operating characteristic curve. The proposed system effectively bridges the gap between generative fidelity and diagnostic sensitivity, establishing a new operational standard for automated defect detection, network intrusion monitoring, and medical image screening.</p> Amelia O'Donnell, Clara Simmons, Marcus Vance Copyright (c) 2026 Global Journal of Multidisciplinary and Applied Sciences 2026-04-15 2026-04-15 4 01 26 36