Med-Scout: Curing MLLMs' Geometric Blindness in Medical Perception via Geometry-Aware RL Post-Training

Anglin Liu1, Ruichao Chen2, Yi Lu1, Hongxia Xu2, Jintai Chen1,3*
1The Hong Kong University of Science and Technology (Guangzhou)
2Zhejiang University
3The Hong Kong University of Science and Technology
*Corresponding author

Abstract

Despite the linguistic prowess of recent Multimodal Large Language Models (MLLMs) in medical diagnosis, we find that even state-of-the-art MLLMs suffer from a critical perceptual deficit: geometric blindness. This failure to ground outputs in objective geometric constraints, rooted in training paradigms that prioritize linguistic fluency over geometric fidelity, leads to plausible yet factually incorrect hallucinations.

This paper introduces Med-Scout, a novel framework that “cures” this blindness via Reinforcement Learning (RL) that leverages the intrinsic geometric logic latent within unlabeled medical images. Instead of relying on costly expert annotations, Med-Scout derives verifiable supervision signals through three strategic proxy tasks: Hierarchical Scale Localization, Topological Jigsaw Reconstruction, and Anomaly Consistency Detection. To rigorously quantify this deficit, we present Med-Scout-Bench, a new benchmark specifically designed to evaluate geometric perception. Extensive evaluations show that Med-Scout significantly mitigates geometric blindness, outperforming leading proprietary and open-source MLLMs by over 40% on our benchmark. Furthermore, this enhanced geometric perception generalizes to broader medical understanding, achieving superior results on radiological and comprehensive medical VQA tasks.

Pilot Study: Defining Geometric Blindness

Our preliminary investigation reveals three core dimensions of geometric failure in medical MLLMs (an illustrative probing sketch follows the list):

  • Scale Inconsistency: Performance drops significantly when spatial scale changes, with models failing to recognize the same lesions in global views compared to local crops.
  • Topological Blindness: Models rely on rigid semantic priors rather than reasoning about actual anatomical layouts, failing to adapt descriptions when images are inverted.
  • Structural Insensitivity: A striking blindness to pixel-level anomalies, where models overlook obvious artificial perturbations and produce standard "hallucinated" reports.
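To make these failure modes concrete, the sketch below shows how such probes could be constructed from a single unlabeled image. This is a minimal illustration rather than the released Med-Scout pipeline; the crop box, the flip axis, and the black-square perturbation are assumed choices for demonstration.

```python
# Illustrative construction of the three geometric probes (sketch only;
# crop box, flip axis, and patch size are assumptions, not the paper's code).
from PIL import Image
import numpy as np


def scale_probe(img: Image.Image, box=(0.3, 0.3, 0.7, 0.7)):
    """Scale Inconsistency: pair the global view with an upscaled local crop;
    a geometry-aware model should identify the same lesion in both views."""
    w, h = img.size
    l, t, r, b = int(box[0] * w), int(box[1] * h), int(box[2] * w), int(box[3] * h)
    return img, img.crop((l, t, r, b)).resize((w, h))


def topology_probe(img: Image.Image):
    """Topological Blindness: vertically invert the image; descriptions of the
    anatomical layout should adapt instead of repeating semantic priors."""
    return img.transpose(Image.FLIP_TOP_BOTTOM)


def anomaly_probe(img: Image.Image, patch_frac: float = 0.1):
    """Structural Insensitivity: inject an obvious artificial perturbation
    (a black square) that a faithful report must not overlook."""
    arr = np.array(img).copy()
    h, w = arr.shape[:2]
    ph, pw = max(1, int(h * patch_frac)), max(1, int(w * patch_frac))
    arr[h // 2 : h // 2 + ph, w // 2 : w // 2 + pw] = 0
    return Image.fromarray(arr)
```

Each probe yields an image (or image pair) whose correct description is fully determined by geometry, so any mismatch in the model's answer exposes the corresponding failure mode without requiring expert annotations.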

Med-Scout Framework & Reward Design

Med-Scout utilizes Group Relative Policy Optimization (GRPO) with a specialized Dense Geometric Reward (DGR) mechanism. Unlike sparse binary feedback, DGR provides continuous guidance by calculating the degree of geometric deviation.
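The exact reward formula is not reproduced in this summary, so the following is a hedged sketch of how a dense, continuous geometric reward and GRPO-style group-relative advantages could be implemented; the IoU-based deviation score, the group size, and the normalization constant are illustrative assumptions rather than Med-Scout's exact design.

```python
# Sketch of a Dense Geometric Reward (DGR) with GRPO group-relative advantages.
# The IoU-based reward and normalization details are assumptions for illustration.
import numpy as np


def dense_geometric_reward(pred_box, ref_box):
    """Continuous reward in [0, 1]: instead of a binary correct/incorrect signal,
    score the degree of geometric deviation, here via IoU of (x1, y1, x2, y2) boxes."""
    x1, y1 = max(pred_box[0], ref_box[0]), max(pred_box[1], ref_box[1])
    x2, y2 = min(pred_box[2], ref_box[2]), min(pred_box[3], ref_box[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda b: (b[2] - b[0]) * (b[3] - b[1])
    union = area(pred_box) + area(ref_box) - inter
    return inter / union if union > 0 else 0.0


def grpo_advantages(rewards, eps=1e-6):
    """GRPO scores each rollout relative to its group: subtract the group mean
    and divide by the group standard deviation, avoiding a learned critic."""
    r = np.asarray(rewards, dtype=np.float64)
    return (r - r.mean()) / (r.std() + eps)


# Example: four sampled answers to one localization query, scored against the
# same verifiable geometric target derived from the image itself.
target = (10, 10, 60, 60)
rollouts = [(10, 10, 60, 60), (20, 20, 70, 70), (0, 0, 30, 30), (50, 50, 90, 90)]
advantages = grpo_advantages([dense_geometric_reward(b, target) for b in rollouts])
```

Because the reward varies smoothly with the degree of deviation, partially correct rollouts still receive an informative learning signal, which is the intended contrast with sparse binary feedback.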

Benchmark & Empirical Evaluation

Benchmark Scaling

Quantitative Superiority

Table 1. Performance comparison with SOTA MLLMs across radiological and general medical VQA benchmarks. Rad-VQA represents RadImageNet-VQA. All accuracy metrics are scaled by 100.

| Model | Rad-VQA | VQA-RAD | SLAKE | PMC-VQA | OmniMedVQA | MedXpertQA |
|---|---|---|---|---|---|---|
| Proprietary Models | | | | | | |
| GPT-5 | 59.1 | 66.4 | 73.9 | 57.7 | 76.9 | 54.8 |
| Gemini-3-Flash | 60.7 | 70.2 | 76.1 | 58.1 | 75.3 | 56.0 |
| General-purpose MLLMs | | | | | | |
| InternVL3-8B | 58.4 | 65.6 | 72.9 | 52.0 | 78.2 | 22.4 |
| Qwen2.5-VL-3B-Instruct | 54.1 | 60.2 | 63.5 | 50.2 | 61.5 | 24.3 |
| Qwen2.5-VL-7B-Instruct | 55.7 | 65.3 | 67.9 | 51.8 | 63.8 | 21.9 |
| Qwen3-VL-4B-Instruct | 41.5 | 59.9 | 73.4 | 42.8 | 45.5 | 27.0 |
| + Med-Scout | 45.7 (↑4.2) | 62.9 (↑3.0) | 75.6 (↑2.2) | 45.1 (↑2.3) | 48.8 (↑3.3) | 27.7 (↑0.7) |
| Qwen3-VL-8B-Instruct | 41.6 | 63.2 | 69.6 | 43.9 | 42.9 | 30.4 |
| + Med-Scout | 45.3 (↑3.7) | 65.8 (↑2.6) | 72.0 (↑2.4) | 45.5 (↑1.6) | 46.0 (↑3.1) | 30.8 (↑0.4) |
| Medical MLLMs | | | | | | |
| LLaVA-Med-7B | 44.3 | 50.6 | 50.1 | 32.4 | 46.8 | 19.9 |
| MedGemma-4B-IT | 49.8 | 70.8 | 77.9 | 48.7 | 70.3 | 22.0 |
| HuatuoGPT-Vision-7B | 48.8 | 67.0 | 67.8 | 53.0 | 75.0 | 22.4 |
| + Med-Scout | 52.1 (↑3.3) | 70.1 (↑3.1) | 71.0 (↑3.2) | 55.9 (↑2.9) | 75.4 (↑0.4) | 22.7 (↑0.3) |
| Lingshu-7B | 61.2 | 68.9 | 82.8 | 56.3 | 81.4 | 27.4 |
| + Med-Scout | 64.0 (↑2.8) | 71.0 (↑2.1) | 83.0 (↑0.2) | 57.4 (↑1.1) | 81.9 (↑0.5) | 28.0 (↑0.6) |

Theoretical Certificate: Energy Landscapes

We provide evidence that Med-Scout internalizes the underlying geometric manifold rather than overfitting to answer templates. Analyzing the compatibility between visual evidence and textual descriptions via the energy function \(E(x,y) = -\log P_{\theta}(y|x)\), we observe the emergence of a distinct energy barrier: the model assigns high-energy states to spatial hallucinations while preserving low energy for factual medical descriptions.
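As a concrete illustration of this measurement, \(E(x,y)\) can be estimated as the summed per-token negative log-likelihood of a candidate description \(y\) under the model conditioned on the image \(x\). The sketch below assumes a generic causal-LM-style interface whose output logits align position-wise with the target tokens; the function and argument names are placeholders, not Med-Scout's released code.

```python
# Sketch: estimate E(x, y) = -log P_theta(y | x) as a summed token-level NLL.
# Assumes a causal-LM-style forward pass; names are placeholders, not the
# authors' implementation.
import torch
import torch.nn.functional as F


@torch.no_grad()
def description_energy(model, inputs, target_ids):
    """Energy of a textual description `target_ids` given visual inputs `inputs`."""
    logits = model(**inputs).logits              # (1, seq_len, vocab_size)
    shift_logits = logits[:, :-1, :]             # position t predicts token t + 1
    shift_labels = target_ids[:, 1:]
    nll = F.cross_entropy(
        shift_logits.reshape(-1, shift_logits.size(-1)),
        shift_labels.reshape(-1),
        reduction="sum",
    )
    return nll.item()                            # = -log P_theta(y | x)


# The claimed energy barrier then corresponds to
#   description_energy(model, x, y_factual)  <<  description_energy(model, x, y_hallucinated).
```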

Citation

@misc{liu2026medscoutcuringmllmsgeometric,
      title={Med-Scout: Curing MLLMs' Geometric Blindness in Medical Perception via Geometry-Aware RL Post-Training}, 
      author={Anglin Liu and Ruichao Chen and Yi Lu and Hongxia Xu and Jintai Chen},
      year={2026},
      eprint={2601.23220},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2601.23220}, 
}