PRISMM-Bench: A Benchmark of Peer-Review Grounded Multimodal Inconsistencies

Lukas Selch1, Yufang Hou2, M. Jehanzeb Mirza3, Sivan Doveh4, James Glass3, Rogerio Feris5, Wei Lin†1
1Johannes Kepler University Linz, 2Interdisciplinary Transformation University Austria, 3MIT CSAIL, 4Stanford University, 5MIT-IBM Watson AI Lab

PRISMM-Bench — benchmarking organic, long-context multimodal inconsistencies.

Abstract

Large Multimodal Models (LMMs) are increasingly applied to scientific research, yet it remains unclear whether they can reliably understand and reason over the multimodal complexity of papers. A central challenge lies in detecting and resolving inconsistencies across text, figures, tables, and equations, issues that are often subtle and domain-specific, yet ultimately undermine clarity, reproducibility, and trust. Existing benchmarks overlook this issue, either isolating single modalities or relying on synthetic errors that fail to capture real-world complexity.

We introduce PRISMM-Bench (Peer-Review-sourced Inconsistency Set for Multimodal Models), the first benchmark grounded in real reviewer-flagged inconsistencies in scientific papers. Through a multi-stage pipeline of review mining, LLM-assisted filtering, and human verification, we curate 384 inconsistencies from 353 papers. Based on this set, we design three tasks, namely inconsistency identification, inconsistency remedy, and pair matching, which assess a model's capacity to detect, correct, and reason over inconsistencies across different modalities.

Furthermore, to address the notorious problem of choice-only shortcuts in multiple-choice evaluation, where models exploit answer patterns without truly understanding the question, we introduce structured JSON-based answer representations that minimize linguistic biases by reducing reliance on superficial stylistic cues. We benchmark 21 leading LMMs, including large open-weight models (GLM-4.5V 106B, InternVL3 78B) and proprietary models (Gemini 2.5 Pro, GPT-5 with high reasoning). Results reveal strikingly low performance (27.8–53.9%), underscoring the challenge of multimodal scientific reasoning and motivating progress towards trustworthy scientific assistants.
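
To make the idea of structured answers concrete, the following is a minimal, illustrative sketch of how a single answer option could be rendered as rigid JSON instead of free-form prose. The field names and values (inconsistency_type, source_element, conflicting_element, description) are assumptions for this example and not the benchmark's actual schema.

import json

# Hypothetical structured answer option for an inconsistency-identification
# item. The schema shown here is illustrative only, not PRISMM-Bench's
# actual answer format.
candidate_option = {
    "inconsistency_type": "table_text_mismatch",
    "source_element": "Table 2",
    "conflicting_element": "Section 4.1",
    "description": "Accuracy reported in the table (82.4) differs from the value quoted in the text (84.2).",
}

# Rendering every option in the same rigid JSON structure strips away
# stylistic cues (length, fluency, phrasing) that a model could otherwise
# exploit as a choice-only shortcut, so its choice must be grounded in the
# paper content rather than in how the option is worded.
print(json.dumps(candidate_option, indent=2))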

Method Overview


BibTeX

@inproceedings{selch2026prismm,
  author    = {Selch, Lukas and Hou, Yufang and Mirza, M. Jehanzeb and Doveh, Sivan and Glass, James and Feris, Rogerio and Lin, Wei},
  title     = {PRISMM-Bench: A Benchmark of Peer-Review Grounded Multimodal Inconsistencies},
  booktitle = {International Conference on Learning Representations (ICLR)},
  year      = {2026},
}