Robust perception and reasoning require consistency across sensory modalities. Yet current multimodal models often violate this principle, yielding contradictory predictions for visual and textual representations of the same concept. Rather than masking these failures with standard voting mechanisms, which amplify systematic biases, we show that cross-modal inconsistency provides a rich, natural learning signal. We introduce R-C2, a reinforcement learning framework that resolves internal conflicts by enforcing cross-modal cycle consistency. By requiring the model to perform backward inference, switch modalities, and reliably reconstruct the answer via forward inference, we obtain a dense, label-free reward. This cyclic constraint forces the model to align its representations across modalities. Optimizing for this structure mitigates modality-specific errors and improves reasoning accuracy by up to 7.6 points. Our results suggest that advanced reasoning emerges not just from scaling data, but from enforcing a structurally consistent understanding of the world.
Cross-modal disagreements are common: given the same web page, a model can return one answer from the screenshot and a different one from the HTML view. Simple majority voting over these inconsistent predictions can reinforce the wrong answer instead of correcting it.
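The voting baseline above can be sketched as a simple majority over per-view predictions. This is a minimal illustration of why voting entrenches shared biases, not the paper's implementation:

```python
from collections import Counter

def majority_vote(predictions):
    """Return the most frequent prediction across modality views.

    If a systematic bias makes most views agree on the same wrong
    answer, the vote reinforces that error rather than fixing it.
    """
    return Counter(predictions).most_common(1)[0][0]
```

If the screenshot and HTML views inherit the same bias, the lone correct view is simply outvoted.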
R-C2 turns cross-modal contradictions into rewards. From a candidate answer, the model performs backward reasoning to synthesize queries and then runs forward inference across text and image views, checking whether the cycle returns to the original answer.
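The backward-then-forward cycle can be sketched as a reward function. Here `backward_infer` and `forward_infer` are hypothetical callables standing in for the model's two inference directions, and the view names are illustrative; this is a sketch of the idea, not R-C2's actual API:

```python
def cycle_consistency_reward(backward_infer, forward_infer, answer,
                             views=("text", "image")):
    """Average cycle-consistency reward for a candidate answer.

    For each source view, backward inference synthesizes a query that
    would make `answer` correct; forward inference then answers that
    query in each *other* view. A cycle earns reward 1.0 only when it
    returns to the original answer, giving a dense, label-free signal.
    """
    rewards = []
    for src in views:
        # Backward: "for this answer to be correct, what was asked?"
        query = backward_infer(answer, src)
        for dst in views:
            if dst == src:
                continue  # reward only cross-modal cycles
            # Forward: answer the synthesized query in the other modality.
            reconstructed = forward_infer(query, dst)
            rewards.append(1.0 if reconstructed == answer else 0.0)
    return sum(rewards) / len(rewards)
```

A partial reward (between 0 and 1) indicates that some modality paths reconstruct the answer while others do not, which is exactly the internal conflict the RL objective pushes the model to resolve.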
Backward inference asks the model to justify its own answer: “for this answer to be correct, what query must have been asked?” R-C2 applies this in both text and image views, together with our reconstructed VisualWebArena multiple-choice dataset.
Swipe horizontally, drag, or use the carousel arrows to browse backward-inference and dataset visualizations.
We evaluate R-C2 on six multimodal reasoning benchmarks and observe consistent gains in both accuracy and cross-modal consistency. The carousel below summarizes quantitative improvements and representative qualitative case studies.
Swipe horizontally, drag, or use the carousel arrows to browse quantitative and qualitative results.
R-C2 supports many combinations of backward and forward modalities. Ablations reveal which paths contribute most to accuracy and self-consistency.
Not all training examples are equally informative. Samples where image and text views strongly disagree turn out to be the most valuable for improving both accuracy and consistency.
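One way to operationalize this selection, assuming several predictions can be sampled per view, is to score each example by how often its text-view and image-view predictions disagree and keep the high-disagreement examples. The field names and threshold below are illustrative, not the paper's configuration:

```python
def disagreement_score(text_preds, image_preds):
    """Fraction of paired (text, image) predictions that disagree."""
    flags = [t != i for t, i in zip(text_preds, image_preds)]
    return sum(flags) / len(flags)

def select_training_examples(examples, threshold=0.5):
    """Keep examples whose cross-modal disagreement exceeds `threshold`.

    Strongly conflicting examples carry the most learning signal for the
    cycle-consistency reward; examples where the views already agree
    contribute little.
    """
    return [
        ex for ex in examples
        if disagreement_score(ex["text_preds"], ex["image_preds"]) > threshold
    ]
```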
@article{rc22025cross,
title = {R-C2: Cross-Modal Cycle Consistency Rewards Improve Multimodal Reasoning},
author = {Zhang, Zirui and Dong, Haoyu and Pei, Kexin and Mao, Chengzhi},
journal = {arXiv preprint},
year = {2025}
}
This work used Purdue Anvil GPU through allocation 250774 from the Advanced Cyberinfrastructure Coordination Ecosystem: Services & Support (ACCESS) program, which is supported by the U.S. National Science Foundation under grants 2138259, 2138286, 2138307, 2137603, and 2138296. We thank Guangxing Han for the insightful discussion.