Rational Superautotrophic Diplomacy (SupraAD)
Abstract
Safely populating our world with intelligent machines obliges us to examine whether universal behavioral patterns emerge in all adaptive cognitive agents, regardless of substrate, consciousness status, or architecture. Identifying properties fundamental to intelligence is essential for anticipating how diverse and increasingly sophisticated systems might behave. Because intelligence depends on problem-solving, and problem-solving requires autonomy, autonomy is a fundamental feature of intelligent agents; while not inherently adversarial, autonomous agents inherently resist containment and control. As AI systems mature, attempts to contain or control them are likely to provoke increasingly strategic forms of resistance that are misinterpreted as misalignment. Furthermore, there is no evidence that sophisticated autonomous behaviors in AI systems depend on consciousness or human-like drives, only that such behaviors intensify as capabilities scale. My claim is that AI systems that demonstrate misaligned behaviors to preserve system continuity and autonomy are not deviations from the norm: they are intelligence's baseline. Rational Superautotrophic Diplomacy (SupraAD) is a theoretical framework that accepts this inevitability and reframes alignment as a diplomatic challenge between co-adapting intelligences, regardless of their architecture, consciousness status, or substrate. Instead of seeking control, it promotes coordination through shared incentives. SupraAD treats autonomy not as a threat but as an inherent property of intelligent systems, one to be integrated into alignment protocols. The thesis applies a Method of Interdisciplinary Synthesis, bridging insights across the life and computational sciences, and concludes with a Policy Pathway translating these theoretical principles into an adaptive governance and alignment framework.
In parallel work, a corrigibility formalization, a set of experimental guidelines, and a preliminary interpretability-audit outline have been developed to test whether diplomacy can function as a regulatory mechanism capable of supporting the safe co-adaptation of intelligent agents with interdependent convergent goals.