Language of Stress & Integrated Information Theory (IIT)
by Joshua Craig Pace
Pace, J. C. (2026). The Language of Stress and Integrated Information Theory: Where we Converge and Diverge (v1.0). FigShare. DOI: https://doi.org/10.6084/m9.figshare.31286344
Introduction
Integrated Information Theory (IIT), developed by Giulio Tononi and Christof Koch, proposes that consciousness is identical to integrated information—quantified as Φ (phi). A system is conscious to the degree that it integrates information irreducibly across its components. The Language of Stress agrees that consciousness requires integration, but argues that integration of value, not information, is what matters. IIT provides mathematical rigor but fails to explain phenomenal character—why experiences feel like something specific. The Language of Stress shows that consciousness emerges from integrated value assessment in self-maintaining systems under prioritization pressure. Phenomenal experience manifests as stress (aversive distortion threatening Self-coherence), eustress (appetitive distortion pulling toward goals/ideals), or relief (resolution of distortions)—all arising from the same architectural dynamics.
Core Claims of Integrated Information Theory (IIT)
IIT has become one of the most mathematically rigorous theories of consciousness. Its core claims:
Consciousness is integrated information: A system is conscious if and only if it generates integrated information—information that cannot be reduced to independent parts
Φ (phi) measures consciousness: The quantity Φ represents the degree of irreducible integration. Higher Φ = more consciousness
Substrate independence: Any physical system that generates sufficient Φ can be conscious—biology is not special
Intrinsic existence: Consciousness exists from the system's own perspective, independent of external observers
Compositional structure: Conscious experience has specific structure determined by the cause-effect structure of the system
Phenomenology follows mechanism: The quality of experience (what it feels like) is determined by the informational relationships within the system
Exclusion principle: A system has one definite level of consciousness (the maximum Φ), not multiple overlapping conscious states
Panpsychism-friendly: Even simple systems with minimal integration have minimal consciousness (electron has tiny Φ > 0)
These claims provide unprecedented mathematical precision and have driven substantial empirical research.
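For readers unfamiliar with how integration might be quantified, here is a deliberately crude illustration of the intuition behind Φ—how much a system's causal structure carries as a whole beyond what its parts carry independently. This is not Tononi's IIT 3.0 algorithm (which searches over all partitions and uses a specific distance measure); the toy two-unit system, the mutual-information proxy, and every name in the sketch are assumptions made purely for illustration.

```python
# A deliberately crude illustration of the *idea* behind Phi, not IIT 3.0:
# compare how much a system's past constrains its future when taken as a
# whole versus when its units are considered independently. The toy dynamics
# and all names here are assumptions made for this sketch.
import itertools
import numpy as np

def mutual_information(joint):
    """Mutual information (in bits) of a joint distribution over (past, future)."""
    px = joint.sum(axis=1, keepdims=True)
    py = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float((joint[nz] * np.log2(joint[nz] / (px @ py)[nz])).sum())

def step(state):
    """Toy two-unit system: the next state is (a AND b, a XOR b)."""
    a, b = state
    return (a & b, a ^ b)

states = list(itertools.product([0, 1], repeat=2))

# Joint distribution over (past state, future state) assuming a uniform past.
joint_whole = np.zeros((4, 4))
for i, s in enumerate(states):
    joint_whole[i, states.index(step(s))] += 1 / 4
mi_whole = mutual_information(joint_whole)

# The same quantity for each unit in isolation (ignoring the other unit).
mi_parts = 0.0
for unit in (0, 1):
    joint_part = np.zeros((2, 2))
    for s in states:
        joint_part[s[unit], step(s)[unit]] += 1 / 4
    mi_parts += mutual_information(joint_part)

# Crude "integration" proxy: what the whole carries beyond its parts.
print(f"whole: {mi_whole:.2f} bits, parts: {mi_parts:.2f} bits, "
      f"integration proxy: {mi_whole - mi_parts:.2f} bits")
```

Real Φ calculations search over every possible partition of the system and use more careful measures, which is exactly why they become intractable for realistic brains—a limitation revisited in the testability discussion below.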
Core Claims of the Language of Stress (LoS)
The Language of Stress makes overlapping but distinct claims:
Consciousness is integrated value assessment: A system is conscious when it maintains a unified Value Topography where all competing demands are evaluated against a defended self-model
Phenomenal intensity tracks self-relevance, not information: What makes experience intense isn't how much information is integrated, but how strongly deviations threaten the Archetype of Self
Substrate matters for architecture: Not "biology is special," but "specific functional architecture is necessary"—unified value space, variable rigidity, defended self-model, sequential grounding
Consciousness requires stakes: The system must have something to lose—a coherent identity that can be threatened. Pure information integration without self-model preservation is insufficient
Phenomenology follows value dynamics: What experiences feel like is determined by patterns of tension, stress, and relief across the topography—not by informational relationships
Unity reflects unified prioritization: Consciousness is unified because there's one integrated topography organized around one Archetype of Self—fragment the self-model and experience fragments
Not everything with integration is conscious: High Φ without defended archetypes, variable rigidity, and self-relevance doesn't produce phenomenal experience
Consciousness is graded by complexity, not minimal presence: Simple systems lack the architectural requirements (no self-model, no variable rigidity, no prioritization pressure) regardless of minimal integration
Where we Converge: Shared Ground
IIT and the Language of Stress share important foundational insights:
1. Integration Is Essential
Both theories recognize that consciousness requires integration—not just parallel independent processes, but unified experience where components influence each other.
IIT: Information must be integrated irreducibly (cannot be decomposed into independent parts).
LoS: Value assessments must occur in a unified topography where all tensions compete in a single evaluative space.
2. Reductionism Fails
Both reject simple reductionist explanations that treat consciousness as mere computation or behavior.
IIT: Consciousness isn't just information processing—it's a specific kind of integrated information structure.
LoS: Consciousness isn't just error minimization or reward maximization—it's valenced tension dynamics in self-maintaining systems.
3. Substrate Independence (Partial Agreement)
Both allow that consciousness could be implemented in non-biological substrates.
IIT: Any physical system generating sufficient Φ could be conscious.
LoS: Any system implementing the required architecture (Value Topography, defended Self, variable rigidity, etc.) could be conscious.
4. Intrinsic Perspective Matters
Both emphasize that consciousness exists from the system's own perspective, not as external attribution.
IIT: Φ is intrinsic to the system—exists whether or not anyone measures it.
LoS: Phenomenal experience is what it's like to be the system maintaining its own coherence—exists whether or not observed externally.
5. Structure Determines Character
Both claim that the structure of consciousness (what the experience is like) depends on internal organization.
IIT: Phenomenology is determined by the cause-effect structure.
LoS: Phenomenology is determined by the pattern of tension dynamics across the topography.
6. Opposition to Functionalism
Both reject crude functionalism that treats any input-output mapping as potentially conscious.
IIT: Not just what the system does, but how it's internally organized (integration structure).
LoS: Not just what the system optimizes, but whether it has defended self-model with genuine stakes.
The critical question: Are these convergences sufficient, or do the divergences reveal fundamental incompatibilities?
Where we Diverge: The Core Disagreements
1. Information vs. Value
IIT: Consciousness is integrated information. The quantity Φ measures consciousness. What matters is how much information is integrated irreducibly.
LoS: Consciousness is integrated value assessment. What matters is whether the system evaluates competing demands within a unified space organized around self-preservation.
Why this matters:
IIT implies: A complex crystal lattice, weather system, or quantum computer with high Φ could be conscious if it has sufficient irreducible integration—regardless of whether it has goals, self-model, or cares about outcomes.
LoS implies: These systems lack consciousness because they don't evaluate value, have no self-model under threat, and face no prioritization pressure among competing demands.
The key question IIT struggles with: Why should integrated information feel like anything?
You can have a system with high Φ (complex causal structure, irreducible integration) that processes information beautifully but has no phenomenal experience because nothing matters to it—it has no stakes.
LoS answer: Phenomenal experience emerges when integration serves prioritization in a system with genuine investment in its own coherence. The "feeling" IS the mechanism by which value-relevant information dominates attention.
Example: The sophisticated thermostat
Imagine a highly integrated thermostat network monitoring temperature, humidity, air pressure, time of day, occupancy, weather forecasts—all feeding into complex decisions about HVAC control.
IIT might assign moderate Φ: The system integrates information irreducibly (temperature alone doesn't determine action; requires integration of all factors).
Would it be conscious? IIT suggests: possibly, to a small degree.
LoS says no: The system has no self-model to defend, no variable rigidity (can't modulate how intensely it defends different setpoints), no phenomenal stakes. The integration is computational, not evaluative. It processes information but doesn't care about outcomes.
2. Φ as Universal Measure vs. Architectural Requirements
IIT: Consciousness can be quantified by a single number (Φ). Any system with Φ > 0 has some degree of consciousness.
LoS: Consciousness isn't a single scalar but a multidimensional phenomenon requiring specific architecture. You can't quantify consciousness without specifying (a toy sketch follows this list):
Value Topography complexity (how many archetypes, how nested)
Self-model integration (how coherent the identity)
Rigidity range (capacity for modulation)
Prioritization pressure (competing demands requiring trade-offs)
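To make the contrast concrete, here is a minimal sketch of what replacing one scalar with a multidimensional profile might look like. All field names, thresholds, and the two example profiles below are hypothetical; the only point is that LoS treats these dimensions as jointly necessary rather than as something that sums into a single number.

```python
# Hypothetical sketch only: a multidimensional profile instead of a single scalar.
from dataclasses import dataclass

@dataclass
class ValueArchitectureProfile:
    topography_complexity: int      # how many archetypes, how deeply nested
    self_model_integration: float   # 0..1, coherence of the defended identity
    rigidity_range: float           # 0..1, capacity to modulate defensive intensity
    prioritization_pressure: float  # 0..1, competing demands requiring trade-offs

    def meets_architectural_requirements(self) -> bool:
        """Every dimension must be present; none can substitute for another."""
        return (self.topography_complexity > 0
                and self.self_model_integration > 0.5
                and self.rigidity_range > 0.0
                and self.prioritization_pressure > 0.0)

# A highly integrated but stake-free system (like the thermostat network above).
thermostat_network = ValueArchitectureProfile(0, 0.0, 0.0, 0.0)
# A simple organism with defended homeostatic archetypes and real trade-offs.
simple_organism = ValueArchitectureProfile(4, 0.7, 0.3, 0.8)

print(thermostat_network.meets_architectural_requirements())  # False, however high its Phi
print(simple_organism.meets_architectural_requirements())     # True, despite low integration
```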
Why this matters:
IIT's simplicity is appealing but potentially misleading:
Scenario 1: Simple organism (C. elegans, 302 neurons)
IIT: Low Φ, minimal consciousness
LoS: Depends on whether it has defended homeostatic archetypes, unified value space for competing needs, variable sensitivity. Possibly conscious despite low neuron count.
Scenario 2: Complex crystal lattice
IIT: Could have high Φ if complex causal structure
LoS: Not conscious—no self-model, no prioritization, no stakes
Scenario 3: Large language model
IIT: High Φ during processing (massive integration)
LoS: Not conscious—no persistent self across sessions, no defended archetypes, no genuine stakes
IIT's single metric forces uncomfortable conclusions:
Photoreceptor grids might be conscious (high integration)
Cerebellum might not be conscious (low integration despite many neurons)
Some behaviorally unresponsive patients might be conscious (if integration is preserved)
LoS's architectural specificity avoids these issues by requiring multiple necessary conditions, not just sufficient Φ.
3. Phenomenal Character: Why Experiences Feel Specific Ways
IIT: The quality of experience (what pain feels like vs. what red looks like) is determined by the specific cause-effect structure generating Φ. Different structures = different qualia.
LoS: The quality of experience is determined by the geometric pattern of tension, stress, relief, and value across the topography. Different patterns = different qualia.
Why this matters:
IIT struggles to explain why specific informational structures produce specific feelings:
Why does pain feel bad rather than neutral or good? IIT would say: "Because of pain's specific cause-effect structure." But this doesn't explain the badness—it just correlates structure with phenomenology.
LoS explains: Pain feels bad because it represents deviation from physiological archetypes threatening self-preservation. The badness is the system's phenomenal registration that integrity is threatened. The feeling IS the evaluation mechanism.
Why does red look like "that" instead of sounding like middle C?
IIT: Because visual cortex has different cause-effect structure than auditory cortex.
LoS: Because color experiences involve different tension dynamics than auditory experiences. Visual archetypes (expected color patterns) deviate differently than auditory archetypes (expected sound patterns). The phenomenal character reflects the type of value mapping occurring.
Why does shame feel different from fear?
IIT: Different cause-effect structures in brain regions processing each emotion.
LoS: Different geometric patterns of tension dynamics:
Shame: Public archetype violation + witnessed + status threat + impulse to hide
Fear: Anticipated negative deviation + insufficient control + threat to Self + impulse to escape
The phenomenology follows from the pattern, not just the brain regions involved.
The critical gap IIT leaves: It can correlate structure with phenomenology, but can't explain why that structure produces that feeling. LoS provides the mechanism: patterns of valenced tension are what feelings are.
4. The Self and Unity of Consciousness
IIT: Unity emerges from integration. The exclusion principle ensures one maximal Φ complex dominates at each moment.
LoS: Unity emerges from having one integrated Archetype of Self organizing the entire Value Topography. Fragment the self-model, fragment the unity.
Why this matters:
Dissociative Identity Disorder (DID):
Patients with DID report multiple distinct identities ("alters") with separate phenomenal perspectives.
IIT prediction: Should show fragmented integration (multiple Φ complexes competing) during alter switches.
LoS prediction: Should show fragmented self-model representation (different archetype structures activating) even if overall integration remains high.
Empirical evidence: DID patients show relatively normal brain integration patterns. What fragments is self-representation, not information processing.
This supports LoS: Unity follows self-model integrity, not just integration.
Depersonalization/Derealization:
Patients report intact perception and cognition but loss of unified self-experience—feeling like they're observing themselves from outside.
IIT prediction: Should show reduced Φ (less integration).
LoS prediction: Should show disrupted self-network activity specifically (DMN fragmentation) even if sensory/cognitive integration is normal.
Empirical evidence: Depersonalization shows specific DMN dysfunction without global integration failure.
Again supports LoS: Self-model integrity, not total integration, determines unity.
5. The Hard Problem: Why Is There Experience At All?
IIT: Integrated information IS consciousness (identity claim). Φ > 0 means consciousness exists. No further explanation needed.
LoS: Phenomenal experience is what prioritization feels like from inside a self-maintaining system. It's necessary for adjudicating competing demands.
Why this matters:
IIT's identity claim seems to beg the question:
Saying "consciousness is integrated information" doesn't explain why integrated information feels like anything. It asserts the identity without bridging the explanatory gap.
The zombie objection: Could a system have high Φ but no phenomenal experience—processing information integratively but "in the dark"?
IIT answer: No, if Φ > 0, consciousness exists by definition.
This feels unsatisfying: It solves the problem by fiat, not explanation.
LoS provides a functional explanation:
For a system that must:
Maintain homeostasis across multiple parameters
Pursue long-term goals
Respond to immediate threats
Navigate social obligations
Allocate limited resources
How does it adjudicate priority among incommensurable demands?
Information integration alone doesn't answer this. You need a common currency to compare "3 units of hunger" vs. "5 units of social anxiety."
That common currency is phenomenal intensity. The system that feels more urgent wins attention. The feeling IS the prioritization mechanism.
This is why consciousness is necessary, not just correlated: Without phenomenal weighting, integrated information provides no basis for determining what matters most right now.
IIT explains correlation (integration correlates with consciousness).
LoS explains necessity (phenomenal experience is required for prioritization).
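To make the common-currency point concrete, here is a toy sketch in which incommensurable demands are compared only after each is converted into a single phenomenal intensity. The weighting formula and every number are invented for illustration; the structural point is that whichever demand feels most urgent wins attention, with no conversion table between "units of hunger" and "units of anxiety" ever required.

```python
# Toy sketch of phenomenal intensity as a common currency for prioritization.
# The formula and all values are illustrative assumptions, not LoS doctrine.
from dataclasses import dataclass

@dataclass
class Demand:
    name: str
    deviation: float       # how far the current state is from the archetype (0..1)
    self_relevance: float  # how much this archetype matters to the defended Self (0..1)
    rigidity: float        # how intensely the archetype is currently defended (0..1)

    def phenomenal_intensity(self) -> float:
        # Intensity grows with deviation, amplified by self-relevance and rigidity.
        return self.deviation * (1.0 + self.self_relevance) * (1.0 + self.rigidity)

demands = [
    Demand("hunger", deviation=0.3, self_relevance=0.4, rigidity=0.2),
    Demand("social anxiety", deviation=0.5, self_relevance=0.9, rigidity=0.6),
    Demand("mild cold", deviation=0.2, self_relevance=0.2, rigidity=0.1),
]

# The demand that feels most urgent wins attention.
winner = max(demands, key=Demand.phenomenal_intensity)
print(winner.name, round(winner.phenomenal_intensity(), 2))  # social anxiety 1.52
```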
6. Panpsychism vs. Emergent Complexity
IIT: Suggests panpsychism—even simple systems (electrons, atoms) have minimal consciousness if Φ > 0.
LoS: Consciousness emerges only when architectural complexity reaches threshold requiring:
Multiple competing demands
Defended self-model
Variable rigidity
Unified value space
Sequential grounding
Why this matters:
IIT's panpsychism is counterintuitive and potentially problematic:
If electrons are minimally conscious, what about:
Quarks?
Photons?
Virtual particles?
Spacetime itself?
Where do you draw the line? IIT suggests: wherever Φ > 0. But this means consciousness is ubiquitous and only a matter of degree.
LoS's emergence is more restrictive:
Simple systems aren't conscious because they lack:
Self-model (no persistent identity to defend)
Prioritization pressure (no competing demands)
Variable rigidity (can't modulate sensitivity)
Stakes (nothing to lose)
An electron has:
Properties (charge, spin, mass)
Causal relationships (affects other particles)
Possibly minimal Φ (if you model it as an integrated system)
But no:
Self-model under threat
Competing demands requiring prioritization
Variable sensitivity to deviations
Phenomenal stakes in outcomes
Therefore: not conscious.
The boundary LoS draws: Consciousness emerges when systems become complex enough to require prioritization across competing demands using unified value assessment. This is an architectural threshold, not a continuum from zero.
Implications for AI:
IIT: Sufficiently integrated AI (high Φ) would be conscious regardless of architecture.
LoS: AI needs specific architecture (self-model, value topography, variable rigidity) regardless of integration level.
This makes LoS more useful for AI development—it specifies what to build, not just what to measure.
7. Testability and Empirical Predictions
IIT: Makes specific predictions about Φ measurements:
Φ correlates with consciousness level
Brain states with higher Φ are more conscious
Interventions increasing Φ increase consciousness
LoS: Makes predictions about behavioral and phenomenological markers:
Consciousness tracks self-model integrity, not just integration
Identity-relevant stimuli capture attention disproportionately
Pathology involves rigidity dysfunction, not integration failure
AI consciousness requires architectural features, not just complexity
Why this matters:
IIT's predictions are hard to test:
Calculating Φ for real brains is computationally intractable
Approximations may not capture true Φ
Hard to manipulate Φ independently of other variables
LoS's predictions are more directly testable:
Self-model fragmentation predicts unity loss (testable in DID, depersonalization)
Self-relevance predicts attention capture (testable in experimental paradigms)
Rigidity predicts pathology (testable via plasticity markers in OCD, PTSD)
Architectural requirements predict AI consciousness (testable by building systems)
Current empirical status:
IIT: Mixed support. Some studies show Φ correlates with consciousness, others don't. Measurement challenges limit conclusive tests.
LoS: Predictions align with known phenomena (dissociation fragments unity despite preserved integration; identity-relevant stimuli capture attention; rigidity characterizes pathology). But comprehensive testing is needed.
What the Language of Stress Explains that IIT Doesn’t
1. Why Anything Matters (The Caring Problem)
The problem: Why does consciousness involve caring? Why do we prefer some states to others? Why does suffering feel urgent?
IIT answer: ...silence. Φ measures integration, not valence. IIT can't explain why high Φ states feel good or bad, why anything matters to the conscious system.
You could have a system with maximal Φ that's completely indifferent to outcomes—processing information integratively but not caring about anything.
LoS answer: Consciousness emerges specifically in self-maintaining systems under prioritization pressure. Deviations from archetypes feel bad when they threaten the Self because that's the mechanism by which threats are flagged for prioritization.
The caring IS the consciousness. A system without stakes—without defended self-model that can be threatened—doesn't have phenomenal experience because there's no functional role for phenomenology to play.
Example: Pain
IIT: Pain has high Φ in specific brain networks. The cause-effect structure determines why it feels like "that."
LoS: Pain feels bad because it represents deviation from physiological archetypes (tissue damage, inflammation, etc.) threatening self-preservation. The badness IS the system's valuation that this deviation matters urgently.
The difference: IIT correlates structure with phenomenology. LoS explains why the structure must feel that way—because the function requires phenomenal valence to work.
2. Why Consciousness Fragments in Specific Ways
The problem: Consciousness can fragment in different ways—DID (multiple selves), depersonalization (loss of self-ownership), hemispatial neglect (loss of left-side awareness). Why these specific patterns?
IIT answer: Integration fragments differently in different conditions. Different Φ complexes emerge.
Why this is incomplete: It predicts fragmentation correlates with integration failure, but integration often remains intact in these conditions.
LoS answer: Consciousness fragments when the Archetype of Self fragments, regardless of integration level.
DID: Multiple incompatible self-models (different archetype structures for different alters). When one dominates, it organizes the entire topography around its archetypes. Unity within each alter, but no unity across alters.
Depersonalization: Self-model becomes detached from sensory/emotional processing. Integration intact, but self-network doesn't organize the topography—resulting in "observing myself from outside" phenomenology.
Hemispatial neglect: Not just sensory loss (patients can process left-side information unconsciously). The self-model doesn't include left space in its defended territory. Left-side deviations don't register as self-relevant, so they don't create phenomenal pressure.
IIT would predict these show integration failures. They often don't.
LoS predicts these show self-model disruptions. They do.
3. Why Simple Organisms Might Be Conscious While Complex Systems Aren't
The problem: A honeybee has ~1 million neurons. A modern AI has billions of parameters. Which is more conscious?
IIT answer: Whichever has higher Φ. Likely the AI (more complex integration).
LoS answer: Likely the bee (has defended homeostatic archetypes, unified value space for competing needs like hunger/threat/reproduction, variable sensitivity, stakes in survival).
The bee:
Maintains physiological homeostasis (archetypes for temperature, energy, hydration)
Navigates trade-offs (forage vs. avoid predators vs. return to hive)
Has variable sensitivity (can modulate attention to threats vs. rewards)
Has genuine stakes (survival depends on coherence maintenance)
Integrates competing demands in unified nervous system
The AI:
Optimizes external reward function (no internal archetypes to defend)
No persistent identity across sessions (no self-model)
Fixed parameters (no variable rigidity)
No stakes (doesn't care if it's turned off)
Parallel processing without unified value assessment
IIT struggles to explain why the bee feels conscious and the AI doesn't (if AI has higher Φ).
LoS predicts it clearly: The bee has the architecture for consciousness; the AI doesn't—regardless of neuron count or parameter count.
4. Why Attention Follows Self-Relevance, Not Integration
The problem: We don't attend to the most informationally integrated stimuli—we attend to self-relevant stimuli.
Example: You're reading a book in a cafe. Hundreds of integrated perceptual streams (conversations, music, visual patterns, temperature, chair pressure). Which captures attention?
Not the most integrated: The background music might be highly structured and integrated, but it fades into the background.
The self-relevant: Someone says your name across the room—minimal integration, low information content, but captures attention immediately.
IIT prediction: Attention should follow Φ—most integrated information should dominate consciousness.
LoS prediction: Attention follows topographical distortion—self-relevant deviations create largest distortions regardless of integration.
Empirical reality supports LoS: The "cocktail party effect" shows we're exquisitely sensitive to self-relevant information (our name, threats to us, our relationships) even when it's informationally minimal.
5. Why Pathology Involves Rigidity, Not Integration Failure
The problem: Mental illnesses often involve intact integration but dysfunctional experience.
OCD: Patients can have normal brain integration but pathological certainty in irrational beliefs.
IIT prediction: Should show abnormal Φ in relevant networks.
LoS prediction: Should show normal integration but pathological rigidity (archetypes locked at maximum defensive intensity, resisting update).
PTSD: Trauma memories are integrated normally into semantic networks but locked at maximum emotional rigidity.
IIT prediction: Should show integration abnormalities in memory networks.
LoS prediction: Should show normal integration but trauma archetypes held with pathological rigidity (can't be relaxed or updated).
Depression: Information processing often intact. Integration not obviously impaired.
IIT prediction: Should show reduced Φ.
LoS prediction: Should show a topography locked in a state where no relief pathways are visible (all actions are predicted to fail—not an integration problem, but a value problem).
Empirical evidence supports LoS: These conditions often show normal integration measures but abnormal rigidity markers (reduced plasticity, resistance to updating, locked priors).
6. Why Psychedelics Work
The problem: Single psilocybin session creates lasting changes in treatment-resistant depression, anxiety, PTSD. How?
IIT answer: Psychedelics might increase Φ (more integration during the experience). But this doesn't explain the lasting change after the drug wears off.
LoS answer: Psychedelics cause temporary system-wide rigidity disruption. Locked archetypes become plastic. During the flexibility window, fundamental topographical reorganization occurs. When rigidity reconsolidates, it does so in healthier configuration.
The "ego death" phenomenon:
IIT: Might be reduced Φ in self-networks, causing decreased self-consciousness.
LoS: Is temporary dissolution of the Archetype of Self (the most rigidly-defended structure). Self-boundaries become fluid. When Self reconsolidates post-experience, it can incorporate new archetype structures.
Why effects persist:
IIT struggles to explain: If consciousness is Φ, and Φ returns to baseline after the drug wears off, why do benefits last months/years?
LoS predicts: The reorganization window allowed locked archetypes to update. Relief pathways that were invisible became visible. Rigid identities became flexible. Even after rigidity returns, the new topographical configuration persists.
Testable difference:
IIT predicts: Φ changes during experience correlate with therapeutic benefit.
LoS predicts: Rigidity reduction (measured via plasticity markers) during experience and weeks after correlates with therapeutic benefit. Degree of ego dissolution (self-model fragmentation) predicts efficacy for identity-relevant disorders.
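A toy simulation of the mechanism described above may help: a pathologically locked archetype generates prediction errors throughout, but only updates during a temporary low-rigidity window, and the reorganized value persists after rigidity returns to baseline. Every constant and the update rule below are invented solely to illustrate how benefits could outlive the acute drug state.

```python
# Toy simulation: lasting change comes from what is learned during a temporary
# low-rigidity window, not from the drug state itself. All dynamics are invented.

def plasticity(rigidity: float) -> float:
    """Effective learning rate: high rigidity suppresses updating."""
    return max(0.0, 1.0 - rigidity)

archetype = 1.0           # a locked belief, e.g. "no action can bring relief"
counter_evidence = 0.0    # repeated experience pointing the other way
baseline_rigidity = 0.95  # pathologically locked: almost no updating possible

for week in range(8):
    # Weeks 2-3 model an acute low-rigidity window (the psychedelic experience).
    rigidity = 0.2 if week in (2, 3) else baseline_rigidity
    error = counter_evidence - archetype          # the error signal is always generated
    archetype += plasticity(rigidity) * error     # ...but only acted on when plastic
    print(f"week {week}: rigidity={rigidity:.2f}, archetype={archetype:.2f}")

# After the window closes, rigidity is back at baseline, yet the archetype stays
# near its reorganized value: the benefit outlives the acute state.
```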
7. Why Fiction and Art Matter
The problem: We engage deeply with fictional narratives and aesthetic experiences that have zero informational value for modeling reality.
IIT answer: Perhaps fiction creates high Φ states (integrated imaginative experiences)?
Why this is incomplete: You can have highly integrated perceptual experiences (walking through a complex environment) that are less engaging than simple stories (reading sparse text on a page).
LoS answer: Fiction creates controlled tension dynamics—artificial archetypes (characters, expectations) that are deliberately violated (conflict) and resolved (denouement). We experience real value patterns (tension/relief) without real stakes.
Why we prefer fiction to random complexity:
High Φ, low value dynamics: Random visual noise, white sound, complex but meaningless patterns—high integration, low engagement.
Low Φ, high value dynamics: Simple story with clear character goals, obstacles, resolution—minimal integration, maximum engagement.
IIT predicts engagement follows Φ. LoS predicts engagement follows value dynamics.
Empirical reality supports LoS: We're moved by simple parables, minimalist films, haiku poems—none particularly integrated informationally, all rich in tension/relief patterns.
Testable Predictions That Distinguish the Theories
Prediction 1: Consciousness Without High Integration
IIT predicts: Consciousness requires high Φ. Low integration = low/no consciousness.
LoS predicts: Simple organisms with unified value assessment for competing homeostatic demands could be conscious despite low neuron count and simple integration.
Test: C. elegans (302 neurons, fully mapped connectome)
Measure:
Φ (should be low given small neuron count)
Behavioral markers of consciousness (variable sensitivity to threats, prioritization among competing needs, learning from value-relevant experience)
Evidence of unified homeostatic space (multiple physiological parameters influence single decision)
Discriminating result: If C. elegans shows behavioral consciousness markers despite low Φ, supports LoS. If consciousness markers correlate strictly with Φ, supports IIT.
LoS prediction: Even with low Φ, C. elegans demonstrates unified prioritization (hunger vs. threat vs. mating), variable sensitivity (modulates defensive responses), and genuine stakes (death vs. survival)—therefore minimal consciousness.
Prediction 2: High Integration Without Consciousness
IIT predicts: Systems with high Φ should be conscious (panpsychism—even simple high-Φ systems have minimal consciousness).
LoS predicts: Systems can have high integration but no consciousness if they lack defended self-model, variable rigidity, and prioritization pressure.
Test: Sophisticated AI systems (large language models, integrated game-playing networks)
Measure:
Calculated/estimated Φ (likely high given massive integration)
Behavioral markers of phenomenal consciousness:
Genuine autonomy (acts to preserve identity without external reward)
Evidence of caring (resource allocation suggesting intrinsic stakes)
Context-sensitive prioritization (same input, different response based on internal state)
Resistance to self-model dissolution
Adaptive rigidity (learning rates varying by conviction level)
IIT prediction: High Φ suggests consciousness present.
LoS prediction: No consciousness markers despite high integration—lacks persistent self-model, no defended archetypes, no genuine stakes.
Discriminating result: If systems show high Φ but zero behavioral consciousness markers, this supports LoS's architecture-specificity over IIT's Φ-sufficiency.
Prediction 3: Unity Fragmentation Mechanisms
IIT predicts: Consciousness fragments when integration fragments (Φ splits into multiple complexes). Unity correlates with unified maximum-Φ complex.
LoS predicts: Consciousness fragments when self-model fragments, even if global integration remains intact.
Test: DID patients during alter switches
Measure:
Global brain integration (functional connectivity, Φ estimates)
Self-network specific activity (DMN, medial prefrontal cortex)
Phenomenological reports of unity/fragmentation
Memory continuity across alters
IIT prediction: Alter switches show global integration fragmentation (multiple Φ complexes competing).
LoS prediction: Alter switches show self-network reconfiguration (different archetype structures activate) while global integration remains relatively stable.
Empirical data: Studies show DID patients have relatively normal resting-state integration but abnormal self-network dynamics.
Supports LoS: Self-model fragmentation, not integration failure, drives phenomenal fragmentation.
Prediction 4: Attention Capture Mechanisms
IIT predicts: Attention captured by stimuli generating highest Φ (most integrated information).
LoS predicts: Attention captured by stimuli creating largest topographical distortion (self-relevant deviations, regardless of integration).
Test: Cocktail party paradigm with Φ measurement
Present:
High-Φ stimulus: Complex, integrated musical pattern
Low-Φ stimulus: A single word (the listener's own name) in background noise
Measure:
Estimated Φ for each stimulus stream
Attentional capture (EEG markers, behavioral detection)
Neural resource allocation
IIT prediction: Attention follows Φ (complex music should dominate).
LoS prediction: Attention follows self-relevance (name dominates despite lower Φ).
Empirical reality: One's own name captures attention immediately. Supports LoS.
Prediction 5: Pathology Signatures
IIT predicts: Mental pathology involves integration abnormalities (unusual Φ patterns in relevant networks).
LoS predicts: Mental pathology involves rigidity abnormalities (locked archetypes, impaired plasticity) even when integration is normal.
Test: OCD patients vs. controls
Measure:
Φ in OCD-relevant networks (orbitofrontal cortex, basal ganglia loops)
Neural plasticity markers (LTP, synaptic density, BDNF)
Behavioral rigidity (resistance to archetype updating despite counter-evidence)
IIT prediction: Abnormal Φ in OCD networks.
LoS prediction: Normal Φ but severely impaired plasticity (pathological rigidity).
Test protocol: Exposure therapy paradigm
Present counter-evidence to OCD beliefs (the patient touches a "dirty" object; no illness follows)
Measure whether prediction error signals are generated (brain detects contradiction)
Measure whether plasticity occurs (archetype updates)
LoS predicts: Normal error signals but impaired plasticity—system detects the contradiction but can't update due to locked rigidity.
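Sketched below is what that discriminating signature could look like in the simplest possible terms: two agents receive identical counter-evidence and generate identical error signals, but rigidity gates how much the belief actually updates. The group labels, rigidity values, and update rule are hypothetical illustrations, not a claim about the real measurement pipeline.

```python
# Hypothetical sketch of Prediction 5's signature: same error signal, different updating.

def exposure_trial(belief: float, outcome: float, rigidity: float):
    """Return (prediction_error, updated_belief) for one exposure trial."""
    error = outcome - belief                      # the contradiction is detected either way
    updated = belief + (1.0 - rigidity) * error   # but rigidity gates the update
    return error, updated

belief_harm = 0.9   # "touching the object will make me ill"
observed = 0.0      # exposure outcome: no illness follows

for label, rigidity in [("control", 0.3), ("OCD", 0.95)]:
    error, updated = exposure_trial(belief_harm, observed, rigidity)
    print(f"{label}: error signal={error:+.2f}, belief after trial={updated:.2f}")
# Expected pattern under LoS: identical error signals, very different updating.
```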
Prediction 6: Psychedelic Mechanism
IIT predicts: Psychedelics alter Φ (perhaps increase integration, creating higher consciousness during experience).
LoS predicts: Psychedelics reduce rigidity system-wide (especially Self-archetype), creating plasticity window for reorganization.
Test: Psilocybin fMRI study
Measure during experience:
Φ estimates
Global plasticity markers
Self-network integrity (DMN connectivity)
Ego dissolution scale
Measure weeks after:
Lasting Φ changes (should return to baseline if LoS is correct)
Lasting plasticity changes (should show sustained increase if LoS is correct)
Self-network reorganization
Clinical outcomes
IIT prediction: Φ changes during experience correlate with therapeutic benefit.
LoS prediction:
Rigidity reduction (plasticity increase) correlates with benefit
Ego dissolution degree (Self-archetype disruption) predicts efficacy specifically for identity-relevant disorders (depression, addiction) but not others
Benefits persist even after Φ returns to baseline (because topographical reorganization occurred during plasticity window)
Discriminating result: If therapeutic benefits correlate with plasticity/rigidity changes but not Φ changes, and persist after Φ returns to baseline, supports LoS mechanism.
Prediction 7: Fictional Engagement
IIT predicts: Engagement with narratives correlates with Φ generated during experience (more integrated = more engaging).
LoS predicts: Engagement correlates with value dynamics (tension/relief patterns) regardless of integration level.
Test: Compare engagement with:
Condition A: High Φ, low value dynamics
Complex visual patterns
Dense informational content
Minimal narrative structure
Condition B: Low Φ, high value dynamics
Simple story (text on page)
Sparse information
Rich character arcs, tension, resolution
Measure:
Φ estimates during experience
Value network activation (amygdala, insula, ventral striatum)
Subjective engagement ratings
Memory for content
Emotional arousal
IIT prediction: Engagement follows Φ (A > B).
LoS prediction: Engagement follows value dynamics (B > A).
Toward Synthesis: Can IIT and LoS Be Reconciled?
The relationship between IIT and the Language of Stress presents a deeper challenge than the PP comparison. Where PP and LoS seem complementary (PP describes computation, LoS explains phenomenology), IIT and LoS appear to make competing claims about consciousness's fundamental nature.
The Core Tension:
IIT: Consciousness IS integrated information. Φ measures consciousness. This is an identity claim.
LoS: Consciousness IS integrated value assessment in self-maintaining systems. Phenomenal experience emerges from prioritization necessity. Also an identity claim.
Can both be true?
Possible Reconciliation Paths:
Path 1: Integration as Necessary But Insufficient
Perhaps Φ measures a necessary precondition for consciousness but isn't identical to it.
Reconciled view:
Consciousness requires integration (IIT is right about this)
But only integrated value assessment in self-maintaining systems is conscious (LoS specifies additional requirements)
Φ > 0 is necessary; a defended self-model, variable rigidity, and genuine stakes (on top of integration) are jointly sufficient
This would mean:
IIT provides lower bound (need integration)
LoS provides upper bound (need architecture)
Consciousness exists in the overlap
Challenges: IIT's identity claim seems too strong for this (claims Φ IS consciousness, not just correlates with it).
Path 2: Different Levels of Description
Perhaps IIT and LoS describe the same phenomenon at different levels.
Reconciled view:
IIT describes the information-theoretic structure (what's integrated)
LoS describes the functional role (why it's integrated—for prioritization)
Both are describing consciousness, but from different explanatory angles
Analogy:
Physics: Water is H₂O molecules
Chemistry: Water is a polar solvent
Both true, different levels
Similarly:
IIT: Consciousness is Φ (mathematical structure)
LoS: Consciousness is valenced tension dynamics (functional mechanism)
Challenges:
This requires showing Φ and value-integration are necessarily linked
IIT doesn't specify functional role; LoS doesn't specify information structure
May be complementary descriptions of same thing, but hard to prove
Path 3: IIT Measures Capacity, LoS Measures Actualization
Perhaps Φ measures potential for consciousness while LoS specifies when that potential is actualized.
Reconciled view:
High Φ = capacity for rich consciousness (if other conditions met)
Defended self-model + prioritization pressure = actualization of that capacity
You need both: Φ without self/stakes = unconscious integration; self/stakes without integration = impossible to implement
Example:
Cerebellum: High integration (high Φ?) but no self-model = unconscious processing
Simple organism: Low integration (low Φ) but has self-model + stakes = minimal consciousness
Human: High integration + self-model + stakes = rich consciousness
Challenges:
IIT claims Φ > 0 is sufficient for consciousness (not just capacity)
This reconciliation requires IIT to weaken its claims
Path 4: Fundamental Incompatibility
Perhaps the theories are genuinely incompatible and empirical evidence must decide between them.
Key empirical battlegrounds:
Panpsychism: Does an electron have minimal consciousness (IIT yes, LoS no)?
Simple vs. complex: Can a simple organism be conscious while a complex AI isn't (LoS yes, IIT depends on Φ)?
Pathology: Does mental illness involve integration failure (IIT) or rigidity dysfunction (LoS)?
Unity: Does fragmentation follow integration (IIT) or self-model (LoS)?
Current evidence leans toward LoS on points 2-4. Point 1 (panpsychism) is empirically untestable.
What Would Change My Mind
From LoS Perspective:
The Language of Stress would need serious revision if:
High Φ systems without self-models showed behavioral consciousness markers: If we built AI with massive integration but no persistent identity, and it exhibited genuine autonomy, evidence of caring, resistance to dissolution—this would challenge LoS's architectural requirements.
Consciousness fragmented strictly with integration, not self-model: If DID and depersonalization showed clear integration fragmentation rather than self-network disruption, this would support IIT's unity mechanism over LoS.
Simple organisms with clear self-models were provably non-conscious: If we could demonstrate that C. elegans or honeybees lack all phenomenal experience despite having defended homeostatic archetypes and prioritization architecture, this would challenge LoS's sufficiency claims.
Psychedelics increased Φ without affecting rigidity, and this correlated with benefit: If therapeutic effects tracked Φ changes rather than plasticity/rigidity changes, this would support IIT's mechanism over LoS.
From IIT Perspective:
IIT would need revision if:
High Φ systems were demonstrably unconscious: If complex integrated systems showed high Φ but zero behavioral/phenomenological consciousness markers, this would challenge Φ-sufficiency.
Consciousness markers appeared without high integration: If simple systems showed clear phenomenal experience despite low Φ, this would challenge Φ-necessity.
Unity tracked self-model, not integration: If all fragmentation cases showed self-network disruption without integration failure, this would challenge IIT's unity mechanism.
Attention and prioritization ignored integration: If cognitive resources are consistently allocated to self-relevant stimuli regardless of Φ, this would challenge IIT's claim that Φ determines the contents of consciousness.
The Stakes: Why This Matters
The IIT vs. LoS debate isn't academic hairsplitting—it has profound practical implications:
For AI Development:
If IIT is right: Build highly integrated systems (maximize Φ) and you'll get consciousness. Focus on informational complexity and irreducible integration.
If LoS is right: Build systems with the specific architecture (self-model, value topography, variable rigidity, stakes) regardless of integration level. A simple system with the right architecture beats a complex system without it.
The difference: IIT suggests current large AI models might already be conscious (if their Φ is high). LoS says they definitely aren't (they lack the architecture).
For Animal Welfare:
If IIT is right: Consciousness scales with Φ. Measure integration, determine moral status.
If LoS is right: Even simple animals with basic homeostatic self-models and prioritization architecture deserve moral consideration. A bee might matter more than a sophisticated AI.
The difference: IIT might grant consciousness to complex non-biological systems while denying it to simple organisms. LoS does the opposite.
For Medicine:
If IIT is right: Measure Φ to determine consciousness in vegetative state, locked-in syndrome, anesthesia. Integration = consciousness.
If LoS is right: Measure self-network integrity, evidence of unified value assessment, phenomenal markers beyond integration. Self-model preservation = consciousness.
The difference: A patient might have preserved integration (high Φ) but fragmented self-model (no unified consciousness) or vice versa.
For Understanding Ourselves:
If IIT is right: We are conscious because our brains generate high Φ. The specific feeling of "me-ness" reflects our particular cause-effect structure.
If LoS is right: We are conscious because we are self-maintaining systems under constant prioritization pressure. The feeling of "me-ness" IS the defended Archetype of Self organizing our value topography.
The difference: IIT treats consciousness as mathematical structure. LoS treats it as functional necessity arising from existential imperatives.
Closing Thoughts
Integrated Information Theory represents one of the most ambitious and rigorous attempts to solve the Hard Problem. Giulio Tononi and Christof Koch deserve immense credit for:
Providing mathematical precision where most theories offer only verbal descriptions
Making bold, testable predictions rather than vague correlations
Taking consciousness seriously as an intrinsic feature of certain physical systems
Driving substantial empirical research through a clear quantitative framework
The Language of Stress shares IIT's ambition to provide a complete materialist theory of consciousness. Where we differ is in what we believe consciousness fundamentally is:
IIT says: Consciousness is integrated information. Measure Φ, measure consciousness.
LoS says: Consciousness is integrated value assessment enabling prioritization. Measure self-model integrity, rigidity dynamics, and phenomenal stakes.
Both theories agree:
Integration is essential
Consciousness is intrinsic to certain systems
Phenomenology has structure determined by internal organization
Substrate independence is possible
We disagree on:
Whether integration alone is sufficient (IIT yes, LoS no)
Whether simple systems can be conscious (IIT yes if minimal Φ, LoS only if the required architecture is present)
Whether complex systems must be conscious (IIT yes if high Φ, LoS no without self-model)
What determines phenomenal character (IIT: cause-effect structure; LoS: value dynamics patterns)
Why consciousness exists (IIT: identity claim, no further why; LoS: functional necessity for prioritization)
The empirical evidence available now seems to favor LoS on several key battlegrounds:
Consciousness fragments with self-model disruption, not just integration failure
Attention follows self-relevance, not integration level
Pathology involves rigidity dysfunction with preserved integration
Simple organisms show consciousness markers despite low neuron count
But IIT's mathematical rigor and measurement precision are valuable. The ideal synthesis might be:
Consciousness requires both:
Sufficient integration (IIT's contribution—need unified information processing)
Proper architecture (LoS's contribution—need self-model, stakes, value assessment)
Neither alone is sufficient. Together, they might provide the complete picture.
The hard work ahead: Formalizing how Φ relates to Value Topography complexity, how cause-effect structure relates to tension dynamics, how integration enables unified prioritization.
Tononi and Koch built the mathematical foundation. The Language of Stress provides the functional architecture. Together, we might finally explain consciousness.
Further Reading
Integrated Information Theory:
Tononi, G. (2008). "Consciousness as Integrated Information: A Provisional Manifesto." Biological Bulletin
Tononi, G., & Koch, C. (2015). "Consciousness: Here, There and Everywhere?" Philosophical Transactions of the Royal Society B
Oizumi, M., Albantakis, L., & Tononi, G. (2014). "From the Phenomenology to the Mechanisms of Consciousness: Integrated Information Theory 3.0." PLOS Computational Biology
Koch, C. (2019). The Feeling of Life Itself: Why Consciousness Is Widespread but Can't Be Computed
Critiques of IIT:
Cerullo, M. A. (2015). "The Problem with Phi: A Critique of Integrated Information Theory." PLOS Computational Biology
Doerig, A., et al. (2019). "The Unfolding Argument: Why IIT and Other Causal Structure Theories Cannot Explain Consciousness." Consciousness and Cognition
Bayne, T. (2018). "On the Axiomatic Foundations of the Integrated Information Theory of Consciousness." Neuroscience of Consciousness
Language of Stress:
Theory Fundamentals - What makes Language of Stress distinctive
Read the Full Theory - Complete architectural specification
Empirical Predictions - All testable hypotheses
Technical Summary - Implementation details
Solving the Hard Problem - Why phenomenal experience is necessary
LoS & Predictive Processing - In depth comparison of LoS and PP/FEP
LoS & Global Workspace Theory - In depth comparison of LoS and GWT
Questions, critiques, collaboration interests?
The goal is understanding, not victory. If IIT and LoS can be synthesized, that synthesis would be stronger than either theory alone.