
Part II: The Game Theory of Deception

Chapter 3: Information Asymmetry

A Formal Analysis of Knowledge States in The Traitors

~5,000 words

Abstract

This chapter develops a formal framework for understanding information distribution in The Traitors. I model the fundamental asymmetry between Traitors (who possess complete faction information) and Faithfuls (who operate under uncertainty), analyse the propagation of signals and noise through social interactions, examine information cascade dynamics at the Round Table, and confront the paradox of "perfect play" in a game where behavioural authenticity is both required and impossible. Mathematical models for suspicion updating and accusation probability are presented, building toward the computational implementations detailed in Part IV.

3.1 Introduction: The Architecture of Uncertainty

The Traitors derives its dramatic and strategic complexity from a carefully constructed information architecture. Unlike symmetric games where all players share the same knowledge state, this format creates deliberate asymmetries that drive every interaction.

The fundamental structure can be characterised as follows:

Complete Information Holders:

  • The Host (knows all roles, cannot intervene)
  • Production (knows all roles, controls editing)
  • The Audience (knows all roles via confessionals and Conclave footage)
  • Each Traitor (knows own role and fellow Traitors' identities)

Incomplete Information Holders:

  • Each Faithful (knows only own role: Faithful)

This asymmetry creates the core tension: Traitors must simulate ignorance they don't possess, while Faithfuls must detect deception from behavioural signals that Traitors work to suppress. This dynamic is modelled computationally in the Emotion and Deception Engine.

3.2 Formal Model of Information States

3.2.1 Knowledge Representation

Let us define the formal structures:

Player Set: P = {p_1, p_2, ..., p_n} where n is typically 20-24

Role Function: R: P → {Traitor, Faithful}

Traitor Set: T = {p ∈ P : R(p) = Traitor}

Faithful Set: F = {p ∈ P : R(p) = Faithful}

Knowledge State for Player i: K_i represents everything player i knows with certainty

For Traitors:

K_i = {R(i) = Traitor} ∪ {R(j) : j ∈ T}

For Faithfuls:

K_i = {R(i) = Faithful}

Go Implementation:

// Role represents a player's faction
type Role int

const (
    RoleFaithful Role = iota
    RoleTraitor
    RoleRecruitedTraitor
)

// PlayerID is a unique identifier for each player
type PlayerID string

// Player represents a contestant in the game
type Player struct {
    ID   PlayerID
    Name string
    Role Role
}

// GameState holds the complete game information
type GameState struct {
    Players  map[PlayerID]*Player
    Traitors []PlayerID
    Faithfuls []PlayerID
}

// KnowledgeState represents what a player knows with certainty
type KnowledgeState struct {
    OwnRole        Role
    KnownTraitors  []PlayerID  // Empty for Faithfuls, populated for Traitors
    KnownFaithfuls []PlayerID  // Only populated for Traitors (everyone not in KnownTraitors)
}

// GetKnowledgeState returns the certain knowledge for a player based on their role
func (gs *GameState) GetKnowledgeState(playerID PlayerID) *KnowledgeState {
    player := gs.Players[playerID]
    ks := &KnowledgeState{
        OwnRole: player.Role,
    }

    if player.Role == RoleTraitor || player.Role == RoleRecruitedTraitor {
        // Traitors know all Traitor identities
        ks.KnownTraitors = make([]PlayerID, len(gs.Traitors))
        copy(ks.KnownTraitors, gs.Traitors)

        // Traitors also know all Faithfuls (by elimination)
        ks.KnownFaithfuls = make([]PlayerID, len(gs.Faithfuls))
        copy(ks.KnownFaithfuls, gs.Faithfuls)
    }
    // Faithfuls have empty KnownTraitors and KnownFaithfuls

    return ks
}

3.2.2 Belief States

Beyond certain knowledge, players maintain probabilistic beliefs about other players:

Belief Function: B_i: P → [0,1]

Where B_i(j) represents player i's estimated probability that player j is a Traitor.

Initial Beliefs (before any gameplay): since each player knows their own role, the uniform prior over the other |P| - 1 players is

B_i(j) = |T| / (|P| - 1) for all j ≠ i

If there are 3 Traitors among 22 players:

B_i(j) = 3/21 ≈ 0.143

For Faithfuls, their own probability is 0:

B_i(i) = 0 (if R(i) = Faithful)

For Traitors about fellow Traitors:

B_i(j) = 1.0 (if R(i) = Traitor and j ∈ T)
B_i(j) = 0.0 (if R(i) = Traitor and j ∈ F)

Go Implementation:

// BeliefState tracks a player's probabilistic beliefs about all other players
type BeliefState struct {
    PlayerID     PlayerID
    Beliefs      map[PlayerID]float64  // P(player is Traitor)
    TotalPlayers int
    TraitorCount int
}

// NewBeliefState creates initial beliefs for a player
func NewBeliefState(playerID PlayerID, allPlayers []PlayerID, traitorCount int, ownRole Role, knownTraitors []PlayerID) *BeliefState {
    bs := &BeliefState{
        PlayerID:     playerID,
        Beliefs:      make(map[PlayerID]float64),
        TotalPlayers: len(allPlayers),
        TraitorCount: traitorCount,
    }

    // Base prior: |T| / (|P| - 1), since each player knows their own role
    basePrior := float64(traitorCount) / float64(len(allPlayers)-1)

    for _, pid := range allPlayers {
        if pid == playerID {
            // Own probability is 0 (player knows their own role)
            bs.Beliefs[pid] = 0.0
            continue
        }

        if ownRole == RoleTraitor || ownRole == RoleRecruitedTraitor {
            // Traitors have perfect knowledge
            isTraitor := false
            for _, tid := range knownTraitors {
                if tid == pid {
                    isTraitor = true
                    break
                }
            }
            if isTraitor {
                bs.Beliefs[pid] = 1.0
            } else {
                bs.Beliefs[pid] = 0.0
            }
        } else {
            // Faithfuls use base prior
            bs.Beliefs[pid] = basePrior
        }
    }

    return bs
}

// GetBelief returns the probability that target is a Traitor
func (bs *BeliefState) GetBelief(target PlayerID) float64 {
    if belief, exists := bs.Beliefs[target]; exists {
        return belief
    }
    return 0.0
}

// UpdateBelief modifies belief based on new evidence
func (bs *BeliefState) UpdateBelief(target PlayerID, evidenceWeight float64, direction int) {
    current := bs.Beliefs[target]
    delta := evidenceWeight * float64(direction)  // direction: +1 more suspicious, -1 less suspicious
    bs.Beliefs[target] = clamp(current+delta, 0.0, 1.0)
}

func clamp(value, min, max float64) float64 {
    if value < min {
        return min
    }
    if value > max {
        return max
    }
    return value
}

3.2.3 The Information Asymmetry Gap

The core asymmetry can be quantified:

Faithful Uncertainty: A Faithful must distinguish |T| Traitors from |P| - 1 candidates

Uncertainty_Faithful = H(R | K_Faithful) = -Σ_{j≠i} [p_j × log₂(p_j) + (1 - p_j) × log₂(1 - p_j)], where p_j = P(R(j) = Traitor)

Traitor Uncertainty: A Traitor knows all roles with certainty

Uncertainty_Traitor = 0

Go Implementation:

import "math"

// InformationState quantifies knowledge uncertainty
type InformationState struct {
    Entropy     float64  // Shannon entropy in bits
    Uncertainty float64  // Normalised uncertainty [0,1]
}

// CalculateFaithfulUncertainty computes the entropy for a Faithful player
// H(R | K_Faithful) = -Σ [p_j × log₂(p_j) + (1-p_j) × log₂(1-p_j)]
func CalculateFaithfulUncertainty(totalPlayers, traitorCount int) *InformationState {
    if traitorCount == 0 || totalPlayers <= 1 {
        return &InformationState{Entropy: 0, Uncertainty: 0}
    }

    // For each unknown player, probability they are a Traitor
    // Faithful knows their own role, so |P| - 1 candidates for |T| Traitors
    candidates := totalPlayers - 1
    pTraitor := float64(traitorCount) / float64(candidates)
    pFaithful := 1.0 - pTraitor

    // Shannon entropy for binary classification per player
    var entropy float64
    if pTraitor > 0 && pTraitor < 1 {
        entropy = -(pTraitor*math.Log2(pTraitor) + pFaithful*math.Log2(pFaithful))
    }

    // Total entropy across all unknown players
    totalEntropy := entropy * float64(candidates)

    // Maximum possible entropy: one bit per unknown player
    // (binary entropy peaks at 1 bit when pTraitor = 0.5)
    maxEntropy := float64(candidates)

    return &InformationState{
        Entropy:     totalEntropy,
        Uncertainty: totalEntropy / maxEntropy,
    }
}

// CalculateTraitorUncertainty returns zero uncertainty (perfect knowledge)
func CalculateTraitorUncertainty() *InformationState {
    return &InformationState{
        Entropy:     0.0,
        Uncertainty: 0.0,
    }
}

// InformationAsymmetryGap calculates the advantage Traitors have
func InformationAsymmetryGap(totalPlayers, traitorCount int) float64 {
    faithfulUncertainty := CalculateFaithfulUncertainty(totalPlayers, traitorCount)
    traitorUncertainty := CalculateTraitorUncertainty()
    return faithfulUncertainty.Entropy - traitorUncertainty.Entropy
}

This entropy gap is the source of Traitor advantage: they never waste cognitive resources on faction identification and can focus entirely on performance and manipulation.

3.3 Signal and Noise: Behavioural Information

3.3.1 What Constitutes a "Tell"?

Players attempt to infer roles from observable behaviours. Potential signals include:

Verbal Signals:

  • Hesitation in accusations
  • Over-explanation when defending
  • Deflection patterns
  • Inconsistency across conversations
  • Premature knowledge display

Non-Verbal Signals:

  • Eye contact avoidance/excessive contact
  • Micro-expressions during reveals
  • Body positioning in groups
  • Physical reactions to accusations
  • Sleep deprivation effects

Behavioural Signals:

  • Voting patterns across rounds
  • Alliance formation choices
  • Mission performance
  • Reaction timing to events
  • Who speaks to whom and when
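
These observable channels map naturally onto a simple recording structure. The sketch below is illustrative rather than part of the engine code in later sections; it assumes the PlayerID type from Section 3.2.1, and the channel and field names are hypothetical:

// SignalChannel categorises the medium through which a potential tell
// is observed (hypothetical taxonomy; names are illustrative)
type SignalChannel int

const (
    ChannelVerbal      SignalChannel = iota // hesitation, over-explanation, inconsistency
    ChannelNonVerbal                        // eye contact, micro-expressions, posture
    ChannelBehavioural                      // voting patterns, alliances, reaction timing
)

// ObservedTell records a single potential tell against a player,
// reusing the PlayerID type from Section 3.2.1
type ObservedTell struct {
    Observer    PlayerID
    Target      PlayerID
    Channel     SignalChannel
    Description string  // e.g. "hesitated before accusing"
    Strength    float64 // 0.0-1.0: how clearly the tell was observed
}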

3.3.2 The Signal Detection Problem

From signal detection theory, we can frame Faithful decision-making:

Hit: Correctly identifying a Traitor (True Positive)

Miss: Failing to identify a Traitor (False Negative)

False Alarm: Accusing an innocent Faithful (False Positive)

Correct Rejection: Not accusing a Faithful (True Negative)

The challenge is that behavioural signals have poor discriminative power:

Behaviour                        P(Behaviour | Traitor)   P(Behaviour | Faithful)
Nervous during accusations       0.70                     0.50
Quiet in large groups            0.40                     0.30
Over-explains when questioned    0.60                     0.40
Votes with majority              0.80                     0.75

The low differential between conditional probabilities means even correctly observed behaviours provide weak evidence.
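
To make this concrete, each row of the table can be pushed through a single Bayes update. A minimal sketch using the illustrative probabilities above and the 3/21 prior from Section 3.2.2; DemonstrateWeakEvidence is a hypothetical helper, not part of the engine:

import "fmt"

// BehaviourEvidence holds the conditional probabilities for one behaviour
type BehaviourEvidence struct {
    Name           string
    PGivenTraitor  float64 // P(Behaviour | Traitor)
    PGivenFaithful float64 // P(Behaviour | Faithful)
}

// PosteriorAfterBehaviour applies one Bayes update to a prior P(Traitor)
func PosteriorAfterBehaviour(prior float64, ev BehaviourEvidence) float64 {
    num := ev.PGivenTraitor * prior
    return num / (num + ev.PGivenFaithful*(1-prior))
}

// DemonstrateWeakEvidence prints the likelihood ratio and posterior for
// two rows of the table above, starting from the 3/21 ≈ 0.143 prior
func DemonstrateWeakEvidence() {
    prior := 3.0 / 21.0
    for _, b := range []BehaviourEvidence{
        {"Nervous during accusations", 0.70, 0.50},
        {"Votes with majority", 0.80, 0.75},
    } {
        fmt.Printf("%-28s LR=%.2f posterior=%.3f\n",
            b.Name, b.PGivenTraitor/b.PGivenFaithful,
            PosteriorAfterBehaviour(prior, b))
    }
    // "Nervous" (LR 1.40) lifts the posterior only to ≈ 0.189;
    // "votes with majority" (LR 1.07) barely moves it, to ≈ 0.151.
}

Even the strongest behavioural signal in the table shifts a rational observer's belief by only a few percentage points; conviction-level posteriors require many compounding observations, which is exactly what Section 3.6.3 models.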

3.3.3 The "Quiet Player" Problem

A recurring pattern across all versions: quiet players attract suspicion disproportionate to any evidence.

Why Quiet Players Are Suspected:

  1. Less behavioural data available for analysis
  2. "Hiding something" heuristic activation
  3. Failure to vocally demonstrate Faithful commitment
  4. Reduced social alliance protection

Why This Is Unreliable:

  1. Quiet personality is consistent across roles
  2. Traitors are often vocally active (better cover)
  3. Loudness correlates with social dominance, not honesty
  4. Selection bias: producers may cast quiet people as Traitors more often

Empirical Pattern: Across analysed seasons, quiet Faithfuls are banished at higher rates than their representation would predict, while quiet Traitors survive longer than vocal ones. This observation informs the strategic archetypes analysis.

3.3.4 Over-Explanation as Deception Indicator

The "over-explanation" heuristic has stronger validity:

Mechanism: When defending against accurate accusations, Traitors cannot simply deny; they must construct alternative explanations. This construction process produces:

  • Longer responses than necessary
  • Excessive detail irrelevant to the accusation
  • Preemptive objections to follow-up questions
  • Rehearsed-sounding speech patterns

Calibration Problem: Some Faithfuls naturally over-explain due to:

  • Anxiety about false accusation
  • High verbal intelligence (more words available)
  • Cultural communication norms (explored in International Variations)
  • Previous experience being disbelieved

Weighting: Over-explanation should be weighted at ~0.18 in suspicion calculations, significant but not determinative.
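
In the suspicion-update model developed in Section 3.6, this calibration slots in as one more evidence type. A minimal sketch, assuming the EvidenceType and EvidenceWeights definitions from Section 3.6.1; the EvidenceOverExplanation name is hypothetical:

// EvidenceOverExplanation is a hypothetical extension of the EvidenceType
// enumeration in Section 3.6.1 (value chosen outside its iota range)
const EvidenceOverExplanation EvidenceType = 100

func init() {
    // ~0.18: stronger than gut feeling (0.10) or physical tells (0.08),
    // weaker than inconsistent statements (0.25), reflecting the
    // calibration problem described above
    EvidenceWeights[EvidenceOverExplanation] = 0.18
}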

3.4 Information Cascades at the Round Table

3.4.1 Cascade Formation

An information cascade occurs when individuals sequentially make decisions by observing previous choices, potentially leading to:

  • Rational herding (following others despite private information)
  • Fragile consensus (easily reversed by new information)
  • Error amplification (initial mistakes propagate)

Round Table Cascade Dynamics:

  1. First Mover: Player who speaks first has outsized influence
  2. Confirmation Gathering: Subsequent speakers often validate rather than challenge
  3. Bandwagon Acceleration: As accusation mass builds, opposing voices fall silent
  4. Vote Lock-in: By voting time, deviation from consensus is costly

3.4.2 Mathematical Model of Cascade

Let's model the probability of player i voting for target X:

Initial private signal: s_i ∈ {suspicious, not suspicious}

Prior belief: P(X is Traitor) = π

Observed votes before i: V = (v_1, v_2, ..., v_{i-1})

Updated belief using Bayes' rule:

P(X is Traitor | s_i, V) = P(s_i, V | Traitor) × π / P(s_i, V)

Go Implementation:

// Signal represents a player's private observation
type Signal int

const (
    SignalNotSuspicious Signal = iota
    SignalSuspicious
)

// Vote represents a vote at the Round Table
type Vote struct {
    Voter  PlayerID
    Target PlayerID
}

// CascadeModel implements Bayesian belief updating for vote cascades
type CascadeModel struct {
    Prior            float64   // Initial P(X is Traitor)
    SignalAccuracy   float64   // P(suspicious signal | Traitor)
    SignalFalseAlarm float64   // P(suspicious signal | Faithful)
}

// NewCascadeModel creates a cascade model with standard parameters
func NewCascadeModel(traitorCount, totalPlayers int) *CascadeModel {
    return &CascadeModel{
        Prior:            float64(traitorCount) / float64(totalPlayers),
        SignalAccuracy:   0.7,   // Probability of detecting a real Traitor
        SignalFalseAlarm: 0.3,   // Probability of false suspicion on Faithful
    }
}

// UpdateBeliefBayes calculates P(X is Traitor | signal, observed_votes)
func (cm *CascadeModel) UpdateBeliefBayes(
    privateSignal Signal,
    observedVotes []Vote,
    target PlayerID,
) float64 {
    // Count votes for and against target
    votesFor := 0
    for _, v := range observedVotes {
        if v.Target == target {
            votesFor++
        }
    }

    // Start with prior
    posterior := cm.Prior

    // Update based on observed votes; each vote is weak evidence,
    // applied below as a likelihood-ratio Bayes update
    for i := 0; i < votesFor; i++ {
        // Bayes update: P(T|vote) = P(vote|T) * P(T) / P(vote)
        pVoteGivenTraitor := 0.6      // Probability someone votes for actual Traitor
        pVoteGivenFaithful := 0.4     // Probability someone votes for Faithful
        likelihood := pVoteGivenTraitor / pVoteGivenFaithful
        posterior = (posterior * likelihood) / ((posterior * likelihood) + (1 - posterior))
    }

    // Update based on private signal
    var signalLikelihood float64
    if privateSignal == SignalSuspicious {
        signalLikelihood = cm.SignalAccuracy / cm.SignalFalseAlarm
    } else {
        signalLikelihood = (1 - cm.SignalAccuracy) / (1 - cm.SignalFalseAlarm)
    }
    posterior = (posterior * signalLikelihood) / ((posterior * signalLikelihood) + (1 - posterior))

    return clamp(posterior, 0.0, 1.0)
}

// IsCascadeFormed returns true if cascade threshold is reached
func (cm *CascadeModel) IsCascadeFormed(consecutiveVotes int) bool {
    // Cascade typically forms after 3-4 consecutive votes in same direction
    return consecutiveVotes >= 3
}

// CanPrivateSignalReverse checks if a private signal can still affect the outcome
func (cm *CascadeModel) CanPrivateSignalReverse(currentPosterior float64) bool {
    // If posterior is too extreme, private signal cannot reverse it
    const cascadeThreshold = 0.85
    return currentPosterior < cascadeThreshold && currentPosterior > (1-cascadeThreshold)
}

Cascade condition: a cascade has formed when P(X is Traitor | V) becomes so extreme that the private signal s_i can no longer reverse the posterior.

Cascade threshold: typically around 3-4 consecutive votes in the same direction.

3.4.3 Breaking Cascades

Cascades can be broken by:

  1. Strong contradictory evidence: "I was with X at the time of the mission failure"
  2. Credible alternative: "Y actually did the thing you're accusing X of"
  3. Status-based intervention: High-trust player vouching for accused
  4. Procedural objection: "Let's hear from X before voting"

Empirical observation: Cascade-breaking attempts succeed approximately 20-30% of the time when attempted in mid-cascade; success rate drops below 10% once more than 60% of voters have committed.
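
These bands can be folded into the cascade model as a rough success curve. This is a sketch only: the linear interpolation between the ~30% ceiling and the sub-10% floor is an assumption layered on the observed figures, the function name is illustrative, and the clamp helper from Section 3.2.2 is reused:

// CascadeBreakSuccessProbability estimates the chance that an intervention
// reverses a forming cascade, given the fraction of voters already
// committed. An illustrative fit to the observed bands, not a fitted model
func CascadeBreakSuccessProbability(committedFraction float64) float64 {
    f := clamp(committedFraction, 0.0, 1.0)
    // Linear decline from ~30% with no committed voters, reaching 10%
    // at 60% committed and flooring at 8% beyond that
    return clamp(0.30-f/3.0, 0.08, 0.30)
}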

3.4.4 The Herd Mentality Problem

Transcript analysis shows explicit recognition of herd dynamics:

"It's herd mentality."
"Everyone just jumped on the bandwagon."
"There's no way 95% of people thought [Name] straight away."

Players recognise cascade dynamics but struggle to resist them because:

  • Voting against consensus creates personal suspicion
  • Information about private signals is lost in cascade
  • Social pressure in physical voting environment
  • Time pressure prevents deliberation

3.5 The Paradox of Perfect Play

3.5.1 Why "Acting Normal" Is Impossible

The fundamental paradox: Traitors must behave exactly as they would if they were Faithful, but they know they're not Faithful, which changes their behaviour.

Observation Effect: The act of concealing changes the concealer. A Faithful doesn't think about how a Faithful would act; they simply act. A Traitor must:

  1. Observe their normal behaviour
  2. Model how that behaviour appears to others
  3. Adjust behaviour to match Faithful patterns
  4. Monitor for deviations from adjusted behaviour
  5. Correct deviations in real-time

This cognitive overhead produces subtle differences that skilled observers can detect. The Cognitive Memory Architecture chapter explores how AI systems simulate this cognitive load.

3.5.2 Baseline Establishment

Early game (Episodes 1-2) serves as baseline establishment:

Before Selection: All players behave authentically (no roles assigned yet)

The Problem: Selection occurs before substantial baseline exists, so:

  • Traitors must simulate pre-selection behaviour
  • Faithfuls develop expectations based on limited observation
  • Behavioural shifts are attributed to selection without knowing who was selected

3.5.3 The Behavioural Shift Detection Problem

What Shifts Mean:

  • Traitor becoming more cautious (hiding guilt)
  • Faithful becoming more paranoid (responding to atmosphere)
  • Anyone adapting to game dynamics (learning)
  • Anyone responding to eliminations (grief, relief)

The Discrimination Challenge: Multiple causes produce similar effects:

Observable Shift     Possible Traitor Cause     Possible Faithful Cause
More quiet           Hiding guilt               Increased anxiety
More vocal           Overcompensation           Stepping up leadership
Changed alliances    Strategic repositioning    Trust betrayed
Emotional display    Performance                Genuine stress

3.5.4 The "Perfect Traitor" Impossibility Theorem

Proposition: No strategy exists that allows a Traitor to be behaviourally indistinguishable from all Faithfuls.

Proof sketch:

  1. Faithfuls vary in baseline behaviour (personality differences)
  2. A Traitor can mimic at most one Faithful's baseline
  3. For n Faithfuls, a Traitor has 1/n probability of matching any given observer's expectation
  4. Different observers expect different behaviours from the Traitor's "type"
  5. Therefore, the Traitor must deviate from at least some expectations
  6. These deviations are detectable

Corollary: Success requires managing which deviations are detected and how they're interpreted, not eliminating deviations entirely.
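
A toy quantification of steps 3-5 makes the corollary vivid. Assume, crudely, that each observer independently expects one of n equally likely behavioural baselines; both the independence and the uniformity are deliberate simplifications:

import "math"

// ProbabilityNoDetectedDeviation is a toy model of the proof sketch: if a
// Traitor matches any single observer's expectation with probability 1/n,
// and k observers judge independently, the chance of satisfying everyone
// collapses geometrically
func ProbabilityNoDetectedDeviation(nBaselines, kObservers int) float64 {
    return math.Pow(1.0/float64(nBaselines), float64(kObservers))
}

// Example: with n = 5 behavioural types and k = 10 observers,
// P ≈ 0.2^10 ≈ 1e-7, so perfect indistinguishability is effectively
// impossible and managing interpretation is the only viable strategy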

3.6 Bayesian Suspicion Updating

3.6.1 The Suspicion Update Function

Let Si(j, t) represent player i's suspicion of player j at time t.

Update rule:

S_i(j, t+1) = S_i(j, t) + α × Evidence_weight × Observation_strength

Where:

  • α = learning rate (how quickly beliefs change)
  • Evidence_weight = reliability of evidence type
  • Observation_strength = clarity of observation

Go Implementation:

import "sort"

// SuspicionTracker manages suspicion levels for all players
type SuspicionTracker struct {
    PlayerID     PlayerID
    Suspicions   map[PlayerID]float64
    LearningRate float64  // α: how quickly beliefs change
}

// EvidenceType categorises different forms of evidence
type EvidenceType int

const (
    EvidenceVotedForBanishedFaithful EvidenceType = iota
    EvidenceInconsistentStatements
    EvidenceEmotionalMismatch
    EvidenceMissionBehaviour
    EvidenceAlliancePattern
    EvidenceGutFeeling
    EvidencePhysicalTells
)

// EvidenceWeights maps evidence types to their reliability weights
var EvidenceWeights = map[EvidenceType]float64{
    EvidenceVotedForBanishedFaithful: 0.30,
    EvidenceInconsistentStatements:   0.25,
    EvidenceEmotionalMismatch:        0.25,
    EvidenceMissionBehaviour:         0.20,
    EvidenceAlliancePattern:          0.15,
    EvidenceGutFeeling:               0.10,
    EvidencePhysicalTells:            0.08,
}

// Evidence represents an observation about a player
type Evidence struct {
    Type              EvidenceType
    Target            PlayerID
    ObservationStrength float64  // 0.0 to 1.0, clarity of observation
    IsIncriminating   bool       // true = increases suspicion, false = decreases
}

// NewSuspicionTracker creates a tracker with default learning rate
func NewSuspicionTracker(playerID PlayerID, allPlayers []PlayerID, basePrior float64) *SuspicionTracker {
    st := &SuspicionTracker{
        PlayerID:     playerID,
        Suspicions:   make(map[PlayerID]float64),
        LearningRate: 0.8,  // Default α
    }

    for _, pid := range allPlayers {
        if pid != playerID {
            st.Suspicions[pid] = basePrior
        }
    }

    return st
}

// UpdateSuspicion applies the suspicion update formula
// S_i(j, t+1) = S_i(j, t) + α × Evidence_weight × Observation_strength
func (st *SuspicionTracker) UpdateSuspicion(evidence Evidence) {
    weight := EvidenceWeights[evidence.Type]
    delta := st.LearningRate * weight * evidence.ObservationStrength

    if !evidence.IsIncriminating {
        delta = -delta  // Exculpatory evidence decreases suspicion
    }

    current := st.Suspicions[evidence.Target]
    st.Suspicions[evidence.Target] = clamp(current+delta, 0.0, 1.0)
}

// GetSuspicion returns current suspicion level for a player
func (st *SuspicionTracker) GetSuspicion(target PlayerID) float64 {
    return st.Suspicions[target]
}

// GetMostSuspicious returns players sorted by suspicion level
func (st *SuspicionTracker) GetMostSuspicious(n int) []PlayerID {
    type playerSuspicion struct {
        ID        PlayerID
        Suspicion float64
    }

    sorted := make([]playerSuspicion, 0, len(st.Suspicions))
    for id, susp := range st.Suspicions {
        sorted = append(sorted, playerSuspicion{id, susp})
    }

    // Sort by suspicion descending
    sort.Slice(sorted, func(i, j int) bool {
        return sorted[i].Suspicion > sorted[j].Suspicion
    })

    result := make([]PlayerID, 0, n)
    for i := 0; i < n && i < len(sorted); i++ {
        result = append(result, sorted[i].ID)
    }
    return result
}

3.6.2 Evidence Types and Weights

Evidence Type                   Weight   Example
Voting for banished Faithful    0.30     "You voted for Tom, Tom was Faithful"
Inconsistent statements         0.25     "Yesterday you said X, today you say Y"
Emotional mismatch              0.25     "You didn't seem sad about the murder"
Mission behaviour               0.20     "You seemed to be sabotaging"
Alliance pattern                0.15     "You're close to two banished Traitors"
Gut feeling                     0.10     "Something just feels off"
Physical "tells"                0.08     "You wouldn't look me in the eye"

3.6.3 Decay and Reinforcement

Suspicion Decay: Without new evidence, suspicion decreases:

S_i(j, t+1) = S_i(j, t) × decay_factor

Where decay_factor ≈ 0.95 per phase

Reinforcement: Multiple weak signals compound:

Compound_suspicion = 1 - ∏(1 - individual_signals)

Go Implementation:

// DecayAndReinforcement handles temporal dynamics of suspicion
type DecayAndReinforcement struct {
    DecayFactor float64  // Per-phase decay (typically ~0.95)
}

// NewDecayAndReinforcement creates with default decay factor
func NewDecayAndReinforcement() *DecayAndReinforcement {
    return &DecayAndReinforcement{
        DecayFactor: 0.95,
    }
}

// ApplyDecay reduces suspicion over time without new evidence
// S_i(j, t+1) = S_i(j, t) × decay_factor
func (dr *DecayAndReinforcement) ApplyDecay(suspicions map[PlayerID]float64, basePrior float64) {
    for pid, suspicion := range suspicions {
        // Decay toward base prior, not toward zero
        decayed := suspicion * dr.DecayFactor
        if decayed < basePrior {
            decayed = basePrior  // Don't decay below prior
        }
        suspicions[pid] = decayed
    }
}

// CalculateCompoundSuspicion combines multiple weak signals
// Compound_suspicion = 1 - ∏(1 - individual_signals)
func (dr *DecayAndReinforcement) CalculateCompoundSuspicion(individualSignals []float64) float64 {
    if len(individualSignals) == 0 {
        return 0.0
    }

    // Product of (1 - signal) for all signals
    product := 1.0
    for _, signal := range individualSignals {
        product *= (1.0 - signal)
    }

    // Compound suspicion
    return 1.0 - product
}

// ApplyReinforcement increases the effect of repeated weak signals
func (dr *DecayAndReinforcement) ApplyReinforcement(
    currentSuspicion float64,
    newSignal float64,
) float64 {
    // Higher existing suspicion amplifies new signals
    reinforcementFactor := 1.0 + (currentSuspicion * 0.5)
    return clamp(currentSuspicion + (newSignal * reinforcementFactor), 0.0, 1.0)
}

3.6.4 Trust Erosion Functions

Trust and suspicion are not simple inverses. Both can decrease simultaneously (uncertainty) or both can be high (conflicted feelings).

Trust Function:

T_i(j, t) = base_trust × alliance_factor × time_factor × betrayal_penalty

Relationship Between Trust and Suspicion:

If T_i(j) high and S_i(j) low: Clear Faithful read
If T_i(j) low and S_i(j) high: Clear Traitor read
If T_i(j) high and S_i(j) high: Conflicted (dangerous)
If T_i(j) low and S_i(j) low: Unknown entity

Go Implementation:

// TrustTracker manages trust relationships between players
type TrustTracker struct {
    PlayerID       PlayerID
    TrustLevels    map[PlayerID]float64
    BaseTrust      float64
    BetrayalPenalty float64
}

// RelationshipRead represents the interpretation of trust/suspicion combination
type RelationshipRead int

const (
    ReadClearFaithful RelationshipRead = iota  // High trust, low suspicion
    ReadClearTraitor                            // Low trust, high suspicion
    ReadConflicted                              // High trust AND high suspicion (dangerous)
    ReadUnknown                                 // Low trust AND low suspicion
)

// NewTrustTracker creates a trust tracker with initial neutral trust
func NewTrustTracker(playerID PlayerID, allPlayers []PlayerID) *TrustTracker {
    tt := &TrustTracker{
        PlayerID:        playerID,
        TrustLevels:     make(map[PlayerID]float64),
        BaseTrust:       0.5,   // Neutral starting trust
        BetrayalPenalty: 0.4,   // Trust loss on betrayal
    }

    for _, pid := range allPlayers {
        if pid != playerID {
            tt.TrustLevels[pid] = tt.BaseTrust
        }
    }

    return tt
}

// CalculateTrust applies the trust function
// T_i(j, t) = base_trust × alliance_factor × time_factor × betrayal_penalty
func (tt *TrustTracker) CalculateTrust(
    target PlayerID,
    allianceFactor float64,   // 1.0-2.0 based on alliance strength
    timeFactor float64,       // 1.0-1.5 based on time spent together
    wasBetrayed bool,
) float64 {
    betrayalMultiplier := 1.0
    if wasBetrayed {
        betrayalMultiplier = 1.0 - tt.BetrayalPenalty
    }

    trust := tt.BaseTrust * allianceFactor * timeFactor * betrayalMultiplier
    tt.TrustLevels[target] = clamp(trust, 0.0, 1.0)
    return tt.TrustLevels[target]
}

// InterpretRelationship determines relationship read from trust and suspicion
func InterpretRelationship(trust, suspicion float64) RelationshipRead {
    const threshold = 0.5

    highTrust := trust >= threshold
    highSuspicion := suspicion >= threshold

    switch {
    case highTrust && !highSuspicion:
        return ReadClearFaithful
    case !highTrust && highSuspicion:
        return ReadClearTraitor
    case highTrust && highSuspicion:
        return ReadConflicted
    default:
        return ReadUnknown
    }
}

// GetRelationshipReadString returns human-readable interpretation
func (r RelationshipRead) String() string {
    switch r {
    case ReadClearFaithful:
        return "Clear Faithful read - safe to trust"
    case ReadClearTraitor:
        return "Clear Traitor read - target for banishment"
    case ReadConflicted:
        return "Conflicted - dangerous situation, proceed carefully"
    case ReadUnknown:
        return "Unknown entity - need more information"
    default:
        return "Unknown"
    }
}

// RelationshipMatrix provides a complete view of all relationships
type RelationshipMatrix struct {
    Trust      map[PlayerID]float64
    Suspicion  map[PlayerID]float64
    Reads      map[PlayerID]RelationshipRead
}

// BuildRelationshipMatrix creates a complete relationship analysis
func BuildRelationshipMatrix(
    trustTracker *TrustTracker,
    suspicionTracker *SuspicionTracker,
) *RelationshipMatrix {
    rm := &RelationshipMatrix{
        Trust:     make(map[PlayerID]float64),
        Suspicion: make(map[PlayerID]float64),
        Reads:     make(map[PlayerID]RelationshipRead),
    }

    for pid, trust := range trustTracker.TrustLevels {
        suspicion := suspicionTracker.Suspicions[pid]
        rm.Trust[pid] = trust
        rm.Suspicion[pid] = suspicion
        rm.Reads[pid] = InterpretRelationship(trust, suspicion)
    }

    return rm
}

3.7 Information Revelation Strategies

3.7.1 Faithful Revelation Strategies

Transparency: Share all observations freely

  • Advantage: Builds trust through openness
  • Risk: Traitors learn what you know and adjust

Strategic Holding: Withhold observations until advantageous

  • Advantage: Surprise impact at Round Table
  • Risk: Appears suspicious (why didn't you share earlier?)

Alliance Exclusivity: Share information only within trusted group

  • Advantage: Strengthens alliance bonds
  • Risk: Creates suspicion from outsiders

3.7.2 Traitor Information Management

Selective Truth: Use accurate information to build credibility

  • Example: Accurately identify behavioural patterns in other Faithfuls
  • Establishes reputation as observant and honest

Controlled Leaks: Reveal information that harms other Traitors

  • Example: Subtly cast doubt on struggling Traitor to appear perceptive
  • Creates cover for own concealment

Misdirection: Introduce false patterns for Faithfuls to follow

  • Example: Suggest connection between random Faithfuls
  • Wastes Faithful analytical resources

3.7.3 The Value of Silence

In information-asymmetric games, silence is often undervalued:

For Faithfuls: Silence prevents accidental revelation of deductions

For Traitors: Silence prevents accidental revelation of knowledge

The Chattiness Trap: Both factions tend toward excessive speech, losing information value through:

  • Revealing analytical frameworks
  • Showing who they trust/distrust
  • Demonstrating emotional states
  • Committing to positions that may need revision

3.8 Confession and Revelation Dynamics

3.8.1 The Confessional as Information Channel

The confessional (direct-to-camera speech) creates an interesting information structure:

Known to viewers, unknown to players:

  • Traitor admissions and strategies
  • Faithful suspicions and plans
  • Emotional states otherwise concealed

Effect on gameplay: Players know confessionals exist but not their content. This creates:

  • Awareness that truth will eventually emerge
  • Reputation management for post-game relationships (see The Mathematics of Deception)
  • Performance element (playing to future audience)

3.8.2 Role Revelation at Banishment

The standard reveal ("I am a Faithful" / "I am a Traitor") provides critical information (a renormalisation sketch follows these lists):

Traitor Revealed:

  • Validates accusers' judgment
  • Reveals murder patterns (who did Traitors choose not to kill?)
  • Identifies potential Traitor alliances

Faithful Revealed:

  • Invalidates accusers' judgment
  • Creates guilt and paranoia
  • May rehabilitate previously suspected players
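
Formally, either outcome lets surviving Faithfuls renormalise their base prior: a reveal removes one player from the candidate pool and, if a Traitor, one Traitor from the count. A minimal sketch; the function name is illustrative:

// PriorAfterBanishment recomputes the base prior P(j is Traitor) over the
// players whose roles remain unknown, once a banished player's role is
// revealed. remainingCandidates counts the still-unknown other players
func PriorAfterBanishment(traitorsRemaining, remainingCandidates int) float64 {
    if remainingCandidates <= 0 {
        return 0.0
    }
    return float64(traitorsRemaining) / float64(remainingCandidates)
}

// Example: from 3 Traitors among 21 unknowns (prior ≈ 0.143), banishing a
// revealed Traitor gives 2/20 = 0.10, while banishing a revealed Faithful
// gives 3/20 = 0.15, so suspicion of everyone else rises

Under the no-reveal rules discussed next, this renormalisation is unavailable: players must carry both hypotheses forward.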

3.8.3 The 2024+ No-Reveal Innovation

Modern endgame rules withhold role revelation:

Effects:

  • Removes certainty in final stages
  • Forces pure inference without confirmation
  • Prevents "process of elimination" winning
  • Increases psychological pressure

Strategic Implications:

  • Cannot confirm if banishment decisions were correct
  • Must trust judgments without feedback
  • Final reveal becomes more dramatic

3.9 Conclusion: Information as the Core Resource

The Traitors can be understood as a game about information management rather than deception per se. The asymmetry creates different resource positions:

Traitors: Begin with complete information but must prevent its revelation

Faithfuls: Begin with minimal information but must acquire it through observation

The winner is typically the faction that better manages this resource:

  • Traitors who maintain information asymmetry while appearing transparent
  • Faithfuls who extract genuine signals from noise and resist cascades

Subsequent chapters will explore how this information structure shapes voting dynamics (Chapter 4) and enables distinct strategic approaches (Chapter 5). For computational modelling of these dynamics, see the RAG Architecture chapter.
