Assertion-Based Cognition

A Theory of Machine Cognition for Artificial Agents

How to Build a Brain for AI Agents — From Knowledge Intake to Conscious Recall

Framework Specification · Version 2.0 · March 2026


Abstract

An AI agent without persistent cognition is not an agent. It is a stateless function that produces text. It cannot know who it is talking to, cannot remember what happened yesterday, cannot recognize contradictions in what it has been told, and cannot build understanding over time. It is, in the most precise sense, brainless.

Assertion-Based Cognition (ABC) is a formal theory of machine cognition for artificial agents. It defines not only how agents store and retrieve knowledge, but how they acquire it — how raw conversational signal is transformed into structured, trustworthy, temporally-aware knowledge through a multi-stage cognitive pipeline that mirrors the processing architecture of the biological brain.

This document specifies the complete ABC framework: the assertion primitive, the cognitive intake pipeline (the Thalamus), the asynchronous processing model that preserves real-time conversational performance, the knowledge graph, and the recall architecture. It is intended for platform engineers building agent systems, product leaders evaluating cognitive infrastructure, and researchers working at the intersection of AI memory, knowledge representation, and agent architecture.


Part I: The Problem and the Principles

1. The Problem: Brainless Agents

Large language models are powerful reasoning engines, but they have no brain. Each API call is independent. The model does not know what it said five minutes ago, who it is talking to, or what it has been told to remember. Context windows provide session-level continuity, but they are volatile, bounded, and expensive. When the window closes, everything is gone.

The industry has responded with storage solutions: vector databases for semantic search, Redis for session caching, PostgreSQL for user profiles. These approaches treat agent knowledge as a storage problem. They answer "where do I put the data?" but not the harder questions: What does it mean for an agent to know something? How should knowledge enter the system? How does the agent distinguish between what it knows and what it thinks? How does it detect that new information contradicts old information?

These are not database questions. They are cognitive questions. And they require a cognitive framework — not a better schema.

The fundamental insight of ABC: an agent does not need a memory layer. It needs a brain.

2. The Five Principles

Principle 1: Knowledge Is Asserted, Not Stored

The fundamental unit of knowledge in ABC is an assertion — a structured claim about the world that carries metadata about its own reliability. "Subject S has property P with value V, with confidence C, valid from time T1 to time T2, from source X." This is not a database record. It is a knowledge primitive with epistemological properties built into its structure. The distinction matters: a database record is passive data. An assertion is a claim that the system can reason about, validate, and challenge.

Principle 2: Nothing Is Absolutely True

Every assertion carries a confidence score between 0.0 and 1.0. A human-verified fact might carry 0.95 confidence. An AI inference from conversational context might carry 0.6. A derived assertion inherits the lowest confidence in its derivation chain. This reflects how knowledge actually works — certainty is a spectrum, not a binary. An agent that treats all knowledge as equally reliable is an agent that cannot prioritize, cannot hedge, and cannot be trusted.

Principle 3: Time Is a First-Class Dimension

ABC implements bitemporal semantics. Every assertion tracks two independent timelines: business time (when the fact was true in the real world) and transaction time (when the system recorded it). This separation enables queries that conventional systems cannot express. "What did the agent believe about this user as of last Friday?" is a transaction-time query. "When was this user promoted to VP?" is a business-time query. In any regulated environment, "what did the system know and when did it know it" is a legal question. Bitemporality answers it.

Principle 4: Knowledge Has Provenance

Every assertion records its origin: who or what created it, how it entered the system, and what evidence supports it. Provenance categories include human input, agent inference, system import, and derived (computed from other assertions). Provenance enables policy-driven trust — an enterprise can configure rules like "never act on agent-inferred assertions with confidence below 0.7 unless corroborated by a human-provenance assertion." Without provenance, all knowledge is anonymous. Anonymous knowledge cannot be audited, cannot be trusted, and cannot be governed.

Principle 5: Knowledge Is Immutable

Assertions are never updated or deleted. To change a fact, you supersede it — the original assertion is marked as superseded and a new assertion is created with full history preserved. This append-only model provides a complete audit trail, enables point-in-time reconstruction, and eliminates the class of bugs caused by in-place mutation. The brain does not erase memories. It forms new ones that take precedence.


Part II: The Cognitive Architecture

3. The ABC Triad: Assert, Bond, Contextualize

The framework organizes agent cognition into three operational capabilities that together form the minimum viable cognitive architecture for a persistent AI agent.

3.1 Assert — Knowledge Acquisition

Assertion is the act of committing a claim to the agent's knowledge base. Every assertion is atomic, immutable, and self-describing:

| Field | Type | Purpose |
| --- | --- | --- |
| id | Content hash | Content-addressable identifier. Same content produces the same ID — automatic deduplication. |
| tenant_id | String | Isolation boundary. All operations scoped to tenant. |
| subject | String | The entity this is about (e.g., user:alice). |
| predicate | String | The property being asserted (e.g., role). |
| value | JSON | The claimed value. Flexible structure. |
| confidence | Float [0.0–1.0] | Degree of certainty from source. |
| valid_from | Timestamp | Business time: when this became true. |
| valid_until | Timestamp \| null | Business time: when this stopped being true. Null = current. |
| transaction_t | Timestamp | System time: when this was recorded. |
| provenance | Enum | Origin: human, agent, import, or derived. |
| encoding_depth | Integer [1–3] | How deeply this was processed during intake. |
| is_superseded | Boolean | Whether a newer assertion has replaced this. |
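These fields translate directly into a record type. The sketch below is illustrative, not the normative SubCortex implementation; in particular, the SHA-256-over-canonical-JSON scheme is an assumption about how "same content, same ID" might be realized.

```typescript
import { createHash } from "node:crypto";

type Provenance = "human" | "agent" | "import" | "derived";

interface Assertion {
  id: string;                  // content hash
  tenant_id: string;
  subject: string;             // e.g. "user:alice"
  predicate: string;           // e.g. "role"
  value: unknown;              // flexible JSON structure
  confidence: number;          // 0.0 - 1.0
  valid_from: string;          // business time (ISO 8601)
  valid_until: string | null;  // null = currently true
  transaction_t: string;       // system time (ISO 8601)
  provenance: Provenance;
  encoding_depth: 1 | 2 | 3;
  is_superseded: boolean;
}

// Deterministic ID over the semantic content only: the same claim always
// hashes to the same ID, which is what makes deduplication automatic.
function assertionId(
  a: Pick<Assertion, "tenant_id" | "subject" | "predicate" | "value">
): string {
  const canonical = JSON.stringify([a.tenant_id, a.subject, a.predicate, a.value]);
  return createHash("sha256").update(canonical).digest("hex");
}
```

Note that timestamps and confidence are deliberately excluded from the hash: two mentions of the same fact at different times must collide so the Thalamus can treat the second as reinforcement rather than a new assertion.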

3.2 Bond — Knowledge Relationships

Isolated assertions are insufficient for cognition. The brain does not store facts in isolation — it understands how things connect. Bonds are directional, typed edges between subjects that carry the same temporal and confidence dimensions as assertions:

user:tommy —[direct_report]→ person:sarah-johnson
project:atlas —[has_member]→ user:tommy

Bonds enable graph queries with temporal and confidence filtering applied simultaneously: "Who reported to Sarah in Q3?" "What projects is Tommy on, with confidence above 0.7?" These are questions that flat storage cannot answer and that the Agentic Brain answers natively.
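A sketch of what one such filtered traversal involves, answering "Who reported to Sarah in Q3?" in-memory; the `Bond` shape mirrors the assertion fields above, and the function itself is illustrative rather than a SubCortex API:

```typescript
interface Bond {
  from: string;           // e.g. "user:tommy"
  type: string;           // e.g. "direct_report"
  to: string;             // e.g. "person:sarah-johnson"
  confidence: number;
  valid_from: string;     // ISO 8601 business time
  valid_until: string | null;
}

// A bond matches if it has the right type and target, clears the
// confidence threshold, and its validity interval overlaps the window.
function reportsTo(
  bonds: Bond[],
  manager: string,
  windowStart: string,
  windowEnd: string,
  minConfidence = 0.0
): string[] {
  return bonds
    .filter(
      (b) =>
        b.type === "direct_report" &&
        b.to === manager &&
        b.confidence >= minConfidence &&
        b.valid_from <= windowEnd &&
        (b.valid_until === null || b.valid_until >= windowStart)
    )
    .map((b) => b.from);
}
```

The key design point is that temporal and confidence predicates are applied during traversal, not as a post-filter on results — the same requirement Part IX makes of any compliant implementation.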

3.3 Contextualize — Knowledge Recall

Contextualization is the act of assembling relevant knowledge for a specific moment of interaction. When the agent begins a session, it must answer: What do I know about this person? What is relevant right now? What should I proactively surface?

ABC defines a structured recall operation that returns four categories:

| Category | Cognitive Analog | Example |
| --- | --- | --- |
| Reminders | Prospective memory | "You asked me to remind you about the board meeting." |
| Active Context | Working memory | "Tom is Product Director. His current project is Media Conductor V2." |
| Episodic Memory | Autobiographical recall | "Last Tuesday, Tom mentioned frustration with the deployment pipeline." |
| Relational Context | Social cognition | "Tom reports to Paulette. His team includes three engineers." |

This is where ABC diverges most sharply from vector-database approaches. Semantic search returns documents similar to a query. The Agentic Brain returns structured, confidence-scored, temporally-valid, provenanced knowledge organized by cognitive function. The agent does not search. It recalls.


Part III: The Thalamus — Cognitive Intake

4. Why Intake Is the Hard Problem

Most agent memory systems focus on storage and retrieval. ABC recognizes that these are downstream concerns. If knowledge does not enter the system with quality, no amount of retrieval sophistication matters. Garbage in, garbage out is not just a cliché — it is the failure mode of every agent memory system that treats intake as the calling application's problem.

In the biological brain, information does not arrive and get filed. It passes through multiple processing stages before becoming a stable memory. The thalamus filters sensory input, the hippocampus encodes it in context, the prefrontal cortex cross-references it against existing knowledge. Each stage is a quality gate. The brain spends enormous cognitive resources on intake because the integrity of everything downstream depends on it.

In current agent architectures, the conversational agent — the model optimized for dialogue — is also making epistemological decisions about what constitutes knowledge. This is asking a journalist to simultaneously write the story and be the editor. The result is predictable: inconsistent extraction quality, missed contradictions, no deduplication, no inference, and a knowledge base that degrades over time.

The Thalamus is ABC's answer: a dedicated cognitive intake layer that transforms raw conversational signal into high-quality, validated, enriched assertions.

5. The Thalamus: Architecture and Operations

The Thalamus sits between the source agent (any conversational AI) and the assertion store. It is not optional middleware. It is a core component of the Agentic Brain — the system that converts experience into knowledge.

5.1 The Intake Pipeline

Every candidate assertion submitted by a source agent passes through seven processing stages:

Stage 1: Signal Classification

Is this assertion-worthy? Not everything a user says is knowledge. "Hey how's it going" is not an assertion. "I just got promoted to VP" is. The Thalamus applies a formal signal taxonomy — identity facts, preferences, commitments, events, emotional states, relational claims — and rejects conversational noise before it enters the knowledge base.

Stage 2: Deep Encoding

What does this mean in the context of existing knowledge? Shallow encoding stores "Tom said he's frustrated." Deep encoding links: "Tom is frustrated because the deployment pipeline failed for the third time this sprint, which connects to his stated priority of shipping V2 by end of quarter." The Thalamus has access to the existing knowledge graph and can perform contextual enrichment that the source agent cannot.

Stage 3: Inference Cascade

What can be legitimately derived? "I report to Sarah" should trigger the creation of a bond (user → direct_report → Sarah), an inferred assertion about Sarah's role (manager, confidence 0.6), and potentially a team membership bond. Each inferred assertion carries provenance marked as "derived" with confidence propagated from the source assertion.
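A minimal sketch of this cascade for the "I report to Sarah" case. The shapes are illustrative, and the 0.6 cap on the role inference comes from the example above; a real Thalamus would apply a richer rule set:

```typescript
interface DerivedFact {
  kind: "bond" | "assertion";
  provenance: "derived";
  confidence: number;
  detail: Record<string, string>;
}

// "I report to <manager>" triggers a direct_report bond plus an inferred
// role assertion. Derived confidence never exceeds the source confidence.
function cascadeFromReportsTo(
  subject: string,
  manager: string,
  sourceConfidence: number
): DerivedFact[] {
  return [
    {
      kind: "bond",
      provenance: "derived",
      confidence: sourceConfidence,
      detail: { from: subject, type: "direct_report", to: manager },
    },
    {
      kind: "assertion",
      provenance: "derived",
      confidence: Math.min(sourceConfidence, 0.6),
      detail: { subject: manager, predicate: "role", value: "manager" },
    },
  ];
}
```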

Stage 4: Coherence Validation

Does this contradict existing assertions? The Thalamus checks the incoming assertion against existing assertions for the same subject and predicate. If a user said "I'm on the Atlas project" in January and now says "I've never worked on Atlas," the Thalamus detects the conflict rather than blindly storing both.

Stage 5: Reinforcement Check

Does this confirm something already known? If the incoming assertion matches an existing one, this is not a new fact — it is reinforcement. The existing assertion's confidence should increase. The content-addressable ID system makes this structurally detectable: same content produces the same hash.

Stage 6: Contextual Binding

What was the context of acquisition? The assertion is tagged with metadata about the conversational context: what topic was being discussed, what the user's apparent emotional state was, what other assertions were created in the same session. This enables context-sensitive recall — when a similar context arises in the future, contextually bound assertions surface more readily.

Stage 7: Consolidation Trigger

Should this trigger a working model update? After a threshold of new assertions about a subject, the Thalamus evaluates whether the agent's working model of that subject needs to be reconstructed — a higher-order summary that captures the essential picture without requiring traversal of every raw assertion.

5.2 The Response Contract

The Thalamus does not simply accept or reject assertions. It collaborates with the source agent through a structured response:

| Response | Meaning | Agent Action |
| --- | --- | --- |
| accepted | Assertion met quality threshold, persisted. May include enrichments (inferred bonds, derived assertions). | None required. Knowledge base improved silently. |
| reinforced | Assertion matched existing knowledge. Confidence boosted on existing assertion. No duplicate created. | None required. Existing knowledge strengthened. |
| conflict_detected | Incoming assertion contradicts an existing assertion. Both assertions included in response. | Surface the conflict conversationally when appropriate: "You mentioned X before, but now Y — did something change?" |
| needs_clarification | Assertion is ambiguous. Specific questions provided. | Ask the user to disambiguate: "Which project — Atlas or Meridian?" |
| rejected | Signal classification determined this is not assertion-worthy. Conversational noise. | None required. No knowledge was created. |

The Thalamus response contract transforms knowledge intake from a fire-and-forget write into a collaborative cognitive process between the source agent and the brain.
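In source-agent code, the contract reduces to a small dispatch: only two of the five statuses require any conversational action. A sketch under that reading (the helper itself is an assumption, not a SubCortex API):

```typescript
type IntakeStatus =
  | "accepted"
  | "reinforced"
  | "conflict_detected"
  | "needs_clarification"
  | "rejected";

// Conflicts and clarification requests need the user; everything else
// either improves the knowledge base silently or was filtered as noise.
function requiresSurfacing(status: IntakeStatus): boolean {
  return status === "conflict_detected" || status === "needs_clarification";
}
```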

5.3 Universal Intake via SDK

All communication with the Thalamus flows through the SubCortex SDK. Any AI agent that integrates the SDK — a recruiting agent, a customer success agent, a compliance agent — routes assertions through the same Thalamus and receives the same quality gating. The SDK is the universal intake interface for ABC-compliant systems.


Part IV: The Subconscious — Asynchronous Cognition

6. The Latency Problem and the Subconscious Solution

Real-time conversational agents respond in milliseconds. Introducing a synchronous cognitive intake pipeline — where the agent waits for the Thalamus to classify, encode, validate, and persist before continuing the conversation — would introduce unacceptable latency. The user would experience the agent "thinking" mid-conversation, breaking the flow that makes modern voice and chat agents feel natural.

This is not just an engineering constraint. It is a cognitive design insight. The biological brain does not pause mid-sentence to consolidate a memory. The prefrontal cortex keeps the conversation flowing while the hippocampus processes, encodes, and cross-references in the background. You might be three topics deep before your brain finishes integrating something from ten minutes ago. And sometimes — critically — your brain interrupts you. You are talking about something unrelated and suddenly think "wait, that contradicts what she said earlier." That is the background process surfacing a result.

ABC models this through asynchronous cognitive processing — the Subconscious. The source agent fires assertions to the Thalamus without waiting. The Thalamus processes in the background. Results surface when the agent is ready.

6.1 The Async Flow

The source agent identifies an assertion-worthy signal during conversation and submits it to the Thalamus as a fire-and-forget operation. Zero latency impact. The conversation continues uninterrupted.

The Thalamus runs the full seven-stage intake pipeline in the background. Results are written to the Cognitive Response Queue — a persistent buffer within the Agentic Brain that holds processed intake results until the source agent is ready to consume them.

The source agent checks the queue during natural conversation breaks: a pause, a topic transition, the start of a new turn, or idle time between sessions. If everything came back clean — accepted and reinforced responses — the agent never mentions it. The knowledge base got smarter silently. If something needs attention — a conflict or clarification — the agent weaves it in naturally when the conversational moment is right.

6.2 The Cognitive Response Queue

The queue is not an external message broker. It is part of the Agentic Brain itself — the agent's subconscious processing buffer:

| Field | Purpose |
| --- | --- |
| correlation_id | Ties the response back to the original assertion submission. |
| status | Enum: accepted, reinforced, conflict_detected, needs_clarification, rejected. |
| priority | Urgency ranking. A contradiction on a high-confidence assertion is more urgent than a reinforcement confirmation. |
| surfacing_hint | Natural language suggestion for how the agent could raise this in conversation. Generated by the Thalamus. |
| ttl | Time-to-live. A clarification request from two days ago is stale. Expired items are auto-resolved or discarded. |
| payload | The enriched assertion(s), conflicting assertions, or clarification questions. |
| acknowledged | Boolean. Set to true when the source agent has consumed and acted on (or explicitly skipped) this item. |

SDK access is through the intake namespace:

```typescript
// Check for pending intake results
const pending = await brain.intake.pending(agentId)

// Acknowledge after acting on or skipping an item
await brain.intake.acknowledge(itemId)
```
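Once pending items are fetched, draining the queue at a natural break can follow logic along these lines. Field names mirror the queue table; `expires_at` standing in for a resolved `ttl` is an assumption of this sketch:

```typescript
interface QueueItem {
  correlation_id: string;
  status:
    | "accepted"
    | "reinforced"
    | "conflict_detected"
    | "needs_clarification"
    | "rejected";
  priority: number;        // higher = more urgent
  expires_at: string;      // ttl resolved to an ISO 8601 deadline
  acknowledged: boolean;
  surfacing_hint?: string;
}

// Drop expired and already-handled items, then return the single
// highest-priority item that actually needs the user's attention.
function nextActionable(items: QueueItem[], now: string): QueueItem | null {
  const candidates = items.filter(
    (i) =>
      !i.acknowledged &&
      i.expires_at > now &&
      (i.status === "conflict_detected" || i.status === "needs_clarification")
  );
  if (candidates.length === 0) return null;
  return candidates.sort((a, b) => b.priority - a.priority)[0];
}
```

A `null` result is the common case: everything came back accepted or reinforced, and the agent says nothing.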

6.3 Batch Inference

The asynchronous model enables a capability that synchronous processing cannot: batch inference. If the source agent fires five assertions during a rapid conversational exchange, the Thalamus can process them as a batch — detecting relationships between them, resolving cross-references, and building a coherent assertion set rather than five independent writes. Batch processing produces significantly higher knowledge quality than sequential processing because it can see the forest, not just individual trees.

6.4 Why the Subconscious Is a Feature, Not a Workaround

The asynchronous model is not a compromise forced by latency constraints. It is a better cognitive architecture. Synchronous intake would force the source agent to handle conflicts and clarifications immediately — mid-conversation, out of context, disruptively. The Subconscious allows the agent to handle them when the moment is right. A human colleague who notices an inconsistency does not interrupt you mid-sentence. They wait for a natural pause and say "Hey, earlier you mentioned X, but now it sounds like Y — did something change?" That is what the Subconscious enables.


Part V: The Knowledge Semantics

7. Temporal Semantics

Business Time (Valid Time)

When was this fact true in the real world? If a user says "I was promoted to VP in January," the assertion's valid_from is January, even if the conversation happens in March. Business time enables the agent to reason about the world's timeline, not just its own.

Transaction Time (System Time)

When did the system learn this fact? Transaction time is set automatically when the assertion is recorded and cannot be modified. It enables audit queries: "Show me everything the agent knew as of 3pm last Thursday."

Why Both Matter

On March 1, a user tells the agent "I was promoted to VP on January 15." The assertion has business time of January 15 and transaction time of March 1. If an auditor asks "What did the agent know about this user on February 1?" — the answer is nothing about the VP role, because the system did not learn it until March 1. If the auditor asks "What was this user's role on February 1?" — the answer is VP, because the promotion was effective January 15. This distinction is required for SOX compliance, GDPR audit trails, and any regulated environment.
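The two question types map onto two different filters over the same history. A minimal sketch using the VP example (record shape illustrative, ISO 8601 strings for both timelines):

```typescript
interface Fact {
  value: string;
  valid_from: string;        // business time
  valid_until: string | null;
  transaction_t: string;     // system time
}

// Transaction-time query: everything the system had recorded by `asOf`.
function knownAsOf(history: Fact[], asOf: string): Fact[] {
  return history.filter((f) => f.transaction_t <= asOf);
}

// Business-time query: what was true in the world at `at`, according to
// the full history recorded so far.
function trueAt(history: Fact[], at: string): Fact[] {
  return history.filter(
    (f) => f.valid_from <= at && (f.valid_until === null || f.valid_until > at)
  );
}
```

With the promotion recorded on March 1 but effective January 15, `knownAsOf` February 1 returns nothing, while `trueAt` February 1 returns the VP role — exactly the auditor distinction described above.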

8. The Confidence Model

| Range | Typical Source | Agent Behavior |
| --- | --- | --- |
| 0.9–1.0 | Human-verified input, official records | Established fact. Safe to act on without qualification. |
| 0.7–0.89 | Direct user statement | Reliable. May act on it, can reference the source. |
| 0.5–0.69 | Agent inference from context | Probable. Qualify with "I believe" or "from our conversation." |
| 0.3–0.49 | Weak inference, partial pattern | Do not act without confirmation. |
| 0.0–0.29 | Contradicted or retracted | Do not surface. Retained for audit only. |

Confidence Propagation: When an assertion is derived from other assertions, its confidence is computed as the minimum confidence in its derivation chain. Derived knowledge is never more confident than its weakest input.

Confidence Reinforcement: When the Thalamus detects that an incoming assertion matches an existing one (via content-addressable hash), the existing assertion's confidence is increased rather than creating a duplicate. A fact mentioned once carries lower confidence than a fact confirmed across five conversations. This models the brain's rehearsal mechanism — repetition strengthens memory.
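Both rules are directly mechanizable. The minimum rule is exactly as specified; the reinforcement curve below (each confirmation closes a fraction of the gap to a cap) is an illustrative assumption, since ABC requires only that repetition increases confidence without reaching certainty:

```typescript
// Derived knowledge is never more confident than its weakest input.
function derivedConfidence(chain: number[]): number {
  if (chain.length === 0) throw new Error("empty derivation chain");
  return Math.min(...chain);
}

// Reinforcement: each confirming mention moves confidence a fraction of
// the remaining distance toward a cap, so it rises but saturates.
function reinforce(existing: number, boost = 0.2, cap = 0.99): number {
  return Math.min(cap, existing + boost * (cap - existing));
}
```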

9. The Provenance Model

| Provenance | Definition | Trust Implication |
| --- | --- | --- |
| human | Directly stated by a user or administrator. | Highest trust. The agent was explicitly told this. |
| agent | Inferred by the AI during conversation. | Variable trust. Depends on model confidence and context quality. |
| import | Loaded from an external system. | System trust. Reliability depends on source system quality. |
| derived | Computed from other assertions via the Thalamus inference cascade. | Inherited trust. Confidence is the minimum of input assertions. |

Provenance enables policy-driven governance: "Agent-provenance assertions with confidence below 0.7 require human confirmation before being used in customer-facing responses." This is the kind of policy that enterprises require and that no system without provenance tracking can enforce.
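The quoted policy compiles down to a few lines. The function below is a sketch of that one rule, not a SubCortex API; the decision to gate only agent provenance is taken from the policy text itself:

```typescript
interface Scored {
  provenance: "human" | "agent" | "import" | "derived";
  confidence: number;
}

// "Agent-provenance assertions with confidence below 0.7 require human
// confirmation before being used in customer-facing responses."
function usableInResponse(a: Scored, corroborating: Scored[]): boolean {
  if (a.provenance !== "agent") return true; // this policy gates only agent inferences
  if (a.confidence >= 0.7) return true;
  return corroborating.some((c) => c.provenance === "human");
}
```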

10. Immutability and Supersession

Assertions are never modified or deleted. When a fact changes, a new assertion is created and the previous one is marked as superseded. The supersession chain provides complete knowledge evolution history.

Retraction is a special case: when the user says "Actually, I'm not on the Atlas project anymore," the agent creates a new assertion with valid_until set to the current time, superseding the original. The original is not deleted — it is closed. Auditors can reconstruct exactly what the agent knew at any prior point in time.
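Mechanically, retraction is an append plus a supersession flag, never an edit of recorded content. A sketch of that operation; the `:closed` ID suffix is an illustrative stand-in for re-hashing the successor assertion:

```typescript
interface Versioned {
  id: string;
  value: unknown;
  valid_until: string | null;
  transaction_t: string;
  is_superseded: boolean;
}

// Close the fact's business-time interval by appending a successor and
// marking the original superseded. Nothing is deleted; history survives.
function retract(history: Versioned[], id: string, now: string): Versioned[] {
  const original = history.find((a) => a.id === id && !a.is_superseded);
  if (!original) return history;
  const closed: Versioned = {
    ...original,
    id: `${original.id}:closed`, // stand-in for the successor's content hash
    valid_until: now,
    transaction_t: now,
    is_superseded: false,
  };
  return [
    ...history.map((a) => (a.id === id ? { ...a, is_superseded: true } : a)),
    closed,
  ];
}
```

Because `retract` returns a new array and new records, the prior history object is untouched — the append-only discipline holds even in memory.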


Part VI: Signals — Emotional and Behavioral Cognition

11. The Signal Primitive

Assertions capture stable facts. But cognition is not just facts — it includes emotional state, behavioral patterns, and interpersonal dynamics that change continuously and decay over time.

Signals are the ABC primitive for this dimension. A signal records an observation of emotional or behavioral state at a specific moment:

```typescript
{
  subject: 'user:alice',
  signalType: 'frustration',
  intensity: 0.8,           // 0.0 – 1.0
  confidence: 0.9
}
```

Unlike assertions, signals are ephemeral by design. A frustration signal from two weeks ago should not carry the same weight as one from five minutes ago. The signal system models this through configurable decay.

12. Decay and Half-Life

Each signal type has a configured half-life — the time (in hours) it takes for the recorded intensity to decay by 50%. The engine computes the current effective intensity on every read using exponential decay:

effective_intensity = recorded_intensity × 0.5^(hours_elapsed / half_life)

This means the engine never returns stale signal data. Every query reflects the current emotional landscape, not a historical snapshot. An agent asking "how does this user feel?" gets an answer grounded in right now.
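The formula above in executable form; the 24-hour half-life used in the checks is just an example configuration, since half-lives are set per signal type:

```typescript
// Exponential decay: recorded intensity halves every `halfLifeHours`.
function effectiveIntensity(
  recorded: number,
  hoursElapsed: number,
  halfLifeHours: number
): number {
  return recorded * Math.pow(0.5, hoursElapsed / halfLifeHours);
}
```

A frustration signal recorded at 0.8 with a 24-hour half-life reads as 0.4 one day later and 0.2 two days later, without any background job rewriting the stored value.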

13. Trajectories

Individual signal readings are useful. But the direction of change is often more valuable. Is the user's frustration rising or falling? Is engagement growing or waning?

The engine computes trajectories by analyzing signal entries over a configurable time window:

| Trajectory | Condition |
| --- | --- |
| warming | Positive signals strengthening or negative signals weakening |
| cooling | Positive signals weakening or negative signals strengthening |
| stable | Intensity roughly constant over the window |
| neutral | Insufficient data or fully decayed signals |

Trajectories are surfaced in UserIdentification as rapportTrajectory — giving the agent a single, actionable read on the direction of the relationship.

14. Signal Categories

Signals are categorized as positive or negative for trajectory computation:

Positive signals — engagement, satisfaction, trust, enthusiasm, rapport

Negative signals — frustration, confusion, disengagement, anxiety, distrust

Custom signal types are supported for domain-specific use cases. The categorization drives trajectory computation: rising positive signals = warming, rising negative signals = cooling.
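A minimal trajectory computation consistent with the table above: score positive signals up and negative signals down, then compare the newer half of the window against the older half. The signed-score trick and the epsilon threshold are illustrative assumptions, not the specified algorithm:

```typescript
type Trajectory = "warming" | "cooling" | "stable" | "neutral";

interface Reading {
  category: "positive" | "negative";
  intensity: number; // already decay-adjusted
  hoursAgo: number;
}

// Positive signals count up, negative count down; compare the average
// signed score of the newer half of the window against the older half.
function trajectory(readings: Reading[], epsilon = 0.05): Trajectory {
  if (readings.length < 2) return "neutral";
  const score = (r: Reading) => (r.category === "positive" ? r.intensity : -r.intensity);
  const oldestFirst = [...readings].sort((a, b) => b.hoursAgo - a.hoursAgo);
  const mid = Math.floor(oldestFirst.length / 2);
  const avg = (rs: Reading[]) => rs.reduce((s, r) => s + score(r), 0) / rs.length;
  const delta = avg(oldestFirst.slice(mid)) - avg(oldestFirst.slice(0, mid));
  if (delta > epsilon) return "warming";
  if (delta < -epsilon) return "cooling";
  return "stable";
}
```

Note the sign logic: fading frustration (a negative signal weakening) raises the signed score and reads as warming, matching the table's first row.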


Part VII: The Cognitive Frontier

15. Unexplored Cognitive Dimensions

ABC specifies the foundational architecture for an Agentic Brain. The following cognitive dimensions represent the research frontier — areas where biological cognition provides clear models that machine cognition has not yet implemented.

Emotional Tagging — The amygdala tags memories with emotional weight. Assertions could carry an emotional dimension alongside confidence, enabling salience-weighted recall where emotionally charged memories surface faster and influence decision-making more strongly.

Salience and Attention — The brain's reticular activating system filters what reaches conscious awareness based on relevance to the current context. A salience scoring layer that weights assertions based on recency, emotional charge, relevance, and frequency of access would make recall dramatically more useful in high-volume knowledge bases.

Memory Consolidation — During sleep, the brain consolidates — it moves important short-term memories into long-term storage, compresses redundant information, and strengthens frequently accessed pathways. A background consolidation process is structurally different from confidence decay. It is reorganization of knowledge itself.

Procedural Memory — ABC handles declarative knowledge (facts about the world). The brain also has procedural memory — knowing how to do things, not just what things are. An agent that remembers "Tom prefers concise answers" is declarative. An agent that has learned to be concise with Tom without being told is procedural. This is the difference between remembering a fact and developing a skill.

Priming and Associative Activation — The brain does not recall in isolation. Accessing one memory activates a spreading network of associations. The bond graph provides structural foundation for this, but there is no model yet for probabilistic associative activation during recall.

Theory of Mind — The brain maintains models of other people's knowledge and beliefs. An agent with theory of mind would track not just what it knows, but what it believes the user knows — enabling it to avoid redundant explanations and surface genuinely new information. If solved, this transforms the Agentic Brain from a memory system into a genuine cognitive partner.


Part VIII: Comparison

16. ABC vs. Existing Approaches

| Capability | Vector DB | KV / Redis | RDBMS | Agentic Brain (ABC) |
| --- | --- | --- | --- | --- |
| Cognitive intake pipeline | No | No | No | Thalamus: 7-stage processing |
| Async knowledge processing | No | No | No | Subconscious with response queue |
| Contradiction detection | No | No | Manual | Automatic via coherence validation |
| Signal decay and trajectories | No | No | No | Native with configurable half-life |
| Temporal queries | No | TTL only | Limited | Bitemporal, first-class |
| Confidence scoring | No | No | Manual | Built into every assertion |
| Provenance tracking | No | No | Manual | Required on every assertion |
| Knowledge graph | No | No | Via JOINs | Native directional graph |
| Inference cascade | No | No | No | Auto-derived assertions from intake |
| Immutable history | No | No | Via triggers | Architectural guarantee |
| Agent identity | No | No | No | Native agent config store |

The comparison makes the category difference visible. These technologies are storage and retrieval tools. The Agentic Brain is a cognitive system. They occupy different layers of the stack, and the Agentic Brain may use any of them internally — but it provides epistemological semantics that none of them offer.


Part IX: Architectural Requirements

17. Implementation Requirements

Any system implementing Assertion-Based Cognition must satisfy the following:

| Requirement | Description |
| --- | --- |
| Content-Addressable Identity | Assertion IDs must be deterministic hashes. Same content, same ID, everywhere. |
| Bitemporal Indexing | Efficient queries on both business time and transaction time, independently and combined. |
| Append-Only Durability | All writes durable after acknowledgment. No assertion lost. |
| Tenant Isolation | Cross-tenant data leakage must be architecturally impossible. |
| Thalamus Intake Pipeline | Dedicated cognitive intake layer. Assertions must not bypass quality gating in production. |
| Asynchronous Processing | Intake must not block the source agent's conversational flow. |
| Cognitive Response Queue | Persistent buffer for intake results with priority, TTL, and surfacing hints. |
| Graph Traversal with Temporal Filtering | Relationship queries with temporal and confidence predicates during traversal, not post-filters. |
| Provenance as Required Metadata | Unprovenanced assertions are invalid. This is not optional. |
| Confidence-Aware Recall | Recall must use confidence scores when assembling context. |
| Signal Decay on Read | Signal queries must compute effective intensity at query time, not return stale recorded values. |

Conclusion

The AI agent market is building on a foundation of amnesia. Agents that cannot remember, cannot track confidence, cannot prove where their knowledge came from, cannot detect contradictions, and cannot reason about time are not agents. They are stateless functions with personality prompts.

Assertion-Based Cognition provides the formal theory for what an agent's brain should be. It specifies how knowledge enters the system (the Thalamus), how it is processed without disrupting conversation (the Subconscious), how it is stored with full epistemological metadata (the assertion primitive), how it is connected (bonds), how emotional and behavioral state is tracked (signals), and how it is recalled with cognitive structure (contextualization).

The five principles — assertion-native knowledge, confidence as a spectrum, bitemporal semantics, mandatory provenance, and immutability — are not optional features. They are the minimum requirements for trustworthy machine cognition. The Thalamus intake pipeline and asynchronous Subconscious are not optimizations. They are the difference between a database with metadata and a brain that learns.

The question for every team building AI agents is no longer "Where do we store memory?" It is: Does our agent have a brain?


SubCortex — The Agentic Brain

subcortex.ai

Released under the MIT License.