# Core Concepts

## Subjects
A subject is any entity in your system — a user, an agent, a team, a document, a product. Subjects are identified by a string URI you define:
```
user:alice
agent:support-bot
team:platform-engineering
org:acme-corp
```

There is no subject registration step. A subject comes into existence the moment you write data about it. The `SubjectPrefix` enum in the SDK provides well-known prefixes that the engine recognizes for specialized behavior.
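Since subject URIs are just `prefix:id` strings, splitting one is straightforward. A minimal sketch (the `parseSubject` helper is hypothetical, not part of the SDK):

```typescript
// Hypothetical helper: split a subject URI of the form "<prefix>:<id>".
// Splits on the first colon so ids may themselves contain colons.
function parseSubject(uri: string): { prefix: string; id: string } {
  const sep = uri.indexOf(':')
  if (sep < 1 || sep === uri.length - 1) {
    throw new Error(`Malformed subject URI: ${uri}`)
  }
  return { prefix: uri.slice(0, sep), id: uri.slice(sep + 1) }
}
```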
## Assertions
An assertion is a fact about a subject — the atomic unit of knowledge in SubCortex:
```typescript
{
  subject: 'user:alice',
  predicate: 'role',
  value: 'Engineering Manager',
  confidence: 0.99, // 0.0 – 1.0
  source: 'human'   // human | agent | import | derived
}
```

Assertions are the core knowledge primitive. They are immutable — when a newer assertion with the same subject+predicate is stored, the older one is marked superseded rather than deleted. You always have the full history.
### Confidence
Every assertion carries a confidence score between 0.0 and 1.0:
| Range | Typical Source | Agent Behavior |
|---|---|---|
| 0.9–1.0 | Human-verified input, official records | Established fact. Safe to act on. |
| 0.7–0.89 | Direct user statement | Reliable. May reference the source. |
| 0.5–0.69 | Agent inference from context | Qualify with "I believe" or "from our conversation." |
| 0.3–0.49 | Weak inference, partial pattern | Do not act without confirmation. |
| 0.0–0.29 | Contradicted or retracted | Retained for audit only. |
Derived assertions inherit the minimum confidence in their derivation chain — derived knowledge is never more confident than its weakest input.
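The inheritance rule above can be sketched in a few lines (a simplified model; the zero default for an empty chain is an assumption, not documented engine behavior):

```typescript
interface Assertion {
  confidence: number // 0.0 – 1.0
}

// Derived knowledge is capped by its weakest input:
// take the minimum confidence across the derivation chain.
function derivedConfidence(inputs: Assertion[]): number {
  if (inputs.length === 0) return 0 // assumption: no inputs means no confidence
  return Math.min(...inputs.map(a => a.confidence))
}
```

So a fact derived from a 0.99 human statement and a 0.7 agent inference carries 0.7, never more.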
### Provenance
Every assertion records its origin:
| Source | Description |
|---|---|
| `human` | Directly stated by a user or administrator. Highest trust. |
| `agent` | Inferred by the AI during conversation. Variable trust. |
| `import` | Loaded from an external system. |
| `derived` | Computed from other assertions via the Thalamus inference cascade. |
Provenance enables policy-driven governance — for example, requiring human confirmation before acting on low-confidence agent-inferred assertions.
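One such policy gate, as a minimal sketch (the 0.7 threshold and function name are illustrative assumptions, not SubCortex defaults):

```typescript
type Source = 'human' | 'agent' | 'import' | 'derived'

interface GovernedAssertion {
  source: Source
  confidence: number
}

// Illustrative policy: agent-inferred facts below 0.7 confidence
// require human confirmation before the agent acts on them.
// The threshold is an assumption for this example.
function requiresConfirmation(a: GovernedAssertion): boolean {
  return a.source === 'agent' && a.confidence < 0.7
}
```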
## Temporal Semantics
SubCortex implements bitemporal tracking on every assertion:
- **Business time** (`valid_from` / `valid_until`) — when the fact was true in the real world
- **Transaction time** — when the system recorded it
This separation enables queries like "What did the agent believe about this user as of last Friday?" (transaction time) vs. "When was this user promoted?" (business time).
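A transaction-time query of that kind can be sketched as a pure filter over an assertion's version history (field names here are illustrative camelCase stand-ins for the stored `valid_from` / `valid_until` values; `believedValue` is not an SDK function):

```typescript
interface BitemporalAssertion {
  value: string
  validFrom: Date        // business time: when the fact became true
  validUntil: Date | null // null = still true
  recordedAt: Date       // transaction time: when the system stored it
}

// "What did the system believe as of `asOf` about the world at `when`?"
// Filter on transaction time first, then pick the version whose
// business-time interval covers `when`, preferring the latest record.
function believedValue(
  history: BitemporalAssertion[],
  asOf: Date,
  when: Date
): string | undefined {
  return history
    .filter(a => a.recordedAt <= asOf)
    .filter(a => a.validFrom <= when && (a.validUntil === null || when < a.validUntil))
    .sort((x, y) => y.recordedAt.getTime() - x.recordedAt.getTime())[0]?.value
}
```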
## Relationships
Relationships model directional connections between two subjects:
```typescript
{
  fromSubject: 'user:alice',
  toSubject: 'team:platform',
  relationshipType: 'member_of'
}
```

The relationship graph can be traversed in either direction, with temporal and confidence filtering applied during traversal — not as post-filters. Common uses: org structure, ownership, team membership, entity associations.
The SDK provides built-in relationship types via the `OrgRelationship` and `EntityRelationship` enums, with reverse relationship mapping for bidirectional queries.
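Reverse relationship mapping can be sketched like this (the map entries and the `reversed` helper are illustrative; the SDK enums may define different names):

```typescript
// Illustrative reverse-relationship map; the SDK's actual enum values may differ.
const REVERSE: Record<string, string> = {
  member_of: 'has_member',
  reports_to: 'manages',
  owns: 'owned_by',
}

interface Relationship {
  fromSubject: string
  toSubject: string
  relationshipType: string
}

// Traversing "backwards" from the target subject yields the reverse type,
// so a bidirectional query needs only one stored edge.
function reversed(r: Relationship): Relationship {
  return {
    fromSubject: r.toSubject,
    toSubject: r.fromSubject,
    relationshipType: REVERSE[r.relationshipType] ?? `inverse_of(${r.relationshipType})`,
  }
}
```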
## Signals
Signals capture emotional and behavioral state. Unlike assertions (which are stable facts), signals decay over time:
```typescript
await brain.signals.record({
  tenantId: 'my-tenant',
  subject: 'user:alice',
  signalType: 'frustration',
  intensity: 0.8,
  confidence: 0.9
})
```

### Decay Model
Each signal type has a configured half-life — the time it takes for the intensity to decay by 50%. The engine computes the current effective intensity on read, so the value you get back reflects decay since the signal was last recorded. A frustration score of 0.8 recorded two weeks ago might be 0.2 today.
This gives agents awareness of how a user feels right now, not just at some historical point.
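The half-life arithmetic can be sketched as standard exponential decay (a model of the behavior described above, not the engine's actual implementation):

```typescript
// Exponential decay: after one half-life the intensity halves.
// effective = recorded * 0.5 ^ (elapsed / halfLife)
function effectiveIntensity(
  recorded: number,
  elapsedMs: number,
  halfLifeMs: number
): number {
  return recorded * Math.pow(0.5, elapsedMs / halfLifeMs)
}
```

With a one-week half-life, a frustration of 0.8 recorded two weeks ago reads back as 0.8 × 0.5² = 0.2, matching the example above.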
### Trajectories
Signals are analyzed over a configurable time window to compute a trajectory:
| Trajectory | Meaning |
|---|---|
| `warming` | Signal intensity is trending upward |
| `cooling` | Signal intensity is trending downward |
| `stable` | Signal intensity is roughly constant |
| `neutral` | Insufficient data or fully decayed |
Trajectories are returned as part of `SignalSnapshot` and `UserIdentification` — giving agents not just the current value, but the direction of change.
### System Signal Types

The SDK defines built-in signal types (`SYSTEM_SIGNAL_TYPES`) covering common emotional and behavioral dimensions. Custom signal types are also supported. Signals are categorized as positive or negative for trajectory computation.
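One way trajectory classification over a window could look, as a minimal sketch (the 0.1 drift threshold and two-sample minimum are assumptions for illustration; the engine's actual analysis may differ):

```typescript
type Trajectory = 'warming' | 'cooling' | 'stable' | 'neutral'

// Classify the trend across intensity samples within the analysis window,
// ordered oldest to newest. Thresholds here are illustrative assumptions.
function classifyTrajectory(intensities: number[]): Trajectory {
  if (intensities.length < 2) return 'neutral' // insufficient data
  const delta = intensities[intensities.length - 1] - intensities[0]
  if (delta > 0.1) return 'warming'
  if (delta < -0.1) return 'cooling'
  return 'stable'
}
```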
## Thalamus

The Thalamus is the cognitive intake pipeline — the system that transforms raw conversational signal into structured, validated knowledge. When an agent submits data through `intake.submit()`, it flows through a multi-stage processing pipeline before being committed to the knowledge store.
### Intake Pipeline
Every candidate assertion passes through these processing stages:
1. **Signal Classification** — Is this assertion-worthy? Conversational noise is filtered before it enters the knowledge base.
2. **Deep Encoding** — What does this mean in context? Shallow encoding stores the literal statement. Deep encoding links it to existing knowledge.
3. **Inference Cascade** — What can be derived? "I report to Sarah" triggers relationship creation, role inference, and team membership bonds.
4. **Coherence Validation** — Does this contradict existing assertions? Conflicts are detected and surfaced rather than silently overwritten.
5. **Reinforcement Check** — Does this confirm something already known? Repeated observations boost confidence rather than creating duplicates.
6. **Contextual Binding** — What was the context of acquisition? Assertions are tagged with conversational metadata for context-sensitive recall.
7. **Consolidation Trigger** — Should the working model be updated? After a threshold of new assertions, summaries are reconstructed.
### Asynchronous Processing
The Thalamus operates asynchronously to preserve real-time conversational performance. The source agent fires assertions without waiting — zero latency impact. The Thalamus processes in the background and writes results to the Cognitive Response Queue.
The agent checks the queue during natural conversation breaks. If everything processed cleanly, the knowledge base improved silently. If a conflict or clarification is needed, the agent weaves it in when the conversational moment is right.
### Response Contract
The Thalamus collaborates with the source agent through structured responses:
| Status | Meaning | Agent Action |
|---|---|---|
| `accepted` | Assertion met quality threshold. Persisted with enrichments. | None required. |
| `reinforced` | Matched existing knowledge. Confidence boosted. | None required. |
| `conflict_detected` | Contradicts an existing assertion. Both included. | Surface the conflict conversationally when appropriate. |
| `needs_clarification` | Assertion is ambiguous. Questions provided. | Ask the user to disambiguate. |
| `rejected` | Not assertion-worthy. Conversational noise. | None required. |
Each response includes a `surfacing_hint` — a natural language suggestion for how the agent could raise the issue in conversation.
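An agent-side dispatch over this contract might look like the following sketch (the `agentAction` helper and its fallback message are assumptions, not SDK code):

```typescript
type IntakeStatus =
  | 'accepted'
  | 'reinforced'
  | 'conflict_detected'
  | 'needs_clarification'
  | 'rejected'

interface IntakeResponse {
  status: IntakeStatus
  surfacing_hint?: string
}

// Map each contract status to the agent-side action from the table above.
// Returns a hint to surface conversationally, or null when no action is needed.
function agentAction(r: IntakeResponse): string | null {
  switch (r.status) {
    case 'conflict_detected':
    case 'needs_clarification':
      // Fallback wording is an assumption for this example.
      return r.surfacing_hint ?? 'Raise with the user when the moment is right.'
    default:
      return null // accepted / reinforced / rejected need no agent action
  }
}
```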
### SDK Access
```typescript
// Submit through the intake pipeline
await brain.intake.submit({
  tenantId: 'my-tenant',
  agentId: 'support-bot',
  assertions: [{ subject: 'user:alice', predicate: 'role', value: 'VP Engineering' }]
})

// Check for pending results
const pending = await brain.intake.pending(agentId)

// Acknowledge after acting on an item
await brain.intake.acknowledge(itemId)
```

## Context Generation
`users.identify()` assembles everything SubCortex knows about a user into a structured context document:
```xml
<subcortex-context version="1.0">
  <subject id="user:alice">
    <assertions>
      <assertion predicate="role" value="Engineering Manager" confidence="0.99" />
    </assertions>
    <signals>
      <signal type="frustration" intensity="0.2" trajectory="cooling" />
    </signals>
    <relationships>
      <relationship type="member_of" target="team:platform" />
    </relationships>
    <conflicts>
      <conflict hint="Previously said role was Senior Engineer, now VP Engineering" />
    </conflicts>
  </subject>
</subcortex-context>
```

The context includes four cognitive categories:
| Category | Cognitive Analog | Purpose |
|---|---|---|
| Reminders | Prospective memory | Things the agent was asked to remember to surface |
| Active Context | Working memory | Current facts, role, preferences, active projects |
| Signals | Emotional awareness | Current emotional state with trajectories |
| Relational Context | Social cognition | Connected people, team structure, org relationships |
The XML schema is designed for transformer attention — structured so models can reliably extract relevant context. The agent does not search. It recalls.

