# User Stories
`Úsér_Stóríés // Sýstém_Dóc`
**ᚠ ᛫ ᛟ ᛫ ᚱ ᛫ ᛒ ᛫ ᛟ ᛫ ᚲ**
***
title: User Stories
subtitle: What developers can build with the ForbocAI SDK
slug: user-stories
***
This document defines the core user stories for the ForbocAI SDK. Each story follows the format: **As a \[role], I want \[goal], so that \[benefit]**. Stories are translated into BDD specifications that drive API endpoint design.
***
## Epic 1: Cortex — Local Inference
Córtex_Módule // Locál_SLM
ᚠ ᛫ ᚢ ᛫ ᚦ ᛫ ᚨ ᛫ ᚱ ᛫ ᚲ
### US-1.1: Initialize Local Model
> **As a** developer\
> **I want to** initialize a local language model on the user's device\
> **So that** I can run AI inference without server round-trips or API costs
```gherkin
Feature: Cortex Initialization

  Scenario: Initialize Cortex with a model
    Given a configured environment
    When I call Cortex.init() with model "smollm2-135m"
    Then the model weights should be downloaded (if not cached)
    And the model should be loaded into memory
    And the Cortex instance should be ready for inference

  Scenario: List available models
    Given an initialized SDK
    When I call Cortex.listModels()
    Then I should receive an array of available models with their sizes and capabilities
```
| Method | Endpoint | Description |
| -------- | ------------------------ | ---------------------------- |
| `POST` | `/v1/cortex/init` | Initialize a Cortex instance |
| `GET` | `/v1/cortex/models` | List available models |
| `GET` | `/v1/cortex/{id}/status` | Get Cortex instance status |
| `DELETE` | `/v1/cortex/{id}` | Destroy a Cortex instance |
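
Taken together, the scenarios above imply an initialization flow roughly like the sketch below. The package name `@forboc/sdk`, the option object passed to `Cortex.init()`, and the fields on the returned model list are illustrative assumptions, not a confirmed API.

```typescript
// Sketch only: package name, option names, and return shapes are assumed.
import { Cortex } from "@forboc/sdk";

async function bootCortex() {
  // Discover which models are available, with their sizes and capabilities.
  const models = await Cortex.listModels();
  console.log(models.map((m: any) => m.id));

  // Download the weights if they are not cached, then load them into memory.
  const cortex = await Cortex.init({ model: "smollm2-135m" });
  return cortex; // The instance is now ready for inference.
}
```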
***
### US-1.2: Generate Completion
> **As a** developer\
> **I want to** send a prompt to the local model and receive a completion\
> **So that** my application can generate AI-powered text responses
```gherkin
Feature: Text Completion

  Scenario: Generate a simple completion
    Given an initialized Cortex instance
    When I call cortex.complete() with prompt "Hello, my name is"
    Then I should receive a streaming response with generated tokens
    And the response should complete within the configured timeout

  Scenario: Generate with system prompt
    Given an initialized Cortex instance
    When I call cortex.complete() with a system prompt and user prompt
    Then the model should follow the system prompt instructions
    And respond appropriately to the user prompt

  Scenario: Generate structured JSON output
    Given an initialized Cortex instance
    When I call cortex.complete() with a JSON schema constraint
    Then the response should be valid JSON matching the schema
```
| Method | Endpoint | Description |
| ------ | ------------------------------------- | ------------------------------- |
| `POST` | `/v1/cortex/{id}/complete` | Generate completion (streaming) |
| `POST` | `/v1/cortex/{id}/complete/structured` | Generate structured output |
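
The three scenarios map onto a single `cortex.complete()` call with different options. A hedged sketch follows; the option names (`prompt`, `system`, `schema`, `onToken`) are assumptions layered over the scenarios, not a confirmed surface.

```typescript
declare const cortex: any; // an initialized instance from Cortex.init() (US-1.1)

// Streaming completion: tokens arrive via a callback (assumed option name).
await cortex.complete({
  prompt: "Hello, my name is",
  onToken: (token: string) => process.stdout.write(token),
});

// Completion guided by a system prompt.
const reply = await cortex.complete({
  system: "You are a terse medieval blacksmith.",
  prompt: "What do you sell?",
});

// Structured output constrained to a JSON schema.
const item = await cortex.complete({
  prompt: "Describe the item 'iron sword' as JSON.",
  schema: {
    type: "object",
    properties: { name: { type: "string" }, damage: { type: "number" } },
    required: ["name", "damage"],
  },
});
```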
***
### US-1.3: Handle Offline/Fallback
> **As a** developer\
> **I want to** define fallback behaviors if the local model fails or hangs\
> **So that** my game remains playable even if the AI subsystem crashes
```gherkin
Feature: Inference Resilience

  Scenario: Model load failure
    Given a device with insufficient RAM
    When Cortex.init() fails
    Then the SDK should throw a specific "insufficient_resources" error
    And the game should be able to fall back to "Legacy Rules" mode

  Scenario: Inference timeout
    Given a complex prompt
    When the model takes longer than 2000ms to respond
    Then the SDK should abort the request
    And return a pre-defined "Default Fallback" response (e.g., "...")
```
| Method | Endpoint | Description |
| ------ | -------------------- | ---------------------------------------------------------- |
| `N/A` | `Client-Side Config` | Configure timeouts and fallback strings in `Cortex.init()` |
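
Because resilience is configured client-side, the timeout and fallback string would plausibly be passed to `Cortex.init()`. A sketch under that assumption; the option names, error shape, and `enableLegacyRulesMode()` hook are placeholders, while the 2000 ms budget and the `insufficient_resources` code come from the scenarios above.

```typescript
import { Cortex } from "@forboc/sdk";            // placeholder package name
declare function enableLegacyRulesMode(): void;  // hypothetical game-side fallback

try {
  const cortex = await Cortex.init({
    model: "smollm2-135m",
    timeoutMs: 2000,          // abort any inference call that exceeds this budget
    fallbackResponse: "...",  // returned instead when a request is aborted
  });
} catch (err: any) {
  if (err.code === "insufficient_resources") {
    // The device cannot host the model: fall back to scripted "Legacy Rules" NPCs.
    enableLegacyRulesMode();
  } else {
    throw err;
  }
}
```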
***
## Epic 2: Agent — Autonomous Entities
Agént_Créate // NPC_Entíty
᛭ ᚠ ᛫ ᚨ ᛫ ᛁ ᛭
### US-2.1: Create an Agent
> **As a** developer\
> **I want to** create an AI agent with a persona and initial state\
> **So that** I can add intelligent NPCs or assistants to my application
```gherkin
Feature: Agent Creation

  Scenario: Create agent with persona
    Given an initialized Cortex instance
    When I call Agent.create() with a persona string and initial state
    Then an Agent instance should be created
    And the agent should have an empty memory store
    And the agent should be ready to process inputs

  Scenario: Create agent from Soul (import)
    Given a Soul exported to IPFS
    When I call Agent.fromSoul() with the IPFS CID
    Then the agent should be restored with its persona, memories, and state
    And the agent should behave consistently with its previous incarnation
```
| Method | Endpoint | Description |
| -------- | ------------------- | --------------------------------- |
| `POST` | `/v1/agents` | Create a new agent |
| `GET` | `/v1/agents/{id}` | Get agent details |
| `POST` | `/v1/agents/import` | Import agent from Soul (IPFS CID) |
| `DELETE` | `/v1/agents/{id}` | Destroy an agent |
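
A creation sketch matching the two scenarios; the package name, the `cortex` wiring option, and the state shape are illustrative assumptions.

```typescript
import { Agent } from "@forboc/sdk"; // placeholder package name
declare const cortex: any;           // initialized Cortex instance from Epic 1

// Create a fresh agent with a persona and initial state (empty memory store).
const merchant = await Agent.create({
  cortex, // assumed wiring to the local inference engine
  persona: "a suspicious merchant who has been cheated before",
  state: { gold: 120, inventory: ["sword", "shield"] },
});

// Or restore a previous incarnation from a Soul exported to IPFS.
const restored = await Agent.fromSoul({ cid: "bafybeif..." });
```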
***
### US-2.2: Process Agent Input
> **As a** developer\
> **I want to** send input to an agent and receive dialogue + actions\
> **So that** my agent can respond intelligently and take validated actions
```gherkin
Feature: Agent Processing

  Scenario: Process dialogue input
    Given an agent with persona "a suspicious merchant"
    And the agent has memory of being cheated by the player
    When I call agent.process() with input "Want to make a deal?"
    Then the response should include dialogue reflecting suspicion
    And the response may include an action object

  Scenario: Process with context
    Given an agent with persona "a guard"
    When I call agent.process() with input "Let me through"
    And context includes { playerHasPass: true }
    Then the response action should be { type: 'ALLOW_PASSAGE' }

  Scenario: Invalid action rejected by Bridge
    Given an agent with persona "a merchant"
    When the agent attempts to trade an item it doesn't possess
    Then the Bridge should reject the action
    And return an error with reason "ITEM_NOT_IN_INVENTORY"
```
| Method | Endpoint | Description |
| ------- | ------------------------- | ------------------------------ |
| `POST` | `/v1/agents/{id}/process` | Process input and get response |
| `GET` | `/v1/agents/{id}/state` | Get current agent state |
| `PATCH` | `/v1/agents/{id}/state` | Update agent state externally |
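
A processing sketch; the response shape (`dialogue` plus an optional `action`) follows the scenarios above, while the exact field names and the `applyAction()` handler are assumptions.

```typescript
declare const merchant: any;                         // agent from US-2.1's sketch
declare function applyAction(action: object): void;  // hypothetical game-side handler

const reply = await merchant.process({
  input: "Want to make a deal?",
  context: { playerHasPass: true }, // optional world-state hints
});

console.log(reply.dialogue); // e.g. suspicion-laden dialogue, per the persona
if (reply.action) {
  // Per US-2.2, invalid actions are rejected by the Bridge (e.g. ITEM_NOT_IN_INVENTORY)
  // before they reach this point.
  applyAction(reply.action);
}
```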
***
## Epic 3: Memory — RAG Pipeline
Mémory_Stóre // Véctor_Recáll
ᛊ ᛫ ᛟ ᛫ ᚢ ᛫ ᛚ
### US-3.1: Store Observation
> **As a** developer\
> **I want to** store events as semantic memories for an agent\
> **So that** the agent can recall relevant past events during future interactions
```gherkin
Feature: Memory Storage

  Scenario: Store a text observation
    Given an agent with an empty memory store
    When I call agent.memory.store() with text "The player saved my life"
    Then the text should be embedded as a vector
    And stored in the agent's memory database
    And associated with a timestamp

  Scenario: Store a structured event
    Given an agent with an existing memory
    When I call agent.memory.store() with an event object
    Then the event should be serialized to text
    And embedded and stored with metadata
```
| Method | Endpoint | Description |
| -------- | ----------------------------------- | ----------------------------- |
| `POST` | `/v1/agents/{id}/memory` | Store a memory/observation |
| `GET` | `/v1/agents/{id}/memory` | List all memories (paginated) |
| `DELETE` | `/v1/agents/{id}/memory/{memoryId}` | Delete a specific memory |
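
A storage sketch; that `store()` accepts both a raw string and an event object is an assumption drawn directly from the two scenarios, and the event fields are invented for illustration.

```typescript
declare const merchant: any; // agent from US-2.1's sketch

// Free-text observation: embedded as a vector and timestamped by the SDK.
await merchant.memory.store("The player saved my life");

// Structured event: serialized to text, then embedded and stored with metadata.
await merchant.memory.store({
  type: "TRADE_COMPLETED",
  with: "player",
  items: ["healing potion"],
});
```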
***
### US-3.2: Recall Relevant Memories
> **As a** developer\
> **I want to** retrieve memories semantically related to a query\
> **So that** the agent can use past context when responding
```gherkin
Feature: Memory Retrieval

  Scenario: Recall by semantic similarity
    Given an agent with memories about "the player's betrayal" and "a sunny day"
    When I call agent.memory.recall() with query "trust issues"
    Then the memory about betrayal should be returned
    And the sunny day memory should not be returned

  Scenario: Recall with limit
    Given an agent with 100 memories
    When I call agent.memory.recall() with limit 5
    Then at most 5 memories should be returned
    And they should be ordered by relevance score
```
| Method | Endpoint | Description |
| ------ | ------------------------------- | ----------------------------- |
| `POST` | `/v1/agents/{id}/memory/recall` | Semantic search over memories |
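
A recall sketch; the argument and result field names are assumptions over the scenarios above.

```typescript
declare const merchant: any; // agent from US-2.1's sketch

// Semantic search over the agent's memories, capped at five results.
const memories = await merchant.memory.recall({ query: "trust issues", limit: 5 });

// Results are assumed to be ordered by relevance score, highest first.
for (const m of memories) {
  console.log(m.score, m.text);
}
```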
***
## Epic 4: Bridge — Neuro-Symbolic Validation
Brídge_Válidate // Rúle_Chéck
ᛒ ᛫ ᚱ ᛫ ᛁ ᛫ ᛗ
### US-4.1: Validate Agent Action
> **As a** developer\
> **I want to** validate AI-generated actions against my application's rules\
> **So that** agents cannot perform impossible or invalid actions
```gherkin
Feature: Action Validation

  Scenario: Valid action passes validation
    Given a Bridge configured with game rules
    And an agent with inventory ["sword", "shield"]
    When the agent outputs action { type: 'EQUIP', item: 'sword' }
    Then the Bridge should validate the action as VALID
    And return the action for execution

  Scenario: Invalid action rejected
    Given a Bridge configured with game rules
    And an agent with inventory ["sword"]
    When the agent outputs action { type: 'EQUIP', item: 'shield' }
    Then the Bridge should reject the action as INVALID
    And return error { reason: 'ITEM_NOT_OWNED', item: 'shield' }

  Scenario: Custom validation rules
    Given a Bridge with custom rule "players cannot trade during combat"
    When an agent attempts a TRADE action during combat state
    Then the Bridge should reject with reason 'INVALID_DURING_COMBAT'
```
| Method | Endpoint | Description |
| ------ | --------------------- | -------------------------------- |
| `POST` | `/v1/bridge/validate` | Validate an action against rules |
| `POST` | `/v1/bridge/rules` | Register custom validation rules |
| `GET` | `/v1/bridge/rules` | List registered rules |
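
A validation sketch; `Bridge.create()`, the rule object shape, and the verdict shape are assumptions, while the reason codes (`ITEM_NOT_OWNED`, `INVALID_DURING_COMBAT`) come from the scenarios.

```typescript
import { Bridge } from "@forboc/sdk"; // placeholder package name

// Register a custom rule alongside the built-in game rules (shape assumed).
const bridge = await Bridge.create({
  rules: [
    {
      id: "no-trade-in-combat",
      reason: "INVALID_DURING_COMBAT",
      // Reject TRADE actions while the world state says combat is active.
      check: (action: any, state: any) => !(action.type === "TRADE" && state.inCombat),
    },
  ],
});

// Validate a proposed action against the current world state.
const verdict = await bridge.validate(
  { type: "EQUIP", item: "shield" },
  { inventory: ["sword"], inCombat: false },
);
// e.g. { valid: false, error: { reason: "ITEM_NOT_OWNED", item: "shield" } }
```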
***
### US-4.2: Force Agent Action (GM Override)
> **As a** developer\
> **I want to** forcefully inject an action or dialogue into an agent's stream\
> **So that** I can control scripted sequences (cutscenes) without fighting the AI
```gherkin
Feature: GM Override

  Scenario: Inject scripted dialogue
    Given an agent in the middle of a "Thinking" loop
    When I call agent.override({ dialogue: "Follow me, quickly!" })
    Then the agent should immediately output that dialogue
    And the agent's context should be updated so that it remembers saying it

  Scenario: Force movement
    Given an agent deciding where to go
    When I call agent.override({ action: { type: 'MOVE', target: 'door' } })
    Then the agent should execute the move immediately
    And bypass the usual validation rules (GM Authority)
```
| Method | Endpoint | Description |
| ------ | -------------------------- | -------------------------------- |
| `POST` | `/v1/agents/{id}/override` | Force a specific state or action |
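
An override sketch mirroring the two scenarios; the payload shapes are taken directly from them.

```typescript
declare const merchant: any; // agent from US-2.1's sketch

// Inject scripted dialogue: emitted immediately, interrupting the thinking loop.
await merchant.override({ dialogue: "Follow me, quickly!" });

// Force an action for a cutscene; Bridge validation is bypassed (GM Authority).
await merchant.override({ action: { type: "MOVE", target: "door" } });
```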
***
## Epic 5: Soul — Portable Agent State
Sóul_Expórt // Státe_Snápshot
ᛊ ᛫ ᛟ ᛫ ᚢ ᛫ ᛚ
### US-5.1: Export Agent to Soul
> **As a** developer\
> **I want to** export an agent's complete state (persona, memories, stats) as a Soul\
> **So that** it can be persisted, traded, or used in other applications
```gherkin
Feature: Soul Export

  Scenario: Export to local JSON
    Given an agent with persona, memories, and state
    When I call agent.exportSoul({ format: 'json' })
    Then I should receive a JSON object containing all agent data
    And the JSON should follow the Soul schema

  Scenario: Export to IPFS
    Given an agent with memories
    When I call agent.exportSoul({ storage: 'ipfs' })
    Then the Soul should be uploaded to IPFS
    And I should receive the IPFS CID
    And the Soul should be retrievable via the CID
```
| Method | Endpoint | Description |
| ------ | ----------------------------- | ------------------------- |
| `POST` | `/v1/agents/{id}/soul/export` | Export agent to Soul |
| `GET` | `/v1/souls/{cid}` | Retrieve Soul by IPFS CID |
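
An export sketch; the option names mirror the scenarios, while the returned shapes are assumed.

```typescript
declare const merchant: any; // agent from US-2.1's sketch

// Export to a plain JSON object that follows the Soul schema.
const soulJson = await merchant.exportSoul({ format: "json" });

// Or pin the Soul to IPFS and get back its content identifier.
const { cid } = await merchant.exportSoul({ storage: "ipfs" });
console.log(`Soul pinned at ${cid}`);
```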
***
### US-5.2: Import Agent from Soul
> **As a** developer\
> **I want to** recreate an agent from an exported Soul\
> **So that** characters can persist across sessions or transfer between applications
```gherkin
Feature: Soul Import

  Scenario: Import from IPFS CID
    Given a Soul exported with CID "bafybeif..."
    When I call Agent.fromSoul({ cid: 'bafybeif...' })
    Then the agent should be restored with its original persona
    And all memories should be restored
    And the agent state should match the export

  Scenario: Import with state merge
    Given an existing agent and a Soul from another application
    When I call agent.mergeSoul() with the foreign Soul
    Then the agent's memories should include both sets
    And conflicting state should be resolved per merge rules
```
| Method | Endpoint | Description |
| ------ | ---------------------------- | ------------------------------ |
| `POST` | `/v1/agents/import` | Create agent from Soul |
| `POST` | `/v1/agents/{id}/soul/merge` | Merge Soul into existing agent |
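
An import sketch; `foreignSoul` stands in for a Soul object obtained from another application, and the merge behaviour in the comment restates the scenario rather than a confirmed API guarantee.

```typescript
import { Agent } from "@forboc/sdk"; // placeholder package name
declare const merchant: any;         // agent from US-2.1's sketch
declare const foreignSoul: object;   // placeholder for a Soul from another application

// Recreate an agent one-to-one from an exported Soul.
const revived = await Agent.fromSoul({ cid: "bafybeif..." });

// Merge a foreign Soul into an existing agent: memories are combined and
// conflicting state is resolved per the SDK's merge rules.
await merchant.mergeSoul(foreignSoul);
```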
***
## Epic 6: Ghost Agents — Automated QA
Ghóst_Rún // Héadless_Tést
ᛇ ᛫ ᛆ ᛫ ᛟ ᛫ ᛊ ᛫ ᛏ
### US-6.1: Run Ghost Agent Session
> **As a** developer\
> **I want to** run headless AI agents through my application for automated testing\
> **So that** I can validate content, balance, and edge cases at scale
```gherkin
Feature: Ghost Agent Testing

  Scenario: Run exploration test
    Given a level with multiple paths
    When I run a Ghost Agent with goal "maximize exploration"
    Then the agent should traverse all accessible areas
    And report exploration coverage percentage

  Scenario: Detect dead-end
    Given a level with an impassable bug
    When I run a Ghost Agent
    And the agent gets stuck for > 60 seconds
    Then the run should flag a "potential dead-end" at the location
    And capture a screenshot

  Scenario: Batch testing with seeds
    Given 10 procedurally generated levels
    When I run Ghost Agents on all levels in parallel
    Then I should receive aggregate metrics (completion rate, avg time, failures)
```
| Method | Endpoint | Description |
| ------ | ------------------------------- | --------------------------- |
| `POST` | `/v1/ghost/run` | Start a Ghost Agent session |
| `GET` | `/v1/ghost/{sessionId}/status` | Get session status |
| `GET` | `/v1/ghost/{sessionId}/results` | Get session results/metrics |
| `POST` | `/v1/ghost/batch` | Run batch Ghost Agent tests |
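
A Ghost run sketch; the names here (`Ghost.run`, the level identifiers, the option and metric fields) are assumptions, with only the goal string, the 60-second stuck threshold, and the aggregate metrics taken from the scenarios.

```typescript
import { Ghost } from "@forboc/sdk"; // placeholder package name

// Single headless run against one level.
const session = await Ghost.run({
  level: "level-03",            // hypothetical level identifier
  goal: "maximize exploration",
  stuckTimeoutSec: 60,          // flag a potential dead-end after this long
});
const results = await session.results();
console.log(results.explorationCoverage, results.flags);

// Batch run across procedurally generated levels, in parallel.
const batch = await Ghost.batch({
  levels: Array.from({ length: 10 }, (_, i) => `procgen-seed-${i}`),
  parallel: true,
});
console.log(batch.completionRate, batch.avgTimeSec, batch.failures);
```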
***
## Epic 7: Analytics & Debugging — The "Black Box" Insight
Debúg_Log // Tóken_Metrïcs
ᛞ ᛫ ᛖ ᛫ ᛒ ᛫ ᚢ ᛫ ᚷ
### US-7.1: Inspect Agent Thought Process
> **As a** developer\
> **I want to** view the internal "Chain of Thought" logs of an agent\
> **So that** I can understand *why* an NPC made a specific decision (e.g., why it attacked a friendly player)
```gherkin
Feature: Reasoning Inspection

  Scenario: View internal monologue
    Given an agent has just performed an action
    When I query the agent's debug logs
    Then I should see the "Thought" chain that led to the "Action"
    And I should see which Memories were recalled for context

  Scenario: Real-time debug stream
    Given a game running in "Debug Mode"
    When an agent "thinks"
    Then the SDK should emit a `debug:thought` event with the raw SLM reasoning
```
| Method | Endpoint | Description |
| ------ | ----------------------------- | ----------------------------------- |
| `GET` | `/v1/agents/{id}/logs` | Get decision logs for an agent |
| `GET` | `/v1/agents/{id}/logs/latest` | Get the most recent thought process |
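
A debugging sketch; the accessor names and the event-emitter style are assumptions, while the `debug:thought` event name comes from the scenario.

```typescript
declare const merchant: any; // agent from US-2.1's sketch

// Pull the most recent thought chain after the agent has acted.
const latest = await merchant.logs.latest();  // assumed accessor
console.log(latest.thoughts);                 // the "Thought" chain behind the action
console.log(latest.recalledMemories);         // memories pulled in as context

// Subscribe to the raw reasoning stream while running in Debug Mode.
merchant.on("debug:thought", (thought: string) => {
  console.debug("[debug:thought]", thought);
});
```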
***
### US-7.2: Monitor Token Usage & Cost
> **As a** studio lead\
> **I want to** track token usage and API calls per agent/session\
> **So that** I can optimize performance and manage infrastructure costs
```gherkin
Feature: Usage Monitoring

  Scenario: Track session tokens
    Given an active game session
    When an agent generates dialogue
    Then the token usage count should increment
    And I should be able to set a hard limit to prevent overruns

  Scenario: View dashboard metrics
    Given a dashboard user
    When I view the "Usage" tab
    Then I should see a breakdown of Local (Free) vs. Cloud (Paid) inference calls
```
| Method | Endpoint | Description |
| ------ | ------------------ | ------------------------- |
| `GET` | `/v1/usage/stats` | Get usage statistics |
| `GET` | `/v1/usage/limits` | Check current rate limits |
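
A usage-monitoring sketch; `Usage.stats()`, `Usage.limits()`, and their return fields are assumptions layered over the two endpoints above. A hard token cap would presumably be enforced client-side (see US-1.3) or from the dashboard rather than through these read-only endpoints.

```typescript
import { Usage } from "@forboc/sdk"; // placeholder package name

// Aggregate usage, split into local (free) vs. cloud (paid) inference calls.
const stats = await Usage.stats();
console.log(stats.localCalls, stats.cloudCalls, stats.totalTokens);

// Check the rate limits currently in force.
const limits = await Usage.limits();
console.log(limits);
```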