Conversation Memory & State Management

Version: ≥5.5.x

Overview

Conversation Memory (IConversationMemory) is the heart of EDDI's stateful architecture. It's a Java object that represents the complete state of a conversation, including history, user data, context, and intermediate processing results. This object is passed through the entire Lifecycle Pipeline, with each task reading from and writing to it.

What is Conversation Memory?

Think of Conversation Memory as a living document that captures everything about a conversation:

  • Who: User ID and bot ID

  • What: All messages exchanged (both user inputs and bot outputs)

  • When: Timestamp of each interaction

  • Context: Data passed from external systems (user profile, session info, etc.)

  • State: Current processing stage (READY, IN_PROGRESS, ENDED, etc.)

  • Properties: Extracted and stored data (user preferences, entities, variables)

  • History: Complete record of all previous conversation steps

Key Concepts

1. Conversation Steps

A conversation is divided into steps, where each step represents one complete interaction cycle, from user input to bot response.

Each step contains:

  • Input: What the user said

  • Actions: Actions triggered by behavior rules

  • Data: Results from lifecycle tasks (parsed expressions, API responses, LLM outputs)

  • Output: Bot's response
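As an illustration, a single recorded step might look roughly like this (the field names are simplified for readability; the JSON EDDI actually persists is more verbose and varies by version):

```json
{
  "input": "what is the weather in Vienna?",
  "data": {
    "expressions": "weather(vienna)",
    "httpCalls": { "weather": { "temp": 21 } }
  },
  "actions": ["fetch_weather", "show_weather"],
  "output": "It is currently 21 degrees in Vienna."
}
```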

2. Current Step vs Previous Steps

  • Current Step: Writable, being built during lifecycle execution

  • Previous Steps: Read-only; provide the conversation history

3. Memory Scopes

EDDI supports different scopes for storing data:

Scope          Lifetime               Use Case

step           Single interaction     Temporary data needed only for this response

conversation   Entire conversation    User preferences, extracted entities (persists across steps)

longTerm       Across conversations   User profile data that should persist between sessions
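A scope is typically chosen where the data is written, for example in a property-setter configuration. The fragment below follows EDDI's property-setter config style, but treat the exact field names (setOnActions, setProperties, fromObjectPath) as assumptions to verify against your EDDI version:

```json
{
  "setOnActions": [
    {
      "actions": ["user_gave_name"],
      "setProperties": [
        {
          "name": "userName",
          "fromObjectPath": "memory.current.input",
          "scope": "conversation"
        }
      ]
    }
  ]
}
```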

4. Undo/Redo Support

Conversation Memory supports undo/redo operations, letting a conversation be rolled back to an earlier step and replayed.

This enables scenarios like:

  • User makes a mistake and wants to go back

  • Testing different conversation paths

  • Debugging bot behavior
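To make the mechanics concrete, here is a self-contained sketch of an undo/redo step stack. EDDI's real implementation sits behind IConversationMemory and differs in detail; this is an illustration of the behavior described above, not EDDI code:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// Minimal undo/redo over a list of committed conversation steps.
class StepStack {
    private final List<String> steps = new ArrayList<>();   // committed steps
    private final Deque<String> redoStack = new ArrayDeque<>();

    void commit(String step) {
        steps.add(step);
        redoStack.clear();   // a new step invalidates any redo history
    }

    boolean undo() {
        if (steps.isEmpty()) return false;
        redoStack.push(steps.remove(steps.size() - 1));
        return true;
    }

    boolean redo() {
        if (redoStack.isEmpty()) return false;
        steps.add(redoStack.pop());
        return true;
    }

    int size() { return steps.size(); }
}
```

Note that committing a new step after an undo discards the redo history, which is exactly the "go back and try a different path" scenario.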

Conversation Memory Structure

Core Properties
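At the top level, a serialized conversation memory carries identity, state, and history. The sketch below is illustrative only; field names vary between EDDI versions, and the steps array is left empty for brevity:

```json
{
  "botId": "weather_bot",
  "botVersion": 3,
  "userId": "user-123",
  "environment": "unrestricted",
  "conversationState": "READY",
  "conversationProperties": {
    "userName": { "value": "Ada", "scope": "conversation" }
  },
  "conversationSteps": []
}
```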

Conversation States
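The overview names READY, IN_PROGRESS, and ENDED; the sketch below adds two states commonly reported by EDDI. Treat the full list and the comments as assumptions to verify against your EDDI version:

```java
// Illustrative sketch of EDDI's conversation states (verify against your version).
enum ConversationState {
    READY,                  // waiting for the next user input
    IN_PROGRESS,            // a lifecycle run is currently executing
    EXECUTION_INTERRUPTED,  // processing stopped mid-run
    ERROR,                  // an unrecoverable error occurred
    ENDED                   // the conversation is finished
}
```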

How Lifecycle Tasks Use Memory

Each lifecycle task follows the same pattern: read the data that earlier tasks have stored, do its own processing, and write the results back into the current step.
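As a minimal sketch, using simplified stand-in classes rather than EDDI's actual interfaces, the pattern looks like this:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Stand-in for the writable current step: a keyed map of task results.
class Step {
    private final Map<String, Object> data = new LinkedHashMap<>();
    void storeData(String key, Object value) { data.put(key, value); }
    Object getData(String key) { return data.get(key); }
}

// Every task reads what earlier tasks stored and writes its own result.
interface LifecycleTask {
    void execute(Step currentStep);
}

// Example: a toy parser reads the raw input and stores a parsed expression.
class ParserTask implements LifecycleTask {
    public void execute(Step currentStep) {
        String input = (String) currentStep.getData("input");
        currentStep.storeData("expressions", "greeting(" + input.toLowerCase() + ")");
    }
}
```

Because each task only communicates through the step's data, tasks stay decoupled and can be rearranged in the pipeline.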

Example: Behavior Rules Task
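The behavior rules task reads the parsed expressions from the current step, evaluates its rule groups, and stores the actions of the matching rules back into the step. Illustratively (the key naming is an assumption):

```json
{
  "key": "actions",
  "value": ["greet", "ask_for_name"]
}
```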

Example: LangChain Task
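The LangChain task typically reads the conversation history plus the current input, sends them to the configured model, and stores the model's reply so the output stage can render it. An illustrative, assumed data entry:

```json
{
  "key": "output:text:llm",
  "value": "Sure, here is a summary of your last order."
}
```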

Example: HTTP Calls Task
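The HTTP calls task stores each response in the current step under the call's configured name, so that later tasks and templates can reference it. Again illustrative; the key format is an assumption:

```json
{
  "key": "httpCalls:weather",
  "value": { "city": "Vienna", "temp": 21 }
}
```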

Accessing Memory in Configurations

In Output Templates (Thymeleaf)
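Output configurations can embed inline Thymeleaf expressions that read from memory. The structure below follows EDDI's output config style, and the memory paths are assumed examples; adjust both to your bot:

```json
{
  "action": "show_weather",
  "outputs": [
    {
      "type": "text",
      "valueAlternatives": [
        "Hello [[${memory.current.properties.userName}]], it is [[${memory.current.httpCalls.weather.temp}]] degrees."
      ]
    }
  ]
}
```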

In HTTP Call Body Templates
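HTTP call definitions can template their request bodies the same way. A hedged sketch; the field names and memory paths are assumptions to check against the HTTP-calls reference for your version:

```json
{
  "name": "createOrder",
  "actions": ["place_order"],
  "request": {
    "path": "/orders",
    "method": "post",
    "contentType": "application/json",
    "body": "{ \"user\": \"[[${memory.current.properties.userName}]]\" }"
  }
}
```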

In Behavior Rule Conditions
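Behavior rules can condition on memory content, for example via a dynamic-value matcher. The condition type and config keys below are assumptions based on EDDI's behavior extension style:

```json
{
  "name": "returning_user",
  "actions": ["welcome_back"],
  "conditions": [
    {
      "type": "dynamicvaluematcher",
      "configs": {
        "valuePath": "memory.current.properties.userName",
        "equals": "Ada"
      }
    }
  ]
}
```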

Memory Persistence

Storage Mechanism

  1. During Processing: Memory resides in the Java heap (fast access)

  2. After Each Step: Memory is serialized and saved to MongoDB

  3. On Next Request: Memory is loaded from MongoDB and cached

Caching Strategy

EDDI uses Infinispan for distributed caching:

  • Fast retrieval of frequently accessed conversations

  • Reduced MongoDB load

  • Supports horizontal scaling across multiple EDDI instances

MongoDB Structure
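Each conversation is persisted as a single document. The field layout below is an assumption for illustration; inspect your own database for the exact schema:

```json
{
  "_id": "<objectId>",
  "botId": "weather_bot",
  "conversationState": "ENDED",
  "conversationSteps": [
    {
      "data": [
        { "key": "input:initial", "value": "hi" },
        { "key": "actions", "value": ["greet"] }
      ]
    }
  ]
}
```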

Best Practices

1. Use Appropriate Scopes

2. Clean Up Large Data

If you store large API responses, consider trimming them once they have been used.

Extract only what you need instead of storing the entire response.
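One way to do this is to copy just the needed field into a property and let the rest of the response age out with the step. As before, the property-setter field names are assumptions to verify:

```json
{
  "setOnActions": [
    {
      "actions": ["fetch_weather"],
      "setProperties": [
        {
          "name": "temperature",
          "fromObjectPath": "memory.current.httpCalls.weather.temp",
          "scope": "conversation"
        }
      ]
    }
  ]
}
```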

3. Leverage History for Context

When calling LLMs, you can control how much of the conversation history is sent with each request.
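The LangChain task takes a per-task parameters map; the exact key that limits history is version-dependent, so maxMessages below is a hypothetical placeholder. Check the LangChain task reference for your release:

```json
{
  "actions": ["answer_with_llm"],
  "type": "openai",
  "parameters": {
    "maxMessages": "10"
  }
}
```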

4. Use Context for External Data

Pass data from your application via context instead of hardcoding it in the bot.
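For example, the client can attach context entries to the request that triggers a conversation step. The type/value wrapper follows EDDI's context convention, but verify the exact shape for your version:

```json
{
  "input": "show my orders",
  "context": {
    "userProfile": {
      "type": "object",
      "value": { "userId": "u-123", "tier": "gold" }
    }
  }
}
```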

That data can then be read anywhere in the bot's logic (templates, behavior rules, HTTP calls).
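For instance, an output template could read the context entry like this (the memory path is an assumed example):

```json
{
  "type": "text",
  "valueAlternatives": [
    "Welcome back, [[${memory.current.context.userProfile.tier}]] customer!"
  ]
}
```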

Memory Flow Example

Let's trace how memory flows through a complete conversation step:

1. User Request
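The client sends the user's message (and optional context) to the bot engine's talk endpoint, typically POST /bots/{environment}/{botId}/{conversationId}; verify the path for your deployment:

```json
{
  "input": "what is the weather in Vienna?",
  "context": {
    "city": { "type": "string", "value": "Vienna" }
  }
}
```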

2. Memory Initialization

3. Parser Task Execution

4. Behavior Rules Execution

5. HTTP Call Execution

6. Output Generation

7. Memory Persistence

8. Response to User
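The engine replies with the updated conversation, from which the client renders the bot's output. A simplified sketch of the response:

```json
{
  "conversationState": "READY",
  "conversationOutputs": [
    { "output": ["It is currently 21 degrees in Vienna."] }
  ]
}
```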

Advanced Topics

Accessing Nested Data
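Thymeleaf property paths can reach into nested structures, including list indices. The memory path here is an assumed example:

```json
{
  "valueAlternatives": [
    "Your first item: [[${memory.current.httpCalls.order.items[0].name}]]"
  ]
}
```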

Iterating Over History
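Thymeleaf's textual syntax can iterate, for example over previous steps. Both the [# th:each] form and the memory.previousSteps path are assumptions to verify in your templates:

```json
{
  "valueAlternatives": [
    "So far you said: [# th:each=\"step : ${memory.previousSteps}\"][[${step.input}]] [/]"
  ]
}
```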

Conditional Memory Access
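Similarly, output can be made conditional on memory content with th:if (the memory path is again an assumed example):

```json
{
  "valueAlternatives": [
    "[# th:if=\"${memory.current.properties.userName != null}\"]Welcome back, [[${memory.current.properties.userName}]]![/]"
  ]
}
```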
