Developer Quickstart Guide

Version: 6.0.0

This guide helps developers quickly understand EDDI's architecture and start building agents.

Understanding EDDI in 5 Minutes

What EDDI Is

EDDI is middleware for conversational AI—it sits between your app and AI services (OpenAI, Claude, etc.), providing:

  • Orchestration: Control when and how LLMs are called

  • Business Logic: IF-THEN rules for decision-making

  • State Management: Maintain conversation history and context

  • API Integration: Call external REST APIs from agent logic

Key Concept: The Lifecycle Pipeline

Every user message goes through a pipeline of tasks:

Input → Parser → Rules → API/LLM → Output

Each task transforms the Conversation Memory (a state object containing everything about the conversation).

Agent Composition

Agents aren't code—they're JSON configurations:
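For example, a minimal agent definition is a small JSON document that references other resources by URI (shape based on EDDI's documented bot store format; verify field names against your version — `<workflowId>` is a placeholder):

```json
{
  "packages": [
    "eddi://ai.labs.package/packagestore/packages/<workflowId>?version=1"
  ],
  "channels": []
}
```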

Quick Setup

Prerequisites

  • Java 25

  • Maven 3.8.4

  • MongoDB 6.0+

  • Docker (optional, recommended)

Run with Docker (Easiest)
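A minimal sketch, assuming the `labsai/eddi` image on Docker Hub and a MongoDB container on a shared network; the connection-string variable name is an assumption based on Quarkus' property-to-environment mapping, so check the repository's compose file for the exact names:

```shell
# Sketch: verify image tags and env variable names against the repo's
# docker-compose.yml before relying on this.
docker network create eddi-net
docker run -d --name mongodb --network eddi-net -p 27017:27017 mongo:6
docker run -d --name eddi --network eddi-net -p 7070:7070 \
  -e MONGODB_CONNECTION_STRING="mongodb://mongodb:27017/eddi" \
  labsai/eddi
```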

Run from Source
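A sketch of the source build, assuming the Maven build produces a Quarkus runner jar (verify the artifact path against the repository's README):

```shell
git clone https://github.com/labsai/EDDI.git
cd EDDI
mvn clean install
# EDDI is Quarkus-based; the runner jar path below is an assumption.
java -jar target/quarkus-app/quarkus-run.jar
```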

💡 Secrets Vault: If you plan to store API keys through the Manager UI or use ${eddivault:...} references, set the vault master key first:
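The variable name below is a placeholder, not the documented name; check the Secrets Vault section for the real key:

```shell
# Placeholder variable name; see the Secrets Vault docs for the actual one.
# Any passphrase works for local development.
export EDDI_VAULT_MASTER_KEY="local-dev-passphrase"
```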

Without this, the vault is disabled and secret endpoints return HTTP 503. Any passphrase works for local dev. See Secrets Vault for full details.

Configuring AI Tools

If you plan to use the Web Search or Weather tools in your agents, you need to set up API keys in your environment or application.properties.

Web Search (Google):

  • eddi.tools.websearch.provider=google

  • eddi.tools.websearch.google.api-key=...

  • eddi.tools.websearch.google.cx=...

Weather (OpenWeatherMap):

  • eddi.tools.weather.openweathermap.api-key=...

See LangChain Documentation for details.

Your First Agent (via API)

1. Create a Dictionary

Dictionaries define what users can say:
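For example, a dictionary that maps "hello" to a greeting expression, POSTed to the regular dictionary store (field names follow EDDI's documented dictionary format; verify against your version):

```json
{
  "language": "en",
  "words": [
    {
      "word": "hello",
      "expressions": ["greeting(hello)"],
      "frequency": 0
    },
    {
      "word": "bye",
      "expressions": ["goodbye(bye)"],
      "frequency": 0
    }
  ],
  "phrases": []
}
```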

Response: Dictionary ID (e.g., eddi://ai.labs.parser.dictionaries.regular/regulardictionarystore/regulardictionaries/abc123?version=1)

2. Create Behavior Rules

Rules define what the agent does:
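A sketch of a behavior set with one rule: when the parser produced a greeting expression, record the `greet` action (condition type names per EDDI's behavior docs; verify against your version):

```json
{
  "behaviorGroups": [
    {
      "name": "Smalltalk",
      "behaviorRules": [
        {
          "name": "Greeting",
          "actions": ["greet"],
          "conditions": [
            {
              "type": "inputmatcher",
              "configs": {
                "expressions": "greeting(*)",
                "occurrence": "currentStep"
              }
            }
          ]
        }
      ]
    }
  ]
}
```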

Response: Behavior set ID

3. Create Output Templates
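An output set maps actions to response text. A sketch following EDDI's documented output format (verify field names against your version):

```json
{
  "outputSet": [
    {
      "action": "greet",
      "timesOccurred": 0,
      "outputs": [
        {
          "type": "text",
          "valueAlternatives": ["Hello! How can I help you?"]
        }
      ],
      "quickReplies": []
    }
  ]
}
```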

Response: Output set ID

4. Create a Workflow

Workflows bundle extensions together:
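A sketch wiring the parser (with the dictionary), behavior rules, and output together. The URIs are placeholders built from the IDs returned in the earlier steps, and the field names follow EDDI's documented package format, so verify against your version:

```json
{
  "packageExtensions": [
    {
      "type": "eddi://ai.labs.parser",
      "extensions": {
        "dictionaries": [
          {
            "type": "eddi://ai.labs.parser.dictionaries.regular",
            "config": {
              "uri": "eddi://ai.labs.parser.dictionaries.regular/regulardictionarystore/regulardictionaries/abc123?version=1"
            }
          }
        ]
      },
      "config": {}
    },
    {
      "type": "eddi://ai.labs.behavior",
      "config": {
        "uri": "eddi://ai.labs.behavior/behaviorstore/behaviorsets/<behaviorSetId>?version=1"
      }
    },
    {
      "type": "eddi://ai.labs.output",
      "config": {
        "uri": "eddi://ai.labs.output/outputstore/outputsets/<outputSetId>?version=1"
      }
    }
  ]
}
```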

Response: Workflow ID

5. Create an Agent
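The agent itself is another small JSON document referencing the workflow (shape per EDDI's bot store format; `<workflowId>` is a placeholder for the ID returned in the previous step):

```json
{
  "packages": [
    "eddi://ai.labs.package/packagestore/packages/<workflowId>?version=1"
  ],
  "channels": []
}
```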

Response: Agent ID (e.g., agent-abc-123)

6. Deploy the Agent
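Deployment is a single call. The `unrestricted` environment segment and the `version` query parameter follow EDDI's documented REST layout, but adjust to your setup; `<agentId>` is a placeholder:

```
POST /administration/unrestricted/deploy/<agentId>?version=1
```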

7. Chat with Your Agent
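Chatting is two calls: open a conversation, then send input. Paths are sketched from EDDI's documented bot endpoints; the IDs are placeholders:

```
POST /bots/unrestricted/<agentId>
→ returns a conversation ID

POST /bots/unrestricted/<agentId>/<conversationId>
Content-Type: application/json

{ "input": "hello" }
```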

Adding an LLM (OpenAI Example)

1. Create LangChain Configuration
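A sketch of a LangChain configuration for OpenAI. Field names are based on EDDI's documented LangChain store format but should be treated as illustrative; note the API key can be a vault reference as described in the setup section:

```json
{
  "langchainTasks": [
    {
      "id": "openai-chat",
      "type": "openai",
      "description": "Answer general questions",
      "actions": ["call_llm"],
      "parameters": {
        "apiKey": "${eddivault:...}",
        "modelName": "gpt-4o-mini",
        "systemMessage": "You are a helpful assistant."
      }
    }
  ]
}
```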

2. Add LangChain to Workflow

Add this extension to your workflow:
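A sketch of the extension entry (the `eddi://ai.labs.langchain` type follows EDDI's extension URI convention; `<langchainId>` is a placeholder for the ID from the previous step):

```json
{
  "type": "eddi://ai.labs.langchain",
  "config": {
    "uri": "eddi://ai.labs.langchain/langchainstore/langchains/<langchainId>?version=1"
  }
}
```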

3. Create Behavior Rule to Trigger LLM
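A sketch of such a rule: any input the parser tagged as a question triggers the `call_llm` action the LangChain task listens for (the `question(*)` expression is illustrative; it must match expressions your dictionary actually produces):

```json
{
  "name": "AskLLM",
  "actions": ["call_llm"],
  "conditions": [
    {
      "type": "inputmatcher",
      "configs": {
        "expressions": "question(*)"
      }
    }
  ]
}
```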

Now when users ask questions, the LLM is automatically called!

Understanding the Flow

Let's trace what happens when a user says "hello":

1. API Request
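A "hello" turn starts as a plain REST call (path per EDDI's documented bot endpoints; the IDs are placeholders):

```
POST /bots/unrestricted/<agentId>/<conversationId>
{ "input": "hello" }
```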

2. RestAgentEngine

  • Validates agent ID

  • Creates/loads conversation memory

  • Submits to ConversationCoordinator

3. ConversationCoordinator

  • Ensures sequential processing (no race conditions)

  • Queues message for this conversation

4. LifecycleManager Executes Pipeline

Parser Task: turns the raw input into expressions (e.g. "hello" → greeting(hello)) using the dictionaries in the workflow.

Behavior Rules Task: matches those expressions against rule conditions and records the actions of the rules that fired (e.g. greet).

Output Task: maps the recorded actions to response text from the output set.
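Conceptually, each task appends its result to the current conversation step. A simplified view of the step data after these three tasks ran (illustrative, not EDDI's literal storage format):

```json
{
  "input:initial": "hello",
  "expressions:parsed": "greeting(hello)",
  "actions": ["greet"],
  "output:text": "Hello! How can I help you?"
}
```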

5. Save & Return

  • Memory saved to MongoDB

  • Response returned to user

Key Architectural Components

IConversationMemory

The state object passed through the pipeline:
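EDDI's real interface lives in the source tree; as a hedged simplification (not the actual `IConversationMemory` signatures), it behaves like a stack of steps, each holding the named data that tasks store and retrieve:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.LinkedHashMap;
import java.util.Map;

// Simplified stand-in for EDDI's conversation memory: a stack of steps,
// one per user turn, each holding key/value data produced by lifecycle tasks.
public class ConversationMemorySketch {
    public static class Step {
        private final Map<String, Object> data = new LinkedHashMap<>();
        public void storeData(String key, Object value) { data.put(key, value); }
        public Object getData(String key) { return data.get(key); }
    }

    private final Deque<Step> steps = new ArrayDeque<>();

    // Called at the start of each user turn.
    public Step startStep() { Step s = new Step(); steps.push(s); return s; }
    public Step getCurrentStep() { return steps.peek(); }
    public int stepCount() { return steps.size(); }
}
```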

ILifecycleTask

Interface all tasks implement:
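As a simplified stand-in (EDDI's actual `ILifecycleTask` signature differs; see the source), every task exposes an ID and an execute method over the shared memory, and the pipeline just runs tasks in order:

```java
import java.util.Map;

// Hypothetical simplification of the lifecycle task contract: each task
// reads from and writes to the shared memory, here modeled as a Map.
public class LifecycleTaskSketch {
    interface LifecycleTask {
        String getId();
        void execute(Map<String, Object> memory);
    }

    // A toy parser: tags "hello" with a greeting expression.
    static class ParserTask implements LifecycleTask {
        public String getId() { return "ai.labs.parser"; }
        public void execute(Map<String, Object> memory) {
            String input = (String) memory.get("input");
            if ("hello".equalsIgnoreCase(input)) {
                memory.put("expressions", "greeting(hello)");
            }
        }
    }

    // The pipeline is just sequential execution over the same memory.
    public static Map<String, Object> run(Map<String, Object> memory,
                                          LifecycleTask... tasks) {
        for (LifecycleTask t : tasks) t.execute(memory);
        return memory;
    }
}
```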

ConversationCoordinator

Ensures messages are processed in order:
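The coordination idea can be sketched independently of EDDI's code: keep one future chain per conversation ID and append each incoming message to its conversation's chain, so two messages for the same conversation never interleave while different conversations still run in parallel:

```java
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of per-conversation serialization (the idea behind a conversation
// coordinator, not EDDI's actual implementation). A production version
// would also evict completed chains to avoid unbounded map growth.
public class CoordinatorSketch {
    private final Map<String, CompletableFuture<Void>> tails = new ConcurrentHashMap<>();

    public CompletableFuture<Void> submit(String conversationId, Runnable work) {
        return tails.compute(conversationId, (id, tail) -> {
            CompletableFuture<Void> prev =
                    (tail == null) ? CompletableFuture.completedFuture(null) : tail;
            // Chain the new work after everything already queued for this ID.
            return prev.thenRunAsync(work);
        });
    }
}
```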

Common Patterns

Pattern 1: Conditional LLM Invocation

Only call LLM for complex queries:
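One way to gate the LLM: trigger `call_llm` only when the input matched none of the known dictionary expressions, e.g. with a negated input matcher. The condition type names below follow EDDI's behavior rule documentation but should be verified against your version:

```json
{
  "name": "ComplexQueryFallback",
  "actions": ["call_llm"],
  "conditions": [
    {
      "type": "negation",
      "conditions": [
        {
          "type": "inputmatcher",
          "configs": { "expressions": "greeting(*), goodbye(*)" }
        }
      ]
    }
  ]
}
```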

Pattern 2: API Call Before LLM

Fetch data, then ask LLM to format it:

The LLM receives the API response in memory and can format it naturally.
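In the workflow, this pattern is mostly extension ordering: place the HTTP-calls extension before the LangChain extension so the API response is already in memory when the LLM task runs. A sketch (URIs are placeholders; extension type names per EDDI's conventions):

```json
[
  {
    "type": "eddi://ai.labs.httpcalls",
    "config": {
      "uri": "eddi://ai.labs.httpcalls/httpcallsstore/httpcalls/<httpCallsId>?version=1"
    }
  },
  {
    "type": "eddi://ai.labs.langchain",
    "config": {
      "uri": "eddi://ai.labs.langchain/langchainstore/langchains/<langchainId>?version=1"
    }
  }
]
```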

Pattern 3: Context-Aware Responses

Use context passed from your app:
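Context values are typically passed alongside the message. A sketch of the request body; the type/value wrapper follows EDDI's documented context format, but verify against your version:

```json
{
  "input": "What's the status of my order?",
  "context": {
    "username": {
      "type": "string",
      "value": "Jane"
    }
  }
}
```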

Access in output template:
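EDDI's output templating uses Thymeleaf-style inline expressions. Assuming a string context value named `username` was passed as above, a sketch of an output entry:

```json
{
  "type": "text",
  "valueAlternatives": ["Hi [[${context.username}]], let me check that for you."]
}
```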

Next Steps

Learn More

Use the Dashboard

Visit http://localhost:7070 to:

  • Create agents visually

  • Test conversations interactively

  • Browse configurations

  • Monitor deployments

Explore Examples

Check the examples/ folder for:

  • Weather agent (API integration)

  • Support agent (multi-turn conversations)

  • E-commerce agent (context management)

Build Your Own Task

Create a custom lifecycle task:
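Using a simplified stand-in for the task interface (EDDI's real `ILifecycleTask` signature differs, so treat this as a sketch of the shape rather than a drop-in class), a custom task is just a class that writes into the shared memory; here, a hypothetical task stamping the processing time:

```java
import java.util.Map;

// Hypothetical custom lifecycle task: records when the step was processed
// so later tasks or output templates could use it. The interface is a
// simplified stand-in, not EDDI's actual ILifecycleTask.
public class TimestampTaskSketch {
    interface LifecycleTask {
        void execute(Map<String, Object> memory);
    }

    static class TimestampTask implements LifecycleTask {
        public void execute(Map<String, Object> memory) {
            memory.put("processedAt", System.currentTimeMillis());
        }
    }
}
```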

Register it in CDI and it becomes available as an extension!

Troubleshooting

Agent doesn't respond

  1. Check deployment status: GET /administration/deploy/{agentId}

  2. Check conversation state: GET /conversationstore/conversations/{conversationId}

  3. Check logs for errors

Rules not matching

  • Verify dictionary expressions match your input

  • Check rule conditions are correct

  • Use occurrence: "anyStep" to match across conversation

LLM not being called

  • Ensure behavior rule triggers the LLM action

  • Check LangChain configuration is in the package

  • Verify API key is correct

Memory not persisting

  • Ensure MongoDB is running

  • Check connection string in config

  • Use the correct scope (conversation, not step)

Getting Help

  • Documentation: https://github.com/labsai/EDDI/tree/main/docs

  • GitHub: https://github.com/labsai/EDDI

  • Issues: https://github.com/labsai/EDDI/issues

Summary

EDDI's power comes from its configurable pipeline architecture:

  • Agents are JSON configurations, not code

  • Everything flows through Conversation Memory

  • Tasks are pluggable and reusable

  • LLMs are orchestrated, not just proxied

Start simple, then add complexity as needed. The architecture scales from basic agents to sophisticated multi-API workflows.
