Langchain
Version: ≥5.3.x
The Langchain lifecycle task (using the langchain4j library) allows EDDI to leverage the capabilities of various large language model (LLM) APIs. This task seamlessly integrates with a range of currently supported APIs, including OpenAI's ChatGPT, Hugging Face models, Anthropic Claude, Google Gemini, and Ollama, thereby facilitating advanced natural language processing within EDDI bots.
Note: To streamline the initial setup and configuration of the Langchain lifecycle task, you can utilize the "Bot Father" bot. The "Bot Father" bot guides you through the process of creating and configuring tasks, ensuring that you properly integrate the various LLM APIs. By using "Bot Father," you can quickly get your Langchain configurations up and running with ease, leveraging its intuitive interface and automated assistance to minimize errors and enhance productivity.
Configuration
The Langchain task is configured through a JSON object that defines a list of tasks, where each task can interact with a specific LLM API. These tasks can be tailored to specific use cases, utilizing unique parameters and settings.
Configuration Parameters
actions: Defines the actions that the lifecycle task is responsible for.
id: A unique identifier for the lifecycle task.
type: Specifies the type of API (e.g., `openai`, `huggingface`, `anthropic`, `gemini`, `ollama`).
description: Brief description of what the task accomplishes.
parameters: Key-value pairs for API configuration such as API keys, model identifiers, and other API-specific settings.
systemMessage: Optional message to include in the system context.
prompt: Optional prompt that overrides the user input in the request to the LLM. If not set or empty, the user input is used instead.
sendConversation: Boolean indicating whether to send the entire conversation or only the user input (`true` or `false`, default: `true`).
includeFirstBotMessage: Boolean indicating whether to include the first bot message in the conversation (`true` or `false`, default: `true`).
logSizeLimit: Limit for the size of the log (`-1` for no limit).
convertToObject: Boolean indicating whether to convert the LLM response to an object (`true` or `false`, default: `false`). Note: For this to work, the response from your LLM needs to be in valid JSON format!
addToOutput: Boolean indicating whether the LLM output should automatically be added to the output (`true` or `false`, default: `false`).
Example Configuration
Here’s an example of how to configure a Langchain task for various LLM APIs:
OpenAI Configuration
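A minimal OpenAI task might look like the following sketch. It is built from the parameters documented above; the wrapping `langchains` array and the keys inside `parameters` (`apiKey`, `modelName`) are assumptions for illustration — consult the JSON schema endpoint below for the authoritative field names of your EDDI version.

```json
{
  "langchains": [
    {
      "actions": ["ask_llm"],
      "id": "openai-task",
      "type": "openai",
      "description": "Answers user questions with an OpenAI model",
      "parameters": {
        "apiKey": "<YOUR_OPENAI_API_KEY>",
        "modelName": "gpt-4o"
      },
      "systemMessage": "You are a helpful assistant.",
      "sendConversation": true,
      "includeFirstBotMessage": true,
      "logSizeLimit": -1,
      "convertToObject": false,
      "addToOutput": true
    }
  ]
}
```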
Hugging Face Configuration
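A Hugging Face task follows the same shape, differing only in `type` and the provider-specific `parameters`. The parameter names `accessToken` and `modelId` below are assumptions for illustration:

```json
{
  "actions": ["ask_hf"],
  "id": "huggingface-task",
  "type": "huggingface",
  "description": "Queries a hosted Hugging Face model",
  "parameters": {
    "accessToken": "<YOUR_HF_ACCESS_TOKEN>",
    "modelId": "<model-id>"
  },
  "sendConversation": true,
  "addToOutput": true
}
```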
Anthropic Configuration
Note: Anthropic doesn't allow the first message to be from the bot; therefore, `includeFirstBotMessage` should be set to `false` for Anthropic API calls.
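A sketch of an Anthropic task with `includeFirstBotMessage` disabled, as the note above requires. The parameter names `apiKey` and `modelName` and the model identifier are illustrative assumptions:

```json
{
  "actions": ["ask_claude"],
  "id": "anthropic-task",
  "type": "anthropic",
  "description": "Queries an Anthropic Claude model",
  "parameters": {
    "apiKey": "<YOUR_ANTHROPIC_API_KEY>",
    "modelName": "<claude-model-name>"
  },
  "includeFirstBotMessage": false,
  "addToOutput": true
}
```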
Vertex Gemini Configuration
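Since Gemini is accessed here via Vertex AI, the task typically needs Google Cloud project details rather than a plain API key. The parameter names `project`, `location`, and `modelName` below are assumptions for illustration:

```json
{
  "actions": ["ask_gemini"],
  "id": "gemini-task",
  "type": "gemini",
  "description": "Queries a Gemini model via Vertex AI",
  "parameters": {
    "project": "<YOUR_GCP_PROJECT_ID>",
    "location": "<region>",
    "modelName": "<gemini-model-name>"
  },
  "addToOutput": true
}
```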
Ollama Configuration
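For a locally hosted Ollama instance, no API key is needed; the task points at the Ollama server instead. The parameter names `baseUrl` and `modelName` are assumptions for illustration (Ollama listens on port 11434 by default):

```json
{
  "actions": ["ask_local_llm"],
  "id": "ollama-task",
  "type": "ollama",
  "description": "Queries a locally running Ollama model",
  "parameters": {
    "baseUrl": "http://localhost:11434",
    "modelName": "<model-name>"
  },
  "addToOutput": true
}
```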
API Endpoints
The Langchain task can be managed via specific API endpoints, facilitating easy setup, management, and operation within the EDDI ecosystem.
Endpoints Overview
Read JSON Schema
Endpoint:
GET /langchainstore/langchains/jsonSchema
Description: Retrieves the JSON schema for validating Langchain configurations
List Langchain Descriptors
Endpoint:
GET /langchainstore/langchains/descriptors
Description: Returns a list of all Langchain configurations with optional filters
Read Langchain Configuration
Endpoint:
GET /langchainstore/langchains/{id}
Description: Fetches a specific Langchain configuration by its ID
Update Langchain Configuration
Endpoint:
PUT /langchainstore/langchains/{id}
Description: Updates an existing Langchain configuration
Create Langchain Configuration
Endpoint:
POST /langchainstore/langchains
Description: Creates a new Langchain configuration
Duplicate Langchain Configuration
Endpoint:
POST /langchainstore/langchains/{id}
Description: Duplicates an existing Langchain configuration
Delete Langchain Configuration
Endpoint:
DELETE /langchainstore/langchains/{id}
Description: Deletes a specific Langchain configuration
Extended Configuration Options
The LangChain lifecycle task offers extended configuration options to fine-tune the behavior of tasks before and after they interact with LLM APIs. These configurations help in managing properties, handling responses, and controlling retries.
Example of Extended Configuration
Below is an example configuration showcasing more advanced options such as preRequest
, postResponse
, and retryHttpCallInstruction
.
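A sketch of a task using property instructions, built from the fields documented below (`name`, `valueString`, `scope`). The overall nesting and the property names and values are illustrative assumptions; the structure of `retryHttpCallInstruction` is not detailed here and is therefore omitted:

```json
{
  "actions": ["ask_llm"],
  "id": "extended-task",
  "type": "openai",
  "parameters": {
    "apiKey": "<YOUR_OPENAI_API_KEY>",
    "modelName": "gpt-4o"
  },
  "preRequest": {
    "propertyInstructions": [
      { "name": "userLanguage", "valueString": "en", "scope": "conversation" }
    ]
  },
  "postResponse": {
    "propertyInstructions": [
      { "name": "llmAnswered", "valueString": "true", "scope": "step" }
    ]
  }
}
```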
Configuration Parameters Explained
preRequest.propertyInstructions: Defines properties to be set before making the request to the LLM API. Each instruction specifies:
name: The property name.
valueString: The value to be assigned to the property.
scope: The scope of the property (e.g., `step`, `conversation`, `longTerm`).
postResponse.propertyInstructions: Defines properties to be set based on the response from the LLM API. Each instruction specifies:
name: The property name.
valueString: The value to be assigned to the property.
scope: The scope of the property (e.g., `step`, `conversation`, `longTerm`).
postResponse.outputBuildInstructions: Configures how the response should be transformed into output. This is an alternative to `addToOutput` if you want to manipulate the LLM results before adding them to the bot output. Each instruction specifies:
pathToTargetArray: The path to the array in the response where the data is located.
iterationObjectName: The name of the object for iterating over the array.
outputType: The type of output to generate.
outputValue: The value to be used for output.
postResponse.qrBuildInstructions: Configures quick replies based on the response. Each instruction specifies:
pathToTargetArray: The path to the array in the response where quick replies are located.
iterationObjectName: The name of the object for iterating over the array.
quickReplyValue: The value for the quick reply.
quickReplyExpressions: The expressions for the quick reply.
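A sketch of a `postResponse` block combining both instruction types, using only the fields documented above. The array path `items`, the `[[${...}]]` template syntax, and the quick-reply expression are illustrative assumptions; traversing the response by path presumably requires `convertToObject` to be enabled so the LLM reply is parsed as structured JSON:

```json
{
  "convertToObject": true,
  "postResponse": {
    "outputBuildInstructions": [
      {
        "pathToTargetArray": "items",
        "iterationObjectName": "item",
        "outputType": "text",
        "outputValue": "[[${item.title}]]"
      }
    ],
    "qrBuildInstructions": [
      {
        "pathToTargetArray": "items",
        "iterationObjectName": "item",
        "quickReplyValue": "[[${item.title}]]",
        "quickReplyExpressions": "property(item_selected)"
      }
    ]
  }
}
```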
This extended configuration provides more control and flexibility in managing tasks and handling responses, ensuring that your LangChain tasks operate efficiently and effectively.
Common Issues and Troubleshooting
API Key Expiry: Ensure API keys are valid and renew them before expiry.
Model Misconfiguration: Verify model names and parameters to ensure they match those supported by the LLM provider.
Timeouts and Performance: Adjust timeout settings based on network performance and API responsiveness.