# Chat Interface
The chat interface provides interactive AI conversations with context management and tool calling.
## Overview
Features:
- Real-time AI responses
- Conversation history
- Tool calling support
- Multi-provider support
- Context management
- Message threading
## Usage

### Basic Chat
Navigate to Chat in the sidebar:
- Type your message
- Press Enter or click Send
- View AI response
- Continue the conversation
### Message Types
| Type | Description |
|---|---|
| user | Your messages |
| assistant | AI responses |
| system | System instructions |
| tool | Tool call results |
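
For reference, these message shapes might be modeled in TypeScript roughly as follows. This is a sketch inferred from the JSON examples later on this page, not an official type definition; any field not shown in those examples is an assumption.

```typescript
// Sketch of the message shapes described above, inferred from the JSON
// examples on this page (not an official type definition).
interface ToolCall {
  id: string                 // e.g. "call_123"
  type: 'function'
  function: {
    name: string             // tool name, e.g. "get_weather"
    arguments: string        // JSON-encoded arguments
  }
}

interface ChatMessage {
  role: 'user' | 'assistant' | 'system' | 'tool'
  content: string
  tool_calls?: ToolCall[]    // on assistant messages that call tools
  tool_call_id?: string      // on tool result messages
  conversation_id?: string   // used for threading (see the API section)
  created_at?: string        // ISO 8601 timestamp
}
```
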
## Conversation Management

### New Conversation
Start fresh:
- Click "New Conversation"
- Previous context is cleared
- System prompt is reset
### Conversation History
View past conversations:
- Scroll to load older messages
- Search by content
- Filter by date
### Clear History

```bash
curl -X DELETE https://your-domain.com/api/agents/{id}/chat
```

## Tool Calling
The AI can call tools during conversation:
### Example
User: "What's the weather in Tokyo?"
AI thinks → Calls weather tool → Returns result
```json
{
  "role": "assistant",
  "content": "The weather in Tokyo is...",
  "tool_calls": [
    {
      "id": "call_123",
      "type": "function",
      "function": {
        "name": "get_weather",
        "arguments": "{\"location\": \"Tokyo\"}"
      }
    }
  ]
}
```

### Tool Results
```json
{
  "role": "tool",
  "tool_call_id": "call_123",
  "content": "{\"temperature\": 22, \"condition\": \"sunny\"}"
}
```

## Configuration

### System Prompt
Set agent behavior:
```json
{
  "llm_system_prompt": "You are a helpful assistant specialized in customer support. Always be polite and professional."
}
```

### Model Settings
| Setting | Description |
|---|---|
| llm_provider | AI provider (workers-ai, openai, anthropic) |
| llm_model | Specific model to use |
| llm_temperature | Sampling temperature (0-1); higher values are more creative |
| llm_max_tokens | Maximum response length in tokens |
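
For illustration, these settings might be combined in an agent configuration like the one below. The values are placeholders, not recommendations.

```typescript
// Illustrative agent settings using the fields from the table above.
// All values are placeholders; pick a provider and model that suit your use case.
const agentSettings = {
  llm_provider: 'openai',     // workers-ai, openai, or anthropic
  llm_model: 'gpt-4o-mini',   // placeholder model name
  llm_temperature: 0.3,       // lower values give more focused, deterministic replies
  llm_max_tokens: 1024,       // cap on response length in tokens
  llm_system_prompt: 'You are a helpful assistant specialized in customer support.'
}
```
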
## API

### Send Message
```bash
curl -X POST https://your-domain.com/api/agents/{id}/chat \
  -H "Content-Type: application/json" \
  -d '{
    "message": "Hello, how can you help me?",
    "conversation_id": "conv-123"
  }'
```

Response:

```json
{
  "id": 456,
  "role": "assistant",
  "content": "Hello! I'm here to help you with...",
  "conversation_id": "conv-123",
  "created_at": "2024-12-15T10:30:00Z"
}
```
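
The same request can be made from TypeScript. A minimal sketch, with error handling omitted and the base URL and agent id as placeholders:

```typescript
// Send a chat message and read the JSON response.
// The base URL and agent id below are placeholders.
const res = await fetch('https://your-domain.com/api/agents/agent-1/chat', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    message: 'Hello, how can you help me?',
    conversation_id: 'conv-123'
  })
})

const reply = await res.json()
console.log(reply.role, reply.content) // "assistant" "Hello! I'm here to help you with..."
```
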
### Get History

```bash
curl "https://your-domain.com/api/agents/{id}/chat?limit=50&conversation_id=conv-123"Response:
json
{
  "messages": [
    {
      "id": 1,
      "role": "user",
      "content": "Hello",
      "created_at": "2024-12-15T10:00:00Z"
    },
    {
      "id": 2,
      "role": "assistant",
      "content": "Hi there!",
      "created_at": "2024-12-15T10:00:01Z"
    }
  ]
}
```

## Streaming Responses
For real-time response streaming:
```typescript
const response = await fetch('/api/agents/{id}/chat', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    message: 'Tell me a story',
    stream: true
  })
})

// Read the streamed body chunk by chunk
const reader = response.body!.getReader()
const decoder = new TextDecoder()

while (true) {
  const { done, value } = await reader.read()
  if (done) break
  console.log(decoder.decode(value, { stream: true }))
}
```

## Context Window
### Managing Context
The AI maintains conversation context:
- Recent messages are included
- Older messages are summarized
- System prompt is always included
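
A minimal sketch of how such a context might be assembled, assuming a summarization step is available as a separate function (the function below and its helpers are hypothetical, not part of the platform API):

```typescript
// Hypothetical sketch: combine the system prompt, a summary of older messages,
// and the most recent messages into one context window.
type Msg = { role: string; content: string }

function buildContext(
  systemPrompt: string,
  history: Msg[],
  recentCount: number,
  summarize: (msgs: Msg[]) => string   // assumed to exist
): Msg[] {
  const older = history.slice(0, -recentCount)
  const recent = history.slice(-recentCount)

  const context: Msg[] = [{ role: 'system', content: systemPrompt }]  // always included
  if (older.length > 0) {
    // Older messages are folded into a short summary
    context.push({ role: 'system', content: `Earlier conversation summary: ${summarize(older)}` })
  }
  return [...context, ...recent]  // recent messages are included verbatim
}
```
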
### Context Limits
| Provider | Context Limit |
|---|---|
| Workers AI | 8K-128K tokens |
| OpenAI | 4K-128K tokens |
| Anthropic | 100K-200K tokens |
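
If you want a rough sense of how close a conversation is to one of these limits, a common heuristic for English text is about four characters per token. This is only an estimate; actual tokenization varies by model, so use the provider's tokenizer for accurate counts.

```typescript
// Very rough token estimate (~4 characters per token of English text).
// Heuristic only; real tokenization is model-specific.
function estimateTokens(messages: { content: string }[]): number {
  const chars = messages.reduce((sum, m) => sum + m.content.length, 0)
  return Math.ceil(chars / 4)
}

// Example: flag a conversation approaching an assumed 8K-token limit
const nearLimit = (msgs: { content: string }[]) => estimateTokens(msgs) > 8000 * 0.9
```
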
## Integrations

### With Workflows
Trigger chat from workflows:
```json
{
  "type": "chat",
  "data": {
    "message": "{{input.question}}",
    "system_prompt": "Answer briefly"
  }
}
```

### With Tools
Enable agent tools:
```json
{
  "tools": [
    {
      "name": "search_database",
      "description": "Search the knowledge base",
      "parameters": {
        "query": { "type": "string" }
      }
    }
  ]
}
```

## Best Practices
### 1. Clear System Prompts
Write specific, clear instructions:
```
You are a technical support agent for Acme Software.
- Answer questions about our products
- Help troubleshoot issues
- Escalate complex cases to humans
- Never make up answers
```

### 2. Manage Context
Keep conversations focused:
- Start new conversations for new topics
- Clear history when switching contexts
- Use conversation IDs for threading (see the sketch after this list)
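
As a sketch, messages that share a conversation_id stay in one thread, and a new topic gets a fresh id. The helper function, URL, and agent id below are placeholders built on the chat endpoint shown earlier:

```typescript
// Sketch: reuse a conversation_id to stay in one thread; generate a new id
// when the topic changes. The URL and agent id are placeholders.
async function ask(message: string, conversationId: string) {
  const res = await fetch('https://your-domain.com/api/agents/agent-1/chat', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ message, conversation_id: conversationId })
  })
  return res.json()
}

const supportThread = crypto.randomUUID()
await ask('How do I reset my password?', supportThread)
await ask('I did not receive the reset email', supportThread)  // same thread

const billingThread = crypto.randomUUID()                      // new topic, new thread
await ask('Why was I charged twice?', billingThread)
```
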
### 3. Enable Relevant Tools
Only enable needed tools:
- Reduces confusion
- Faster responses
- Lower costs
### 4. Monitor Usage
Track chat metrics:
- Response times
- Token usage
- Error rates