This API provides streaming chat functionality with AI-generated responses, thinking steps, and metadata. The endpoint processes user messages and returns real-time streaming responses.
Base URL: /api/v1
All endpoints require Bearer token authentication. Pass the token in the Authorization header:
Authorization: Bearer <your-token>
Endpoint: POST /api/v1/chat/generate/stream
Processes user messages and streams back AI-generated responses with thinking steps and metadata in real time.
| Parameter | Type | Required | Description |
|---|---|---|---|
| `chatId` | string | No | ID of the chat session. If not provided, a new session will be created. |
| `chat_history` | array | Yes | Array of chat messages containing the conversation history. |
| `persona` | string | No | Persona to use for response generation (default: "maya"). |
| `scope` | string | No | Scope of the conversation (default: "internal"). |
| `tools` | array | No | Optional tools available for the chat. |
Each message in `chat_history` should contain:

| Property | Type | Required | Description |
|---|---|---|---|
| `content` | string | Yes | The message content. |
| `role` | string | Yes | Message role: `user`, `assistant`, or `system`. |
| `id` | string | No | Unique identifier for the message. |

Example request body:
{
"chatId": "60d21b4667d0d8992e610c85",
"chat_history": [
{
"content": "Hello, how are you?",
"role": "user",
"id": "msg_001"
},
{
"content": "Hello! I'm doing well, thank you for asking. How can I help you today?",
"role": "assistant",
"id": "msg_002"
},
{
"content": "Can you explain artificial intelligence?",
"role": "user",
"id": "msg_003"
}
],
"persona": "maya",
"scope": "internal",
"tools": []
}
The endpoint returns a Server-Sent Events (SSE) stream with `Content-Type: text/event-stream`.
data: {"type":"thinking_update","content":"Planning response"}
data: {"type":"thinking_update","content":"Analyzing user request"}
data: {"type":"final_response","content":"Here's your answer...","json":{}}
data: {"type":"metadata","content":{"chatId":"60d21b4667d0d8992e610c85","title":"Discussion about AI ethics","replaySessionID":"60d21b4667d0d8992e610c86"}}
data: [DONE]
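The stream emits several event types that a client will typically handle differently. Below is a minimal sketch of dispatching on the `type` field of each parsed event and capturing the `chatId` from the metadata event for follow-up requests; the handler and variable names are illustrative, not part of the API:

```javascript
// Illustrative dispatcher for the event shapes shown above.
// Only the "type" values and payload fields come from this API;
// handleEvent and sessionChatId are placeholder names.
let sessionChatId = null;

function handleEvent(event) {
  switch (event.type) {
    case 'thinking_update':
      console.log('Thinking:', event.content);
      break;
    case 'final_response':
      console.log('Answer:', event.content);
      break;
    case 'metadata':
      // Keep the chatId so later requests can continue this session
      sessionChatId = event.content.chatId;
      console.log('Session:', sessionChatId, 'Title:', event.content.title);
      break;
    default:
      console.log('Unhandled event:', event);
  }
}
```

Every `data:` line except the terminating `[DONE]` marker is a JSON object that can be passed to a handler like this; the full stream-reading loop is shown in the JavaScript example below.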
| Code | Description |
|---|---|
| 200 | Success - Streaming response with thinking steps and final results |
| 400 | Bad Request - Invalid request parameters |
| 401 | Unauthorized - User not authenticated |
| 403 | Forbidden - User lacks required scope |
| 500 | Internal Server Error - Server error during processing |

Error responses return a JSON body in the following format:
{
"message": "Error description"
}
For errors that occur during streaming (500 status), an error event is emitted on the stream:
data: {"content":null,"error":{"message":"Internal server error"}}
Example request using curl:
curl -X POST "https://your-api-domain.com/api/v1/chat/generate/stream" \
-H "Content-Type: application/json" \
-H "Authorization: Bearer your-token-here" \
-d '{
"chat_history": [
{
"content": "What is machine learning?",
"role": "user"
}
],
"persona": "maya"
}'
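Tip: adding curl's `-N` (`--no-buffer`) flag prints SSE events as they arrive instead of buffering the output until the stream ends.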
Example of consuming the SSE stream in JavaScript:
const response = await fetch('/api/v1/chat/generate/stream', {
method: 'POST',
headers: {
'Content-Type': 'application/json',
'Authorization': 'Bearer your-token-here'
},
body: JSON.stringify({
chat_history: [
{
content: 'Explain quantum computing',
role: 'user'
}
],
persona: 'maya'
})
});
const reader = response.body.getReader();
const decoder = new TextDecoder();
let buffer = '';
let finished = false;

while (!finished) {
  const { value, done } = await reader.read();
  if (done) break;

  // An SSE line can be split across chunks, so keep any partial line
  // in a buffer until the rest of it arrives
  buffer += decoder.decode(value, { stream: true });
  const lines = buffer.split('\n');
  buffer = lines.pop();

  for (const line of lines) {
    if (!line.startsWith('data: ')) continue;
    const data = line.slice(6);
    if (data === '[DONE]') {
      console.log('Stream completed');
      finished = true;
      break;
    }
    try {
      const parsed = JSON.parse(data);
      console.log('Received:', parsed);
    } catch (e) {
      // Ignore lines that are not valid JSON
    }
  }
}
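Note that the browser's built-in EventSource API only supports GET requests, which is why this example reads the POST response body with fetch and a ReadableStream reader instead.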
`chatId` is automatically generated if not provided.

For questions or issues regarding this API, please contact the development team.