An OpenAI-compatible proxy API that provides intelligent context filtering and chat completion capabilities with enhanced message relevance processing.
The Context Proxy API acts as an intelligent middleware between your application and OpenAI-compatible language models. It automatically filters chat history for relevance, processes context data, and forwards optimized requests to the underlying AI service.
The endpoint requires Bearer token authentication:
```
Authorization: Bearer <your-token>
```
All proxy endpoints are served under the base path `/api/v1/proxy`.
```
POST /api/v1/proxy/{proxyUrl}/{OpenAIAPIKey}/chat/completions
```
Provides OpenAI-compatible chat completions with intelligent context processing.
| Parameter | Type | Description |
|---|---|---|
| `proxyUrl` | string | The base URL of the target OpenAI-compatible API |
| `OpenAIAPIKey` | string | API key for the target service |
For example:

```
/api/v1/proxy/https://api.openai.com/v1/sk-your-api-key-here/chat/completions
```
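For programmatic use, the full URL can be assembled from these two path parameters. The following is a minimal sketch only; `PROXY_HOST` is a placeholder, since this documentation shows relative paths:

```python
# Hypothetical host -- replace with wherever the Context Proxy API is deployed.
PROXY_HOST = "https://your-proxy-host.example.com"

def chat_completions_url(proxy_url: str, openai_api_key: str) -> str:
    """Build the proxied endpoint by embedding the target base URL and its
    API key in the path, following the /{proxyUrl}/{OpenAIAPIKey}/ pattern."""
    return f"{PROXY_HOST}/api/v1/proxy/{proxy_url}/{openai_api_key}/chat/completions"

# Mirrors the example path above.
print(chat_completions_url("https://api.openai.com/v1", "sk-your-api-key-here"))
```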
Default route:

```
POST /api/v1/proxy/default/chat/completions
```
For users who don’t have an OpenAI API key, this route provides chat completions using our own infrastructure without requiring external API keys.
Note: When using the default route, you must set the `model` field to `alchemyst-ai/alchemyst-c1`.
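As a concrete illustration, here is a minimal sketch of a default-route request using the Python `requests` library; the proxy host and Bearer token are placeholders:

```python
import requests

PROXY_HOST = "https://your-proxy-host.example.com"  # placeholder host
PLATFORM_TOKEN = "<your-token>"                      # placeholder Bearer token

response = requests.post(
    f"{PROXY_HOST}/api/v1/proxy/default/chat/completions",
    headers={
        "Authorization": f"Bearer {PLATFORM_TOKEN}",
        "Content-Type": "application/json",
    },
    json={
        # The default route requires this exact model identifier.
        "model": "alchemyst-ai/alchemyst-c1",
        "messages": [{"role": "user", "content": "Hello!"}],
    },
    timeout=60,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```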
The API automatically processes chat history to:

- Filter messages for relevance
- Process and optimize context data
- Forward the optimized request to the underlying AI service
The request body supports the following fields:

| Field | Type | Required | Description |
|---|---|---|---|
| `model` | string | Yes | Model identifier for completion |
| `messages` | array | Yes | Array of message objects |
| `max_tokens` | number | No | Maximum tokens in response |
| `temperature` | number | No | Sampling temperature (0-2) |
| `top_p` | number | No | Nucleus sampling parameter |
| `frequency_penalty` | number | No | Frequency penalty (-2 to 2) |
| `presence_penalty` | number | No | Presence penalty (-2 to 2) |
| `stream` | boolean | No | Whether to stream responses |
Each message object has the following structure:

```json
{
  "role": "user|assistant|system",
  "content": "string"
}
```
The `role` field accepts three values:

- `system`: System instructions and context
- `user`: User messages and queries
- `assistant`: AI assistant responses

Example request:

```json
{
  "model": "gpt-3.5-turbo",
  "messages": [
    {
      "role": "system",
      "content": "You are a helpful assistant."
    },
    {
      "role": "user",
      "content": "What is machine learning?"
    },
    {
      "role": "assistant",
      "content": "Machine learning is a subset of artificial intelligence..."
    },
    {
      "role": "user",
      "content": "Can you explain neural networks?"
    }
  ],
  "max_tokens": 150,
  "temperature": 0.7
}
```
Note: If you are using the default route (`/api/v1/proxy/default/chat/completions`), you must set `"model": "alchemyst-ai/alchemyst-c1"` instead of `"gpt-3.5-turbo"`.
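Because the request and response schemas follow the OpenAI format, an OpenAI-compatible client library pointed at the proxy should also work. The sketch below uses the official `openai` Python package against the default route; it assumes the proxy accepts your platform token as the Bearer credential, and the host name is a placeholder:

```python
from openai import OpenAI

# Point the client at the proxy's default route. The api_key value is sent
# as the Authorization: Bearer header, so supply your platform token here.
client = OpenAI(
    base_url="https://your-proxy-host.example.com/api/v1/proxy/default",  # placeholder host
    api_key="<your-token>",
)

completion = client.chat.completions.create(
    model="alchemyst-ai/alchemyst-c1",  # required on the default route
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Can you explain neural networks?"},
    ],
    max_tokens=150,
    temperature=0.7,
)
print(completion.choices[0].message.content)
```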
Example response:

```json
{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "created": 1677858242,
  "model": "gpt-3.5-turbo",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Neural networks are computing systems inspired by biological neural networks..."
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 56,
    "completion_tokens": 31,
    "total_tokens": 87
  }
}
```
The response contains the following fields:

| Field | Type | Description |
|---|---|---|
| `id` | string | Unique identifier for the completion |
| `object` | string | Object type (always "chat.completion") |
| `created` | number | Unix timestamp of creation |
| `model` | string | Model used for completion |
| `choices` | array | Array of completion choices |
| `usage` | object | Token usage statistics |
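In a client, these fields map onto ordinary JSON access. A small sketch, assuming `data` is the parsed response body from one of the requests above:

```python
def summarize_completion(data: dict) -> str:
    """Pull the assistant reply and token accounting out of a parsed
    chat.completion response (shape as documented above)."""
    choice = data["choices"][0]
    usage = data.get("usage", {})
    return (
        f"[{data['model']}] finish_reason={choice['finish_reason']}, "
        f"total_tokens={usage.get('total_tokens')}\n"
        f"{choice['message']['content']}"
    )

# Example with the documented sample response:
sample = {
    "id": "chatcmpl-abc123",
    "object": "chat.completion",
    "created": 1677858242,
    "model": "gpt-3.5-turbo",
    "choices": [{"index": 0,
                 "message": {"role": "assistant", "content": "Neural networks are..."},
                 "finish_reason": "stop"}],
    "usage": {"prompt_tokens": 56, "completion_tokens": 31, "total_tokens": 87},
}
print(summarize_completion(sample))
```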
Errors are returned in the following format:

```json
{
  "error": {
    "message": "Error description",
    "type": "error_type",
    "code": "error_code"
  }
}
```
| Status | Description | Error Type |
|---|---|---|
| 200 | Success | - |
| 400 | Bad Request | `invalid_request_error` |
| 401 | Unauthorized | `authentication_error` |
| 500 | Internal Server Error | `server_error` |
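Combining the status codes above with the error format shown earlier, a client can branch on the HTTP status and surface the error object. A minimal sketch using `requests`; the host and token are placeholders:

```python
import requests

PROXY_HOST = "https://your-proxy-host.example.com"  # placeholder host

resp = requests.post(
    f"{PROXY_HOST}/api/v1/proxy/default/chat/completions",
    headers={"Authorization": "Bearer <your-token>",
             "Content-Type": "application/json"},
    json={"model": "alchemyst-ai/alchemyst-c1",
          "messages": [{"role": "user", "content": "Hello"}]},
    timeout=60,
)

if resp.status_code == 200:
    print(resp.json()["choices"][0]["message"]["content"])
else:
    # Non-200 responses carry the documented {"error": {...}} body.
    err = resp.json().get("error", {})
    print(f"HTTP {resp.status_code}: {err.get('type')} ({err.get('code')}): {err.get('message')}")
```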
Common error codes:

- `invalid_request`: Malformed request body
- `invalid_messages`: Missing or invalid messages array
- `internal_error`: Server processing error

Basic chat completion:

```bash
curl -X POST "/api/v1/proxy/https://api.openai.com/v1/sk-your-key/chat/completions" \
  -H "Authorization: Bearer <your-token>" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-3.5-turbo",
    "messages": [
      {
        "role": "user",
        "content": "Hello, how are you?"
      }
    ]
  }'
```
Multi-turn conversation:

```bash
curl -X POST "/api/v1/proxy/https://api.openai.com/v1/sk-your-key/chat/completions" \
  -H "Authorization: Bearer <your-token>" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-3.5-turbo",
    "messages": [
      {
        "role": "system",
        "content": "You are a helpful coding assistant."
      },
      {
        "role": "user",
        "content": "How do I create a Python list?"
      },
      {
        "role": "assistant",
        "content": "You can create a Python list using square brackets: my_list = [1, 2, 3]"
      },
      {
        "role": "user",
        "content": "How do I add items to this list?"
      }
    ],
    "temperature": 0.3,
    "max_tokens": 100
  }'
```
Custom OpenAI-compatible provider:

```bash
curl -X POST "/api/v1/proxy/https://api.custom-ai.com/v1/custom-key/chat/completions" \
  -H "Authorization: Bearer <your-token>" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "custom-model-v1",
    "messages": [
      {
        "role": "user",
        "content": "Explain quantum computing"
      }
    ],
    "temperature": 0.8,
    "top_p": 0.9,
    "max_tokens": 200,
    "frequency_penalty": 0.1,
    "presence_penalty": 0.1
  }'
```
The proxy works with any OpenAI-compatible API, including OpenAI itself and custom providers that implement the same chat completions interface.