# Managed Inference I/O Specs

Supported input modes, response formats, and request schemas for CosmicAC inference endpoints.
## Transport

| Mode | Description |
|---|---|
| HTTP REST | `/v1/chat/completions` (OpenAI-compatible) |
| HRPC | Hyperswarm RPC protocol |
## Authentication

| Method | Description |
|---|---|
| Bearer Token | `Authorization: Bearer <token>` |
| API Key Header | `X-API-Key: <token>` |
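Either header carries the same credential. A minimal sketch of building the header for each scheme (the `auth_headers` helper and token value are illustrative, not part of the API):

```python
def auth_headers(token: str, scheme: str = "bearer") -> dict:
    """Build the request auth header for either supported scheme.

    'bearer'  -> Authorization: Bearer <token>
    'api-key' -> X-API-Key: <token>
    """
    if scheme == "bearer":
        return {"Authorization": f"Bearer {token}"}
    if scheme == "api-key":
        return {"X-API-Key": token}
    raise ValueError(f"unknown auth scheme: {scheme!r}")

print(auth_headers("my-secret-token"))
# {'Authorization': 'Bearer my-secret-token'}
```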
## Delivery Modes

| Mode | Description |
|---|---|
| Streaming | Server-Sent Events (SSE) |
| Non-streaming | Standard JSON request/response |
## Response Formats

| Format | Description |
|---|---|
| JSON Response | Standard completion with usage metadata |
| Streaming SSE | Real-time event stream delivered as `data:` chunks |
| Usage Tracking | Token consumption (input/output/total) |
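In streaming mode, each event arrives as an SSE `data:` line. A minimal client-side parser sketch, assuming the OpenAI-compatible convention of a `data: [DONE]` sentinel ending the stream (the sample payload shape is illustrative):

```python
import json

def parse_sse_chunks(raw: str) -> list:
    """Extract JSON payloads from SSE 'data:' lines, stopping at [DONE]."""
    chunks = []
    for line in raw.splitlines():
        line = line.strip()
        if not line.startswith("data:"):
            continue  # skip blank keep-alive lines and non-data fields
        payload = line[len("data:"):].strip()
        if payload == "[DONE]":
            break  # end-of-stream sentinel (OpenAI convention)
        chunks.append(json.loads(payload))
    return chunks

sample = (
    'data: {"choices": [{"delta": {"content": "Hel"}}]}\n\n'
    'data: {"choices": [{"delta": {"content": "lo"}}]}\n\n'
    "data: [DONE]\n"
)
text = "".join(c["choices"][0]["delta"]["content"] for c in parse_sse_chunks(sample))
print(text)  # Hello
```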
## Request Schema

| Field | Type | Required | Description |
|---|---|---|---|
| model | string | Yes | Model identifier |
| messages | array | Yes | Array of message objects |
| stream | boolean | No | Enable streaming response |
| stream_options | object | No | Streaming configuration |
### Message Object

| Field | Type | Description |
|---|---|---|
| role | string | Message role (e.g. `system`, `user`, `assistant`) |
| content | string | Message content |
### stream_options Object

| Field | Type | Description |
|---|---|---|
| include_usage | boolean | Include token usage in the stream |
### Request Body Shape

```json
{
  "model": "string (required)",
  "messages": [
    { "role": "string", "content": "string" }
  ],
  "stream": "boolean",
  "stream_options": {
    "include_usage": true
  }
}
```
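Putting the schema together, a sketch of assembling a streaming request against the `/v1/chat/completions` endpoint. The host, model name, and token are placeholders, not documented values:

```python
import json
import urllib.request

BASE_URL = "https://inference.example.com"  # placeholder host
TOKEN = "my-secret-token"                   # placeholder credential

# Request body following the schema above.
body = {
    "model": "example-model",               # placeholder model identifier
    "messages": [
        {"role": "user", "content": "Hello!"}
    ],
    "stream": True,
    "stream_options": {"include_usage": True},
}

req = urllib.request.Request(
    f"{BASE_URL}/v1/chat/completions",
    data=json.dumps(body).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {TOKEN}",
    },
    method="POST",
)
# resp = urllib.request.urlopen(req)  # uncomment against a live endpoint
```

With `"stream": true` and `"include_usage": true`, the final SSE chunk would carry the input/output/total token counts described under Usage Tracking.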