Documentation Index
Fetch the complete documentation index at: https://docs.axioniclabs.ai/llms.txt
Use this file to discover all available pages before exploring further.
Prerequisites
- A trained model with Ready status on the Models page
- An Axionic API key (default key created with your account, find it in Settings > API Keys)
Quick Test with curl
Using the Python SDK
Using the OpenAI Python Client
Structured Outputs
If you are coming from OpenAI’s Structured Outputs guide, the main difference is that Axionic does not currently expose OpenAI’s `response_format` field on `/v1/chat/completions`.
Use one of these Axionic-native paths instead:
- pass an inline `policy` or saved `policy_id` on `/v1/chat/completions` or `/v1/completions` when you want to stay inside an OpenAI SDK client
- call `/sampling/generate` with `method: "guided-generation"` when you want direct JSON-schema, regex, or grammar constraints
Guided Generation is model/runtime-dependent. If the target runtime does not support that decoding strategy, Axionic can fall back to a standard decoding path. Test your schema on the exact model you plan to deploy.
OpenAI client with an inline policy
Pass the policy through `extra_body`:
The structured output comes back as a plain string in `response.choices[0].message.content`, so parse it yourself with `json.loads(...)`.
See Reusable inference policies for the full policy shape.
Direct guided-generation request
This is the same constraint path used by Spectra’s Optimization page when you choose Guided Generation: `json_schema` with either `regex_pattern` or `grammar`.
Use `regex_pattern` when you need hard matching. Use `grammar` only for lightweight format guidance; it is not documented as full OpenAI-style grammar-constrained decoding.
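As a sketch of what such a request could look like: the `method` value and the constraint field names follow the description above, but the host, auth header, and any other payload keys are assumptions.

```python
import json
from urllib import request

def build_guided_request(prompt: str, schema: dict) -> dict:
    """Assemble a guided-generation payload; keys beyond "method" and the
    constraint fields are illustrative, not documented."""
    return {
        "method": "guided-generation",
        "prompt": prompt,
        "json_schema": schema,  # or "regex_pattern" / "grammar" instead
    }

payload = build_guided_request(
    "List one city as JSON.",
    {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
        "additionalProperties": False,
    },
)

def send(payload: dict, api_key: str) -> dict:
    """POST to the (assumed) Axionic host."""
    req = request.Request(
        "https://api.axioniclabs.ai/sampling/generate",  # assumed host
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with request.urlopen(req) as resp:
        return json.load(resp)

# send(payload, "YOUR_AXIONIC_API_KEY")  # uncomment with a real key
```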
Schema guidance and current limitations
- keep the root schema as a JSON object; the current runtime validation path expects object-shaped JSON responses
- add `additionalProperties: false` when you want to reject extra keys instead of only validating required ones
- unlike OpenAI’s `response_format` flow, optional fields are allowed here as long as you leave them out of `required`
- OpenAI-style fields such as `response_format` on requests, plus `message.parsed` and `message.refusal` on responses, are not part of Axionic’s chat surface today
- if you need acceptance flags, scores, or trace data instead of just the generated text, use the policy APIs rather than the OpenAI-compatible chat response
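For illustration, a schema that follows the guidance above (object root, explicit `required` list, `additionalProperties: false`, optional field left out of `required`), plus a hand-rolled acceptance check. The checker is a local sketch for clarity, not Axionic’s runtime validator.

```python
import json

schema = {
    "type": "object",
    "properties": {
        "name": {"type": "string", "description": "Product name"},
        "price": {"type": "number", "description": "Unit price in USD"},
        "notes": {"type": "string", "description": "Optional free-text notes"},
    },
    "required": ["name", "price"],   # "notes" stays optional
    "additionalProperties": False,   # reject unexpected keys
}

def accepts(raw: str) -> bool:
    """Check a raw model reply against the required/extra-key rules above."""
    data = json.loads(raw)
    if not isinstance(data, dict):
        return False  # object-rooted responses only
    if not all(key in data for key in schema["required"]):
        return False  # a required field is missing
    # additionalProperties: false -- every key must be declared
    return all(key in schema["properties"] for key in data)
```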
OpenAI’s Structured Outputs guide recommends object-rooted schemas, clear field descriptions, and explicit `required` lists. Those are good practices here as well, but the transport is different: use Axionic policy or `/sampling/generate` fields rather than OpenAI’s `response_format`.
Applying Steering Vectors
Pass steering parameters via `extra_body`: