Four input methods:
- Upload JSON — .json files containing tool definitions (single tool or array).
- Upload OpenAPI Spec — .json, .yaml, or .yml files in OpenAPI 3.x format. Operations are extracted as individual tools.
- Paste JSON — paste tool definitions directly.
- Natural Language — describe tools in plain English; the teacher model compiles these into structured schemas.
Built-in example sets are available: general-purpose (Data Retrieval, Actions & Mutations, Computation & Analysis) and domain-specific (CRM, Support Desk, E-Commerce).
Upload a standard OpenAPI 3.x spec and Spectra extracts each operation as a tool:
```yaml
openapi: 3.1.0
info:
  title: CRM Tool API
  version: 1.0.0
paths:
  /leads/status:
    post:
      operationId: update_lead_status
      summary: Update the status of an existing lead in the CRM system.
      requestBody:
        required: true
        content:
          application/json:
            schema:
              type: object
              properties:
                lead_id:
                  type: string
                  description: "Unique identifier for the lead (format: LD-XXXX)"
                status:
                  type: string
                  enum: [New, Contacted, Qualified, Proposal, Negotiation, Closed Won, Closed Lost]
              required:
                - lead_id
                - status
      responses:
        '200':
          description: Status updated successfully
```
Accepts .json, .yaml, and .yml files.
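To make the extraction step concrete, here is a minimal sketch of how one operation could become one tool definition. This is illustrative only (Spectra's actual extraction logic is not shown in these docs), and the spec is inlined as a dict rather than parsed from YAML to keep the example self-contained:

```python
# Illustrative sketch: one tool definition per OpenAPI operation.
# The real pipeline's behavior may differ; this only mirrors the mapping
# described above (operationId -> name, summary -> description,
# requestBody schema -> parameters).

spec = {
    "paths": {
        "/leads/status": {
            "post": {
                "operationId": "update_lead_status",
                "summary": "Update the status of an existing lead in the CRM system.",
                "requestBody": {
                    "content": {
                        "application/json": {
                            "schema": {
                                "type": "object",
                                "properties": {
                                    "lead_id": {"type": "string"},
                                    "status": {"type": "string"},
                                },
                                "required": ["lead_id", "status"],
                            }
                        }
                    }
                },
            }
        }
    }
}

def extract_tools(spec):
    """Yield one tool definition per operation in the spec."""
    for path, methods in spec.get("paths", {}).items():
        for method, op in methods.items():
            schema = (
                op.get("requestBody", {})
                .get("content", {})
                .get("application/json", {})
                .get("schema", {"type": "object", "properties": {}})
            )
            yield {
                "name": op.get("operationId", f"{method}_{path.strip('/').replace('/', '_')}"),
                "description": op.get("summary", ""),
                "parameters": schema,
            }

tools = list(extract_tools(spec))
print(tools[0]["name"])  # update_lead_status
```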
Each tool is defined as a JSON object with a name, description, and parameter schema:
```json
{
  "name": "search",
  "description": "Search for records matching a query. Returns ranked results with relevance scores.",
  "parameters": {
    "type": "object",
    "properties": {
      "query": { "type": "string", "description": "Natural language search query" },
      "filters": {
        "type": "string",
        "description": "Key-value filter expression (e.g. \"status:active, type:report\")"
      },
      "limit": { "type": "integer", "description": "Maximum number of results to return (default: 10)" },
      "sort_by": {
        "type": "string",
        "description": "Field to sort results by",
        "enum": ["relevance", "date", "name"]
      }
    },
    "required": ["query"]
  }
}
```
During training, Spectra normalizes uploaded JSON schemas and natural-language tool descriptions into the same canonical tool definition before seed generation and reward compilation. In practice, that means your final tool name is normalized to snake_case, and the model is trained against typed parameters plus explicit required fields rather than against the raw prose description.
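As one small piece of that normalization, a tool name like "UpdateLeadStatus" or "send-notification" ends up in snake_case. The exact rules Spectra applies are not documented here; this is a hedged sketch of a typical snake_case conversion:

```python
import re

def normalize_tool_name(name: str) -> str:
    """Convert a tool name to snake_case.

    Illustrative only -- Spectra's actual normalization rules
    are not specified in these docs.
    """
    # Insert an underscore at each camelCase boundary.
    s = re.sub(r"(?<=[a-z0-9])(?=[A-Z])", "_", name)
    # Collapse spaces and hyphens into underscores.
    s = re.sub(r"[\s\-]+", "_", s)
    return s.lower()

print(normalize_tool_name("UpdateLeadStatus"))    # update_lead_status
print(normalize_tool_name("send-notification"))   # send_notification
```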
Good examples

- Clear and detailed: A tool called update_record that modifies one or more fields on an existing record. Parameters:
  - record_id (required): Unique identifier for the record to update
  - updates (required): JSON object of field names and their new values
- Concise but complete: Send notification tool. Takes recipient (email, phone, or channel ID), channel (email, sms, or webhook), subject (string), and body (string, max 10000 chars).

What to avoid

- Too vague / missing key information: Update status tool with some parameters
Tips for Natural Language Schemas
- Be specific about parameters — include names, types, and constraints
- Specify constraints — formats (LD-XXXX), enums (New, Contacted, Qualified), ranges (1-10), lengths (max 280 chars)
- Indicate required vs optional — explicitly state which parameters are required
- One tool per description — describe a single tool per natural language entry
Natural-language tools are convenient, but they still have to compile into a typed schema before training starts. If you need deterministic parameter names, strict enums, or exact patterns, use JSON or generate the schema in the UI and review it before training.
Pipeline Schema Methods
When using the mechanex-train CLI or Cloud Run Jobs directly, tool schemas can be provided in several ways:
Environment Variable
Use TOOL_SCHEMAS_TEXT with pipe-separated descriptions:
```bash
export TOOL_SCHEMAS_TEXT="A search tool that finds records by query, with optional filters and limit. | A tool to create a new record with record_type, title, and optional data fields."
```
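The pipe character separates one description from the next. A minimal sketch of how such a value splits into individual schema descriptions (the CLI's exact parsing is an assumption here, not documented behavior):

```python
import os

# Simulate the environment variable from the example above.
os.environ["TOOL_SCHEMAS_TEXT"] = (
    "A search tool that finds records by query, with optional filters and limit. "
    "| A tool to create a new record with record_type, title, and optional data fields."
)

# Split on "|" and trim whitespace; drop any empty segments.
schemas = [s.strip() for s in os.environ["TOOL_SCHEMAS_TEXT"].split("|") if s.strip()]
print(len(schemas))  # 2
```

One practical consequence: the descriptions themselves must not contain a literal `|`, since it would be read as a separator.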
Text File
Create a file with one schema per line:
```text
A search tool that finds records matching a natural language query. Takes query (required), filters (optional key-value string), and limit (optional integer).
A tool called create_record that creates a new record. Requires record_type and title, with optional data (JSON) and assignee.
Send notification tool with recipient, channel (email/sms/webhook), subject, and body parameters.
```
Then reference it:
```bash
export TOOL_SCHEMAS_TEXT_FILE="./tool_schemas.txt"
```
Config File
```json
{
  "tool_schemas_text": [
    "A search tool that finds records by query with optional filters and pagination",
    "A tool to create a new record with record_type, title, and optional metadata",
    "Calculate tool that evaluates a math expression with optional precision"
  ],
  "teacher_provider": "GOOGLE",
  "hf_repo_name": "my-model"
}
```
JSON Schema Directory
Place individual .json files in a directory:
```bash
export TOOL_SCHEMAS_DIR="./tool_schemas"
```
Mixing Methods
All methods can be combined. The pipeline merges schemas from all sources:
```json
{
  "tool_schemas_dir": "./tool_schemas",
  "tool_schemas_text": [
    "A tool to delete a record by ID, requires a confirm boolean set to true"
  ],
  "tool_schemas_text_file": "./extra_tools.txt"
}
```
JSON vs Natural Language
| Aspect | JSON Schema | Natural Language |
|---|---|---|
| Precision | Exact specification | AI-interpreted |
| Ease of use | Requires JSON knowledge | Plain English |
| Quick prototyping | Slower | Very fast |
| Complex constraints | Excellent | Good |
| Best for | Production, complex APIs | Prototypes, simple tools |
Natural language schemas cost slightly more to process since the teacher model interprets the description during reward compilation. The difference is minimal (approximately $0.001 per schema).