Connecting to the Vortex Layer
A world-class API designed for low-latency inference and secure intelligence distribution. Follow this guide to initialize your first node session.
Quick Start Protocol
The Vortex AI API follows a standard RESTful architecture and is compatible with OpenAI-style client libraries via a custom `base_url`, so it can be integrated into most server and serverless environments. To begin, generate an active vx_pk key from your Security Settings.
```python
import openai

# Point the OpenAI client at the Vortex base URL
client = openai.OpenAI(
    api_key="YOUR_VORTEX_KEY",
    base_url="https://api.vortexaillm.com/v1"
)

response = client.chat.completions.create(
    model="vortex-gpt-4o",
    messages=[{"role": "user", "content": "Initialize node."}]
)
print(response.choices[0].message.content)
```
Intelligence Endpoints
Vortex AI offers a unified access layer to multiple high-performance nodes, including GPT-4o, Llama 3.1, and Claude 3.5. All traffic is encrypted via AES-256 and distributed via our global edge network.
```http
POST /v1/chat/completions
GET  /v1/models
POST /v1/embeddings
```
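As a rough illustration of the `POST /v1/embeddings` endpoint, the sketch below builds (but does not send) a request using only the standard library. It assumes the endpoint accepts OpenAI-style JSON bodies; the model name `text-embedding-3-small` is a placeholder and should be replaced with a model listed by `GET /v1/models`.

```python
import json
import urllib.request

VORTEX_BASE = "https://api.vortexaillm.com/v1"  # base URL from this guide

def build_embeddings_request(api_key: str, text: str,
                             model: str = "text-embedding-3-small"):
    """Construct a POST /v1/embeddings request (model name is a placeholder)."""
    payload = json.dumps({"model": model, "input": text}).encode("utf-8")
    return urllib.request.Request(
        f"{VORTEX_BASE}/embeddings",
        data=payload,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Send with urllib.request.urlopen(req) once a valid vx_pk key is in place.
```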
Rate Limiting & Tiers
By default, Free Tier nodes are limited to 1,000 tokens per minute. Premium Tier members enjoy unlimited throughput and priority queue access with 0ms cold starts on serverless nodes.
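When a Free Tier key exceeds its per-minute token budget, a client typically needs to back off and retry. The sketch below shows one common approach, exponential backoff with jitter; it assumes rate-limited responses surface as an exception with a `status` attribute equal to 429, which is a convention borrowed from OpenAI-style clients rather than anything this guide specifies.

```python
import random
import time

def backoff_delays(max_retries: int = 5, base: float = 1.0, cap: float = 30.0):
    """Exponential backoff schedule: base, 2*base, 4*base, ... capped at `cap`."""
    return [min(cap, base * 2 ** attempt) for attempt in range(max_retries)]

def call_with_retry(send, max_retries: int = 5):
    # `send` is any zero-argument callable that performs the API request.
    # We assume a rate-limited call raises an exception with .status == 429.
    for attempt, delay in enumerate(backoff_delays(max_retries)):
        try:
            return send()
        except Exception as exc:
            if getattr(exc, "status", None) != 429 or attempt == max_retries - 1:
                raise  # not a rate limit, or out of retries
            time.sleep(delay + random.uniform(0, 0.5))  # jitter avoids thundering herd
```

Premium Tier keys should rarely hit this path, but the same wrapper works unchanged at any tier.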