
APIKEY FUN Documentation

Welcome to the APIKEY FUN developer documentation. APIKEY FUN is a unified AI API gateway that lets you access OpenAI, Anthropic Claude, Google Gemini, DeepSeek, xAI Grok, and more with a single API key and a familiar OpenAI-compatible interface.

💡 Tip: If you already use the OpenAI SDK, you only need to change two things: your api_key and base_url. Everything else works exactly the same.

Quickstart

Get up and running in under 5 minutes.

Step 1 — Sign Up and Get Your API Key

Create a free account at apikey.fun. You'll receive $5 in free credits instantly. From your dashboard, navigate to API Keys and click Generate New Key.

Step 2 — Install the OpenAI SDK

# Python
pip install openai

# Node.js
npm install openai

Step 3 — Make Your First API Call

from openai import OpenAI

client = OpenAI(
    api_key="YOUR_APIKEYFUN_KEY",
    base_url="https://api.apikey.fun/v1"
)

response = client.chat.completions.create(
    model="gpt-5.4",
    messages=[{"role": "user", "content": "Tell me a fun fact about Boulder, CO."}]
)

print(response.choices[0].message.content)

Authentication

All API requests must include your API key in the Authorization HTTP header using Bearer authentication:

Authorization: Bearer YOUR_APIKEYFUN_KEY

Never expose your API key in client-side code. Always load it from environment variables on your server.

⚠️ Security: Treat your API key like a password. If compromised, regenerate it immediately from your dashboard.
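As a sketch of loading the key server-side, assuming it is stored in an environment variable (APIKEYFUN_API_KEY is an illustrative variable name, not one mandated by the API):

```python
import os

def load_api_key(env_var: str = "APIKEYFUN_API_KEY"):
    """Read the API key from the environment; returns None if unset."""
    return os.getenv(env_var)

def make_auth_header(api_key: str) -> str:
    """Build the value for the Authorization header."""
    return f"Bearer {api_key}"
```

Pass `make_auth_header(load_api_key())` as the `Authorization` header (or hand the key to your SDK's `api_key` parameter); the key itself never appears in your source code.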

Base URL

All API requests should be directed to:

https://api.apikey.fun/v1

When using any OpenAI-compatible SDK, set the base_url / baseURL parameter to this URL.

Your First Request

The endpoint POST /v1/chat/completions is the primary interface for all chat models. Here is a minimal example with cURL:

curl https://api.apikey.fun/v1/chat/completions \
  -H "Authorization: Bearer YOUR_APIKEYFUN_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-5.4",
    "messages": [
      {"role": "user", "content": "Hello!"}
    ]
  }'

Chat Completions

Create a model response for the given chat conversation. This endpoint is fully compatible with the OpenAI Chat Completions API.

| Method | Endpoint                |
|--------|-------------------------|
| POST   | /v1/chat/completions    |

Parameters

| Parameter   | Type    | Required | Description                                              |
|-------------|---------|----------|----------------------------------------------------------|
| model       | string  | ✓ Yes    | The model ID to use (e.g. gpt-5.4, claude-sonnet-4.6)    |
| messages    | array   | ✓ Yes    | Array of message objects with role and content           |
| stream      | boolean | No       | If true, send partial message tokens as SSE events       |
| temperature | number  | No       | Sampling temperature between 0 and 2. Default: 1         |
| max_tokens  | integer | No       | Maximum tokens to generate in the response               |
| top_p       | number  | No       | Nucleus sampling probability. Default: 1                 |
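Putting the optional parameters together, a request body might look like the following sketch (the values are illustrative, not recommendations):

```python
import json

# Illustrative request body combining the optional parameters above.
payload = {
    "model": "gpt-5.4",
    "messages": [
        {"role": "user", "content": "Summarize this paragraph in one sentence."}
    ],
    "temperature": 0.7,  # lower than the default of 1 for more focused output
    "max_tokens": 200,   # cap the response length
    "stream": False,     # set True to receive SSE events instead
}

body = json.dumps(payload)  # this JSON string is what goes on the wire
```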

Streaming

Set stream: true to receive Server-Sent Events (SSE) as tokens are generated. All models support streaming.

response = client.chat.completions.create(
    model="claude-sonnet-4.6",
    messages=[{"role": "user", "content": "Write me a poem."}],
    stream=True,
)

for chunk in response:
    print(chunk.choices[0].delta.content or "", end="")

Function Calling

Function calling (tool use) is supported for models that natively support it, including GPT-5.4 and Claude Sonnet 4.6. Pass the tools parameter as you would with the standard OpenAI API.
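As a minimal sketch, a tool definition in the OpenAI function-calling format looks like this (get_weather is a hypothetical function used only for illustration):

```python
# A minimal tool definition in the OpenAI function-calling format.
# get_weather is a hypothetical function, shown only for illustration.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {"type": "string", "description": "City name"}
                },
                "required": ["city"],
            },
        },
    }
]

# The list is passed to the same endpoint as any other chat request, e.g.:
# client.chat.completions.create(model="gpt-5.4", messages=..., tools=tools)
```

When the model decides to call a tool, the response's `tool_calls` field carries the function name and JSON arguments, which your code executes and returns in a follow-up message.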

Supported Models

All models below are accessible with your single APIKEY FUN key.

| Model ID          | Provider  | Context | Supports                            |
|-------------------|-----------|---------|-------------------------------------|
| gpt-5.4           | OpenAI    | 1M      | Chat, Vision, Functions, Streaming  |
| gpt-5.4-mini      | OpenAI    | 1M      | Chat, Vision, Functions, Streaming  |
| gpt-5.2           | OpenAI    | 256K    | Chat, Vision, Functions, Streaming  |
| claude-opus-4.6   | Anthropic | 1M      | Chat, Vision, Tools, Streaming      |
| claude-sonnet-4.6 | Anthropic | 1M      | Chat, Vision, Tools, Streaming      |
| claude-haiku-4.5  | Anthropic | 200K    | Chat, Vision, Tools, Streaming      |
| gemini-3.1-pro    | Google    | 1M      | Chat, Vision, Streaming             |
| gemini-3-flash    | Google    | 1M      | Chat, Vision, Streaming             |
| deepseek-v3.2     | DeepSeek  | 128K    | Chat, Streaming                     |
| grok-4.2          | xAI       | 2M      | Chat, Vision, Streaming             |

Model Pricing

Pricing is per 1 million tokens, with separate rates for input and output. Exact rates are shown in your dashboard.

| Model             | Input (per 1M tokens) | Output (per 1M tokens) |
|-------------------|-----------------------|------------------------|
| gpt-5.4           | $2.50                 | $10.00                 |
| gpt-5.4-mini      | $0.15                 | $0.60                  |
| claude-sonnet-4.6 | $3.00                 | $15.00                 |
| claude-haiku-4.5  | $0.80                 | $4.00                  |
| gemini-3.1-pro    | $1.25                 | $5.00                  |
| gemini-3-flash    | $0.075                | $0.30                  |
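To estimate a request's cost from these per-million-token rates, multiply each token count by its rate. A simple sketch:

```python
def estimate_cost(input_tokens: int, output_tokens: int,
                  input_rate: float, output_rate: float) -> float:
    """Estimate cost in dollars; rates are dollars per 1M tokens."""
    return (input_tokens / 1_000_000) * input_rate \
         + (output_tokens / 1_000_000) * output_rate

# Example: 10,000 input + 2,000 output tokens on gpt-5.4 ($2.50 / $10.00)
cost = estimate_cost(10_000, 2_000, 2.50, 10.00)  # ≈ $0.045
```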

Rate Limits

| Plan           | Requests / min | Tokens / min |
|----------------|----------------|--------------|
| Starter (Free) | 60             | 100,000      |
| Developer      | 300            | 500,000      |
| Team           | 1,000          | 2,000,000    |
| Enterprise     | Custom         | Custom       |

When a rate limit is exceeded, the API returns a 429 Too Many Requests status. Implement exponential backoff for production applications.
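A minimal backoff sketch, assuming your request function raises an exception on 429 or 5xx responses (RetryableError here is a hypothetical exception, not part of any SDK):

```python
import random
import time

class RetryableError(Exception):
    """Raised by request_fn on a 429 or 5xx response (hypothetical)."""

def with_backoff(request_fn, max_retries: int = 5, base_delay: float = 0.5):
    """Call request_fn, retrying on RetryableError with exponential backoff."""
    for attempt in range(max_retries):
        try:
            return request_fn()
        except RetryableError:
            if attempt == max_retries - 1:
                raise  # out of retries; surface the error
            # Exponential backoff with jitter: ~0.5s, 1s, 2s, ...
            delay = base_delay * (2 ** attempt) + random.uniform(0, 0.1)
            time.sleep(delay)
```

Jitter spreads out retries from concurrent clients so they do not all hit the rate limit again at the same instant.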

Error Codes

| HTTP Status | Code                 | Description                                                |
|-------------|----------------------|------------------------------------------------------------|
| 400         | invalid_request      | The request is malformed or missing required parameters.   |
| 401         | authentication_error | Invalid or missing API key.                                |
| 403         | permission_denied    | Your key does not have access to this resource.            |
| 404         | model_not_found      | The specified model ID does not exist.                     |
| 429         | rate_limit_exceeded  | Too many requests. Retry with exponential backoff.         |
| 500         | internal_error       | An unexpected server-side error occurred. Please retry.    |
| 503         | provider_unavailable | The upstream AI provider is currently unavailable.         |

Best Practices

  • Always load API keys from environment variables — never hardcode them.
  • Implement retry logic with exponential backoff for 429 and 5xx errors.
  • Use the cheapest appropriate model for your use case (e.g. gpt-5.4-mini for simple tasks).
  • Set max_tokens to limit output length and prevent runaway costs.
  • Use streaming for long outputs to improve user-perceived latency.
  • Monitor your usage in the dashboard and set spending alerts.

📬 Have questions? Email us at admin@apikey.fun and our team will respond within 24 hours.