
Error Response Format

All errors return a consistent JSON structure:
{
  "error": {
    "type": "rate_limit_exceeded",
    "message": "Daily request limit reached for closed-source models. Resets at midnight UTC.",
    "code": 429
  }
}
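Since every error uses this envelope, a failed response body can be parsed uniformly before branching on the error type. A minimal sketch (the `parse_error` helper name is illustrative, not part of any SDK):

```python
def parse_error(body: dict) -> tuple[str, str, int]:
    """Extract type, message, and code from the error envelope shown above."""
    err = body.get("error", {})
    return err.get("type", "unknown"), err.get("message", ""), err.get("code", 0)

body = {
    "error": {
        "type": "rate_limit_exceeded",
        "message": "Daily request limit reached for closed-source models. Resets at midnight UTC.",
        "code": 429,
    }
}
err_type, message, code = parse_error(body)
print(err_type, code)  # rate_limit_exceeded 429
```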

Complete Error Reference

HTTP | Type | Cause | Fix
400 | bad_request | Malformed JSON, invalid parameters, or an unsupported parameter for the model | Check the request body against the API reference
401 | auth_error | Missing, invalid, or revoked API key | Verify your Authorization: Bearer sk-samurai-... header
402 | insufficient_credits | PAYG credit balance is $0.00 | Add credits at dashboard/billing
403 | permission_denied | Model requires a higher plan tier | Check pricing plans and upgrade
404 | not_found | Model ID doesn’t exist or is unavailable | Check the models list for valid IDs
422 | validation_error | Parameter values failed validation (e.g. temperature > 2) | Check parameter ranges in the docs
429 | rate_limit_exceeded | Daily request limit hit for your plan | Wait until midnight UTC or upgrade your plan
500 | server_error | Internal server error | Retry with exponential backoff
503 | model_unavailable | Upstream provider (OpenAI, Anthropic, etc.) is down | Try a fallback model or retry
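The Fix column above implies a simple split: 429/500/503 are worth retrying, while 4xx client errors should be fixed rather than retried. A sketch of that classification (the set contents follow the table, not an official SDK API):

```python
# Retryable: rate limits and server/upstream failures -- back off and try again.
RETRYABLE = {429, 500, 503}
# Terminal: problems with the request itself -- retrying the same call won't help.
TERMINAL = {400, 401, 402, 403, 404, 422}

def should_retry(status: int) -> bool:
    return status in RETRYABLE

print(should_retry(503))  # True
print(should_retry(401))  # False
```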

Common Errors & Solutions

Symptoms: Every request fails immediately with auth_error.
Causes & Fixes:
  • Key starts with sk-samurai- but was revoked → Create a new key in the dashboard
  • Passing OPENAI_API_KEY instead of your Samurai key → Use SAMURAI_API_KEY
  • Extra spaces or newlines in the key → Strip whitespace with .strip()
  • Header format wrong → Must be Authorization: Bearer sk-samurai-YOUR_KEY
import os
key = os.environ.get("SAMURAI_API_KEY", "").strip()
# Verify it looks right
assert key.startswith("sk-samurai-"), f"Bad key format: {key[:20]}..."
Symptoms: Requests to Pro models (o1, Sora, DALL-E 3 HD) fail with insufficient_credits.
Fix:
  1. Go to Dashboard → Billing
  2. Purchase PAYG credits ($5 minimum)
  3. Credits are applied instantly
Note: Subscription plans (Free/Starter/Pro) give request quotas. PAYG credits are separate and needed for Pro-tier models.
Symptoms: Requests work for some models but not others.
Cause: Your current plan doesn’t include that model class.
Model Class | Minimum Plan
Open-source (Llama, Mistral, DeepSeek) | Free
Closed-source (GPT-4o, Claude, Gemini) | Free (limited)
Pro (o1, Sora, DALL-E 3 HD) | Pro + PAYG credits
Fix: Upgrade at Dashboard → Billing
Symptoms: Requests fail mid-session, especially for high-volume use.
Plan limits (requests per day):
Plan | Open-source | Closed-source | Pro
Free | 150 | 70 | 0
Starter | 2,000 | 1,000 | 0
Pro | 4,500 | 2,500 | 650
Fix: Implement exponential backoff (see below) or upgrade your plan.
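Because limits reset at midnight UTC, a client can also track its own daily usage and fail fast instead of burning requests into 429s. A minimal sketch (the `DailyQuota` class is hypothetical; the limits dict copies the plan table above):

```python
from datetime import datetime, timezone

DAILY_LIMITS = {  # requests per day, from the plan table above
    "free":    {"open-source": 150,  "closed-source": 70,   "pro": 0},
    "starter": {"open-source": 2000, "closed-source": 1000, "pro": 0},
    "pro":     {"open-source": 4500, "closed-source": 2500, "pro": 650},
}

class DailyQuota:
    """Client-side counter that resets at midnight UTC, mirroring the server."""

    def __init__(self, plan: str, model_class: str):
        self.limit = DAILY_LIMITS[plan][model_class]
        self.count = 0
        self.day = datetime.now(timezone.utc).date()

    def try_acquire(self) -> bool:
        today = datetime.now(timezone.utc).date()
        if today != self.day:  # new UTC day: the server-side quota has reset
            self.day, self.count = today, 0
        if self.count >= self.limit:
            return False       # this request would be a 429; skip it
        self.count += 1
        return True

quota = DailyQuota("free", "closed-source")
print(all(quota.try_acquire() for _ in range(70)))  # True: first 70 fit the quota
print(quota.try_acquire())  # False: the 71st would exceed the Free limit
```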
Symptoms: A specific model fails but others work fine.
Cause: The upstream provider (OpenAI, Anthropic, Google) is experiencing an outage.
Fix: Use a fallback model:
FALLBACKS = {
    "gpt-4o": "claude-3-5-sonnet-20241022",
    "claude-3-5-sonnet-20241022": "gemini-2.0-flash",
    "gemini-2.0-flash": "deepseek-chat",
}
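Such a mapping can be expanded into an ordered list of models to try in sequence; a sketch (the `fallback_chain` helper is illustrative, and the dict repeats the mapping above so the snippet stands alone):

```python
FALLBACKS = {  # same mapping as above
    "gpt-4o": "claude-3-5-sonnet-20241022",
    "claude-3-5-sonnet-20241022": "gemini-2.0-flash",
    "gemini-2.0-flash": "deepseek-chat",
}

def fallback_chain(model: str) -> list[str]:
    """Follow the mapping from a starting model to build an ordered try-list."""
    chain = [model]
    while model in FALLBACKS:
        model = FALLBACKS[model]
        chain.append(model)
    return chain

print(fallback_chain("gpt-4o"))
# ['gpt-4o', 'claude-3-5-sonnet-20241022', 'gemini-2.0-flash', 'deepseek-chat']
```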

Retry with Exponential Backoff

import time
import random
from openai import OpenAI, RateLimitError, APIStatusError

client = OpenAI(
    api_key="sk-samurai-YOUR_KEY",
    base_url="https://www.samuraiapi.in/v1"
)

def chat_with_retry(messages: list, model: str = "gpt-4o", max_retries: int = 5):
    """Chat with automatic retry on rate limits and server errors."""
    for attempt in range(max_retries):
        try:
            return client.chat.completions.create(
                model=model,
                messages=messages
            )
        except RateLimitError:
            if attempt == max_retries - 1:
                raise
            wait = (2 ** attempt) + random.uniform(0, 1)
            print(f"Rate limited. Retrying in {wait:.1f}s (attempt {attempt + 1}/{max_retries})")
            time.sleep(wait)
        except APIStatusError as e:
            if e.status_code >= 500 and attempt < max_retries - 1:
                wait = (2 ** attempt) + random.uniform(0, 1)
                print(f"Server error {e.status_code}. Retrying in {wait:.1f}s")
                time.sleep(wait)
            else:
                raise

Model Fallback Pattern

FALLBACK_CHAIN = [
    "gpt-4o",
    "claude-3-5-sonnet-20241022",
    "gemini-2.0-flash",
    "deepseek-chat",  # cheapest fallback
]

def chat_with_fallback(messages: list):
    for model in FALLBACK_CHAIN:
        try:
            return client.chat.completions.create(
                model=model,
                messages=messages
            )
        except Exception as e:
            print(f"Model {model} failed: {e}. Trying next...")
    raise RuntimeError("All models failed")

Node.js Error Handling

import OpenAI, { APIError, RateLimitError, AuthenticationError } from 'openai';

const client = new OpenAI({
  apiKey: process.env.SAMURAI_API_KEY,
  baseURL: 'https://www.samuraiapi.in/v1'
});

try {
  const response = await client.chat.completions.create({
    model: 'gpt-4o',
    messages: [{ role: 'user', content: 'Hello!' }]
  });
  console.log(response.choices[0].message.content);
} catch (error) {
  if (error instanceof AuthenticationError) {
    console.error('Invalid API key — check your SAMURAI_API_KEY env var');
  } else if (error instanceof RateLimitError) {
    console.error('Rate limited — implement backoff or upgrade plan');
  } else if (error instanceof APIError) {
    console.error(`API error ${error.status}: ${error.message}`);
  } else {
    throw error;
  }
}

Python Error Handling

from openai import (
    AuthenticationError,
    RateLimitError,
    APIStatusError,
    APIConnectionError
)

try:
    # client as configured in the earlier examples
    response = client.chat.completions.create(model="gpt-4o", messages=messages)
except AuthenticationError:
    print("Invalid API key")
except RateLimitError:
    print("Rate limit hit — wait and retry")
except APIStatusError as e:
    print(f"API error {e.status_code}: {e.message}")
except APIConnectionError:
    print("Network error — check your connection")