
Troubleshooting

Solutions for common errors and issues.

Connection Issues

Connection Timeout

Causes and solutions:

  1. Network Issue: Check your internet connection
  2. Wrong URL: Verify the correct endpoint (a quick reachability check is sketched after this list):
    • OpenAI compatible: https://api.fizzlyapi.com/v1
    • Anthropic compatible: https://api.fizzlyapi.com
  3. Firewall/Proxy: Ensure your network allows HTTPS connections to api.fizzlyapi.com
  4. VPN: Try disabling VPN if you’re using one
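
To confirm the endpoint is reachable at all, you can send a plain HTTPS request before debugging the SDK itself. The sketch below assumes the httpx package is installed and uses the standard /v1/models path of the OpenAI-compatible endpoint; even a 401 response proves the host is reachable, while a connect error or timeout points at the network.

import httpx

# Minimal reachability check (not an authenticated call).
try:
    response = httpx.get("https://api.fizzlyapi.com/v1/models", timeout=10)
    print("Host reachable, HTTP status:", response.status_code)
except httpx.ConnectError as exc:
    print("Cannot reach api.fizzlyapi.com:", exc)
except httpx.TimeoutException as exc:
    print("Connection timed out:", exc)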

SSL Certificate Error

Cause: Your system's CA certificate store is missing or outdated. Update it for your platform:

# Update CA certificates
# Ubuntu/Debian
sudo apt update && sudo apt install ca-certificates
 
# macOS
brew install ca-certificates
 
# Windows
# Update Windows and restart
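
If the error persists after updating, the standard-library sketch below attempts a TLS handshake against api.fizzlyapi.com using your system trust store; a clean handshake suggests the problem lies in your client configuration rather than the certificate store.

import socket
import ssl

# Try a TLS handshake with the API host using the system's trusted CAs.
context = ssl.create_default_context()
with socket.create_connection(("api.fizzlyapi.com", 443), timeout=10) as sock:
    with context.wrap_socket(sock, server_hostname="api.fizzlyapi.com") as tls:
        cert = tls.getpeercert()
        print("TLS version:", tls.version())
        print("Certificate expires:", cert["notAfter"])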

Proxy Configuration

If you’re behind a corporate proxy or firewall, set the standard proxy environment variables:

export HTTP_PROXY="http://proxy.example.com:8080"
export HTTPS_PROXY="http://proxy.example.com:8080"
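
These variables are respected by most HTTP clients. If you prefer to configure the proxy explicitly in code, one option with the OpenAI Python SDK is to pass a pre-configured httpx client; the proxy URL below is a placeholder, and older httpx releases use the proxies= argument instead of proxy=.

import httpx
from openai import OpenAI

# Route all SDK traffic through the corporate proxy (placeholder URL).
client = OpenAI(
    base_url="https://api.fizzlyapi.com/v1",
    http_client=httpx.Client(proxy="http://proxy.example.com:8080"),
)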

Authentication Errors

401 - Invalid API Key

Cause: API key is invalid or expired.

Solution:

  1. Verify your API key in the Fizzly Console
  2. Ensure the key has not been deleted or disabled
  3. Copy and paste the key again, watching for extra spaces (see the sketch below)
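
Stray whitespace or newline characters are a common side effect of copy-pasting keys. A small defensive sketch, assuming the key is stored in the OPENAI_API_KEY environment variable:

import os
from openai import OpenAI

# Strip whitespace/newlines that can sneak in when a key is copy-pasted.
api_key = os.environ["OPENAI_API_KEY"].strip()
client = OpenAI(base_url="https://api.fizzlyapi.com/v1", api_key=api_key)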

403 - Insufficient Permissions

Cause: Your account balance is depleted or your API key lacks the required permissions.

Solution:

  1. Check your balance in the Dashboard
  2. Top up your account under Billing → Top Up
  3. Verify API key permissions

Rate Limiting

429 - Rate Limit Exceeded

Cause: Too many requests in a short period.

Solution:

  1. Reduce the frequency of requests
  2. Implement exponential backoff
  3. Contact support for higher limits if needed

Example retry logic:

import time
from openai import OpenAI, RateLimitError
 
client = OpenAI(base_url="https://api.fizzlyapi.com/v1")
 
def call_with_retry(max_retries=3):
    for i in range(max_retries):
        try:
            return client.chat.completions.create(
                model="gpt-4o",
                messages=[{"role": "user", "content": "Hello"}]
            )
        except RateLimitError:
            if i < max_retries - 1:
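                # Exponential backoff: wait 1s, 2s, 4s, ... before retrying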
                time.sleep(2 ** i)
            else:
                raise

Model Errors

Model Not Available

Cause: Model name is incorrect or not supported.

Solution:

  1. Verify the model name is spelled correctly
  2. Check the list of supported models (a sketch for querying them via the API follows this list)
  3. Try an alternative model
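
Because the endpoint is OpenAI compatible, you can usually also list the models available to your key and confirm the exact identifier; this sketch assumes the gateway implements the standard /v1/models route.

from openai import OpenAI

client = OpenAI(base_url="https://api.fizzlyapi.com/v1")

# Print the model IDs this API key can access.
for model in client.models.list():
    print(model.id)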

Response Too Slow

For faster responses:

  1. Use a faster model (e.g., anthropic/claude-haiku-3.5 instead of anthropic/claude-opus-4)
  2. Reduce the max_tokens parameter
  3. Enable streaming so the response appears as it is generated (example after this list)
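
A minimal streaming sketch with the OpenAI-compatible endpoint, printing tokens as they arrive:

from openai import OpenAI

client = OpenAI(base_url="https://api.fizzlyapi.com/v1")

# Stream the reply token by token instead of waiting for the full response.
stream = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello"}],
    max_tokens=256,
    stream=True,
)
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)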

Environment Variables

Variables Not Loading

# Check if variables are set (Windows PowerShell)
echo $env:ANTHROPIC_API_KEY
echo $env:OPENAI_API_KEY
 
# For permanent settings, add to PowerShell profile
notepad $PROFILE
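
On macOS/Linux shells the equivalent check is echo $OPENAI_API_KEY. Alternatively, a quick Python check shows which keys the current process can actually see:

import os

# Report which API keys are visible to this process (values are not printed).
for name in ("ANTHROPIC_API_KEY", "OPENAI_API_KEY"):
    print(name, "is set" if os.environ.get(name) else "is NOT set")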

Still having issues? Contact [email protected] with error details.