Troubleshooting
Solutions for common errors and issues.
Connection Issues
Connection Timeout
Causes and solutions:
- Network Issue: Check your internet connection
- Wrong URL: Verify the correct endpoint:
  - OpenAI compatible: https://api.fizzlyapi.com/v1
  - Anthropic compatible: https://api.fizzlyapi.com
- Firewall/Proxy: Ensure your network allows HTTPS connections to api.fizzlyapi.com
- VPN: Try disabling VPN if you’re using one
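To tell a network-path problem apart from an application-level one, a quick TCP reachability check can help. This is a stdlib sketch (the host comes from the endpoints above; the helper name is our own):

```python
import socket

def can_reach(host: str, port: int = 443, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Covers refused connections, DNS failures, and timeouts
        return False

# Example: can_reach("api.fizzlyapi.com")
```

If this returns False while other sites load normally, the block is likely a firewall, proxy, or VPN rule rather than the API itself.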
SSL Certificate Error
# Update CA certificates
# Ubuntu/Debian
sudo apt update && sudo apt install ca-certificates
# macOS
brew install ca-certificates
# Windows
# Update Windows and restart
Proxy Configuration
If you’re behind a corporate firewall:
export HTTP_PROXY="http://proxy.example.com:8080"
export HTTPS_PROXY="http://proxy.example.com:8080"
Authentication Errors
401 - Invalid API Key
Cause: API key is invalid or expired.
Solution:
- Verify your API key in the Fizzly Console
- Ensure the key has not been deleted or disabled
- Copy and paste the key again (watch for extra spaces)
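Stray whitespace from copy-paste is a frequent cause of 401s. A small sketch (our own helper, not part of any SDK) that flags formatting problems before you send a request:

```python
def key_format_problems(key: str) -> list:
    """Return a list of formatting issues found in an API key string."""
    problems = []
    if not key:
        problems.append("key is empty")
    if key != key.strip():
        problems.append("leading or trailing whitespace")
    if any(c in key for c in "\n\r\t"):
        problems.append("embedded newline or tab")
    return problems

# key_format_problems(" sk-example ") -> ["leading or trailing whitespace"]
```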
403 - Insufficient Permissions
Cause: Your account balance is depleted or API key lacks permissions.
Solution:
- Check your balance in the Dashboard
- Top up your account in Billing → Top Up
- Verify API key permissions
Rate Limiting
429 - Rate Limit Exceeded
Cause: Too many requests in a short period.
Solution:
- Reduce the frequency of requests
- Implement exponential backoff
- Contact support for higher limits if needed
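Exponential backoff doubles the wait after each failed attempt. A sketch of the delay schedule, with optional random jitter (an addition of ours, useful to avoid many clients retrying in lockstep):

```python
import random

def backoff_delays(max_retries: int = 3, base: float = 1.0, jitter: bool = False) -> list:
    """Delays in seconds before each retry: base * 2**i, optionally plus jitter."""
    delays = [base * (2 ** i) for i in range(max_retries)]
    if jitter:
        # Add up to `base` seconds of random noise to each delay
        delays = [d + random.uniform(0, base) for d in delays]
    return delays

# backoff_delays(3) -> [1.0, 2.0, 4.0]
```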
Example retry logic:
import time

from openai import OpenAI, RateLimitError

client = OpenAI(base_url="https://api.fizzlyapi.com/v1")

def call_with_retry(max_retries=3):
    for i in range(max_retries):
        try:
            return client.chat.completions.create(
                model="gpt-4o",
                messages=[{"role": "user", "content": "Hello"}],
            )
        except RateLimitError:
            if i < max_retries - 1:
                # Back off exponentially: wait 1 s, 2 s, 4 s, ...
                time.sleep(2 ** i)
            else:
                # Out of retries: surface the error to the caller
                raise
Model Errors
Model Not Available
Cause: Model name is incorrect or not supported.
Solution:
- Verify the model name is spelled correctly
- Check the supported models
- Try an alternative model
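Typos in model names are easy to make. A small stdlib sketch that suggests the closest supported name (the catalog below is illustrative only, not Fizzly's actual model list):

```python
import difflib

def suggest_model(name, supported):
    """Return the closest supported model name, or None if nothing is similar."""
    matches = difflib.get_close_matches(name, supported, n=1)
    return matches[0] if matches else None

# Illustrative catalog -- consult the real supported-models list instead
catalog = ["gpt-4o", "anthropic/claude-haiku-3.5", "anthropic/claude-opus-4"]
# suggest_model("gpt4o", catalog) -> "gpt-4o"
```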
Response Too Slow
For faster responses:
- Use a faster model (e.g., anthropic/claude-haiku-3.5 instead of anthropic/claude-opus-4)
- Reduce the max_tokens parameter
- Enable streaming for perceived faster responses
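The three levers above can be combined in a single request. A sketch of the keyword arguments you would pass to chat.completions.create in the OpenAI-compatible SDK (the values are illustrative starting points, not Fizzly recommendations):

```python
# Latency-oriented settings, e.g. client.chat.completions.create(**fast_request, messages=...)
fast_request = {
    "model": "anthropic/claude-haiku-3.5",  # smaller, faster model
    "max_tokens": 256,                      # cap output length
    "stream": True,                         # receive tokens as they are generated
}
```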
Environment Variables
Variables Not Loading
# Check if variables are set (PowerShell)
echo $env:ANTHROPIC_API_KEY
echo $env:OPENAI_API_KEY
# For permanent settings, add to PowerShell profile
notepad $PROFILE
Still having issues? Contact [email protected] with error details.