Environment Setup

Auto-Browse supports multiple large language model (LLM) providers to power its AI capabilities. This guide covers setup for each supported provider.

Supported Providers

OpenAI (Default Provider)

  1. Get your OpenAI API key:

    • Go to OpenAI’s platform
    • Create an account or sign in
    • Navigate to API keys
    • Create a new API key
  2. Configure your environment:

OPENAI_API_KEY=your_openai_api_key_here
LLM_PROVIDER=openai  # Optional, defaults to openai
AUTOBROWSE_LLM_MODEL=gpt-4o-mini  # Optional, defaults to gpt-4o-mini
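The three variables above can be read at runtime with ordinary environment lookups. A minimal Python sketch (the variable names and defaults match the block above; the helper itself is illustrative, not part of Auto-Browse):

```python
import os

def load_llm_config() -> dict:
    """Read Auto-Browse LLM settings from the environment, applying defaults."""
    api_key = os.environ.get("OPENAI_API_KEY")
    if not api_key:
        raise RuntimeError("OPENAI_API_KEY is not set")
    return {
        "api_key": api_key,
        # Optional variables fall back to the documented defaults.
        "provider": os.environ.get("LLM_PROVIDER", "openai"),
        "model": os.environ.get("AUTOBROWSE_LLM_MODEL", "gpt-4o-mini"),
    }
```

Failing fast when the key is missing gives a clearer error than letting the first API call reject the request later.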

Model Options

OpenAI models supported:

  • gpt-4o-mini (default)
  • Other compatible GPT-4-family models

Google AI (Gemini)

  1. Get your Google AI API key:

    • Go to Google AI Studio
    • Sign in with your Google account
    • Create a new API key
  2. Configure your environment:

GOOGLE_API_KEY=your_google_key_here
LLM_PROVIDER=google
AUTOBROWSE_LLM_MODEL=gemini-2.0-flash-lite

Model Options

  • gemini-2.0-flash-lite
  • Other Gemini models

Azure OpenAI

  1. Set up Azure OpenAI:

    • Create an Azure account
    • Set up Azure OpenAI service
    • Deploy a model
    • Get your API details
  2. Configure your environment:

AZURE_OPENAI_API_KEY=your_azure_key_here
AZURE_OPENAI_ENDPOINT=https://your-endpoint.openai.azure.com/
AZURE_OPENAI_API_VERSION=2024-12-01-preview
AZURE_OPENAI_API_DEPLOYMENT_NAME=your-deployment-name
LLM_PROVIDER=azure

Anthropic Claude

  1. Get your Anthropic API key:

    • Go to the Anthropic Console
    • Create an account or sign in
    • Generate a new API key
  2. Configure your environment:

ANTHROPIC_API_KEY=your_anthropic_key_here
LLM_PROVIDER=anthropic
AUTOBROWSE_LLM_MODEL=claude-3

Google Vertex AI

  1. Set up Google Cloud:

    • Create a Google Cloud project
    • Enable Vertex AI API
    • Create service account credentials
  2. Configure your environment:

GOOGLE_APPLICATION_CREDENTIALS=path/to/credentials.json
LLM_PROVIDER=vertex
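Before running, it is worth verifying that `GOOGLE_APPLICATION_CREDENTIALS` actually points at a readable service-account JSON file, since a bad path only surfaces on the first API call. A small sanity-check sketch (not part of Auto-Browse):

```python
import json
import os

def credentials_ok(path: str) -> bool:
    """Return True if the file exists and parses as JSON (a basic sanity check)."""
    if not os.path.isfile(path):
        return False
    try:
        with open(path, "r", encoding="utf-8") as fh:
            json.load(fh)
        return True
    except (OSError, json.JSONDecodeError):
        return False
```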

Ollama (Local Deployment)

  1. Install Ollama:

    • Follow installation instructions at Ollama.ai
    • Start the Ollama service
  2. Configure your environment:

LLM_PROVIDER=ollama
AUTOBROWSE_LLM_MODEL=llama2
BASE_URL=http://localhost:11434  # Optional, this is the default
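Because Ollama runs as a local HTTP service, a quick reachability probe against the base URL catches a stopped daemon before Auto-Browse fails mid-run. An illustrative sketch using Ollama's `/api/tags` model-listing endpoint:

```python
import os
import urllib.error
import urllib.request

def ollama_reachable(timeout: float = 2.0) -> bool:
    """Return True if the Ollama service answers on its base URL."""
    base = os.environ.get("BASE_URL", "http://localhost:11434")  # default from above
    try:
        with urllib.request.urlopen(f"{base}/api/tags", timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False
```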

Advanced Configuration

Timeout Settings

Configure timeouts for AI operations:

AUTO_BROWSE_TIMEOUT=60000  # Default 60s
AUTO_BROWSE_OPERATION_TIMEOUT=10000  # Default 10s
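Both values are in milliseconds. A sketch of reading them with their documented defaults and converting to seconds, which most client libraries expect (the helper name is hypothetical):

```python
import os

def timeout_seconds(name: str, default_ms: int) -> float:
    """Read a millisecond timeout from the environment and return seconds."""
    return int(os.environ.get(name, default_ms)) / 1000.0
```

For example, `timeout_seconds("AUTO_BROWSE_TIMEOUT", 60000)` yields `60.0` when the variable is unset.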

Debug Mode

Enable detailed logging for troubleshooting:

AUTO_BROWSE_DEBUG=true  # Enables debug logging

Custom Endpoints

For enterprise setups or custom model deployments:

LLM_API_ENDPOINT=https://your-custom-endpoint

Best Practices

  1. API Key Security

    • Never commit API keys to version control
    • Use environment variables or secret management
    • Rotate keys periodically
  2. Model Selection

    • Start with default model for basic operations
    • Test different models for specific use cases
    • Consider cost vs. performance tradeoffs
  3. Error Handling

    • Implement proper error handling
    • Monitor API rate limits
    • Set appropriate timeouts
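The rate-limit advice above is typically implemented as exponential backoff with jitter. A generic sketch (the exception type worth retrying on depends on your provider's client library; catching all exceptions here is for illustration only):

```python
import random
import time

def with_backoff(call, retries: int = 5, base_delay: float = 1.0):
    """Retry `call` on failure, doubling the delay each attempt, plus jitter."""
    for attempt in range(retries):
        try:
            return call()
        except Exception:
            if attempt == retries - 1:
                raise  # out of retries; surface the error
            delay = base_delay * (2 ** attempt) + random.uniform(0, 0.5)
            time.sleep(delay)
```

Jitter spreads retries out so that many clients hitting the same limit do not all retry in lockstep.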

Troubleshooting

Common issues and solutions:

  1. API Key Issues
     Error: Authentication failed
     Solution: Verify that the API key is correct and has the required permissions
  2. Model Availability
     Error: Model not available
     Solution: Confirm the model name and its availability in your region
  3. Rate Limits
     Error: Rate limit exceeded
     Solution: Implement a backoff strategy or upgrade your API tier

Next Steps