Environment Setup
Configure Auto-Browse with different LLM providers and environment settings
Auto-Browse supports various Large Language Model (LLM) providers to power its AI capabilities. This guide covers setup for each supported provider.
OpenAI Configuration
OpenAI is the default and currently the only supported LLM provider.
Setup Steps
- Get your OpenAI API key:
  - Go to OpenAI’s platform
  - Create an account or sign in
  - Navigate to API keys
  - Create a new API key
- Configure your environment:
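A minimal sketch, assuming Auto-Browse reads its configuration from a `.env` file in your project root (or from the shell environment). `OPENAI_API_KEY` is the standard variable name read by OpenAI clients; the placeholder value is yours to replace with the key created in the previous step.

```bash
# .env — loaded from the project root
# Replace the placeholder with your actual OpenAI API key.
OPENAI_API_KEY=your-openai-api-key
```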
Model Options
The following OpenAI models are supported:
- gpt-4o-mini (default)
- Other compatible GPT models
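If you want to override the default model, a hedged sketch is shown below. The `LLM_MODEL` variable name is an assumption for illustration only; check your Auto-Browse version's configuration reference for the exact key it reads.

```bash
# .env — hypothetical model override (variable name is illustrative, not confirmed)
LLM_MODEL=gpt-4o-mini
```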
Upcoming Provider Support
The following LLM providers will be supported in future releases:
Anthropic Claude
Google Gemini
Local LLMs
Support for running models locally using providers like:
- Ollama
- LMStudio
- LocalAI
Configuration will be available in future releases.
Meta Llama
Integration with Meta’s Llama models will be supported through various hosting options.
Advanced Configuration
Timeout Settings
Configure timeouts for AI operations:
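A sketch under the assumption that timeouts are set through an environment variable; the name and unit below are illustrative, not confirmed by the Auto-Browse reference.

```bash
# .env — hypothetical timeout for AI operations, in milliseconds (name and unit are assumptions)
AUTO_BROWSE_TIMEOUT=30000
```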
Debug Mode
Enable detailed logging for troubleshooting:
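A minimal sketch, assuming debug logging is toggled with an environment variable; the variable name is an assumption for illustration.

```bash
# .env — hypothetical debug flag (variable name is an assumption)
AUTO_BROWSE_DEBUG=true
```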
Custom Endpoints
For enterprise setups or custom model deployments:
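OpenAI-compatible gateways and self-hosted deployments are usually reached by overriding the client's base URL. The sketch below uses the widely used `OPENAI_BASE_URL` convention; confirm the exact variable Auto-Browse reads before relying on it.

```bash
# .env — point the OpenAI client at a custom, OpenAI-compatible endpoint
# OPENAI_BASE_URL is a common convention; the exact variable name may differ in Auto-Browse.
OPENAI_BASE_URL=https://your-gateway.example.com/v1
OPENAI_API_KEY=your-gateway-api-key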
Best Practices
- API Key Security
  - Never commit API keys to version control
  - Use environment variables or a secret manager (see the sketch after this list)
  - Rotate keys periodically
- Model Selection
  - Start with the default model for basic operations
  - Test different models for specific use cases
  - Consider cost vs. performance tradeoffs
- Error Handling
  - Handle API errors gracefully in your automation code
  - Monitor API rate limits
  - Set appropriate timeouts
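A minimal sketch of the API-key-security practice above, assuming the key lives in a local `.env` file that is kept out of version control and supplied through the environment (or your CI's secret store) at run time.

```bash
# Keep secrets out of the repository: ignore the local .env file
echo ".env" >> .gitignore

# Provide the key through the environment (or your CI secret store) instead of hardcoding it
export OPENAI_API_KEY=your-openai-api-key
```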
Troubleshooting
Common issues and solutions:
- API Key Issues: confirm OPENAI_API_KEY is set in the environment Auto-Browse actually runs in and that the key is still valid (quick check below)
- Model Availability: make sure the configured model is available to your OpenAI account
- Rate Limits: if requests fail with rate-limit errors, reduce concurrency or retry with backoff
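A quick shell check for the most common problem, a missing or empty key in the environment where Auto-Browse runs:

```bash
# Prints "set" if OPENAI_API_KEY is defined and non-empty, "missing" otherwise
[ -n "$OPENAI_API_KEY" ] && echo "set" || echo "missing"
```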
Next Steps
- Try the quickstart guide with your configuration
- Explore supported actions
- Learn about best practices