Provider Categories

  • Enterprise & Research Models
  • Specialized & Fast Inference
  • Open Source & Community
  • Unified & Routing
  • Cloud & Enterprise
  • Local & Private

Choosing the Right Provider

With 19+ AI providers available, selecting the right model depends on your specific needs. Consider these key factors:
Performance:
  • Ultra-fast inference: Groq (LPU technology), Together AI
  • Best reasoning: Anthropic Claude, DeepSeek, OpenAI o1
  • Balanced performance: OpenAI GPT-4, Google Gemini, Cohere
  • Local speed: Ollama, LM Studio (no network latency)

Cost:
  • Free/Low-cost: Local models (Ollama, LM Studio), OpenRouter
  • Budget-friendly: Together AI, HuggingFace, Hyperbolic
  • Premium: Anthropic, OpenAI, Google (higher quality)
  • Enterprise: AWS Bedrock, GitHub Models (included benefits)

Privacy & Security:
  • Maximum privacy: Local models (Ollama, LM Studio) - data never leaves your device
  • Enterprise-grade: AWS Bedrock, Anthropic (SOC 2 compliant)
  • Cloud security: OpenAI, Google, Cohere (encrypted transmission)
  • Specialized: Perplexity (search integration with privacy considerations)

Capabilities:
  • Code generation: all providers, with specialized support from Cohere, Together AI, GitHub
  • Multimodal: Google Gemini, OpenAI GPT-4 Vision, Moonshot
  • Long context: Claude (200K+), Gemini (1M+), GPT-4 (128K)
  • Function calling: OpenAI, Anthropic, Google, Cohere
  • Search integration: Perplexity (real-time web search)
  • Multilingual: Cohere, Google, Moonshot (Chinese), Mistral

Use cases:
  • Rapid prototyping: Groq, Together AI (fast iteration)
  • Production applications: Anthropic, OpenAI, AWS Bedrock
  • Research & analysis: DeepSeek, Perplexity, Cohere
  • Offline development: Ollama, LM Studio
  • Enterprise integration: AWS Bedrock, GitHub Models
  • Cost optimization: Hyperbolic, HuggingFace, OpenRouter
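As a rough sketch, the selection guide above can be encoded as a simple lookup that maps a primary requirement to candidate providers. The `PROVIDER_GUIDE` table and `suggest_providers` helper are hypothetical names for illustration, not part of CodinIT itself:

```python
# Hypothetical helper mirroring the selection guide above.
# The dictionary keys and function name are illustrative only.
PROVIDER_GUIDE = {
    "speed": ["Groq", "Together AI"],
    "reasoning": ["Anthropic Claude", "DeepSeek", "OpenAI o1"],
    "privacy": ["Ollama", "LM Studio"],
    "cost": ["Hyperbolic", "HuggingFace", "OpenRouter"],
}

def suggest_providers(requirement: str) -> list[str]:
    """Return candidate providers for a requirement, or [] if unknown."""
    return PROVIDER_GUIDE.get(requirement.lower(), [])
```

A real setup would weigh several factors at once (for example, privacy and cost together), but a table like this captures the first-cut decision.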

Quick Start

1. Choose Your Provider
   Select from 19+ providers based on your needs: speed, cost, capabilities, or privacy requirements.
2. Get API Credentials
   For cloud providers, sign up and get API keys. For local providers, download and install the software.
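For cloud providers, a common practice is to keep API keys in environment variables rather than in source files. A minimal sketch, assuming a helper of your own (the `load_api_key` name is hypothetical; each provider documents its own variable, e.g. `OPENAI_API_KEY` for OpenAI):

```python
import os

def load_api_key(env_var: str) -> str:
    """Read an API key from the environment; fail loudly if it is unset.

    Keeping keys in environment variables keeps them out of source control.
    """
    key = os.environ.get(env_var)
    if not key:
        raise RuntimeError(f"Set {env_var} before configuring the provider")
    return key
```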
3. Configure in CodinIT
   Add your credentials in CodinIT’s settings under AI Providers, or use provider-specific setup prompts.
4. Select Your Model
   Choose from available models within your selected provider, considering context limits and capabilities.
5. Start Building
   Begin using AI assistance in your development workflow with the configured provider.

Configuration Tips

Multi-Provider Setup: Configure multiple providers simultaneously and switch between them based on task requirements, cost considerations, or performance needs.
API Key Security: Your API keys are stored locally and never transmitted to CodinIT servers. They are only used to communicate directly with your chosen AI provider.
Rate Limits: Each provider enforces different rate limits and usage quotas. Monitor your usage and consider switching providers for high-volume workloads.
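When a rate limit is hit, providers typically return an error (HTTP 429) that should be retried with exponential backoff. The sketch below is a generic pattern, not CodinIT's implementation; real provider SDKs raise their own exception types, so `RuntimeError` here is a stand-in:

```python
import random
import time

def with_backoff(call, max_retries: int = 5, base: float = 1.0):
    """Retry `call` with exponential backoff plus jitter.

    A common pattern for rate-limited APIs: wait base*2^attempt seconds
    (plus random jitter) between attempts. RuntimeError stands in for a
    provider-specific rate-limit exception.
    """
    for attempt in range(max_retries):
        try:
            return call()
        except RuntimeError:
            time.sleep(base * 2 ** attempt + random.random() * base)
    return call()  # final attempt; let any error propagate
```

Jitter (the random term) spreads out retries so that many clients throttled at the same moment do not all retry in lockstep.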
Provider Switching: Easily switch between providers mid-project. CodinIT maintains separate contexts for different providers, allowing you to leverage specialized capabilities as needed.
Local vs Cloud: Local providers (Ollama, LM Studio) offer maximum privacy but require hardware resources. Cloud providers offer convenience and advanced features but involve data transmission.
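The privacy trade-off above comes down to where requests are sent. A minimal sketch: Ollama serves its documented local API on port 11434, so requests stay on your machine, while a cloud provider's endpoint (OpenAI's shown here) sends data off-device. No request is actually made in this sketch:

```python
def base_url(local: bool) -> str:
    """Return the API base URL for a local or cloud provider.

    Local: Ollama's default API endpoint (data stays on your machine).
    Cloud: OpenAI's public endpoint (data is transmitted off-device).
    """
    if local:
        return "http://localhost:11434/api"
    return "https://api.openai.com/v1"
```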

Next Steps

Provider Ecosystem: With 19+ AI providers, you can choose the right model for every task, from rapid prototyping to production deployment, and from cost optimization to maximum privacy.