Enterprise & Research Models
Anthropic
Claude models with advanced reasoning capabilities
OpenAI
GPT-5 and GPT-4 models for versatile AI assistance
Google
Gemini models with multimodal capabilities
DeepSeek
Advanced reasoning models for complex tasks
Specialized & Fast Inference
Groq
Ultra-fast inference with LPU technology
Together AI
Access to 50+ open-source models
Hyperbolic
Optimized inference for open-source models
Perplexity
AI models with integrated web search
XAI Grok
X.AI’s Grok models with real-time knowledge
Open Source & Community
Cohere
Command R series models for coding and analysis
HuggingFace
Open-source model hub with community models
Mistral AI
Open-source and commercial Mistral models
Moonshot
Chinese language models with Kimi series
Unified & Routing
OpenRouter
Access multiple models through a unified API
OpenAI Compatible
Connect to any OpenAI-compatible API endpoint
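Many providers in this list speak the same OpenAI-compatible chat-completions wire format, which is what makes a generic endpoint option possible. As a rough sketch (the base URL, model name, and API key below are placeholders, not CodinIT defaults), building such a request with only the standard library might look like:

```python
import json
import urllib.request

def chat_completion_request(base_url: str, api_key: str,
                            model: str, prompt: str) -> urllib.request.Request:
    """Build (but do not send) an OpenAI-compatible /chat/completions request."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",  # standard bearer-token auth
        },
        method="POST",
    )

# Swap the base URL to target any compatible endpoint (all values are placeholders).
req = chat_completion_request(
    "https://api.example.com/v1",  # placeholder endpoint
    "sk-placeholder",              # your provider API key
    "my-model",                    # placeholder model name
    "Hello!",
)
# urllib.request.urlopen(req) would send it; omitted here to keep the sketch offline.
```

Because the request shape is shared, switching providers usually means changing only the base URL, key, and model name.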
Cloud & Enterprise
AWS Bedrock
Enterprise-grade AI models through AWS infrastructure
GitHub Models
Access OpenAI and other models through GitHub
Local & Private
Ollama
Run open-source models locally on your own hardware
LM Studio
Desktop app for running local models privately
Choosing the Right Provider
With 19+ AI providers available, selecting the right model depends on your specific needs. Consider these key factors:
Performance & Speed
- Ultra-fast inference: Groq (LPU technology), Together AI
- Best reasoning: Anthropic Claude, DeepSeek, OpenAI o1
- Balanced performance: OpenAI GPT-4, Google Gemini, Cohere
- Local speed: Ollama, LM Studio (no network latency)
Cost Considerations
- Free/Low-cost: Local models (Ollama, LM Studio), OpenRouter
- Budget-friendly: Together AI, HuggingFace, Hyperbolic
- Premium: Anthropic, OpenAI, Google (higher quality)
- Enterprise: AWS Bedrock, GitHub Models (included benefits)
Privacy & Security
- Maximum privacy: Local models (Ollama, LM Studio) - data never leaves your device
- Enterprise-grade: AWS Bedrock, Anthropic (SOC 2 compliant)
- Cloud security: OpenAI, Google, Cohere (encrypted transmission)
- Specialized: Perplexity (search integration with privacy considerations)
Model Capabilities
- Code generation: All providers support coding; specialized: Cohere, Together AI, GitHub
- Multimodal: Google Gemini, OpenAI GPT-4 Vision, Moonshot
- Long context: Claude (200K+), Gemini (1M+), GPT-4 (128K)
- Function calling: OpenAI, Anthropic, Google, Cohere
- Search integration: Perplexity (real-time web search)
- Multilingual: Cohere, Google, Moonshot (Chinese), Mistral
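Function calling, listed above for OpenAI, Anthropic, Google, and Cohere, works by sending the model a machine-readable tool schema alongside the conversation; the model can then respond with a structured call (tool name plus JSON arguments) instead of plain text. A minimal sketch of the OpenAI-style `tools` field (the `get_weather` tool and model name are hypothetical examples):

```python
import json

# Hypothetical tool the model may choose to invoke.
weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name"},
            },
            "required": ["city"],
        },
    },
}

# The schema rides along with the normal chat-completion request body.
request_body = {
    "model": "my-model",  # placeholder
    "messages": [{"role": "user", "content": "What's the weather in Oslo?"}],
    "tools": [weather_tool],
}
print(json.dumps(request_body, indent=2))
```

Providers that support this feature accept broadly similar schemas, though field names vary slightly between APIs.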
Use Case Optimization
- Rapid prototyping: Groq, Together AI (fast iteration)
- Production applications: Anthropic, OpenAI, AWS Bedrock
- Research & analysis: DeepSeek, Perplexity, Cohere
- Offline development: Ollama, LM Studio
- Enterprise integration: AWS Bedrock, GitHub Models
- Cost optimization: Hyperbolic, HuggingFace, OpenRouter
Quick Start
1
Choose Your Provider
Select from 19+ providers based on your needs: speed, cost, capabilities, or privacy requirements
2
Get API Credentials
For cloud providers: Sign up and get API keys. For local providers: Download and install the software
3
Configure in CodinIT
Add your credentials in CodinIT’s settings under AI Providers or use provider-specific setup prompts
4
Select Your Model
Choose from available models within your selected provider, considering context limits and capabilities
5
Start Building
Begin using AI assistance in your development workflow with the configured provider
Configuration Tips
API Key Security: Your API keys are stored locally and never transmitted to CodinIT servers. They are only used to
communicate directly with your chosen AI provider.
Provider Switching: Easily switch between providers mid-project. CodinIT maintains separate contexts for different
providers, allowing you to leverage specialized capabilities as needed.
Local vs Cloud: Local providers (Ollama, LM Studio) offer maximum privacy but require hardware resources. Cloud
providers offer convenience and advanced features but involve data transmission.
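Local runtimes narrow this trade-off: Ollama, for example, exposes an OpenAI-compatible endpoint (by default at `http://localhost:11434/v1`, and LM Studio runs a similar local server), so moving between cloud and local often means changing only the base URL. A sketch, assuming a running Ollama instance with a model already pulled:

```python
import json
import urllib.request

BASE_URL = "http://localhost:11434/v1"  # Ollama's default OpenAI-compatible endpoint

def ask_local(model: str, prompt: str) -> str:
    """Send a chat completion to a local model; no API key or internet needed."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    req = urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:  # data never leaves your machine
        return json.load(resp)["choices"][0]["message"]["content"]

# ask_local("llama3", "Explain this error message")  # requires a running Ollama
```

The hardware cost is real (local inference needs RAM and ideally a GPU), but the privacy property is absolute: the request above never touches the network beyond localhost.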
Next Steps
Model Configuration
Learn about context windows and model parameters
Compare Models
Compare different models and their capabilities
Run Models Locally
Set up local models for complete privacy
Prompt Engineering
Optimize your prompts for better results
Token Efficiency
Optimize costs and performance across providers
Integration Guides
Connect with databases, deployments, and APIs
Provider Ecosystem: With 19+ AI providers, you can choose the perfect model for every task - from rapid
prototyping to production deployment, from cost optimization to maximum privacy.
