Setup
- Install Ollama: Download from ollama.com and install
- Start Ollama: Run `ollama serve` in a terminal
- Download a model: Run `ollama pull <model-name>`
- Configure the context window (optional; see Dynamic Context Windows below)
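The setup steps above can be sketched as a short shell session. The model name is only an example; any model from the Recommended Models list below works the same way:

```shell
# 1. Start the Ollama server in one terminal (it runs in the foreground).
ollama serve

# 2. In a second terminal, download a model (example model name).
ollama pull qwen2.5-coder:32b

# 3. Verify the model is available locally.
ollama list
```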
Configuration in CodinIT
- Click the settings icon (⚙️) in CodinIT
- Select “ollama” as the API Provider
- Enter the name of the model you downloaded
- (Optional) Set the base URL if you are not using the default, `http://localhost:11434`
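Before pointing CodinIT at a base URL, you can confirm Ollama is reachable by probing its `/api/tags` endpoint (a standard Ollama API route that lists installed models). A quick diagnostic sketch:

```shell
# Probe the Ollama API at the default base URL
# (override with the OLLAMA_BASE_URL environment variable if needed).
BASE_URL="${OLLAMA_BASE_URL:-http://localhost:11434}"
if curl -fsS --max-time 2 "$BASE_URL/api/tags" >/dev/null 2>&1; then
  echo "Ollama reachable at $BASE_URL"
else
  echo "Ollama not reachable at $BASE_URL"
fi
```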
Recommended Models
- `qwen2.5-coder:32b`: Excellent for coding
- `codellama:34b-code`: High quality, large size
- `deepseek-coder:6.7b-base`: Effective for coding
- `llama3:8b-instruct-q5_1`: General tasks
Dynamic Context Windows
CodinIT automatically calculates optimal context windows based on model parameter size:
- 70B+ models: 32k context window (e.g., Llama 70B)
- 30B+ models: 16k context window
- 7B+ models: 8k context window
- Smaller models: 4k context window (default)
Model-specific overrides:
- Llama 70B models: 32k context
- Llama 405B models: 128k context
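The tiering above amounts to a simple threshold lookup. An illustrative sketch (the function name is hypothetical, not CodinIT's actual code; the Llama 405B special case of 128k is handled separately and not modeled here):

```shell
# Map a model's parameter count (in billions) to the context window tier.
ctx_window_for() {
  params_b=$1
  if   [ "$params_b" -ge 70 ]; then echo "32k"
  elif [ "$params_b" -ge 30 ]; then echo "16k"
  elif [ "$params_b" -ge 7  ]; then echo "8k"
  else                              echo "4k"
  fi
}

ctx_window_for 34   # 30B+ tier -> 16k
ctx_window_for 8    # 7B+ tier  -> 8k
```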
Notes
- Auto-detection: CodinIT automatically detects Ollama running on port 11434
- Context window: Dynamically calculated based on model capabilities
- Resource demands: Large models require significant system resources
- Offline capability: Works without internet after model download
- Performance: May be slow on average hardware
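If the dynamically calculated context window needs to be overridden for a specific request, Ollama's `/api/generate` endpoint accepts a `num_ctx` option (these are real Ollama API parameters; the model name and prompt below are just examples):

```shell
# Ask Ollama for a completion with an explicit 8k context window.
curl -s http://localhost:11434/api/generate -d '{
  "model": "qwen2.5-coder:32b",
  "prompt": "Write a hello-world in Python.",
  "stream": false,
  "options": { "num_ctx": 8192 }
}'
```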
