What Are Tokens?
CodinIT uses AI that runs on “tokens.” Tokens are small pieces of text that the AI reads and writes. Understanding token usage helps you optimize costs and stay within model limits.
How Tokens Get Used
Tokens are consumed in several ways:
- System prompts: CodinIT’s built-in prompts (default, fine-tuned, or experimental) that guide AI behavior
- Your messages: The questions and requests you send to the AI
- AI responses: The code, explanations, and artifacts the AI generates
- Project context: File contents, file changes, and running processes the AI reads
- Chain of thought: The reasoning process shown in <codinitThinking> tags
- Conversation history: Previous messages in the chat thread
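As a rough mental model, you can approximate how many tokens a request will consume with the common ~4-characters-per-token rule of thumb for English text. Real tokenizers vary by model, so this sketch is only a budgeting estimate, and the sample strings are illustrative:

```python
# Rough token estimate using the common ~4 characters-per-token rule of thumb.
# Real tokenizers give exact counts per model; this is only an approximation
# for budgeting purposes.

def estimate_tokens(text: str) -> int:
    """Approximate token count: roughly one token per 4 characters of English text."""
    return max(1, len(text) // 4)

system_prompt = "You are a helpful coding assistant."  # 35 chars -> ~8 tokens
user_message = "Add a dark-mode toggle to settings.tsx"
history = ["Earlier message one", "Earlier message two"]

# A request pays for the system prompt, your message, and the history together.
total = (estimate_tokens(system_prompt)
         + estimate_tokens(user_message)
         + sum(estimate_tokens(m) for m in history))
print(f"Estimated request size: ~{total} tokens")
```

Notice that the conversation history is re-sent with every request, which is why long chats accumulate cost.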
What Affects Token Usage
- Which AI model: Some models cost more per token (Claude vs GPT vs DeepSeek)
- System prompt choice: Fine-tuned prompt uses more tokens than experimental
- Project size: Bigger projects with more files use more context tokens
- Answer length: Long explanations and code use more tokens than short ones
- Chat length: Longer conversations accumulate more history tokens
- Mode selection: Discussion mode may use fewer tokens (no code generation)
- Chain of thought: Visible reasoning adds tokens but improves quality
- File context: The AI reading multiple files to understand your project
Token Limits: Each AI model has a maximum amount of text it can handle at once, called its context window. If you exceed it, you may get errors.
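A minimal sketch of checking that a request fits a context window before sending it. The model names and limits below are placeholder assumptions, not real provider values; check your provider’s documentation for actual limits:

```python
# Sketch: check whether a request fits a model's context window before sending.
# The limits below are illustrative placeholders, not real provider values.

CONTEXT_LIMITS = {            # hypothetical example limits, in tokens
    "small-model": 16_000,
    "large-model": 200_000,
}

def fits_in_context(model: str, prompt_tokens: int, reserved_for_reply: int = 4_000) -> bool:
    """Leave headroom for the model's reply when checking against the limit."""
    limit = CONTEXT_LIMITS[model]
    return prompt_tokens + reserved_for_reply <= limit

print(fits_in_context("small-model", 10_000))  # True: 14,000 <= 16,000
print(fits_in_context("small-model", 13_000))  # False: 17,000 > 16,000
```

Reserving headroom for the reply matters: a prompt that barely fits leaves the model no room to answer.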
How to Save Tokens
Use Buttons Instead of Typing
CodinIT has buttons and menus that don’t use tokens:
- Example prompts: Click suggested prompts instead of typing
- File operations: Use the file tree to create/delete files
- Terminal: Run commands yourself instead of asking AI
Write Better Requests
Be specific and short: say exactly what you want, name the relevant files, and skip unnecessary background.
Smart Ways to Save Tokens
Use Discussion Mode for Planning
When you just want to talk and plan (not write code), use discussion mode to save tokens:
- Planning: Talk about features before building them (no code artifacts generated)
- Getting advice: Ask which tools to use (plain English responses)
- Code review: Discuss improvements without changing code
- Learning: Ask questions without generating code
- Architecture decisions: Get guidance on system design
Plan Before You Build
Think first:
- Write down what your app should do
- Break big projects into small pieces
- Think about problems you might face
- Make a plan for what to build first
- Add one feature at a time
- Test each piece before moving on
- Use Git to save your progress
- Build the main features first, fancy stuff later
Don’t Waste Tokens on Errors
When something breaks:
- Don’t keep clicking “fix” over and over
- Read the error message to understand what’s wrong
- Use discussion mode to ask for help
- Add error handling so it doesn’t break again
- Add logging to see what’s happening
- Show friendly error messages to users
- Check user input before using it
- Use try-catch to handle errors gracefully
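The checklist above can be sketched in a few lines: validate input before using it, handle failures with try/except, log the details for developers, and show a friendly message to users. The function names here are illustrative, not part of CodinIT:

```python
# Sketch of the error-handling checklist: validate input, use try/except,
# log details for debugging, and surface a friendly message to the user.
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("app")

def parse_quantity(raw: str) -> int:
    """Check user input before using it; raise a clear error otherwise."""
    if not raw.strip().isdigit():
        raise ValueError(f"Expected a whole number, got {raw!r}")
    return int(raw)

def handle_order(raw_quantity: str) -> str:
    try:
        quantity = parse_quantity(raw_quantity)
    except ValueError:
        logger.exception("Invalid quantity input")  # full detail for developers
        return "Please enter a whole number for the quantity."  # friendly user message
    return f"Ordered {quantity} item(s)."

print(handle_order("3"))      # Ordered 3 item(s).
print(handle_order("three"))  # Please enter a whole number for the quantity.
```

Building this in from the start means fewer “fix it” round-trips with the AI later.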
Keep Your Project Small
Organize your files:
- Keep files under 500 lines
- Split big files into smaller ones
- Delete code you’re not using
- Use simple, efficient code
- Don’t include your whole project in every request
- Reference specific files instead
- Start a new chat when conversations get too long
- Focus on one part of your app at a time
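A small script can flag files that exceed the ~500-line guideline so you know what to split. This is a generic sketch using only the standard library; adjust the glob pattern for your project’s languages:

```python
# Sketch: flag source files over the ~500-line guideline so you know what to split.
from pathlib import Path

MAX_LINES = 500

def oversized_files(root: str, pattern: str = "*.py") -> list[tuple[str, int]]:
    """Return (path, line_count) pairs for files exceeding MAX_LINES under root."""
    results = []
    for path in Path(root).rglob(pattern):
        line_count = sum(1 for _ in path.open(encoding="utf-8", errors="ignore"))
        if line_count > MAX_LINES:
            results.append((str(path), line_count))
    return sorted(results, key=lambda item: -item[1])  # biggest offenders first

for path, count in oversized_files("."):
    print(f"{path}: {count} lines - consider splitting")
```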
Discussion Mode: Use this when you want to talk and plan without writing code. It uses a different system prompt focused on guidance rather than code generation, which can save tokens.
Start New Chats: When conversations get long, start a new chat to reduce context tokens. CodinIT maintains your project files, so you won’t lose work.
Use Git: Save your work with Git instead of asking AI to undo things. It’s free and doesn’t use tokens!
Choosing the Right AI Model and Prompt
Pick the Right Model for the Job
CodinIT supports multiple AI providers with different token costs:
- Cheaper models (GPT-3.5, DeepSeek) for simple tasks and quick questions
- Mid-range models (GPT-4, Claude Sonnet) for most development work
- Premium models (Claude Opus) for complex problems and important code
- Check context limits - some models can’t handle huge projects
- Balance cost and quality based on what you’re doing
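Balancing cost and quality is easier with a concrete number. The sketch below estimates request cost from token counts; the model names and per-million-token prices are placeholder assumptions for illustration, so look up your provider’s current rates:

```python
# Sketch: estimate request cost from token counts. Prices below are placeholder
# assumptions (USD per 1M tokens), not real provider rates.

PRICE_PER_MILLION = {          # hypothetical rates for illustration only
    "budget-model":  {"input": 0.50,  "output": 1.50},
    "premium-model": {"input": 15.00, "output": 75.00},
}

def estimated_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Input and output tokens are usually billed at different rates."""
    rates = PRICE_PER_MILLION[model]
    return (input_tokens * rates["input"] + output_tokens * rates["output"]) / 1_000_000

# The same 50k-in / 5k-out request on both tiers:
print(f"budget:  ${estimated_cost('budget-model', 50_000, 5_000):.4f}")
print(f"premium: ${estimated_cost('premium-model', 50_000, 5_000):.4f}")
```

Even with made-up numbers, the shape of the calculation shows why routine tasks belong on cheaper models.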
Different AI Models
Claude (Anthropic):
- Excellent at reasoning and complex code
- Larger context windows (200K+ tokens)
- Higher cost per token but better quality
- Works well with CodinIT’s chain-of-thought prompting
- Best for: Complex projects, refactoring, architecture
GPT (OpenAI):
- Fast and cost-effective for many tasks
- Good for iterative development
- GPT-4 for harder problems, GPT-3.5 for simple tasks
- Best for: Quick iterations, simple features, prototyping
DeepSeek:
- Very cost-effective for code generation
- Good code quality at lower cost
- Best for: Budget-conscious development, learning
Other Providers:
- Check provider-specific strengths
- Consider regional availability
- Compare pricing for your use case
System Prompt Selection
CodinIT offers three prompt variants that affect token usage:
- Default Prompt: Balanced approach with comprehensive guidelines
- Fine-Tuned Prompt: More detailed instructions, higher token usage, better results
- Experimental Prompt: Optimized for lower token usage (may sacrifice some quality)
Advanced Tips
Focus Your Requests
Be specific about files:
- Name the exact files you’re working on
- Don’t ask about “the whole project”
- Focus on one part at a time
- Start with the main features
- Test each piece before adding more
- Keep your code organized in small pieces
Watch Your Usage
Track what you use:
- See which tasks use the most tokens
- Notice patterns in what costs more
- Change your approach based on what you learn
- Combine related requests into one
- Do multiple things at once when possible
- Plan ahead to avoid going back and forth
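Tracking usage can be as simple as tallying tokens per task type. The sketch below assumes you can read token counts from your provider’s API responses; the task names are illustrative:

```python
# Sketch: a minimal per-task token tracker to reveal which kinds of requests
# cost the most. Token counts would come from your provider's API responses;
# the task names here are illustrative.
from collections import defaultdict

usage_by_task: dict[str, int] = defaultdict(int)

def record(task_type: str, tokens: int) -> None:
    """Tally tokens against a task category."""
    usage_by_task[task_type] += tokens

record("code-generation", 4_200)
record("debugging", 1_100)
record("code-generation", 3_800)

# Report the most expensive task types first.
for task, tokens in sorted(usage_by_task.items(), key=lambda kv: -kv[1]):
    print(f"{task}: {tokens} tokens")
```

A week of numbers like these tells you where changing your approach will actually save money.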
Be Aware: Understanding tokens helps you save money. Focus on asking good questions, not lots of questions.
Learn as You Go: The more you use AI, the better you’ll get at knowing which approach saves the most tokens.
