Learn how to use AI smartly so you don’t run out of credits or money. Think of tokens like text messages - the more you send, the more it costs.

What Are Tokens?

CodinIT uses AI that runs on “tokens.” Tokens are small pieces of text that the AI reads and writes. Understanding token usage helps you optimize costs and stay within model limits.

How Tokens Get Used

Tokens are consumed in several ways:
  • System prompts: CodinIT’s built-in prompts (default, fine-tuned, or experimental) that guide AI behavior
  • Your messages: The questions and requests you send to the AI
  • AI responses: The code, explanations, and artifacts the AI generates
  • Project context: File contents, file changes, and running processes the AI reads
  • Chain of thought: The reasoning process shown in <codinitThinking> tags
  • Conversation history: Previous messages in the chat thread
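The categories above add up to the input context the model reads on every turn. A minimal sketch of that sum in TypeScript, using made-up token counts purely for illustration (not real CodinIT measurements):

```typescript
// Illustrative sketch of how per-turn input context adds up.
// All numbers are made-up example values, not real measurements.
function totalInputTokens(usage: Record<string, number>): number {
  return Object.values(usage).reduce((sum, n) => sum + n, 0);
}

const exampleTurn = {
  systemPrompt: 2_000,        // CodinIT's built-in prompt
  conversationHistory: 6_500, // previous messages in the thread
  projectContext: 12_000,     // file contents the AI reads
  yourMessage: 150,           // the request you just typed
};

console.log(totalInputTokens(exampleTurn));
```

Note that the AI's response, including any visible chain of thought, is billed on top of this as output tokens.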

What Affects Token Usage

  • Which AI model: Some models cost more per token (Claude vs GPT vs DeepSeek)
  • System prompt choice: Fine-tuned prompt uses more tokens than experimental
  • Project size: Bigger projects with more files use more context tokens
  • Answer length: Long explanations and code use more tokens than short ones
  • Chat length: Longer conversations accumulate more history tokens
  • Mode selection: Discussion mode may use fewer tokens (no code generation)
  • Chain of thought: Visible reasoning adds tokens but improves quality
  • File context: The number of files the AI reads to understand your project
Token Limits: Each AI model has a maximum amount of text it can handle at once (its context window). If a request goes over that limit, it may fail with an error or the response may be cut off.
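One way to stay under a limit is a quick pre-flight estimate. A common rule of thumb for English text is roughly 4 characters per token; the sketch below uses that heuristic together with an example limit (the 200,000 figure is illustrative, so check your model's documentation for the real number):

```typescript
// Rough pre-flight check against a model's context limit.
// The ~4-characters-per-token heuristic is approximate, and the
// 200,000-token limit is an example value, not a specific model's spec.
const CONTEXT_LIMIT = 200_000;

function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

function fitsInContext(chunks: string[], limit = CONTEXT_LIMIT): boolean {
  const total = chunks.reduce((sum, chunk) => sum + estimateTokens(chunk), 0);
  return total <= limit;
}
```

If the estimate is close to the limit, trim the conversation history or reference fewer files before sending the request.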

How to Save Tokens

Use Buttons Instead of Typing

CodinIT has buttons and menus that don’t use tokens:
  • Example prompts: Click suggested prompts instead of typing
  • File operations: Use the file tree to create/delete files
  • Terminal: Run commands yourself instead of asking the AI

Write Better Requests

Be specific and short:
❌ "Make this website look better"
✅ "Add a hero section with gradient background, centered heading, and button"
Give helpful details:
❌ "Fix the login page" (AI has to search everything)
✅ "Fix the password error on /login - it breaks when password is less than 8 characters"
Use numbered lists:
❌ "Add user authentication"
✅ "Add user authentication with: 1) Login form, 2) Registration form, 3) Password reset, 4) Protected routes"

Smart Ways to Save Tokens

Use Discussion Mode for Planning

When you just want to talk and plan (not write code), use discussion mode to save tokens:
  • Planning: Talk about features before building them (no code artifacts generated)
  • Getting advice: Ask which tools to use (plain English responses)
  • Code review: Discuss improvements without changing code
  • Learning: Ask questions without generating code
  • Architecture decisions: Get guidance on system design
Discussion mode uses a different system prompt that focuses on planning rather than code generation, which can reduce token usage while still providing valuable guidance. Use the “Implement this plan” button when ready to switch to build mode.

Plan Before You Build

Think first:
  • Write down what your app should do
  • Break big projects into small pieces
  • Think about problems you might face
  • Make a plan for what to build first
Build step by step:
  • Add one feature at a time
  • Test each piece before moving on
  • Use Git to save your progress
  • Build the main features first, fancy stuff later

Don’t Waste Tokens on Errors

When something breaks:
  • Don’t keep clicking “fix” over and over
  • Read the error message to understand what’s wrong
  • Use discussion mode to ask for help
  • Add error handling so it doesn’t break again
Prevent errors:
  • Add logging to see what’s happening
  • Show friendly error messages to users
  • Check user input before using it
  • Use try-catch to handle errors gracefully
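As a concrete sketch of those tips, here is a login-style handler in TypeScript that validates input first and wraps the risky step in try-catch. The function names and the 8-character rule are illustrative examples, not part of CodinIT:

```typescript
// Illustrative error-prevention sketch: check input before using it,
// handle failures with try/catch, log details, show a friendly message.
function validatePassword(password: string): string | null {
  // Check user input before using it
  return password.length < 8
    ? "Password must be at least 8 characters."
    : null;
}

function loginUser(password: string): string {
  const validationError = validatePassword(password);
  if (validationError) {
    return validationError; // friendly message instead of a crash
  }
  try {
    // ...real authentication logic would go here...
    return "Logged in";
  } catch (err) {
    console.error("Login failed:", err); // logging to see what's happening
    return "Something went wrong. Please try again."; // friendly fallback
  }
}
```

Catching the bad input up front means you never have to spend tokens asking the AI to fix the same crash twice.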

Keep Your Project Small

Organize your files:
  • Keep files under 500 lines
  • Split big files into smaller ones
  • Delete code you’re not using
  • Use simple, efficient code
Manage context:
  • Don’t include your whole project in every request
  • Reference specific files instead
  • Start a new chat when conversations get too long
  • Focus on one part of your app at a time
Discussion Mode: Use this when you want to talk and plan without writing code. It uses a different system prompt focused on guidance rather than code generation, which can save tokens.
Start New Chats: When conversations get long, start a new chat to reduce context tokens. CodinIT maintains your project files, so you won’t lose work.
Use Git: Save your work with Git instead of asking AI to undo things. It’s free and doesn’t use tokens!

Choosing the Right AI Model and Prompt

Pick the Right Model for the Job

CodinIT supports multiple AI providers with different token costs:
  • Cheaper models (GPT-3.5, DeepSeek) for simple tasks and quick questions
  • Mid-range models (GPT-4, Claude Sonnet) for most development work
  • Premium models (Claude Opus) for complex problems and important code
  • Check context limits - some models can’t handle huge projects
  • Balance cost and quality based on what you’re doing
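The cost trade-off is simple arithmetic: tokens used times the price per token. The sketch below uses made-up per-million-token prices for each tier; always check your provider's actual pricing page:

```typescript
// Back-of-the-envelope cost estimate. Prices are illustrative placeholders
// in dollars per million tokens, NOT real provider pricing.
function estimateCost(tokens: number, pricePerMillionTokens: number): number {
  return (tokens / 1_000_000) * pricePerMillionTokens;
}

const illustrativePrices = { budget: 0.5, midRange: 5, premium: 25 };

// Compare a 50,000-token session at each tier:
for (const [tier, price] of Object.entries(illustrativePrices)) {
  console.log(`${tier}: $${estimateCost(50_000, price).toFixed(2)}`);
}
```

Run the numbers for a typical session before committing to a premium model for routine work.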

Different AI Models

Claude (Anthropic):
  • Excellent at reasoning and complex code
  • Larger context windows (200K+ tokens)
  • Higher cost per token but better quality
  • Works well with CodinIT’s chain-of-thought prompting
  • Best for: Complex projects, refactoring, architecture
GPT (OpenAI):
  • Fast and cost-effective for many tasks
  • Good for iterative development
  • GPT-4 for harder problems, GPT-3.5 for simple tasks
  • Best for: Quick iterations, simple features, prototyping
DeepSeek:
  • Very cost-effective for code generation
  • Good code quality at lower cost
  • Best for: Budget-conscious development, learning
Other Models (Gemini, Groq, etc.):
  • Check provider-specific strengths
  • Consider regional availability
  • Compare pricing for your use case

System Prompt Selection

CodinIT offers three prompt variants that affect token usage:
  1. Default Prompt: Balanced approach with comprehensive guidelines
  2. Fine-Tuned Prompt: More detailed instructions, higher token usage, better results
  3. Experimental Prompt: Optimized for lower token usage (may sacrifice some quality)
Choose the experimental prompt if token efficiency is your top priority.

Advanced Tips

Focus Your Requests

Be specific about files:
  • Name the exact files you’re working on
  • Don’t ask about “the whole project”
  • Focus on one part at a time
Build gradually:
  • Start with the main features
  • Test each piece before adding more
  • Keep your code organized in small pieces

Watch Your Usage

Track what you use:
  • See which tasks use the most tokens
  • Notice patterns in what costs more
  • Change your approach based on what you learn
Work smarter:
  • Combine related requests into one
  • Do multiple things at once when possible
  • Plan ahead to avoid going back and forth
Be Aware: Understanding tokens helps you save money. Focus on asking good questions, not lots of questions.
Learn as You Go: The more you use AI, the better you’ll get at knowing which approach saves the most tokens.