CodeContext AI Models
Understanding the AI powering your documentation
CodeContext leverages state-of-the-art language models to understand your code and generate documentation. Choose between speed and quality based on your needs and budget.
Available Models
CodeContext Basic AI™
Free Plan: fast, efficient, and great for most projects
CodeContext Advanced AI™
Pro Plan: superior understanding and output quality
Detailed Comparison
| Feature | CodeContext Basic AI™ | CodeContext Advanced AI™ |
|---|---|---|
| Speed | ~2 seconds | ~5 seconds |
| Quality | Good | Exceptional |
| Context Window | 16K tokens | 128K tokens |
| Code Understanding | Good | Excellent |
| Cost per Update | ~$0.01 | ~$0.10 |
| Best For | Most projects | Complex codebases |
CodeContext Basic AI™ (Free Plan)
Our default model produces solid documentation for most projects while keeping costs low.
Strengths
- Lightning fast generation
- Handles standard patterns well
- Cost-effective for frequent updates
- Great for small to medium projects
Limitations
- Less nuanced understanding
- May miss complex patterns
- Smaller context window
- Basic API documentation
Example Output Quality
## Installation

Install the package using npm:

```bash
npm install my-package
```

## Usage

Import and use the main function:

```javascript
const myPackage = require('my-package');
myPackage.doSomething();
```
CodeContext Advanced AI™ (Pro Plan)
The most advanced model available, providing unmatched documentation quality and understanding.
Strengths
- Deep code comprehension
- Understands complex architectures
- Superior writing quality
- Handles edge cases brilliantly
- 8x larger context window
Considerations
- Slightly slower generation
- Higher per-update cost
- Best ROI for complex projects
Example Output Quality
## 🚀 Installation

Install using your preferred package manager:

```bash
# npm
npm install my-package

# yarn
yarn add my-package

# pnpm
pnpm add my-package
```

## 📖 Usage

### Basic Example

```javascript
import { MyPackage } from 'my-package';

const instance = new MyPackage({
  apiKey: process.env.API_KEY,
  timeout: 5000
});

// Async/await pattern
const result = await instance.doSomething();
```

### Advanced Configuration

The package supports extensive configuration options...
Which Model Should You Use?
Use Basic AI When:
- Your project follows standard patterns
- You need frequent documentation updates
- Speed is more important than perfection
- You're working with smaller codebases
- Budget is a primary concern
Use CodeContext Advanced AI™ When:
- Your codebase is complex or unconventional
- Documentation quality is critical
- You need detailed API documentation
- You're working with large enterprise projects
- You want the absolute best results
CodeContext Privacy AI™ (Pro Exclusive)
100% offline processing for maximum security
Complete Privacy
- All processing happens on your local machine
- No data is sent to external servers
- Works completely offline
- SOC2 compliant architecture
Same Power, More Security
- Full Advanced AI capabilities offline
- Optimized for local hardware
- 2.3GB model download
- Switch modes anytime
✅ Model installed locally (2.3GB)
🔒 Privacy mode enabled - all processing now offline
How We Use These Models
1. Context Optimization
We carefully curate the context sent to each model, including relevant code snippets, file structure, and project metadata to maximize understanding while staying within token limits.
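The idea behind context curation can be sketched as a greedy packing problem: rank candidate snippets by relevance and fill the model's token window in order. This is an illustrative sketch only; `buildContext`, `estimateTokens`, and the relevance scores are assumptions, not CodeContext's actual implementation.

```javascript
// Rough token estimate: ~4 characters per token is a common heuristic
// for source code (an assumption, not an exact tokenizer).
function estimateTokens(text) {
  return Math.ceil(text.length / 4);
}

// Greedily pack the highest-relevance snippets into the token budget.
function buildContext(snippets, tokenBudget) {
  const ranked = [...snippets].sort((a, b) => b.relevance - a.relevance);
  const selected = [];
  let used = 0;
  for (const snippet of ranked) {
    const cost = estimateTokens(snippet.text);
    if (used + cost <= tokenBudget) {
      selected.push(snippet);
      used += cost;
    }
  }
  return selected;
}

const snippets = [
  { text: 'export function doSomething() { /* ... */ }', relevance: 0.9 },
  { text: '// unrelated helper\nfunction pad(s) { return s + " "; }', relevance: 0.2 },
];
// Basic AI's 16K-token window easily fits both snippets here;
// a tighter budget would drop the low-relevance one first.
const context = buildContext(snippets, 16000);
```

The same routine explains the practical difference between the two plans: with an 8x larger window, Advanced AI can keep far more of the ranked list before the budget runs out.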
2. Prompt Engineering
Our prompts are specifically crafted for each model's strengths, ensuring optimal documentation output regardless of which model you use.
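Model-specific prompting might look like the sketch below: a simpler instruction for the fast model, a more demanding one for the advanced model. The templates are hypothetical; the real prompts are internal to CodeContext.

```javascript
// Hypothetical per-model prompt templates (illustrative only).
const PROMPTS = {
  basic: (code) =>
    `Summarize this module and document its exported functions:\n\n${code}`,
  advanced: (code) =>
    `Analyze the architecture of this module, document its public API, ` +
    `note edge cases, and include usage examples:\n\n${code}`,
};

function buildPrompt(model, code) {
  const template = PROMPTS[model];
  if (!template) throw new Error(`Unknown model: ${model}`);
  return template(code);
}

const prompt = buildPrompt('advanced', 'export function doSomething() {}');
```

Keeping the prompt per model, rather than one shared prompt, lets each template play to that model's strengths without over-constraining the other.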
3. Output Processing
Generated documentation goes through post-processing to ensure consistent formatting, proper markdown syntax, and integration with your existing documentation.
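A minimal sketch of what such post-processing could involve, assuming a simple normalization pass (the actual pipeline is internal to CodeContext):

```javascript
// Normalize generated markdown: line endings, trailing whitespace,
// and blank-line runs, ending with exactly one newline.
function normalizeMarkdown(doc) {
  return doc
    .replace(/\r\n/g, '\n')        // normalize Windows line endings
    .replace(/[ \t]+$/gm, '')      // strip trailing whitespace per line
    .replace(/\n{3,}/g, '\n\n')    // collapse runs of blank lines
    .trim() + '\n';                // single trailing newline
}

const clean = normalizeMarkdown('# Title\r\n\n\n\nText  \n');
// clean is now '# Title\n\nText\n'
```

Normalizing here keeps diffs small when documentation is regenerated, which matters for the frequent-update workflow the Basic plan targets.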
Coming Soon
Future Model Support
We're constantly evaluating new models to provide you with the best options:
- CodeContext Advanced AI™ upgrades
- Custom fine-tuned models for documentation
- Multi-model ensemble for best results