Simple, Transparent Pricing

Pay for the platform. Pay only for what you use with AI.

Professional Plan

$200
per user / month

API costs for AI are billed separately, based on actual usage. You provide your own API keys for complete transparency and control.

Understanding AI Costs & How It Works

What are Large Language Models (LLMs)?

Large Language Models are advanced AI systems trained on vast amounts of text to understand and generate human-like language. They power modern AI assistants and can draft documents, analyze text, and respond to complex questions.

Who provides LLMs?

Several companies provide LLM services:

  • Anthropic - Creator of Claude, known for safety and accuracy. AutoDrafter uses Claude models.
  • OpenAI - Creator of GPT-4 and ChatGPT
  • Google - Creator of Gemini

Why does AutoDrafter use Claude?

Claude models consistently outperform competitors in legal reasoning, nuanced analysis, and following complex instructions. Claude is also designed with strong safety guardrails, reducing hallucinations and improving reliability for professional use.

What's the difference between web chat and API access?

Web Chat (Consumer Products)

Services like ChatGPT Plus or Claude Pro are consumer subscriptions ($20/month) designed for casual use. They include:

  • Usage limits that throttle heavy users
  • Restricted context windows to control costs
  • Downgraded model quality when you exceed limits
  • No integration with your documents or systems

API Access (Professional)

API access is how professional applications like AutoDrafter connect to AI models. Benefits include:

  • No artificial usage limits
  • Full context windows for complete document analysis
  • Consistent quality regardless of usage
  • Integration with your matter documents, knowledge base, and memory
  • Pay only for exactly what you use
The hidden cost of consumer plans: Platforms that seem "affordable" actually restrict what you can do. Long chats get cut off. Heavy usage gets throttled. You get a degraded experience precisely when you need quality most - on complex legal documents.
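To make the distinction concrete, here is a minimal sketch of what a direct, bring-your-own-key API call looks like. This is illustrative Python using the anthropic SDK, not AutoDrafter's actual code; the model name and prompt are placeholders. Note that the response reports exactly how many input and output tokens you were billed for.

```python
# Illustration of bring-your-own-key API access -- not AutoDrafter's code.
from anthropic import Anthropic

client = Anthropic(api_key="sk-ant-...")  # your own key; usage is billed to you directly

message = client.messages.create(
    model="claude-sonnet-4-5",  # placeholder model name
    max_tokens=4096,            # cap on output length for this request
    messages=[{
        "role": "user",
        "content": "Draft a short meet-and-confer letter regarding overdue discovery responses.",
    }],
)

print(message.content[0].text)       # the drafted text
print(message.usage.input_tokens,    # exactly what you were billed for
      message.usage.output_tokens)
```

AutoDrafter makes calls like this on your behalf; the point is that the key, and therefore the bill, stays yours.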
What are tokens and how are they priced?

Understanding Tokens

Tokens are the units AI models use to process text. One token is roughly 4 characters or 0.75 words. A 1,000-word document is approximately 1,300 tokens.

Input vs Output Tokens

  • Input tokens: The text you send to the AI (your prompt, matter memory, knowledge base content, document context)
  • Output tokens: The text the AI generates (your drafted document)

Current Pricing (Anthropic Claude)

  • Claude Sonnet 4.5 (Balanced): $3 per million input tokens / $15 per million output tokens
  • Claude Opus 4.5 (Most Capable): $15 per million input tokens / $75 per million output tokens
In practical terms: A typical motion to compel with 10,000 words of input context and 3,000 words of output costs approximately $0.15-$0.50 with Sonnet, or $0.75-$2.50 with Opus. Even intensive daily use rarely exceeds $50-100/month in API costs.
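If you want to check the arithmetic behind estimates like these, here is a minimal sketch in illustrative Python (not AutoDrafter's billing code), using the word-to-token rule of thumb and the per-million-token rates listed above. A single request over bare document text lands at the low end; the ranges quoted above also reflect the matter memory, knowledge base content, and document context that accompany a real drafting request.

```python
# Illustrative cost arithmetic only -- not AutoDrafter's billing code.
WORDS_PER_TOKEN = 0.75  # rule of thumb: one token is roughly 0.75 words

# Per-million-token rates (USD) from the list above.
RATES = {
    "sonnet": {"input": 3.00, "output": 15.00},
    "opus":   {"input": 15.00, "output": 75.00},
}

def estimate_cost(input_words: int, output_words: int, model: str = "sonnet") -> float:
    """Estimate the API cost of a single request, in dollars."""
    input_tokens = input_words / WORDS_PER_TOKEN
    output_tokens = output_words / WORDS_PER_TOKEN
    rate = RATES[model]
    return (input_tokens * rate["input"] + output_tokens * rate["output"]) / 1_000_000

# 10,000 words of input context producing a 3,000-word draft:
print(f"Sonnet: ${estimate_cost(10_000, 3_000, 'sonnet'):.2f}")  # ~$0.10
print(f"Opus:   ${estimate_cost(10_000, 3_000, 'opus'):.2f}")    # ~$0.50
```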
What does it cost to draft typical legal documents?

Costs vary based on document complexity, context needed, and model selected. Here are representative examples using Claude Sonnet 4.5:

  • Simple correspondence (minimal context): $0.02-$0.05
  • Discovery requests (moderate context): $0.10-$0.25
  • Motion to compel (high context): $0.15-$0.50
  • Summary judgment brief (very high context): $0.50-$2.00
  • Complex appellate brief (maximum context): $1.00-$5.00

For complex work requiring Opus 4.5 (the most capable model), multiply by approximately 5x. Even then, costs remain a fraction of the time saved.

The value proposition: A $2 document that takes 10 minutes to refine vs. 2 hours of manual drafting. At $300/hour billing rates, that's $600 of time saved for $2 in AI costs.
Why use your own API key instead of a flat subscription?

The Problem with Flat-Fee AI Platforms

Platforms that charge a flat monthly fee must limit your usage to stay profitable. They do this by:

  • Restricting context windows (less case material considered)
  • Throttling after heavy use
  • Using cheaper, less capable models
  • Limiting document length

Bring Your Own Key (BYOK) Advantages

  • No hidden limits: Use as much as you need, when you need it
  • Full quality: Always get the best models at full capability
  • Complete transparency: See exactly what each document costs
  • Costs only go down: As AI improves and prices drop, you benefit immediately
  • No vendor lock-in: Your API key, your choice
Real-world example: A "flat $99/month" competitor throttles users after ~50 documents to stay profitable. With BYOK, 50 documents might cost $10-25 in API fees. You pay less AND get unlimited high-quality usage.
How does model selection protect my investment?

No Vendor Lock-In

With AutoDrafter, you're not locked into any single AI provider. Your knowledge base, matter memory, document styles, and all customizations are stored independently of the AI models.

What This Means for You

  • Switch models freely: Use Sonnet for routine work, Opus for complex briefs
  • Future-proof: When better models launch, upgrade instantly
  • Costs decrease: AI costs drop 20-40% annually. Your platform subscription stays fixed, but API costs go down.
  • Quality improves: Better models mean better documents with no additional work

Your Customizations Stay With You

Unlike general AI chatbots where customizations vanish when you switch providers:

  • Your knowledge base works with any model
  • Matter memory persists across model changes
  • Document styles and formatting are model-independent
  • The platform gets smarter over time - that intelligence transfers to new models
The compounding advantage: Every document you draft, every piece of knowledge you upload, every matter you configure makes the platform more valuable. That value is yours permanently, regardless of which AI model you choose to use.
Why not just use ChatGPT or Claude directly?

The Limitations of General AI Chat

General AI assistants are powerful but lack critical capabilities for legal work:

  • No knowledge control: You can't force the AI to use your templates, treatises, or firm standards
  • Chat degradation: Long conversations lose quality as context fills up
  • No persistent memory: Case facts disappear when the chat ends
  • No formatting: Output is plain text requiring hours of manual reformatting
  • No document processing: Can't search across case documents semantically

What AutoDrafter Adds

  • Force AI to use specific knowledge for each document
  • Persistent, editable matter memory that never degrades
  • Vectorized document search across all case materials
  • Court-ready formatting with captions and certificates
  • Style templates for consistent output
The key insight: General LLMs, no matter how good, cannot provide this level of customization on their own. And even if they could, your customizations would vanish the moment you want to try a different model. AutoDrafter is infrastructure that makes any LLM work the way you need.

Why AutoDrafter Delivers Unique Value

Platform Gets Smarter Over Time

Every piece of knowledge you add, every matter you configure, every document you draft makes the system more valuable. This accumulated intelligence is yours permanently.

No Other Platform Can Match This

General LLMs can't provide forced knowledge injection, persistent memory, or court-ready formatting. AutoDrafter fills the gap between raw AI capability and professional legal requirements.

Costs Only Decrease

AI API costs drop 20-40% annually while capabilities improve. With BYOK, you benefit immediately from every price reduction. Flat-fee platforms pocket the savings.

Customizations Never Vanish

With general AI chats, switching models means losing everything. With AutoDrafter, your knowledge base, memory, and styles transfer instantly to any new model.