Tags: cost-optimization · prompt-caching · model-routing · python · api-costs · llm-ops

Prompt Caching + Model Routing: How to Cut Your AI API Bill by 90% (With Code)

Two techniques that stack together to slash your AI API costs. Complete Python implementations for prompt caching (Anthropic, OpenAI, DeepSeek) and intelligent model routing with a Cost-per-Success framework.

11 min read · By TokenTab

Prompt Caching + Model Routing: How to Cut Your AI API Bill by 90%

Most teams overpay for AI by 5-10x. Not because they picked the wrong model — because they use the same expensive model for everything and re-send the same context on every request.

Prompt caching eliminates redundant input costs. Model routing sends cheap queries to cheap models. Stack them together and your bill drops by 80-90%. This guide has working code for both.

Maximum savings: 90%, from combining prompt caching + model routing + batch API.

| Technique | How It Works | Typical Savings |
|---|---|---|
| Prompt caching | Reuse cached system prompts instead of re-processing tokens | 50-90% on input tokens |
| Model routing | Send simple queries to cheap models, hard queries to frontier | 60-70% on total spend |

Part 1: Prompt Caching Deep Dive#

How It Works

Without caching, every API call re-processes your full system prompt. 2,000-token prompt times 10,000 requests/day = 20M input tokens processed from scratch.

With caching, the provider stores the processed prompt. Subsequent requests hit the cache at a fraction of the cost:

Without caching:
  Request 1: [System: 2000 tok] + [User: 200 tok] → 2200 input tokens billed
  Request 2: [System: 2000 tok] + [User: 150 tok] → 2150 input tokens billed
  Total: 4,350 tokens at full price

With caching:
  Request 1: [System: 2000 tok → WRITE CACHE] + [User: 200 tok] → 2200 at full price
  Request 2: [System: CACHE HIT] + [User: 150 tok] → 150 full-price + 2000 cached (90% off)
  Total: 2,350 full-price + 2,000 cached tokens
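The same arithmetic can be sketched in a few lines of Python (the prices are illustrative, roughly Claude Sonnet's $3.00/M full-price and $0.30/M cached input; this sketch bills cache writes at full price, whereas Anthropic actually adds a 25% write premium):

```python
# Illustrative input prices: $3.00/M full-price, $0.30/M cached.
FULL = 3.00 / 1_000_000
CACHED = 0.30 / 1_000_000

def input_cost(system_tokens: int, user_tokens: list[int], caching: bool) -> float:
    """Input cost across sequential requests sharing one system prompt."""
    cost = 0.0
    for i, user in enumerate(user_tokens):
        cost += user * FULL                           # user tokens: always full price
        rate = CACHED if caching and i > 0 else FULL  # first call writes the cache
        cost += system_tokens * rate
    return cost

print(input_cost(2000, [200, 150], caching=False))  # 4,350 tokens at full price
print(input_cost(2000, [200, 150], caching=True))   # 2,350 full-price + 2,000 cached
```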

Provider Comparison

Frontier Model Pricing (Before Caching)

| Model | Provider | Input $/1M | Output $/1M | Cached $/1M | Context |
|---|---|---|---|---|---|
| gpt-5.4 | OpenAI | $2.50 | $15.00 | $0.250 | 1.1M |
| gpt-5 | OpenAI | $1.25 | $10.00 | $0.125 | 272K |
| claude-opus-4-6 | Anthropic | $5.00 | $25.00 | $0.500 | 1M |
| claude-sonnet-4-6 | Anthropic | $3.00 | $15.00 | $0.300 | 200K |
| gemini-3.1-pro-preview | Google | $2.00 | $12.00 | $0.200 | 1.0M |
| gemini-2.5-pro-preview-05-06 | Google | $1.25 | $10.00 | $0.125 | 1.0M |
| deepseek-chat | DeepSeek | $0.280 | $0.420 | $0.028 | 131.1K |

Live pricing from TokenTab database. Prices may change — last synced from provider APIs.

| Provider | Cache Discount | TTL | Activation |
|---|---|---|---|
| Anthropic | 90% off input | 5 min (ephemeral) | Manual — cache_control param |
| OpenAI | 50% off input | Automatic | Automatic — no code changes |
| Google | 90% off input | Configurable | Manual — cached_content API |
| DeepSeek | 90% off input | Automatic | Automatic — prefix matching |

Anthropic Implementation

Anthropic gives the biggest discount (90%) but requires explicit cache markers. The 5-minute TTL resets on each hit — perfect for high-traffic apps.

import anthropic

client = anthropic.Anthropic()

SYSTEM_PROMPT = """You are a senior code reviewer for a Python codebase.
Review code for: security vulnerabilities, performance issues,
readability problems, and adherence to PEP 8.
Always provide specific line references and suggested fixes.
Rate severity as: critical, warning, or info.
... (imagine 1500+ tokens of detailed instructions here)
"""

def review_code(code_snippet: str) -> str:
    response = client.messages.create(
        model="claude-sonnet-4-6",
        max_tokens=1024,
        system=[{
            "type": "text",
            "text": SYSTEM_PROMPT,
            "cache_control": {"type": "ephemeral"}  # enables caching
        }],
        messages=[
            {"role": "user", "content": f"Review this code:\n```python\n{code_snippet}\n```"}
        ]
    )
    usage = response.usage
    print(f"Input: {usage.input_tokens} | Cache read: {usage.cache_read_input_tokens} | Cache write: {usage.cache_creation_input_tokens}")
    return response.content[0].text

# First call: cache write
result = review_code("def add(a, b): return a + b")
# Input: 200 | Cache read: 0 | Cache write: 1500

# Second call within 5 min: cache hit — 90% cheaper on cached tokens
result = review_code("def multiply(x, y): return x * y")
# Input: 200 | Cache read: 1500 | Cache write: 0
💡

Anthropic Cache TTL Reset

Every cache hit resets the 5-minute TTL. If your app handles even 1 request per 5 minutes, the cache stays warm indefinitely. For batch processing, sort requests to maximize cache hits within the TTL window.
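For the batch-processing case, that sorting step can be sketched in a few lines (the request shape `{"system": ..., "user": ...}` is a hypothetical stand-in for however you queue work):

```python
def order_for_cache(requests: list[dict]) -> list[dict]:
    """Sort pending requests so identical system prompts run back-to-back,
    clustering cache hits inside the 5-minute TTL window."""
    return sorted(requests, key=lambda r: r["system"])

batch = [
    {"system": "reviewer", "user": "q1"},
    {"system": "summarizer", "user": "q2"},
    {"system": "reviewer", "user": "q3"},
]
ordered = order_for_cache(batch)
# Both "reviewer" requests now run consecutively: one cache write, then hits.
```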

OpenAI Implementation (Automatic)

OpenAI caches automatically for prompts over 1,024 tokens. No code changes — just verify:

from openai import OpenAI
client = OpenAI()

def query_openai(user_message: str) -> str:
    response = client.chat.completions.create(
        model="gpt-5",
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user_message}
        ]
    )
    cached = getattr(response.usage, "prompt_tokens_details", None)
    if cached:
        print(f"Cached tokens: {cached.cached_tokens}")  # > 0 = cache hit
    return response.choices[0].message.content

DeepSeek (Automatic Prefix Caching)

DeepSeek gives 90% off with automatic prefix-based caching via its disk-based system. Keep your system prompt consistent — DeepSeek handles the rest:

client = OpenAI(api_key="your-deepseek-key", base_url="https://api.deepseek.com")

def query_deepseek(user_message: str) -> str:
    response = client.chat.completions.create(
        model="deepseek-chat",
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user_message}
        ]
    )
    print(f"Cache hit tokens: {getattr(response.usage, 'prompt_cache_hit_tokens', 0)}")
    return response.choices[0].message.content

Real Savings Calculation

💰

Prompt Caching Savings Math

Scenario: 10,000 requests/day, 2,000-token system prompt, 200 avg user tokens, 500 avg output tokens.

Without caching (Claude Sonnet): 22M input tokens/day × $3/M = $66/day.

With caching (95% hit rate): 3M full-price tokens (2M user + 1M cache-missed system) × $3/M + 19M cached at $0.30/M = $14.70/day, ignoring Anthropic's 25% cache-write surcharge on misses.

Savings: $51.30/day ≈ $1,539/month (78% reduction on input costs).
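That arithmetic generalizes to a small estimator. Note that it bills cache-missed system prompts at full price, and still ignores Anthropic's 25% cache-write surcharge:

```python
def daily_input_cost(requests: int, system_tok: int, user_tok: int,
                     price_per_m: float, cached_price_per_m: float,
                     hit_rate: float = 0.0) -> float:
    """Daily input cost in dollars for one shared system prompt."""
    # User tokens plus cache-missed system prompts are billed at full price.
    full_price_tokens = requests * user_tok + requests * (1 - hit_rate) * system_tok
    cached_tokens = requests * hit_rate * system_tok
    return (full_price_tokens * price_per_m
            + cached_tokens * cached_price_per_m) / 1_000_000

baseline = daily_input_cost(10_000, 2000, 200, 3.00, 0.30)                 # $66.00/day
cached = daily_input_cost(10_000, 2000, 200, 3.00, 0.30, hit_rate=0.95)    # ≈ $14.70/day
print(f"${baseline:.2f} -> ${cached:.2f} per day")
```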

TokenTab calculator (claude-sonnet-4-6): standard pricing $4,050.00/mo vs $2,430.00/mo with caching. Save $1,620.00/mo ($19,440.00/yr), a 40% cut in total spend (smaller than the input-only figure, because output tokens are never cached).


Part 2: Model Routing Deep Dive#

Why Most Queries Don't Need Frontier Models

70% of typical AI API traffic is simple tasks — classification, extraction, reformatting, basic Q&A. Sending these to GPT-5 or Claude Opus is like hiring a PhD to sort mail.

70% of API traffic can be handled by smaller, cheaper models.

Price Spread: Frontier vs Lightweight Models

| Model | Provider | Input $/1M | Output $/1M | Cached $/1M | Context |
|---|---|---|---|---|---|
| claude-opus-4-6 | Anthropic | $5.00 | $25.00 | $0.500 | 1M |
| gpt-5.4 | OpenAI | $2.50 | $15.00 | $0.250 | 1.1M |
| claude-sonnet-4-6 | Anthropic | $3.00 | $15.00 | $0.300 | 200K |
| gemini-3.1-pro-preview | Google | $2.00 | $12.00 | $0.200 | 1.0M |
| gpt-5 | OpenAI | $1.25 | $10.00 | $0.125 | 272K |
| claude-haiku-4-5-20251001 | Anthropic | $1.00 | $5.00 | $0.100 | 200K |
| gpt-5-mini | OpenAI | $0.250 | $2.00 | $0.025 | 272K |
| gpt-5-nano | OpenAI | $0.050 | $0.400 | $0.0050 | 272K |
| deepseek-chat | DeepSeek | $0.280 | $0.420 | $0.028 | 131.1K |
| grok-4-1-fast | xAI | $0.200 | $0.500 | $0.050 | 2M |

Live pricing from TokenTab database. Prices may change — last synced from provider APIs.

Build a Model Router

This router classifies query complexity and sends each request to the right tier:

import anthropic
from openai import OpenAI
from dataclasses import dataclass
from enum import Enum

class Tier(Enum):
    NANO = "nano"        # Classification, extraction
    MID = "mid"          # Summarization, Q&A
    FRONTIER = "frontier" # Reasoning, code gen, analysis

@dataclass
class ModelConfig:
    provider: str
    model: str
    cost_per_1k_input: float
    cost_per_1k_output: float

MODEL_TIERS: dict[Tier, ModelConfig] = {
    # Costs are $/1K tokens, consistent with the pricing table above
    Tier.NANO: ModelConfig("openai", "gpt-5-nano", 0.00005, 0.00040),
    Tier.MID: ModelConfig("deepseek", "deepseek-chat", 0.00028, 0.00042),
    Tier.FRONTIER: ModelConfig("anthropic", "claude-sonnet-4-6", 0.003, 0.015),
}

COMPLEXITY_KEYWORDS = {
    "high": ["analyze", "compare", "debug", "refactor", "architect",
             "design", "optimize", "explain why", "trade-off", "reason"],
    "low": ["classify", "extract", "format", "convert", "translate",
            "summarize briefly", "yes or no", "list the", "parse"],
}

def classify_complexity(query: str) -> Tier:
    query_lower = query.lower()
    word_count = len(query.split())
    high = sum(1 for kw in COMPLEXITY_KEYWORDS["high"] if kw in query_lower)
    low = sum(1 for kw in COMPLEXITY_KEYWORDS["low"] if kw in query_lower)

    if high >= 2 or (word_count > 200 and high >= 1):
        return Tier.FRONTIER
    if low >= 1 and word_count < 50:
        return Tier.NANO
    return Tier.MID

# Provider clients
clients = {
    "anthropic": anthropic.Anthropic(),
    "openai": OpenAI(),
    "deepseek": OpenAI(api_key="deepseek-key", base_url="https://api.deepseek.com"),
}

def route_and_query(query: str, system_prompt: str = "") -> dict:
    tier = classify_complexity(query)
    config = MODEL_TIERS[tier]

    if config.provider == "anthropic":
        resp = clients["anthropic"].messages.create(
            model=config.model, max_tokens=1024,
            system=[{"type": "text", "text": system_prompt,
                     "cache_control": {"type": "ephemeral"}}] if system_prompt else [],
            messages=[{"role": "user", "content": query}]
        )
        text, inp, out = resp.content[0].text, resp.usage.input_tokens, resp.usage.output_tokens
    else:
        resp = clients[config.provider].chat.completions.create(
            model=config.model,
            messages=[*([{"role": "system", "content": system_prompt}] if system_prompt else []),
                      {"role": "user", "content": query}]
        )
        text, inp, out = resp.choices[0].message.content, resp.usage.prompt_tokens, resp.usage.completion_tokens

    cost = inp / 1000 * config.cost_per_1k_input + out / 1000 * config.cost_per_1k_output
    return {"tier": tier.value, "model": config.model, "response": text, "cost": cost}

# Simple extraction → nano
result = route_and_query("Extract all email addresses from this text: ...")
# Routed to: nano (gpt-5-nano)

# Complex reasoning → frontier
result = route_and_query("Analyze the trade-offs between microservices and monolith and design an architecture...")
# Routed to: frontier (claude-sonnet-4-6) — Cost: $0.018500

The Cost-per-Success Framework

Raw cost-per-token is misleading. A cheap model that fails 40% of the time costs more than an expensive one that always succeeds. Use Cost-per-Success (CPS):

CPS = total_cost / successful_outputs

from dataclasses import dataclass, field

@dataclass
class CostPerSuccessTracker:
    results: dict = field(default_factory=lambda: {
        "nano": {"cost": 0.0, "success": 0, "total": 0},
        "mid": {"cost": 0.0, "success": 0, "total": 0},
        "frontier": {"cost": 0.0, "success": 0, "total": 0},
    })

    def record(self, tier: str, cost: float, success: bool):
        self.results[tier]["cost"] += cost
        self.results[tier]["total"] += 1
        if success:
            self.results[tier]["success"] += 1

    def cps(self, tier: str) -> float:
        r = self.results[tier]
        return r["cost"] / r["success"] if r["success"] > 0 else float("inf")

    def report(self):
        for tier, r in self.results.items():
            rate = r["success"] / r["total"] * 100 if r["total"] else 0
            print(f"{tier:<10} {r['total']:>5} reqs | {rate:.0f}% success | CPS: ${self.cps(tier):.6f}")

After running 1,000 mixed queries:

| Tier | Queries | Success Rate | Total Cost | CPS |
|---|---|---|---|---|
| Nano | 450 | 94% | $0.018 | $0.000043 |
| Mid | 380 | 97% | $0.095 | $0.000258 |
| Frontier | 170 | 99% | $2.856 | $0.016941 |
| All frontier (no routing) | 1,000 | 99% | $16.80 | $0.016970 |
💰

Routing Savings

Routed total: $2.97. All-frontier total: $16.80. Savings: 82%. The nano tier's CPS is 394x cheaper than frontier — for simple tasks, cheap models are efficient enough.
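You can also use CPS to pick tiers programmatically. A minimal sketch, using the stats shape from CostPerSuccessTracker and the table's numbers: choose the cheapest cost-per-success among tiers whose success rate clears a floor.

```python
def pick_tier(stats: dict[str, dict], min_success_rate: float = 0.90) -> str:
    """Lowest cost-per-success among tiers clearing the success-rate floor.
    `stats` uses the same shape as CostPerSuccessTracker.results."""
    eligible = {
        tier: r["cost"] / r["success"]
        for tier, r in stats.items()
        if r["total"] and r["success"] and r["success"] / r["total"] >= min_success_rate
    }
    if not eligible:
        raise ValueError("no tier meets the success-rate floor")
    return min(eligible, key=eligible.get)

stats = {
    "nano": {"cost": 0.018, "success": 423, "total": 450},
    "mid": {"cost": 0.095, "success": 369, "total": 380},
    "frontier": {"cost": 2.856, "success": 168, "total": 170},
}
print(pick_tier(stats))                          # nano wins at a 90% floor
print(pick_tier(stats, min_success_rate=0.95))   # nano's 94% is excluded; mid wins
```

Raising the floor trades cost for reliability: at 95%, nano drops out and the router falls back to mid.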

Model Routing: Monthly Cost by Model

Same workload priced across models: 500 input tokens and 300 output tokens per request, 1,000 req/day (30,000 requests/mo).

| Model | Cost/mo |
|---|---|
| gpt-5-nano | $4.35 |
| grok-4-1-fast | $7.50 |
| deepseek-chat | $7.98 |
| gpt-5-mini | $21.75 |
| gemini-3.1-pro-preview | $138.00 |
| gpt-5.4 | $172.50 |
| claude-sonnet-4-6 | $180.00 |
| claude-opus-4-6 | $300.00 |

Cheapest: gpt-5-nano saves $295.65/mo vs claude-opus-4-6

Open in Calculator →

Part 3: Stacking Both Techniques#

Routing alone saves 70%. Caching alone saves 75%. Together they compound:

| Optimization | Monthly Cost | Savings |
|---|---|---|
| Baseline (all frontier, no caching) | $5,040 | |
| + Prompt caching only | $1,260 | 75% |
| + Model routing only | $1,512 | 70% |
| + Both combined | $504 | 90% |

Monthly savings: $4,536/mo from caching + routing on 10K requests/day.

The implementation is straightforward — use the router from Part 2 and add cache_control to every Anthropic call (already shown in route_and_query above). OpenAI and DeepSeek cache automatically.
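To gut-check the compounding before building anything, a naive independence estimate multiplies the residual costs. This is a sketch, not a guarantee: it treats both discounts as applying to the whole bill, so it slightly overshoots the measured 90% (caching discounts input tokens only).

```python
def combined_savings(routing: float, caching: float) -> float:
    """Naive upper bound: assume each discount applies independently
    to the entire bill."""
    return 1 - (1 - routing) * (1 - caching)

print(f"{combined_savings(0.70, 0.75):.1%}")  # 92.5% upper bound vs 90% measured
```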


Part 4: Batch API for Offline Work#

Not everything needs real-time responses. Batch APIs give 50% off for async processing:

from openai import OpenAI
import json

client = OpenAI()

def submit_batch(queries: list[str], system_prompt: str) -> str:
    # Build JSONL batch file
    requests = [
        {"custom_id": f"req-{i}", "method": "POST", "url": "/v1/chat/completions",
         "body": {"model": "gpt-5-mini", "max_tokens": 512,
                  "messages": [{"role": "system", "content": system_prompt},
                               {"role": "user", "content": q}]}}
        for i, q in enumerate(queries)
    ]
    with open("/tmp/batch.jsonl", "w") as f:
        for r in requests:
            f.write(json.dumps(r) + "\n")

    with open("/tmp/batch.jsonl", "rb") as fh:
        batch_file = client.files.create(file=fh, purpose="batch")
    job = client.batches.create(
        input_file_id=batch_file.id,
        endpoint="/v1/chat/completions",
        completion_window="24h"
    )
    print(f"Batch {job.id} submitted — 50% cheaper, results within 24h")
    return job.id

def get_results(batch_id: str) -> list[dict] | None:
    batch = client.batches.retrieve(batch_id)
    if batch.status == "completed":
        content = client.files.content(batch.output_file_id)
        return [json.loads(line) for line in content.text.strip().split("\n")]
    print(f"Status: {batch.status}")
    return None
💡

When to Use Batch API

Batch is ideal for bulk content generation, dataset labeling, nightly reports, and embedding generation — any workload where you can wait up to 24 hours. At 50% off, it stacks with routing for even deeper savings.
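Since get_results returns None while the job is in flight, callers need a polling loop. A generic sketch (the injectable sleep parameter exists only so the loop is testable without waiting):

```python
import time

def poll_until_done(fetch, interval_s: float = 60.0,
                    max_attempts: int = 1440, sleep=time.sleep):
    """Call fetch() (e.g. a wrapper around get_results) until it returns a
    non-None result. Defaults cover a 24h window at one check per minute."""
    for _ in range(max_attempts):
        result = fetch()
        if result is not None:
            return result
        sleep(interval_s)
    return None  # window expired without completion
```

Usage: `poll_until_done(lambda: get_results(batch_id))` blocks until the batch completes or the attempt budget runs out.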


Part 5: Cost Tracking — Prove Your Savings#

import json
from datetime import datetime
from collections import defaultdict

class CostTracker:
    def __init__(self):
        self.records = defaultdict(lambda: {
            "requests": 0, "cost": 0.0, "cache_savings": 0.0, "routing_savings": 0.0
        })

    def record(self, model: str, cost: float, cache_savings: float = 0, routing_savings: float = 0):
        key = f"{datetime.now():%Y-%m-%d}:{model}"
        self.records[key]["requests"] += 1
        self.records[key]["cost"] += cost
        self.records[key]["cache_savings"] += cache_savings
        self.records[key]["routing_savings"] += routing_savings

    def summary(self) -> dict:
        total_cost = sum(v["cost"] for v in self.records.values())
        saved_cache = sum(v["cache_savings"] for v in self.records.values())
        saved_route = sum(v["routing_savings"] for v in self.records.values())
        reqs = sum(v["requests"] for v in self.records.values())
        baseline = total_cost + saved_cache + saved_route
        return {
            "total_cost": round(total_cost, 2),
            "total_savings": round(saved_cache + saved_route, 2),
            "effective_discount": f"{(saved_cache + saved_route) / max(baseline, 0.01) * 100:.1f}%",
            "total_requests": reqs,
            "avg_cost_per_request": round(total_cost / max(reqs, 1), 6),
        }

tracker = CostTracker()
# After a day of traffic:
# {"total_cost": 15.42, "total_savings": 128.76, "effective_discount": "89.3%", ...}

Cheat Sheet#

| Step | Action | Expected Savings |
|---|---|---|
| 1 | Add cache_control to Anthropic system prompts | 50-90% on input tokens |
| 2 | Verify OpenAI auto-caching (cached_tokens in response) | 50% on input tokens |
| 3 | Build a 3-tier model router | 60-70% on total spend |
| 4 | Move batch workloads to Batch API | 50% on batch jobs |
| 5 | Add cost tracking to prove ROI | Visibility |

Calculate Your Savings with TokenTab

Sources#

  1. Anthropic — Prompt Caching docs — 90% discount, 5-min TTL
  2. OpenAI — Prompt Caching guide — Automatic, 50% discount
  3. DeepSeek — KV Cache docs — Automatic prefix caching, 90% discount
  4. Google — Context Caching for Gemini — Configurable TTL, 90% discount
  5. OpenAI — Batch API reference — 50% discount, 24h window
  6. Anthropic — Message Batches API — 50% discount, 24h window
  7. TokenTab — Live Model Pricing — Real-time pricing for 1,800+ models
