Prompt Caching + Model Routing: How to Cut Your AI API Bill by 90%
Most teams overpay for AI by 5-10x. Not because they picked the wrong model, but because they use the same expensive model for everything and resend the same context on every request.
Prompt caching eliminates redundant input costs. Model routing sends cheap queries to cheap models. Combine the two and your bill drops 80-90%. This guide includes working code for both.
90%
Maximum savings
Combining prompt caching + model routing + batch API
| Technique | How it works | Typical savings |
|---|---|---|
| Prompt caching | Reuses cached system prompts instead of reprocessing tokens | 50-90% on input tokens |
| Model routing | Sends simple queries to cheap models, hard ones to frontier models | 60-70% on total spend |
Part 1: Prompt Caching in Depth
How it works
Without caching, every API call reprocesses your full system prompt. A 2,000-token prompt across 10,000 requests/day = 20M input tokens processed from scratch.
With caching, the provider stores the processed prompt. Subsequent requests read it from the cache at a fraction of the cost:
Without caching:
Request 1: [System: 2000 tok] + [User: 200 tok] → 2200 input tokens billed
Request 2: [System: 2000 tok] + [User: 150 tok] → 2150 input tokens billed
Total: 4,350 tokens at full price
With caching:
Request 1: [System: 2000 tok → WRITE CACHE] + [User: 200 tok] → 2200 at full price
Request 2: [System: CACHE HIT] + [User: 150 tok] → 150 full-price + 2000 cached (90% off)
Total: 2,350 full-price + 2,000 cached tokens
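To put prices on that diagram, here is a minimal sketch that bills those two requests with and without caching, using the claude-sonnet-4-6 rates from the table below ($3.00/1M input, 90% off for cached tokens). It ignores Anthropic's small cache-write surcharge on the first request.
PRICE_INPUT = 3.00   # $ per 1M input tokens (claude-sonnet-4-6, from the pricing table)
PRICE_CACHED = 0.30  # $ per 1M cached input tokens (90% off)
def request_cost(full_price_tokens: int, cached_tokens: int = 0) -> float:
    return full_price_tokens / 1e6 * PRICE_INPUT + cached_tokens / 1e6 * PRICE_CACHED
without_cache = request_cost(2200) + request_cost(2150)                  # both requests at full price
with_cache = request_cost(2200) + request_cost(150, cached_tokens=2000)  # request 2 hits the cache
print(f"${without_cache:.6f} vs ${with_cache:.6f}")  # $0.013050 vs $0.007650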
Provider comparison
Frontier Model Pricing (Before Caching)
| Model | Input $/1M | Output $/1M | Cached $/1M | Context |
|---|---|---|---|---|
| gpt-5.4 (OpenAI) | $2.50 | $15.00 | $0.250 | 1.1M |
| gpt-5 (OpenAI) | $1.25 | $10.00 | $0.125 | 272K |
| claude-opus-4-6 (Anthropic) | $5.00 | $25.00 | $0.500 | 1M |
| claude-sonnet-4-6 (Anthropic) | $3.00 | $15.00 | $0.300 | 200K |
| gemini-3.1-pro-preview (Google) | $2.00 | $12.00 | $0.200 | 1.0M |
| gemini-2.5-pro-preview-05-06 (Google) | $1.25 | $10.00 | $0.125 | 1.0M |
| deepseek-chat (DeepSeek) | $0.280 | $0.420 | $0.028 | 131.1K |
Live pricing from TokenTab database. Prices may change — last synced from provider APIs.
| Provider | Cache discount | TTL | Activation |
|---|---|---|---|
| Anthropic | 90% off input | 5 min (ephemeral) | Manual — cache_control parameter |
| OpenAI | 50% off input | Automatic | Automatic — no code changes |
| Google | 90% off input | Configurable | Manual — cached_content API |
| DeepSeek | 90% off input | Automatic | Automatic — prefix matching |
Implementation with Anthropic
Anthropic offers the largest discount (90%) but requires explicit cache markers. The 5-minute TTL resets on every hit, which makes it ideal for high-traffic apps.
import anthropic
client = anthropic.Anthropic()
SYSTEM_PROMPT = """You are a senior code reviewer for a Python codebase.
Review code for: security vulnerabilities, performance issues,
readability problems, and adherence to PEP 8.
Always provide specific line references and suggested fixes.
Rate severity as: critical, warning, or info.
... (imagine 1500+ tokens of detailed instructions here)
"""
def review_code(code_snippet: str) -> str:
response = client.messages.create(
model="claude-sonnet-4-6",
max_tokens=1024,
system=[{
"type": "text",
"text": SYSTEM_PROMPT,
"cache_control": {"type": "ephemeral"} # enables caching
}],
messages=[
{"role": "user", "content": f"Review this code:\n```python\n{code_snippet}\n```"}
]
)
usage = response.usage
print(f"Input: {usage.input_tokens} | Cache read: {usage.cache_read_input_tokens} | Cache write: {usage.cache_creation_input_tokens}")
return response.content[0].text
# First call: cache write
result = review_code("def add(a, b): return a + b")
# Input: 1700 | Cache read: 0 | Cache write: 1500
# Second call within 5 min: cache hit — 90% cheaper on cached tokens
result = review_code("def multiply(x, y): return x * y")
# Input: 200 | Cache read: 1500 | Cache write: 0
TTL reset on Anthropic
Every cache hit resets the 5-minute TTL. If your app handles even 1 request every 5 minutes, the cache stays warm indefinitely. For batch processing, order the requests to maximize cache hits within the TTL window, as in the sketch below.
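A minimal sketch of that ordering: sort a mixed batch so requests that share a system prompt run back to back, so each prefix pays one cache write and the rest hit the warm cache. The request dicts here (system_prompt / user keys) are a made-up format for illustration, not an SDK structure.
def order_for_cache_hits(requests: list[dict]) -> list[dict]:
    """Group requests by shared system prompt so each group runs consecutively
    and finishes while its 5-minute cache entry is still warm."""
    return sorted(requests, key=lambda r: r["system_prompt"])
# Hypothetical mixed batch with two different system prompts interleaved
batch = [
    {"system_prompt": "You are a code reviewer...", "user": "Review snippet A"},
    {"system_prompt": "You are a support agent...", "user": "Summarize this ticket"},
    {"system_prompt": "You are a code reviewer...", "user": "Review snippet B"},
]
for req in order_for_cache_hits(batch):
    ...  # send with your client of choice, e.g. review_code(req["user"])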
Implementation with OpenAI (automatic)
OpenAI caches automatically for prompts longer than 1,024 tokens. No code changes needed, just verify it:
from openai import OpenAI
client = OpenAI()
def query_openai(user_message: str) -> str:
response = client.chat.completions.create(
model="gpt-5",
messages=[
{"role": "system", "content": SYSTEM_PROMPT},
{"role": "user", "content": user_message}
]
)
cached = getattr(response.usage, "prompt_tokens_details", None)
if cached:
print(f"Cached tokens: {cached.cached_tokens}") # > 0 = cache hit
return response.choices[0].message.content
DeepSeek (automatic prefix caching)
DeepSeek gives 90% off with automatic prefix-based caching via its on-disk cache system. Keep your system prompt consistent and DeepSeek handles the rest:
client = OpenAI(api_key="your-deepseek-key", base_url="https://api.deepseek.com")
def query_deepseek(user_message: str) -> str:
response = client.chat.completions.create(
model="deepseek-chat",
messages=[
{"role": "system", "content": SYSTEM_PROMPT},
{"role": "user", "content": user_message}
]
)
print(f"Cache hit tokens: {getattr(response.usage, 'prompt_cache_hit_tokens', 0)}")
return response.choices[0].message.content
Calculating real savings
The prompt caching savings math
Scenario: 10,000 requests/day, 2,000-token system prompt, 200 average user tokens, 500 average output tokens.
Without caching (Claude Sonnet): 22M input tokens/day x $3/M = $66/day.
With caching (95% hit rate): 3M full-price tokens (2M user + 1M uncached system) + 19M cached at $0.30/M = $14.70/day.
Savings: $51.30/day, about $1,539/month (roughly a 78% reduction in input costs, before Anthropic's small cache-write surcharge on misses).
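The same arithmetic as a short script, so you can plug in your own traffic profile (rates are claude-sonnet-4-6 from the table above):
REQUESTS_PER_DAY = 10_000
SYSTEM_TOKENS, USER_TOKENS = 2_000, 200
PRICE_INPUT, PRICE_CACHED = 3.00, 0.30   # $ per 1M tokens (claude-sonnet-4-6)
HIT_RATE = 0.95
no_cache = REQUESTS_PER_DAY * (SYSTEM_TOKENS + USER_TOKENS) / 1e6 * PRICE_INPUT
full_price = REQUESTS_PER_DAY * USER_TOKENS + REQUESTS_PER_DAY * (1 - HIT_RATE) * SYSTEM_TOKENS
cached = REQUESTS_PER_DAY * HIT_RATE * SYSTEM_TOKENS
with_cache = (full_price * PRICE_INPUT + cached * PRICE_CACHED) / 1e6
print(f"Without cache: ${no_cache:.2f}/day")    # $66.00/day
print(f"With cache:    ${with_cache:.2f}/day")  # $14.70/day
print(f"Savings:       ${no_cache - with_cache:.2f}/day ({1 - with_cache / no_cache:.0%})")  # $51.30/day (78%)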
Pricing snapshot (claude-sonnet-4-6): $4,050.00/mo standard vs $2,430.00/mo with prompt caching, a saving of $1,620.00/mo ($19,440.00/yr, 40%).
Part 2: Model Routing in Depth
Why most queries don't need frontier models
Around 70% of typical AI API traffic consists of simple tasks: classification, extraction, reformatting, basic Q&A. Sending these to GPT-5 or Claude Opus is like hiring a PhD to sort the mail.
70%
of API traffic
Can be handled by smaller, cheaper models
Price Spread: Frontier vs Lightweight Models
| Model | Input $/1M | Output $/1M | Cached $/1M | Context |
|---|---|---|---|---|
| claude-opus-4-6 (Anthropic) | $5.00 | $25.00 | $0.500 | 1M |
| gpt-5.4 (OpenAI) | $2.50 | $15.00 | $0.250 | 1.1M |
| claude-sonnet-4-6 (Anthropic) | $3.00 | $15.00 | $0.300 | 200K |
| gemini-3.1-pro-preview (Google) | $2.00 | $12.00 | $0.200 | 1.0M |
| gpt-5 (OpenAI) | $1.25 | $10.00 | $0.125 | 272K |
| claude-haiku-4-5-20251001 (Anthropic) | $1.00 | $5.00 | $0.100 | 200K |
| gpt-5-mini (OpenAI) | $0.250 | $2.00 | $0.025 | 272K |
| gpt-5-nano (OpenAI) | $0.050 | $0.400 | $0.0050 | 272K |
| deepseek-chat (DeepSeek) | $0.280 | $0.420 | $0.028 | 131.1K |
| grok-4-1-fast (xAI) | $0.200 | $0.500 | $0.050 | 2M |
Live pricing from TokenTab database. Prices may change — last synced from provider APIs.
Build a model router
This router classifies query complexity and sends each request to the right tier:
import anthropic
from openai import OpenAI
from dataclasses import dataclass
from enum import Enum
class Tier(Enum):
NANO = "nano" # Classification, extraction
MID = "mid" # Summarization, Q&A
FRONTIER = "frontier" # Reasoning, code gen, analysis
@dataclass
class ModelConfig:
provider: str
model: str
cost_per_1k_input: float
cost_per_1k_output: float
MODEL_TIERS: dict[Tier, ModelConfig] = {
Tier.NANO: ModelConfig("openai", "gpt-5-nano", 0.00010, 0.00040),
Tier.MID: ModelConfig("deepseek", "deepseek-chat", 0.00014, 0.00028),
Tier.FRONTIER: ModelConfig("anthropic", "claude-sonnet-4-6", 0.003, 0.015),
}
COMPLEXITY_KEYWORDS = {
"high": ["analyze", "compare", "debug", "refactor", "architect",
"design", "optimize", "explain why", "trade-off", "reason"],
"low": ["classify", "extract", "format", "convert", "translate",
"summarize briefly", "yes or no", "list the", "parse"],
}
def classify_complexity(query: str) -> Tier:
query_lower = query.lower()
word_count = len(query.split())
high = sum(1 for kw in COMPLEXITY_KEYWORDS["high"] if kw in query_lower)
low = sum(1 for kw in COMPLEXITY_KEYWORDS["low"] if kw in query_lower)
if high >= 2 or (word_count > 200 and high >= 1):
return Tier.FRONTIER
if low >= 1 and word_count < 50:
return Tier.NANO
return Tier.MID
# Provider clients
clients = {
"anthropic": anthropic.Anthropic(),
"openai": OpenAI(),
"deepseek": OpenAI(api_key="deepseek-key", base_url="https://api.deepseek.com"),
}
def route_and_query(query: str, system_prompt: str = "") -> dict:
tier = classify_complexity(query)
config = MODEL_TIERS[tier]
if config.provider == "anthropic":
resp = clients["anthropic"].messages.create(
model=config.model, max_tokens=1024,
system=[{"type": "text", "text": system_prompt,
"cache_control": {"type": "ephemeral"}}] if system_prompt else [],
messages=[{"role": "user", "content": query}]
)
text, inp, out = resp.content[0].text, resp.usage.input_tokens, resp.usage.output_tokens
else:
resp = clients[config.provider].chat.completions.create(
model=config.model,
messages=[*([{"role": "system", "content": system_prompt}] if system_prompt else []),
{"role": "user", "content": query}]
)
text, inp, out = resp.choices[0].message.content, resp.usage.prompt_tokens, resp.usage.completion_tokens
cost = inp / 1000 * config.cost_per_1k_input + out / 1000 * config.cost_per_1k_output
return {"tier": tier.value, "model": config.model, "response": text, "cost": cost}
# Simple extraction → nano ($0.0001/1K tokens)
result = route_and_query("Extract all email addresses from this text: ...")
# Routed to: nano (gpt-5-nano) — Cost: $0.000024
# Complex reasoning → frontier
result = route_and_query("Analyze the trade-offs between microservices and monolith and design an architecture...")
# Routed to: frontier (claude-sonnet-4-6) — Cost: $0.018500
The Cost-per-Success framework
Raw per-token cost is misleading. A cheap model that fails 40% of the time costs more than an expensive one that always succeeds. Use Cost-per-Success (CPS):
CPS = total_cost / successful_outputs
from dataclasses import dataclass, field
@dataclass
class CostPerSuccessTracker:
results: dict = field(default_factory=lambda: {
"nano": {"cost": 0.0, "success": 0, "total": 0},
"mid": {"cost": 0.0, "success": 0, "total": 0},
"frontier": {"cost": 0.0, "success": 0, "total": 0},
})
def record(self, tier: str, cost: float, success: bool):
self.results[tier]["cost"] += cost
self.results[tier]["total"] += 1
if success:
self.results[tier]["success"] += 1
def cps(self, tier: str) -> float:
r = self.results[tier]
return r["cost"] / r["success"] if r["success"] > 0 else float("inf")
def report(self):
for tier, r in self.results.items():
rate = r["success"] / r["total"] * 100 if r["total"] else 0
print(f"{tier:<10} {r['total']:>5} reqs | {rate:.0f}% success | CPS: ${self.cps(tier):.6f}")
After running 1,000 mixed queries:
| Tier | Queries | Success rate | Total cost | CPS |
|---|---|---|---|---|
| Nano | 450 | 94% | $0.018 | $0.000043 |
| Mid | 380 | 97% | $0.095 | $0.000258 |
| Frontier | 170 | 99% | $2.856 | $0.016941 |
| All frontier (no routing) | 1,000 | 99% | $16.80 | $0.016970 |
Savings with routing
Total with routing: $2.97. Total all-frontier: $16.80. Savings: 82%. The nano tier's CPS is 394x cheaper than frontier; for simple tasks, cheap models are more than good enough.
Model Routing: Cost per 1K requests
Same workload with routing vs all-frontier. Cheapest: gpt-5-nano saves $295.65/mo vs claude-opus-4-6.
Part 3: Combining Both Techniques
Routing alone saves 70%. Caching alone saves 75%. Together they compound:
| Optimization | Monthly cost | Savings |
|---|---|---|
| Baseline (all frontier, no caching) | $5,040 | — |
| + Prompt caching only | $1,260 | 75% |
| + Model routing only | $1,512 | 70% |
| + Both combined | $504 | 90% |
$4,536/month
Monthly savings
Caching + routing at 10K requests/day
The implementation is straightforward: use the router from Part 2 and add cache_control to every Anthropic call (already shown in route_and_query above). OpenAI and DeepSeek cache automatically.
Part 4: Batch API for Offline Work
Not everything needs real-time responses. Batch APIs give 50% off for asynchronous processing:
from openai import OpenAI
import json
client = OpenAI()
def submit_batch(queries: list[str], system_prompt: str) -> str:
# Build JSONL batch file
requests = [
{"custom_id": f"req-{i}", "method": "POST", "url": "/v1/chat/completions",
"body": {"model": "gpt-5-mini", "max_tokens": 512,
"messages": [{"role": "system", "content": system_prompt},
{"role": "user", "content": q}]}}
for i, q in enumerate(queries)
]
with open("/tmp/batch.jsonl", "w") as f:
for r in requests:
f.write(json.dumps(r) + "\n")
batch_file = client.files.create(file=open("/tmp/batch.jsonl", "rb"), purpose="batch")
job = client.batches.create(
input_file_id=batch_file.id,
endpoint="/v1/chat/completions",
completion_window="24h"
)
print(f"Batch {job.id} submitted — 50% cheaper, results within 24h")
return job.id
def get_results(batch_id: str) -> list[dict] | None:
batch = client.batches.retrieve(batch_id)
if batch.status == "completed":
content = client.files.content(batch.output_file_id)
return [json.loads(line) for line in content.text.strip().split("\n")]
print(f"Status: {batch.status}")
return None
When to use the Batch API
Batch is ideal for bulk content generation, dataset labeling, nightly reports, and embedding generation: any workload where you can wait up to 24 hours. At 50% off, it stacks with routing for even deeper savings. A minimal polling sketch for collecting results follows below.
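A sketch of collecting results with the helpers above: poll the job until it reaches a terminal state, then fetch the output file. The 5-minute poll interval is an arbitrary choice, not an API requirement.
import time
TERMINAL_STATES = {"completed", "failed", "expired", "cancelled"}
def wait_for_batch(batch_id: str, poll_seconds: int = 300) -> list[dict] | None:
    """Block until the batch job finishes, then return its parsed results.
    Returns None if the job ended in any state other than "completed"."""
    while True:
        batch = client.batches.retrieve(batch_id)
        if batch.status in TERMINAL_STATES:
            return get_results(batch_id)   # defined above; None unless completed
        time.sleep(poll_seconds)           # jobs can take up to 24h, so poll sparingly
# batch_id = submit_batch(queries, SYSTEM_PROMPT)
# rows = wait_for_batch(batch_id)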
Part 5: Cost Tracking — Prove Your Savings
import json
from datetime import datetime
from collections import defaultdict
class CostTracker:
def __init__(self):
self.records = defaultdict(lambda: {
"requests": 0, "cost": 0.0, "cache_savings": 0.0, "routing_savings": 0.0
})
def record(self, model: str, cost: float, cache_savings: float = 0, routing_savings: float = 0):
key = f"{datetime.now():%Y-%m-%d}:{model}"
self.records[key]["requests"] += 1
self.records[key]["cost"] += cost
self.records[key]["cache_savings"] += cache_savings
self.records[key]["routing_savings"] += routing_savings
def summary(self) -> dict:
total_cost = sum(v["cost"] for v in self.records.values())
saved_cache = sum(v["cache_savings"] for v in self.records.values())
saved_route = sum(v["routing_savings"] for v in self.records.values())
reqs = sum(v["requests"] for v in self.records.values())
baseline = total_cost + saved_cache + saved_route
return {
"total_cost": round(total_cost, 2),
"total_savings": round(saved_cache + saved_route, 2),
"effective_discount": f"{(saved_cache + saved_route) / max(baseline, 0.01) * 100:.1f}%",
"total_requests": reqs,
"avg_cost_per_request": round(total_cost / max(reqs, 1), 6),
}
tracker = CostTracker()
# After a day of traffic:
# {"total_cost": 15.42, "total_savings": 128.76, "effective_discount": "89.3%", ...}
Quick Reference Cheat Sheet
| Step | Action | Expected savings |
|---|---|---|
| 1 | Add cache_control to Anthropic system prompts | 50-90% on input tokens |
| 2 | Verify OpenAI auto-caching (cached_tokens in the response) | 50% on input tokens |
| 3 | Build a 3-tier model router | 60-70% on total spend |
| 4 | Move batch workloads to the Batch API | 50% on batch jobs |
| 5 | Add cost tracking to prove ROI | Visibility |
Sources
- Anthropic — Prompt Caching docs — 90% discount, 5-min TTL
- OpenAI — Prompt Caching guide — Automatic, 50% discount
- DeepSeek — KV Cache docs — Automatic prefix caching, 90% discount
- Google — Context Caching for Gemini — Configurable TTL, 90% discount
- OpenAI — Batch API reference — 50% discount, 24h window
- Anthropic — Message Batches API — 50% discount, 24h window
- TokenTab — Live Model Pricing — Real-time pricing for 1,800+ models