meta-llama/llama-4-scout-17b-16e-instruct
Pricing, specs, and comparison for meta-llama/llama-4-scout-17b-16e-instruct by Groq.
- Input Cost: $0.110 / 1M tokens
- Output Cost: $0.340 / 1M tokens
- Context Window: 131.1K tokens
- Max Output: 8.2K tokens
- Features: Vision, Function Calling
Cost Examples
| Usage | Input Cost | Output Cost | Total |
|---|---|---|---|
| 1K tokens in + 500 out | $0.0001 | $0.0002 | $0.0003 |
| 10K tokens in + 2K out | $0.0011 | $0.0007 | $0.0018 |
| 100K tokens in + 10K out | $0.0110 | $0.0034 | $0.0144 |
| 1M tokens in + 100K out | $0.1100 | $0.0340 | $0.1440 |
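The rows above follow straightforward per-token arithmetic. A minimal sketch, assuming the prices listed on this page ($0.110 per 1M input tokens, $0.340 per 1M output tokens); the function name is illustrative, not part of any API:

```python
# Per-request cost arithmetic behind the table above.
# Prices are assumptions taken from this page.
INPUT_PRICE_PER_M = 0.110   # USD per 1M input tokens
OUTPUT_PRICE_PER_M = 0.340  # USD per 1M output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the total USD cost for one request, rounded to 4 decimals."""
    input_cost = input_tokens / 1_000_000 * INPUT_PRICE_PER_M
    output_cost = output_tokens / 1_000_000 * OUTPUT_PRICE_PER_M
    return round(input_cost + output_cost, 4)

print(request_cost(100_000, 10_000))  # 0.0144, matching the third row
```

The same function reproduces every row of the table, e.g. `request_cost(1_000_000, 100_000)` gives 0.144.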
Similar Models
Models in a similar price range from other providers.
| Model | Provider | Input $/1M | Output $/1M | Context |
|---|---|---|---|---|
| meta-llama/llama-4-scout-17b-16e-instruct | Groq | $0.110 | $0.340 | 131.1K |
| us/gpt-4.1-nano-2025-04-14 | Azure | $0.110 | $0.440 | 1.0M |
| qwen/qwen3-235b-a22b-thinking-2507 | OpenRouter | $0.110 | $0.600 | 262.1K |
| google/gemma-3-27b-it | Novita AI | $0.119 | $0.200 | 98.3K |
| meta.llama3-2-1b-instruct-v1:0 | AWS Bedrock | $0.100 | $0.100 | 128K |
| mistral.ministral-3-3b-instruct | AWS Bedrock | $0.100 | $0.100 | 128K |