Jamba 1.5 Large
AI21 Labs
7.3/10
Dimension Breakdown
Tool Calling 7/10
Reliability of function/tool calling — correct schema adherence and parameter extraction
Cost Efficiency 8/10
Price per token relative to output quality for agent tasks
Latency 8/10
Response time — time to first token and total generation time under load
API Reliability 7/10
Uptime, rate limit headroom, and error rates in production
Context Quality 7/10
Long-context coherence and instruction following over turns
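The Tool Calling dimension above scores "correct schema adherence and parameter extraction". A minimal sketch of what that check looks like in practice, assuming an OpenAI-style function-calling schema (the `get_weather` tool, its fields, and the validator itself are illustrative, not part of this review or AI21's API):

```python
import json

# Hypothetical tool schema in the common function-calling format;
# tool name and parameters are illustrative assumptions.
TOOL_SCHEMA = {
    "name": "get_weather",
    "parameters": {
        "type": "object",
        "properties": {
            "city": {"type": "string"},
            "units": {"type": "string", "enum": ["celsius", "fahrenheit"]},
        },
        "required": ["city"],
    },
}

# Map JSON Schema type names to Python types for a rough structural check.
JSON_TYPES = {"string": str, "number": (int, float), "integer": int,
              "boolean": bool, "object": dict, "array": list}

def adheres_to_schema(raw_arguments: str, schema: dict) -> bool:
    """Check a model's tool-call argument string against the declared schema:
    valid JSON, all required fields present, no undeclared fields, and each
    field matching its declared type (and enum, where given)."""
    try:
        args = json.loads(raw_arguments)
    except json.JSONDecodeError:
        return False
    params = schema["parameters"]
    props = params.get("properties", {})
    if not all(k in args for k in params.get("required", [])):
        return False
    for key, value in args.items():
        if key not in props:
            return False
        spec = props[key]
        expected = JSON_TYPES.get(spec.get("type"))
        if expected and not isinstance(value, expected):
            return False
        if "enum" in spec and value not in spec["enum"]:
            return False
    return True
```

A 7/10 here roughly means the model's emitted argument strings pass a check like this most, but not all, of the time.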
Top Use Cases
RAG General
Summary
The hybrid Transformer-Mamba (SSM) architecture enables efficient inference over very long contexts, a good fit for RAG, but tool-calling reliability lags behind market leaders.
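The summary's point about long-context RAG is that retrieved passages can be packed into the prompt rather than aggressively truncated. A minimal sketch of that pattern, assuming a 256K-token context window and a crude 4-characters-per-token estimate (both assumptions for illustration, not a real tokenizer or a documented limit):

```python
# Assumed context window and chars-per-token heuristic, for illustration only.
CONTEXT_WINDOW_TOKENS = 256_000
CHARS_PER_TOKEN = 4

def build_rag_prompt(question: str, passages: list[str],
                     budget_tokens: int = CONTEXT_WINDOW_TOKENS // 2) -> str:
    """Greedily pack retrieved passages (highest-ranked first) into the
    prompt until the token budget is exhausted, then append the question."""
    parts, used = [], 0
    for i, passage in enumerate(passages):
        cost = len(passage) // CHARS_PER_TOKEN + 1  # rough token estimate
        if used + cost > budget_tokens:
            break  # budget exhausted; drop lower-ranked passages
        parts.append(f"[Document {i + 1}]\n{passage}")
        used += cost
    context = "\n\n".join(parts)
    return (f"Answer using only the documents below.\n\n{context}\n\n"
            f"Question: {question}")
```

With a long-context model, the budget can be set high enough that whole documents fit, trading prompt size for retrieval precision.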
Practitioner Reviews
No reviews yet
Related Models
- GPT-4o (OpenAI, 8.5/10)
- Gemini 1.5 Pro (Google, 8.4/10)
- Gemini 2.0 Flash Thinking (Google, 8.2/10)
Last updated: May 4, 2026