Webvar

Jamba 1.5 Large (Amazon Bedrock Edition)

Jamba 1.5 Large is the first production-grade model built on a hybrid Mamba-Transformer architecture, offering exceptional efficiency. With its 256K-token context window, it delivers superior output quality for tasks that require large input context and low latency, at a competitive price point for its class.
View offer on AWS
Purchase this listing from Webvar in AWS Marketplace using your AWS account. In AWS Marketplace, you can quickly launch pre-configured software with just a few clicks. AWS handles billing and payments, and charges appear on your AWS bill.

About

AI21 Labs Jamba 1.5 Large is a foundation model built on a groundbreaking hybrid architecture, combining the novel Mamba architecture with the traditional Transformer architecture to achieve leading quality at the best price in its class.

By drawing on its SSM-Transformer hybrid architecture, as well as its impressive 256K context window, Jamba 1.5 Large efficiently solves a variety of text generation and comprehension use cases for the enterprise. Its 94B active parameters and 398B total parameters lead to superior accuracy in responses.

Jamba 1.5 Large is ideal for data-heavy enterprise workflows that require a model to ingest a large amount of information in order to produce an accurate and thorough response, such as summarizing lengthy documents or answering questions across an extensive organizational knowledge base. It is designed for superior response quality, high throughput, and an attractive price compared to other models in its size class.
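On Amazon Bedrock, the model is reachable through the `bedrock-runtime` Converse API. A minimal sketch in Python with boto3; the model ID `ai21.jamba-1-5-large-v1:0` and the inference settings are assumptions here, and AWS credentials with Bedrock model access must already be configured:

```python
# Sketch: calling Jamba 1.5 Large through the Amazon Bedrock Converse API.
# The model ID and inference settings below are assumptions; check the
# listing in your AWS region for the exact identifier.

MODEL_ID = "ai21.jamba-1-5-large-v1:0"  # assumed Bedrock model ID

def build_request(prompt: str, max_tokens: int = 1024) -> dict:
    """Build the keyword arguments for bedrock-runtime's converse() call."""
    return {
        "modelId": MODEL_ID,
        "messages": [{"role": "user", "content": [{"text": prompt}]}],
        "inferenceConfig": {"maxTokens": max_tokens, "temperature": 0.4},
    }

def ask(prompt: str) -> str:
    """Send a single-turn prompt and return the model's text reply."""
    import boto3  # imported lazily so build_request() stays dependency-free
    client = boto3.client("bedrock-runtime")
    response = client.converse(**build_request(prompt))
    return response["output"]["message"]["content"][0]["text"]

# Example (requires AWS credentials and Bedrock model access):
# print(ask("Summarize the key risks in this 10-K filing: ..."))
```

Keeping the request construction in a separate function makes the payload easy to inspect or log before any network call is made.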

Use cases:

Multi-document analysis: Summarize or compare across multiple documents at once to identify key points and insights. For example, summarizing eight years' worth of 10-K financial reports from a public company.

Multi-document question answering: Query multiple documents, records, or policies in a database. For example, powering a customer-facing chatbot through 15 thorough Q&A exchanges, referencing 50 pages of support articles per question.

Organizational search assistant: Improve the retrieval stage of a RAG system for organizational data, resulting in higher quality answers. For example, an internal agent assistant can hold up to 800 pages of organizational data in its context window to provide accurate and thorough responses.
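The multi-document use cases above boil down to packing several documents into one long prompt and budgeting against the 256K-token window. A rough sketch in Python, assuming a crude 4-characters-per-token heuristic rather than a real tokenizer:

```python
# Sketch: packing multiple documents into one long-context prompt,
# budgeting roughly against Jamba 1.5 Large's 256K-token window.
# The 4-chars-per-token ratio is a crude heuristic, not a real tokenizer.

CONTEXT_WINDOW = 256_000      # tokens
CHARS_PER_TOKEN = 4           # rough heuristic
RESERVED_FOR_ANSWER = 4_000   # tokens left for the model's response

def pack_documents(question: str, documents: list[str]) -> str:
    """Concatenate as many documents as fit the budget, then the question."""
    budget = (CONTEXT_WINDOW - RESERVED_FOR_ANSWER) * CHARS_PER_TOKEN
    parts, used = [], 0
    for i, doc in enumerate(documents, start=1):
        block = f"--- Document {i} ---\n{doc}\n"
        if used + len(block) > budget:
            break  # window full; remaining documents are dropped
        parts.append(block)
        used += len(block)
    parts.append(f"\nQuestion: {question}")
    return "".join(parts)
```

In a production RAG pipeline the retrieval stage would rank documents first, so the most relevant ones are packed before the budget runs out.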

Related Products

How it works

Search

Search 25,000+ products and services vetted by AWS.

Request private offer

Our team will send you an offer link to view.

Purchase

Accept the offer in your AWS account, and start using the software.

Manage

All your transactions will be consolidated into one bill in AWS.

Create Your Marketplace with Webvar!

Launch your marketplace effortlessly with our solutions. Optimize sales processes and expand your reach with our platform.