mistral-7b-instruct-v0.2
Model ID: @hf/mistral/mistral-7b-instruct-v0.2
The Mistral-7B-Instruct-v0.2 Large Language Model (LLM) is an instruct fine-tuned version of the Mistral-7B-v0.2. Mistral-7B-v0.2 has the following changes compared to Mistral-7B-v0.1: 32k context window (vs 8k context in v0.1), rope-theta = 1e6, and no Sliding-Window Attention.
Properties
Task Type: Text Generation
Use the Playground
Try out this model with the Workers AI Model Playground. It does not require any setup or authentication and is an instant way to preview and test a model directly in the browser.
Launch the Model Playground

Worker
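Here is a minimal Worker sketch that calls the model through the Workers AI binding. It assumes an AI binding named AI is configured for the Worker; the binding name and the example messages are illustrative.

```js
export default {
  async fetch(request, env) {
    // Run the model through the Workers AI binding.
    // The binding name `AI` is an assumption; it must match your Wrangler config.
    const response = await env.AI.run("@hf/mistral/mistral-7b-instruct-v0.2", {
      messages: [
        { role: "system", content: "You are a friendly assistant" },
        { role: "user", content: "What is the origin of the phrase Hello, World?" },
      ],
    });
    return Response.json(response);
  },
};
```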
cURL
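The equivalent request over the REST API looks roughly like this; substitute your own account ID and API token:

```sh
curl https://api.cloudflare.com/client/v4/accounts/$CLOUDFLARE_ACCOUNT_ID/ai/run/@hf/mistral/mistral-7b-instruct-v0.2 \
  -X POST \
  -H "Authorization: Bearer $CLOUDFLARE_API_TOKEN" \
  -d '{ "messages": [{ "role": "user", "content": "What is the origin of the phrase Hello, World?" }] }'
```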
Prompting
Part of getting good results from text generation models is asking questions correctly. LLMs are usually trained with specific predefined templates, which should then be used with the model's tokenizer for better results when doing inference tasks.
We recommend using unscoped prompts for inference with LoRA.
Unscoped prompts
You can use unscoped prompts to send a single question to the model without worrying about providing any context. Workers AI will automatically convert your prompt input to a reasonable default scoped prompt internally so that you get the best possible prediction.
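For example, a sketch of an unscoped prompt sent through the Workers binding (assuming the same AI binding as above):

```js
// Unscoped prompt: a single question with no explicit template or roles.
const response = await env.AI.run("@hf/mistral/mistral-7b-instruct-v0.2", {
  prompt: "Tell me a joke about Cloudflare",
});
```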
You can also use unscoped prompts to construct the model chat template manually. In this case, you can use the raw parameter. Here's an input example of a Mistral chat template prompt:
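A sketch of such a request, with raw set to true so Workers AI does not apply its own template; the prompt follows Mistral's published [INST] instruction format:

```js
// With raw: true, Workers AI skips automatic templating, so the prompt
// must already be formatted as a Mistral chat template.
const response = await env.AI.run("@hf/mistral/mistral-7b-instruct-v0.2", {
  raw: true,
  prompt: "<s>[INST] What is the origin of the phrase Hello, World? [/INST]",
});
```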
Responses
Using streaming
The recommended method to handle text generation responses is streaming.
LLMs work internally by generating responses sequentially through repeated inference: the full output of an LLM is essentially a sequence of hundreds or thousands of individual prediction tasks. For this reason, while it only takes a few milliseconds to generate a single token, generating the full response takes longer, on the order of seconds.
You can use streaming to start displaying the response as soon as the first tokens are generated, and append each additional token until the response is complete. This yields a much better experience for the end user. Displaying text incrementally as it's generated not only provides instant responsiveness, but also gives the end user time to read and interpret the text.
To enable, set the stream parameter to true.
Using the Workers API:
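A sketch of a streaming call from a Worker (same assumed AI binding), returning the stream to the client as server-sent events:

```js
export default {
  async fetch(request, env) {
    // With stream: true, the binding returns a ReadableStream of
    // server-sent events instead of a complete JSON object.
    const stream = await env.AI.run("@hf/mistral/mistral-7b-instruct-v0.2", {
      prompt: "Tell me a story about Cloudflare",
      stream: true,
    });
    return new Response(stream, {
      headers: { "content-type": "text/event-stream" },
    });
  },
};
```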
Using the REST API:
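Over the REST API, add "stream": true to the request body:

```sh
curl https://api.cloudflare.com/client/v4/accounts/$CLOUDFLARE_ACCOUNT_ID/ai/run/@hf/mistral/mistral-7b-instruct-v0.2 \
  -X POST \
  -H "Authorization: Bearer $CLOUDFLARE_API_TOKEN" \
  -d '{ "prompt": "Tell me a story about Cloudflare", "stream": true }'
```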
Streaming responses use server-sent events; they are easy to use, simple to implement on the server side, standardized, and broadly available across many platforms, either natively or as a polyfill.
Handling streaming responses in the client
Below is an example showing how to parse this response in JavaScript, from the browser:
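This is a minimal sketch; it assumes the streaming Worker above is served at /api/stream (a hypothetical path) and that each event's data field is either a JSON chunk with a response property or the literal [DONE] terminator:

```js
// Parse the server-sent event stream in the browser.
const source = new EventSource("/api/stream");

source.onmessage = (event) => {
  // The stream ends with a literal "[DONE]" sentinel.
  if (event.data === "[DONE]") {
    source.close();
    return;
  }
  // Each event carries one chunk of generated text.
  const chunk = JSON.parse(event.data);
  document.querySelector("#output").textContent += chunk.response;
};
```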
Non-streaming response
Non-streaming responses are supported and may be helpful in some contexts; however, be aware that we limit the maximum number of output sequence tokens to avoid timeouts. Whenever possible, use streaming.
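For reference, a non-streaming call resolves to a single JSON object. With this model, the generated text arrives in a response field, along these lines (the text itself is illustrative):

```json
{ "response": "The phrase Hello, World traces back to..." }
```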
API Schema
The following schema is based on JSON Schema.
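A partial, illustrative sketch of the input schema, covering only the parameters discussed above (prompt, raw, stream, and messages); the model accepts additional fields not shown here:

```json
{
  "type": "object",
  "oneOf": [
    { "required": ["prompt"] },
    { "required": ["messages"] }
  ],
  "properties": {
    "prompt": { "type": "string" },
    "raw": { "type": "boolean" },
    "stream": { "type": "boolean" },
    "messages": {
      "type": "array",
      "items": {
        "type": "object",
        "required": ["role", "content"],
        "properties": {
          "role": { "type": "string" },
          "content": { "type": "string" }
        }
      }
    }
  }
}
```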