Available now on ModelsLab · Language Model

Mixtral 8X22b Instruct V0.1
Sparse Power, Dense Results

Deploy Mixtral Capabilities Fast

SMoE Architecture

39B Active Parameters

Activates just 39B of its 141B total parameters per token, running faster than dense 70B models at lower cost.
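For the curious, here is a toy sketch of the top-2 expert routing behind that number. It illustrates the idea only, it is not Mixtral's actual implementation: each token is scored by a router and sent through just 2 of 8 experts, so most parameters stay idle.

import torch

hidden, n_experts, top_k = 64, 8, 2
x = torch.randn(1, hidden)                        # one token's hidden state
gate = torch.nn.Linear(hidden, n_experts)         # router that scores experts
experts = [torch.nn.Linear(hidden, hidden) for _ in range(n_experts)]

scores = gate(x)                                  # (1, n_experts)
weights, idx = torch.topk(scores, top_k, dim=-1)  # keep only the top 2 experts
weights = torch.softmax(weights, dim=-1)
out = sum(w * experts[i](x) for w, i in zip(weights[0], idx[0]))
print(out.shape)  # torch.Size([1, 64]); the other 6 experts never ran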

Multilingual Chat

Native in Five Languages

Handles English, French, Italian, German, and Spanish, with a 64K-token context window for precise recall.

Function Calling

Native Tool Integration

Supports native function calling for building apps and integrating external tools into your stack.
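As a hypothetical sketch, a function-calling request might look like the snippet below. The "functions" field and its layout are assumptions (an OpenAI-style schema, suggested by the chat/completions endpoint); check the ModelsLab docs for the exact parameter names the API accepts.

import requests

# Hypothetical payload: "functions" and its schema are assumed, not confirmed.
payload = {
    "key": "YOUR_API_KEY",
    "model_id": "",  # the Mixtral 8X22b Instruct V0.1 id from your dashboard
    "prompt": "Get the weather in Paris in celsius.",
    "functions": [{
        "name": "get_current_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string"},
                "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
            },
            "required": ["city"],
        },
    }],
}
response = requests.post(
    "https://modelslab.com/api/v7/llm/chat/completions", json=payload
)
print(response.json())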

Examples

See what Mixtral 8X22b Instruct V0.1 can create

Copy any prompt below and try it yourself in the playground.

Math Proof

Prove the Pythagorean theorem step-by-step using geometric arguments, then verify with coordinates.

Code Debugger

Debug this Python function for sorting linked lists and optimize for O(n log n) time.

Multilingual Summary

Summarize quantum computing advances in French, then translate key terms to German.

Function Call

Get weather in Paris using get_current_weather tool with celsius format.

For Developers

Call the instruct model with just a few lines of code.

ModelsLab handles the infrastructure: fast inference, auto-scaling, and a developer-friendly API. No GPU management needed.

  • Serverless: scales to zero, scales to millions
  • Pay per token, no minimums
  • Python and JavaScript SDKs, plus REST API
import requests

# Replace key, prompt, and model_id with values from your ModelsLab dashboard.
response = requests.post(
    "https://modelslab.com/api/v7/llm/chat/completions",
    json={
        "key": "YOUR_API_KEY",
        "prompt": "",    # your prompt
        "model_id": ""   # the Mixtral 8X22b Instruct V0.1 model id
    }
)
print(response.json())

FAQ

Common questions about Mixtral 8X22b Instruct V0.1

Read the docs

What is Mixtral 8X22b Instruct V0.1?

An instruct fine-tuned version of Mixtral-8x22B-v0.1 for chat: an SMoE model with 141B total parameters, 39B active per token, supporting a 64K context.

How do I deploy it myself?

Access it via the vLLM or Transformers engines, or launch it with xinference or Hugging Face. It handles chat and function calling natively.
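As a minimal sketch of the Transformers route, assuming the public Hugging Face checkpoint and hardware with enough GPU memory to hold the weights (the hosted ModelsLab API above avoids this entirely):

from transformers import AutoModelForCausalLM, AutoTokenizer

# Loads the public checkpoint; device_map="auto" spreads weights across GPUs.
model_id = "mistralai/Mixtral-8x22B-Instruct-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer(
    "Explain sparse mixture-of-experts in one paragraph.", return_tensors="pt"
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))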

Which languages does it support?

It is fluent in English, French, Italian, German, and Spanish, with strong performance on math and coding tasks.

How does it compare to dense models?

Sparse activation makes it faster than 70B dense models and cost-efficient for large-scale apps.

How long is the context window?

64K tokens, suited to long-document recall. Note that the instruct format requires alternating user and assistant messages.
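To illustrate the alternation rule, here is a short sketch using the Transformers chat template; the assistant turn shown is placeholder content standing in for an earlier model reply.

from transformers import AutoTokenizer

# The instruct template expects strictly alternating user/assistant roles;
# templates like this one reject out-of-order message lists.
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mixtral-8x22B-Instruct-v0.1")
messages = [
    {"role": "user", "content": "Summarize quantum computing advances."},
    {"role": "assistant", "content": "(previous model reply)"},
    {"role": "user", "content": "Now translate the key terms to German."},
]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)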

Which quantization formats are supported?

AWQ Int4 and GPTQ Int4; use the PyTorch format for full precision.
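As a sketch, an Int4 AWQ build can be served with vLLM. The checkpoint path below is a placeholder, not an official repo name; substitute whichever AWQ-quantized Mixtral-8x22B checkpoint you actually use.

from vllm import LLM, SamplingParams

# Placeholder checkpoint path; quantization="awq" tells vLLM to load Int4 weights.
llm = LLM(model="path/to/mixtral-8x22b-instruct-awq", quantization="awq")
params = SamplingParams(max_tokens=128)
outputs = llm.generate(["Prove the Pythagorean theorem step-by-step."], params)
print(outputs[0].outputs[0].text)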

Ready to create?

Start generating with Mixtral 8X22b Instruct V0.1 on ModelsLab.