Qwen3-MT
Qwen3-MT is a machine translation model developed by Alibaba Cloud's Qwen team, released on July 25, 2025. It is fine-tuned from Qwen3 with a lightweight Mixture-of-Experts backbone and trained on trillions of multilingual tokens spanning formal, technical, and conversational text. The model covers 92 major languages and prominent dialects, reaching over 95% of the global population.
| General | |
|---|---|
| Release date | 25 Jul 2025 |
| Developer | Qwen / Alibaba Cloud |
| Type | Machine translation model (MoE fine-tune) |
| License | Commercial API |
| Documentation | Alibaba Cloud Model Studio |
| API | DashScope via Qwen API Platform |
Core Features
- 92 languages: covers major world languages and prominent dialects, reaching over 95% of the global population.
- Terminology control: allows custom terminology dictionaries to keep brand names, technical terms, and product names consistent.
- Domain prompting: a domain hint lets the model adapt output style for legal, medical, technical, or conversational text.
- Translation memory: integrates past translation pairs so repeated segments stay consistent across large documents.
- Competitive pricing: priced at $0.5 per million tokens, significantly lower than dense large models for translation workloads.
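As a sketch of how the terminology and domain controls above are typically wired up, the function below assembles a request payload in the DashScope style. The field names (`translation_options`, `terms`, `domains`) and the model identifier are illustrative assumptions; confirm the exact parameter names in the Model Studio documentation.

```python
# Sketch: building a translation request payload with a terminology
# dictionary and a domain hint. Field names ("translation_options",
# "terms", "domains") and the model name are assumptions here --
# check the Model Studio guide for the exact schema.

def build_translation_request(text, source_lang, target_lang,
                              terms=None, domain_hint=None):
    """Assemble the JSON body for a hypothetical translation call."""
    options = {
        "source_lang": source_lang,
        "target_lang": target_lang,
    }
    if terms:
        # Each term pins a source phrase to a fixed target rendering,
        # e.g. brand or product names that must stay consistent.
        options["terms"] = [
            {"source": s, "target": t} for s, t in terms.items()
        ]
    if domain_hint:
        # Free-text hint steering register (legal, medical, ...).
        options["domains"] = domain_hint
    return {
        "model": "qwen-mt-turbo",  # assumed model identifier
        "messages": [{"role": "user", "content": text}],
        "translation_options": options,
    }

body = build_translation_request(
    "Qwen3-MT unterstützt 92 Sprachen.",
    source_lang="German",
    target_lang="English",
    terms={"Qwen3-MT": "Qwen3-MT"},
    domain_hint="Technical documentation; keep product names verbatim.",
)
```

Keeping the terminology list in the request (rather than in the prompt text) lets the service enforce the mapping mechanically instead of relying on the model to notice an instruction.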
Performance
On translation benchmarks, Qwen3-MT outperforms comparably sized models such as GPT-4.1-mini and Gemini-2.5-Flash, while remaining competitive with larger models such as GPT-4.1 and Gemini-2.5-Pro on translation quality metrics.
Tools and Resources
- Qwen-MT Blog Post: technical overview and benchmark results.
- Alibaba Cloud Model Studio: full API documentation and integration guide.
- Qwen API Platform: generate an API key to get started.
- DashScope SDK (PyPI): Python SDK for calling Qwen3-MT and other Qwen models.
Ecosystem and Integrations
- Served through Alibaba Cloud DashScope, accessible with the OpenAI-compatible endpoint or the native DashScope SDK.
- Supports batch translation for high-volume document workflows.
- Term dictionaries and translation memory integrate via API request parameters, requiring no custom fine-tuning.
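For the OpenAI-compatible path, a minimal sketch is to POST a standard chat-completions body to the compatible-mode endpoint. The base URL and model name below are assumptions for illustration (verify both in Model Studio); the request is built but not sent, so no API key is consumed.

```python
# Sketch: preparing a request for the OpenAI-compatible chat endpoint
# served through DashScope. The base URL and model identifier are
# assumptions; confirm them in the Model Studio documentation.
import json
import os
import urllib.request

BASE_URL = "https://dashscope-intl.aliyuncs.com/compatible-mode/v1"  # assumed

def prepare_chat_request(api_key, text, target_lang="English"):
    """Build (but do not send) a chat-completions request asking the
    model to translate `text` into `target_lang`."""
    body = {
        "model": "qwen-mt-turbo",  # assumed model identifier
        "messages": [
            {"role": "user",
             "content": f"Translate into {target_lang}:\n{text}"},
        ],
    }
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(body).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = prepare_chat_request(
    os.environ.get("DASHSCOPE_API_KEY", "sk-demo"),
    "Bonjour le monde",
)
# To actually send it: urllib.request.urlopen(req)
```

Because the endpoint speaks the OpenAI wire format, the official `openai` Python client can also be pointed at it by overriding the client's base URL.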
Get started by generating an API key on the Qwen API Platform and following the Model Studio translation guide.
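The translation-memory behavior described under Core Features can be illustrated client-side: cache completed segment pairs and reuse them whenever a segment repeats, so identical source text always yields identical output. This is a conceptual sketch of the idea, not the service's actual mechanism; `translate_segment` stands in for a real Qwen3-MT call.

```python
# Conceptual sketch of client-side translation memory: repeated
# segments reuse the cached translation instead of triggering a
# fresh translation call.

def translate_document(segments, translate_segment, memory=None):
    """Translate a list of segments, reusing cached results so that
    identical segments always get identical translations."""
    memory = {} if memory is None else memory
    out = []
    for seg in segments:
        if seg not in memory:
            memory[seg] = translate_segment(seg)
        out.append(memory[seg])
    return out

calls = []
def fake_translate(seg):
    # Stand-in for a real API call; records what was translated.
    calls.append(seg)
    return seg.upper()

result = translate_document(["Hello", "World", "Hello"], fake_translate)
```

Passing a persistent `memory` dict across documents keeps terminology and phrasing consistent over a whole project while also cutting token spend on repeated segments.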