Anyone who has used large language models knows that, for technical reasons, most models support English far better than Chinese.
Today, I'd like to recommend the most powerful Chinese model: Tongyi Qianwen (Alibaba Cloud's Qwen).
Recently, Alibaba Cloud released version 2.5 of Tongyi Qianwen. In Chinese contexts, Qwen2.5 surpasses GPT-4 in multiple capabilities, including text understanding, text generation, knowledge Q&A, life advice, casual chat, and dialogue safety. Compared to Qwen2.1, Qwen2.5 improves understanding, logical reasoning, instruction following, and coding ability by 9%, 16%, 19%, and 10% respectively.
On the authoritative benchmark OpenCompass, Qwen2.5 tied with GPT-4 Turbo, marking the first time a domestic model has achieved this milestone on this benchmark.
[Image]
Tongyi Qianwen has consistently embraced open source, staying open all the way from models to cloud services. Its hundred-billion-parameter model was open-sourced outright, and the Bailian platform is compatible with open-source frameworks, preserving maximum openness and driving the collective progress of domestic large models.
The more open the technology, the faster it advances. On the recent HuggingFace Open LLM Leaderboard for open-source models, we were surprised to find that the newly open-sourced Qwen1.5-110B has topped the chart, performing even better than Llama-3-70B.
In March of last year, OpenAI released GPT-4. Now, the release of Qwen2.5 shows that after more than a year of catching up, domestic large models have finally entered the core arena and can compete with world-class models. This is truly a domestic large model you can actually use!
Related Reading
Programmer Wanfeng specializes in AI programming training. Beginners can start working on AI projects after watching his tutorial 《30 Lectures · AI Programming Bootcamp》, a collaboration with Turing Community.
🎓 AI Programming Practical Course
Want to learn AI programming systematically? Programmer Wanfeng's AI Programming Practical Course helps you get started from zero!
- 👉 Course registration: click here to sign up; the first 3 lectures are a free trial
- 👉 Free preview: watch the first 3 lectures for free on Bilibili to see whether the course suits you