[docs] Add transformers error description
docs/transformers_compatibility.md (new file, 279 lines)
# Transformers Low-Version Compatibility Issues

## Overview

This document records the compatibility issues nano-vllm encounters under low-version transformers (< 4.51.0). The problems stem from nano-vllm's use of the `Qwen3Config` class, which was only introduced in transformers 4.51.0.
## Background

### Test Environment

| Component | Version | Notes |
|------|------|------|
| Docker image | `tzj/ruler:v0.3` | NVIDIA PyTorch 24.08 container |
| transformers | 4.45.2 | Preinstalled in the image |
| Python | 3.10.12 | System version |
| PyTorch | 2.5.0a0+872d972 | CUDA 12.6 |
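As a quick sanity check, the versions in the table can be confirmed from inside the container. The helper below is a small sketch, not part of nano-vllm; it reads installed package metadata and degrades gracefully when a package is absent.

```python
import platform
from importlib import metadata

def installed_version(pkg: str) -> str:
    """Return the installed version of `pkg`, or 'not installed'."""
    try:
        return metadata.version(pkg)
    except metadata.PackageNotFoundError:
        return "not installed"

# Packages relevant to the compatibility matrix above.
for pkg in ("transformers", "torch"):
    print(pkg, installed_version(pkg))
print("Python", platform.python_version())
```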
### Conflict Scenario

In the RULER benchmark environment, the NeMo framework depends on transformers 4.45.2 and a pinned version of `huggingface_hub`. Upgrading transformers to 4.51.0+ causes:

```
ImportError: cannot import name 'ModelFilter' from 'huggingface_hub'
```

nano-vllm therefore needs to support low-version transformers so that both can run in the same environment.
## Detailed Problem Analysis

### 1. Core Problem: Qwen3Config Does Not Exist

**Error message**:

```
ImportError: cannot import name 'Qwen3Config' from 'transformers'
(/usr/local/lib/python3.10/dist-packages/transformers/__init__.py)
```
**Root cause**:

- `Qwen3Config` was first introduced in transformers **4.51.0**
- transformers 4.45.2 only ships the `Qwen2` model family

**Affected versions**:

| transformers version | Qwen3 support | Available Qwen models |
|------------------|-----------|---------------|
| < 4.51.0 | No | qwen2, qwen2_audio, qwen2_moe, qwen2_vl |
| >= 4.51.0 | Yes | qwen2 family + qwen3, qwen3_moe |
### 2. Impact Scope

#### 2.1 Directly Affected Files

| File | Offending code | Impact |
|---------|---------|------|
| `nanovllm/models/qwen3.py:4` | `from transformers import Qwen3Config` | Import fails directly |
| `nanovllm/models/__init__.py:6` | `from nanovllm.models import qwen3` | Triggers the qwen3 import |
#### 2.2 Cascading Impact

Because `nanovllm/models/__init__.py` imports the `qwen3` module unconditionally, the failure cascades:

```python
# All of these imports fail
from nanovllm.models import llama            # FAILED
from nanovllm.models import get_model_class  # FAILED
import nanovllm                              # FAILED
```

**Verified behavior**:

```python
# transformers 4.45.2 environment

>>> from nanovllm.models.registry import register_model
SUCCESS  # the registry itself imports fine

>>> from nanovllm.config import Config
SUCCESS  # config does not depend on Qwen3Config

>>> from nanovllm.models import llama
FAILED: cannot import name 'Qwen3Config' from 'transformers'
# because models/__init__.py imports qwen3 first
```
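The same probing can be done programmatically. The helper below is hypothetical (not part of nano-vllm) and simply reports whether a module imports cleanly, mirroring the cascade described above without crashing the caller:

```python
import importlib

def probe(module_name: str) -> str:
    """Try to import a module; report SUCCESS or the ImportError message."""
    try:
        importlib.import_module(module_name)
        return "SUCCESS"
    except ImportError as e:  # ModuleNotFoundError is a subclass
        return f"FAILED: {e}"

# In the 4.45.2 environment, llama would fail only because
# models/__init__.py drags in qwen3 first.
for mod in ("nanovllm.config", "nanovllm.models.llama"):
    print(mod, "->", probe(mod))
```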
### 3. Where Qwen3Config Is Used

Usage in `nanovllm/models/qwen3.py`:

```python
# Line 4
from transformers import Qwen3Config

# Lines 128-129: type annotation
class Qwen3DecoderLayer(nn.Module):
    def __init__(self, config: Qwen3Config) -> None:
        ...

# Lines 170-171: type annotation
class Qwen3Model(nn.Module):
    def __init__(self, config: Qwen3Config) -> None:
        ...

# Lines 200-203: type annotation
class Qwen3ForCausalLM(nn.Module):
    def __init__(self, config: Qwen3Config) -> None:
        ...
```
### 4. Qwen3Config Attributes Used

The code reads the following `Qwen3Config` attributes:

| Attribute | Location | Purpose |
|------|------|------|
| `hidden_size` | Lines 131, 147, 173 | Hidden dimension |
| `num_attention_heads` | Line 132 | Number of attention heads |
| `num_key_value_heads` | Line 133 | Number of KV heads |
| `max_position_embeddings` | Line 134 | Maximum position embeddings |
| `rms_norm_eps` | Lines 135, 147, 148, 175 | RMSNorm epsilon |
| `attention_bias` | Line 136 (getattr) | Whether attention uses a bias |
| `head_dim` | Line 137 (getattr) | Attention head dimension |
| `rope_theta` | Line 138 (getattr) | RoPE base |
| `rope_scaling` | Line 139 (getattr) | RoPE scaling configuration |
| `intermediate_size` | Line 144 | FFN intermediate dimension |
| `hidden_act` | Line 145 | Activation function |
| `vocab_size` | Lines 173, 206 | Vocabulary size |
| `num_hidden_layers` | Line 174 | Number of transformer layers |
| `tie_word_embeddings` | Line 207 | Whether word embeddings are tied |
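Since only these plain attributes are read, any object exposing them will do. The stand-in below is hypothetical (the class name and all default values are illustrative, not Qwen3's real defaults); it could be used to unit-test the model code on an environment without `Qwen3Config`:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class FakeQwen3Config:
    """Illustrative stand-in exposing the attributes the model code reads."""
    hidden_size: int = 1024
    num_attention_heads: int = 16
    num_key_value_heads: int = 8
    max_position_embeddings: int = 32768
    rms_norm_eps: float = 1e-6
    intermediate_size: int = 3072
    hidden_act: str = "silu"
    vocab_size: int = 151936
    num_hidden_layers: int = 28
    tie_word_embeddings: bool = False
    # Attributes the model code reads via getattr(), so defaults matter less.
    attention_bias: bool = False
    head_dim: Optional[int] = None
    rope_theta: float = 1000000.0
    rope_scaling: Optional[dict] = None

cfg = FakeQwen3Config()
print(cfg.hidden_size, getattr(cfg, "rope_theta", 1000000))
```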
## Proposed Solutions

### Option 1: Conditional Import (Recommended)

Modify `nanovllm/models/__init__.py`:

```python
"""Model registry and model implementations."""

from nanovllm.models.registry import register_model, get_model_class, MODEL_REGISTRY

# Import models to trigger registration
# Llama is always available
from nanovllm.models import llama

# Qwen3 requires transformers >= 4.51.0
try:
    from nanovllm.models import qwen3
except ImportError:
    import warnings
    warnings.warn(
        "Qwen3 models require transformers >= 4.51.0. "
        "Install with: pip install 'transformers>=4.51.0'"
    )

__all__ = ["register_model", "get_model_class", "MODEL_REGISTRY"]
```
Modify `nanovllm/models/qwen3.py`:

```python
import torch
from torch import nn
import torch.distributed as dist

# Conditional import: re-raise with a clear version hint so that the
# try/except in models/__init__.py can catch it and skip registration.
try:
    from transformers import Qwen3Config
except ImportError:
    raise ImportError(
        "Qwen3Config requires transformers >= 4.51.0. "
        "The installed version does not support Qwen3 models."
    )

# ... rest of the code
```
### Option 2: Type-Checking-Only Import (Better Compatibility)

Modify `nanovllm/models/qwen3.py` so that `Qwen3Config` is only imported for static type checking, and the runtime code duck-types the config object:

```python
from typing import TYPE_CHECKING, Any

# Only import Qwen3Config for type checking
if TYPE_CHECKING:
    from transformers import Qwen3Config

# Runtime: use duck typing
class Qwen3DecoderLayer(nn.Module):
    def __init__(self, config: Any) -> None:  # accept any config-like object
        super().__init__()
        # Access optional attributes via getattr for safety
        self.self_attn = Qwen3Attention(
            hidden_size=config.hidden_size,
            num_heads=config.num_attention_heads,
            num_kv_heads=config.num_key_value_heads,
            max_position=config.max_position_embeddings,
            rms_norm_eps=config.rms_norm_eps,
            qkv_bias=getattr(config, "attention_bias", True),
            head_dim=getattr(config, "head_dim", None),
            rope_theta=getattr(config, "rope_theta", 1000000),
            rope_scaling=getattr(config, "rope_scaling", None),
        )
        # ...
```
### Option 3: Version Check with Graceful Degradation

Add a version check in `nanovllm/__init__.py` or at startup:

```python
import transformers
from packaging import version

TRANSFORMERS_VERSION = version.parse(transformers.__version__)
QWEN3_MIN_VERSION = version.parse("4.51.0")

QWEN3_AVAILABLE = TRANSFORMERS_VERSION >= QWEN3_MIN_VERSION

if not QWEN3_AVAILABLE:
    import warnings
    warnings.warn(
        f"transformers {transformers.__version__} does not support Qwen3 models. "
        f"Upgrade to >= 4.51.0 for Qwen3 support."
    )
```
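The same gate can be sketched without the `packaging` dependency (the function name `qwen3_available` is an assumption, not an existing API). The naive tuple comparison below is adequate for plain release versions like `4.45.2`, but pre-release strings such as `4.51.0.dev0` need `packaging.version` for correct ordering:

```python
def parse(v: str) -> tuple:
    """Naive version parse: keep the first three numeric components."""
    return tuple(int(p) for p in v.split(".")[:3] if p.isdigit())

QWEN3_MIN_VERSION = "4.51.0"

def qwen3_available(transformers_version: str) -> bool:
    return parse(transformers_version) >= parse(QWEN3_MIN_VERSION)

print(qwen3_available("4.45.2"))  # False
print(qwen3_available("4.51.0"))  # True
```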
## Adaptation Priorities

Suggested order of work:

1. **P0 - models/__init__.py**: add try/except so the Llama models work standalone
2. **P1 - qwen3.py**: add a clear error message stating the version requirement
3. **P2 - type annotations**: optionally switch to `Any` or use `TYPE_CHECKING`
4. **P3 - documentation**: state the version dependency in the README and pyproject.toml
## Test Verification

After adaptation, verify the following scenarios:

### Test 1: Low-Version Environment (transformers 4.45.2)

```bash
# Expected: Llama models usable, Qwen3 reports the version requirement
docker run --rm \
    -v /path/to/nano-vllm:/workspace/nano-vllm \
    -e PYTHONPATH=/workspace/nano-vllm \
    tzj/ruler:v0.3 \
    python -c "
from nanovllm.models import get_model_class, MODEL_REGISTRY
print('Available models:', list(MODEL_REGISTRY.keys()))
# Expected: ['LlamaForCausalLM']
# Warning: Qwen3 models require transformers >= 4.51.0
"
```
### Test 2: High-Version Environment (transformers >= 4.51.0)

```bash
# Expected: both Llama and Qwen3 models available
pip install 'transformers>=4.51.0'
python -c "
from nanovllm.models import get_model_class, MODEL_REGISTRY
print('Available models:', list(MODEL_REGISTRY.keys()))
# Expected: ['LlamaForCausalLM', 'Qwen3ForCausalLM', 'Qwen2ForCausalLM']
"
```
## References

- [Transformers Qwen3 documentation](https://huggingface.co/docs/transformers/en/model_doc/qwen3)
- [Qwen3 GitHub](https://github.com/QwenLM/Qwen3)
- [Transformers release history](https://github.com/huggingface/transformers/releases)

## Revision History

| Date | Version | Change |
|------|------|------|
| 2025-01-11 | 1.0 | Initial document; records the transformers 4.45.2 compatibility issues |