🐛 fix: support multiple EOS tokens for GLM-4

GLM-4 uses multiple EOS tokens [151329, 151336, 151338], where 151336
(<|user|>) should also stop generation. Previously only the tokenizer's
single eos_token_id was used, so generation always ran to max_tokens.

Changes:
- config.py: Change eos type to int | list[int]
- llm_engine.py: Read eos_token_id from hf_config (contains full list)
- scheduler.py: Use set for efficient multi-EOS lookup
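The scheduler-side lookup described above can be sketched as follows; the helper names (`normalize_eos`, `should_stop`) are hypothetical, not the actual identifiers in scheduler.py:

```python
def normalize_eos(eos):
    """Accept a single token id or a list of ids; return a set for O(1) membership checks."""
    if isinstance(eos, int):
        return {eos}
    return set(eos)

def should_stop(token_id, eos_set, ignore_eos=False):
    """A sampled token terminates the sequence if it is any of the EOS ids."""
    return not ignore_eos and token_id in eos_set

# GLM-4's EOS ids from the commit message.
glm4_eos = normalize_eos([151329, 151336, 151338])
```

With a plain `int` EOS the set is a singleton, so the old single-token behavior is preserved.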

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Author: Zijie Tian
Date: 2026-01-28 13:23:53 +08:00
Parent: 726e4b58cf
Commit: 29e102720b
3 changed files with 12 additions and 4 deletions


@@ -31,7 +31,13 @@ class LLMEngine:
         self.events.append(event)
         self.model_runner = ModelRunner(config, 0, self.events)
         self.tokenizer = AutoTokenizer.from_pretrained(config.model, use_fast=True, trust_remote_code=True)
-        config.eos = self.tokenizer.eos_token_id
+        # Get EOS token(s) from config (may be int or list, e.g., GLM-4 uses a list)
+        # Prefer hf_config.eos_token_id, which contains the full list; fall back to the tokenizer
+        eos_from_config = getattr(config.hf_config, 'eos_token_id', None)
+        if eos_from_config is not None:
+            config.eos = eos_from_config
+        else:
+            config.eos = self.tokenizer.eos_token_id
         # Set Sequence.block_size to match the KV cache block size
         Sequence.block_size = config.kvcache_block_size
         self.scheduler = Scheduler(config, self.model_runner.kvcache_manager)
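The fallback logic in the hunk above can be exercised in isolation; `resolve_eos` is a stand-in name and `SimpleNamespace` stands in for a real HF config object:

```python
from types import SimpleNamespace

def resolve_eos(hf_config, tokenizer_eos):
    """Prefer hf_config.eos_token_id (int or list); fall back to the tokenizer's single id."""
    eos = getattr(hf_config, 'eos_token_id', None)
    return eos if eos is not None else tokenizer_eos

# GLM-4-style config: eos_token_id carries the full list.
glm4_cfg = SimpleNamespace(eos_token_id=[151329, 151336, 151338])
# Config with no eos_token_id attribute: the tokenizer's id is used.
plain_cfg = SimpleNamespace()
```

Models whose config exposes a single `int` eos_token_id pass through unchanged, which is why `config.eos` must accept `int | list[int]`.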