♻️ refactor: create ops module and move chunked_attention

- Create nanovllm/ops/ module for low-level attention operators
- Move chunked_attention.py from kvcache/ to ops/
- Update imports in full_policy.py (3 locations)
- Fix: remove dead code in OffloadEngine.reset() referencing
  non-existent layer_k/v_buffer_a/b attributes

Verified with needle test (32K offload): PASSED

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Author: Zijie Tian
Date: 2026-01-20 02:50:14 +08:00
Parent: e440c45e73
Commit: 690456dbf9
4 changed files with 22 additions and 10 deletions
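
For context, this is the shape of the import change described above. A minimal before/after sketch, assuming the module exposes a function named chunked_attention; the full_policy.py hunks themselves are not shown on this page:

-from nanovllm.kvcache.chunked_attention import chunked_attention
+from nanovllm.ops.chunked_attention import chunked_attention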


@@ -255,7 +255,6 @@ class OffloadEngine:
         Clears:
         - GPU ring buffer slots (k_cache_gpu, v_cache_gpu)
         - Per-layer decode buffers (decode_k_buffer, decode_v_buffer)
-        - Cross-layer pipeline buffers (layer_k/v_buffer_a/b)
         - Per-layer prefill buffers (prefill_k/v_buffer)
         - All pending async transfer events
         """
@@ -267,12 +266,6 @@ class OffloadEngine:
         self.decode_k_buffer.zero_()
         self.decode_v_buffer.zero_()
-        # Clear cross-layer pipeline buffers
-        self.layer_k_buffer_a.zero_()
-        self.layer_v_buffer_a.zero_()
-        self.layer_k_buffer_b.zero_()
-        self.layer_v_buffer_b.zero_()
         # Clear per-layer prefill buffers
         self.prefill_k_buffer.zero_()
         self.prefill_v_buffer.zero_()
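
For reference, a minimal sketch of what reset() looks like after this cleanup. The buffer names follow the docstring above; the container for pending async transfer events (transfer_events here) is a hypothetical name, since that part of the method falls outside the shown hunks:

def reset(self) -> None:
    """Clear all GPU-side KV buffers and pending transfer state."""
    # GPU ring buffer slots
    self.k_cache_gpu.zero_()
    self.v_cache_gpu.zero_()
    # Per-layer decode buffers
    self.decode_k_buffer.zero_()
    self.decode_v_buffer.zero_()
    # Per-layer prefill buffers
    self.prefill_k_buffer.zero_()
    self.prefill_v_buffer.zero_()
    # Drop all pending async transfer events (hypothetical attribute name,
    # not shown in this diff)
    self.transfer_events.clear()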