Use FlashInfer's optimized merge_state kernel for attention output merging
in chunked prefill. End-to-end improvement: +0.8% (32K) to +2.4% (64K).
Key changes:
- Add merge_attention_outputs_flashinfer() with LSE format conversion (see the sketch below)
- FlashInfer's merge_state expects base-2 LSE while flash_attn returns natural-log LSE; convert via LOG2_E / LN_2
- Keep the original Triton kernel as a fallback
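A minimal sketch of the merge path, assuming flash_attn-style natural-log LSE and flashinfer.merge_state's base-2 LSE as described above; the function signature, tensor shapes, and constant names are illustrative, not the repo's exact API.

```python
import math

import torch
import flashinfer

LOG2_E = math.log2(math.e)  # multiply to convert ln-based LSE to log2-based
LN_2 = math.log(2.0)        # multiply to convert log2-based LSE back to ln-based


def merge_attention_outputs_flashinfer(
    o_a: torch.Tensor,    # [tokens, heads, head_dim] partial attention output A
    lse_a: torch.Tensor,  # [tokens, heads] natural-log LSE from flash_attn
    o_b: torch.Tensor,    # [tokens, heads, head_dim] partial attention output B
    lse_b: torch.Tensor,  # [tokens, heads] natural-log LSE from flash_attn
) -> tuple[torch.Tensor, torch.Tensor]:
    # FlashInfer's merge_state operates on base-2 LSE, so convert before merging.
    s_a = lse_a * LOG2_E
    s_b = lse_b * LOG2_E
    # merge_state numerically combines two partial softmax states into one.
    o, s = flashinfer.merge_state(o_a, s_a, o_b, s_b)
    # Convert the merged LSE back to natural log for the next chunk's merge.
    return o, s * LN_2
```

When FlashInfer is unavailable, the original Triton merge kernel remains the fallback, as noted above.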
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

Refactor: move chunked attention into nanovllm/ops/ and clean up OffloadEngine.reset()

- Create nanovllm/ops/ module for low-level attention operators
- Move chunked_attention.py from kvcache/ to ops/
- Update imports in full_policy.py (3 locations); see the import sketch below
- Fix: remove dead code in OffloadEngine.reset() that referenced the
  non-existent layer_k/v_buffer_a/b attributes
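A hedged example of the updated import in full_policy.py; the imported symbol name is an assumption, since the commit only names the module path.

```python
# Before the move the kernel lived under kvcache/:
#   from nanovllm.kvcache.chunked_attention import chunked_prefill
# After moving it into the new ops/ module:
from nanovllm.ops.chunked_attention import chunked_prefill  # hypothetical symbol name
```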
Verified with needle test (32K offload): PASSED
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>