Serves LLMs with high throughput using vLLM's PagedAttention and continuous batching. Use when deploying production LLM APIs, optimizing inference latency/throughput, or serving models with limited GPU memory. Supports OpenAI-compatible endpoints, quantization (GPTQ/AWQ/FP8), and tensor parallelism.
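For illustration, a minimal sketch of calling such a server through its OpenAI-compatible endpoint; the model name, port, and prompt below are placeholders, not values taken from the skill itself:

```python
# Hedged sketch: assumes a vLLM server is already running locally,
# e.g. `vllm serve meta-llama/Llama-3.1-8B-Instruct` (model is illustrative).
from openai import OpenAI

# vLLM's OpenAI-compatible server listens on port 8000 by default and
# accepts any API key unless one is configured.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

resp = client.chat.completions.create(
    model="meta-llama/Llama-3.1-8B-Instruct",  # must match the served model
    messages=[{"role": "user", "content": "Explain PagedAttention in one sentence."}],
    max_tokens=64,
)
print(resp.choices[0].message.content)
```

Because the endpoint mirrors the OpenAI API, existing client code typically only needs the `base_url` changed to point at the vLLM server.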
Rating: 8.7
Installs: 0
Category: AI & LLM
Excellent vLLM serving skill with comprehensive task knowledge and clear structure. The description accurately captures its capabilities (high-throughput serving, quantization, tensor parallelism, OpenAI compatibility). Three well-structured workflows cover production deployment, batch inference, and quantization, each with actionable checklists and code examples. Task knowledge is outstanding, with specific commands, performance targets (TTFT <500ms, >100 req/sec), troubleshooting guidance, and hardware requirements. The structure is clean, with advanced topics properly delegated to reference files. Novelty is strong: setting up production-grade LLM serving with vLLM's specialized features (PagedAttention, continuous batching, quantization) would require significant research and experimentation for a CLI agent. One minor improvement: the skill could be more explicit about when to choose each quantization method given specific hardware constraints. Overall, this is a production-ready skill that meaningfully reduces deployment complexity and token costs.
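To make the batch-inference and quantization features the review mentions concrete, here is a minimal sketch using vLLM's offline Python API; the model name, parallelism degree, and sampling settings are assumptions for illustration, not values from the skill:

```python
# Hedged sketch of offline batch inference with vLLM's Python API.
from vllm import LLM, SamplingParams

llm = LLM(
    model="TheBloke/Llama-2-7B-AWQ",  # illustrative; must be an AWQ checkpoint
    quantization="awq",               # matches the checkpoint's quantization
    tensor_parallel_size=2,           # shard weights across 2 GPUs
    gpu_memory_utilization=0.90,      # VRAM fraction for weights + KV cache
)

params = SamplingParams(temperature=0.7, max_tokens=128)

# Continuous batching handles scheduling internally; just pass all prompts.
outputs = llm.generate(
    ["Summarize continuous batching.", "What is tensor parallelism?"],
    params,
)
for out in outputs:
    print(out.outputs[0].text)
```

Serving the same model over HTTP instead is a matter of swapping the constructor for `vllm serve` with the analogous flags (`--quantization awq --tensor-parallel-size 2`).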