Dynamically scale model token budgets using resource telemetry, prompt size, and profile presets. Use when token limits must adapt to hardware constraints, per-request size, or safe/fast/quality modes.
Rating: 4.3
Installs: 0
Category: AI & LLM
The skill addresses dynamic token budget allocation based on resource telemetry and profiles—a useful LLM operations capability. The description clearly states when to use it (hardware constraints, per-request sizing, profile modes). However, descriptionCoverage is moderate because a CLI agent would need more detail on input parameters and output format. taskKnowledge scores mid-range as the workflow outlines four clear steps and references a Python script and JSON presets (assumed present), but lacks specifics on how to pass parameters or interpret results. structure is reasonable with a concise overview and references to external files. novelty is modest; while helpful for orchestration, resource-aware token budgeting is conceptually straightforward and could be replicated with moderate CLI scripting, reducing the unique value proposition.
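As the review notes, resource-aware token budgeting is conceptually straightforward. A minimal sketch of what such a budget calculation might look like is below; the preset names (`safe`/`fast`/`quality` match the modes in the description), field names, thresholds, and scaling rule are all illustrative assumptions, not the skill's actual implementation:

```python
# Hypothetical profile presets: mode -> base token budget and the
# free-memory fraction at which the full budget is granted.
PRESETS = {
    "safe":    {"max_tokens": 1024, "headroom": 0.5},
    "fast":    {"max_tokens": 2048, "headroom": 0.75},
    "quality": {"max_tokens": 4096, "headroom": 1.0},
}

def token_budget(profile: str, prompt_tokens: int,
                 free_mem_fraction: float) -> int:
    """Scale the preset budget by available memory telemetry and
    discount it for the size of the incoming prompt."""
    preset = PRESETS[profile]
    # Shrink the budget when free memory is low; never below 25% of base.
    scale = max(0.25, min(1.0, free_mem_fraction / preset["headroom"]))
    budget = int(preset["max_tokens"] * scale)
    # Reserve room proportional to the prompt, keeping a minimum floor.
    return max(128, budget - prompt_tokens // 4)
```

For example, `token_budget("safe", 0, 1.0)` grants the full 1024-token base budget, while a large prompt or low free memory pushes the result toward the 128-token floor.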