Inference Latency Profiler - Auto-activating skill for ML Deployment. Triggers on: inference latency profiler. Part of the ML Deployment skill category.
Rating: 3.4
Installs: 0
Category: Machine Learning
This skill is essentially a template with minimal actual content. The description is circular and uninformative ("provides assistance for inference latency profiler tasks"), offering no specifics about what inference latency profiling entails, which metrics are measured (p50/p95/p99 latency, throughput, memory), or which frameworks and models are supported. taskKnowledge scores very low because there are no concrete steps, code examples, or profiling methodologies, only generic placeholder text. Structure is adequate for a simple template, but there is little content to organize. Novelty is low: while inference profiling can be complex, this skill provides no specialized knowledge, tools, or automated profiling scripts that would reduce token costs compared with a CLI agent asking generic questions. To be useful, the skill needs substantial domain-specific content on profiling techniques, benchmark frameworks (e.g., TensorRT, ONNX Runtime), metric collection, bottleneck analysis, and actionable optimization strategies.
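To illustrate the kind of concrete content the review finds missing, here is a minimal, hypothetical sketch of percentile-based latency collection in Python. The profile_latency helper and the dummy_infer workload are invented for this example; in a real skill the workload would wrap an actual model call (for instance an ONNX Runtime session.run or a TensorRT execution context) on a fixed input batch.

```python
import time
import statistics
from typing import Callable, Dict, List


def profile_latency(infer: Callable[[], None], warmup: int = 10, runs: int = 100) -> Dict[str, float]:
    """Time repeated calls to `infer` and report latency percentiles in milliseconds."""
    # Warm-up iterations let caches, lazy initialization, and any JIT compilation settle.
    for _ in range(warmup):
        infer()

    samples: List[float] = []
    for _ in range(runs):
        start = time.perf_counter()
        infer()
        samples.append((time.perf_counter() - start) * 1000.0)  # convert seconds to ms

    # statistics.quantiles with n=100 returns 99 cut points: index 49 = p50, 94 = p95, 98 = p99.
    q = statistics.quantiles(samples, n=100)
    return {
        "p50_ms": q[49],
        "p95_ms": q[94],
        "p99_ms": q[98],
        # Single-stream throughput: requests per second at the mean latency.
        "throughput_rps": 1000.0 / statistics.mean(samples),
    }


if __name__ == "__main__":
    # Placeholder workload standing in for a real inference call.
    dummy_infer = lambda: sum(i * i for i in range(10_000))
    print(profile_latency(dummy_infer))
```

A fuller skill would build on this kind of harness with batch-size sweeps, GPU synchronization before timing, memory tracking, and per-operator bottleneck analysis.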