Open-source AI observability platform for LLM tracing, evaluation, and monitoring. Use when debugging LLM applications with detailed traces, running evaluations on datasets, or monitoring production AI systems with real-time insights.
Rating: 8.7 · Installs: 0 · Category: AI & LLM
Excellent skill for Phoenix AI observability. The description clearly covers when to use Phoenix (debugging, evaluation, monitoring) and its key capabilities. Task knowledge is comprehensive, with complete code examples for installation, tracing, evaluation, datasets, experiments, and production deployment across multiple frameworks (OpenAI, LangChain, LlamaIndex, Anthropic). The structure is clean with logical sections, though the main file is slightly dense; minor details could be delegated to reference files. Novelty is strong: setting up OpenTelemetry tracing, custom evaluators, and production-grade observability infrastructure would otherwise cost a CLI agent significant tokens and expertise. The skill effectively packages complex LLM-ops workflows into reusable patterns. One minor improvement: some advanced sections (custom evaluators, complex experiments) could move to an advanced-usage.md for better separation of concerns.
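To give a sense of the OpenTelemetry tracing setup the review refers to, here is a minimal sketch of instrumenting an OpenAI-based app for Phoenix. The module paths (`phoenix.otel.register`, `openinference.instrumentation.openai.OpenAIInstrumentor`) follow the Phoenix documentation at the time of writing and may differ across versions; the project name is illustrative.

```python
# Minimal sketch: enable Phoenix (OpenTelemetry) tracing for an OpenAI app.
# Assumes `pip install arize-phoenix openinference-instrumentation-openai`;
# the import guard lets the script degrade gracefully when they are absent.
try:
    from phoenix.otel import register
    from openinference.instrumentation.openai import OpenAIInstrumentor

    # Register a tracer provider pointed at a local Phoenix instance
    # (by default, the collector endpoint started by `phoenix serve`).
    tracer_provider = register(project_name="my-llm-app")  # name is illustrative

    # Auto-instrument all OpenAI client calls so each request becomes a span.
    OpenAIInstrumentor().instrument(tracer_provider=tracer_provider)
    tracing_status = "enabled"
except ImportError:
    tracing_status = "unavailable"  # Phoenix not installed in this environment

print(f"Phoenix tracing: {tracing_status}")
```

Once instrumented, subsequent OpenAI calls appear as traces in the Phoenix UI without further code changes, which is what makes packaging this setup into a reusable skill worthwhile.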