Data framework for building LLM applications with RAG. Specializes in document ingestion (300+ connectors), indexing, and querying. Features vector indices, query engines, agents, and multi-modal support. Use for document Q&A, chatbots, knowledge retrieval, or building RAG pipelines. Best for data-centric LLM applications.
Rating: 8.7
Installs: 0
Category: AI & LLM
Exceptional skill documentation for the LlamaIndex RAG framework. The description is comprehensive and clearly delineates when to use this skill versus alternatives. Task knowledge is outstanding, with complete, runnable code examples covering all major use cases (basic RAG, agents, chat engines, vector stores, multi-modal). The structure is excellent: a logical progression from quick start to advanced patterns, clear section headings, and appropriate use of reference files for deeper topics. The skill also demonstrates strong novelty, since building production RAG pipelines with proper indexing, retrieval, and query optimization would otherwise demand extensive token usage and multiple iterations from a CLI agent. The main SKILL.md could be made slightly more concise by moving some integration examples into reference files, but given the skill's complexity this is a very minor point.