- It comes bundled with the core components you need to get started, already integrated and set up for you in Docker containers.
- It makes it easy to experiment with new models, whether hosted locally on your machine (such as Llama 2) or accessed via APIs (like OpenAI's GPT).
- It is already set up to help you use the Retrieval-Augmented Generation (RAG) architecture for LLM apps, which, in my opinion, is the easiest way to integrate an LLM into an application and give it access to your own data (see the sketch after this list).
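
To make the RAG idea concrete, here is a minimal, self-contained sketch of the pattern, not the stack's actual code: embed your documents, retrieve the one most relevant to a question, and prepend it to the prompt. The `embed` and `call_llm` functions are hypothetical stand-ins for a real embedding model and LLM client.

```python
import math

def embed(text: str) -> list[float]:
    # Hypothetical embedding: a real app would call an embedding model
    # (e.g. a local Llama 2 server or the OpenAI API). Here we use a crude
    # bag-of-characters vector just so the example runs end to end.
    vec = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - ord("a")] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a: list[float], b: list[float]) -> float:
    # Vectors are already normalized, so the dot product is cosine similarity.
    return sum(x * y for x, y in zip(a, b))

# 1. Index your own data: embed each document once and store the vectors.
documents = [
    "Our refund policy allows returns within 30 days of purchase.",
    "Support is available Monday to Friday, 9am to 5pm CET.",
]
index = [(doc, embed(doc)) for doc in documents]

def call_llm(prompt: str) -> str:
    # Stand-in for a real model call (local Llama 2, OpenAI GPT, etc.).
    return f"[LLM response to a prompt of {len(prompt)} chars]"

def answer(question: str) -> str:
    # 2. Retrieve: find the stored document most similar to the question.
    q_vec = embed(question)
    context, _ = max(index, key=lambda pair: cosine(q_vec, pair[1]))
    # 3. Augment and generate: hand the retrieved context to the LLM.
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    return call_llm(prompt)

print(answer("When can I return a product?"))
```

The key design point is that your data never has to be baked into the model: it lives in an index, gets retrieved at question time, and travels to the LLM inside the prompt.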