AI, out-of-the-box!
Elastic AI features in Elastic Security, Observability, and Search are now enabled by default in Elastic Cloud.
Getting started with generative AI (GenAI) shouldn’t be a project in itself. Too often teams encounter organizational friction that slows adoption of AI-based features, from third-party contracts and external API keys, to additional terms of service and billing management. With the Elastic Managed LLM, you can sidestep these blockers and get powerful AI features for automatic ingest, threat detection, problem investigation, root cause analysis, and more, ready to go from day one.
Prefer your own model? We’ve got you covered there, too, with the ability to integrate any popular third-party LLM of your choosing.
[Image: attack-discovery.png]

Out-of-the-box AI for SREs: Accelerated problem resolution

All AI features in Elastic Observability are ready to use out of the box, with no setup required. Teams can accelerate root cause analysis, streamline incident response, and start getting value from generative AI on day one. For organizations that need more control, connecting a preferred LLM is still fully supported.
The Elastic Managed LLM powers all generative AI capabilities in Elastic Observability, including:
AI Assistant for Observability: The AI Assistant combines generative AI with RAG to reduce hallucinations and improve accuracy by grounding responses in your organization’s knowledge, including runbooks, past incidents, trouble tickets, documentation, and GitHub issues. It helps SREs troubleshoot faster by generating queries, dashboards, and visualizations to surface relevant data and enables natural language investigation across logs, metrics, and traces. In addition to conversational guidance, the AI Assistant also delivers embedded contextual insights directly in the UI, explaining log messages and APM errors without requiring a chat session.
Automatic Import: By automating the development of bespoke ingest pipelines, the Automatic Import feature extends Elastic’s 400+ out-of-the-box integrations with support for custom use cases. It reduces the time required to onboard data from several days to less than 10 minutes and significantly lowers the learning curve for working with unstructured data. Based on sample data, it builds a custom ingest pipeline that accurately maps raw data into Elastic Common Schema (ECS) and custom fields, populates contextual information, and categorizes events.
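The pipelines Automatic Import produces are ordinary Elasticsearch ingest pipelines. As a rough sketch (the index of fields, grok pattern, and dataset name below are hypothetical illustrations, not actual Automatic Import output), a pipeline that maps a raw log line into ECS fields might look like:

```python
import json

# Hypothetical ingest pipeline: parse a raw log line and map it to
# Elastic Common Schema (ECS) fields. Automatic Import generates
# pipelines of this general shape from sample data.
pipeline = {
    "description": "Map custom app logs to ECS (illustrative)",
    "processors": [
        {
            # Extract a timestamp, severity, and message from the raw line
            "grok": {
                "field": "message",
                "patterns": [
                    "%{TIMESTAMP_ISO8601:event.created} %{LOGLEVEL:log.level} %{GREEDYDATA:event.original}"
                ],
            }
        },
        # Normalize severity to lowercase, per ECS conventions
        {"lowercase": {"field": "log.level"}},
        # Tag the source dataset for filtering (placeholder name)
        {"set": {"field": "event.dataset", "value": "myapp.custom"}},
    ],
}

# Installing it on a cluster would be: PUT _ingest/pipeline/myapp-ecs
print(json.dumps(pipeline, indent=2))
```

Every document routed through such a pipeline arrives in ECS shape, so dashboards, detections, and AI features work on it without per-source glue code.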
With the default Elastic Managed LLM, AI Playground and the Search AI Assistant are ready to use out of the box, without the need for additional setup or API keys for an external model. Playground offers a low-code interface for rapidly prototyping RAG workflows with your own data. Now, you can test the latest GenAI capabilities and start building instantly, with no model configuration needed. If you prefer your own model, you still have the flexibility to use the open inference API to connect any provider or custom endpoint of your choice.
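Connecting your own model goes through the open inference API: you create an inference endpoint that names the service and its settings, then reference that endpoint from AI features. A minimal sketch, assuming an OpenAI-backed completion endpoint (the endpoint ID, model name, and key are placeholders, and exact task types and settings vary by provider and version):

```python
import json

# Request body for: PUT _inference/completion/my-openai-endpoint
# (endpoint id, model, and API key are placeholders)
endpoint_config = {
    "service": "openai",
    "service_settings": {
        "api_key": "<YOUR_API_KEY>",  # stays with your provider account
        "model_id": "gpt-4o",
    },
}

print(json.dumps(endpoint_config, indent=2))
```

Once the endpoint exists, switching between the managed LLM and your own model is a configuration choice rather than a code change.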
[Image: Ai-playground.png]

Elastic’s unique approach to AI

Elastic delivers AI where it matters most, natively integrated with your data, workflows, and use cases. With a default managed LLM enabled out of the box, teams can start using AI immediately, without setup or third-party contracts. For more flexibility, developers can also connect to public LLMs using Elastic’s open inference API.
What truly sets Elastic apart is how it brings Search AI capabilities to security and observability:
Retrieval augmented generation (RAG) is built in using Elastic’s native vector database with embeddings sourcing relevant context from your environment. AI features can reference your internal knowledge bases (runbooks, incidents, documentation, GitHub issues, etc.) to enable relevant and grounded responses.
Unified access to all your data means AI isn’t limited to predefined datasets. With 400+ integrations combined with other organizational knowledge sources, Elastic can enrich AI insights with logs, metrics, traces, runbooks, and more, all indexed and searchable in one place.
Search and analytics leverage Elastic’s platform strengths: fast query execution, aggregations, and built-in functions — ensuring AI-driven insights are grounded in real-time data and provide accurate and actionable results.
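The retrieval half of that built-in RAG loop is a vector query against the same cluster that holds your operational data. As a minimal sketch (the index layout and field names here are invented for illustration), a kNN request that grounds an assistant answer in runbook embeddings might look like:

```python
import json

# Hypothetical kNN search body against an index of runbook chunks.
# "content_vector" would hold embeddings generated at ingest time.
search_body = {
    "knn": {
        "field": "content_vector",
        # Embedding of the user's question (truncated for brevity;
        # real vectors have hundreds of dimensions)
        "query_vector": [0.12, -0.07, 0.33],
        "k": 5,                # return the 5 closest chunks
        "num_candidates": 50,  # candidates per shard before final ranking
    },
    # The retrieved text handed to the LLM as grounding context
    "_source": ["title", "content"],
}

print(json.dumps(search_body, indent=2))
```

Because retrieval and analytics run on the same indexed data, the context the LLM sees is as current as the last document ingested.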
With the default LLM, you get:
A model tested and evaluated by Elastic.
Integrated billing and platform governance — linked to your Elastic subscription with no separate accounts, terms of service, or compliance gaps. Data is handled securely and adheres to the privacy and security controls you’ve already put in place.
Single-vendor support, so your team isn’t stuck chasing third parties.
Zero config in most cases — AI is simply ready when you are.
Whether you need speed, control, or customization, Elastic gives you a flexible, production-ready AI stack designed for how modern teams work.