Published October 18, 2025
Every engineering leader knows the pattern: a new tool looks promising, but the moment you try to pilot it, the vendor piles on integration demands that intensify InfoSec reviews. Suddenly you’re being asked to give root access, install intrusive agents, or grant broad cloud API permissions — all before the vendor has even proven they can deliver value. The result is predictable: the InfoSec process becomes a burden, and momentum dies.
Observability tools have traditionally required different levels of access. Some need deeper integration to produce and store telemetry, but many can operate with read-only or data-stream access. With the rise of AI SREs, however, some vendors are asking for far more than they need. Sometimes it’s because they can’t technically execute within constraints and take the shortcut of demanding more data. Other times it’s more deliberate — invasive integration creates dependence and makes it harder for you to switch. Either way, vendors are using “quality of context” as an excuse for overly invasive integration, even though an AI SRE’s role is to interpret telemetry, not create it.
The ideal vendor takes a lighter path: working with you to decide what subset of your system is worth accessing. In the age of LLMs, the ideal vendor also gives you flexibility on model choice, allowing you to use one your company has already approved, rather than forcing you into theirs.
We believe these considerations are not just relevant to AI SREs, but to any enterprise integrating vertical agentic SaaS applications — whether in security, product analytics, networking, or BizOps.
Observability vendors approach data access in different ways — each with trade-offs for InfoSec, integration effort, and vendor lock-in.
Querying existing observability platforms (Datadog, New Relic, Splunk, Elastic, Prometheus) through read-only APIs is the fastest, lowest-risk way to get started. It creates little to no operational overhead and is ideal for pilots or proofs of concept.
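To make this concrete, here is a minimal sketch of what read-only access can look like. The endpoint, metric name, and service label are hypothetical; the point is that a single HTTP GET against telemetry you already have is enough to start, with no agents installed and no write permissions granted.

```python
# A minimal sketch of read-only access: an instant query against
# Prometheus's HTTP API. The URL and metric names are hypothetical.
import requests

PROMETHEUS_URL = "https://prometheus.internal.example.com"  # hypothetical endpoint

def query_error_rate(service: str) -> float:
    """Fetch the current 5xx error rate for a service via a read-only query."""
    resp = requests.get(
        f"{PROMETHEUS_URL}/api/v1/query",
        params={
            "query": f'sum(rate(http_requests_total{{service="{service}",status=~"5.."}}[5m]))'
        },
        timeout=10,
    )
    resp.raise_for_status()
    result = resp.json()["data"]["result"]
    # An empty result vector simply means no matching series right now.
    return float(result[0]["value"][1]) if result else 0.0

print(query_error_rate("checkout"))
```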
The trade-off is twofold. First, you’re effectively locked into your observability provider: while it’s easy to swap out your AI SRE vendor, you remain dependent on the API surface, rate limits, and data model of the platform underneath. Second, API access may not give enough depth or scale for larger rollouts — constraints that can limit visibility and cap accuracy once you move beyond an initial pilot.
Tapping directly into telemetry pipelines (Kafka, Cribl, OpenTelemetry) avoids API rate limits and broadens visibility without changing the nature of the data. It introduces some operational complexity, but it's manageable at enterprise scale. For many teams, this is a natural step after proving value in a pilot: it expands the AI SRE's reach across more systems, increases accuracy, and can even reduce dependence on a single observability vendor by letting you bypass them.
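As a sketch of what a pipeline tap can look like, here is a read-only Kafka consumer on a hypothetical JSON-encoded logs topic. Using a separate consumer group means the tap reads the stream your stack already produces without disturbing existing consumers, and nothing is written back.

```python
# A minimal sketch of tapping a telemetry pipeline: a read-only Kafka
# consumer on an existing logs topic. Topic, brokers, and group names
# are hypothetical; the AI SRE interprets the same stream your
# observability stack already produces.
import json
from kafka import KafkaConsumer  # pip install kafka-python

consumer = KafkaConsumer(
    "telemetry.logs",                                   # hypothetical topic
    bootstrap_servers=["kafka.internal.example.com:9092"],
    group_id="ai-sre-reader",    # separate group: no impact on existing consumers
    auto_offset_reset="latest",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)

for record in consumer:
    event = record.value
    # Nothing is installed on the hosts emitting this telemetry, and
    # nothing is produced back to the pipeline; we only read.
    if event.get("level") == "error":
        print(event.get("service"), event.get("message"))
```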
Installing agents, sidecars, or eBPF-based collectors generates new telemetry and alters existing data flows. This can make sense for observability vendors whose job is to produce telemetry, but for AI SREs it introduces heavier trade-offs: more operational overhead, new InfoSec surface area, and vendor dependence.
Invasive integration often forces lock-in, making you dependent without the vendor having earned the right to that broader access. There are scenarios where this approach is necessary, such as when pipelines aren't available, APIs are too limiting, or the vendor can genuinely surface new data or connections that work to your benefit.
For observability vendors like Datadog or Dynatrace, deep integrations are justified — their job is to produce telemetry. AI SREs are different: their role is to interpret telemetry, not create it. When AI SRE vendors push for invasive access, it often signals either technical shortcuts or an intentional bid for lock-in.
Traversal takes a different approach. We meet customers where they are. Some of our largest deployments run entirely on API access, while others scale to pipeline integration to improve accuracy and reduce blind spots. The right answer depends on your environment, but it rarely requires invasive agents to prove value.
Integration isn’t the only place enterprises get stuck in InfoSec reviews. The choice of LLM can be just as sensitive, especially as GenAI review protocols intensify.
In many large organizations, AI security councils maintain a list of pre-approved models that teams are required to use. These may be older or not the vendor’s first choice, but they’re often non-negotiable. In other cases, enterprises have already invested in models tailored to their own environment, where performance can actually be higher. That’s why it’s critical to work with a vendor who can adapt when your enterprise requires a specific model. A vendor may have their own recommendation, but they should also be able to support the one your enterprise has already approved.
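As a sketch of what that flexibility can look like in practice, here the model and endpoint come from the enterprise's configuration rather than the vendor's defaults. We assume a hypothetical OpenAI-compatible gateway, which many self-hosted and cloud stacks expose; none of the names below refer to a real deployment or any specific vendor's API.

```python
# A minimal sketch of model flexibility: the model identifier and
# endpoint are supplied by the enterprise, not hard-coded by the vendor.
# Assumes a hypothetical OpenAI-compatible gateway; all names and URLs
# below are illustrative.
from dataclasses import dataclass

import requests

@dataclass(frozen=True)
class ModelConfig:
    model: str     # the model your AI security council has approved
    endpoint: str  # an enterprise-controlled gateway, not the vendor's

APPROVED = ModelConfig(
    model="llama-3-70b-instruct",  # pre-approved; maybe not the vendor's first pick
    endpoint="https://llm-gateway.internal.example.com/v1",
)

def complete(config: ModelConfig, prompt: str) -> str:
    """Send a prompt to whichever model the approved config names."""
    resp = requests.post(
        f"{config.endpoint}/chat/completions",
        json={
            "model": config.model,
            "messages": [{"role": "user", "content": prompt}],
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]
```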
Handled well, this isn’t just about avoiding months of wasted time on new approvals — it’s also an opportunity to maximize the value of models your enterprise has already committed to and strengthen the case for those investments.
The gold standard isn’t a single integration pattern — it’s flexibility. Start with the lightest integration that makes sense, usually API access, and prove value quickly. When accuracy or scale demands more, move to pipelines to deepen the analysis. Agents and sidecars may be necessary in some edge cases, but they should never be the default starting point for an AI SRE.
At Traversal, we’ve proven value with just API access for numerous customers — but we also know that pipelines can unlock greater scale and accuracy. The key is that the vendor adapts to your environment, not the other way around.
Working with an AI SRE shouldn’t mean months of integration before results. Start with what you already have, expand only as needed, and insist on flexibility in model choice. That’s how you stay out of InfoSec hell — and keep the focus where it belongs: improving reliability.
Want to see how AI SRE can strengthen resilience without creating security headaches? Subscribe to our newsletter for practical insights and product updates from Traversal.