AI-powered breaches: AI is turning telemetry into an attack surface

The recent Salesforce breach and Anthropic’s research show that AI-powered attacks on telemetry have started. See how AI-powered pipelines can secure your data in motion.

September 19, 2025

A wake-up call from Salesforce

The recent Salesforce breach should serve as a wake-up call for every CISO and CTO. In this incident, attackers armed with AI bots and stolen credentials exfiltrated massive amounts of data, moving laterally in ways legacy defenses weren’t prepared to stop. The lesson is clear: attackers are no longer just human adversaries – they’re deploying agentic AI to move with scale, speed, and persistence.

This isn’t an isolated case. Threat actors are now leveraging AI to weaponize the weakest links in enterprise infrastructure, and one of the most vulnerable surfaces is telemetry data in motion. Unlike hardened data lakes and encrypted storage, telemetry pipelines often carry credentials, tokens, PII, and system context in plaintext or poorly secured formats. These streams, replicated across brokers, collectors, and SIEMs, are ripe for AI-powered exploitation.

The stakes are simple: if telemetry remains unguarded, AI will find and weaponize what you missed.

Telemetry in the age of AI: What it is and what it hides

Telemetry – logs, traces, metrics, and events – has been treated as operational “exhaust” in digital infrastructure for the last two to three decades. It flows continuously from SaaS apps, cloud services, microservices, IoT/OT devices, and security tools into SIEMs, observability platforms, and data lakes. But in practice, telemetry is:

  • High volume and heterogeneous: pulled from thousands of sources across different ecosystems, raw telemetry arrives in a wide variety of formats that are highly contextual and difficult to parse and normalize
  • Loosely governed: less rigorously controlled than data at rest; often duplicated, moved unprocessed, and destined for a variety of tools and destinations
  • Widely replicated: stored in caches, queues, and temporary buffers multiple times en route

Critically, telemetry often contains secrets. API keys, OAuth tokens, session IDs, email addresses, and even plaintext passwords leak into logs and traces. Despite OWASP (Open Worldwide Application Security Project) and OTel (OpenTelemetry) guidance to sanitize at the source, most organizations still rely on downstream scrubbing. By then, the sensitive data has already transited multiple hops. This happens because security teams view telemetry as “ops noise” rather than an active attack surface. If a bot scraped your telemetry flow for an hour, what credentials or secrets would it find?
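To make that question concrete, here is a minimal sketch of the kind of scan an attacker’s bot – or your own defensive tooling – could run over a telemetry stream. The patterns and field names are illustrative assumptions, not a production ruleset:

```python
import hashlib
import re

# Hypothetical patterns for illustration; real deployments would use a
# vetted ruleset such as those shipped with open-source secret scanners.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "bearer_token": re.compile(r"\bBearer\s+[A-Za-z0-9\-._~+/]{20,}"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def scan_line(line: str) -> list[str]:
    """Return the names of secret patterns found in one log line."""
    return [name for name, pat in SECRET_PATTERNS.items() if pat.search(line)]

def redact_line(line: str) -> str:
    """Replace each match with a short hash so events stay correlatable."""
    for name, pat in SECRET_PATTERNS.items():
        line = pat.sub(
            lambda m: f"<{name}:{hashlib.sha256(m.group().encode()).hexdigest()[:8]}>",
            line,
        )
    return line

log = "user=alice@example.com auth=Bearer abcdefghij1234567890XYZ key=AKIAABCDEFGHIJKLMNOP"
print(scan_line(log))   # ['aws_access_key', 'bearer_token', 'email']
print(redact_line(log))
```

Hashing rather than deleting keeps events correlatable for investigations while removing the usable secret.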

Why this matters now: AI has changed the cost curve

Three developments make telemetry a prime target today:  

AI-assisted breaches are real

The recent Salesforce breach showed that attackers no longer rely on manual recon or brute force. With AI bots, adversaries chain stolen credentials with automated discovery to expand their foothold. What once took weeks of trial-and-error can now be scripted and executed in minutes.

AI misuse is scaling faster than expected

“Vibe hacking” would be laughable if it weren’t a serious threat. Anthropic recently disclosed that they had detected and investigated a malicious actor who had used Claude to generate exploit code, reverse engineer vulnerabilities, and accelerate intrusion workflows. What’s chilling is not just the capability – but the automation of persistence. AI agents don’t get tired, don’t miss details, and can operate continuously across targets.

Secrets in telemetry are the low-hanging fruit

Credential theft remains the #1 initial action in breaches. Now, AI makes it trivial to scrape secrets from sprawling logs, correlate them across systems, and weaponize them against SaaS, cloud, and OT infrastructure. Unlike data at rest, data in motion is transient, poorly governed, and often invisible to – or upstream of – traditional SIEM rules.

The takeaway? Attackers are combining stolen credentials from telemetry with AI automation to multiply their effectiveness.

Where enterprises get burned – common challenges

Most enterprises secure data at rest but leave data in motion exposed. The Salesforce incident highlights this blind spot: the weak link wasn’t encrypted storage but credentials exposed in telemetry pipelines. Common failure patterns include:

  1. Over-collection mindset:
    Shipping everything “just in case”, including sensitive fields like auth headers or query payloads.
  2. Downstream-only reaction:
    Scrubbing secrets inside SIEMs – after they’ve crossed multiple hops and have left duplicates in various caches.
  3. Schema drift:
    New field names can bypass static masking rules, silently re-exposing secrets.
  4. Broad permissions:
    Message brokers and collectors – and AI bots and agents – often run with wide service accounts, becoming perfect targets.
  5. Observability != security:  
    Telemetry platforms optimize for visibility, not policy enforcement.
  6. No pipeline observability:
    Teams monitor telemetry pipelines like plumbing, focusing on throughput but ignoring sensitive-field policy violations or policy gaps.
  7. Incident blind spots:
    When breaches occur, teams can’t trace which sensitive data moved where – delaying containment and raising compliance risk.
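Failure pattern 3 is worth making concrete. In this hypothetical sketch, a masking rule keyed to field names misses a renamed field after schema drift, while a value-based fallback still catches the secret:

```python
import re

STATIC_MASK_FIELDS = {"password", "api_key"}          # rule written for known names
VALUE_PATTERN = re.compile(r"\bAKIA[0-9A-Z]{16}\b")   # value-based fallback (illustrative)

def mask(event: dict) -> dict:
    out = {}
    for field, value in event.items():
        if field in STATIC_MASK_FIELDS:
            out[field] = "***"                         # name-based rule fires
        elif isinstance(value, str) and VALUE_PATTERN.search(value):
            out[field] = "***"                         # catches drifted field names
        else:
            out[field] = value
    return out

# After a schema change renames "api_key" to "apiKeyV2", the static rule
# alone would re-expose the secret; value-based matching still catches it.
event = {"user": "alice", "apiKeyV2": "AKIAABCDEFGHIJKLMNOP"}
print(mask(event))  # {'user': 'alice', 'apiKeyV2': '***'}
```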

Securing data in motion: Principles & Best Practices

If data in motion is now the crown jewel target, the defense must match. A modern telemetry security strategy requires:

  1. Minimize at the edge:
  • Default-deny sensitive collection. Drop or hash secrets at the source before the first hop.
  • Apply OWASP and OpenTelemetry guidance for logging hygiene.
  2. Policy as code:
  • Codify collection, redaction, routing, and retention rules as version-controlled policy.
  • Enforce peer review for changes that affect sensitive fields.
  3. Drift-aware redaction:
  • Use AI-driven schema detection to catch new fields and apply auto-masking.
  4. Encrypt every hop:
  • mTLS (Mutual Transport Layer Security) between collectors, queues, and processors.
  • Short-lived credentials and isolated broker permissions.
  5. Sensitivity-aware routing:
  • Segment flows: send only detection-relevant logs to the SIEM, archive the rest in low-cost storage.
  6. ATT&CK-aligned visibility:
  • Map log sources to MITRE ATT&CK techniques; onboard what improves coverage, not just volume.
  7. Pipeline observability:
  • Monitor for unmasked fields, anomalous routing, or unexpected destinations.
  8. Secret hygiene:
  • Combine CI/CD secret scanning with real-time telemetry scanning.
  • Automate token revocation and rotation when leaks occur.
  9. Simulate the AI adversary:
  • Run tabletop exercises assuming an AI bot is scraping your pipelines.
  • Identify what secrets it would find, and see how fast you can revoke them.
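The first two practices – minimizing at the edge and policy as code – can be sketched together. This is a hypothetical illustration, with made-up field names and policy actions, of dropping or hashing sensitive fields before the first hop:

```python
import hashlib

# Hypothetical policy, the kind that would live in version control
# ("policy as code") and be peer-reviewed before changes ship.
POLICY = {
    "drop": ["authorization", "cookie"],          # never leaves the host
    "hash": ["session_id", "user_email"],         # correlatable, not readable
}

def apply_edge_policy(event: dict) -> dict:
    """Enforce drop/hash rules at the source, before the first hop."""
    out = {}
    for field, value in event.items():
        if field in POLICY["drop"]:
            continue
        if field in POLICY["hash"]:
            out[field] = hashlib.sha256(str(value).encode()).hexdigest()[:12]
        else:
            out[field] = value
    return out

event = {
    "ts": "2025-09-19T12:00:00Z",
    "authorization": "Bearer abc123",
    "session_id": "sess-42",
    "msg": "login ok",
}
print(apply_edge_policy(event))
# authorization is gone; session_id is a stable hash; ts and msg pass through
```

Because the policy is plain data, a change to the `drop` or `hash` lists is a reviewable diff rather than a hidden configuration tweak.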

DataBahn: Purpose-built for Data-in-motion Security

DataBahn was designed for exactly this use case: building secure, reliable, resilient, and intelligent telemetry pipelines. Identifying, isolating, and quarantining PII is a feature the platform was built around.

  1. At the source: Smart Edge and its lightweight agents or phantom collectors allow sensitive fields to be dropped or masked at the source. It also provides local encryption, anomaly detection, and silent-device monitoring.
  2. In transit: Cruz learns schemas to detect and prevent drift, automates PII masking, and proactively learns which data is sensitive so it can catch it.

This reduces the likelihood of breach, makes it harder for bad actors to access credentials and move laterally, and elevates telemetry from a low-hanging fruit to a secure data exchange.

Conclusion: Telemetry is the new point to defend

The Salesforce breach demonstrated that attackers don’t need to brute-force their way into your systems – they just have to extract what you’ve already leaked within your data networks. Anthropic’s disclosure of Claude misuse highlights that this problem will grow faster than defenders are prepared to handle.

The message is clear: AI has collapsed the time between leak and loss. Enterprises must treat telemetry as sensitive, secure it in motion, and monitor pipelines as rigorously as they monitor applications.

DataBahn offers a 30-minute Data-in-Motion Risk Review. In that session, we’ll map your top telemetry sources to ATT&CK, highlight redaction gaps, and propose a 60-day hardening plan tailored to your SIEM and AI roadmap.


What is a SIEM?

A Security Information and Event Management (SIEM) system aggregates logs and security events from across an organization’s IT infrastructure. It correlates and analyzes data in real time, using built-in rules, analytics, and threat intelligence to identify anomalies and attacks as they happen. SIEMs provide dashboards, alerts, and reports that help security teams respond quickly to incidents and satisfy compliance requirements. In essence, a SIEM acts as a central security dashboard, giving analysts a unified view of events and threats across their environment.

Pros and Cons of SIEM

Pros of SIEM:

  • Real-time monitoring and alerting for known threats via continuous data collection
  • Centralized log management provides a unified view of security events
  • Built-in compliance reporting and audit trails simplify regulatory obligations
  • Extensive integration ecosystem with standard enterprise tools
  • Automated playbooks and correlation rules accelerate incident triage and response

Cons of SIEM:

  • High costs for licensing, storage, and processing at large data volumes
  • Scalability issues often require filtering or short retention windows
  • May struggle with cloud-native environments or unstructured data without heavy customization
  • Requires ongoing tuning and maintenance to reduce false positives
  • Vendor lock-in due to proprietary data formats and closed architectures

What is a Security Data Lake?

A security data lake is a centralized big-data repository (often cloud-based) designed to store and analyze vast amounts of security-related data in its raw form. It collects logs, network traffic captures, alerts, endpoint telemetry, threat intelligence feeds, and more, without enforcing a strict schema on ingestion. Using schema-on-read, analysts can run SQL queries, full-text searches, machine learning, and AI algorithms on this raw data. Data lakes can scale to petabytes, making it possible to retain years of data for forensic analysis.

Pros and Cons of Security Data Lakes

Pros of Data Lakes:

  • Massive scalability and lower storage costs, especially with cloud-based storage
  • Flexible ingestion: accepts any data type without predefined schema
  • Enables advanced analytics and threat hunting via machine learning and historical querying
  • Breaks down data silos and supports collaboration across security, IT, and compliance
  • Long-term data retention supports regulatory and forensic needs

Cons of Data Lakes:

  • Requires significant data engineering effort and strong data governance
  • Lacks native real-time detection – requires custom detections and tooling
  • Centralized sensitive data increases security and compliance challenges
  • Integration with legacy workflows and analytics tools can be complex
  • Without proper structure and tooling, can become an unmanageable “data swamp”  

A Hybrid Approach: Security Data Fabric

Rather than choosing one side, many security teams adopt a hybrid architecture that uses both SIEM and data lake capabilities. Often called a “security data fabric,” this strategy decouples data collection, storage, and analysis into flexible layers. For example:

  • Data Filtering and Routing: Ingest all security logs through a centralized pipeline that tags and routes data. Send only relevant events and alerts to the SIEM (to reduce noise and license costs), while streaming raw logs and enriched telemetry to the data lake for deep analysis.
  • Normalized Data Model: Preprocess and normalize data on the way into the lake so that fields (timestamps, IP addresses, user IDs, etc.) are consistent. This makes it easier for analysts to query and correlate data across sources.
  • Tiered Storage Strategy: Keep recent or critical logs indexed in the SIEM for fast, interactive queries. Offload bulk data to the data lake’s cheaper storage tiers (including cold storage) for long-term retention. Compliance logs can be archived in the lake where they can be replayed if needed.
  • Unified Analytics: Let the SIEM focus on real-time monitoring and alerting. Use the data lake for ad-hoc investigations and machine-learning-driven threat hunting. Security analysts can run complex queries on the full dataset in the lake, while SIEM alerts feed into a coordinated response plan.
  • Integration with Automation: Connect the SIEM and data lake to orchestration/SOAR platforms. This ensures that alerts or insights from either system trigger a unified incident response workflow.
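The filtering-and-routing layer of such a fabric can be sketched in a few lines. The event types and destination names here are illustrative assumptions:

```python
# Hypothetical relevance set; real pipelines would key off detection
# relevance (e.g. ATT&CK technique mapping) rather than type alone.
SIEM_RELEVANT = {"auth_failure", "privilege_change", "malware_alert"}

def route(event: dict) -> list[str]:
    """Decide destinations: the SIEM gets detection-relevant events;
    the data lake keeps a full-fidelity copy of everything."""
    destinations = ["data_lake"]                  # raw copy always archived
    if event.get("type") in SIEM_RELEVANT:
        destinations.append("siem")
    return destinations

print(route({"type": "auth_failure"}))   # ['data_lake', 'siem']
print(route({"type": "heartbeat"}))      # ['data_lake']
```

The design choice is that the lake is the default, not the exception: nothing is lost by filtering, because filtering only gates the expensive SIEM path.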

This modular security data fabric is an emerging industry best practice. It helps organizations avoid vendor lock-in and balance cost with capability. For instance, by filtering out irrelevant data, the SIEM can operate leaner and more accurately. Meanwhile, threat hunters gain access to the complete historical dataset in the lake.

Choosing the Right Strategy

Every organization’s needs differ. A full-featured SIEM might be sufficient for smaller environments or for teams that prioritize quick alerting and compliance out-of-the-box. Large enterprises or those with very high data volumes often need data lake capabilities to scale analytics and run advanced machine learning. In practice, many CISOs opt for a combined approach: maintain a core SIEM for active monitoring and use a security data lake for additional storage and insights.

Key factors include data volume, regulatory requirements, budget, and team expertise. Data lakes can dramatically reduce storage costs and enable new analytics, but they require dedicated data engineering and governance. SIEMs provide mature detection features and reporting, but can become costly and complex at scale. A hybrid “data fabric” lets you balance these trade-offs and future-proof the security stack.

At the end of the day, rethinking SIEM doesn’t necessarily mean replacing it. It means integrating SIEM tools with big-data analytics in a unified way. By leveraging both technologies — the immediate threat detection of SIEM and the scalable depth of data lakes — security teams can build a more flexible, robust analytics platform.

Ready to modernize your security analytics? Book a demo with DataBahn to see how a unified security data fabric can streamline threat detection and response across your organization.

The Old Guard of Data Governance: Access and Static Rules

For years, data governance has been synonymous with gatekeeping. Enterprises set up permissions, role-based access controls, and policy checklists to ensure the right people had the right access to the right data. Compliance meant defining who could see customer records, how long logs were retained, and what data could leave the premises. This access-centric model worked in a simpler era – it put up fences and locks around data. But it did little to improve the quality, context, or agility of data itself. Governance in this traditional sense was about restriction more than optimization. As long as data was stored and accessed properly, the governance box was checked.

However, simply controlling access doesn’t guarantee that data is usable, accurate, or safe in practice. Issues like data quality, schema changes, or hidden sensitive information often went undetected until after the fact. A user might have permission to access a dataset, but if that dataset is full of errors or policy violations (e.g. unmasked personal data), traditional governance frameworks offer no immediate remedy. The cracks in the old model are growing more visible as organizations deal with modern data challenges.

Why Traditional Data Governance Is Buckling  

Today’s data environment is defined by velocity, variety, and volume. Rigid governance frameworks are struggling to keep up. Several pain points illustrate why the old access-based model is reaching a breaking point:

Unmanageable Scale: Data growth has outpaced human capacity. Firehoses of telemetry, transactions, and events are pouring in from cloud apps, IoT devices, and more. Manually reviewing and updating rules for every new source or change is untenable. In fact, every new log source or data format adds more drag to the system – analysts end up chasing false positives from mis-parsed fields, compliance teams wrestle with unmasked sensitive data, and engineers spend hours firefighting schema drift. Scaling governance by simply throwing more people at the problem no longer works.

Constant Change (Schema Drift): Data is not static. Formats evolve, new fields appear, APIs change, and schemas drift over time. Traditional pipelines operating on “do exactly what you’re told” logic will quietly fail when an expected field is missing or a new log format arrives. By the time humans notice the broken schema, hours or days of bad data may have accumulated. Governance based on static rules can’t react to these fast-moving changes.

Reactive Compliance: In many organizations, compliance checks happen after data is already collected and stored. Without enforcement woven into the pipeline, sensitive data can slip into the wrong systems or go unmasked in transit. Teams are then stuck auditing and cleaning up after the fact instead of controlling exposure at the source. This reactive posture not only increases legal risk but also means governance is always a step behind the data. As one industry leader put it, “moving too fast without solid data governance is exactly why many AI and analytics initiatives ultimately fail”.

Operational Overhead: Legacy governance often relies on manual effort and constant oversight. Someone has to update access lists, write new parser scripts, patch broken ETL jobs, and double-check compliance on each dataset. These manual processes introduce latency at every step. Each time a format changes or a quality issue arises, downstream analytics suffer delays as humans scramble to patch pipelines. It’s no surprise that analysts and engineers end up spending over 50% of their time fighting data issues instead of delivering insights. This drag on productivity is unsustainable.

Rising Costs & Noise: When governance doesn’t intelligently filter or prioritize data, everything gets collected “just in case.” The result is mountains of low-value logs stored in expensive platforms, driving up SIEM licensing and cloud storage costs. Security teams drown in noisy alerts because the pipeline isn’t smart enough to distinguish signal from noise. For example, trivial heartbeat logs or duplicates continue flowing into analytics tools, adding cost without adding value. Traditional governance has no mechanism to optimize data volumes – it was never designed for cost-efficiency, only control.
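A minimal sketch of the pipeline-side noise reduction this describes – dropping heartbeats and suppressing exact duplicates before they reach analytics tools. The event shape and the "heartbeat" type are assumptions for illustration:

```python
def make_filter():
    """Build a stateful keep/drop predicate for a telemetry stream."""
    seen: set[tuple] = set()

    def keep(event: dict) -> bool:
        if event.get("type") == "heartbeat":
            return False                      # low-value, drop at the pipeline
        key = tuple(sorted(event.items()))
        if key in seen:                       # exact duplicate already shipped
            return False
        seen.add(key)
        return True

    return keep

keep = make_filter()
events = [
    {"type": "heartbeat", "host": "a"},
    {"type": "auth_failure", "host": "a"},
    {"type": "auth_failure", "host": "a"},   # duplicate
]
print([e for e in events if keep(e)])  # [{'type': 'auth_failure', 'host': 'a'}]
```

A production deduplicator would bound the `seen` set with a time window or an LRU cache; the sketch keeps it unbounded for clarity.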

The old model of governance is cracking under the pressure. Access controls and check-the-box policies can’t cope with dynamic, high-volume data. The status quo leaves organizations with blind spots and reactive fixes: false alerts from bad data, sensitive fields slipping through unmasked, and engineers in a constant firefight to patch leaks. These issues demand excessive manual effort and leave little time for innovation. Clearly, a new approach is needed – one that doesn’t just control data access, but actively manages data quality, compliance, and context at scale.

From Access Control to Autonomous Agents: A New Paradigm

What would it look like if data governance were proactive and intelligent instead of reactive and manual? Enter the world of agentic data governance – where intelligent agents embedded in the data pipeline itself take on the tasks of enforcing policies, correcting errors, and optimizing data flow autonomously. This shift is as radical as it sounds: moving from static rules to living, learning systems that govern data in real time.

Instead of simple access management, the focus shifts to agency – giving the data pipeline itself the ability to act. Traditional automation can execute predefined steps, but it “waits” for something to break or for a human to trigger a script. In contrast, an agentic system learns from patterns, anticipates issues, and makes informed decisions on the fly. It’s the difference between a security guard who follows a checklist and an analyst who can think and adapt. With intelligent agents, data governance becomes an active process: the system doesn’t need to wait for a human to notice a compliance violation or a broken schema – it handles those situations in real time.

Consider a simple example of this autonomy in action. In a legacy pipeline, if a data source adds a new field or changes its format, the downstream process would typically fail silently – dropping the field or halting ingestion – until an engineer debugs it hours later. During that window, you’d have missing or malformed data and maybe missed alerts. Now imagine an intelligent agent in that pipeline: it recognizes the schema change before it breaks anything, maps the new field against known patterns, and automatically updates the parsing logic to accommodate it. No manual intervention, no lost data, no blind spots. That is the leap from automation to true autonomy – predicting and preventing failures rather than merely reacting to them.
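That scenario can be sketched minimally: instead of dropping an unknown field, the pipeline registers it and keeps the data. A real agentic system would do far more (pattern matching, parser generation); this is only the core behavior, with hypothetical field names:

```python
# Schema learned so far; starts from whatever the pipeline was told.
known_schema: dict[str, str] = {"ts": "str", "user": "str"}

def ingest(record: dict) -> dict:
    """Accept a record, learning any new fields rather than failing on them."""
    for field, value in record.items():
        if field not in known_schema:
            # An agentic pipeline would map the new field against known
            # patterns; here we just learn its type and keep the data.
            known_schema[field] = type(value).__name__
    return {f: record.get(f) for f in known_schema}

ingest({"ts": "t0", "user": "alice"})
row = ingest({"ts": "t1", "user": "bob", "src_ip": "10.0.0.5"})  # new field appears
print(known_schema)  # src_ip has been learned, not silently dropped
print(row)
```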

This new paradigm doesn’t just prevent errors; it builds trust. When your governance processes can monitor themselves, fix issues, and log every decision along the way, you gain confidence that your data is complete, consistent, and compliant. For security teams, it means the data feeding their alerts and reports is reliable, not full of unseen gaps. For compliance officers, it means controls are enforced continuously, not just at periodic checkpoints. And for data engineers, it means far fewer 3 AM pager calls and less tedious patching – the boring stuff is handled by the system. Organizations need more than an AI co-pilot; they need “a complementary data engineer that takes over all the exhausting work,” freeing up humans for strategic tasks. In other words, they need agentic AI working for them.

How Databahn’s Cruz Delivers Agentic Governance

At DataBahn, we’ve turned this vision of autonomous data governance into reality. It’s embodied in Cruz, our agentic AI-powered data engineer that works within DataBahn’s security data fabric. Cruz is not just another monitoring tool or script library – as we often describe it, Cruz is “an autonomous AI data engineer that monitors, detects, adapts, and actively resolves issues with minimal human intervention.” In practice, that means Cruz and the surrounding platform components (from smart edge collectors to our central data fabric) handle the heavy lifting of governance automatically. Instead of static pipelines with bolt-on rules, DataBahn provides a self-healing, policy-aware pipeline that governs itself in real time.

With these agentic capabilities, DataBahn’s platform transforms data governance from a static, after-the-fact function into a dynamic, self-healing workflow. Instead of asking “Who should access this data?” you can start trusting the system to ask “Is this data correct, compliant, and useful – and if not, how do we fix it right now?”. Governance becomes an active verb, not just a set of nouns (policies, roles, classifications) sitting on a shelf. By moving governance into the fabric of data operations, DataBahn ensures your pipelines are not only efficient, but defensible and trustworthy by default.

Embracing Autonomous Data Governance

The shift from access to agency means your governance framework can finally scale with your data and complexity. Instead of a gatekeeper saying “no,” you get a guardian angel for your data: one that tirelessly cleans, repairs, and protects your information assets across the entire journey from collection to storage. For CISOs and compliance leaders, this translates to unprecedented confidence – policies are enforced continuously and audit trails are built into every transaction. For data engineers and analysts, it means freedom from the drudgery of pipeline maintenance and an end to the 3 AM pager calls; they gain an automated colleague who has their back in maintaining data integrity.

The era of autonomous, agentic governance is here, and it’s changing data management forever. Organizations that embrace this model will see their data pipelines become strategic assets rather than liabilities. They’ll spend less time worrying about broken feeds or inadvertent exposure, and more time extracting value and insights from a trusted data foundation. In a world of exploding data volumes and accelerating compliance demands, intelligent agents aren’t a luxury – they’re the new necessity for staying ahead.

If you’re ready to move from static control to proactive intelligence in your data strategy, it’s time to explore what agentic AI can do for you. Contact DataBahn or book a demo to see how Cruz and our security data fabric can transform your governance approach.

Every second, billions of connected devices quietly monitor the pulse of the physical world: measuring pressure in refineries, tracking vibrations on turbine blades, adjusting the temperature of precision manufacturing lines, counting cars at intersections, and watching valves that regulate clean water. This is the telemetry that keeps our world running. It is also increasingly what’s putting the world at risk.

Why is OT telemetry becoming a cybersecurity priority?

In 2021, attackers tried to poison a water plant in Oldsmar, Florida, by changing chemical levels. In 2022, ransomware actors breached Tata Power in India, exfiltrating operational data and disrupting key functions. These weren’t IT breaches – they targeted operational technology (OT): the systems where the digital meets the physical. When compromised, they can halt production, damage equipment, or endanger lives.

Despite this growing risk, the telemetry from these systems – the rich, continuous streams of data describing what’s happening in the real world – isn’t entering enterprise-grade security and analytics tools such as SIEMs.

What makes OT telemetry data so hard to integrate into security tools?

For decades, OT telemetry was designed for control, not correlation. Its data is continuous, dense, and expensive to store – the exact opposite of the discrete, event-based logs that SIEMs and observability tools were built for. This mismatch created an architectural blind spot: the systems that track our physical world can’t speak the same language as the systems that secure our digital one. Today, as plants and utilities connect to the cloud, that divide has become a liability.  

OT Telemetry is Different by Design

Security teams have long managed discrete events – a log, an edit to a file, an alert. OT telemetry reflects continuous signals – temperature, torque, flow, vibrations, cycles. Traditional security logs are timestamped records of what happened. OT data describes what’s happening, sampled dozens or even thousands of times per minute. This creates three critical mismatches between OT and IT telemetry:

  • Format: Continuous numeric data doesn’t fit text-based log schemas
  • Purpose: OT telemetry is optimized for continuous performance monitoring, while security telemetry is used to flag anomalies and detect threats
  • Economics: SIEMs and analytics tools charge based on ingestion volume. Continuous data floods these models, turning visibility into runaway cost.

This is why most enterprises either down-sample OT data or skip it entirely – and why most SIEMs can’t ingest OT data out of the box.
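One common way to tame continuous signals before they hit event-based tooling is deadband filtering: forward a reading only when it moves more than a threshold from the last forwarded value. A minimal sketch (the threshold and readings are illustrative):

```python
def deadband(samples: list[float], threshold: float) -> list[float]:
    """Keep only readings that differ from the last kept reading by
    at least `threshold`, compressing a continuous signal into events."""
    if not samples:
        return []
    out = [samples[0]]                        # always forward the first reading
    for s in samples[1:]:
        if abs(s - out[-1]) >= threshold:
            out.append(s)
    return out

readings = [20.0, 20.1, 20.05, 21.5, 21.6, 25.0]  # e.g. temperature samples
print(deadband(readings, threshold=1.0))  # [20.0, 21.5, 25.0]
```

Unlike blind down-sampling, deadband filtering preserves exactly the changes an analyst would care about, so a sudden excursion (like the jump to 25.0) is never averaged away.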

Why does this increase risk?

Without unified telemetry, security teams only see fragments of their operational truth. Silent sources or anomalous readings may seem harmless to OT engineers yet signal malicious interference – but those clues need to be seen and investigated by the SOC to uncover the truth. Each uncollected and unanalyzed bit of data widens the gap between what has happened, what is happening, and what could happen next. In our increasingly connected and networked enterprises, that’s where risk lies.

From isolation to integration: bridging the gap

For decades, OT systems operated in isolated environments – air-gapped networks, proprietary closed-loop control systems, and field devices that only speak to their own kind. However, as enterprises sought real-time visibility and data-driven optimization, operational systems started getting linked to enterprise networks and cloud platforms. Plants started streaming production metrics to dashboards; energy firms connected sensors to predictive maintenance systems, and industrial vendors began managing equipment remotely.  

The result: enormous gains in efficiency – and a sudden explosion of exposure.

Attackers can now reach into building control systems inside manufacturing facilities, power plants, and supply chain networks to reach what was once unreachable. Suddenly, a misconfigured VPN or a vulnerability in middleware systems that connect OT to IT systems (current consensus suggests this is what exposed the JLR systems in the recent hack) could become an attacker’s entry point into core operations.

Why is telemetry still a cost center and not a value stream?

For many CISOs, CIOs, and CTOs, OT telemetry remains a budget line item – something to collect sparingly because of the cost of ingesting and storing it, especially in their favorite security tools and systems built over years of operations. But this misses the larger shift underway.

This data is no longer about just monitoring machines – it’s about protecting business continuity and understanding operational risk. The same telemetry that can predict a failing compressor can also help security teams catch and track a cyber intrusion.  

Organizations that treat this data and its security management purely as a compliance expense will always be reactive; those that see this as a strategic dataset – feeding security, reliability, and AI-driven optimization – will turn it into a competitive advantage.

AI as a catalyst: turning telemetry into value

AI has always been most effective when it’s fed by diverse, high-quality data. Modern security teams have long treated data this way, though ingestion-based pricing made them allergic to collecting OT telemetry at scale. That same mindset is now reaching operational systems, and leading organizations around the world are treating IoT and OT telemetry as strategic data sources for AI-driven security, optimization, and resilience.

AI thrives on context, and no data source offers more context than telemetry that connects the digital and physical worlds. Patterns in OT data can reveal early indications of faltering equipment, sub-optimal logistical choices, and resource allocation opportunities that can save the enterprise money. It can also provide early indication of attack and mitigate significant business continuity and operational safety risks.

But for most enterprises, this value is still locked behind scale, complexity, and gaps in their existing systems and tools. Collecting, normalizing, and routing billions of telemetry signals from globally distributed sites is challenging to build manually. Existing tools to solve these problems (SIEM collectors, log forwarders) aren’t built for these data types and still require extensive effort to repurpose.  

This is where agentic AI can become transformative. Rather than analyzing data downstream after layers of tooling have reshaped it, AI can be harnessed to manage and govern telemetry from the point of ingestion:

  • Automatically detect new data formats or schema drift, and generate parsers on the fly in minutes
  • Recognize patterns of redundancy and noise, then recommend filtering or forking data by security relevance – storing everything while analyzing only what matters
  • Enforce data governance policies in real time – routing sensitive telemetry to compliant destinations
  • Learn from historical behavior to predict which signals are security-relevant versus purely operational

The result is a system that scales not by collecting less, but by collecting everything and routing intelligently. AI is not just the reason to collect more telemetry – it is also the means to make that data valuable and sustainable at scale.
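As a rough illustration of the first capability above, schema-drift detection can start as simply as comparing an incoming record's fields against the schema the pipeline already knows. This is a minimal sketch; the baseline field set and the badge-reader log line are hypothetical examples, not a real product's schema.

```python
import json

# Hypothetical baseline: fields the pipeline already knows how to parse.
KNOWN_SCHEMA = {"timestamp", "device_id", "event_type"}

def detect_schema_drift(raw_line, known=KNOWN_SCHEMA):
    """Return the set of fields in a log record not present in the known schema."""
    record = json.loads(raw_line)
    return set(record) - known

# A badge-reader event arrives with a field the pipeline has never mapped.
line = '{"timestamp": "2025-09-19T10:00:00Z", "device_id": "plc-7", "event_type": "auth", "badge_id": "B-1123"}'
print(detect_schema_drift(line))  # → {'badge_id'}
```

In an agentic pipeline, a non-empty result like this would trigger the next step: generating or updating a parser for the new field rather than silently dropping it.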

Case Study: Turning 80 sites of OT chaos into connected intelligence

A global energy producer operating more than 80 distributed industrial sites faced the same challenge shared by many manufacturers: limited bandwidth, siloed OT networks, and inconsistent data formats. Each site generates between a few gigabytes and hundreds of gigabytes of log data daily – a mix of access control logs, process telemetry, and infrastructure events. Only a fraction of this data reached their security operations center; the rest stayed on-premise, trapped in local systems that couldn't easily integrate with their SIEM or data lake. This created blind spots, and with recent compliance developments in their region, the organization needed to bring this data into its security architecture.

The organization decided to re-architect their telemetry layer around a modular, pipeline-first approach. After an evaluation process, they chose Databahn as their vendor to accomplish this. They deployed Databahn’s collectors at the edge, capable of compressing and filtering data locally before securely transmitting it to centralized storage and security tools.

With bandwidth and network availability varying dramatically across sites, edge intelligence became critical. The collectors automatically prioritized security-relevant data for streaming, compressing non-relevant telemetry for slower transmission to conserve network capacity when needed. When a new physical security system needed to be onboarded – one with no existing connectors – an AI-assisted parser system was built in a few days, not months. This agility helped the team reduce their backlog of pending log sources and immediately increase their visibility across their OT environments.
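The edge triage described above can be sketched in a few lines: security-relevant events go out on a priority channel, while everything else is batched and compressed for deferred transmission. The event types and the `triage` function are illustrative assumptions, not the actual collector implementation.

```python
import gzip
import json

# Illustrative set of event types treated as security-relevant.
SECURITY_EVENTS = {"auth_failure", "config_change", "badge_denied"}

def triage(events):
    """Stream security-relevant events immediately; gzip the rest for later."""
    urgent, deferred = [], []
    for event in events:
        if event["type"] in SECURITY_EVENTS:
            urgent.append(event)    # sent on the priority channel
        else:
            deferred.append(event)  # batched and compressed for slow links
    blob = gzip.compress(json.dumps(deferred).encode())
    return urgent, blob

events = [
    {"type": "auth_failure", "site": "plant-12"},
    {"type": "temp_reading", "value": 71.3},
    {"type": "temp_reading", "value": 71.4},
]
urgent, blob = triage(events)
print(len(urgent))  # → 1
```

The design choice matters on constrained links: the SOC sees the auth failure in near real time, while routine process readings arrive later without consuming scarce bandwidth.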

In parallel, they used policy-driven routing to send filtered telemetry not only to their security tools, but also to the organization’s data lake – enabling business and engineering teams to analyze the same data for operational insights.
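Policy-driven routing of this kind is often expressed as an ordered list of rules, where the first matching rule decides an event's destinations. The rules, destination names, and event fields below are hypothetical, chosen to mirror the scenario in the text.

```python
# Hypothetical routing policy: first matching predicate wins.
POLICY = [
    (lambda e: e.get("contains_pii"), ["compliant-archive"]),
    (lambda e: e.get("severity", 0) >= 7, ["siem", "data-lake"]),
    (lambda e: True, ["data-lake"]),  # default: operational analytics only
]

def route(event):
    """Return the destinations of the first policy rule the event matches."""
    for predicate, destinations in POLICY:
        if predicate(event):
            return destinations
    return []

print(route({"severity": 9}))         # → ['siem', 'data-lake']
print(route({"contains_pii": True}))  # → ['compliant-archive']
print(route({"severity": 2}))         # → ['data-lake']
```

Keeping the policy as data rather than code buried in collectors is what lets security, compliance, and engineering teams share one pipeline while receiving different slices of it.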

The outcome?

  • Improved visibility across all their sites in a few weeks
  • Data volume sent to their SIEM dropped to 60% of its previous level despite increased coverage, thanks to intelligent reduction and compression
  • New source of centralized and continuous intelligence established for multiple functional teams to analyze and understand

This is the power of treating telemetry as a strategic asset, and of using the pipeline as the control plane: increased coverage and visibility don't have to come at the cost of security posture or a blown-out IT/security budget.

Continuous data, continuous resilience, continuous value

The convergence of IT and OT has expanded, and will continue to expand, the attack surface of digital systems deeply connected to physical reality. For factories and manufacturers like Jaguar Land Rover, this is about protecting their systems from ransomware actors. For power producers and utility distributors, it could mean the difference between life and death for their business, employees, and citizens, with major national security implications.

To meet this increased risk, telemetry must become the connective tissue of resilience: more closely watched, more deeply understood, and more intelligently managed. Its value must be gauged as early as possible, and its volume routed intelligently to protect detection and analytics systems while retaining the underlying data for bulk analysis.

The next decade of enterprise security and AI will depend on how effectively organizations bridge the divide between the present and that future. The systems being kept out of SIEMs today to avoid flooding them will need to fuel your AI. The telemetry from isolated networks will have to be connected to power real-time visibility across your enterprise.

The world will run on this data – and so should the security of your organization.
