Every second, billions of connected devices quietly monitor the pulse of the physical world: measuring pressure in refineries, tracking vibrations on turbine blades, adjusting the temperature of precision manufacturing lines, counting cars at intersections, and watching valves that regulate clean water. This is the telemetry that keeps our world running. It is also increasingly what’s putting the world at risk.
Why is OT telemetry becoming a cybersecurity priority?
In 2021, attackers tried to poison a water plant in Oldsmar, Florida, by changing chemical levels. In 2022, ransomware actors breached Tata Power in India, exfiltrating operational data and disrupting key functions. These weren’t IT breaches – they targeted operational technology (OT): the systems where the digital meets the physical. When compromised, they can halt production, damage equipment, or endanger lives.
Despite this growing risk, the telemetry from these systems – the rich, continuous streams of data describing what’s happening in the real world – isn’t entering enterprise-grade security and analytics tools such as SIEMs.
What makes OT telemetry data so hard to integrate into security tools?
For decades, OT telemetry was designed for control, not correlation. Its data is continuous, dense, and expensive to store – the exact opposite of the discrete, event-based logs that SIEMs and observability tools were built for. This mismatch created an architectural blind spot: the systems that track our physical world can’t speak the same language as the systems that secure our digital one. Today, as plants and utilities connect to the cloud, that divide has become a liability.
OT telemetry is different by design
Security teams manage discrete events – a log, an edit to a file, an alert. OT telemetry reflects continuous signals – temperature, torque, flow, vibration, cycles. Traditional security logs are timestamped records of what happened. OT data describes what’s happening, sampled dozens or even thousands of times per minute. This creates three critical mismatches between OT and IT telemetry:
- Format: Continuous numeric data doesn’t fit text-based log schemas (illustrated in the sketch after this list)
- Purpose: OT telemetry is used to optimize ongoing performance, while security telemetry is used to flag anomalies and detect threats
- Economics: SIEMs and analytics tools charge based on ingestion volume. Continuous data floods these models, turning visibility into runaway cost.
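To make the mismatch concrete, here is a minimal, hypothetical sketch – the field names, tags, and sample rate are illustrative assumptions, not drawn from any particular product:

```python
# Illustrative only: the same plant seen as a discrete security event
# versus a continuous OT telemetry stream.

# A SIEM-style event: one timestamped record per occurrence.
siem_event = {
    "timestamp": "2024-03-01T09:15:02Z",
    "source": "hmi-workstation-04",
    "event": "user_login",
    "outcome": "success",
}

# An OT-style signal: the same minute produces hundreds of samples for a
# single tag, and a site may have thousands of tags.
SAMPLE_RATE_HZ = 10           # ten pressure readings per second (assumed)
WINDOW_SECONDS = 60
ot_samples = [
    {"offset_s": i / SAMPLE_RATE_HZ, "tag": "pump_07.discharge_psi", "value": 142.0}
    for i in range(SAMPLE_RATE_HZ * WINDOW_SECONDS)
]

print(f"1 security event vs {len(ot_samples)} telemetry samples per tag per minute")
```

One login event sits alongside 600 numeric samples per minute for a single tag, before multiplying by the thousands of tags at a typical site.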
This is why most enterprises either down-sample OT data or skip it entirely – and why most SIEMs can’t ingest OT data out of the box.
Why does this increase risk?
Without unified telemetry, security teams see only fragments of their operational truth. A silent source or an anomalous reading might seem harmless to OT engineers, yet signal malicious interference – a clue that needs to be seen and investigated by the SOC to uncover the truth. Each uncollected and unanalyzed bit of data widens the gap between what has happened, what is happening, and what could happen next. In our increasingly connected and networked enterprises, that is where risk lies.
From isolation to integration: bridging the gap
For decades, OT systems operated in isolated environments – air-gapped networks, proprietary closed-loop control systems, and field devices that only speak to their own kind. However, as enterprises sought real-time visibility and data-driven optimization, operational systems began to be linked to enterprise networks and cloud platforms. Plants started streaming production metrics to dashboards, energy firms connected sensors to predictive maintenance systems, and industrial vendors began managing equipment remotely.
The result: enormous gains in efficiency – and a sudden explosion of exposure.
Attackers can now reach into building control systems inside manufacturing facilities, power plants, and supply chain networks – into what was once unreachable. Suddenly, a misconfigured VPN or a vulnerability in the middleware that connects OT to IT systems (current consensus suggests this is what exposed the JLR systems in the recent hack) can become an attacker’s entry point into core operations.
Why is telemetry still a cost center and not a value stream?
For many CISOs, CIOs, and CTOs, OT telemetry remains a budget line item – something to collect sparingly because of the cost of ingesting and storing it, especially in the security tools and systems they have built up over years of operations. But this misses the larger shift underway.
This data is no longer just about monitoring machines – it’s about protecting business continuity and understanding operational risk. The same telemetry that can predict a failing compressor can also help security teams catch and track a cyber intrusion.
Organizations that treat this data and its security management purely as a compliance expense will always be reactive; those that see this as a strategic dataset – feeding security, reliability, and AI-driven optimization – will turn it into a competitive advantage.
AI as a catalyst: turning telemetry into value
AI is most effective when it is fed diverse, high-quality data. Modern security teams have long embraced this mindset, but ingestion-based pricing made them reluctant to collect OT telemetry at scale. That same mindset is now reaching operational systems, and leading organizations around the world are treating IoT and OT telemetry as strategic data sources for AI-driven security, optimization, and resilience.
AI thrives on context, and no data source offers more context than telemetry that connects the digital and physical worlds. Patterns in OT data can reveal early signs of faltering equipment, sub-optimal logistics, and resource allocation opportunities that save the enterprise money. The same patterns can also provide early indication of attack, reducing significant business continuity and operational safety risk.
But for most enterprises, this value is still locked behind scale, complexity, and gaps in their existing systems and tools. Collecting, normalizing, and routing billions of telemetry signals from globally distributed sites is too complex to build by hand. The existing tools for these problems (SIEM collectors, log forwarders) weren’t built for these data types and require extensive effort to repurpose.
This is where Agentic AI can become transformative. Rather than analyzing data downstream, after layers of tooling have already shaped it, AI can manage and govern telemetry from the point of ingestion. An agentic pipeline can:
- Automatically detect new data formats or schema drift, and generate parsers on the fly in minutes
- Recognize patterns of redundancy and noise, and recommend filtering or forking data by security relevance – storing everything while analyzing only the data that matters (see the sketch after this list)
- Enforce data governance policies in real time – routing sensitive telemetry to compliant destinations
- Learn from historical behavior to predict which signals are security-relevant versus purely operational
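As a simplified illustration of the relevance-based routing and governance ideas above, the sketch below assumes hypothetical destination names, tag prefixes, and policy flags; a real agentic pipeline would learn and update these rules rather than hard-code them:

```python
# Minimal sketch of relevance-based routing at the point of ingestion.
# Destinations, tag prefixes, and the "sensitive" flag are illustrative
# assumptions, not any specific product's API.
from dataclasses import dataclass

@dataclass
class TelemetryRecord:
    tag: str          # e.g. "plc_12.valve_state" or "chiller_02.temp_c"
    value: float
    site: str
    sensitive: bool   # set upstream by a data governance policy

SECURITY_RELEVANT_PREFIXES = ("badge_reader.", "plc_", "vpn_gateway.")

def route(record: TelemetryRecord) -> list[str]:
    """Store everything cheaply; analyze only what matters; let governance override."""
    destinations = ["data_lake"]                      # everything is retained
    if record.tag.startswith(SECURITY_RELEVANT_PREFIXES):
        destinations.append("siem")                   # only security-relevant signals hit the SIEM
    if record.sensitive:
        destinations = ["regional_compliant_store"]   # policy-driven routing wins
    return destinations

print(route(TelemetryRecord("plc_12.valve_state", 1.0, "site-17", sensitive=False)))
# -> ['data_lake', 'siem']
```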
The result is a system that scales not by collecting less, but by collecting everything and routing intelligently. AI is not just the reason to collect more telemetry – it is also the means to make that data valuable and sustainable at scale.
Case Study: Turning 80 sites of OT chaos into connected intelligence
A global energy producer operating more than 80 distributed industrial sites faced a challenge shared by many manufacturers: limited bandwidth, siloed OT networks, and inconsistent data formats. Each site generates anywhere from a few gigabytes to hundreds of gigabytes of log data daily – a mix of access control logs, process telemetry, and infrastructure events. Only a fraction of this data reached their security operations center. The rest stayed on-premises, trapped in local systems that couldn’t easily integrate with their SIEM or data lake. This created blind spots, and with recent compliance developments in their region, they needed to bring this data into their security architecture.
The organization decided to re-architect their telemetry layer around a modular, pipeline-first approach. After an evaluation process, they chose Databahn as their vendor to accomplish this. They deployed Databahn’s collectors at the edge, capable of compressing and filtering data locally before securely transmitting it to centralized storage and security tools.
With bandwidth and network availability varying dramatically across sites, edge intelligence became critical. The collectors automatically prioritized security-relevant data for streaming, compressing non-relevant telemetry for slower transmission to conserve network capacity when needed. When a new physical security system needed to be onboarded – one with no existing connectors – an AI-assisted parser system was built in a few days, not months. This agility helped the team reduce their backlog of pending log sources and immediately increase their visibility across their OT environments.
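The prioritize-and-compress pattern described above can be pictured with a small, hypothetical sketch – this is an illustration of the idea, not the vendor’s actual implementation, and the field names are assumptions:

```python
# Hypothetical edge-side prioritization: stream security-relevant records
# immediately, compress everything else for deferred transmission.
import gzip
import json

def prioritize_at_edge(records: list[dict]) -> tuple[list[dict], bytes]:
    """Split records into an immediate stream and a compressed deferred batch."""
    stream_now = [r for r in records if r.get("security_relevant")]
    deferred = [r for r in records if not r.get("security_relevant")]
    # Compress lower-priority telemetry so constrained links carry less data.
    deferred_batch = gzip.compress(json.dumps(deferred).encode("utf-8"))
    return stream_now, deferred_batch

sample = [
    {"tag": "door_14.badge_denied", "security_relevant": True},
    {"tag": "chiller_02.temp_c", "value": 6.4, "security_relevant": False},
]
now, batch = prioritize_at_edge(sample)
print(f"{len(now)} record(s) streamed now, {len(batch)} compressed bytes deferred")
```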
In parallel, they used policy-driven routing to send filtered telemetry not only to their security tools, but also to the organization’s data lake – enabling business and engineering teams to analyze the same data for operational insights.
The outcome?
- Improved visibility across all their sites in a few weeks
- Data volume to their SIEM dropped to 60% of its previous level despite increased coverage, due to intelligent reduction and compression
- A new source of centralized, continuous intelligence was established for multiple functional teams to analyze and understand
This is the power of treating telemetry as a strategic asset, and of using the pipeline as the control plane: increased coverage and visibility don’t have to come at the cost of security posture or the IT and security budget.
Continuous data, continuous resilience, continuous value
The convergence of IT and OT has expanded, and will continue to expand, the attack surface and the vulnerability of digital systems deeply connected to physical reality. For factories and manufacturers like Jaguar Land Rover, this is about protecting their systems from ransomware actors. For power producers and utility distributors, it can mean the difference between life and death for their business, their employees, and the public – with major national security implications.
To meet this increased risk threshold, telemetry must become the connective tissue of resilience. It must be more closely watched, more deeply understood, and more intelligently managed. Its value must be gauged as early as possible, and its volume must be routed intelligently to keep detection and analytics tools from being overwhelmed while retaining the underlying data for bulk analysis.
The next decade of enterprise security and AI will depend on how effectively organizations bridge this divide. The data streams that are kept out of SIEMs today to avoid flooding them will need to fuel your AI. The telemetry from isolated networks will have to be connected to power real-time visibility across your enterprise.
The world will run on this data – and so should the security of your organization.




