
Sentinel Data Lake: Expanding the Microsoft Security Ecosystem – and enhancing it with Databahn

Sentinel Data Lake extends Microsoft’s security ecosystem with scalable, long-term telemetry storage. Databahn’s new integration lets enterprises connect and orchestrate data from every source to Sentinel and Sentinel Data Lake – securely, efficiently, and intelligently.


Microsoft has recently opened access to Sentinel Data Lake, an addition to its security product platform that augments analytics, extends data storage, and simplifies long-term querying of large volumes of security telemetry. The launch enhances Sentinel’s cloud-native SIEM capabilities with a dedicated, open-format data lake designed for scalability, compliance, and flexible analytics.

For CISOs and security architects, this is a significant development. It allows organizations to finally consolidate years of telemetry and threat data into a single location – without the storage compromises typically associated with log analytics. We have previously discussed how Security Data Lakes empower enterprises with control over their data, including the concept of a headless SIEM. As the first security data pipeline to natively support Sentinel Data Lake, Databahn lets enterprises bridge their entire data network – Microsoft and non-Microsoft alike – into a single, governed ecosystem.

What is Sentinel Data Lake? 

Sentinel Data Lake is Microsoft’s cloud-native, open-format security data repository designed to unify analytics, compliance, and long-term storage under one platform. It works alongside the Sentinel SIEM, providing a scalable data foundation. 

  • Data flows from Sentinel or directly from sources into the Data Lake, stored in open Parquet format. 
  • SOC teams can query the same data using KQL, notebooks, or AI/ML workloads – without duplicating it across systems.
  • Security operations gain access to months or even years of telemetry while simplifying analytics and ensuring data sovereignty. 

In a modern SOC architecture, the Sentinel Data Lake becomes the cold and warm layer of the security data stack, while the Sentinel SIEM remains the hot, detection-focused layer delivering high-value analytics. Together, they deliver visibility, depth, and continuity across timeframes while shortening MTTD and MTTR by enabling SOCs to focus on and review security-relevant data.
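
To make the notebook-style analysis above concrete, here is a minimal sketch that queries Parquet-format telemetry with pandas and surfaces failed sign-ins for hunting. The file path, column names, and schema are illustrative assumptions, not the actual Sentinel Data Lake layout or API.

```python
# Minimal notebook-style sketch (assumed schema): query Parquet telemetry with pandas.
import pandas as pd

# Load a day of sign-in telemetry stored as Parquet (hypothetical path and columns).
events = pd.read_parquet("telemetry/signin_logs/2024-01-15.parquet")

# Keep failed sign-ins from unfamiliar locations within the retained window.
failed = events[(events["result_type"] != 0) & (~events["is_familiar_location"])]

# Rank accounts by failure count to surface candidates for deeper hunting in KQL.
summary = (
    failed.groupby("user_principal_name")
          .size()
          .sort_values(ascending=False)
          .head(20)
)
print(summary)
```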

Why use Sentinel Data Lake? 

For security and network leaders, Sentinel Data Lake directly answers four recurring pain points:

  1. Long-term Retention without Penalty
    Retain security telemetry for up to 12 years without the ingest or compute costs of Log Analytics tables.
  2. Unified View across Timeframes and Teams
    Analysts, threat hunters, and auditors can access historical data alongside real-time detections – all in a consistent schema.
  3. Simplified, Scalable Analytics
    With data in an open columnar format, teams can apply AI/ML models, Jupyter notebooks, or federated search without data duplication or export.
  4. Open, Extendable Architecture
    The lake is interoperable – not locked to Microsoft-only data sources – supporting direct query or promotion to analytics tiers.

Sentinel Data Lake represents a meaningful evolution toward data ownership and flexibility in Microsoft’s security ecosystem, complementing Microsoft’s full-stack approach with end-to-end support across Azure and the broader Microsoft ecosystem.

However, enterprises continue – and will continue – to leverage a variety of non-Microsoft sources such as SaaS and custom applications, IoT/OT sources, and transactional data. That’s where Databahn comes in. 

Databahn + Sentinel Data Lake: Bridging the Divide 

While Sentinel Data Lake provides the storage and analytical foundation, most enterprises still operate across diverse, non-Microsoft ecosystems – from network appliances and SaaS applications to industrial OT sensors and multi-cloud systems. 

Databahn is the first vendor to deliver a pre-built, production-ready connector for Microsoft Sentinel Data Lake, enabling customers to: 

  • Ingest data from any source – Microsoft or otherwise – into Sentinel Data Lake 
  • Normalize, enrich, and tier logs before ingestion to streamline data movement so SOCs focus on security-relevant data  
  • Apply agentic AI automation to detect schema drift, monitor pipeline health, and optimize log routing in real time

By integrating Databahn with Sentinel Data Lake, organizations can bridge the gap between Microsoft’s new data foundation and their existing hybrid telemetry networks – ensuring that every byte of security data, regardless of origin, is trusted, transformed, and ready to use. 

Databahn + Sentinel: Better Together 

The launch of Microsoft Sentinel Data Lake represents a major evolution in how enterprises manage security data, shifting from short-term log retention to long-term, unified visibility across timeframes. But while the data lake solves storage and analysis challenges, the real bottleneck still lies in how data enters the ecosystem.

Databahn is the missing connective tissue that turns the Sentinel + Data Lake stack into a living, responsive data network – one that continuously ingests, transforms, and optimizes security telemetry from every layer of the enterprise.

Extending Telemetry Visibility Across the Enterprise 

Most enterprise Sentinel customers operate hybrid or multi-cloud environments. They have: 

  • Azure workloads and Microsoft 365 logs 
  • AWS or GCP resources 
  • On-prem firewalls, OT networks, IoT devices 
  • Hundreds of SaaS applications and third-party security tools 
  • Custom applications and workflows 

While Sentinel provides prebuilt connectors for many Microsoft sources – and many prominent third-party platforms – integrating non-native telemetry remains one of the biggest challenges. Databahn enables SOCs to overcome that hurdle with: 

  • 500+ pre-built integrations covering Microsoft and non-Microsoft sources
  • AI-powered parsing that automatically adapts to new or changing log formats – without manual regex or parser building or maintenance 
  • Smart Edge collectors that run on-prem or in private cloud environments to collect, compress, and securely route logs into Sentinel or the Data Lake 

This means a Sentinel user can now ingest heterogeneous telemetry at scale with a small fraction of the data engineering effort and cost, and without needing to maintain custom connectors or one-off ingestion logic. 

Ingestion Optimization: Making Storage Efficient & Actionable 

The Sentinel Data Lake enables long-term retention – but at petabyte scale, logistics and control become critical. Databahn acts as an intelligent ingestion layer that ensures that only the right data lands in the right place.  

With Databahn, organizations can: 

  • Orchestrate data based on relevance before ingestion: By ensuring that only analytics-relevant logs go to Sentinel, you reduce alert fatigue and enable faster response times for SOCs. Lower-value or long-term search/query data for compliance and investigations can be routed to the Sentinel Data Lake. 
  • Apply normalization and enrichment policies: Normalizing incoming logs to the Advanced Security Information Model (ASIM) makes cross-source correlation seamless inside Sentinel and the Data Lake.
  • Deduplicate redundant telemetry: Dropping redundant or duplicated logs across EDR, XDR, and identity can significantly reduce ingest volume and lower the effort of analyzing, storing, and navigating through large volumes of telemetry 

By optimizing data before it enters Sentinel, Databahn not only reduces storage costs but also enhances the signal-to-noise ratio in downstream detections, making threat hunting and detection faster and easier. 
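
As a rough illustration of the normalization and deduplication steps above, the sketch below maps vendor-specific fields to ASIM-style names and drops exact duplicates before forwarding. The field mapping and dedup key are assumptions for illustration, not DataBahn’s or Microsoft’s actual implementation.

```python
# Sketch only: map vendor-specific fields to ASIM-style names and drop exact
# duplicates before forwarding. Field names and the dedup key are assumptions.
import hashlib
import json

ASIM_FIELD_MAP = {           # vendor field -> ASIM-style field (assumed mapping)
    "src": "SrcIpAddr",
    "dst": "DstIpAddr",
    "user": "SrcUsername",
    "ts": "TimeGenerated",
}

def normalize(record: dict) -> dict:
    """Rename known vendor fields; pass unknown fields through unchanged."""
    return {ASIM_FIELD_MAP.get(k, k): v for k, v in record.items()}

def dedup(records):
    """Yield normalized records, skipping byte-identical duplicates."""
    seen = set()
    for rec in map(normalize, records):
        key = hashlib.sha256(json.dumps(rec, sort_keys=True).encode()).hexdigest()
        if key not in seen:
            seen.add(key)
            yield rec

raw = [
    {"src": "10.0.0.5", "dst": "10.0.0.9", "user": "alice", "ts": "2024-01-15T10:00:00Z"},
    {"src": "10.0.0.5", "dst": "10.0.0.9", "user": "alice", "ts": "2024-01-15T10:00:00Z"},  # duplicate
]
print(list(dedup(raw)))   # -> one normalized record
```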

Unified Governance, Visibility, and Policy Enforcement 

As organizations scale their Sentinel environments, data governance becomes a major challenge: where is data coming from? Who has access to what? Are regional data residency and other compliance rules being enforced?

Databahn applies governance at the collection and aggregation stage – to the left of Sentinel – giving users more control over their logs. Through policy-based routing and tagging, security teams can:

  • Enforce data localization and residency rules
  • Apply real-time redaction or tokenization of PII before ingestion
  • Maintain a complete lineage and audit trail of every data movement – source, parser, transform, and destination 

All of this integrates seamlessly with Sentinel’s built-in auditing and Azure Policy framework, giving CISOs a unified governance model for data movement. 
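
For the pre-ingest redaction and tokenization described above, a minimal sketch might use keyed, deterministic tokens so events stay correlatable without exposing raw identifiers. The field list and key handling below are illustrative assumptions.

```python
# Sketch only: deterministic tokenization of PII fields before ingestion.
import hashlib
import hmac

TOKEN_KEY = b"example-key-from-a-secrets-manager"   # assumption: sourced from a vault
PII_FIELDS = {"user_email", "source_ip"}             # assumption: policy-defined list

def tokenize(value: str) -> str:
    # Same input always yields the same token, so correlation across events still works.
    return hmac.new(TOKEN_KEY, value.encode(), hashlib.sha256).hexdigest()[:16]

def redact(event: dict) -> dict:
    return {k: (tokenize(str(v)) if k in PII_FIELDS else v) for k, v in event.items()}

print(redact({"user_email": "jane@example.com", "source_ip": "203.0.113.7", "action": "login"}))
```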

Autonomous Data Engineering and Self-healing Pipelines 

Visibility into and access to all your security data matter little when brittle pipelines or telemetry spikes leave gaps in the data. Databahn’s agentic AI builds an automation layer that guarantees lossless data collection, continuously monitors pipeline and telemetry health, and keeps schemas consistent.

Within a Sentinel + Data Lake environment, this means: 

  • Automatic detection and repair of schema drift, ensuring data remains queryable in both Sentinel and Data Lake as source formats evolve. 
  • Adaptive pipeline routing – if the Sentinel ingestion endpoint throttles or the Data Lake job queue backs up, Databahn reroutes or buffers data automatically to prevent loss. 
  • AI-powered insights to update data collection rules (DCRs), keeping Sentinel’s ingestion logic aligned with real-world telemetry changes

This AI-powered orchestration turns the Sentinel + Data Lake environment from a static integration into a living, self-optimizing system that minimizes downtime and manual overhead. 
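
At its simplest, the schema-drift detection described above can be sketched as a comparison of an incoming event’s fields against a known baseline; the baseline itself and the follow-up action (regenerating a parser, updating a DCR) are assumptions for illustration.

```python
# Sketch only: flag schema drift by diffing incoming fields against a baseline.
BASELINE_SCHEMA = {"TimeGenerated", "SrcIpAddr", "DstIpAddr", "Action"}  # assumed baseline

def detect_drift(event: dict, baseline: set = BASELINE_SCHEMA) -> dict:
    fields = set(event)
    return {
        "new_fields": sorted(fields - baseline),      # fields that appeared upstream
        "missing_fields": sorted(baseline - fields),  # fields silently dropped upstream
    }

sample = {"TimeGenerated": "2024-01-15T10:00:00Z", "SrcIpAddr": "10.0.0.1",
          "DstIpAddr": "10.0.0.2", "Action": "allow", "TlsVersion": "1.3"}
drift = detect_drift(sample)
if drift["new_fields"] or drift["missing_fields"]:
    # In practice this would trigger parser regeneration or a DCR update, not a print.
    print("schema drift detected:", drift)
```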

With Sentinel Data Lake, Microsoft has reimagined how enterprises store and analyze their security data. With Databahn, that vision extends further – to every device, every log source, and every insight that drives your SOC. 

Together, they deliver: 

  • Unified ingestion across Microsoft and non-Microsoft ecosystems 
  • Adaptive, AI-powered data routing and governance 
  • Massive cost reduction through pre-ingest optimization and tiered storage 
  • Operational resilience through self-healing pipelines and full observability 

This partnership doesn’t just simplify data management – it redefines how modern SOCs manage, move, and make sense of security telemetry. Databahn delivers a ready-to-use integration with Sentinel Data Lake, enabling enterprises to connect it to their existing Sentinel ecosystem with ease, or to plan their evaluation of and migration to the new, enhanced Microsoft Security platform with Sentinel at its heart.

Ready to unlock the full potential of your data?

See related articles

What is a SIEM?

A Security Information and Event Management (SIEM) system aggregates logs and security events from across an organization’s IT infrastructure. It correlates and analyzes data in real time, using built-in rules, analytics, and threat intelligence to identify anomalies and attacks as they happen. SIEMs provide dashboards, alerts, and reports that help security teams respond quickly to incidents and satisfy compliance requirements. In essence, a SIEM acts as a central security dashboard, giving analysts a unified view of events and threats across their environment.

Pros and Cons of SIEM

Pros of SIEM:

  • Real-time monitoring and alerting for known threats via continuous data collection
  • Centralized log management provides a unified view of security events
  • Built-in compliance reporting and audit trails simplify regulatory obligations
  • Extensive integration ecosystem with standard enterprise tools
  • Automated playbooks and correlation rules accelerate incident triage and response

Cons of SIEM:

  • High costs for licensing, storage, and processing at large data volumes
  • Scalability issues often require filtering or short retention windows
  • May struggle with cloud-native environments or unstructured data without heavy customization
  • Requires ongoing tuning and maintenance to reduce false positives
  • Vendor lock-in due to proprietary data formats and closed architectures

What is a Security Data Lake?

A security data lake is a centralized big-data repository (often cloud-based) designed to store and analyze vast amounts of security-related data in its raw form. It collects logs, network traffic captures, alerts, endpoint telemetry, threat intelligence feeds, and more, without enforcing a strict schema on ingestion. Using schema-on-read, analysts can run SQL queries, full-text searches, machine learning, and AI algorithms on this raw data. Data lakes can scale to petabytes, making it possible to retain years of data for forensic analysis.
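
To illustrate schema-on-read, the sketch below runs SQL directly over raw JSON log files with DuckDB, without defining a schema at ingestion time; the file path and field names are assumptions.

```python
# Sketch only: schema-on-read over raw JSON logs with DuckDB (illustrative path/fields).
import duckdb

top_offenders = duckdb.sql("""
    SELECT src_ip, count(*) AS failures
    FROM read_json_auto('raw_logs/auth-*.json')   -- schema inferred at query time
    WHERE outcome = 'failure'
    GROUP BY src_ip
    ORDER BY failures DESC
    LIMIT 10
""").fetchall()
print(top_offenders)
```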

Pros and Cons of Security Data Lakes

Pros of Data Lakes:

  • Massive scalability and lower storage costs, especially with cloud-based storage
  • Flexible ingestion: accepts any data type without predefined schema
  • Enables advanced analytics and threat hunting via machine learning and historical querying
  • Breaks down data silos and supports collaboration across security, IT, and compliance
  • Long-term data retention supports regulatory and forensic needs

Cons of Data Lakes:

  • Requires significant data engineering effort and strong data governance
  • Lacks native real-time detection—requires custom detections and tooling
  • Centralized sensitive data increases security and compliance challenges
  • Integration with legacy workflows and analytics tools can be complex
  • Without proper structure and tooling, can become an unmanageable “data swamp”  

A Hybrid Approach: Security Data Fabric

Rather than choosing one side, many security teams adopt a hybrid architecture that uses both SIEM and data lake capabilities. Often called a “security data fabric,” this strategy decouples data collection, storage, and analysis into flexible layers. For example:

  • Data Filtering and Routing: Ingest all security logs through a centralized pipeline that tags and routes data. Send only relevant events and alerts to the SIEM (to reduce noise and license costs), while streaming raw logs and enriched telemetry to the data lake for deep analysis.
  • Normalized Data Model: Preprocess and normalize data on the way into the lake so that fields (timestamps, IP addresses, user IDs, etc.) are consistent. This makes it easier for analysts to query and correlate data across sources.
  • Tiered Storage Strategy: Keep recent or critical logs indexed in the SIEM for fast, interactive queries. Offload bulk data to the data lake’s cheaper storage tiers (including cold storage) for long-term retention. Compliance logs can be archived in the lake where they can be replayed if needed.
  • Unified Analytics: Let the SIEM focus on real-time monitoring and alerting. Use the data lake for ad-hoc investigations and machine-learning-driven threat hunting. Security analysts can run complex queries on the full dataset in the lake, while SIEM alerts feed into a coordinated response plan.
  • Integration with Automation: Connect the SIEM and data lake to orchestration/SOAR platforms. This ensures that alerts or insights from either system trigger a unified incident response workflow.

This modular security data fabric is an emerging industry best practice. It helps organizations avoid vendor lock-in and balance cost with capability. For instance, by filtering out irrelevant data, the SIEM can operate leaner and more accurately. Meanwhile, threat hunters gain access to the complete historical dataset in the lake.
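
A minimal sketch of the filter-and-route layer in such a fabric: every event lands in the data lake, while only high-value detections and compliance-tagged records fan out to the SIEM or an archive. The relevance rules and destination names are illustrative assumptions.

```python
# Sketch only: policy-based tag-and-route for a security data fabric.
def route(event: dict) -> list[str]:
    destinations = ["data_lake"]                      # raw copy always retained for hunting
    if event.get("severity") in {"high", "critical"}:
        destinations.append("siem")                   # real-time detection path
    if event.get("category") == "audit":
        destinations.append("compliance_archive")     # long-term regulatory retention
    return destinations

print(route({"severity": "critical", "category": "authentication"}))
# -> ['data_lake', 'siem']
```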

Choosing the Right Strategy

Every organization’s needs differ. A full-featured SIEM might be sufficient for smaller environments or for teams that prioritize quick alerting and compliance out-of-the-box. Large enterprises or those with very high data volumes often need data lake capabilities to scale analytics and run advanced machine learning. In practice, many CISOs opt for a combined approach: maintain a core SIEM for active monitoring and use a security data lake for additional storage and insights.

Key factors include data volume, regulatory requirements, budget, and team expertise. Data lakes can dramatically reduce storage costs and enable new analytics, but they require dedicated data engineering and governance. SIEMs provide mature detection features and reporting, but can become costly and complex at scale. A hybrid “data fabric” lets you balance these trade-offs and future-proof the security stack.

At the end of the day, rethinking SIEM doesn’t necessarily mean replacing it. It means integrating SIEM tools with big-data analytics in a unified way. By leveraging both technologies — the immediate threat detection of SIEM and the scalable depth of data lakes — security teams can build a more flexible, robust analytics platform.

Ready to modernize your security analytics? Book a demo with Databahn to see how a unified security data fabric can streamline threat detection and response across your organization.

The Old Guard of Data Governance: Access and Static Rules

For years, data governance has been synonymous with gatekeeping. Enterprises set up permissions, role-based access controls, and policy checklists to ensure the right people had the right access to the right data. Compliance meant defining who could see customer records, how long logs were retained, and what data could leave the premises. This access-centric model worked in a simpler era – it put up fences and locks around data. But it did little to improve the quality, context, or agility of data itself. Governance in this traditional sense was about restriction more than optimization. As long as data was stored and accessed properly, the governance box was checked.

However, simply controlling access doesn’t guarantee that data is usable, accurate, or safe in practice. Issues like data quality, schema changes, or hidden sensitive information often went undetected until after the fact. A user might have permission to access a dataset, but if that dataset is full of errors or policy violations (e.g. unmasked personal data), traditional governance frameworks offer no immediate remedy. The cracks in the old model are growing more visible as organizations deal with modern data challenges.

Why Traditional Data Governance Is Buckling  

Today’s data environment is defined by velocity, variety, and volume. Rigid governance frameworks are struggling to keep up. Several pain points illustrate why the old access-based model is reaching a breaking point:

Unmanageable Scale: Data growth has outpaced human capacity. Firehoses of telemetry, transactions, and events are pouring in from cloud apps, IoT devices, and more. Manually reviewing and updating rules for every new source or change is untenable. In fact, every new log source or data format adds more drag to the system – analysts end up chasing false positives from mis-parsed fields, compliance teams wrestle with unmasked sensitive data, and engineers spend hours firefighting schema drift. Scaling governance by simply throwing more people at the problem no longer works.

Constant Change (Schema Drift): Data is not static. Formats evolve, new fields appear, APIs change, and schemas drift over time. Traditional pipelines operating on “do exactly what you’re told” logic will quietly fail when an expected field is missing or a new log format arrives. By the time humans notice the broken schema, hours or days of bad data may have accumulated. Governance based on static rules can’t react to these fast-moving changes.

Reactive Compliance: In many organizations, compliance checks happen after data is already collected and stored. Without enforcement woven into the pipeline, sensitive data can slip into the wrong systems or go unmasked in transit. Teams are then stuck auditing and cleaning up after the fact instead of controlling exposure at the source. This reactive posture not only increases legal risk but also means governance is always a step behind the data. As one industry leader put it, “moving too fast without solid data governance is exactly why many AI and analytics initiatives ultimately fail”.

Operational Overhead: Legacy governance often relies on manual effort and constant oversight. Someone has to update access lists, write new parser scripts, patch broken ETL jobs, and double-check compliance on each dataset. These manual processes introduce latency at every step. Each time a format changes or a quality issue arises, downstream analytics suffer delays as humans scramble to patch pipelines. It’s no surprise that analysts and engineers end up spending over 50% of their time fighting data issues instead of delivering insights. This drag on productivity is unsustainable.

Rising Costs & Noise: When governance doesn’t intelligently filter or prioritize data, everything gets collected “just in case.” The result is mountains of low-value logs stored in expensive platforms, driving up SIEM licensing and cloud storage costs. Security teams drown in noisy alerts because the pipeline isn’t smart enough to distinguish signal from noise. For example, trivial heartbeat logs or duplicates continue flowing into analytics tools, adding cost without adding value. Traditional governance has no mechanism to optimize data volumes – it was never designed for cost-efficiency, only control.

The old model of governance is cracking under the pressure. Access controls and check-the-box policies can’t cope with dynamic, high-volume data. The status quo leaves organizations with blind spots and reactive fixes: false alerts from bad data, sensitive fields slipping through unmasked, and engineers in a constant firefight to patch leaks. These issues demand excessive manual effort and leave little time for innovation. Clearly, a new approach is needed – one that doesn’t just control data access, but actively manages data quality, compliance, and context at scale.

From Access Control to Autonomous Agents: A New Paradigm

What would it look like if data governance were proactive and intelligent instead of reactive and manual? Enter the world of agentic data governance – where intelligent agents embedded in the data pipeline itself take on the tasks of enforcing policies, correcting errors, and optimizing data flow autonomously. This shift is as radical as it sounds: moving from static rules to living, learning systems that govern data in real time.

Instead of mere access management, the focus shifts to agency – giving the data pipeline itself the ability to act. Traditional automation can execute predefined steps, but it “waits” for something to break or for a human to trigger a script. In contrast, an agentic system learns from patterns, anticipates issues, and makes informed decisions on the fly. It’s the difference between a security guard who follows a checklist and an analyst who can think and adapt. With intelligent agents, data governance becomes an active process: the system doesn’t need to wait for a human to notice a compliance violation or a broken schema – it handles those situations in real time.

Consider a simple example of this autonomy in action. In a legacy pipeline, if a data source adds a new field or changes its format, the downstream process would typically fail silently – dropping the field or halting ingestion – until an engineer debugs it hours later. During that window, you’d have missing or malformed data and maybe missed alerts. Now imagine an intelligent agent in that pipeline: it recognizes the schema change before it breaks anything, maps the new field against known patterns, and automatically updates the parsing logic to accommodate it. No manual intervention, no lost data, no blind spots. That is the leap from automation to true autonomy – predicting and preventing failures rather than merely reacting to them.

This new paradigm doesn’t just prevent errors; it builds trust. When your governance processes can monitor themselves, fix issues, and log every decision along the way, you gain confidence that your data is complete, consistent, and compliant. For security teams, it means the data feeding their alerts and reports is reliable, not full of unseen gaps. For compliance officers, it means controls are enforced continuously, not just at periodic checkpoints. And for data engineers, it means far fewer 3 AM pager calls and less tedious patching – the boring stuff is handled by the system. Organizations need more than an AI co-pilot; they need “a complementary data engineer that takes over all the exhausting work,” freeing up humans for strategic tasks. In other words, they need agentic AI working for them.

How Databahn’s Cruz Delivers Agentic Governance

At DataBahn, we’ve turned this vision of autonomous data governance into reality. It’s embodied in Cruz, our agentic AI-powered data engineer that works within DataBahn’s security data fabric. Cruz is not just another monitoring tool or script library – as we often describe it, Cruz is “an autonomous AI data engineer that monitors, detects, adapts, and actively resolves issues with minimal human intervention.” In practice, that means Cruz and the surrounding platform components (from smart edge collectors to our central data fabric) handle the heavy lifting of governance automatically. Instead of static pipelines with bolt-on rules, DataBahn provides a self-healing, policy-aware pipeline that governs itself in real time.

With these agentic capabilities, DataBahn’s platform transforms data governance from a static, after-the-fact function into a dynamic, self-healing workflow. Instead of asking “Who should access this data?” you can start trusting the system to ask “Is this data correct, compliant, and useful – and if not, how do we fix it right now?”. Governance becomes an active verb, not just a set of nouns (policies, roles, classifications) sitting on a shelf. By moving governance into the fabric of data operations, DataBahn ensures your pipelines are not only efficient, but defensible and trustworthy by default.

Embracing Autonomous Data Governance

The shift from access to agency means your governance framework can finally scale with your data and complexity. Instead of a gatekeeper saying “no,” you get a guardian angel for your data: one that tirelessly cleans, repairs, and protects your information assets across the entire journey from collection to storage. For CISOs and compliance leaders, this translates to unprecedented confidence – policies are enforced continuously and audit trails are built into every transaction. For data engineers and analysts, it means freedom from the drudgery of pipeline maintenance and an end to the 3 AM pager calls; they gain an automated colleague who has their back in maintaining data integrity.

The era of autonomous, agentic governance is here, and it’s changing data management forever. Organizations that embrace this model will see their data pipelines become strategic assets rather than liabilities. They’ll spend less time worrying about broken feeds or inadvertent exposure, and more time extracting value and insights from a trusted data foundation. In a world of exploding data volumes and accelerating compliance demands, intelligent agents aren’t a luxury – they’re the new necessity for staying ahead.

If you’re ready to move from static control to proactive intelligence in your data strategy, it’s time to explore what agentic AI can do for you. Contact DataBahn or book a demo to see how Cruz and our security data fabric can transform your governance approach.

Every second, billions of connected devices quietly monitor the pulse of the physical world: measuring pressure in refineries, tracking vibrations on turbine blades, adjusting the temperature of precision manufacturing lines, counting cars at intersections, and watching valves that regulate clean water. This is the telemetry that keeps our world running. It is also increasingly what’s putting the world at risk.

Why is OT telemetry becoming a cybersecurity priority?

In 2021, attackers tried to poison a water plant in Oldsmar, Florida, by changing chemical levels. In 2022, ransomware actors breached Tata Power in India, exfiltrating operational data and disrupting key functions. These weren’t IT breaches – they targeted operational technology (OT): the systems where the digital meets the physical. When compromised, they can halt production, damage equipment, or endanger lives.

Despite this growing risk, the telemetry from these systems – the rich, continuous streams of data describing what’s happening in the real world – isn’t entering enterprise-grade security and analytics tools such as SIEMs.

What makes OT telemetry data so hard to integrate into security tools?

For decades, OT telemetry was designed for control, not correlation. Its data is continuous, dense, and expensive to store – the exact opposite of the discrete, event-based logs that SIEMs and observability tools were built for. This mismatch created an architectural blind spot: the systems that track our physical world can’t speak the same language as the systems that secure our digital one. Today, as plants and utilities connect to the cloud, that divide has become a liability.  

OT Telemetry is Different by Design

Security teams manage discrete events – a log, an edit to a file, an alert. OT telemetry reflects continuous signals – temperature, torque, flow, vibrations, cycles. Traditional security logs are timestamped records of what happened. OT data describes what’s happening, sampled dozens or even thousands of times per minute. This creates three critical mismatches between OT and IT telemetry data:

  • Format: Continuous numeric data doesn’t fit text-based log schemas
  • Purpose: OT telemetry is tuned for continuous performance monitoring, while security telemetry is used to flag anomalies and detect threats
  • Economics: SIEMs and analytics tools charge on the basis of ingestion. Continuous data floods these models, turning visibility into runaway cost.

This is why most enterprises either down-sample OT data or skip it entirely – and why most SIEMs can’t ingest OT data out of the box.
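
One alternative to down-sampling is to aggregate at the edge: keep the full-rate stream in cheap storage and forward only window summaries and threshold breaches to event-based tools. The sketch below is illustrative only; the window size, sensor name, and threshold are assumptions.

```python
# Sketch only: collapse a window of continuous sensor readings into one event-style record.
from statistics import mean

def summarize_window(readings: list[float], sensor_id: str, limit: float) -> dict:
    peak = max(readings)
    return {
        "sensor_id": sensor_id,
        "samples": len(readings),
        "mean": round(mean(readings), 2),
        "peak": peak,
        "alert": peak > limit,   # only this flag needs real-time SIEM attention
    }

# Sixty one-second pressure samples become a single record for downstream tools.
window = [101.2 + 0.1 * i for i in range(60)]
print(summarize_window(window, sensor_id="pump-07-pressure", limit=105.0))
```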

Why does this increase risk?

Without unified telemetry, security teams only see fragments of their operational truth. A silent source or an anomalous reading might seem harmless to OT engineers yet signal malicious interference – but that clue has to reach the SOC to be seen and investigated. Each uncollected and unanalyzed bit of data widens the gap between what has happened, what is happening, and what could happen next. In our increasingly connected and networked enterprises, that’s where risk lies.

From isolation to integration: bridging the gap

For decades, OT systems operated in isolated environments – air-gapped networks, proprietary closed-loop control systems, and field devices that only spoke to their own kind. However, as enterprises sought real-time visibility and data-driven optimization, operational systems started getting linked to enterprise networks and cloud platforms. Plants began streaming production metrics to dashboards, energy firms connected sensors to predictive maintenance systems, and industrial vendors began managing equipment remotely.

The result: enormous gains in efficiency – and a sudden explosion of exposure.

Attackers can now get into building control systems inside manufacturing facilities, power plants, and supply chain networks – reaching what was once unreachable. Suddenly, a misconfigured VPN or a vulnerability in the middleware that connects OT to IT systems (current consensus suggests this is what exposed the JLR systems in the recent hack) can become an attacker’s entry point into core operations.

Why is telemetry still a cost center and not a value stream?

For many CISOs, CIOs, and CTOs, OT telemetry remains a budget line item – something to collect sparingly because of the cost of ingesting and storing it, especially in the security tools and systems they have built up over years of operations. But this misses the larger shift underway.

This data is no longer about just monitoring machines – it’s about protecting business continuity and understanding operational risk. The same telemetry that can predict a failing compressor can also help security teams catch and track a cyber intrusion.  

Organizations that treat this data and its security management purely as a compliance expense will always be reactive; those that see this as a strategic dataset – feeding security, reliability, and AI-driven optimization – will turn it into a competitive advantage.

AI as a catalyst: turning telemetry into value

AI has always been most effective when it is fed diverse, high-quality data. Modern security teams have long treated data with this mindset, but ingestion-based pricing made them allergic to collecting OT telemetry at scale. That mindset is now reaching operational systems, and leading organizations around the world are treating IoT and OT telemetry as strategic data sources for AI-driven security, optimization, and resilience.

AI thrives on context, and no data source offers more context than telemetry that connects the digital and physical worlds. Patterns in OT data can reveal early indications of faltering equipment, sub-optimal logistical choices, and resource allocation opportunities that help the enterprise save costs. The same data can also provide early indication of attack and reduce significant business continuity and operational safety risk.

But for most enterprises, this value is still locked behind scale, complexity, and gaps in their existing systems and tools. Building pipelines that collect, normalize, and route billions of telemetry signals from globally distributed sites is difficult to do manually. Existing tools for these problems (SIEM collectors, log forwarders) aren’t built for these data types and require extensive effort to repurpose.

This is where Agentic AI can become transformative. Rather than analyzing data downstream, after layers of tooling have wrangled it, AI can be harnessed to manage and govern telemetry from the point of ingestion. An agentic pipeline can:

  • Automatically detect new data formats or schema drifts, and generate parsers in minutes on the fly
  • Recognize patterns of redundancy and noise, and recommend filtering or forking data by security relevance – storing everything while analyzing only the data that matters
  • Enforce data governance policies in real time – routing sensitive telemetry to compliant destinations
  • Learn from historical behavior to predict which signals are security-relevant versus purely operational

The result is a system that scales not by collecting less, but by collecting everything and routing intelligently. AI is not just the reason to collect more telemetry – it is also the means to make that data valuable and sustainable at scale.

Case Study: Turning 80 sites of OT chaos into connected intelligence

A global energy producer operating more than 80 distributed industrial sites faced the same challenge shared by many manufacturers: limited bandwidth, siloed OT networks, and inconsistent data formats. Each site generates between a few gigabytes and hundreds of gigabytes of log data daily – a mix of access control logs, process telemetry, and infrastructure events. Only a fraction of this data reached their security operations center. The rest stayed on-premises, trapped in local systems that couldn’t easily integrate with their SIEM or data lake. This created blind spots, and with recent compliance developments in their region, they needed to bring this data into their security architecture.

The organization decided to re-architect their telemetry layer around a modular, pipeline-first approach. After an evaluation process, they chose Databahn as their vendor to accomplish this. They deployed Databahn’s collectors at the edge, capable of compressing and filtering data locally before securely transmitting it to centralized storage and security tools.

With bandwidth and network availability varying dramatically across sites, edge intelligence became critical. The collectors automatically prioritized security-relevant data for streaming, compressing non-relevant telemetry for slower transmission to conserve network capacity when needed. When a new physical security system needed to be onboarded – one with no existing connectors – an AI-assisted parser system was built in a few days, not months. This agility helped the team reduce their backlog of pending log sources and immediately increase their visibility across their OT environments.

In parallel, they used policy-driven routing to send filtered telemetry not only to their security tools, but also to the organization’s data lake – enabling business and engineering teams to analyze the same data for operational insights.

The outcome?

  • Improved visibility across all their sites in a few weeks
  • Data volume to their SIEM dropped to 60% of its previous level despite increased coverage, due to intelligent reduction and compression
  • New source of centralized and continuous intelligence established for multiple functional teams to analyze and understand

This is the power of treating telemetry as a strategic asset – and of using the pipeline as the control plane – so that increased coverage and visibility don’t come at the cost of security posture or the IT and security budget.

Continuous data, continuous resilience, continuous value

The convergence of IT and OT has expanded – and will continue to expand – the attack surface and the vulnerability of digital systems deeply connected to physical reality. For factories and manufacturers like Jaguar Land Rover, this is about protecting their systems from ransomware actors. For power producers and utility distributors, it could mean the difference between life and death for their business, employees, and citizens, with major national security implications.

To meet this increased risk threshold, telemetry must become the connective tissue of resilience. It must be more closely watched, more deeply understood, and more intelligently managed. Its value must be gauged as early as possible, and its volume must be routed intelligently to keep detection and analytics tools from being flooded while retaining the underlying data for bulk analysis.

The next decade of enterprise security and AI will depend on how effectively organizations bridge this divide. The data being kept out of SIEMs today to avoid flooding them will need to fuel your AI. The telemetry from isolated networks will have to be connected to power real-time visibility across your enterprise.

The world will run on this data – and so should the security of your organization.
