UNS: Powering the future of smart manufacturing


Summary

The unified namespace (UNS) is emerging as the next frontier of transformation in manufacturing, empowering the next wave of technology innovation through its data-centric approach. In this article, we break down what the unified namespace is, walk through a realistic example of how it works in practice, and outline how manufacturers can get started.

Ultimately, the question isn't whether to implement UNS, it's how quickly you can get started.

The Unified Namespace:

Fundamentally transforming the biotech manufacturing data ecosystem

Overview

The biopharmaceutical manufacturing industry faces a critical challenge in its technological transformation: data exists everywhere but is nearly impossible to access in a consistent way. Equipment generates thousands of data points, quality systems track critical parameters, and manufacturing execution systems orchestrate complex processes. However, these systems rarely communicate effectively with each other.

The Unified Namespace (UNS) represents a paradigm shift in how biotech manufacturers approach data architecture. The decades-old pattern of point-to-point integrations between siloed systems is no longer a sustainable approach; data is now the fundamental currency of technology. UNS creates a central, event-driven data layer where all manufacturing data flows in real time, contextualized and immediately accessible to any authorized system or user.

This isn't just another IT buzzword. The UNS is a fundamental reimagining of manufacturing data architecture. Leading biotech companies are leveraging the UNS framework to achieve regulatory compliance, operational efficiency, and the data foundation necessary for AI and advanced analytics.

Ambiguity to reality: Defining the Unified Namespace

Core Definition

The Unified Namespace (UNS) is a centralized, hierarchical data architecture that serves as the single source of truth for all operational data in a manufacturing environment. It operates as an event-driven messaging backbone where:

  • All data flows to a single, unified location: Equipment, sensors, manufacturing execution systems (MES), laboratory information management systems (LIMS), enterprise resource planning (ERP), and other systems publish their data to the UNS

  • Data is semantically organized: Information is structured in a logical, hierarchical namespace (typically following ISA-95 standards) that mirrors the physical and logical organization of the facility and the company

  • Information subscriptions vs. point integrations: Any authorized system or application can subscribe to relevant data streams without requiring custom point-to-point integrations, following a publish-subscribe (pub-sub) model

  • Context is preserved: Data includes values and the metadata describing what it represents, where it came from, quality indicators, and timestamps
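To make this concrete, here is a minimal sketch of what a single contextualized data point could look like as a JSON message. The topic mirrors the hierarchy described above; the field names and source-system identifier are illustrative, not a formal standard.

```python
import json
from datetime import datetime, timezone

# Illustrative topic following the Enterprise/Site/Area/Line/Equipment/Sensor pattern
topic = "BlueBell Bio/Allentown/FillFinish/Line3/Filler/Temperature"

payload = {
    "value": 37.2,                                        # the actual measurement
    "timestamp": datetime.now(timezone.utc).isoformat(),  # stamped at the source
    "quality": "Good",                                    # Good / Bad / Uncertain
    "metadata": {
        "units": "Celsius",
        "sourceSystem": "Filler-PLC-01",                  # hypothetical equipment ID
        "batchId": "Batch-2026-047",
    },
}

print(topic, "->", json.dumps(payload))
```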

The Traditional Problem: Integration spaghetti

Traditional facilities face an integration nightmare when connecting new and legacy systems.

The legacy point-to-point integration model:

  • Historically, data and system integrations have been built through system-to-system integration points, usually leveraging middleware to contextualize the data for each receiving system

  • Each system that needed data required a custom integration, increasing engineering work, maintenance costs, and timelines

  • These integrations are one-off and cannot be reused across systems

  • Fully integrating a facility with 20 systems would require 190 individual integrations (a full mesh of n systems needs n(n-1)/2 connections; for n = 20, that is 190)

  • Each integration is brittle, custom-coded, and expensive to maintain

  • When a new system is added or an upgrade is made, every connection touching that system must be rebuilt or modified

Real-World Impact:

  • Integration projects can routinely take 12-18 months and cost $500K-$2M per system pair

  • Data is trapped in silos, available only to pre-integrated systems

  • Real-time visibility is nearly impossible; most data is only available hours or days later through batch ETL (extract, transform, load) processes

  • Advanced use cases such as AI and digital twins cannot be enabled without easy access to data, which is a large contributing factor to these initiatives failing within manufacturing networks

The Unified Namespace Solution

UNS flips this model to focus on data: architecture and reusability are at the forefront of its approach. The key capabilities of a UNS framework:

Hub-and-spoke architecture:

  • Each system publishes its data once to the namespace

  • Any system that needs that data subscribes to it using the namespace topic

  • Adding a new system requires one integration to the namespace topic to consume the published data; any other system that wants that data can consume the same published stream

  • The namespace becomes the single source of truth

  • Data context must be created: common definitions and contextualization of the data must be established to make it understandable by consumers. This is further broken down in the example below
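As an illustration of the hub-and-spoke pattern, the toy sketch below shows how a single publish fans out to every subscriber without any system knowing about the others. It is a teaching aid, not a production broker; the MiniHub class is purely hypothetical.

```python
from collections import defaultdict
from typing import Callable

class MiniHub:
    """Toy hub-and-spoke broker: systems only ever talk to the namespace."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic: str, callback: Callable[[str, object], None]):
        self._subscribers[topic].append(callback)

    def publish(self, topic: str, payload: object):
        # One publish fans out to every subscriber of the topic.
        for callback in self._subscribers[topic]:
            callback(topic, payload)

hub = MiniHub()
topic = "BlueBell Bio/Allentown/CellCulture/BioReactor-2A/Temperature/Value"

# Two consumers of the same published data -- no point-to-point links needed.
hub.subscribe(topic, lambda t, v: print(f"SCADA sees {v} on {t}"))
hub.subscribe(topic, lambda t, v: print(f"Historian archives {v}"))

hub.publish(topic, 37.2)  # the bioreactor publishes once
```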

Key architectural components:

  • Message Broker: The technical foundation for brokering message exchanges. The message broker typically uses MQTT (Message Queuing Telemetry Transport), a lightweight publish-subscribe protocol.

  • Hierarchical Topic Structure: Data is organized in a logical hierarchy following ISA-95/ISA-88 standards (see the contextualized data examples below)

  • Contextualized Data: Each data point contains contextualized metadata to make it easy to understand and follow. A key principle of this is ‘contextualization at the source’ of data

    • Value: The actual measurement or state

    • Timestamp: When it was generated (at the source)

    • Quality: Good/Bad/Uncertain status

    • Metadata: Units of measure, source system, equipment ID, batch context

  • Contextualized data examples:

    • Enterprise/Site/Area/Line/Equipment/Sensor/Metric

    • BlueBell Bio/Allentown/FillFinish/Line3/Filler/Temperature/Value

  • Event-Driven Architecture: A key principle of the UNS. Data flows continuously as events occur, not through scheduled batch processes. When a reactor temperature changes, that event immediately publishes to the UNS.

  • Decoupled Systems: Publishers don't know who's consuming their data; subscribers don't know where data originates. This loose coupling makes the architecture flexible and resilient.
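The hierarchical topic structure is easy to encode in software. The sketch below builds ISA-95-style topic paths matching the examples above; the class name and its level fields are an illustrative convention, not part of any standard.

```python
from dataclasses import dataclass

@dataclass
class TopicPath:
    """Builds Enterprise/Site/Area/Equipment/Sensor topic paths (illustrative)."""
    enterprise: str
    site: str
    area: str
    equipment: str
    sensor: str

    def topic(self, leaf: str) -> str:
        return "/".join([self.enterprise, self.site, self.area,
                         self.equipment, self.sensor, leaf])

reactor_temp = TopicPath("BlueBell Bio", "Allentown", "CellCulture",
                         "BioReactor-2A", "Temperature")
print(reactor_temp.topic("Value"))    # .../BioReactor-2A/Temperature/Value
print(reactor_temp.topic("Quality"))  # .../BioReactor-2A/Temperature/Quality
```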

The UNS Process: A Detailed Flow Example

Let's walk through a concrete example of how UNS operates in a biotech manufacturing context; specifically, a bioreactor batch in a monoclonal antibody (mAb) production facility.

The Scenario: Bioreactor Cell Culture Process

Context: A 2,000L bioreactor is running a 14-day fed-batch cell culture process to produce a therapeutic monoclonal antibody. Multiple systems need to monitor and control the process.

Traditional Architecture Data Flow

  1. Bioreactor PLC (Programmable Logic Controller) monitors temperature, pH, dissolved oxygen, agitation speed, and pressure

  2. SCADA System receives data from PLC via OPC-UA and displays it to operators

  3. MES (Manufacturing Execution System) needs the same data for batch records but requires a separate integration from the PLC or SCADA

  4. Historian (like OSIsoft PI) archives time-series data via another integration from SCADA

  5. LIMS needs fermentation parameters when receiving samples but requires manual data entry or custom integration with MES

  6. QMS (Quality Management System) needs trend data for batch release but must pull from historian via scheduled reports

  7. ERP needs batch status and yield data via custom integration with MES

  8. Data Lake for analytics requires ETL jobs pulling from multiple systems overnight

Leading to inefficient results:

  • 7+ custom integrations for a single piece of equipment

  • Data is only available in different systems at different times with different latencies

  • No single view of process state

  • Data scientists spend weeks gathering and reconciling data across systems

  • Adding a new analytics tool requires 4-5 new integrations

Unified Namespace Data Flow

Step 1: Bioreactor Publishes Data to UNS

The bioreactor controller publishes data to hierarchical topics in the UNS:

  • BlueBell Bio/Allentown/CellCulture/BioReactor-2A/Temperature/Value: 37.2°C

  • BlueBell Bio/Allentown/CellCulture/BioReactor-2A/Temperature/Timestamp: 2026-01-27T14:23:15.234Z

  • BlueBell Bio/Allentown/CellCulture/BioReactor-2A/Temperature/Quality: Good

  • BlueBell Bio/Allentown/CellCulture/BioReactor-2A/Temperature/Units: Celsius

  • BlueBell Bio/Allentown/CellCulture/BioReactor-2A/Temperature/SetPoint: 37.0°C

  • BlueBell Bio/Allentown/CellCulture/BioReactor-2A/pH/Value: 7.12

  • BlueBell Bio/Allentown/CellCulture/BioReactor-2A/pH/Timestamp: 2026-01-27T14:23:15.234Z

  • BlueBell Bio/Allentown/CellCulture/BioReactor-2A/pH/Quality: Good

  • BlueBell Bio/Allentown/CellCulture/BioReactor-2A/DissolvedOxygen/Value: 42.3

  • BlueBell Bio/Allentown/CellCulture/BioReactor-2A/DissolvedOxygen/Units: Percent

Key Point: The bioreactor publishes once, every second, to the UNS. It doesn't know or care who's listening.
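A minimal publisher sketch illustrates this fire-and-forget pattern. It uses the open-source paho-mqtt client and assumes paho-mqtt 2.x with a broker reachable on localhost:1883; read_temperature() is a hypothetical stand-in for the real PLC or OPC-UA read.

```python
import time
from datetime import datetime, timezone

import paho.mqtt.client as mqtt  # pip install paho-mqtt

BASE = "BlueBell Bio/Allentown/CellCulture/BioReactor-2A"

def read_temperature() -> float:
    """Hypothetical stand-in for the real sensor/PLC read."""
    return 37.2

client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)  # paho-mqtt 2.x API
client.connect("localhost", 1883)                       # assumed broker location
client.loop_start()

while True:
    client.publish(f"{BASE}/Temperature/Value", read_temperature())
    client.publish(f"{BASE}/Temperature/Quality", "Good")
    client.publish(f"{BASE}/Temperature/Timestamp",
                   datetime.now(timezone.utc).isoformat())
    time.sleep(1)  # publish once per second, with no idea who is listening
```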

Step 2: Multiple systems subscribe to relevant data

The SCADA System subscribes to:

  • BlueBell Bio/Allentown/CellCulture/BioReactor-2A/#

    • (The "#" wildcard means "all data from this equipment")

  • Receives all process parameters in real-time

  • Displays on operator HMI screens

  • Triggers alarms if parameters exceed limits

MES subscribes to:

  • BlueBell Bio/Allentown/CellCulture/BioReactor-2A/+/Value

  • BlueBell Bio/Allentown/CellCulture/BioReactor-2A/+/Timestamp

    • (The "+" wildcard matches a single level; here, each parameter name)

  • Receives parameter data and associates it with Batch-2026-047

  • Automatically populates batch record with time-series data

  • No manual transcription or duplicate integration needed

Historian subscribes to:

  • BlueBell Bio/Allentown/CellCulture/#

    • (All cell culture area data)

  • Archives all time-series process data automatically

  • Data arrives with timestamp from source (no timestamp conflicts)

  • Contextualized with batch, equipment, and area information

Process Analytics Engine subscribes to:

  • BlueBell Bio/Allentown/CellCulture/BioReactor-2A/Temperature/Value

  • BlueBell Bio/Allentown/CellCulture/BioReactor-2A/pH/Value

  • BlueBell Bio/Allentown/CellCulture/BioReactor-2A/DissolvedOxygen/Value

  • BlueBell Bio/Allentown/CellCulture/BioReactor-2A/VCD/Value

  • BlueBell Bio/Allentown/CellCulture/BioReactor-2A/Viability/Value

  • Receives real-time process data

  • Runs multivariate models to predict viable cell density (VCD) trends

The Process Analytics Engine publishes its predictions back to the UNS:

  • BlueBell Bio/Allentown/CellCulture/BioReactor-2A/Predictions/VCD_24hr: 8.2e6 cells/mL

MES, SCADA, and other systems subscribe to these predictions
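The subscriber side is equally small. The sketch below uses the "#" wildcard to receive everything under BioReactor-2A, again assuming paho-mqtt 2.x and a broker on localhost.

```python
import paho.mqtt.client as mqtt  # pip install paho-mqtt

# "#" is MQTT's multi-level wildcard: all topics under BioReactor-2A.
TOPIC = "BlueBell Bio/Allentown/CellCulture/BioReactor-2A/#"

def on_message(client, userdata, message):
    # Every value arrives as an event the moment it is published.
    print(message.topic, "=", message.payload.decode())

client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)  # paho-mqtt 2.x API
client.on_message = on_message
client.connect("localhost", 1883)  # assumed broker location
client.subscribe(TOPIC)
client.loop_forever()
```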

Step 3: Enriched context from multiple sources

Consider what happens when a sample is taken and tested in a biotech lab. LIMS can publish the results automatically to the UNS, where they can be consumed by other solutions to gain insight into the process, flag any deviations, and act on the lab results.

LIMS publishes lab results to UNS:

  • BlueBell Bio/Allentown/CellCulture/BioReactor-2A/LabResults/VCD: 7.8e6 cells/mL

  • BlueBell Bio/Allentown/CellCulture/BioReactor-2A/LabResults/Viability: 94.2%

  • BlueBell Bio/Allentown/CellCulture/BioReactor-2A/LabResults/Glucose: 3.2 g/L

  • BlueBell Bio/Allentown/CellCulture/BioReactor-2A/LabResults/Lactate: 1.8 g/L

  • BlueBell Bio/Allentown/CellCulture/BioReactor-2A/LabResults/SampleID: S-2026-047-D7

  • BlueBell Bio/Allentown/CellCulture/BioReactor-2A/LabResults/BatchID: Batch-2026-047

Multiple systems subscribe to the UNS topic:

  • MES: Automatically links lab results to batch record (same namespace structure)

  • Process Analytics Engine: Compares predicted vs. actual VCD, refines model

  • QMS: Receives data for trending and batch release decision

  • Data Science Platform: Has process parameters + lab results in same time-aligned format

Step 4: Cross-system intelligence

As the UNS matures, the framework enables advanced use cases and closed-loop processing for advanced automation.

Advanced optimization example:

An AI-powered feed optimization system subscribes to:

  • Bioreactor process parameters (temperature, pH, DO, agitation)

  • Lab results (VCD, viability, glucose, lactate, product titer)

  • Feed pump flow rates and feed composition

The system:

  • Analyzes multivariate relationships in real-time

  • Predicts optimal feed rate adjustments to maximize product titer

  • Publishes recommended feed rate to UNS:

    • BlueBell Bio/Allentown/CellCulture/BioReactor-2A/FeedOptimization/RecommendedRate: 2.8 L/hr

    • BlueBell Bio/Allentown/CellCulture/BioReactor-2A/FeedOptimization/Confidence: 87%

    • BlueBell Bio/Allentown/CellCulture/BioReactor-2A/FeedOptimization/Rationale: "Glucose trending low, VCD above target"

Creating a connected workflow:

  • SCADA displays the recommendation to the operator

  • Operator reviews and approves (or modifies)

  • MES logs the decision and rationale in batch record

  • If approved, control system adjusts feed pump

  • Adjustment publishes back to UNS, completing the data loop

Key Insight: This cross-system intelligence happened without any system knowing about the others directly. The AI system didn't need integrations with SCADA, MES, LIMS, and the control system—it just subscribed to and published to the UNS.
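As a sketch of the operator-in-the-loop step above: the code subscribes to the recommendation topic, asks for approval, and publishes the approved rate back to the UNS for the control system to consume. The ApprovedRate topic and the console prompt are illustrative; a real deployment would route approval through SCADA or MES screens.

```python
import paho.mqtt.client as mqtt  # pip install paho-mqtt

BASE = "BlueBell Bio/Allentown/CellCulture/BioReactor-2A/FeedOptimization"

def on_message(client, userdata, message):
    recommended = float(message.payload)  # payload published as a number
    # Human-in-the-loop gate: nothing actuates without operator approval.
    answer = input(f"Apply recommended feed rate {recommended} L/hr? [y/N] ")
    if answer.lower() == "y":
        # The control system subscribes to this topic -- still no direct
        # integration between the AI system and the controller.
        client.publish(f"{BASE}/ApprovedRate", recommended)  # hypothetical topic

client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)  # paho-mqtt 2.x API
client.on_message = on_message
client.connect("localhost", 1883)
client.subscribe(f"{BASE}/RecommendedRate")
client.loop_forever()
```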

Industry example

Eli Lilly and Company: Equipment connectivity platform [1]

Eli Lilly has been actively building an equipment connectivity platform leveraging the UNS concept since 2022. The platform creates a standardized interface between their MES (manufacturing execution system) and LES (lab execution system) across sites. They have connected hundreds of instruments to the platform, with a goal of scaling across several of their sites.

Implementation Considerations for Biotech Companies

1.0 Message Brokers

Your broker choice determines how reliable, scalable, and compliant your UNS will be

The message broker is the real-time “nervous system” of your UNS. It handles how data is published and consumed across systems using event-driven patterns instead of point-to-point integrations.

Message broker examples and when to use them:

  • MQTT Brokers (HiveMQ, Mosquitto, AWS IoT Core, Azure IoT Hub)

    • Best for OT/IT integration, edge devices, PLCs, and real-time manufacturing data.

    • Lightweight, low latency

    • Supports hierarchical topics and publish/subscribe natively

    • Ideal for UNS implementations inside manufacturing plants and regulated environments.

    • Less cost intensive

  • Kafka

    • Best for extremely high-volume, high-throughput data streams (e.g., millions of events/sec).

    • Often used when organizations already have Kafka in their enterprise data platform.

    • Less native to OT, but powerful for large-scale analytics and event streaming

    • A more powerful option, most effectively used for enterprise scaling, where multiple manufacturing sites connect to a centralized framework

  • Evaluation Criteria

    • Throughput & latency – Can it handle real-time manufacturing data without delays?

    • High availability & clustering – No single point of failure for production operations.

    • Security – Native support for TLS, certificates, RBAC, and enterprise identity integration.

    • Regulatory readiness – Support for audit trails, access control, and validation documentation

2.0 Data connectors for manufacturing and IT systems

Data needs to be shared and contextualized

Data connectors are software products or modules that extract, transform, and publish data from source systems into a UNS usually with semantic/contextual mapping. These connectors should be chosen based on your manufacturing equipment landscape and use cases. The more standardized the landscape, the more standardized the connectors can be.

An important aspect of the UNS is getting messages into the same protocol, usually MQTT, a lightweight publish-subscribe messaging protocol. These connectors may need to transform other protocols, such as OPC-UA or flat files, into MQTT.

Common solution approaches we have seen in the industry include:

  • HighByte Intelligence Hub

  • Kepware (by PTC)

  • Ignition MQTT Engine + Ignition Edge

  • Custom built connectors as needed
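Whatever the product, a connector's job reduces to read, contextualize, publish. The sketch below shows that shape for a custom-built connector, assuming paho-mqtt; read_from_source() is a hypothetical stand-in for an OPC-UA or file read, and the tag-to-topic mapping is illustrative.

```python
import json
from datetime import datetime, timezone

import paho.mqtt.client as mqtt  # pip install paho-mqtt

def read_from_source() -> dict:
    """Hypothetical stand-in: a real connector would use the source system's
    own client library (OPC-UA, file watcher, historian API, etc.)."""
    return {"tag": "TT-201", "value": 37.2, "units": "Celsius"}

def contextualize(raw: dict) -> tuple[str, str]:
    """Map a raw tag to a UNS topic and a contextualized JSON payload
    (the tag-to-topic mapping here is illustrative)."""
    topic = "BlueBell Bio/Allentown/CellCulture/BioReactor-2A/Temperature"
    payload = json.dumps({
        "value": raw["value"],
        "units": raw["units"],
        "timestamp": datetime.now(timezone.utc).isoformat(),  # at the source
        "quality": "Good",
        "sourceTag": raw["tag"],
    })
    return topic, payload

client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)  # paho-mqtt 2.x API
client.connect("localhost", 1883)
topic, payload = contextualize(read_from_source())
client.publish(topic, payload)
```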

3.0 Security & compliance

UNS becomes the central nervous system of operations — so it must be locked down, auditable, and compliant with regulations. This is especially true if the data is to be used in GMP / GxP scenarios.

Common UNS methodologies and solutions may not meet the standards needed to support a regulated system or platform. Each UNS component, and the platform overall, should be evaluated to see if extensions need to be made to capture and store additional information to meet regulatory guidelines.

Example considerations include:

  • Role-Based Access Control (RBAC) at Topic Level

    • Control what can publish, subscribe, or modify data. Examples:

      • Operators can view data

      • MES can write batch IDs

      • AI systems can publish predictions

      • Only control systems can write set points

    • Access and control decision making should be done at the system level

  • Encryption in Transit & At Rest

    • TLS for all connections

    • Encrypted storage for logs, retained messages, and archives

  • Audit logging of data and access

    • Who accessed what, when, and why

    • Required for investigations, deviations, and compliance reviews

  • 21 CFR Part 11 / GxP alignment

    • Electronic records must be secure, attributable, legible, contemporaneous, original, and accurate (ALCOA+)

    • E-signatures and approvals must be traceable and tamper-evident
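To make topic-level RBAC concrete, the sketch below implements MQTT-style wildcard matching ("+" for a single level, "#" for the remainder) against an illustrative role-to-permission table. Production brokers enforce this through their own ACL and identity mechanisms; this is only a model of the logic.

```python
def topic_matches(pattern: str, topic: str) -> bool:
    """MQTT-style matching: '+' matches one level, '#' matches the rest."""
    p_parts, t_parts = pattern.split("/"), topic.split("/")
    for i, p in enumerate(p_parts):
        if p == "#":
            return True
        if i >= len(t_parts) or (p != "+" and p != t_parts[i]):
            return False
    return len(p_parts) == len(t_parts)

# Illustrative ACL: role -> list of (action, topic pattern) permissions
ACL = {
    "operator":       [("read",  "BlueBell Bio/#")],
    "ai-system":      [("read",  "BlueBell Bio/#"),
                       ("write", "BlueBell Bio/+/+/+/Predictions/#")],
    "control-system": [("write", "BlueBell Bio/+/+/+/+/SetPoint")],
}

def allowed(role: str, action: str, topic: str) -> bool:
    return any(a == action and topic_matches(p, topic)
               for a, p in ACL.get(role, []))

setpoint = "BlueBell Bio/Allentown/CellCulture/BioReactor-2A/Temperature/SetPoint"
print(allowed("operator", "write", setpoint))        # False: operators only read
print(allowed("control-system", "write", setpoint))  # True: controllers own set points
```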

4.0 Edge processing

Enabling speed, reliability and efficiency

Edge computing serves as the critical first layer in a Unified Namespace architecture. The edge layer processes data at or near the source before transmitting it to the centralized data infrastructure. Edge devices perform initial filtering, aggregation, normalization, and contextualization right on the factory floor. Processing locally is more effective than sending raw sensor readings, machine states, and operational events directly to cloud or enterprise systems. The proximity to data sources delivers three essential benefits:

  • Speed: by enabling millisecond-level responses for time-sensitive operations like quality control or safety shutdowns without waiting for cloud round-trips

  • Reliability: by maintaining local processing capabilities even when network connectivity to central systems is disrupted

  • Efficiency: by reducing bandwidth consumption and storage costs through intelligent data reduction

Edge processing means only sending meaningful changes, aggregated metrics, or anomalies rather than continuous raw data streams.
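A deadband filter is the simplest version of this idea. The sketch below forwards a reading only when it moves meaningfully away from the last value that was sent; the 0.1 degree threshold is illustrative.

```python
class DeadbandFilter:
    """Edge-side filter: publish only when the value moves more than
    `deadband` from the last published value (threshold is illustrative)."""

    def __init__(self, deadband: float):
        self.deadband = deadband
        self.last_sent = None

    def should_publish(self, value: float) -> bool:
        if self.last_sent is None or abs(value - self.last_sent) >= self.deadband:
            self.last_sent = value
            return True
        return False

temp_filter = DeadbandFilter(deadband=0.1)  # suppress jitter below 0.1 deg C
for reading in [37.20, 37.21, 37.24, 37.35, 37.36]:
    if temp_filter.should_publish(reading):
        print("publish", reading)  # only 37.20 and 37.35 reach the UNS
```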

For AI applications, edge computing can also run lightweight inference models locally for immediate decision-making while sending summarized results to the UNS, creating a hierarchical intelligence architecture where quick tactical decisions happen at the edge and strategic insights are generated centrally.

What Happens at the Edge

  • Local filtering & aggregation

    • Convert raw signals into meaningful events

    • Example: Instead of publishing every vibration sample, publish health indicators and alarms

  • Enrichment

    • Add context: equipment ID, recipe step, batch phase, unit of measure

  • Anomaly detection & Pre-processing

    • Detect abnormal behavior before data even hits the central systems

    • Reduces noise and improves data quality

  • Bandwidth reduction

    • Only relevant, structured data is sent to the UNS

    • Prevents cloud and network overload

The call for UNS:

The Unified Namespace is not just a better integration architecture; it's the data foundation for the future of biopharmaceutical manufacturing.

The biopharmaceutical industry stands at an inflection point. Traditional manufacturing data architectures, built on point-to-point integrations and batch data processing, cannot support the AI-driven, real-time, adaptive manufacturing practices that patients, regulators, and competitive dynamics demand.

The Unified Namespace is not optional; it's foundational infrastructure for modern biopharmaceutical manufacturing. Leading biotech companies aren't implementing UNS for incremental improvements. They're building the data architecture that will enable the next decade of manufacturing innovation.

For biotech companies still operating with traditional integration architectures, the message is clear: start now. The companies that build UNS foundations today will have compounding advantages in operational efficiency, quality, regulatory compliance, and innovation velocity. Those that delay will find themselves competing against rivals with fundamentally more capable manufacturing operations.

The UNS is the foundation for unlocking advanced manufacturing use cases such as:

  • Enabling AI at scale:

    • AI models require vast amounts of clean, contextualized, time-series data

    • UNS provides this data in a format AI can consume directly

    • Reduces data preparation, often cited as 80% of AI project time, to a fraction of that effort

  • Digital twins and In-silico manufacturing:

    • Digital twins need real-time data feeds and bidirectional communication

    • UNS provides the data infrastructure for digital twin ecosystems

    • Enables "test in silicon, manufacture in real world" workflows

  • Autonomous manufacturing:

    • Self-optimizing processes require closed-loop control with AI

    • UNS enables AI to read process state, make decisions, and execute actions

    • Path toward lights-out manufacturing for routine operations

  • Regulatory evolution:

    • Regulators increasingly expect real-time data and process understanding

    • UNS enables continuous verification and real-time release testing

    • Supports risk-based approach to quality assurance

  • Supply chain integration:

    • Extend UNS beyond facility walls to suppliers and logistics partners

    • Establishes end-to-end visibility from raw material to patient

    • Enables demand-driven, responsive manufacturing

How we can help

We have deep expertise in building successful digital and technology solutions in biotech manufacturing, including building UNS strategies and platforms. Our capabilities span the full transformation journey, from developing comprehensive data strategies that unify your MES, LIMS, ERP, and quality systems, to designing digital solutions grounded in GxP-compliant workflows. We understand that digital success in regulated environments requires more than impressive demos; it demands a business-centric view grounded in current digitalization maturity, data accessibility, and compliance requirements. Our experts have helped several Fortune 500 biotech and pharmaceutical companies successfully navigate these challenges. We bring battle-tested frameworks for overcoming the unique barriers biotech manufacturers face: from fragmented data across batch records and process historians to navigating regulatory constraints while modernizing infrastructure. We can help you achieve meaningful business value and ultimately transform with purpose.

The question isn't whether to implement UNS—it's how quickly you can get started.

Citations:

  1. https://www.hivemq.com/case-studies/eli-lilly/
