Summary of "Insights Hub Business Intelligence - Preparation of data sources"
High-level summary
The video is a product/how-to walkthrough of the Insights Hub Business Intelligence (BI) and Business Intelligence Data applications. It focuses on preparing and publishing data sources for analytics (Tableau workbooks), emphasizing the operational workflow for building, configuring, synchronizing, and securing BI data sources from:
- IoT time series
- Integrated Data Lake files (CSV / parquet)
- Opcenter Intelligence (data warehouse)
The goal: enable analysts to build visualizations in the BI app with reliable, governed data sources.
Key product capabilities and operational flow
- Two-app delivery: customers receive
  - Business Intelligence (visualization / analytics)
  - Business Intelligence Data (data-source management / synchronization)
- Role requirement: the author role must be enabled to use BI Data.
- Centralized access control: BI Data inherits permissions from Insights Hub.
- Data-source metadata exposes key operational details: type icon, sync status, last-updated timestamp, and size of the update.
Typical data-source creation workflow
- Choose source type:
  - IoT time series (asset types / instances)
  - Integrated Data Lake (CSV or parquet folder)
  - Time series events
  - Opcenter Intelligence query
- Select scope:
  - Specific assets or all instances of an asset type
  - Pick aspects/variables to include (e.g., production, electricity)
- Configure update mechanism:
  - Continuous sliding window (duration + frequency)
  - Fixed historical time range
- Optional: enable aggregated data if source size is large; an override exists to relax limits for special cases
- Name the data source and assign it to a project
- Finish → data source enters pending state awaiting first synchronization
- Open the Business Intelligence app and create workbooks/charts with the standard Tableau UI
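The creation workflow above can be sketched as a simple configuration object. This is a hypothetical illustration only: the product exposes these choices through its UI, and all field names here are assumptions, not the product's actual schema.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a BI data-source configuration.
# Field names are illustrative; the real product configures this via its UI.
@dataclass
class DataSourceConfig:
    name: str                       # meaningful data-source name
    project: str                    # project assignment for governance
    source_type: str                # "iot_timeseries" | "data_lake" | "opcenter"
    scope: list[str]                # asset instances, or ["all"] for an asset type
    aspects: list[str] = field(default_factory=list)  # e.g. production, electricity
    update_mode: str = "continuous"    # "continuous" sliding window or "fixed" range
    window_hours: int = 24             # sliding-window duration (continuous mode)
    frequency_minutes: int = 60        # sync frequency (continuous mode)
    aggregated: bool = False           # enable when raw volume is large
    status: str = "pending"            # new sources await first synchronization

cfg = DataSourceConfig(
    name="plant-energy",
    project="energy-analytics",
    source_type="iot_timeseries",
    scope=["all"],
    aspects=["production", "electricity"],
)
print(cfg.status)
```

After the first synchronization completes, the source would move out of the pending state and become usable in BI workbooks.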
Data preview and schema checks
- CSV and parquet files are validated on import for variable detection and data types before publishing.
- When using Opcenter Intelligence you can preview columns and use an Advanced Query to expand/join related entities (OData expand, filter, select parameters).
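A minimal sketch of the kind of variable and data-type detection a CSV import performs. This uses only the standard library and is an assumption about the general technique; the product's actual validation logic is not public.

```python
import csv
import io

def detect_schema(csv_text: str) -> dict[str, str]:
    """Guess each column's type (int, float, or str) from the first data row."""
    reader = csv.DictReader(io.StringIO(csv_text))
    first = next(reader)
    schema = {}
    for column, value in first.items():
        for caster, type_name in ((int, "int"), (float, "float")):
            try:
                caster(value)
                schema[column] = type_name
                break
            except ValueError:
                schema[column] = "str"  # overwritten if a later cast succeeds
    return schema

sample = "part_id,quantity,unit_cost\nA-100,4,2.50\n"
print(detect_schema(sample))  # {'part_id': 'str', 'quantity': 'int', 'unit_cost': 'float'}
```

Checking detected types at import time, as recommended later in this summary, catches mistyped columns before they cause downstream analysis errors.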
Opcenter Intelligence integration
- Choose a data entity (table) and preview its columns.
- Advanced Query options allow expanding and joining contextual tables (for example, Site or Equipment) using OData expand syntax so joins can be performed upstream instead of in the BI layer.
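An OData query of the kind described might look roughly like the following. The endpoint, entity, and field names here are illustrative placeholders, not the product's real schema; only the `$expand` / `$filter` / `$select` query options come from the video.

```python
from urllib.parse import urlencode

# Illustrative OData query: expand related Site and Equipment entities so the
# join happens upstream in Opcenter rather than in the BI layer.
# Endpoint and field names are hypothetical examples.
base = "https://example.com/odata/OperationResponse"  # placeholder endpoint
params = {
    "$expand": "Site,Equipment",
    "$filter": "Quantity gt 0",
    "$select": "OperationId,Quantity,Site/Name,Equipment/Name",
}
query_url = base + "?" + urlencode(params)
print(query_url)
```

Performing the expand/join in the query keeps the BI data source flat and ready to visualize, instead of joining tables inside the BI layer.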
Permissions, governance and access control
- Permissions are inherited from Insights Hub, providing centralized access control so only authorized users can access specific data sources.
- Data-source metadata (icon, sync status, timestamps, update size) is available for operational monitoring and auditing.
Scalability, limits and performance considerations
- Time-series data point limits:
  - Continuous-update time series: up to 3,000,000 data points (the limit decreases at higher update frequencies to protect performance)
  - Fixed time-range IoT data sources: up to 30,000,000 data points
- Synchronization status and update size are exposed per data source as operational KPIs.
- Update frequency and time-window are tunable; choose trade-offs between freshness and performance / data volume.
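The limits above support a back-of-envelope sizing check before creating a source. The formula below is an assumption about how points accumulate (assets × variables × sampling rate × window), not documented product behavior; only the 3M / 30M limits come from the video.

```python
# Limits stated in the video.
CONTINUOUS_LIMIT = 3_000_000
FIXED_LIMIT = 30_000_000

def estimated_points(assets: int, variables: int,
                     samples_per_hour: int, window_hours: int) -> int:
    """Rough estimate: data points ~ assets x variables x samples in the window."""
    return assets * variables * samples_per_hour * window_hours

# Example: 21 assets (as in the video), 5 variables,
# one sample per minute, 7-day sliding window.
points = estimated_points(21, 5, 60, 24 * 7)
print(points, points <= CONTINUOUS_LIMIT)  # 1058400 True
```

If the estimate approaches the limit, the aggregated-data option (or a fixed-range source with its higher limit) is the documented fallback.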
Concrete examples and case studies (from the video)
- IoT asset-type data source:
  - Selected an asset type with 21 asset instances
  - Included production and electricity aspects
  - Excluded quality codes
- Integrated Data Lake:
  - Bill of Materials.csv imported and validated (static part data)
  - Option to use a whole folder of parquet files so the data source updates automatically when new files are added
- Opcenter Intelligence:
  - Operation Response table chosen
  - Used Advanced Query to expand joins with Site or Equipment via OData expand syntax
Actionable recommendations and best practices
- Match update mechanism to the use case:
  - Use continuous updates for rolling fresh data (tune window and frequency)
  - Use fixed-range sources for large historical analyses (benefit from higher point limits)
- Use aggregated data mode when raw data size threatens performance
- Prefer parquet folders for Data Lake ingestion to enable incremental auto-updates when new files arrive
- Name data sources meaningfully and assign them to appropriate projects for discoverability and governance
- Verify variable detection and data types when importing CSVs to avoid downstream analysis errors
- Push joins and filters into Opcenter (OData Advanced Query) to reduce downstream transformation overhead in BI
- Monitor data-source “last updated” and “size of update” as routine operational KPIs to detect sync issues or unexpected volume changes
- Ensure author role and Insights Hub permissions are configured before expecting users to create/manage BI data sources
Frameworks, processes and playbooks
- Data-source creation playbook (concise): select source → scope (assets / variables) → update mode (continuous / fixed) → aggregate / override options → name + project → synchronize → use in BI
- Data governance model: centralized permission inheritance from Insights Hub for consistent access control
- Data ingestion best practice: prefer parquet folders for incremental ingestion; validate CSV schemas on import
- Query / ETL pattern: push joins and filters into Opcenter (OData) to reduce downstream transformation work in BI
Operational callout: keep synchronization status, update size, and last-updated timestamps visible and monitored as part of routine data-ops.
Operational metrics to track (recommended)
- Data-source sync status (pending / complete / failed)
- Last synchronization timestamp (freshness)
- Size of last update (data volume)
- Data-point counts per source (compare to 3M / 30M limits)
- Update frequency and latency (time between data generation and availability in BI)
- Number of assets / instances included (example: 21 instances)
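A sketch of routine freshness monitoring over these KPIs. The metadata fields (sync status, last-updated timestamp, update size) are the ones the video lists; how they would be retrieved programmatically is not specified, so the input here is a plain dict and the field names are assumptions.

```python
from datetime import datetime, timedelta, timezone

def check_freshness(source: dict, max_age: timedelta) -> list[str]:
    """Flag a data source whose sync failed or whose last update is too old."""
    issues = []
    if source["sync_status"] != "complete":
        issues.append(f"sync status is {source['sync_status']}")
    age = datetime.now(timezone.utc) - source["last_updated"]
    if age > max_age:
        issues.append(f"last update {age} ago exceeds {max_age}")
    return issues

# Hypothetical snapshot of one data source's metadata.
source = {
    "name": "plant-energy",
    "sync_status": "complete",
    "last_updated": datetime.now(timezone.utc) - timedelta(minutes=30),
    "update_size_mb": 12,
}
print(check_freshness(source, max_age=timedelta(hours=1)))  # []
```

Running a check like this on a schedule turns the per-source metadata into the routine data-ops alarms recommended above.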
Presenters / sources
- Source: Insights Hub Business Intelligence video (Insights Hub Launchpad / Business Intelligence and Business Intelligence Data applications). No individual presenters named.