AVEVA Historian, formerly from Wonderware, is a time-series-optimized data store that lets users capture and store high-fidelity industrial big data to unlock trapped potential for operational improvements.
Databricks Data Intelligence Platform
Score 8.7 out of 10
Databricks in San Francisco offers the Databricks Lakehouse Platform (formerly the Unified Analytics Platform), a data science platform and Apache Spark cluster manager. The Databricks Unified Data Service aims to provide a reliable and scalable platform for data pipelines, data lakes, and data platforms. Users can manage the full data journey: ingest, process, store, and expose data throughout an organization. Its Data Science Workspace is a collaborative environment for practitioners to run…
Paired with Citect SCADA or System Platform, this is an excellent process historian, and it also works well for collecting OPC data. It is well suited to basic data storage, retrieval, and analysis, but not to very large deployments: multiple instances would need to be used to scale up, with the data fed into a second-tier/enterprise historian for corporate user consumption.
Medium to large data-throughput shops will benefit the most from Databricks' Spark processing; smaller, more casual use cases may find the barrier to entry a bit too high. The overhead of kicking off a Spark compute job can actually make small workloads take longer, but past a certain scale the performance returns cannot be beat.
Query performance--slow for very long-term/large queries; the latest version, which we have yet to commission, has some improvements in this area
User interface--the trend, query, and Excel add-ins are basic and could do with a refresh; the web-based clients are a paid add-on and less fully featured, so not a true replacement
Connectivity--Wonderware System Platform driver packs are required for additional data source types where other products provide native connectors
Connecting my local code in Visual Studio Code to my Databricks Lakehouse Platform cluster so I can run it on the cluster: the old databricks-connect approach has many bugs and is hard to set up, and the new Databricks extension for Visual Studio Code doesn't let developers debug their code line by line (we can only run the code); see the connection sketch after this list
Perhaps provide a dedicated Databricks Lakehouse Platform IDE that users can rely on to develop locally
Visualization in MLflow experiments could be enhanced
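As a minimal sketch of the local-development workflow mentioned in the first item above, assuming Databricks Connect v2 (the databricks-connect package, version 13 or later) is installed locally; the workspace URL, token, and cluster ID are placeholders, not values from this review:

```python
# Minimal sketch, assuming Databricks Connect v2 (databricks-connect >= 13.0)
# is installed locally; the workspace URL, token, and cluster ID are placeholders.
from databricks.connect import DatabricksSession

# Build a Spark session whose queries execute on the remote Databricks cluster.
spark = DatabricksSession.builder.remote(
    host="https://<your-workspace>.cloud.databricks.com",
    token="<personal-access-token>",
    cluster_id="<cluster-id>",
).getOrCreate()

# The DataFrame work runs on the cluster; only the driver-side Python runs
# locally, so that part can be stepped through in a local debugger.
df = spark.range(100).toDF("n")
print(df.selectExpr("sum(n) AS total").collect())
```

Because only the driver-side Python runs on the laptop while the DataFrame operations execute remotely, line-by-line debugging is limited to the local code, which matches the limitation described above.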
It is an amazing platform for designing experiments and delivering deep-dive analyses that require highly complex queries, and its shared workspaces allow information and insights to be shared across the company while keeping them secure.
In terms of graph generation and interaction, it could improve its UI and UX.
Some of the best customer and technology support that I have ever experienced in my career. You pay for what you get, and you get the Rolls-Royce. It reminds me of SAS customer support in the 2000s, when the tools were reaching their limits and their engineers wanted to know more about what we were doing, long before "data science" was even a name. Databricks truly embraces the partnership with its customers and helps them with any given challenge.
AVEVA Historian, formerly Wonderware, was the best of the process-tier historians in terms of reliability and functionality. It is still under development and not a "dead" product. It is also more cost-effective than the more full-featured enterprise historians, such as PI, which our organization is not yet ready for. The right feature set at the right cost, coupled with current support, were the key factors in the decision.
The most important factors differentiating the Databricks Lakehouse Platform from these other platforms are its support for ACID transactions and the time travel feature (sketched below). Native integration with managed MLflow is also a plus. EMR, Cloudera, and Hortonworks are not as optimized when it comes to Spark job execution, and those platforms need to be self-managed, which is another huge hassle.
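As a rough illustration of the ACID writes and time travel mentioned above, here is a hedged sketch using Delta Lake on a Databricks-style Spark session; the table name "events" is an illustrative placeholder, not something from this review:

```python
# Sketch only: assumes a Databricks (or Delta Lake-enabled) Spark session and
# permission to create tables; the table name "events" is a placeholder.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # on Databricks, `spark` is already provided

# Each Delta write is an ACID transaction that produces a new table version.
spark.range(0, 5).write.format("delta").mode("overwrite").saveAsTable("events")  # version 0
spark.range(5, 10).write.format("delta").mode("append").saveAsTable("events")    # version 1

# Time travel: query the table as it existed at an earlier version.
v0 = spark.read.option("versionAsOf", 0).table("events")
print(v0.count())  # 5 rows, i.e. only the first write

# Equivalent SQL form:
spark.sql("SELECT COUNT(*) FROM events VERSION AS OF 0").show()
```

A timestamp can be used instead of a version number via the timestampAsOf option, which is what makes audits and rollbacks of historical data practical.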
Increased efficiency, reduction in labour for preparing reports--data is available to be queried and reported with less effort
Increased production efficiency--near real-time data availability and comparisons to historical data have been used to make faster and better operational decisions
Increased reliability--data has been used for maintenance optimization and planning purposes