An enthralling tool for providing analytics-ready data pipelines.
December 14, 2021


Martin Lance | TrustRadius Reviewer
Score 10 out of 10
Vetted Review
Verified User

Overall Satisfaction with Upsolver

Define pipelines using only SQL on the auto-generated schema-on-read. Add upserts and deletes to data lake tables. Blend streaming and large-scale batch data. Automate schema evolution and reprocessing from the previous state. Pipelines are orchestrated automatically, with fully managed execution at scale. Strong consistency is guaranteed over object storage.
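To give a feel for the SQL-only approach, a minimal ingestion job might look roughly like the sketch below; the connection, catalog, and table names are hypothetical, and the exact Upsolver syntax may differ.

  -- Hypothetical ingestion job: copy raw events from object storage into
  -- a staging table whose schema is inferred on read.
  CREATE JOB ingest_orders
    AS COPY FROM S3 my_s3_connection
       LOCATION = 's3://my-bucket/raw/orders/'
    INTO my_catalog.staging.orders_raw;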
  • Data lake table management.
  • High performance at scale on complex data.
  • Converts data to Parquet-based format for fast queries.
  • Enables low-latency dimension tables using streaming upserts (see the sketch after this list).
  • Continuous, lock-free compaction.
  • Automatic schema on read and data profiling.
  • Lower cloud compute and data engineering cost.
  • Free for small workloads.
  • Connects you quickly to a solutions expert whenever a problem arises.
  • Integrations and connectors.
  • Effective stream processing engines.
  • Ability to write to Amazon.
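To illustrate the streaming-upsert point from the list above, here is a conceptual sketch written as standard SQL MERGE; Upsolver expresses the same idea in its own SQL dialect, and all table and column names here are hypothetical.

  -- Conceptual upsert into a dimension table: the latest event per
  -- customer_id updates or inserts the corresponding row.
  MERGE INTO dim_customers AS t
  USING latest_customer_events AS s
    ON t.customer_id = s.customer_id
  WHEN MATCHED THEN
    UPDATE SET email = s.email, updated_at = s.event_time
  WHEN NOT MATCHED THEN
    INSERT (customer_id, email, updated_at)
    VALUES (s.customer_id, s.email, s.event_time);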
Great at streamlining workloads. Continuously serves data to lakes, warehouses, databases, and streaming systems. Near-zero maintenance overhead for analytics-ready data. Blends streaming and large-scale batch data. Low-code, SQL-based data transformation. UI-driven ingestion connections with auto-generated schema-on-read. Automated pipeline orchestration with built-in data lake best practices.
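As an example of the low-code transformation style, a job that turns a raw click stream into an analytics-ready daily table might look roughly like this; all names are illustrative, and the exact Upsolver syntax may differ.

  -- Hypothetical transformation job: aggregate raw clicks into a
  -- daily, query-ready table.
  CREATE JOB daily_clicks
    AS INSERT INTO my_catalog.analytics.clicks_by_day
    SELECT user_id,
           DATE_TRUNC('day', event_time) AS click_day,
           COUNT(*) AS clicks
    FROM my_catalog.staging.clicks_raw
    GROUP BY user_id, DATE_TRUNC('day', event_time);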

Do you think Upsolver delivers good value for the price?

Yes

Are you happy with Upsolver's feature set?

Yes

Did Upsolver live up to sales and marketing promises?

Yes

Did implementation of Upsolver go as expected?

Yes

Would you buy Upsolver again?

Yes

Data lineage visibility from source to lake to target. Efficient ingestion of transactional data from databases using JDBC or CDC. Integration with lake query engines. Automated use of low-cost spot instances. Automated use of low-cost cloud object storage. Automated vacuuming of stale and intermediate data. Continuous, high-integrity table management.
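For the CDC case, a replication job might look roughly like this sketch; the connection name, option, and table names are hypothetical, and the actual Upsolver syntax may differ.

  -- Hypothetical CDC job: continuously replicate a transactional table
  -- from a relational database into a staging table in the lake.
  CREATE JOB replicate_orders
    AS COPY FROM POSTGRES my_pg_connection
       TABLE_INCLUDE_LIST = ('public.orders')
    INTO my_catalog.staging.orders_cdc;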