- HBase stores big data well and is horizontally scalable.
- Another major reason is security: we can secure the HBase database using Apache Atlas and Ranger.
- It can store data in any format: structured, semi-structured, and unstructured.
- HBase provides strongly consistent reads and writes; we use it for high-speed requirements when we do not need RDBMS features such as full transaction support or typed columns.
- There are very few commands in HBase.
- Stored-procedure functionality is not available, so it has to be implemented separately.
- HBase is CPU- and memory-intensive with large sequential I/O access, whereas MapReduce jobs are primarily I/O-bound with fixed memory. Integrating HBase with MapReduce jobs results in unpredictable latencies.
- Scalable and truly non-relational data
- HBase operations run in real-time on its database rather than MapReduce jobs
- Scales linearly to support billions of rows with millions of columns
- Difficult to understand for people who are building custom tools for SQL-like purposes
- Cannot be used for transactional datasets
- Excellent for read performance
- Great support for the Avro file format
- Easy integration with MapReduce
- Replication ability
- Write performance
- Parquet file-format performance: the format is supported, but performance-wise it is still not there
- API/library availability for Spark, rather than having to create a new library for it
- Fast lookup of records using row keys; it helped us fetch thousands of records much more quickly.
- As it is a columnar data store, it helped us improve query performance and aggregations.
- Sharding helps us optimize data storage and retrieval; HBase provides automatic or manual sharding of tables.
- Dynamic addition of columns and column families helped us modify the schema with ease.
- We identified issues with HMaster when handling a huge number of nodes.
- Cannot have multiple indexes, as the row key is the only column that can be indexed.
- HBase does not support querying on partial row keys (beyond a leading prefix), which limits its query performance.

HBase cannot replace traditional databases, as it does not support all of their features and is CPU- and memory-intensive. We observed increased latency when using it with MapReduce job joins.
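The fast row-key lookups and the partial-key limitation mentioned in this review both follow from the same property: HBase stores rows lexicographically sorted by key, so a leading-prefix scan touches only a contiguous key range, while filtering on a non-leading key component forces a full scan. A minimal pure-Python sketch of that idea (the example keys and `prefix_scan` helper are illustrative, not the HBase API):

```python
from bisect import bisect_left

# Hypothetical row keys of the form "<customer>#<yyyymmdd>". HBase keeps rows
# sorted lexicographically by key, which is what makes prefix scans cheap.
row_keys = sorted([
    "cust001#20240101",
    "cust001#20240102",
    "cust002#20240101",
    "cust003#20240105",
])

def prefix_scan(keys, prefix):
    """Touch only the contiguous slice of sorted keys starting with `prefix`."""
    start = bisect_left(keys, prefix)
    out = []
    for k in keys[start:]:          # stop as soon as keys leave the prefix range
        if not k.startswith(prefix):
            break
        out.append(k)
    return out

# Leading-prefix lookup: a bounded range scan.
print(prefix_scan(row_keys, "cust001#"))
# ['cust001#20240101', 'cust001#20240102']

# A non-leading key component ("#20240101") cannot bound the scan, so every
# key must be examined — this is why partial-row-key queries are slow.
print([k for k in row_keys if k.endswith("#20240101")])
# ['cust001#20240101', 'cust002#20240101']
```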
HBase provides a best-of-breed solution for any NoSQL storage need. One of its most important features is that it is part of the Hortonworks HDP stack, so it is installed by default and there is nothing else to install or configure. It is easy to administer with Ambari and scales to any size I need. It runs on top of HDFS, so my data is safe, secure, and scalable.
I use it as a store for data that is ingested via various streaming mechanisms including Apache NiFi, Apache Storm, Apache Spark Streaming, Apache Flink and Streaming Analytics Manager. It provides an easy key-value type store with fast scans for data access. I also run Apache Phoenix on top to provide a fast clean SQL interface to all of my data.
- Scalability. HBase can scale to trillions of records.
- Fast. HBase is extremely fast to scan values or retrieve individual records by key.
- HBase can be accessed by standard SQL via Apache Phoenix.
- Integrated. I can easily store and retrieve data from HBase using Apache Spark.
- It is easy to set up DR and backups.
- Ingest. It is easy to ingest data into HBase via shell, Java, Apache NiFi, Storm, Spark, Flink, Python and other means.
- Not for small data
- Requires a cluster
My preferred use case is for storing data points like time series or data produced by sensors.
I often use HBase when I need data available immediately and I am not looking for transactions. This is a great store for really wide tables with tons of columns. It is also great if you are not sure what type of data you are going to have. It really excels at sparse data.
- HBase data access and retrieval only gets better with larger scale.
- Fault tolerance is built in: if you have unreliable hardware, HBase will make every effort to keep your data online.
- Extremely fast key lookups and write throughput.
- Multi-tenancy is still a work in progress
- Usability and beginner friendliness
- It has a bad reputation of being complex
- Very fast query capability
- Resilient: by leveraging HDFS, HBase can handle server failure pretty well
- Very schema-dependent: you have to choose your schema and key strategy carefully in order to get good distribution and performance.
- Over-aggressive rebalancing: if you have to bounce your system, for example, HBase will spend quite a while trying to rebalance all the data as each server comes online.
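The "choose your key strategy carefully for good distribution" point above is usually addressed with row-key salting: prepending a small hash-derived bucket so monotonically increasing keys spread across regions instead of hammering one hot region. A pure-Python sketch of the technique (the bucket count and key format are assumptions, not HBase settings):

```python
import hashlib
from collections import Counter

# Row-key salting sketch: "<bucket>#<raw_key>". The bucket is derived
# deterministically from the raw key, so reads can recompute it.
NUM_BUCKETS = 8

def salted_key(raw_key: str) -> str:
    bucket = int(hashlib.md5(raw_key.encode()).hexdigest(), 16) % NUM_BUCKETS
    return f"{bucket:02d}#{raw_key}"

# Sequential event ids would all land in one key range unsalted; salted,
# they spread across up to NUM_BUCKETS ranges (one region per bucket).
keys = [salted_key(f"event-{i:08d}") for i in range(1000)]
buckets = Counter(k.split("#")[0] for k in keys)
```

The trade-off: a scan over a logical key range must now fan out over all `NUM_BUCKETS` salted ranges, so salting trades scan convenience for write distribution.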
- Good write throughput
- Good horizontal scalability
- Easy to operate on
- Needs a better tool for investigating key-value content for data validation.
- Needs a better tool for row-key monitoring, since our keys contain timestamps.
- Needs a better tool for system-level metric monitoring.
- Strong consistency
- SQL layer
- Too many processes
- Difficult to manage many clusters
- Apache HBase is a widely used Java-based distributed NoSQL environment on Apache Hadoop.
- While there has been growing interest and effort in in-memory computing, there are investments in Apache Hadoop (or Hadoop provider variants) across domains, so it is a large market.
- I worked on HBase for applications that needed strong consistency and interaction with Apache Hadoop.
- You can encounter issues such as a region not being online, NotServingRegionException, a region server going down, or out-of-memory errors.
- As HBase works with ZooKeeper, care needs to be taken that it is set up correctly. Most issues pertain to environment setup, configuration, shared load on the system, or maintenance.
- Performance across workloads, when evaluated against other NoSQL variants, was not best in class; this is usually acceptable, but could be improved.
- If you use Apache HBase and want to upgrade it for certain features, you might need to run a compatibility check between your Apache Hadoop and Apache HBase versions; there are dependencies to think about.
- The HBase master-slave design makes the master a single point of failure, which may not be a preferred design; it is not a highly available system.
- Last I checked, it did not have well-tested, easy integration with Spark, and that would help.
- What is the application's inherent need? Does this component fit well in the design?
- Does it provide high data security?
- How does it assure there is no data loss?
- How can I make sure it is a highly available system, with no downtime for customers?
- Does it give me the best linear scalability?
- What kind of tuning parameters does it allow the user to configure?
- How does it stack up against other NoSQL variants on features, scalability, ease of use/contribute to and maturity of product?
- What throughput can it attain under different kinds of workloads?