Entry-level set up fee?
- No setup fee
- Free Trial
- Free/Freemium Version
- Premium Consulting / Integration Services
- Setting up an Azure Data Lake Storage account and container is quite easy
- Access from anywhere and easy maintenance
- Integration with the Azure Data Factory service for end-to-end pipelines is pretty easy
- Can store any form of data (structured, unstructured, semi-structured) quickly
- The UI search feature could certainly be improved, e.g., by adding wildcards to search for a particular file in a container
- Sometimes hangs/lags while monitoring
- The new UI may address the above issues.
- PowerShell integration
- Azure AD integration
- Price is a bit steep
- CLI could be better
- Permissions are difficult to use compared to the competition
- Azure Data Lake Storage is extremely scalable. It allows us to scale up or down endlessly based on what we need including replication.
- In terms of security, Azure Data Lake Storage fits our requirements really well as we can monitor and encrypt seamlessly. We can also assign permissions through roles and grant network-level access.
- Because it can scale, we are able to monitor the cost of storage at any given time and make financial decisions about our infrastructure based on how small or big we want to scale.
- Since the price of Azure Data Lake fluctuates based on storage size, we have to keep a close eye on what data is getting pulled in, which can be a cumbersome task, as data collection streams need to be throttled to prevent higher storage costs.
- When we want to change the parameters of the data being captured by Azure Data Lake we have to keep in mind the historical data that's already been stored and consider methods for reprocessing it.
- Azure Data Lake can improve its process for distorted data. As data gets loaded the data cleansing process can be a bit more refined.
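The cost-monitoring concern above can be sketched as a quick back-of-envelope calculation. Note that the per-GB tier rates below are illustrative placeholders, not published Azure prices:

```python
# Rough monthly storage cost estimate for a data lake.
# NOTE: the tier rates below are illustrative placeholders,
# not actual Azure Data Lake Storage pricing.
RATES_PER_GB = {"hot": 0.02, "cool": 0.01, "archive": 0.002}

def estimate_monthly_cost(gb_by_tier: dict) -> float:
    """Sum storage cost across access tiers for one month."""
    return sum(gb * RATES_PER_GB[tier] for tier, gb in gb_by_tier.items())

# Example: 500 GB in the hot tier plus 2 TB in the cool tier.
cost = estimate_monthly_cost({"hot": 500, "cool": 2048})
```

Plugging real rates from your bill into a helper like this makes it easier to flag an ingestion stream that is inflating storage costs.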
- Store large amount of data
- Access this data quickly using Synapse Analytics or Spark/Databricks
- Ingest data quickly so our ingestion APIs are never throttled
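Keeping ingestion APIs from being throttled is often handled client-side with rate limiting. A minimal token-bucket sketch (the rate and capacity values are arbitrary assumptions, not ADLS limits):

```python
import time

class TokenBucket:
    """Simple token bucket: allow roughly `rate` operations per second,
    with bursts up to `capacity`. Values here are illustrative only."""
    def __init__(self, rate: float, capacity: float):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens based on elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=100, capacity=10)
# A rapid burst of 15 requests: only about the first 10 pass immediately.
results = [bucket.allow() for _ in range(15)]
```

A writer wrapped in `allow()` checks smooths bursts before they hit the storage endpoint, trading a little latency for predictable throughput.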
- I'd like to see a better cross-platform native client. Azure Data Explorer is fine, but it's far from the "SSMS" kind of experience SQL Server users are used to.
- Listing a large number of files is somewhat problematic and slow. Using the native C# library, running directly on an Azure VM, it can take several hours to list just a couple million files.
- Switching from V1 to V2 requires the creation of a new Storage Account and that's pretty inconvenient.
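One common mitigation for the slow-listing problem above is to consume the listing lazily in fixed-size batches instead of materializing millions of paths at once. A Python sketch with a local stand-in generator in place of a real SDK listing call:

```python
from itertools import islice
from typing import Iterable, Iterator, List

def batched(paths: Iterable[str], size: int) -> Iterator[List[str]]:
    """Yield successive fixed-size batches from a (possibly huge) path listing."""
    it = iter(paths)
    while batch := list(islice(it, size)):
        yield batch

# Stand-in for an SDK listing call over a large container (names are made up).
fake_listing = (f"raw/events/part-{i:07d}.parquet" for i in range(2500))

batch_sizes = [len(b) for b in batched(fake_listing, 1000)]
```

Processing each batch as it arrives keeps memory bounded and gives visible progress, even when the underlying listing itself remains slow.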
- Well integrated connectors to third party storage sources
- Robust and modular integrity within deployment pipeline
- Helps resolve data storage both structured and unstructured
- Improvement around spark integration
- A single deployment pool to address both normal data and big data
- Data Governance is still not centralized
- Scalable (hosted in the cloud)
- Cannot use blob APIs and NFS 3.0
- Access controls
- Handling unstructured data
The big data compute clusters are easy to set up and the learning curve is fairly gentle, but Microsoft still needs to provide more interactive instructions.
Our business scope is a large data analytics project in which we have to extract large amounts of structured and unstructured data for analysis and transformation.
Since we also host our business applications on Azure Cloud, Azure Data Lake Storage is very helpful, as it integrates with other Azure services and lets us do our analysis in real time in one place. Azure Data Lake Storage is built on the Hadoop file system, which means it can process massive petabyte-scale datasets efficiently.
It helps streamline the overall efficiency of our requirements and business outcomes.
Except for some query performance improvements, we have faced no issues so far.
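Because ADLS Gen2 builds on the Hadoop file system, Hadoop and Spark tools address it through `abfss://` URIs. A small helper showing the URI shape (the account, container, and path names are illustrative):

```python
def abfss_uri(account: str, container: str, path: str) -> str:
    """Build an ABFS (Azure Blob File System) URI for ADLS Gen2,
    as used by Hadoop/Spark connectors. All names here are examples."""
    return f"abfss://{container}@{account}.dfs.core.windows.net/{path.lstrip('/')}"

uri = abfss_uri("mydatalake", "analytics", "/raw/2024/events.json")
```

A URI like this is what you would pass to, for example, a Spark read call when querying lake data from Databricks or Synapse.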
- Provides significant performance and security measurements for analytical workloads.
- Quickly process the queries and store large data.
- Supports a wide range of file formats.
- Secured and scalable data storage solution.
- Limitation in connecting with other non-Azure sources.
- Performance issues with large datasets.
- Improvements in bulk data update and deletion.
- Query performance for exploratory data analysis.
As some features are still in the development phase, some improvements are required to make this a unified storage solution for organizations.
- Affordable and cost effective for small-medium sized businesses.
- Regulatory Compliance Metrics
- Deployment that's not complicated
- U-SQL is somewhat complex to understand
- You cannot use blob APIs, NFS 3.0, and Data Lake Storage APIs to write to the same instance of a file.
- The WASB driver experiences issues all the time
It may not be feasible or cost-effective if you don't have that much data, or if you're a smaller organization with two or fewer VMs/production servers.
- It's very fast and cost effective.
- Strong support, good performance and scalability
- Easy Integration with Databricks
- The cost of Azure Data Lake Store is very high, and maintenance is also high for small companies
- There are so many components, and it takes some time to fully understand them.
- Useful for big data analytics.
- It provides unlimited storage.
- You can use any type or size of data.
- Easy to build structure
- Easy to point to any batch in your big data
- Data import and export could be faster
- Could add some detailed summary graphs for reporting
- File Storage
- Highly Scalable
- Cross Platform Support
- Not as flexible as a data warehouse.
- Not as optimized for queries as a data warehouse.
- Could use more documentation.
- Data Visualization
- Highly Encrypted
- Cost Efficient
- The UI design is quite complex to understand.
- Not all features are up to date.
- Analyzing large data sometimes makes it slow.
- Flexible semantic file systems
- Hadoop integration
- None that I can remember.
- Ease of integration and setup
- Support for the MS Suite of Applications
- Extensible and upgradable
- The switch from Gen 1 to Gen 2 was a bit tricky.
- It can be performance bound if not properly architected.
- Need to be incremental in how you implement.
- The data lake analytics tool is good and provides loads of computing power to speed up the processing time.
- It provides unlimited storage for structured, semi-structured or unstructured data.
- Cloud-based service and we can easily use it for ETL and ELT processes.
- It works well within the Azure ecosystem but still lacks connectivity from a lot of third-party tools.
- Connectivity to and from multiple non-Azure sources and targets is very limited.
- Support from the vendor is lacking.