Boomi is a cloud-based, on-premises, or hybrid integration platform. It offers a low-code/no-code interface with support for API and EDI connections for integrating with external organizations and systems, as well as compliance with data protection regulations.
Well suited: For most local runs of datasets and non-prod systems, scalability is not a problem at all. Being able to pull in data from multiple types of data sources is an added advantage, and MLlib is a decent built-in library that covers most common ML tasks.
Less appropriate: We had to build a RecSys where the music dataset we used was around 300+ GB in size. We ran into memory issues and a few times got out-of-memory errors. MLlib also lacks support for advanced analytics and deep-learning frameworks. And understanding the internals of how Apache Spark works is very difficult for beginners.
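For context, the kind of MLlib recommender workflow this review refers to might look roughly like the sketch below. The dataset path and column names (userId, trackId, playCount) are assumptions for illustration, not the reviewer's actual pipeline.

```python
# Rough sketch of a collaborative-filtering recommender with Spark MLlib (ALS).
# Path and column names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.ml.recommendation import ALS

spark = SparkSession.builder.appName("music-recsys-sketch").getOrCreate()

# Listening history: one row per (user, track) with a play count.
plays = spark.read.parquet("/data/music/plays.parquet")

als = ALS(
    userCol="userId",
    itemCol="trackId",
    ratingCol="playCount",
    implicitPrefs=True,       # treat play counts as implicit feedback
    coldStartStrategy="drop", # skip users/tracks unseen during training
)
model = als.fit(plays)

# Top-10 track recommendations per user.
model.recommendForAllUsers(10).show(truncate=False)
```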
Legacy systems often need to be replaced or integrated with new applications in order to modernize businesses. A strong API strategy that avoids custom coding and third-party programs is essential to enable this integration. Boomi's new-age connectivity and integration solutions ensure safe, secure, and robust integration.

In the age of information, businesses are under more pressure than ever to collect and manage large amounts of data. This data comes in from a variety of sources, including personalized devices such as voice assistants and wearable tech. While this data can be immensely valuable to businesses, they often lack the infrastructure necessary to handle it effectively. This can lead to data build-up in databases or silos, and eventually to problems with integration and security.
More from a development perspective: the properties features are always difficult to use, and it takes a while to understand how data/variables can be shared across an integration.
Dell Boomi should also invest more in API Management and not just be seen as an ETL/ESB tool.
They should roll out features more often based on user reviews.
Dell Boomi has provided us with the ability to connect our campus together using our various existing platforms. There are many supported features, and we have yet to run into something that we cannot do. Its user interface is very intuitive, which allows users to begin developing fairly easily. There is a myriad of resources available.
The only thing I dislike about Spark's usability is the learning curve; there are many actions and transformations to learn. However, its wide range of uses for ETL processing, ease of integration, and multi-language support make this library a powerhouse for your data science solutions. It has especially aided us with its lightning-fast processing times.
My IT and Finance teams have noted that setting up the tool is a breeze. Dell Boomi has never caused an issue during a system implementation that I am aware of. We are pleased with the tool and recommend others consider it.
AtomSphere takes time to load when I open a process or a log. Another slow spot is importing objects from NetSuite.
Regarding processing performance, it looks like Boomi takes time to initialize things such as connectors before starting the process. This is another performance issue we have.
1. It integrates very well with Scala or Python.
2. Its SQL interoperability is very easy to understand.
3. Spark is way faster than competing technologies.
4. The Apache community support for Spark is huge.
5. Execution times are faster compared to the alternatives.
6. There are a large number of forums available for Apache Spark.
7. Sample code for Apache Spark is simple and easy to get access to.
8. Many organizations use Apache Spark, so many solutions are already available for existing applications.
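To illustrate points 1 and 2 in the list above, here is a minimal PySpark sketch that mixes the DataFrame API with plain SQL over the same data. The input path and column names (customerId, amount) are hypothetical, not taken from the review.

```python
# Minimal sketch: Python integration plus SQL interoperability in Spark.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("sql-interop-sketch").getOrCreate()

orders = spark.read.json("/data/orders.json")

# DataFrame (functional) style ...
top_df = (orders
          .groupBy("customerId")
          .agg(F.sum("amount").alias("total"))
          .orderBy(F.desc("total"))
          .limit(10))

# ... and the equivalent declarative SQL over the same data.
orders.createOrReplaceTempView("orders")
top_sql = spark.sql("""
    SELECT customerId, SUM(amount) AS total
    FROM orders
    GROUP BY customerId
    ORDER BY total DESC
    LIMIT 10
""")

top_df.show()
top_sql.show()
```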
Boomi support was responsive and knowledgeable; however, being a closed cloud service, it doesn't have good community support. We found the learning curve to be steep, and there aren't avenues like Google, forums, or blogs that provide community-driven insight into the product or how to go about designing solutions with the tool.
All the above systems work quite well on big data transformations, whereas Spark really shines with its broader API support and its ability to read from and write to multiple data sources. Using Spark, one can easily switch between declarative, imperative, and functional styles of programming based on the situation. It also doesn't need special data ingestion or indexing pre-processing like Presto. Combined with Jupyter Notebooks (https://github.com/jupyter-incubator/sparkmagic), one can develop Spark code interactively in Scala or Python.
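As a concrete illustration of the multi-source read/write pattern described above, here is a hedged PySpark sketch. The S3 paths, JDBC URL, credentials, table, and column names are placeholders, not the reviewer's actual setup.

```python
# Sketch: read from two different sources, join, and write to a third format.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("multi-source-sketch").getOrCreate()

# Source 1: columnar files on distributed storage.
events = spark.read.parquet("s3a://bucket/events/")

# Source 2: a relational table over JDBC (driver JAR must be on the classpath).
users = (spark.read.format("jdbc")
         .option("url", "jdbc:postgresql://db-host:5432/app")
         .option("dbtable", "public.users")
         .option("user", "reporting")
         .option("password", "secret")
         .load())

# Join and aggregate, then write the result out as Parquet partitioned by
# country -- no separate ingestion or indexing step is needed.
daily = (events.join(users, "userId")
         .groupBy("country", "eventDate")
         .count())

daily.write.mode("overwrite").partitionBy("country").parquet("s3a://bucket/reports/daily/")
```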
We decided to go with Dell Boomi because another department in our company was already using the software; we did not research competitor applications to use as our business solution. Dell Boomi was very easy and quick to set up, so once we decided to use it for systems integration, we had it running within a few working days.
Faster turnaround on feature development: we have seen a noticeable improvement in our agile development since using Spark.
Easy adoption: having multiple departments use the same underlying technology, even if the use cases are very different, allows for more commonality amongst applications, which definitely makes the operations team happy.
Performance: we have been able to make some applications run over 20x faster since switching to Spark. This has saved us time, headaches, and operating costs.
It has allowed us to scale significantly without having to add headcount, specifically for data entry. We went from a $10M ARR business to a $200M ARR business with the same number of Order Processors and 12x the transaction volume by leveraging Boomi to perform much of the work and then having the Order Processing team simply review that each transaction was processed successfully.