Solr spins up nicely and works effectively for small enterprise environments, providing helpful mechanisms for fuzzy searches and faceted searching. For larger enterprises with complex business requirements, you'll find you need to hire an expert Solr engineer to tune the platform to your needs. Internationalization is tricky with Solr, and many hosting solutions may limit you to a Latin character set.
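The fuzzy and faceted searches mentioned above both map to standard Solr query parameters: a trailing `~N` on a term enables fuzzy matching with edit distance N, and `facet=true` plus `facet.field` requests facet counts. A minimal sketch of building such a request URL, assuming a hypothetical `products` collection with `name` and `category` fields:

```python
from urllib.parse import urlencode

# Hypothetical collection and field names, for illustration only.
SOLR_BASE = "http://localhost:8983/solr/products/select"

def build_search_url(term: str, facet_field: str, max_edits: int = 2) -> str:
    """Build a Solr select URL combining a fuzzy term with faceting."""
    params = {
        "q": f"name:{term}~{max_edits}",  # fuzzy match, edit distance max_edits
        "facet": "true",                  # enable faceting
        "facet.field": facet_field,       # field to compute facet counts on
        "rows": 10,
        "wt": "json",
    }
    return SOLR_BASE + "?" + urlencode(params)

url = build_search_url("aple", "category")
```

GETting the resulting URL against a running Solr instance would return matching documents plus per-value counts for the facet field.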
Good for the following cases: 1. there is a front end and you need to correlate data with front-end data; 2. there are multiple microservices and you need to check the health of each system; 3. you need to correlate data from various sources; 4. application performance is a key metric that needs to be captured.
Easy to get started with Apache Solr. Whether it is tackling a setup issue or trying to learn some of the more advanced features, there are plenty of resources to help you out and get you going.
Performance. Apache Solr allows for a lot of custom tuning (if needed) and provides great out-of-the-box performance for searching on large data sets.
Maintenance. After setting up Solr in a production environment there are plenty of tools provided to help you maintain and update your application. Apache Solr comes with great fault tolerance built in and has proven to be very reliable.
These examples are due to the way we use Apache Solr. I think we would have had the same problems with other NoSQL databases (though perhaps not the same solutions). High data volumes and a lot of users were the causes.
We have a lot of classifications and a lot of data for each classification. This gave us several problems:
First: we couldn't keep all our data in Solr, so we keep all the data in our MySQL database and use Solr for searching. That means we have to be sure to update the two databases and keep them matched at the same time.
Second: we needed several load-balanced Solr instances.
Third: we needed to update all the databases while preserving the status of old data.
Setting aside problems due to our own lack of experience, the main Solr problem came from the frequency of updates versus validation across several databases. We encountered several locks because of this (our ops team didn't want to use real clustering, so not all databases were updated). Error messages were not always clear, and it took us several days to understand the problems.
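The MySQL-to-Solr synchronization described above usually comes down to turning changed rows into a Solr JSON update payload and posting it to the core's update handler. A minimal sketch, assuming hypothetical `id`, `name`, and `classification` columns (deletions would need a separate delete-by-id command):

```python
import json

def rows_to_solr_update(rows):
    """Convert changed MySQL rows (as dicts) into a Solr JSON update body.

    Posting this body to /solr/<core>/update?commit=true upserts the
    documents, since Solr replaces documents that share the same id.
    """
    docs = [
        {
            "id": str(row["id"]),  # Solr's uniqueKey field
            "name": row["name"],
            "classification": row["classification"],
        }
        for row in rows
    ]
    return json.dumps(docs)

changed = [{"id": 1, "name": "widget", "classification": "hardware"}]
payload = rows_to_solr_update(changed)
```

Batching rows this way (rather than one HTTP request per row) also reduces the update frequency that caused the locking issues described above.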
I wish Splunk Application Performance Monitoring could integrate with packet capture and analysis tools and provide the integrated analysis results for each tier of the application.
Splunk is a great tool for log mining. It saves time and is easy to use. It provides strong security and customizable dashboards. Furthermore, it acts as a search head and gives real-time status. It has the ability to collect data from any source, or multiple sources, and correlate them.
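The cross-source correlation praised above is typically expressed in Splunk's Search Processing Language (SPL), grouping events from different indexes on a shared field. A sketch of building such a search, where the index names, the `request_id` field, and the `service`/`status` fields are hypothetical assumptions:

```python
# Hypothetical index names for illustration only.
APP_INDEX = "app_logs"
WEB_INDEX = "web_access"

def correlation_search(txn_field: str = "request_id") -> str:
    """Build an SPL search that groups events from two sources into
    transactions on a shared id, then counts errors per service."""
    return (
        f"index={APP_INDEX} OR index={WEB_INDEX} "
        f"| transaction {txn_field} "   # correlate events sharing txn_field
        f"| search status>=500 "        # keep only failing transactions
        f"| stats count by service"     # error counts per service
    )

spl = correlation_search()
```

Running this string in the Splunk search bar (or via its REST API) would surface which services the failing requests touched.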
It is helpful that collecting a large amount of data enables more extensive correlation analysis and root-cause analysis of failures. The user interface is well designed and convenient to use. It is also good that anomaly detection is available through various AI-based ML models.