LogPoint detects, analyzes and responds to threats within an organization’s data for faster security investigations. LogPoint is dedicated to helping overloaded security analysts work more efficiently with accelerated detection and response. LogPoint's SIEM solution with UEBA provides…
N/A
ScienceLogic SL1
Score 8.8 out of 10
Enterprise companies (1,001+ employees)
ScienceLogic is a system and application monitoring and performance management platform. ScienceLogic collects and aggregates data across IT ecosystems and contextualizes it for actionable insights with the SL1 product offering.
N/A
Splunk Enterprise Security
Score 8.6 out of 10
N/A
Splunk Enterprise Security is an analytics-driven SIEM that helps to combat threats with actionable intelligence and advanced analytics at scale.
N/A
Pricing

                                             LogPoint        ScienceLogic SL1   Splunk Enterprise Security
    Editions & Modules                       No answers      No answers         No answers
    Free Trial                               Yes             No                 No
    Free/Freemium Version                    No              No                 No
    Premium Consulting/Integration Services  Yes             Yes                No
    Entry-level Setup Fee                    No setup fee    Required           No setup fee

Additional Details

LogPoint: —

ScienceLogic SL1 offers four tiers:
    SL1 Base – Infrastructure Monitoring, Topology & Event Correlation
    SL1 Standard – Infrastructure Monitoring with Agents, Business Services, Incident Automation, CMDB Synchronization, Behavioral Correlation
    SL1 Advanced – Application Health, Automated Troubleshooting and Remediation Workflows
    SL1 Premium – AI/ML-driven Analytics, Low-Code Automated Workflow Authoring
To get pricing for each tier, please contact the vendor.
LogPoint is incredibly useful for pulling information from various log sources and combining them together to offer insights into suspicious or potentially malicious behaviour. It is not intuitive and can take some time to get used to. Once you're up and running though, it's easy to onboard new log sources. Search queries can again be tough to get used to, but LogPoint support is really helpful and can offer assistance with writing more complex searches.
For Windows, the main issue is the higher resource consumption of WinRM monitoring, which provides better options than SNMP monitoring but at a greater resource cost. Support for operating systems in languages other than English is also a problem.
Based on my experience, Splunk is a strong fit for some environments and a poor match for others. The distinction comes down primarily to infrastructure complexity and budget. It's ideal for large enterprises with mixed on-prem/cloud infrastructure, but not a good match for small teams with limited resources.
Powerful Queries: Queries written in the Splunk Query Language are very powerful and highly customizable to meet every need. For example, writing queries to search the intersection of two different sources such as network and endpoint logs.
Dashboard Abilities: Helps build complex dashboard panels, in addition to providing several out-of-the-box panels. For example, creating panels to calculate the performance of analysts in a given timezone.
Helpful Search Aids: Setting up complex custom alerts is very easy, and the interesting fields section is very helpful while threat hunting. For example, it shows all the users and the frequency of each in a failed-login event; the user list in the interesting fields is useful for spotting suspicious logins.
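The kind of cross-source intersection and per-user frequency described above can be illustrated with a small sketch. This is plain Python with made-up field names, not actual SPL; the records and fields are purely hypothetical:

```python
from collections import Counter

# Hypothetical log records; field names are illustrative only.
network_logs = [
    {"user": "alice", "src_ip": "10.0.0.5", "action": "login", "status": "failure"},
    {"user": "bob",   "src_ip": "10.0.0.9", "action": "login", "status": "failure"},
    {"user": "alice", "src_ip": "10.0.0.5", "action": "login", "status": "failure"},
]
endpoint_logs = [
    {"user": "alice", "host": "wks-01", "event": "process_start"},
    {"user": "carol", "host": "wks-02", "event": "process_start"},
]

# "Interesting fields"-style frequency: failed logins per user.
failed_by_user = Counter(
    e["user"] for e in network_logs
    if e["action"] == "login" and e["status"] == "failure"
)

# Intersection of the two sources: users seen in both network and endpoint data.
endpoint_users = {e["user"] for e in endpoint_logs}
suspicious = {u: n for u, n in failed_by_user.items() if u in endpoint_users}
print(suspicious)  # {'alice': 2}: repeated failures AND endpoint activity
```

A real SPL search would express the same idea declaratively over indexed events; the sketch only shows the shape of the logic.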
Dashboards are quite dated. SL1 should standardize on AP2 dashboards only, instead of AP1, with a consistent new design across all functionality.
Reporting has not improved since 2020 and needs a complete revamp. Dashboards and reporting should be integrated, with PowerBI-like functionality provided out of the box. Reports should be exportable to Excel, PDF, and HTML, and should be heavily automated.
Open APIs should be created for extracting basic and advanced monitoring data.
Topology-based event correlation and suppression should be drastically improved, with critical network interfaces identified from the topology and monitored accordingly. Basic customization of a Dynamic App and/or PowerPack to include or exclude certain metrics/events should be permitted out of the box rather than requiring custom work.
The ServiceNow integration should be improved and taken to the next level. The Automation PowerPack should be made available out of the box as part of the base product and priced attractively.
The product should be taken to the next level, where we can monitor the actually impacted IT or business service instead of raw metrics and events. BSM and topology maps should be auto-discovered, with network dependencies and alternate paths identified automatically instead of BSMs being created manually.
Improved User Interface Customization: While the interface is generally intuitive, providing more options for users to customize their dashboards and views would enhance the overall user experience. Tailoring the interface to specific roles or use cases could be a valuable addition.
Simplified Alert Management: Streamlining the process of managing alerts, such as grouping or categorizing them based on severity or type, would make it easier for security teams to prioritize and respond to incidents effectively.
Expanded Threat Intelligence Feeds: Increasing the variety and sources of threat intelligence feeds available within ES would provide a broader context for identifying and mitigating emerging threats, ensuring a more comprehensive defense against evolving attack vectors.
Simply because of all the autonomy capabilities it provides, which keep getting better day by day. Using AI and DevOps practices along with handy automation, monitoring and managing devices becomes much easier, and the way the product is growing in every aspect is one of its best qualities. The evolution of the SL1 platform in autonomic monitoring and management is quite appreciable.
The core functions are there; the complexity reflects the complexity of the space. The score is based on comfort (I no longer notice the legacy UI) and on the promise I see in the 8.12 Unified UI (a vast improvement). It is also based on the fact that with 8.12 you can now do everything in the new UI while keeping the legacy UI as a fallback (which should now be unnecessary for new installations).
Maintaining hundreds or even 1,000+ SOC use cases is really difficult, considering that data sources may not always send data. A module that detects data-freshness issues and data-format changes would be a great help. The main challenge today with Splunk Enterprise Security is making sure that the detection rules are still working properly given all the changes that occur in data-source applications. Maintaining data collection on tens of thousands of servers and more than 100k workstations is also a real company IT challenge: the Splunk forwarder may no longer support older operating systems, while these are the most important to monitor. Moving to the OpenTelemetry collector has become essential, so that only one agent is required for both SIEM and application observability.
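The data-freshness module wished for above could start from a check like this sketch. The source names, thresholds, and timestamps are hypothetical; in practice the last-event times would come from a metadata-style query against the SIEM indexes:

```python
from datetime import datetime, timedelta, timezone

def stale_sources(last_event, max_lag, now=None):
    """Return the sources whose most recent event is older than its threshold."""
    now = now or datetime.now(timezone.utc)
    default = timedelta(hours=1)
    return sorted(
        src for src, ts in last_event.items()
        if now - ts > max_lag.get(src, default)
    )

now = datetime.now(timezone.utc)

# Hypothetical last-seen timestamps per data source.
last_event = {
    "firewall": now - timedelta(minutes=5),
    "endpoint": now - timedelta(hours=3),
    "proxy":    now - timedelta(minutes=30),
}

# Maximum acceptable silence per source before detection rules go blind.
max_lag = {
    "firewall": timedelta(minutes=15),
    "endpoint": timedelta(hours=1),
    "proxy":    timedelta(hours=1),
}

print(stale_sources(last_event, max_lag, now))  # ['endpoint']: 3h silence > 1h limit
```

A scheduled job running such a check could raise a meta-alert when a feed goes silent, so broken ingestion is caught before detection rules silently stop firing.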
SL1 is always there and online when you need information from it. The only times SL1 was unavailable in our own data center were when network links from outside the data center were down, and those links were not under our control. Having a central database accessed by people all over the world can put a bit of strain on dashboard performance when reports are generated, but that is few and far between.
ScienceLogic SL1's architecture gives the platform top-notch performance in every respect; everything from data collection to reporting happens very smoothly. With the new user interface, pages load much faster, and individual appliances carrying out individual tasks ensure things work without lag. Integration with the ServiceNow ticketing tool is well managed by ScienceLogic; no issues or significant delays have been observed while interacting with the external tool.
It takes a long time for items to load if you are just generally searching through logs. It is best to use the data models, which load faster but can be confusing in terms of which logs the data comes from. Yes, you can look it up, but this also requires familiarity with where things are and how to find them.
LogPoint support is outstanding. They are incredibly helpful, and on occasions have proactively identified issues with our setup, and logged cases on our behalf before we had even noticed there was a problem. If there is a search we need to write that is beyond our skills, LogPoint support can typically write it for us within a couple of days. They are always very responsive, and I am yet to have a bad support experience.
So far my overall experience has been good, except for a couple of use cases. The support team is knowledgeable, technically sound, and efficient. However, when support escalates to engineering, the issue gets stuck and takes months to resolve.
It's good when it's responsive, but I've had times where I had to wait quite a while for a response. But these are typically the exceptions rather than the rule. When you do get a response it is always well-informed and appropriate. I would say they've been trending better over time with this.
It was good. Do the online training first and understand it, and you will get the most out of the in-person training that way. It also takes you to an advanced level, which is very good. The training has been overhauled once again alongside new products such as Zebrium/Skylar, so it is worth going through again if it has been a while since you first did it.
I experienced only online training, but the trainers were very professional and competent. It might be even more useful if they also had project experience, because sometimes they had no real project experience to pass on to the students. Anyway, it was very interesting and I learned many things that would be very difficult (or maybe impossible!) to pick up on my own, even though I have more than 10 years of Splunk experience.
There are a lot of educational materials and courses on the SL1 training site (Litmos University). However, the recording quality is sometimes poor (the screen resolution is low), there is a lack of professional rather than user-oriented documents, there are mistakes in the documentation, and the education is not well structured.
It was very interesting and I learned many things that would be very difficult (or maybe impossible!) to pick up on my own. The only problem was that, when I worked with Splunk Professional Services, I found some differences between the training content and the information from PS. In addition, long experience with Splunk Enterprise is required for the data-ingestion part; in other words, I am able to work with ES because I have been working with Splunk for 11 years, and otherwise I would have had problems.
Implementation is smooth if you only need to support the out-of-the-box features of ScienceLogic. For any custom requirement, having to go to SL1 Professional Services is the worst part of procuring this suite. More often than not, SL1 Professional Services will also ask you to raise a feature request, so you subscribe to Professional Services only to hear back that "this feature is not supported and needs a separate feature request." At times it is frustrating.
ScienceLogic SL1 is very user friendly, and it's really easy to navigate between functions. I would recommend ScienceLogic SL1 to anyone looking for a genuinely useful monitoring tool that is easy to manage.
Splunk Enterprise Security is the only solution we've been able to identify that provides risk-based alerting, which allows our SOC to reduce analyst fatigue; this would be a huge problem without it. Before RBA, there were thousands of alerts a day and it was impossible to review all of them.
In my experience, unit pricing and billing frequency are correct. As I already said, I would suggest more discount flexibility, especially with new customers, because there are less expensive and very aggressive competitors who are dangerous. In addition, the option of not paying for the license during the development period could be a very interesting feature for end customers.
Our deployment model is vastly different from product expectations. Our global/internal monitoring footprint is 8 production stacks in dual data centers, with 50% collection capacity allocated to each data center and a minimal number of collection groups. General Collection is our default collection group; Special Collection is for monitoring our ASA and other hardware that cannot be polled by a large number of IP addresses, so that group is usually two collectors. Because most of our stacks are in different physical data centers, we cannot use the provided HA solution and have to use the DR solution (DRBD + CNAMEs). We routinely (yearly) test power in our data centers. Because we have to use DR, a hand-touch is needed to flip nodes and change the DNS CNAME in half of the outages (by design); when the outage is planned, we do this ahead of time so that it doesn't matter that the Secondary has dropped away from the Primary. Hopefully we'll find a way to meet our constraints, improve our resiliency, and reduce our hand-touch in future releases. For now, this works for us and our complexity. (I hear the HA option is sweet; I just can't consume it.)
8 out of 10; I deducted 2 points for the data pipeline and administration part. Even if you'd like to improve yourself or your team, you have to pay a lot of money, potentially more than GIAC training plus certification. Normalization for data models and CPU-intensive searches can also be a problem at times.
I had a fantastic experience with Splunk Professional Services: they worked with us on our last project (a SOC migration for a very large customer) and helped build a multi-tenant environment even though ES isn't a multi-tenant platform. The Splunk PS consultant was a very professional and competent person; he is Italian and was able to speak with our Italian customers.
Once a PowerPack has been developed and configured for a device for one customer, it is easy to deploy the same PowerPack to a second customer's estate and configure it specifically for that customer, without having to reinvent the PowerPack. This saves time, and therefore money.
Once tuning of the customer estate is complete, the operations team comes to trust the alerts. This is especially true when transient or self-correcting alerts are automatically cleared without ops-team involvement, while a record remains available for audit and debugging purposes. This saves time, and therefore money.
When set up correctly, it provides good visibility into applications, devices, and whole customer estates. This saves time, and therefore money, when issues arise.
We have a 100% success rate on all our ES implementations due to the amazing documentation and Splunk enablement on the subject.
Our Splunk ES business has grown 100% YoY for the last 3 years.
In terms of long term management and maintenance, ES has been highly stable and predictable, reducing our overhead on costly services team for ad hoc maintenance work.