ScienceLogic is a system and application monitoring and performance management platform. ScienceLogic collects and aggregates data across IT ecosystems and contextualizes it for actionable insights with the SL1 product offering.
N/A
Splunk Observability Cloud
Score 8.4 out of 10
N/A
Splunk Observability Cloud aims to enable operational agility and a better customer experience through real-time, AI-driven streaming analytics that deliver accurate alerts in seconds. It is designed to shorten MTTD and MTTR by providing real-time visibility into cloud infrastructure and services.
$180 per year per host
Pricing

Editions & Modules
ScienceLogic SL1: No answers on this topic
Splunk Observability Cloud:
Infrastructure: $15 per month (billed annually) per host
App & Infra: $60 per month (billed annually) per host
End-to-End: $75 per month (billed annually) per host
Pricing Offerings (ScienceLogic SL1 / Splunk Observability Cloud)
Free Trial: No / Yes
Free/Freemium Version: No / No
Premium Consulting/Integration Services: Yes / No
Entry-level Setup Fee: Required / No setup fee
Additional Details
ScienceLogic SL1 offers four tiers:
SL1 Base – Infrastructure Monitoring, Topology & Event Correlation
SL1 Standard – Infrastructure Monitoring with Agents, Business Services, Incident Automation, CMDB Synchronization, Behavioral Correlation
SL1 Advanced – Application Health, Automated Troubleshooting and Remediation Workflows
SL1 Premium – AI/ML-driven Analytics, Low-Code Automated Workflow Authoring
To get pricing for each tier, please contact the vendor.
ScienceLogic SL1 supports a large scale of IT infrastructure devices and vendors. It was the single tool that provided multiple functionalities at the same time and allowed us to remove additional legacy tools used for monitoring. It allowed integration with incident management and CMDB. Allowed …
Splunk Observability Cloud
No answer on this topic
Features
ScienceLogic SL1
Splunk Observability Cloud
AIOps Features
Comparison of the AIOps features of ScienceLogic SL1 and Splunk Observability Cloud
For Windows, the issue is higher resource consumption with WinRM monitoring, which provides better options than SNMP monitoring; SNMP, on the other hand, is less resource intensive. There is also a problem with support for operating systems in languages other than English.
It's great if you need real-time visibility across complex or regulated environments. It is also strong for hybrid or multi-cloud setups where uptime, observability, and fast incident response are required. It's probably overkill for smaller teams or environments that don't have constant changes or compliance reporting needs. It's expensive and has a steep learning curve. Also, in my opinion, do not get yourself into a consumption-based model; costs can certainly get out of control quickly.
The first one is its Kubernetes container monitoring.
I really like this feature because, as we know, K8s is vast, and manually monitoring each part of Kubernetes takes a lot of time, but Splunk Observability Cloud makes it easier. Once we integrate K8s with Splunk Observability Cloud, it gives us prebuilt dashboards that provide a holistic view of our cluster and its nodes, pods, etc.
The dashboard feature of Splunk Observability Cloud gives us full flexibility to customize our dashboards with a wide range of predefined chart types.
It now also supports OTel, which is a plus point for observability. Everyone is moving towards OTel, and there are currently only a few tools on the market that support OTel-based integrations; Splunk Observability Cloud is one of them.
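As a rough illustration of what an OTel-based integration can look like, here is a minimal Python sketch that exports a custom metric over OTLP to a local OpenTelemetry Collector, which could then forward the data on (for example, to Splunk Observability Cloud). The endpoint, meter name, and metric name are placeholders, not values specific to either product.

```python
# Minimal sketch: export a custom metric over OTLP to a local collector.
# Endpoint, meter name, and metric name below are illustrative placeholders.
from opentelemetry import metrics
from opentelemetry.sdk.metrics import MeterProvider
from opentelemetry.sdk.metrics.export import PeriodicExportingMetricReader
from opentelemetry.exporter.otlp.proto.grpc.metric_exporter import OTLPMetricExporter

# Ship metrics to a collector listening on the default OTLP gRPC port.
exporter = OTLPMetricExporter(endpoint="http://localhost:4317", insecure=True)
reader = PeriodicExportingMetricReader(exporter, export_interval_millis=10_000)
metrics.set_meter_provider(MeterProvider(metric_readers=[reader]))

meter = metrics.get_meter("checkout-service")       # placeholder meter name
orders = meter.create_counter("orders.processed")   # placeholder metric name
orders.add(1, {"region": "eu-west-1"})              # record one data point
```

The collector (or a vendor distribution of it) then handles batching and forwarding, so the application code stays vendor-neutral.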
Dashboards are quite old and feel like they are from the Iron Age. There should be AP2 dashboards only, instead of AP1, with a consistent new design across all functionalities.
Reporting has not improved since 2020 and needs a complete revamp. Dashboards and reporting need to be integrated, with Power BI-like functionality provided OOTB. Reports should be exportable to Excel, PDF, and HTML and should be heavily automated.
Create and open up APIs for basic and advanced monitoring data extraction (see the sketch after this list).
Topology-based event correlation and suppression should be improved drastically. Critical network interfaces need to be identified from the topology and monitored. Basic customization of a Dynamic App and/or PowerPack to exclude or include certain metrics/events should be permitted OOTB instead of requiring customizations.
Integration with ServiceNow should be improved and taken to the next level. The Automation PowerPack should be made available OOTB as part of the base product and priced attractively.
Take the product to the next level, where we can monitor the actually impacted IT or business service instead of metrics and events. The BSM and topology map should be auto-discovered, identifying network dependencies and alternate paths automatically instead of requiring manual creation of the BSM.
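On the API point above: SL1 exposes a REST interface, and the kind of data extraction being asked for could, in principle, look something like the following sketch. The hostname, credentials, endpoint path, and response handling here are assumptions for illustration only, not documented SL1 values; check the vendor's API documentation before relying on any of them.

```python
# Hypothetical sketch: pulling a device list from an SL1-style REST API
# with HTTP basic auth. Host, credentials, endpoint, and parameters are
# placeholders/assumptions, not confirmed SL1 specifics.
import requests

SL1_HOST = "https://sl1.example.com"   # placeholder appliance URL
AUTH = ("api_user", "api_password")    # placeholder credentials

def list_devices(limit: int = 100) -> dict:
    """Fetch one page of device records and return the parsed JSON."""
    resp = requests.get(
        f"{SL1_HOST}/api/device",      # assumed endpoint path
        params={"limit": limit},
        auth=AUTH,
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    print(list_devices(limit=10))
```

Even a thin wrapper like this is enough to feed monitoring data into external reporting or a data warehouse, which is the gap the request above is pointing at.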
You can use table-like functionality to generate dashboards, but these queries are heavy on the system.
It could be easier to get insight into what type of line parsing is used for specific documents in a company-managed environment, and/or to be shown ways to gain the insights needed.
I would like to see ways to anonymize specific data for shared reports without pre-formatting this in a dashboard on which reports could be based.
It is simply because of all the autonomy solutions it provides, and it is getting better day by day. Using AI and DevOps along with handy automation, the monitoring and management of devices becomes much easier, and the way it is growing in all aspects is one of the best reasons too. The evolution of the SL1 platform in autonomous monitoring and management is quite appreciable.
Good: stable system with a low error rate; easy to use for simple use cases.
Bad: the UI is not very clear for complex usage; the mobile view (when logged in from a phone) is bad; there is no library for .NET.
The core functions are there. The complexity is due to the complexity of the space. The score is based on comfort (I no longer notice the legacy UI) and the promise that I see in the 8.12 Unified UI (a vast improvement). It is also based on the fact that with 8.12 you can now do everything in the new UI, but you still have the legacy UI as a fallback (which should now be unnecessary for new installations).
When there is an issue, it's a win if one can easily identify the root cause. To do that, the tool should allow the user to dig deep with multiple data points, compare the data, and identify the anomaly. For this use case, Splunk o11y is a good place to drive from.
SL is always there and online when you need to get info from it. The only times SL was not available in our own data center were when network links from outside the data center were down, and those links were not in our control. Having a central database with people accessing it from all over the world may put a bit of a constraint on dashboard performance when reports get generated, but that is few and far between.
ScienceLogic SL1's architecture helps the platform deliver top-notch performance in every respect; everything from data collection to reporting happens very smoothly. With the new user interface, pages load much faster. Individual appliances carrying out individual tasks ensure things work without lag. Integration with the ticketing tool (SNOW) is well managed by ScienceLogic; no issues or significant delays have been observed while interacting with an external tool.
So far, my overall experience has been good, except for a couple of use cases. The support team is knowledgeable, technically sound, and efficient. However, when support escalates to engineering, the issue gets stuck and takes months to resolve.
It was good. Do the online training first and understand it; you will get the most out of the in-person training that way. This also takes you to an advanced level, which is very good. The training has been overhauled once again, along with new products coming in such as Zebrium / Skylar, so it is worth going through again if it has been a while since you first did it.
There are a lot of educational materials and courses on the SL1 training site (Litmos university). However, the recording quality is sometimes not very good; the screen resolution is low. There is a lack of professional (rather than user-oriented) documents, there are mistakes in the documentation, and the education is not well structured.
Implementation is smooth if we just need to support the out-of-the-box features available in ScienceLogic. For any custom requirement, having to go to SL1 Professional Services is the worst part of procuring this suite. More often than not, SL1 Professional Services also asks you to raise a feature request. So you subscribe to Professional Services only to hear back from them that "this feature is not supported and needs a separate feature request". At times frustrating.
ScienceLogic SL1 is very user friendly, and it's really easy to navigate between functions. I would recommend ScienceLogic SL1 to anyone who is looking for a really useful monitoring tool and expecting an easy way of managing it.
Splunk Infrastructure Monitoring provides far superior options for anybody using a complex hybrid multi-cloud environment and allows both your SOC and NOC to work together on the same data while driving their own insights. We found other products are still in the old world view of servers and agents residing together within a single data centre, but modern apps are no longer like this.
Our deployment model is vastly different from product expectations. Our global / internal monitoring footprint is 8 production stacks in dual data centers, with 50% collection capacity allocated to each data center and minimal numbers of collection groups. General Collection is our default collection group. Special Collection is for monitoring our ASA and other hardware that cannot be polled by a large number of IP addresses, so this collection group is usually 2 collectors. Because most of our stacks are in different physical data centers, we cannot use the provided HA solution. We have to use the DR solution (DRBD + CNAMEs). We routinely test power in our data centers (yearly). Because we have to use DR, we have a hand-touch to flip nodes and change the DNS CNAME for half of the outages (by design). When an outage is planned, we do this ahead of time so that we don't care that the Secondary has dropped away from the Primary. Hopefully, we'll be able to find a way to meet our constraints, improve our resiliency, and reduce our hand-touch in future releases. For now, this works for us and our complexity. (I hear that the HA option is sweet. I just can't consume that.)
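To make the hand-touch concrete, a small check like the following can confirm which database appliance the DR CNAME currently resolves to before and after a planned flip. This is a generic DNS check using dnspython, not an SL1 feature, and all hostnames are placeholders.

```python
# Hypothetical sketch: verify which node the DR CNAME points at during a
# planned DRBD failover. Hostnames are placeholders; this is plain DNS,
# not anything provided by SL1 itself.
import dns.resolver  # pip install dnspython

DR_CNAME = "sl1-db.example.com"                              # placeholder CNAME
NODES = {"sl1-db-a.example.com.", "sl1-db-b.example.com."}   # placeholder DRBD pair

def active_node(cname: str) -> str:
    """Return the canonical target the CNAME currently resolves to."""
    answer = dns.resolver.resolve(cname, "CNAME")
    target = str(answer[0].target)
    if target not in NODES:
        raise RuntimeError(f"{cname} points at an unexpected target: {target}")
    return target

if __name__ == "__main__":
    print(f"{DR_CNAME} currently resolves to {active_node(DR_CNAME)}")
```

A check like this can be run before flipping and again afterwards, so the manual CNAME change is at least verified rather than assumed.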
Once a powerpack is developed and configured for a device for one customer, it is easy to deploy the same powerpack on a second customer estate and configure specifically for that customer without having to reinvent the powerpack. This saves time and therefore money.
Once the customer estate tuning is complete, the Operations team has come to trust the alerts. This is especially true when transient or self-correcting alerts are automatically cleared without ops team involvement, but a record is still available for audit and debugging purposes. This saves time and therefore money.
When set up correctly, it provides good visibility into applications, devices, and whole customer estates. This saves time and therefore money when issues arise.