Overall Satisfaction with SQL Sentry Performance Advisor
Our company provides web site hosting. We use SQL Sentry Performance Advisor to monitor the production SQL Server instances supporting our hosting environment. Each of our hosting SQL Servers supports between 10 and 40 different application databases. The primary driver of the workload on those databases is the amount of traffic our hosted web sites receive. At any given time, traffic can spike on a specific hosted web site, which causes its corresponding database's workload to spike and can create issues for the entire SQL Server instance.
Pros:
- Monitors many internal SQL Server metrics that can provide early warning of potential issues.
- Graphs all metrics over configurable time frames with a comparable baseline for the same time frame. The baseline feature is really helpful for understanding how metrics have changed relative to what is "normal". The baseline is configurable, so you can compare against yesterday, the same day of the previous week, or a statistically generated baseline. Because the workload on our application databases varies greatly between business hours and non-business hours, comparing to the same day of the previous week is especially helpful.
- Top consuming queries - SQL Sentry captures the top consuming queries. This helps identify which database, and which function within a database, is consuming the most CPU or disk I/O.
- Query plans - when SQL Sentry captures a top consuming query, it also captures the associated query plan. This is very helpful for figuring out why a query is consuming so many resources.
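The "same day last week" baseline comparison described above can be sketched in a few lines. This is a hypothetical illustration of the general idea, not SQL Sentry's actual algorithm: flag the current reading when it deviates from last week's value at the same time by more than a tolerance.

```python
# Hypothetical sketch of a "same day last week" baseline check.
# Not SQL Sentry's implementation; the tolerance value is an assumption.
def deviates_from_baseline(current, baseline, tolerance=0.25):
    """Return True if current differs from baseline by more than the
    given fractional tolerance (e.g. 0.25 = 25%)."""
    if baseline == 0:
        return current != 0
    return abs(current - baseline) / baseline > tolerance

# Batch requests/sec now vs. the same hour last week (made-up numbers).
print(deviates_from_baseline(1400, 1000))  # True  - 40% above last week
print(deviates_from_baseline(1050, 1000))  # False - within 25% of last week
```

In practice you would run a check like this per metric and per time slot, which is why a configurable baseline (yesterday vs. same day last week) matters for workloads that differ between business and non-business hours.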
Cons:
- SQL Sentry is a standalone solution. There is no direct correlation between SQL Sentry events and application exception events or web server events; all correlation has to be done manually. This isn't really a problem with SQL Sentry itself so much as the nature of using any standalone monitoring tool.
- Using SQL Sentry for reporting can be slow. SQL Sentry gathers a lot of data, all of which is written into a SQL database, and reporting pulls from that same database. If you pull data for a long time range, the resulting report can take a long time to load.
- Data aggregation over time - when you run a report on a short time range you have very detailed data available, but over a longer time range the data is aggregated and specific anomalies get watered down. For example, if you run a report over a 10 minute period you will see data at roughly 1 second granularity, which makes it really clear where there was a spike in activity. If you run a report over a 24 hour period you will only see general trends, because the data is at about 5 minute granularity. There may be a time period where you see a slight rise, and you have to zoom in on that specific time frame to see more granular data. On the plus side, SQL Sentry makes zooming in and out on different time ranges very easy.
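The watering-down effect described in the last con is just arithmetic. A short sketch (with made-up numbers, independent of how SQL Sentry actually aggregates) shows why a brief spike nearly disappears at 5 minute granularity:

```python
# Illustrates how averaging fine-grained samples into coarse buckets
# dilutes a short spike. Numbers are invented for the example.
def aggregate(samples, bucket_size):
    """Average consecutive samples into non-overlapping buckets."""
    return [
        sum(samples[i:i + bucket_size]) / bucket_size
        for i in range(0, len(samples) - bucket_size + 1, bucket_size)
    ]

# 600 one-second CPU samples at a steady 10%, with a 5 second spike to 90%.
samples = [10.0] * 600
for i in range(300, 305):
    samples[i] = 90.0

print(max(samples))                # 90.0 - the spike is obvious at 1 s granularity
coarse = aggregate(samples, 300)   # ~5 minute buckets
print(round(max(coarse), 1))       # 11.3 - the spike is almost invisible
```

A 5 second spike to 90% averaged across a 300 second bucket barely lifts the bucket above baseline, which is exactly the "slight rise" you have to zoom into before the real anomaly becomes visible.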
We looked at Dell Foglight for SQL Server (previously a Quest product) as well as Microsoft System Center. SQL Sentry was lower cost than both. Foglight and System Center promise more than just SQL Server monitoring; they offer an entire platform for monitoring all infrastructure. SQL Sentry only monitors SQL Server, but it does a good job of it.
SQL Sentry is a cost effective tool for monitoring mission critical database servers when you don't have a team of deeply experienced DBAs. It effectively allows you to buy DBA troubleshooting skills. You still need engineers with a good understanding of SQL Server, but SQL Sentry gives them the tools to troubleshoot SQL Server performance problems.