Application Performance Management (APM) and Network Performance Monitoring (NPM) look quite similar at first glance and have misleadingly similar acronyms. Both technologies are designed to ensure that IT infrastructure runs smoothly and efficiently, but they are quite different disciplines and use completely different technology.
What is Application Performance Management?
APM tools are software applications that help IT administrators ensure that the enterprise applications employees rely on every day deliver a quality user experience and meet defined performance standards or KPIs. Knowing that a performance problem exists is not enough; APM can pinpoint the detailed reasons behind it.
How does APM Work?
Most APM products comprise several agents for data collection. An application agent is installed on the server running the application and sends performance data back to a central controller; it watches for abnormal behavior in metrics like RAM and CPU usage, response times, traffic spikes, availability, and uptime, and analyzes that data to understand the business impact. An end-user agent collects performance data from end-user machines, giving insight into how an application performs from the user's perspective. A database agent looks for slow-running SQL queries, a common cause of application slowdowns. The central controller aggregates all the agent data and generates reports and visualizations that help explain why an application is slow. Some products also include transaction tracing, which gives administrators code-level data on the slowest individual requests; developers often use this capability when troubleshooting their applications.
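The agent-to-controller flow described above can be sketched in miniature. This is an illustrative toy, not any vendor's API: the Controller class, the timed wrapper, and the KPI threshold are all hypothetical names invented for the example. It shows the core idea of an agent timing operations, reporting samples to a central aggregator, and the aggregator flagging KPI breaches.

```python
import time
from statistics import mean

# Hypothetical sketch of an APM controller; real products expose far richer APIs.
class Controller:
    """Aggregates response-time samples reported by agents."""
    def __init__(self):
        self.samples = {}  # operation name -> list of durations (seconds)

    def report(self, operation, duration):
        self.samples.setdefault(operation, []).append(duration)

    def breaches(self, kpi_seconds):
        """Return operations whose average response time exceeds the KPI."""
        return [op for op, times in self.samples.items()
                if mean(times) > kpi_seconds]

controller = Controller()

def timed(operation, func):
    """Agent-side wrapper: measure a call and send the timing to the controller."""
    start = time.perf_counter()
    result = func()
    controller.report(operation, time.perf_counter() - start)
    return result

# Simulate one fast and one slow operation.
timed("login", lambda: time.sleep(0.01))
timed("report-export", lambda: time.sleep(0.12))

print(controller.breaches(kpi_seconds=0.05))  # ['report-export']
```

A real application agent would attach this kind of timing automatically (via bytecode instrumentation or middleware) rather than requiring explicit wrapping.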
What is Network Performance Monitoring?
NPM tools are hardware devices capable of troubleshooting, root-cause analysis, capacity planning, SLA monitoring, and resource reporting for end users. NPM tends to provide retroactive analysis of what went wrong rather than proactive action to prevent problems from occurring in the first place. NPM tools typically collect and analyze both infrastructure data and traffic data.
How does NPM Work?
NPM tools collect infrastructure metrics via the Simple Network Management Protocol (SNMP). Many different devices, such as printers, servers, routers, and switches, are managed and monitored using this protocol. Managed devices can also be configured with SNMP agents that allow them to interface with other network components.
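SNMP exposes most traffic metrics as cumulative counters, so an NPM tool derives a usable figure by polling twice and working from the delta. A minimal sketch, assuming two readings of an interface's cumulative inbound-octets counter taken a fixed interval apart (the numbers here are made up; a real tool would fetch them with an SNMP GET):

```python
def utilisation_percent(octets_t0, octets_t1, poll_interval, link_speed_bps):
    """Convert two cumulative octet-counter readings into % link utilisation.

    SNMP counters only ever increase, so the meaningful quantity is the
    delta between polls, converted from octets to bits.
    """
    bits_transferred = (octets_t1 - octets_t0) * 8
    return 100.0 * bits_transferred / (poll_interval * link_speed_bps)

# Example: 7,500,000 octets arrived in 60 s on a 10 Mbit/s link.
u = utilisation_percent(1_000_000, 8_500_000,
                        poll_interval=60, link_speed_bps=10_000_000)
print(round(u, 1))  # 10.0
```

Production tools also have to handle counter wrap-around on 32-bit counters, which this sketch omits.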
NPM tools also collect traffic data by connecting to the network and collecting and analyzing packet data. So, what is packet data? Before transmission, data is broken down into small "packets" which can be sent back and forth more efficiently. Essentially, data is not sent across the internet as a single large file, but as a multitude of much smaller data packets, each of which is routed in the most efficient way possible (which helps with load balancing). Once the data packets reach their destination, they are reassembled into their original form. NPM tools examine these small packets of data to diagnose underlying network problems. This technology is very effective at monitoring network usage to identify whether a network is congested or operating normally. It can also identify erroneous packets and pinpoint faulty network devices.
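Each captured packet carries structured headers that a monitoring tool decodes. As a sketch of what that decoding looks like, the snippet below unpacks the fixed 20-byte IPv4 header of a raw packet using only the standard library; the sample bytes are hand-built for illustration rather than taken from a real capture.

```python
import struct

def parse_ipv4_header(raw):
    """Decode the fixed 20-byte IPv4 header of a captured packet."""
    (version_ihl, tos, total_length, ident, flags_frag,
     ttl, proto, checksum, src, dst) = struct.unpack("!BBHHHBBH4s4s", raw[:20])
    return {
        "version": version_ihl >> 4,
        "ttl": ttl,
        "protocol": proto,  # 6 = TCP, 17 = UDP
        "src": ".".join(str(b) for b in src),
        "dst": ".".join(str(b) for b in dst),
    }

# Hand-built sample header: IPv4, TTL 64, TCP, 192.168.0.1 -> 10.0.0.7.
sample = struct.pack("!BBHHHBBH4s4s", 0x45, 0, 40, 1, 0, 64, 6, 0,
                     bytes([192, 168, 0, 1]), bytes([10, 0, 0, 7]))
print(parse_ipv4_header(sample))
```

A packet-sniffing NPM tool runs this kind of parse on every frame it captures, then layers statistics (per-host volumes, error counts, retransmissions) on top.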
However, as networks became more complex, packet-sniffing NPM tools were no longer enough to provide a complete picture, so Cisco developed NetFlow for its switches and routers. NetFlow provides information on traffic flow and volume, including its origin and destination. NetFlow-enabled routers can export traffic statistics to IT professionals to help them troubleshoot issues.
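The value of flow data comes from aggregation. A minimal sketch of what a collector does with exported flow records, using made-up records shaped like the key fields a NetFlow export carries (source address, destination address, byte count):

```python
from collections import Counter

# Hypothetical flow records for illustration; a real collector would parse
# these out of NetFlow export datagrams sent by the router.
flows = [
    ("192.168.1.10", "10.0.0.5", 50_000),
    ("192.168.1.11", "10.0.0.5", 8_000),
    ("192.168.1.10", "10.0.0.9", 30_000),
]

# Aggregate bytes per source address to find the "top talkers" on the network.
bytes_by_source = Counter()
for src, dst, nbytes in flows:
    bytes_by_source[src] += nbytes

print(bytes_by_source.most_common(1))  # [('192.168.1.10', 80000)]
```

This summarized view of who is talking to whom, and how much, is what lets flow-based tools scale where full packet capture becomes impractical.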
NPM tools are essential to keeping the infrastructure working smoothly, but do not help explain why the network feels slow.
Do I need both?
The short answer is yes. Because these tools have been designed for different purposes, both are still required. A TrustRadius survey sent to APM reviewers revealed that almost half of APM users also use NPM tools. There is little doubt that APM is the more modern technology and gets much more attention as the focus on user experience becomes ever more important. APM tools align better with the cloud and provide DevOps teams vital information about how applications are performing before things go downhill.
NPM tools though are still very useful for network teams. These technologies reside in the hardware itself and provide oversight of packets flowing through the network. The ability to troubleshoot network traffic to prevent bottlenecks and optimize data flow remains an important one.
Ultimately, the buyers for these technologies are different. NPM tools are purchased by network teams, while APM is purchased by support and development teams.
Are APM and NPM offered in a single platform?
A recent trend toward convergence of these technologies has been gaining momentum, and several products now provide both APM and NPM in a single platform. While APM and NPM are very different beasts, there is some logic to the convergence argument.
Admittedly, the two technologies produce very different data from different layers of the TCP/IP stack. The network is concerned with low levels of the traditional stack, while application protocols reside at higher levels. APM is typically concerned with activities like end-user experience monitoring, transaction profiling, and analysis of application logic, which operate well above traditional NPM activities like data packet analysis and traffic flow.
But the fact that these tools are bought by different teams tends to exacerbate the schism between network and application troubleshooting. As data volumes show no inclination to stop growing, it will become harder for these respective teams to keep up with what is happening on the network and within applications as discrete activities. Ultimately, convergence, facilitated by AI technology, will bring these capabilities under one roof. AI and machine learning will not only unite currently separate parts of IT operations but also automate activities that are becoming unmanageable through human oversight alone.