Azure DevOps (formerly Visual Studio Team Services, or VSTS) is an agile development product that extends the Microsoft Visual Studio architecture. Azure DevOps includes software development, collaboration, and reporting capabilities.
Pricing starts at $2 per GB (first 2 GB free).
LoadRunner Professional
Score 8.7 out of 10
OpenText LoadRunner Professional is a solution that simplifies performance load testing for colocated teams. Its project-based capabilities help teams quickly identify abnormal application behavior.
Pricing
Azure DevOps Editions & Modules:
Azure Artifacts: $2 per GB (first 2 GB free)
Basic Plan: $6 per user per month (first 5 users free)
Azure Pipelines - Self-Hosted: $15 per extra parallel job (1 free parallel job with unlimited minutes)
Azure Pipelines - Microsoft Hosted: $40 per parallel job (1,800 minutes free with 1 free parallel job)
Basic + Test Plan: $52 per user per month
OpenText LoadRunner Professional Editions & Modules: no answers on this topic
Pricing Offerings (Azure DevOps / LoadRunner Professional)
Free Trial: No / No
Free/Freemium Version: No / No
Premium Consulting/Integration Services: No / No
Entry-level Setup Fee: No setup fee / No setup fee
Additional Details: — / —
Azure DevOps works well when you’ve got larger delivery efforts with multiple teams and a lot of moving parts, and you need one place to plan work, track it properly, and see how everything links together. It’s especially useful when delivery and development are closely tied and you want backlog items, code and releases connected rather than spread across tools. Where it’s less of a fit is for small teams or simple pieces of work, as it can feel like more setup and process than you really need, and non-technical users often struggle with the interface. It also isn’t great if you want instant, easy programme-level views or a very visual planning experience without putting time into configuration.
Micro Focus LoadRunner and its suite of tools, specifically VuGen, works wonderfully for us for all web, HTTP/HTTPS, and web service calls. We've been able to build tests for nearly any scenario we need with relative ease. As long as we have crafted requirements for our scenarios and scripts to manage scope, we've had high success with scripting and data-driving. Our main tests are web service calls, typically chained together to form a full scenario, with transactions measuring the journey, or a similar measure-along-the-way journey through a browser. For web services we use VuGen; for browser-based tests we've shifted to TruClient. I have had little to no experience scripting against a thick client where a UI-driven test would be required; I know it's possible, but quite costly due to the need to run the actual desktop client to drive tests. We've been fortunate enough to leverage HTTP calls to represent client traffic.
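As a rough illustration of that chained web-service pattern, here is a minimal VuGen-style Action() sketch in C. The endpoints, JSON bodies, parameter names, and transaction names are hypothetical (not from the reviewer's scripts); the sketch only shows two calls chained into one scenario, a transaction timing each step, and a correlated token passed between them.

```c
/*
 * Minimal VuGen-style Action() in C (web/HTTP protocol).
 * Endpoints, JSON bodies, parameter names, and transaction names are
 * hypothetical; the point is chaining two web service calls into one
 * scenario, timing each step with a transaction, and passing a
 * correlated value between them.
 */
Action()
{
    /* Capture the session token from the login response so the next
       call in the chain can reuse it. */
    web_reg_save_param("SessionToken", "LB=\"token\":\"", "RB=\"", LAST);

    lr_start_transaction("01_Login");
    web_custom_request("Login",
        "URL=https://example.com/api/login",   /* hypothetical endpoint */
        "Method=POST",
        "EncType=application/json",
        "Body={\"user\":\"{pUser}\",\"pass\":\"{pPass}\"}", /* data-driven parameters */
        LAST);
    lr_end_transaction("01_Login", LR_AUTO);

    /* Reuse the captured token on the next request in the journey. */
    web_add_header("Authorization", "Bearer {SessionToken}");

    lr_start_transaction("02_PlaceOrder");
    web_custom_request("PlaceOrder",
        "URL=https://example.com/api/orders",  /* hypothetical endpoint */
        "Method=POST",
        "EncType=application/json",
        "Body={\"item\":\"{pItem}\",\"qty\":1}",
        LAST);
    lr_end_transaction("02_PlaceOrder", LR_AUTO);

    return 0;
}
```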
I did mention it has good visibility in terms of linking, but sometimes items do get lost, so if there was a better way to manage that, that would be great.
The wiki is not the prettiest thing to look at, so it could have refinements there.
With new patches and releases, HP LoadRunner sometimes no longer supports older versions of various protocols, such as Citrix, which makes tasks time-consuming when we have to use older versions of LoadRunner in some cases. It should continue supporting older protocol versions when upgrading.
Configuring HP LoadRunner over a firewall involves a lot of configuration and can be troublesome. There should be a script (a PowerShell script for Windows or a shell script for Linux users) to make it easier and less painful.
I would like the VuGen Runtime Viewer in HP LoadRunner to use the browser I selected in the run-time configuration, so that replay feels more like a real user.
Licensing costs are very high when we need to test an application for a specific group of users.
I don't think our organization will stray from using VSTS/TFS, as we are now looking to upgrade to the 2012 version. Since our business is software development and we want to meet the requirements of CMMI to deliver consistent, high-quality software, this SDLC management tool is here to stay. In addition, our company uses a lot of Microsoft products, such as Office 365, ASP.NET, etc., and since VSTS/TFS has proved itself invaluable to our own processes and is within the Microsoft family of products, we will continue to use VSTS/TFS for a long, long time.
It's a great help to get more information about new feature releases and stay updated on what the dev team is working on. I like how easy it is to just log in and read through the work items. Each work item has basic details: Title, Description, Assigned To, State, Area (what it belongs to), and Iteration (when it's worked on). Work items move through different states (New → Discovery → Ready for Prod → etc.).
When we've had issues, both Microsoft support and the user community have been very responsive. DevOps has an active developer community and, frankly, you can find most of your questions already asked and answered there. Microsoft also does a better job than most software vendors I've worked with at creating detailed and frequently updated documentation.
Customer service is not that great. It's difficult to get hold of someone when an issue needs to be addressed urgently, and there is no online chat service readily available.
Microsoft Planner is used by project managers and IT service managers across our organization for task tracking and running their team meetings. Azure DevOps works better than Planner for software development teams but may be too complex for non-software teams or more business-focused projects. We also use ServiceNow for IT service management; it provides better analysis and tracking of IT incidents, as Azure DevOps is more suited to development and project work for dev teams.
We have saved a ton of time not calculating metrics by hand.
We no longer spend time writing out cards during planning; everything goes straight to the board.
We no longer maintain separate documents to track overall department goals. We were able to create customized icons at the department level that let us track each team's progress against our department goals.
The scripts created with the traditional web/HTTP protocol are not robust, so re-scripting is required after almost every code drop. Troubleshooting and fixing the issues takes more time, so in most cases we re-script to keep it simple and save time.
In an ideal world you would rather spend more time testing than scripting; in that case you could mostly use the Ajax TruClient protocol. This type of script only fails when an object in the application is removed or changed completely. This way of scripting saves you more time and helps you maintain the scripts with less re-work on each release. In the long run you will have a better ROI when you use the Ajax TruClient protocol for scripting.
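To make that maintenance trade-off concrete, here is a hedged, hypothetical web/HTTP-protocol fragment in C showing the kind of text-bound correlation that tends to break when markup changes between releases; the page, form fields, and boundaries are illustrative only, not from the reviewer's scripts. TruClient sidesteps this class of breakage by identifying UI objects rather than matching response text.

```c
/*
 * Hypothetical web/HTTP-protocol fragment in C illustrating why these
 * scripts break after code drops: the correlation below is bound to
 * the literal response text. If a release renames "viewState" or
 * changes the quoting, the capture fails and the script must be
 * re-worked.
 */
Action()
{
    /* Brittle: left/right boundaries depend on the exact markup. */
    web_reg_save_param("ViewState",
        "LB=name=\"viewState\" value=\"",
        "RB=\"",
        "NotFound=error",   /* fail the iteration if the boundary changed */
        LAST);

    lr_start_transaction("01_OpenOrderForm");
    web_url("OrderForm",
        "URL=https://example.com/orders/new",   /* hypothetical page */
        "Mode=HTML",
        LAST);
    lr_end_transaction("01_OpenOrderForm", LR_AUTO);

    /* The captured value is replayed here; any upstream change cascades. */
    lr_start_transaction("02_SubmitOrder");
    web_submit_data("SubmitOrder",
        "Action=https://example.com/orders",    /* hypothetical form action */
        "Method=POST",
        ITEMDATA,
        "Name=viewState", "Value={ViewState}", ENDITEM,
        "Name=item",      "Value=ABC-123",     ENDITEM,
        LAST);
    lr_end_transaction("02_SubmitOrder", LR_AUTO);

    return 0;
}
```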