For an organisation that has completely adopted the SAFe structure, including its naming terminology, it is less appropriate. Apart from that, it can suit any organisation out there, and it can solve all your problems one way or another through customisation. It is a robust and highly scalable solution that supports all the business needs and greatly improves productivity and visibility.
We didn't just select Borland Silk Central randomly. In the selection process, we evaluated a total of 26 test management tools available on the market. We sent surveys to all potential users in the department to collect their wish lists for our next test management tool, converted them into a criteria list, and used that list to evaluate all 26 tools. We narrowed the candidates down to five and organized a small committee to pick the final three. Top management then checked their price tags and selected Borland Silk Central. Based on this evaluation process, I would say Borland Silk Central is suitable for an organization that has no more than 60 testers; needs both manual and automated tests; needs online support; needs a low learning curve; and has a limited budget. My personal view is that this tool strikes the right balance among ease of use, budget, and support.
If you have a mix of automated and manual test suites, HP ALM is the best tool to manage that. It integrates very well with HP automation tools like HP Unified Functional Testing and HP LoadRunner. Automated suites can be executed and reports maintained automatically. It also classifies which test suites are manual and which are automated, so managers can see the progress being made in moving from manual to automated suites. In HP ALM, all the functional test suites, performance test suites, and security suites can be defined, managed, and tracked in one place.
It is a wonderful tool for test management. Whether you want to create test cases or import them, from execution to snapshot capturing, it supports all activities very well. The linking of defects to test runs is excellent. Any change to a mandatory field or to the status of a defect triggers an e-mail that is sent automatically to the user the defect is assigned to.
It also supports DevOps implementation by interacting with development tool sets such as Jenkins and Git. It also brings in team collaboration by supporting collaboration tools like Slack and Hubot.
This tool can integrate with any environment and any source control management tool, bringing in changes and creating traceability links from source control changes to requirements to tests across the SDLC.
Borland Silk Central is good at letting users associate test requirements, test cases, execution plans, and test reports together. Each asset (test case, requirement, etc.) provides links that let users jump to related assets with a single click, and users can jump back and forth between two assets.
Borland Silk Central is also good at test automation. Although Micro Focus does provide a client tool for test automation, the users don't really need it to automate the tests. In our case, we use Python to automate the tests and a batch file to launch them, and in Borland Silk Central we just call that batch file from the server side. The test result is automatically fed back to the Silk Central server.
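To give a sense of how lightweight this setup is, here is a minimal sketch of the kind of runner our batch file launches. The file names, the use of pytest, and the reliance on the process exit code to signal pass/fail are illustrative assumptions rather than anything Silk Central mandates; the exact result hand-off depends on the execution type you configure on the server.

    # run_tests.py -- illustrative runner invoked by the batch file that
    # Borland Silk Central calls on the server side (names are examples).
    import subprocess
    import sys

    def main():
        # Run the Python test suite; pytest exits non-zero when any test fails
        # and writes a JUnit-style XML report we keep for our own records.
        result = subprocess.run(
            [sys.executable, "-m", "pytest", "tests/", "--junitxml=results.xml"]
        )
        # Propagate the exit code so the batch file (and whatever launched it)
        # can tell whether the run passed or failed.
        sys.exit(result.returncode)

    if __name__ == "__main__":
        main()

The batch file itself is then a one-liner that calls this script, and that batch file is what we point Silk Central at.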
Micro Focus also publishes the schema of the database behind Borland Silk Central, so it is very easy to extend its functionality beyond the original design. Moreover, because the schema is published, we can easily retrieve and process its data for business intelligence purposes.
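As a rough illustration of the kind of business intelligence extraction this enables, the sketch below pulls per-day pass/fail counts straight from the database. The table and column names (TestRuns, RunDate, Status) and the connection string are placeholders made up for the example; the real names come from the schema documentation Micro Focus publishes and from your own database setup.

    # silk_bi_extract.py -- illustrative pull of test-run data for BI reporting.
    # Table and column names are placeholders; consult the published schema.
    import pyodbc

    CONN_STR = (
        "DRIVER={SQL Server};SERVER=silkdb;DATABASE=SilkCentral;"
        "Trusted_Connection=yes"
    )

    def fetch_daily_results():
        conn = pyodbc.connect(CONN_STR)
        try:
            cursor = conn.cursor()
            # Example query: count of test executions per day and status.
            cursor.execute(
                "SELECT CAST(RunDate AS DATE) AS day, Status, COUNT(*) AS runs "
                "FROM TestRuns "
                "GROUP BY CAST(RunDate AS DATE), Status "
                "ORDER BY day"
            )
            return cursor.fetchall()
        finally:
            conn.close()

    if __name__ == "__main__":
        for day, status, runs in fetch_daily_results():
            print(day, status, runs)

We feed output like this into our reporting pipeline rather than reading it out of the tool by hand.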
The requirements module is not as user-friendly as other applications, such as Blue Bird, so managing requirements is usually done in another tool. However, having the requirements in ALM is important to ensure traceability to tests and defects.
Reporting across multiple ALM repositories is not supported within the tool; only graphs are included in ALM's functionality. Due to size considerations, consolidating everything into one or two projects is not a good solution either. Instead, we have started leveraging the template functionality within ALM and are integrating with a third-party reporting tool to work around this issue.
.NET (not Octane) requires a package for deployment to machines without administrative rights. Every time there is a change, a new package must be created, which increases the time to deploy. It also forces us to wait until multiple patches have been provided before updating production.
On the other hand, the Borland Silk Central plugins for third-party tools are poorly programmed. In our case, the plugins for JIRA had a lot of limitations and were almost unusable in our test environment. (They did improve the plugins a little bit later, however.)
The tech support people are located in the UK, so it is frequently difficult to get hold of them due to the different time zones. Also, most of them obviously don't have enough experience and have sometimes driven us nuts in emergency situations.
The last thing I feel is that Micro Focus possibly doesn't provide enough manpower to maintain Borland Silk Central. There are tons of pending feature requests for Borland Silk Central. Although they release hot fixes every few months, they don't work through these requests quickly enough.
Because it lets me track the test cases with detailed scenarios, clearly separated into folders. Also, the defect filter helps me see only the defects assigned to a particular area of interest. The availability of reports lets me see the essential fields where I might be missing data and helps me work on those instead of having to go through everything.
It is a great tool; however, it got this rating because the learning takes a lot longer than with other tools. There is no mobile version of ALM, not even one with just a project summary view. I believe ALM is well capable of integrating with other analytics tools that could help predict business solutions based on current and past project data. This data is held in ALM but has no use beyond human reading and tracking project progress. ALM looks like a steady platform that I believe can handle more dynamic functionality. You could add an internal communication platform that is not third party and limit that communication tool to specific project members.
We have other tools in our organization, like Atlassian JIRA and Microsoft Team Foundation Server, which are very capable but much narrower in their approach and feature set, and they do not come even close to some of the core capabilities of HP ALM. HP ALM is the "System of Record" in our organization. It gives visibility into an artifact throughout the delivery chain, which cuts down unnecessary bottlenecks and noise during releases.
IBM Collaborate Suite: it is way too complicated and the learning curve is too steep.
HP Quality Center: it is OK but a little bit expensive.
TestLink, Squash TM and other open source tools: The capabilities of open source tools just can't compare to those of commercial tools. Although we could modify the source code to improve the tools, we are just test engineers, not developers.
Zephyr: Our testers simply didn't like its UI - too weird.
Borland Silk Central provides a centralized test platform for multiple test departments in the company, so now all of the departments know what each of the others is doing. In turn, all departments can coordinate with each other to reduce duplicated test items and increase overall test efficiency.
Also, Borland Silk Central enables users to publish the test procedure (steps) of each test case, so everyone can see how each test case is performed. It is not like before, when the test procedures resided in different places, from Excel to Google Drive or other odd locations.
Also, because all departments are using Borland Silk Central, the testers across departments communicate better about testing methods. In the past, each department used a different test management tool, and it was hard for the testers to understand each other's testing methods.
Finally, because all departments share Borland Silk Central, they also share the same set of reports published to Atlassian Confluence, so now they use the same reports to evaluate test progress.