ACCELQ is an agile quality management platform that helps users achieve continuous delivery for web, mobile, API, and manual testing. It can be used to write and manage manual test cases for functionality that may be too fluid for automation.
OpenText Silk Central
Score 7.0 out of 10
Formerly from Micro Focus, and earlier from Borland, unified test management with OpenText™ Silk Central drives reuse and efficiency. It gives users the visibility to control application readiness.
Pricing

Editions & Modules
  ACCELQ:                No answers on this topic
  OpenText Silk Central: No answers on this topic

Pricing Offerings
                                            ACCELQ         OpenText Silk Central
  Free Trial                                No             No
  Free/Freemium Version                     No             No
  Premium Consulting/Integration Services   No             No
  Entry-level Setup Fee                     No setup fee   No setup fee
  Additional Details                        —              —
Features

Test Management
Comparison of the Test Management features of ACCELQ and OpenText Silk Central.
ACCELQ supports multiple technologies, including web, mobile, API, and mainframe. It is also well suited to SaaS solutions such as Salesforce, and it handles challenges such as dynamic HTML. Setup and onboarding are easy, and the overall lead time is comparatively short. Execution results are captured with screenshots, which makes errors easy to debug, and the product integrates with leading cloud-based desktop and mobile device farm services such as Sauce Labs and BrowserStack.

On the other hand, ACCELQ is not developer friendly, so adoption in continuous integration scenarios is very limited. If you are using a different test management solution, the integration between ACCELQ and that tool needs to be built, which requires additional development effort, and the result is buggy too.
We didn't just select Borland Silk Central randomly. In the selection process, we evaluated a total of 26 test management tools available on the market. We sent surveys to all potential users in the department to collect their wish lists for our next test management tool, converted those into a list of criteria, and used that list to evaluate all 26 tools. We narrowed the candidates down to five and organized a small committee to pick the final three. Top management then checked their price tags and selected Borland Silk Central. Based on this evaluation process, I would say Borland Silk Central suits an organization that has no more than 60 testers; needs both manual and automated tests; needs online support; needs a low learning curve; and has a limited budget. My personal view is that this tool strikes the right balance among ease of use, budget, and support.
Borland Silk Central makes it easy for users to associate test requirements, test cases, execution plans, and test reports with one another. Each asset (test case, requirement, etc.) provides links that let users jump to related assets in a click, and users can jump back and forth between two assets.
Borland Silk Central is also good at test automation. Although Micro Focus does provide a client tool for test automation, users don't really need it to automate their tests. In our case, we use Python to automate the tests and a batch file to launch them; in Borland Silk Central we simply call that batch file from the server side. The test results are automatically fed back to the Silk Central server.
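To give a rough idea of that setup (the file and test names here are illustrative, not our actual suite), the batch file Silk Central invokes is essentially a one-line wrapper around a Python runner, and the runner's exit code is what gets reported back as pass or fail:

    # run_tests.py -- minimal sketch of the Python runner our batch file launches.
    # The batch file that Silk Central calls is essentially one line:
    #     python run_tests.py
    # Exit code 0 is reported back as a pass, anything else as a fail.
    import sys
    import unittest

    class SmokeTest(unittest.TestCase):
        def test_application_responds(self):
            # A real suite would drive the application here; this placeholder passes.
            self.assertTrue(True)

    if __name__ == "__main__":
        program = unittest.main(exit=False)
        sys.exit(0 if program.result.wasSuccessful() else 1)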
Micro Focus also publishes the schema of the database behind Borland Silk Central, so it is very easy to extend its functionality beyond the original design. Moreover, because the schema is published, we can easily retrieve and process the data for business intelligence purposes.
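As a sketch of what such an extract can look like (the DSN, credentials, and table and column names below are placeholders, not the actual published schema; take the real names from the schema documentation for your version):

    # bi_extract.py -- minimal sketch of querying the Silk Central database for BI.
    # Every identifier below is a placeholder; substitute the real names from the
    # published schema documentation.
    import pyodbc

    QUERY = """
        SELECT test_name, status, COUNT(*) AS runs
        FROM test_results  -- placeholder table name
        GROUP BY test_name, status
    """

    conn = pyodbc.connect("DSN=SilkCentral;UID=report_reader;PWD=changeme")
    try:
        for test_name, status, runs in conn.cursor().execute(QUERY):
            print(f"{test_name}: {status} x{runs}")
    finally:
        conn.close()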
On the other hand, the plugins that connect Borland Silk Central to third-party tools are programmed poorly. In our case, the JIRA plugins have a lot of limitations and were almost unusable in our test environment. (They did improve the plugins a little bit later, however.)

The tech support people are located in the UK, so it is frequently difficult to get hold of them because of the time-zone difference. Also, most of them obviously don't have enough experience and sometimes drove us nuts in emergency situations.

The last thing I feel is that Micro Focus possibly doesn't dedicate enough manpower to maintaining Borland Silk Central. There are tons of pending feature requests for it. Although there are hot fixes every few months, the team doesn't work through those requests quickly enough.
When we implemented ACCELQ, we conducted POCs with many similar solutions. Among the tools we pursued at the time, ACCELQ stood out against Tricentis Tosca and QMetry Automation Studio. Subject7 actually did better, but it was still in the nascent stages of development, so we did not pick it.
IBM Collaborate Suite: way too complicated, and the learning curve is too high.
HP Quality Center: OK, but a little bit expensive.
TestLink, Squash TM, and other open-source tools: their capabilities just can't compare to commercial tools. Although we could modify the source code to improve them, we are test engineers, not developers.
Zephyr: our testers simply didn't like its UI; it's too weird.
Borland Silk Central provides a centralized test platform for the company's multiple test departments, so all of the departments now know what the others are doing. In turn, the departments can coordinate with each other to reduce duplicated test items and increase overall test efficiency.

Also, Borland Silk Central enables users to publish the test procedure (steps) for each test case, so everyone can see how each test case is performed. It is not like before, when the test procedures resided in different places, from Excel files to Google Drive to other odd locations.

Also, because all departments use Borland Silk Central, their testers communicate better about testing methods. In the past, the departments used different test management tools, and it was hard for testers to understand each other's methods.

Finally, because all departments share Borland Silk Central, they also share the same set of reports published to Atlassian Confluence, so they now use the same reports to evaluate test progress.