ACCELQ vs. OpenText Silk Central

Overview
ACCELQ
  Rating: Score 8.5 out of 10
  Most Used By: N/A
  Product Summary: ACCELQ is an agile quality management platform that helps users achieve continuous delivery for web, mobile, manual testing, and APIs. It can be used to write and manage manual test cases for functionality that may be too fluid for automation.
  Starting Price: N/A
OpenText Silk Central
  Rating: Score 7.0 out of 10
  Most Used By: N/A
  Product Summary: Formerly from Micro Focus and earlier from Borland, unified test management with OpenText™ Silk Central drives reuse and efficiency. It gives users the visibility to control application readiness.
  Starting Price: N/A
Pricing
Editions & Modules
  ACCELQ: No answers on this topic
  OpenText Silk Central: No answers on this topic
Pricing Offerings (ACCELQ / OpenText Silk Central)
  Free Trial: No / No
  Free/Freemium Version: No / No
  Premium Consulting/Integration Services: No / No
  Entry-level Setup Fee: No setup fee / No setup fee
  Additional Details: None / None
Features
Test Management
Comparison of Test Management features of ACCELQ and OpenText Silk Central
ACCELQ: not rated
OpenText Silk Central: 8.0 (1 rating), 1% below category average
  Centralized test management: ACCELQ not rated (0 ratings) / OpenText Silk Central 10.0 (1 rating)
  Manage test hosts and schedules: ACCELQ not rated (0 ratings) / OpenText Silk Central 7.0 (1 rating)
  Map tests to user stories: ACCELQ not rated (0 ratings) / OpenText Silk Central 9.0 (1 rating)
  Test execution reporting: ACCELQ not rated (0 ratings) / OpenText Silk Central 6.0 (1 rating)
Best Alternatives
Small Businesses
  ACCELQ: BrowserStack (Score 8.6 out of 10)
  OpenText Silk Central: BrowserStack (Score 8.6 out of 10)
Medium-sized Companies
  ACCELQ: ReadyAPI (Score 7.0 out of 10)
  OpenText Silk Central: OpenText ALM/Quality Center (Score 8.6 out of 10)
Enterprises
  ACCELQ: ignio AIOps (Score 8.1 out of 10)
  OpenText Silk Central: OpenText ALM/Quality Center (Score 8.6 out of 10)
User Ratings
Likelihood to Recommend
  ACCELQ: 7.0 (1 rating)
  OpenText Silk Central: 7.0 (1 rating)
User Testimonials
Likelihood to Recommend
ACCELQ
ACCELQ can support multiple technologies such as web, mobile, API, and mainframe. It's also suited for SaaS solutions such as Salesforce and addresses challenges such as dynamic HTML. It's easy to set up, onboarding is straightforward, and the overall lead time is comparatively short. Execution results are captured with screenshots, and it's easy to debug errors. It has integrations with leading cloud-based desktop and mobile device farm services such as Sauce Labs, BrowserStack, etc. ACCELQ is not developer-friendly, however, and hence adoption for continuous integration scenarios is very limited. If you are using a different test management solution, the integration between ACCELQ and that tool needs to be built, which requires additional development effort, and it's buggy too.
Read full review
OpenText
We didn't just select Borland Silk Central randomly. In the selection process, we evaluated a total of 26 test management tools available in the market. We sent surveys to all potential users in the department to collect their wish lists for our next test management tool, converted them into a criteria list, and used that list to evaluate all 26 tools. We narrowed the candidates down to five and organized a small committee to pick the final three. Top management then checked their price tags and selected Borland Silk Central. Based on this evaluation process, I would say Borland Silk Central is suitable for an organization that has no more than 60 testers, needs both manual and automated tests, needs online support, needs a low learning curve, and has a limited budget. My personal view is that this tool strikes the right balance among ease of use, budget, and support.
Read full review
Pros
ACCELQ
  • Scriptless, and hence test creation is easy.
  • Maintenance of the scripts is easy.
  • The learning curve is small.
Read full review
OpenText
  • Borland Silk Central makes it easy for users to associate test requirements, test cases, execution plans, and test reports together. Each asset (test case, requirement, etc.) provides links that let users jump to other assets in a click, and users can jump back and forth between two assets.
  • Borland Silk Central is also good at test automation. Although Micro Focus does provide a client tool for test automation, users don't really need it to automate the tests. In our case, we use Python to automate the tests and a batch file to launch them, and in Borland Silk Central we just call that batch file from the server side. The test results are automatically fed back to the Silk server (a sketch of this launch pattern follows this review excerpt).
  • Micro Focus also publishes the schema of the database behind Borland Silk Central, so it is very easy to extend its functionality beyond its original design. Moreover, because the schema is published, we can easily retrieve and process its data for business intelligence purposes.
Read full review
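For context on the launch pattern described above, here is a minimal, hypothetical sketch of the two pieces the reviewer mentions: a Python (pytest-style) test module and the batch command a test management server could invoke to run it and collect a JUnit-style XML report. The file names, the use of pytest, and the report path are illustrative assumptions, not details confirmed by the review.

```python
# test_login.py -- hypothetical pytest module standing in for the Python-automated
# tests the reviewer describes; the application call and assertion are placeholders.

def test_login_page_returns_ok():
    # A real suite would drive the application under test here
    # (for example via Selenium or an HTTP client) and assert on the outcome.
    status_code = 200  # placeholder for the real call
    assert status_code == 200

# A batch file such as run_tests.bat (hypothetical) could then contain a single line:
#
#     python -m pytest test_login.py --junitxml=results\junit.xml
#
# so the test management server only needs to execute the batch file and read the
# generated XML report to record pass/fail results.
```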
Cons
ACCELQ
  • The tool is not developer-friendly, and hence adoption among developers is low.
  • The tool does not have an admin console to manage users centrally.
  • There are different types of licensing, all user-based, which makes it pricey.
Read full review
OpenText
  • On the other hand, Borland Silk Central's plugins for third-party tools are poorly implemented. In our case, the plugins for JIRA have a lot of limitations and were almost unusable in our test environment. (They did improve the plugins a little bit later, however.)
  • The tech support people are located in the UK, so it is frequently difficult to get hold of them due to the time zone difference. Also, most of them obviously don't have enough experience and sometimes drove us nuts in emergency situations.
  • The last thing I feel is that Micro Focus possibly doesn't provide enough manpower to maintain Borland Silk Central. There are tons of feature requests for Borland Silk Central pending. Although they ship hot fixes every few months, they don't work through these requests quickly enough.
Read full review
Alternatives Considered
ACCELQ
When we implemented ACCELQ, we conducted POCs with many similar solutions. Among the tools we evaluated at that time, ACCELQ stood out against Tricentis Tosca and QMetry Automation Studio. Subject7 did better, but it was still in the nascent stages of building out its tool, and hence we did not pick it.
Read full review
OpenText
We had evaluated, for example:
  • IBM Collaborate Suite - it is way too complicated and the learning curve is too high.
  • HP Quality Center - it is OK but a little bit expensive.
  • TestLink, Squash TM and other open source tools: The capabilities of open source tools just can't compare to commercial tools. Although we can modify the source code to improve the tool, we are just test engineers, not developers.
  • Zephyr: Our testers simply didn't like its UI - too weird.
Read full review
Return on Investment
ACCELQ
  • Overall adoption of an automation tool went up.
  • Migration of existing Selenium scripts to ACCELQ was relatively easy and required less effort.
  • The lack of an overall admin console makes managing agents across different executions difficult.
  • Integration between ACCELQ and any test management tool can be difficult and buggy in most cases, even though it can be coded.
Read full review
OpenText
  • Borland Silk Central provides a centralized test platform for multiple test departments in the company, so now all of the departments know what each of them is doing. In turn, all departments can coordinate with each other to reduce the duplicated test items and increase the overall test efficiency.
  • Also, Borland Silk Central enables users to publish the test procedure (steps) of each test case, so all users can see how each test case is performed. Unlike before, test procedures no longer reside in different places, from Excel to Google Drive or other odd locations.
  • Also, because all departments are using Borland Silk Central, testers across departments communicate better about testing methods. In the past, departments used different test management tools, and it was hard for testers to understand each other's testing methods.
  • Finally, because all departments share Borland Silk Central, they also share the same set of reports published to Atlassian Confluence, so now they use the same set of reports to evaluate test progress.
Read full review