Formerly a Micro Focus product and earlier a Borland one, OpenText™ Silk Central provides unified test management that drives reuse and efficiency, giving users the visibility to control application readiness.
Pricing: N/A

BitBar
Score: 6.6 out of 10
BitBar allows users to test applications across the latest and most popular real browsers and devices. Users can scale testing by increasing test coverage and decreasing test execution time, running automated tests in parallel across browsers and devices. BitBar integrates with the user's current tech stack or CI/CD pipeline. Key Features: * One cloud for all testing platforms, whether web, native, or hybrid applications. * Test an application across real…
Pricing: $39 per month
Pricing

                                           OpenText Silk Central      BitBar
Editions & Modules                         No answers on this topic   No answers on this topic

Pricing Offerings

                                           OpenText Silk Central      BitBar
Free Trial                                 No                         Yes
Free/Freemium Version                      No                         No
Premium Consulting/Integration Services    No                         No
Entry-level Setup Fee                      No setup fee               No setup fee
Additional Details                         —                          Enterprise packages are available for larger teams and customers.
Community Pulse

Features

Test Management
Comparison of the Test Management features of OpenText Silk Central and BitBar
We didn't select Borland Silk Central randomly. During the selection process we evaluated a total of 26 test management tools available on the market. We surveyed all potential users in the department to collect a wish list for our next test management tool, converted it into a list of criteria, and used that list to evaluate all 26 tools. We narrowed the candidates down to five and organized a small committee to pick the final three. Top management then compared their price tags and selected Borland Silk Central. Based on this evaluation process, I would say Borland Silk Central suits an organization that has no more than 60 testers, needs both manual and automated tests, needs online support, needs a low learning curve, and has a limited budget. My personal view is that this tool strikes a good balance among ease of use, budget, and support.
CrossBrowserTesting is a great tool to use when you have a new page or new content that you want to test on an array of devices and browsers. In today's diverse online world, it is nearly impossible to ensure optimization for every case; CBT brings you closer to that goal.
Borland Silk Central is good at letting users associate test requirements, test cases, execution plans, and test reports with one another. Each asset (test case, requirement, etc.) provides links so users can jump to related assets in a click and move back and forth between any two assets.
Borland Silk Central is also good at test automation. Although Micro Focus provides a client tool for test automation, users don't actually need it to automate tests. In our case, we automate the tests in Python and use a batch file to launch them; in Borland Silk Central we simply call that batch file from the server side, and the test results are automatically fed back to the Silk server.
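The reviewer's actual scripts are not shown, but the pattern described — a Python test script launched by a batch file, with pass/fail signalled back to the calling server through the launcher's exit code — can be sketched roughly like this. All function names and checks below are hypothetical placeholders, not the reviewer's code:

```python
# Minimal sketch of the batch-launched test pattern described above.
# The check functions are illustrative stand-ins for real tests.
import sys

def check_login_page():
    # Stand-in for a real UI or API verification.
    return True

def check_report_export():
    return True

CHECKS = [check_login_page, check_report_export]

def run_checks():
    """Run every check and return the number of failures."""
    failures = 0
    for check in CHECKS:
        ok = check()
        print(f"{check.__name__}: {'PASS' if ok else 'FAIL'}")
        if not ok:
            failures += 1
    return failures

if __name__ == "__main__":
    # A non-zero exit code is all the batch launcher (and the test
    # management server that invokes it) needs to mark the run failed.
    sys.exit(1 if run_checks() else 0)
```

A Windows batch wrapper would then only need to run the script and propagate its exit code (e.g. `python run_tests.py` followed by `exit /b %ERRORLEVEL%`), which is what makes the server-side invocation so simple.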
Micro Focus also publishes the schema of the database behind Borland Silk Central, so it is easy to extend its functionality beyond the original design. Moreover, because the schema is published, we can easily retrieve and process the data for business intelligence purposes.
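The kind of BI rollup this enables can be sketched as a plain SQL aggregation over the results tables. The table and column names below are invented for illustration (the real ones come from the published Silk Central schema), and an in-memory SQLite database stands in for the actual backend:

```python
# Hypothetical BI query against a test-results table; the schema here
# is invented for illustration, not Silk Central's published schema.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE test_runs (
    test_id INTEGER, name TEXT, status TEXT, run_date TEXT)""")
conn.executemany(
    "INSERT INTO test_runs VALUES (?, ?, ?, ?)",
    [(1, "login",  "PASSED", "2024-01-05"),
     (2, "export", "FAILED", "2024-01-05"),
     (3, "login",  "PASSED", "2024-01-06")])

# Pass rate per day -- the kind of rollup a BI dashboard would chart.
rows = conn.execute("""
    SELECT run_date,
           SUM(CASE WHEN status = 'PASSED' THEN 1 ELSE 0 END) * 1.0
               / COUNT(*) AS pass_rate
    FROM test_runs
    GROUP BY run_date
    ORDER BY run_date""").fetchall()
for run_date, pass_rate in rows:
    print(run_date, pass_rate)
```

Because the query runs directly against the database, the same approach works from any reporting or BI tool that can speak SQL, without going through the product's UI.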
On the other hand, Borland Silk Central's plugins for third-party tools are poorly implemented. In our case, the JIRA plugins had so many limitations that they were almost unusable in our test environment. (They did improve the plugins somewhat later, however.)
The tech support people are located in the UK, so it is frequently difficult to get hold of them because of the time-zone difference. Also, most of them clearly don't have enough experience and sometimes drove us nuts in emergency situations.
The last thing I feel is that Micro Focus may not allocate enough manpower to maintain Borland Silk Central. There are tons of pending feature requests. Although hot fixes ship every few months, the requests are not worked through quickly enough.
IBM Collaborate Suite - it is way too complicated and the learning curve is too steep.
HP Quality Center - it is OK but a little bit expensive.
TestLink, Squash TM, and other open-source tools: their capabilities simply can't compare with commercial tools. Although we could modify the source code to improve them, we are test engineers, not developers.
Zephyr: Our testers simply didn't like its UI - too weird.
Selenium:
1. Open-source tool
2. Needs proper framework development and integration with multiple third-party tools
3. Not as secure
4. Requires scripting knowledge from the people working on it
5. Takes time and effort

CrossBrowserTesting:
1. Licensed, so secure
2. Less time, less effort
3. Quick results
4. No scripting-language knowledge needed
5. More coverage
6. Can test on multiple devices/browsers without building any framework
Borland Silk Central provides a centralized test platform for the company's multiple test departments, so now every department knows what the others are doing. In turn, the departments can coordinate to reduce duplicated test items and increase overall test efficiency.
Borland Silk Central also enables users to publish the test procedure (steps) for each test case, so everyone can see how each test is performed. It is not like before, when test procedures resided in different places, from Excel files to Google Drive to other odd locations.
Also, because all departments are using Borland Silk Central, their testers communicate better about testing methods. In the past, the departments used different test management tools, and it was hard for testers to understand each other's methods.
Finally, because all departments share Borland Silk Central, they also share the same set of reports published to Atlassian Confluence, so they now use the same reports to evaluate test progress.
By using CrossBrowserTesting we are saving many hours of manual work each week.
Our automated Selenium tests run in a fraction of the time it takes to manually test our components, so we can spend more time building great user experiences and less time triaging bugs.