CircleCI is a software delivery engine from the San Francisco company of the same name that helps teams ship software faster, offering a platform for Continuous Integration and Continuous Delivery (CI/CD). Ultimately, the solution helps software teams map every source of change so they can accelerate innovation and growth.
$0
for up to 6,000 build minutes and up to 5 active users per month
Ansible
Score 9.2 out of 10
N/A
The Red Hat Ansible Automation Platform (Ansible was acquired by Red Hat in 2015) is a foundation for building and operating automation across an organization. The platform includes the tools needed to implement enterprise-wide automation, and can automate resource provisioning, IT environments, and the configuration of systems and devices. It can be used in a CI/CD process to provision the target environment and then deploy the application onto it.
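For illustration, a minimal playbook along those lines might look like the sketch below; the inventory group, package, and paths are hypothetical and not part of the product description.

    ---
    # Sketch: provision a target host and deploy an application onto it
    - name: Provision web servers and deploy the app
      hosts: webservers              # hypothetical inventory group
      become: true
      tasks:
        - name: Install the web server package
          ansible.builtin.package:
            name: nginx
            state: present

        - name: Copy the application build onto the host
          ansible.builtin.copy:
            src: dist/               # hypothetical local build output
            dest: /var/www/app/

        - name: Ensure the web server is running
          ansible.builtin.service:
            name: nginx
            state: started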
We looked at Puppet and Chef, but Ansible won because it's agentless. You trade some features; for example, someone could manually make a change on the server and Ansible wouldn't know. But that's not a problem for us, and we needed something that we could run immediately on …
CircleCI is perfect for a CI/CD pipeline for an app using a standard build process. It'll take more work for a complex build process, but should still be up to the task unless you need a lot of integrations with other tools. If you have a big team and can spare someone to focus full time on just the CI/CD tools, maybe something like Jenkins is better, but if you're just looking to get your app built, tested, and delivered without a huge amount of effort, CircleCI is probably your preferred tool.
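As a rough illustration of the kind of standard pipeline the reviewer describes, a minimal .circleci/config.yml could look like the following sketch; the Docker image and commands are placeholders, not the reviewer's actual setup.

    version: 2.1
    jobs:
      build-and-test:
        docker:
          - image: cimg/node:20.11   # placeholder image
        steps:
          - checkout                 # pull the code from the repository
          - run: npm ci              # install dependencies
          - run: npm test            # run the test suite
    workflows:
      build:
        jobs:
          - build-and-test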
It is very well suited to configuration en masse, especially if the specific function that you need to maintain has its own official collection with configurable extra_vars. It can be less useful for implementations across different technologies (e.g., some servers on CentOS, or vendor-locked images) that fall outside out-of-the-box support and require custom logic and playbook design. Also, as noted earlier, Ansible is great at pulling information from servers, but not as good at displaying that information en masse.
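For context, extra_vars are simply variables supplied at run time (with -e on the command line, or in an AAP job template) that override a playbook's defaults. A minimal sketch with a hypothetical variable name:

    ---
    # Sketch: a play parameterized by an extra_var (variable name is hypothetical)
    - name: Configure hosts en masse
      hosts: all
      vars:
        app_version: "1.0"           # default; typically overridden with -e app_version=2.3
      tasks:
        - name: Report the version being rolled out
          ansible.builtin.debug:
            msg: "Deploying version {{ app_version }}"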
Debugging is easy, as it tells you exactly where within your job the failure occurred, even when jumping around several playbooks.
Ansible seems to integrate with everything, and the community is big enough that if you are unsure how to approach converting a process into a playbook, you can usually find something similar to what you are trying to do.
Security in AAP seems to be pretty straightforward. It's easy to organize and identify who has which permissions, or to restrict users to seeing only the content that belongs to their organization.
The "phases" their config file uses to separate out options seem very arbitrary and are not very helpful for organizing your config file
No way that I know of to configure which version of MongoDB you use. You have to write your own shell script to download and start MongoDB if you want a specific version.
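For illustration only, here is a sketch of the kind of workaround the reviewer describes, written as a step in a CircleCI job; the MongoDB version, download URL, and paths are hypothetical placeholders.

    # Sketch: a CircleCI step that downloads and starts a specific MongoDB build
    - run:
        name: Install and start a pinned MongoDB version (hypothetical)
        command: |
          curl -sSL https://fastdl.mongodb.org/linux/mongodb-linux-x86_64-ubuntu2004-4.4.29.tgz -o mongodb.tgz
          tar -xzf mongodb.tgz
          mkdir -p /tmp/mongo-data
          ./mongodb-linux-x86_64-ubuntu2004-4.4.29/bin/mongod --dbpath /tmp/mongo-data --fork --logpath /tmp/mongod.log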
Playbook execution result output can sometimes be very messy and hard to understand. Make JSON output pretty and understandable. Allow disclosure triangles to hide/show content and let the playbook dictate that.
Allow for a pop-up review of a playbook's credentials, inventory, or other sub-components instead of forcing a new window or tab within the browser. Allow for quick review or audit.
Allow for stepping through a playbook, step by step, just like a development IDE or programming environment, inspecting variables and output from plays.
Even if it's a great tool, we are looking to renew our licence for our production servers only. The product is very expensive to use, so we might look for a cheaper solution for our non-production servers. One of the solutions we are looking at is AWX, which is free and similar to AAP. This would be perfect for our non-production servers.
We've found many use cases in our environment where this powerful tool was invaluable. It does take quite a bit to set it up initially, though, and it's hard to get other teams onboarded in a way that gets it running. The .yml itself is simple, but there are a lot of components (Credentials, templates) that can be overwhelming, and Jinja syntax can confuse folks.
It's pretty snappy, even when using workflows with multiple steps and different Docker images. I've seen builds take a long time if they're really involved, but from what I can tell, it's still at least on par with, if not faster than, other build tools.
Great in almost every way compared to any other configuration management software. The only thing I wish for is Python 3 support. Other than that, YAML is much improved compared to the Ruby of Chef. The agentless nature is incredibly convenient for managing systems quickly, and if a member of your team has no terminal experience whatsoever, they can still use the UI.
Unless you have a reasonably large account, you're going to be mainly stuck reading their documentation, which has improved somewhat over the years but is still extremely limited compared to a platform like Digital Ocean, which invested in its documentation and a community to ensure it's kept up to date. If you can't find your answer there, you can be stuck.
There is a lot of good documentation that Ansible and Red Hat provide which should help get someone started with making Ansible useful. But once you get to more complicated scenarios, you will benefit from learning from others. I have not used Red Hat support for work with Ansible, but many of the online resources are helpful.
Circle was the first CI with simple setup, great documentation, and tight integration with GitHub. Using Jenkins was too much maintenance and overhead; TeamCity was limited in how we could customize it and run concurrent builds; and TravisCI was not available for private repos when we switched.
We were Puppet users. Red Hat Ansible Automation Platform made more sense to us because of the focus on Ansible content to support our AIX and RHEL systems. We have also found the learning curve for Red Hat Ansible Automation Platform gentler than what we experienced in our Puppet deployment.
It has eased the burden of standardizing our testing and deployment, made onboarding new developers much faster, and meant we have to fix deployment mistakes much less often.
It allows us to focus our process around the GitHub workflow, ignoring the details of whatever environment the thing we're working on is actually hosted in. This saves us time.
I'd say positive. It helps us meet our compliance requirements consistently and lets us turn things around faster when we find things that are out of compliance, because once we develop a playbook, it's there; we pick it back up off the shelf and run the same thing again. We don't have to go through an exercise every time or bring somebody else up to speed. The playbook is already out there, and we just go run it.