Any large business or organisation that wants to manage its workload effectively, with the least room for error, might choose the ActiveBatch Automation tool. As a consultant, I feel that it aids in task automation and has the flexibility to change in response to varying company requirements. It saves a huge amount of time by handling all the repetitive daily tasks. During patching activities the schedulers can be stopped. It also helps by alerting us if any system or job is down, so that the SLA can still be met. Overall, ActiveBatch Automation stands as a dependable cornerstone for ensuring the seamless operation of our tasks.
Previously, our team used Jenkins. However, since it was a shared deployment resource, we didn't have admin access. We tried GoCD because it's open source, and we really like it. We set up our deployment pipeline to run whenever code is merged to master, run the unit tests, and roll back if they don't pass. Once a build is deployed to the staging environment, it takes a single click to deploy the appropriate version to production. We use this to deploy to an on-prem server as well as to AWS. Some deployment pipelines use custom PowerShell scripts for .NET applications; others use Bash scripts to run the docker push and a CloudFormation template to build the Elastic Beanstalk environment.
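For context, here is a minimal sketch of what one of those Bash deploy steps looks like; the image name, stack name, and template file are placeholders rather than our real values, and this is a simplified illustration, not the exact script we run.

```bash
#!/usr/bin/env bash
# Minimal sketch of a docker push + CloudFormation deploy step (placeholder names).
set -euo pipefail

# GoCD exposes the pipeline label to tasks as GO_PIPELINE_LABEL; fall back to "latest"
IMAGE="registry.example.com/our-app:${GO_PIPELINE_LABEL:-latest}"

# Build and push the container image that the Elastic Beanstalk environment will run
docker build -t "$IMAGE" .
docker push "$IMAGE"

# Create or update the Elastic Beanstalk stack from the CloudFormation template
aws cloudformation deploy \
  --stack-name our-app-staging \
  --template-file eb-stack.yml \
  --parameter-overrides ImageUri="$IMAGE" \
  --capabilities CAPABILITY_IAM
```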
Businesses can use ActiveBatch to plan tasks based on parameters like frequency, dependencies, and the time of day. By automating typical actions like backups and data transfers, businesses can make sure that crucial operations go off without a hitch.
ActiveBatch can also automate complicated workflows that span multiple systems and apps. For instance, it can automate an order-processing workflow from beginning to end: from the customer order through inventory control and delivery to invoice and payment processing.
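As a rough illustration only (this is not ActiveBatch's own job syntax, and the script names are placeholders), the dependency chain for such a workflow boils down to something like this:

```bash
#!/usr/bin/env bash
# Simplified sketch of the end-to-end order workflow: each step runs only if the
# previous one succeeded, which is the dependency behaviour the scheduler enforces.
set -euo pipefail

/opt/jobs/import_customer_orders.sh     # pull new customer orders
/opt/jobs/update_inventory.sh           # inventory control: reserve stock
/opt/jobs/schedule_delivery.sh          # hand off to the delivery/shipping system
/opt/jobs/process_invoices_payments.sh  # invoicing and payment processing
```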
ActiveBatch can also transfer files securely between many platforms and systems. This covers SFTP and FTP transfers as well as transfers to cloud-based storage such as Amazon S3 and Microsoft Azure.
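Purely to show the kinds of transfers meant here (ActiveBatch drives these through its own managed file transfer steps; the bucket, account, host, and path names below are made up), the plain CLI equivalents look like:

```bash
#!/usr/bin/env bash
# CLI equivalents of the transfer types mentioned above (placeholder names throughout).
set -euo pipefail

# Amazon S3: copy a nightly extract to a bucket
aws s3 cp /data/exports/orders.csv s3://example-bucket/exports/orders.csv

# Microsoft Azure: upload the same file to Blob Storage
az storage blob upload --account-name exampleacct --container-name exports \
  --name orders.csv --file /data/exports/orders.csv

# SFTP: push the file to a partner server
sftp partner@sftp.example.com <<'EOF'
put /data/exports/orders.csv /inbound/orders.csv
EOF
```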
Pipeline-as-Code works really well. All our pipelines are defined in yml files, which are checked into SCM.
The ability to link multiple pipelines together is really cool. Later pipelines can declare a dependency to pick up the build artifacts of earlier ones.
Agent definition is really great. We can define multiple different kinds of environments to best suit our diverse build systems.
We can easily add new plans/jobs to our batch schedules. Also, coordination with reporting and QA jobs is simple. Building schedules, restarting jobs, and triggering dependencies are easy to understand. The system is very stable and lets us easily see overall processing times.
The choice of a workload automation solution should be based on the specific needs of an organization, as well as the features, capabilities, and costs of the various solutions. A thorough evaluation process and consideration of these factors can help ensure the selection of a solution that aligns with overall business objectives and meets the specific needs of the organization.
GoCD is easier to set up, but harder to customize at runtime. There's no way to trigger a pipeline with custom parameters.
Jenkins is more flexible at runtime. You can define multiple user-provided parameters, so when a user needs to trigger a build, there's a form for them to input the parameters.
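The same parameterized trigger also works outside the form, through Jenkins' buildWithParameters endpoint; the job name, parameter names, and credentials below are placeholders.

```bash
#!/usr/bin/env bash
# Trigger a parameterized Jenkins build from the command line (placeholder values).
set -euo pipefail

curl -X POST \
  --user "alice:API_TOKEN" \
  --data-urlencode "ENVIRONMENT=staging" \
  --data-urlencode "VERSION=1.4.2" \
  "https://jenkins.example.com/job/deploy-app/buildWithParameters"
```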
I have not run the numbers to determine the hard impact, but a quick estimate is that at least one job runs for an average of about 6 hours per day. Those 6 hours, if done by hand, would equate to about 30-40 hours per day (and in some cases could not be duplicated manually, as the job repeats faster than a person could accomplish one cycle).
Settings.xml needs to be backed up periodically. It contains all the settings for your pipelines! We accidentally deleted it once and had to restore it and re-create several missing pipelines.
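What we do now is a simple scheduled copy, roughly like the sketch below; the install path and backup destination are placeholders for wherever your Settings.xml actually lives.

```bash
#!/usr/bin/env bash
# Periodic backup of the pipeline settings file (paths are placeholders).
set -euo pipefail

SRC="/opt/ci-server/config/Settings.xml"
DEST="/backups/ci-server/Settings-$(date +%F).xml"

mkdir -p "$(dirname "$DEST")"
cp -p "$SRC" "$DEST"
```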
A more straightforward API that allows filtering would help, e.g., being able to pull all pipelines triggered after a given date.