Blockers to continuous delivery live downstream

Whilst we pride ourselves on utilising the latest technology and best practices to deliver maximum efficiency, blockers can sometimes live in the legacy systems used downstream in a service’s architecture. Here we hear how one enthusiastic team member enabled culture change at one of our customers to improve outcomes for all partners.

Building automated continuous delivery pipelines is second nature for Platform Engineers at BJSS. The benefits are deeply understood throughout the organisation, and every project strives to achieve them. With modern cloud infrastructure, orchestration tools, containerisation, version control and more being the norm on so many projects, the very best experience is expected by the teams on any engagement. However, many of our engagements depend on downstream systems that might be on-prem, legacy or simply not built to cater for modern development practices. It was into one of these more complicated scenes that our Platform Engineer, John Ellwood, found himself, working with a public sector client.

Part 1: A Pipeline of Complexity

John joined the team a few sprints in, so he knew he would have a job on his hands getting up to speed even without bumps in the road. On the surface it looked to be a relatively simple architecture: a Node.js application that served up GDS-compliant user journeys, with a single service dependency. While it was hosted on-prem, the client had selected and deployed OpenShift to manage their application landscape. The plan, John explains, was to bring all the normal features of a CI/CD pipeline to the project.

“On the surface it looked to be a relatively simple architecture.”

The downstream service the team depended on was owned by another team, and turned out to be a Salesforce instance. It was the first time John (and the rest of the team) had encountered Salesforce. The architecture had their Node.js application speaking to Salesforce over its API, with Salesforce acting as the relational datastore for the project.
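
To make that shape concrete, here is a minimal sketch of what such a read path could look like from Node, using the open-source jsforce library. The connection details and the Application__c object are invented for illustration; they are not from the project itself.

```typescript
// Hypothetical sketch only: a Node service reading from Salesforce
// as if it were the application's relational datastore, via jsforce.
import { Connection } from "jsforce";

async function fetchOpenApplications() {
  const conn = new Connection({
    loginUrl: process.env.SF_LOGIN_URL ?? "https://login.salesforce.com",
  });
  await conn.login(process.env.SF_USERNAME!, process.env.SF_PASSWORD!);

  // SOQL reads much like SQL against a relational store.
  // Application__c and Status__c are invented names for illustration.
  const result = await conn.query(
    "SELECT Id, Name, Status__c FROM Application__c WHERE Status__c = 'Open'"
  );
  console.log(`Fetched ${result.totalSize} open applications`);
  return result.records;
}
```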

John wanted to build fully dynamic environments, so that whenever a developer pushed a new branch all the components they needed would be spun up in a separate, fresh environment. These don’t need to be carbon copies of production, and John explains that cost can be kept down by, for example, partitioning if you’re using an expensive database or a large-scale piece of infrastructure such as CloudFront. But they should be automatic, and should allow developers to build, test and deploy in isolation and in parallel.
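
As a rough sketch of that idea (not John’s actual scripts), a pipeline step could derive an environment name from the branch and stamp out an isolated OpenShift project from a shared, parameterised template. The template path and parameter names below are assumptions.

```typescript
// Hypothetical sketch: spin up an isolated OpenShift environment per Git branch.
// Assumes a parameterised template at openshift/template.yaml; names are illustrative.
import { execSync } from "node:child_process";

function createBranchEnvironment(branch: string): void {
  // Derive a DNS-safe name, e.g. "feature/login-page" -> "feature-login-page".
  const envName = branch.toLowerCase().replace(/[^a-z0-9]+/g, "-").slice(0, 40);

  // Create a dedicated project (namespace) for the branch, or switch to it
  // if a previous pipeline run already created it.
  try {
    execSync(`oc new-project ${envName}`, { stdio: "inherit" });
  } catch {
    execSync(`oc project ${envName}`, { stdio: "inherit" });
  }

  // Render the shared template with branch-specific parameters and apply it.
  execSync(
    `oc process -f openshift/template.yaml -p ENV_NAME=${envName} -p GIT_BRANCH=${branch} | oc apply -f -`,
    { stdio: "inherit" }
  );
}

createBranchEnvironment(process.env.BRANCH_NAME ?? "main");
```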

In order to understand Salesforce and the development process, John spent time with the team working on it. He discovered that there were two separate approaches happening simultaneously: some members of the team worked within the Salesforce UI, configuring it via forms and wizards, while others delivered custom code in Apex, Salesforce’s Java-like language for the platform.

The changes were being pushed immediately into one shared Salesforce instance for the whole team. Digging deeper, John discovered that Salesforce has a concept of Sandboxes. On the surface these appeared to have the properties of separate environments, so there looked to be a good chance that the team could be moved to a more automated and controlled way of working.

Part 2: Remember that change isn't always embraced first time

However, at this point John became concerned, as he discovered that the team weren’t using version control. They worked inside a single Sandbox, each making their own changes, and when they felt ready they would push those changes up to production and carry on editing.

Even if John figured out how to leverage the Sandbox API (which he still wasn’t sure he could do), the team would need to change their working practices for it to work. Builds would need to be repeatable, and changes through environments managed by the pipeline. John explains that there was some resistance from the team: automation and version control were new concepts to them, and there was a feeling that it would introduce more work and be overkill for their needs.

In the meantime, John had set up the CI/CD pipelines as best he could for the BJSS team building the Node.js application. They were busy delivering features and leveraging all the benefits of Git, automated testing and automated deployments. However, they were dependent on the Salesforce instance acting as the data store. John told me that the team was starting to get frustrated: each time they pointed at the latest set of Salesforce changes, they would see failing tests for features that had already been delivered. Time was being wasted, and quality was suffering.

Part 3: Change from the ground up

With the teams beginning to tread on each other's toes, it was clear that an automated solution was much needed. John undertook further investigation and discovered that Sandboxes were both expensive and slow to create (taking from 24 hours to multiple days), so any solution was going to have to be sympathetic to those constraints.

At this point the team working on Salesforce recognised the issue and brought in their UK expert in Salesforce automation, Callum. John explains that Callum was a real enabler, bringing the niche knowledge required along with the positivity their team needed to believe that it could work.

John was able to provide the scripts for Bamboo (the CI platform the programme was using), while Callum configured the Salesforce environments and surfaced the specific API and workflow changes needed for the Salesforce team.

Both Callum and John set about helping the team move to a new workflow, with a GitFlow model they designed to accommodate the limitations inherent in the Sandbox API. The team now worked on a feature branch within Git that was tied to a specific Sandbox within the Salesforce instance. When they completed the development and testing of that feature, the CI pipeline called the APIs within Salesforce to create a “metadata” package that could be deployed into the next Sandbox, which the downstream Node.js team were leveraging.
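
A sketch of what that deployment step could look like, shown here with jsforce against the Salesforce Metadata API rather than the team’s actual Bamboo scripts; the credentials, path and timeout are placeholders.

```typescript
// Hypothetical sketch: push a CI-built metadata package zip into the next Sandbox.
// Sandboxes authenticate against test.salesforce.com; all values are placeholders.
import { createReadStream } from "node:fs";
import { Connection } from "jsforce";

async function deployToNextSandbox(packageZipPath: string): Promise<void> {
  const conn = new Connection({ loginUrl: "https://test.salesforce.com" });
  await conn.login(process.env.SF_SANDBOX_USER!, process.env.SF_SANDBOX_PASSWORD!);

  // Allow up to ten minutes for the deployment before polling times out.
  conn.metadata.pollTimeout = 10 * 60 * 1000;
  conn.metadata.pollInterval = 10 * 1000;

  // Deploy the zip produced by the pipeline and wait for Salesforce to finish.
  const result = await conn.metadata
    .deploy(createReadStream(packageZipPath), { singlePackage: true })
    .complete();

  if (!result.success) {
    throw new Error(`Deployment ${result.id} failed with status ${result.status}`);
  }
  console.log(`Deployed ${packageZipPath} (deployment id: ${result.id})`);
}
```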

“Bringing people on the journey to a better way of working is often as much of a challenge as the technology involved.”

As the new process began to take shape, the new workflow and pipelines started to enforce repeatability and control. Learning Git and continuous delivery while being challenged to deliver features was not easy for the team, and John admits there were times he and Callum took heat for problems that surfaced when the pressure was on.

Bringing people on the journey to a better way of working is often as much of a challenge as the technology involved. John spent a good chunk of time educating the team on how to leverage Git, and continued to evangelise the benefits of continuous delivery. By this point, however, the benefits were starting to speak for themselves.

Part 4: Done is better than perfect

“The end state isn’t perfect, but it’s so much better.”

John points out that even though they had come a huge way at this point, they still had one thorny issue to tackle: the data. Getting test data into an environment always surfaces challenges, whether it’s providing sanitised, production-like data or generating the volumes necessary to exercise the system.

The client already had a data team with experience leveraging Talend. While there are solutions that are more native to Salesforce, Talend was selected to ensure the client could easily pick up the development and support once the project was complete.

Trying to get all those moving parts working harmoniously was a complex task. Proprietary APIs, designing and teaching custom Git workflows, spinning up environments while balancing the Sandbox budget, evangelising continuous delivery. And all that before you consider Salesforce updating the Sandbox API mid-development, or having to build features!

Conclusion

The end state John describes is not perfect, but it’s so much better. Things are repeatable and reliable, and the teams are making releases confidently and continually. Good engineering practices can, and should, be brought to any solution, whether it’s a COTS integration or bespoke software.