
Deployment Frequency Tracking With Harness-Jellyfish Integration


Our engineering team at Iterable strives to enable marketers to ship high-quality customer experiences at scale. Releasing updates and new features without failures (such as service impairment or outages) is a critical part of accomplishing this goal, and the Change Failure Rate of the DORA metrics helps us measure this.

DevOps Research and Assessment (DORA) metrics refer to the four key metrics that DevOps teams use to measure the performance of an organization's delivery practice. They are deployment frequency (DF), lead time for changes (LT), mean time to recovery (MTTR), and change failure rate (CFR).

Why We Integrated Jellyfish

Change Failure Rate is the percentage of deployments that result in degraded services in production. Calculating this metric is simple: take the number of failures and divide by the total number of deployments. What is not so simple, though, is getting the total number of deployments, which is the Deployment Frequency of DORA.

This data is usually available in your organization's pipeline tool, but oftentimes the tool doesn't include an out-of-the-box way to retrieve it automatically. Harness, the CI/CD pipeline tool used at Iterable, was no exception. We had this data in Harness, but the only way to pull it was to create a custom dashboard and perform a manual check every month. This was not scalable, nor did we have a centralized place to record the numbers and analyze trends.

That's why we decided to integrate Harness with Jellyfish. Jellyfish provides a way to collect and organize Deployment Frequency data via a simple POST API endpoint that can be added to each pipeline. The challenge was that there was no article available showing how to implement this step by step. So, with the help of our site reliability team and their expertise, we decided to put one together ourselves. In this article we'll walk you through the steps we took to integrate Harness with Jellyfish to automatically collect deployment frequency.

How We Integrated Jellyfish

This integration was completed in three main steps. First, we built a shell script; then, we configured Harness; and, lastly, we tested and launched.

Building a Shell Script

The core part of the integration was fairly simple: writing a shell script that calls Jellyfish's deployment POST endpoint. There were a few pieces of information to gather before writing the actual script.

Understanding Jellyfish Requirements

To send data from Harness to Jellyfish, we first needed to store the Jellyfish API key in Harness. This was done via Secrets Management in the Harness UI. (Note: only users with admin permissions are able to add secrets.) After adding the secret, you can reference the key using the following expression: ${secrets.getValue("secret_name")}

After storing the API key, we took a look at the fields required by the endpoint. Jellyfish lists the minimal API specs for the Deployment POST endpoint here, specifically:

  • reference_id (string)
  • deployed_at in ISO 8601 datetime string (YYYY-MM-DDThh:mm:ss, in UTC) 
  • repo_name (string)
  • commit_sha or prs (array)

Optional (but useful) fields we decided to use:

  • name
  • labels
  • is_successful
  • source_url
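Put together, a minimal request carrying only the required fields might look roughly like the sketch below. The host, endpoint path, and token header name are placeholders, not Jellyfish's actual values; the API docs linked above have the real ones.

```shell
#!/bin/sh
# Minimal deployment POST with only the required fields. The host, endpoint
# path, and X-jf-api-token header below are placeholders for illustration.
payload='{
  "reference_id": "abc1234-1668000000000",
  "deployed_at": "2022-11-15T10:30:00",
  "repo_name": "example-repo",
  "commit_sha": "abc1234"
}'

curl --request POST "https://jellyfish.example.com/endpoints/deployment" \
  --header "Content-Type: application/json" \
  --header "X-jf-api-token: ${JELLYFISH_API_KEY:-dummy}" \
  --max-time 5 \
  --data "$payload" \
  || echo "request not sent (placeholder host)"
```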

Understanding Built-In Harness Variables

To pass these minimal requirements to Jellyfish, we used the built-in Harness variables as much as possible. The following list is what we ended up using. (You can find the full list of built-in variables here.)

  • workflow.variables.githash (used for commit_sha)
  • workflow.startTs (used for generating a unique reference_id for each deploy, combined with the commit sha)
  • service.name (passed as labels)
  • deploymentUrl (passed as source_url; it provides a direct link to the deploy in the Harness Deployments page)

The Shell Script

Once we had safely secured the Jellyfish API key, nailed down the requirements, and taken Harness variables into account, we had to write the script. Here's what we landed on, after going through many different versions along the way:

Final shell script.

To write this script, we took the sample curl request from Jellyfish and built on it. Since we're passing more metadata than the sample, we separated the request payload into its own generate_post_data() definition for easier reading. In addition, we found it easier to declare variables for certain fields like apiKey and sha before passing them into the payload. Doing this also proved to be a good way to simplify the syntax; otherwise the script would be harder to maintain.
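The script itself appears above only as an image, so here is a sketch of its shape based on the description. The endpoint URL, token header name, and repo name are assumptions, and the Harness expressions (noted in the comments) are stubbed with plain shell defaults so the sketch runs on its own, with a dry-run guard printing the payload instead of sending it.

```shell
#!/bin/sh
# Sketch of the deployment-reporting step. In a real Harness step, each value
# below would come from the Harness expression shown in its comment.
apiKey="${JELLYFISH_API_KEY:-dummy-key}"        # ${secrets.getValue("jellyfish_api_key")}
sha="${GIT_SHA:-abc1234}"                       # ${workflow.variables.githash}
startTs="${START_TS:-1668000000000}"            # ${workflow.startTs}
serviceName="${SERVICE_NAME:-example-service}"  # ${service.name}
sourceUrl="${DEPLOY_URL:-https://app.harness.io/example-deploy}"  # ${deploymentUrl}
deployedAt="$(date -u +%Y-%m-%dT%H:%M:%S)"

generate_post_data() {
cat <<EOF
{
  "reference_id": "${sha}-${startTs}",
  "deployed_at": "${deployedAt}",
  "repo_name": "example-repo",
  "commit_sha": "${sha}",
  "name": "${serviceName} ${deployedAt}",
  "labels": ["${serviceName}"],
  "is_successful": true,
  "source_url": "${sourceUrl}"
}
EOF
}

if [ "${DRY_RUN:-true}" = "true" ]; then
  # Outside Harness, just show the payload that would be sent.
  generate_post_data
else
  # The backfill header asks Jellyfish to associate every commit since the
  # previous deployment with this one, not just the HEAD commit.
  curl --request POST "https://jellyfish.example.com/endpoints/deployment" \
    --header "Content-Type: application/json" \
    --header "X-jf-api-token: ${apiKey}" \
    --header "X-jf-api-backfill-commits: true" \
    --data "$(generate_post_data)"
fi
```

In a real step the dry-run guard would be dropped and the Harness expressions used directly.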

Additional Metadata Explained

Although the name field is not required, this field is visible in Jellyfish's Deployment table (screenshot in the Results section). We wanted this field to be unique enough to identify each deploy and also human readable, hence we combined service.name and the deployed_at timestamp.

is_successful and X-jf-api-backfill-commits

The is_successful tag was added to indicate whether a deploy was successful or not (more on this in the Configuring Harness section below). The X-jf-api-backfill-commits tag was added to enable Jellyfish's commit "backfilling" feature, which calculates lead time for changes more accurately. This essentially finds and sends in all the commits associated with each deployment, along with the HEAD commit being deployed. The is_successful and X-jf-api-backfill-commits tags are recent additions, and are optional.

Configuring Harness

Once we had a final version of the shell script, we applied it to workflows in Harness. In doing so, we made sure to experiment with our changes on a test workflow, a simplified version of the most frequently used workflow.

Adding the Script in the Right Place

Finding the right workflow step for the script to run in took some trial and error. At first, we placed the script at the Post-Deployment step. This was most rational at the time, as we only wanted to track and send the deploy data once the deployment was fully complete, and the Post-Deployment step is the concluding step of the workflow.

Post-deployment at the end of a workflow

The Post-Deployment step is the concluding step of the workflow.

error message

The shell script failed with an error when placed at the Post-Deployment step.

But that didn't work quite as expected. It resulted in a "bad substitution" error when pulling in ${service.name}, as shown above. As it turns out, at this step of the workflow, the ${service.name} variable was not in scope to be referenced. This made us move the script to run earlier in the workflow, where the variable is still in scope, right before it reaches the Post-Deployment section.

Script moved earlier in the workflow

The script step was moved to run before Post-Deployment.

And voilà! That change resolved the bad substitution error.

Adding an Execution Condition

After we added the script, we wanted to ensure that only production deploys are sent to Jellyfish, and staging deploys are not. We did this by setting an execution condition on the script using the ${env.name} variable. If the deployment is for a staging environment, it skips the script execution.
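The condition itself lives in the Harness UI rather than in the script; expressed as plain shell for illustration (with ENV_NAME standing in for the ${env.name} Harness variable), the logic amounts to:

```shell
#!/bin/sh
# Illustrative equivalent of the Harness execution condition: run the
# Jellyfish step only for production deploys, skip everything else.
ENV_NAME="${ENV_NAME:-staging}"

if [ "$ENV_NAME" != "production" ]; then
  action="skip"
  echo "skipping Jellyfish report for ${ENV_NAME}"
else
  action="run"
  echo "recording deployment for ${ENV_NAME}"
fi
```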

Skip conditions modal

Detailed view of the skip conditions modal.

The Deployment to Jellyfish tile with the skip logic (yellow icon):

skip logic

Execution skip logic is indicated with a yellow icon.

Adding Failure Strategy

One final configuration was the Failure Strategy. It was essential that the script step not interfere with the ongoing deployment in any way, even if it errors out. This was achieved by specifying which types of script failures should be ignored in the workflow.

failure strategy

Detailed view of the failure strategy modal.

Repeat for Failed Deployment Scenario

Once the failure strategy was added, we were all set to send successful deployments to Jellyfish. Since we wanted to capture failed deployments as well, we followed the same process above one more time, with a few changes.

First, we cloned the script (call it a "failed" version) and replaced is_successful=true with is_successful=false. Jellyfish can filter between successful and failed deployments based on this tag.
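As a side note, a hypothetical alternative to maintaining two cloned copies is deriving the flag from an environment variable set per step; this is not what we did, just a sketch of the idea:

```shell
#!/bin/sh
# Hypothetical single-script alternative to the two cloned copies: derive the
# is_successful flag from a STATUS variable ("success" or "failed") set on
# the step, so both the deploy and rollback paths can share one script.
STATUS="${STATUS:-success}"

if [ "$STATUS" = "success" ]; then
  is_successful=true
else
  is_successful=false
fi
echo "is_successful=${is_successful}"
```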

Then, we had to find the best step for this failed version of the script to run. It wouldn't make sense to place the script at the same step as a successful deploy: a failed deploy takes a rollback route in a workflow, while a successful deploy doesn't. This made us choose to place the script at the end of the rollback step.

workflow showing failed version at the end of rollback step

The failed version of the script was placed at the end of the rollback step.

Lastly, the same set of execution conditions and failure strategy was applied. The workflow's main deploy tile looked something like this at the end:

Main deploy tile

View of the main deploy tile at the end.

Test and Launch

Testing is the third step in this article, but by no means should you postpone testing until late in the project. We recommend running a test deploy with a simplified version of a workflow at every step of the project, making sure to use a non-production environment, dummy service, and build. We were able to catch most errors and mistakes (syntax errors, data type mismatches, etc.) early on by testing frequently.

Some of the errors we were able to handle include:

  • 400 Bad request
  • 403 Unauthorized
  • Bad substitution
  • Failed to parse request data as JSON
  • Syntax error
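Several of these, like the JSON parse failure, can be caught before a deploy ever runs by sanity-checking the payload locally. A quick sketch, using python3 -m json.tool as a stand-in validator (jq would work just as well):

```shell
#!/bin/sh
# Sketch: validate the request payload locally before wiring the step into a
# workflow, to catch "Failed to parse request data as JSON" errors early.
generate_post_data() {
cat <<EOF
{
  "reference_id": "abc1234-1668000000000",
  "repo_name": "example-repo"
}
EOF
}

if generate_post_data | python3 -m json.tool >/dev/null 2>&1; then
  result="valid"
else
  result="invalid"
fi
echo "payload is ${result} JSON"
```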

Once we verified that our test deploy runs to Jellyfish consistently returned a success status, we felt confident and expanded the same script and configuration to all live workflows in Harness.

The Results

We did it! We successfully launched this integration for 11 workflows in total. Below is a sneak peek of our deployments in November 2022 in Jellyfish.

successful launch

From left to right: Deployment = name, Deployed At = deployed_at, Successful = is_successful, Teams (lists of teams that contributed to the deploy; not covered in this blog), Source = source_url.

We can also review our deployment rate breakdown by day, week, and month, and gauge where we stand according to the delivery performance metric from the DORA report:

Jellyfish deployment rate

Daily view of deployment rate in November 2022 in Jellyfish.

delivery performance metric

Delivery performance metric from the DORA report.

By integrating Harness with Jellyfish to automate Deployment Frequency reporting, we no longer have to perform manual checks to count how many deployments we had in a month. Having more accurate data on both kinds of deployments (successes versus failures) also gave us better insight into deployment trends and efficiency. This Deployment Frequency data will serve as a basis for measuring Change Failure Rate, the main objective that started this project, to improve the overall quality of product delivery in our engineering org in the long run. Jellyfish helped us streamline and visualize the process, and, at Iterable, we plan to continue utilizing it for the rest of the DORA metrics.

To learn more about Iterable's capabilities, schedule a demo today.
