How canary releases enable continuous deployment

The deployment paradigm of development, testing, production, and disaster recovery from the pre-cloud, pre-microservices days still lingers in my mind. In those days, buying, configuring, and maintaining infrastructure was expensive and complex, so you were unlikely to get the data center ops team to sign off on creating more environments or greater deployment flexibility. By today’s standards, deployment frequencies were low, so agile development teams found ways to make do with the available infrastructure.

Today, devops organizations use CI/CD (continuous integration and continuous delivery) to automate delivery, infrastructure as code to configure infrastructure, Kubernetes to orchestrate containerized environments, continuous testing to improve quality, and AIops to centralize monitoring and observability alerts. These practices make it possible to increase deployment frequency while minimizing the security, performance, and reliability risks introduced by changes.

But when you need to lower risk and control releases based on what’s changing and who gets access to the changes, you need additional techniques to configure and manage the application or deployment.

Release pipelines require flexible controls and deployment options

One way to create controlled deployments is to use feature flags to toggle, A/B test, and segment access to features by user. Devops teams can use feature flags to validate reengineered features when moving apps to the cloud or refactoring monolithic apps into microservices.
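To make the idea concrete, here is a minimal sketch of how a feature flag can combine segment targeting with a percentage rollout. The flag name, rollout percentage, and segment rules are illustrative assumptions, not from any particular feature-flag product; real systems such as LaunchDarkly or Unleash expose similar concepts through their own APIs.

```python
import hashlib

# Hypothetical flag configuration (illustrative values, not a real API).
FLAGS = {
    "new-checkout": {"enabled": True, "rollout_pct": 20, "segments": {"beta-testers"}},
}

def bucket(user_id: str, flag: str) -> int:
    """Deterministically hash a user into a 0-99 bucket so the same
    user always lands in the same rollout group for a given flag."""
    digest = hashlib.sha256(f"{flag}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % 100

def is_enabled(flag: str, user_id: str, user_segments: set) -> bool:
    cfg = FLAGS.get(flag)
    if not cfg or not cfg["enabled"]:
        return False
    # Segment targeting: members of a targeted segment always see the feature.
    if user_segments & cfg["segments"]:
        return True
    # Percentage rollout: only users hashed below the threshold see it.
    return bucket(user_id, flag) < cfg["rollout_pct"]

# A beta tester always gets the feature; everyone else falls into the
# deterministic 20 percent rollout.
print(is_enabled("new-checkout", "user-123", {"beta-testers"}))  # True
```

The deterministic hash is the important design choice: it lets a team widen the rollout percentage gradually without users flickering between the old and new experience across requests.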

Feature flags are useful for development teams and product managers who want to control who sees what capabilities in an app or microservice. But sometimes, devops or IT operations teams need flexibility in managing deployments and controlling multiple versions of an application running in production. Some examples:

  • Validating apps and microservices when patching or upgrading critical system components such as operating systems, databases, or middleware
  • Testing apps and services when deploying to multiple clouds
  • Monitoring user experiences after releasing new versions of machine learning models

There are several different deployment strategies for microservices and cloud-native applications. In the early days of developing apps for the cloud, we employed blue-green deployments with two production environments: a green live one and a blue one on standby. The deployment went to the blue environment, and then load balancers controlled the cutover, making the blue environment with the latest deployment live and the formerly live green environment the standby.
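The cutover described above is essentially a pointer flip at the load balancer. The sketch below models that flip in isolation, assuming a router that keys all traffic off a single "live" pointer; the environment and version names are hypothetical.

```python
# Minimal blue-green cutover sketch. A real setup would flip a load
# balancer target pool or a Kubernetes Service selector instead of an
# in-memory pointer.
class BlueGreenRouter:
    def __init__(self, blue_backend: str, green_backend: str):
        self.backends = {"blue": blue_backend, "green": green_backend}
        self.live = "green"  # green starts as the live environment

    @property
    def standby(self) -> str:
        return "blue" if self.live == "green" else "green"

    def deploy(self, version: str) -> None:
        # New releases always land in the standby environment first,
        # where they can be validated without affecting live traffic.
        self.backends[self.standby] = version

    def cutover(self) -> None:
        # Flip the pointer: standby becomes live, old live becomes standby.
        self.live = self.standby

    def route(self) -> str:
        return self.backends[self.live]

router = BlueGreenRouter(blue_backend="app-v1", green_backend="app-v1")
router.deploy("app-v2")  # v2 goes to the standby (blue) environment
router.cutover()         # load-balancer flip: blue is now live
print(router.route())    # app-v2
```

Because the old environment stays intact as the new standby, rolling back is just another cutover; canary releases refine this pattern by shifting only a fraction of traffic at a time instead of flipping it all at once.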

Copyright © 2021 IDG Communications, Inc.
