
Continuous Delivery within Kubernetes

Shifting Responsibilities from Ops to Dev. Tools, practices, and configuration for developers creating and integrating a build pipeline for deploying applications to Kubernetes.

As Humble and Farley stated in their book Continuous Delivery, "Getting software released to users is often a painful, risky, and time-consuming process." However, there is clear value in rapidly releasing functionality in a safe and reliable manner.

Adopting cloud native architectures means developers create more moving parts with differing runtime requirements and dependencies. Although continuous delivery was initially a sysadmin or platform concern, the knowledge and responsibility that developers now hold means that implementing cloud native continuous delivery falls firmly on their shoulders.

The goal has always been establishing a fast application build-release-verification feedback loop. Traditionally, developers committed their code to version control, and operations managed the rest of the delivery lifecycle. But, to borrow a phrase from Netflix, full stack developers are now becoming full lifecycle developers, responsible for ensuring that the code they write delivers value to users. This increase in operational responsibilities means that developers must be able to configure applications' continuous deployment and release.

This guide applies equally to a developer simply experimenting with Kubernetes and to a new engineer joining a team that deploys onto Kubernetes. Although the goal of fast feedback remains the same when working with cloud native technologies, adopting containers and Kubernetes means there are a few more tools to install and configurations to tweak.

From pre-cloud CI/CD to Kubernetes continuous deployment

Before Kubernetes

Before cloud native architecture became the dominant approach to designing, deploying, and releasing software, the continuous delivery story was much simpler. Typically, a sysadmin would create a build server and install a version control system and continuous integration tools such as Jenkins, TeamCity, or GoCD.

In addition to continually building and integrating code, these tools could be augmented via plugins to perform rudimentary continuous deployment operations, such as FTPing binaries to VMs or uploading an artifact to a remote application server via a bespoke SDK/API.

This approach worked well when dealing with a few applications and a relatively static deployment environment. The initial configuration of a delivery pipeline was typically challenging and involved much trial and error. When a successful configuration was discovered, it was used as a template and copy-pasted as more build jobs were added. Debugging a build failure often required specialist support.

After Kubernetes

The rise in popularity of containers and Kubernetes has changed the roles and responsibilities around continuous delivery. Operators may still set up the initial continuous integration and deployment tooling, but developers now want to self-service as they release and operate what they build.

This means that the scope of the infrastructure a developer needs to understand and manage has expanded from pure development tools (e.g., IDE, libraries, and APIs) to deployment infrastructure (e.g., container registries and deployment templates) to runtime infrastructure (e.g., API gateways and observability systems).

 

| | Traditional | Cloud-native |
| --- | --- | --- |
| Number of services | 1 large service | Many small (micro)services |
| Deployment artifact | Small number of language-specific packages or binaries | Large number of container images |
| Artifact manifests | Low-medium complexity (language/platform specific); a limited number of small-medium scripts | High complexity (language, OS, and framework); potentially a large number of long configuration files |
| CI infrastructure required | Bare metal or VMs | Bare metal, VMs, Docker/containers, Kubernetes |
| Deployment mechanisms | Custom (imperative) scripts run via SSH, FTP, etc., or proprietary SDKs/APIs | Well-defined declarative configuration applied via standardised APIs |
| Release mechanisms | Deployment and release implicitly coupled; verification managed by eyeballing metrics systems and dashboards | Release controlled via traffic shifting, e.g. canaries (north-south and east-west), with verification managed automatically via observability system integrations |
| Environment management | Small number of manually curated test and staging environments; artifacts and configuration relatively static | Large number of bespoke environments; artifacts and configuration highly dynamic and transient |

Creating an effective Kubernetes continuous deployment pipeline

Being able to configure an effective Kubernetes deployment pipeline does not depend on a single tool or technique. A combination of technologies is required:

Container build tools

The ability to quickly and repeatedly build containers, either in the pipeline or within a cluster, is vital when changing code and configuration. Many teams want to adopt industry-approved container build standards or don’t want the hassle of assembling their own containers, and here Cloud Native Buildpacks (a CNCF project) are popular.
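
As a concrete sketch, one option for building images within the cluster itself is kaniko; the Git repository, registry, and image names below are placeholders, a Dockerfile is assumed at the repository root, and pushing to a real registry would additionally require mounting registry credentials into the pod.

```yaml
# Illustrative in-cluster image build with kaniko. The repository,
# registry, and image names are placeholders; a real build would also
# need registry credentials mounted into the pod in order to push.
apiVersion: v1
kind: Pod
metadata:
  name: kaniko-build
spec:
  restartPolicy: Never
  containers:
    - name: kaniko
      image: gcr.io/kaniko-project/executor:latest
      args:
        - --context=git://github.com/example/app.git
        - --dockerfile=Dockerfile
        - --destination=registry.example.com/app:1.0.0
```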

YAML templating and package managers

The vast majority of microservice-based applications consist of multiple services and containers and require dynamic (deployment-time) configuration. This calls for customizable templating and a package manager with which to specify everything declaratively.
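
For example, with Helm (one popular Kubernetes package manager; the chart layout, names, and values below are hypothetical), deployment-time configuration is supplied as values and injected into templated manifests:

```yaml
# values.yaml -- hypothetical deployment-time configuration
replicaCount: 3
image:
  repository: registry.example.com/app
  tag: "1.2.3"
```

```yaml
# templates/deployment.yaml -- a templated manifest rendered by `helm install`
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ .Release.Name }}
  template:
    metadata:
      labels:
        app: {{ .Release.Name }}
    spec:
      containers:
        - name: app
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
```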

Continuous integration tooling

All code within an application (and its subsystems) must be continuously integrated at the code level. Typically, in a cloud native system, this means building a language-specific package, artifact, or binary, and then packaging it into a container image.
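
A minimal sketch of this two-step build using GitHub Actions (one CI option among many; the registry, image name, and secret names are placeholders):

```yaml
# .github/workflows/build.yaml -- illustrative CI pipeline; the registry,
# image name, and secret names are placeholders
name: build
on:
  push:
    branches: [main]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      # Fetch the source code
      - uses: actions/checkout@v4
      # Authenticate against the (placeholder) container registry
      - uses: docker/login-action@v3
        with:
          registry: registry.example.com
          username: ${{ secrets.REGISTRY_USER }}
          password: ${{ secrets.REGISTRY_PASSWORD }}
      # Build the image (the language-specific build runs inside the
      # Dockerfile) and push it, tagged with the commit SHA
      - uses: docker/build-push-action@v5
        with:
          context: .
          push: true
          tags: registry.example.com/app:${{ github.sha }}
```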

Continuous deployment tooling

In addition to the continuous integration of code, cloud native applications and their dependencies must also be continually deployed and verified at the component level. This requires the package management configuration, together with the artifacts built via continuous integration, to be deployed into an environment (a Kubernetes cluster). Features included within these deployments can be hidden from users via feature flags or traffic shadowing, or incrementally released via canaries and traffic shifting.
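
As an illustration, the sketch below shows the simplest form of canary release using only core Kubernetes primitives: a single Service selects pods from both a stable and a canary Deployment, so traffic splits roughly 9:1 by replica count. All names and image tags are placeholders; service meshes and progressive delivery controllers provide finer-grained traffic weighting and automated, metrics-driven promotion or rollback.

```yaml
# Illustrative canary using only core Kubernetes primitives: the Service
# selects pods from both Deployments, so traffic splits roughly 9:1 by
# replica count. All names and image tags are placeholders.
apiVersion: v1
kind: Service
metadata:
  name: app
spec:
  selector:
    app: myapp            # matches stable and canary pods alike
  ports:
    - port: 80
      targetPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-stable
spec:
  replicas: 9
  selector:
    matchLabels:
      app: myapp
      track: stable
  template:
    metadata:
      labels:
        app: myapp
        track: stable
    spec:
      containers:
        - name: app
          image: registry.example.com/app:1.2.3
          ports:
            - containerPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-canary
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
      track: canary
  template:
    metadata:
      labels:
        app: myapp
        track: canary
    spec:
      containers:
        - name: app
          image: registry.example.com/app:1.3.0
          ports:
            - containerPort: 8080
```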

Environment management

With applications being continually deployed into environments for release or testing, it is vital to be able to create, manage, and understand multiple environments (Kubernetes clusters) and the configuration of the services deployed onto them.
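
One common approach (among several) is to capture per-environment differences declaratively, for example with a Kustomize overlay per environment; the base path, namespace, and image tag below are hypothetical:

```yaml
# overlays/staging/kustomization.yaml -- hypothetical overlay that reuses
# a shared base and pins a staging-specific namespace and image tag
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: app-staging
resources:
  - ../../base
images:
  - name: registry.example.com/app
    newTag: "1.3.0-rc1"
```

Applying the overlay with `kubectl apply -k overlays/staging` then produces a fully configured, environment-specific set of manifests from the shared base.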

Ready to investigate and debug your own Kubernetes woes?

This learning journey walks you through the primary concepts and hands-on activities required to debug issues across your cluster and multi-service applications.

Skill level

Kubernetes beginner or experienced user

Time to complete

40 minutes • 10 lessons

What you’ll need

Nothing; we’ll walk through learning the concepts and installing the tools you’ll need as we go

What you'll learn

  • Annotating services to quickly identify key debugging information
  • Using distributed tracing to follow requests across multiple services
  • Debugging your cluster when things go wrong
  • Using Telepresence to debug services locally
