Argo Rollouts: Setup and Strategies

Argo Rollouts is a Kubernetes-native tool designed to improve how you deploy applications by enabling progressive delivery strategies like blue-green and canary deployments. These approaches allow you to release updates gradually, monitor their performance, and roll back quickly if issues arise. This reduces downtime and minimises risks during deployments.

Here’s why Argo Rollouts is worth considering:

  • Progressive Delivery: Roll out updates to a small user group first, then scale up if stable.
  • Advanced Deployment Strategies: Supports blue-green and canary deployments for better control.
  • Automation: Includes automated rollbacks based on metrics like error rates and response times.
  • Kubernetes Integration: Works seamlessly within Kubernetes, requiring no additional tools.
  • Traffic Management: Integrates with ingress controllers (e.g. NGINX) or service meshes (e.g. Istio) to manage traffic during rollouts.
  • Real-time Monitoring: Offers a dashboard and CLI tools to track deployment progress and health.

Quick Comparison: Argo Rollouts vs Kubernetes Native Deployments

Feature                   Kubernetes Native Deployments   Argo Rollouts
Traffic Shifting          Basic                           Granular
Deployment Strategies     Rolling updates only            Blue-green, canary
Automated Rollbacks       Manual                          Metric-driven
Service Mesh Support      No                              Yes
Real-time Visualisation   No                              Yes (dashboard)

Argo Rollouts simplifies deployment processes while reducing risks and costs. Whether you're managing small updates or large-scale changes, it provides tools to make deployments safer and more efficient.

Setting Up Argo Rollouts in Kubernetes

Prerequisites for Installation

To get started with Argo Rollouts, ensure you have a functioning Kubernetes cluster, with kubectl properly set up to communicate with it. Argo Rollouts requires Kubernetes v1.14 or newer. If your cluster is running an older version, you'll need to apply CRD manifests using the --validate=false option[8]. These steps are essential to prepare your environment for progressive delivery.

You'll also need administrative access to the cluster to create namespaces and apply manifests. If you plan to use advanced features like traffic shifting or canary analysis, you’ll need to deploy an ingress controller or a service mesh. These components allow Argo Rollouts to manage traffic more effectively, enabling smoother progressive delivery[4].

For those using Helm, make sure you have Helm 3.x installed. Additionally, integrating with GitOps tools like Argo CD can further streamline deployment automation.

Installing the Argo Rollouts Controller

There are three main ways to install the Argo Rollouts controller: applying the manifests directly with kubectl, using the Helm chart, or building on the Kustomize bases shipped in the project repository. The Helm chart is often recommended because of its flexibility and ease of upgrades across various setups[7]. However, if you're looking for a simpler method, especially for testing or smaller deployments, the kubectl approach is a good starting point.
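If you go the Helm route, the chart from the community argo-helm repository installs the controller in one step. A sketch using the chart defaults:

helm repo add argo https://argoproj.github.io/argo-helm
helm repo update
helm install argo-rollouts argo/argo-rollouts --namespace argo-rollouts --create-namespace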

To install with kubectl, first create a dedicated namespace to keep Argo Rollouts isolated from other cluster components:

kubectl create namespace argo-rollouts

Next, deploy the controller by applying the stable manifest in the namespace:

kubectl apply -n argo-rollouts -f https://raw.githubusercontent.com/argoproj/argo-rollouts/stable/manifests/install.yaml

This installs the latest stable version of the controller. After installation, download the appropriate binary for the Argo Rollouts kubectl plugin and move it to /usr/local/bin/[2].
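For example, on Linux amd64 (swap the asset name for your operating system and architecture):

curl -LO https://github.com/argoproj/argo-rollouts/releases/latest/download/kubectl-argo-rollouts-linux-amd64
chmod +x ./kubectl-argo-rollouts-linux-amd64
sudo mv ./kubectl-argo-rollouts-linux-amd64 /usr/local/bin/kubectl-argo-rollouts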

Verify the installation by running:

kubectl argo rollouts version

Finally, confirm the controller has the permissions it needs - the install manifest includes the required CRDs and RBAC (Role-Based Access Control) resources, so check that they were created without errors[7]. With the controller and plugin ready, you can move on to configuring your first rollout.

Configuring Your First Rollout

A Rollout resource is similar to a Kubernetes Deployment but includes a strategy section tailored for progressive delivery. Below is an example of a YAML configuration for a canary rollout strategy, which gradually shifts traffic to a new version.

Example: nginx Rollout configuration:

apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: nginx-rollout
  namespace: default
spec:
  replicas: 3
  strategy:
    canary:
      steps:
      - setWeight: 20
      - pause: {duration: 10s}
      - setWeight: 50
      - pause: {duration: 10s}
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.24
        ports:
        - containerPort: 80

Save the manifest as nginx-rollout.yaml, then apply it to your cluster:

kubectl apply -f nginx-rollout.yaml

Once deployed, you can monitor the rollout in real time using the kubectl plugin:

kubectl argo rollouts get rollout nginx-rollout --watch

This command provides live updates on deployment status and traffic shifting. For a more detailed view, you can access the Argo Rollouts dashboard:

kubectl argo rollouts dashboard

The dashboard is available at localhost:3100, offering a visual breakdown of the rollout process. It shows traffic distribution between versions, pod statuses, and deployment metrics, helping you track progress and identify any issues[2].

Blue-Green Deployment Strategy with Argo Rollouts

Understanding Blue-Green Deployments

Blue-green deployment involves maintaining two separate environments: the blue environment, which handles live production traffic, and the green environment, which hosts the new version of the application. This setup allows for seamless updates without any downtime. If something goes wrong with the new version, you can instantly switch back to the blue environment, avoiding the risks of partial outages often associated with rolling updates. This approach ensures users experience uninterrupted service during the deployment process.

Configuring Blue-Green Rollouts

To set up a blue-green deployment using Argo Rollouts, you'll need a Rollout manifest that specifies the blue-green strategy. This includes defining selectors for the active and preview services, which Argo Rollouts uses to manage traffic routing between the two environments.

Here’s an example configuration:

apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: webapp-rollout
  namespace: production
spec:
  replicas: 5
  strategy:
    blueGreen:
      activeService: webapp-active
      previewService: webapp-preview
      autoPromotionEnabled: false
      scaleDownDelaySeconds: 30
      prePromotionAnalysis:
        templates:
        - templateName: success-rate
          args:
          - name: service-name
            value: webapp-preview
  selector:
    matchLabels:
      app: webapp
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
      - name: webapp
        image: webapp:v2.1
        ports:
        - containerPort: 8080
        readinessProbe:
          httpGet:
            path: /health
            port: 8080
          initialDelaySeconds: 10
          periodSeconds: 5

In this setup:

  • activeService represents the current production version.
  • previewService exposes the new version for testing.
  • autoPromotionEnabled: false ensures that traffic is not switched automatically, requiring manual approval to promote the new version.
  • prePromotionAnalysis runs the success-rate analysis against the preview service before promotion (see the sketch below).
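The success-rate template referenced above must exist as an AnalysisTemplate resource in the same namespace. A minimal sketch, assuming Prometheus is reachable in-cluster and exposes a standard http_requests_total metric (the address, metric names, and thresholds are assumptions to adapt):

apiVersion: argoproj.io/v1alpha1
kind: AnalysisTemplate
metadata:
  name: success-rate
  namespace: production
spec:
  args:
  - name: service-name
  metrics:
  - name: success-rate
    interval: 30s                         # query Prometheus every 30 seconds
    count: 5                              # take five measurements before finishing
    successCondition: result[0] >= 0.95   # require a 95% success rate
    failureLimit: 1                       # tolerate at most one failed measurement
    provider:
      prometheus:
        address: http://prometheus.monitoring:9090   # assumed in-cluster address
        query: |
          sum(rate(http_requests_total{service="{{args.service-name}}",status!~"5.."}[2m]))
          /
          sum(rate(http_requests_total{service="{{args.service-name}}"}[2m]))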

Once you've defined your configuration, apply it using:

kubectl apply -f webapp-rollout.yaml

Argo Rollouts uses Kubernetes Services (and optionally ingress controllers) to manage traffic routing. It dynamically updates service selectors to redirect traffic between the blue and green environments. After configuration, keep an eye on traffic flow and system health to ensure everything is working as expected.
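The activeService and previewService referenced in the manifest are ordinary Kubernetes Services; the controller rewrites their selectors at runtime to pin each one to a specific ReplicaSet. A minimal sketch (exposing port 80 in front of the container's port 8080 is an assumption):

apiVersion: v1
kind: Service
metadata:
  name: webapp-active
  namespace: production
spec:
  selector:
    app: webapp        # Argo Rollouts adds a pod-template-hash selector at runtime
  ports:
  - port: 80
    targetPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: webapp-preview
  namespace: production
spec:
  selector:
    app: webapp
  ports:
  - port: 80
    targetPort: 8080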

Monitoring and Rollback Processes

After setting up your blue-green deployment, monitoring becomes crucial. Use the kubectl plugin to track the rollout status in real-time:

kubectl argo rollouts get rollout webapp-rollout --watch

For a more visual approach, access the Argo Rollouts dashboard:

kubectl argo rollouts dashboard

The dashboard provides insights into traffic distribution and health metrics. You can also integrate tools like Prometheus to automate promotions or rollbacks based on live metrics.

Health checks play a key role in this strategy. By defining readiness and liveness probes in your Rollout manifest, you can ensure the new version meets performance and stability standards before directing traffic to it. If any issues arise, rolling back is straightforward. Argo Rollouts supports both automated rollback triggered by health checks and manual rollback via the dashboard or kubectl:

kubectl argo rollouts abort webapp-rollout
kubectl argo rollouts undo webapp-rollout

These commands allow you to revert to the previous version quickly, minimising user impact. Compared to Kubernetes' native rolling updates, Argo Rollouts offers finer control over traffic management, promotion, and rollback - helping you align deployments with your operational needs and risk tolerance.

Canary Deployment Strategy with Argo Rollouts

Understanding Canary Deployments

Canary deployments provide a gradual way to introduce new application versions, allowing you to test updates on a small group of users before rolling them out to everyone [2][4]. This method stands out from Kubernetes' built-in rolling updates by offering finer control over traffic distribution and rollout speed.

With a canary deployment, you can direct a specific percentage of traffic to the new version while keeping the bulk of users on the stable version. For example, using Argo Rollouts, you might start by routing 20% of traffic to the updated version while monitoring its performance. If everything looks good, traffic can be increased incrementally until the new version is fully adopted.

This phased approach minimises risk, as it allows early detection of issues. If a problem arises, you can quickly roll back to the previous version without disrupting most users. A major advantage of this strategy is its ability to manage traffic intelligently. Argo Rollouts integrates with ingress controllers and service meshes to enable precise traffic shaping - something traditional Kubernetes deployments lack.

Configuring Canary Rollouts

To implement a canary deployment with Argo Rollouts, you replace standard Kubernetes Deployment objects with Rollout custom resources. These resources define the details of your canary strategy, such as traffic percentages and timing.

Here’s an example configuration:

apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: webapp-canary
  namespace: production
spec:
  replicas: 10
  strategy:
    canary:
      steps:
      - setWeight: 20
      - pause: {duration: 10s}
      - setWeight: 50
      - pause: {duration: 30s}
      - setWeight: 100
      canaryService: webapp-canary
      stableService: webapp-stable
      analysis:
        templates:
        - templateName: error-rate-check
          args:
          - name: service-name
            value: webapp-canary
  selector:
    matchLabels:
      app: webapp
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
      - name: webapp
        image: webapp:v3.0
        ports:
        - containerPort: 8080
        readinessProbe:
          httpGet:
            path: /health
            port: 8080
          initialDelaySeconds: 5
          periodSeconds: 3

In this setup, traffic is shifted in stages. Initially, 20% of traffic is directed to the new version, followed by a 10-second pause to observe performance. Next, 50% of traffic is routed for 30 seconds before the rollout reaches 100%. The canaryService and stableService parameters ensure traffic is correctly routed between versions.

To deploy this configuration, use:

kubectl apply -f webapp-canary.yaml

When updating to a newer version, simply update the image in the manifest (e.g., change webapp:v3.0 to webapp:v3.1) and reapply the configuration. Argo Rollouts will handle the canary process according to the defined steps. Precise traffic shifting requires an integration with an ingress controller or service mesh; without an explicit trafficRouting configuration, Argo Rollouts approximates the weights by adjusting the ratio of canary to stable replicas.
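For example, with the NGINX ingress controller, adding a trafficRouting block to the canary strategy lets the controller manage weights through ingress annotations. A sketch of the relevant fragment of the Rollout spec, assuming an existing Ingress named webapp-ingress that routes to the stable service:

  strategy:
    canary:
      canaryService: webapp-canary
      stableService: webapp-stable
      trafficRouting:
        nginx:
          stableIngress: webapp-ingress   # existing Ingress for the stable service (assumed name)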

Monitoring and Automated Rollbacks

Monitoring is critical during canary deployments to catch issues early. Argo Rollouts excels here, offering real-time monitoring and automated rollback features to keep deployments safe and efficient.

To monitor progress in real time, use this command:

kubectl argo rollouts get rollout webapp-canary --watch

This provides a terminal view of the rollout, showing which pods are running the old version, which are using the canary, and how traffic is distributed.

For a more visual representation, access the Argo Rollouts dashboard:

kubectl argo rollouts dashboard

The dashboard, available at http://localhost:3100, displays details like traffic percentages, rollout phases, and pod statuses.

Argo Rollouts also integrates with metric providers like Prometheus, Datadog, and New Relic to analyse performance data. It tracks metrics such as error rates, latency, and resource usage to determine if the rollout should continue or be reverted. For instance, if error rates spike during the initial 20% phase, the rollout can halt and roll back automatically, protecting users from widespread issues.

Automated rollbacks are configured by defining health checks and analysis templates in the Rollout manifest. These templates monitor performance metrics and set failure thresholds, ensuring unstable updates are quickly reverted. Additionally, Kubernetes liveness and readiness probes ensure traffic is only sent to healthy instances.

For teams that prefer manual oversight, Argo Rollouts supports manual promotion, requiring explicit approval to move to the next phase. Many teams combine manual and automated approaches - starting with manual checks for the initial phase and switching to automated rollouts once confidence in the update grows.
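Manual promotion is a single plugin command, which advances the rollout past its current pause step:

kubectl argo rollouts promote webapp-canary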

These features make it easier to manage complex deployments while reducing risks and ensuring a smooth user experience.

Best Practices and Advanced Features

Traffic Shifting and Service Mesh Integration

By integrating a service mesh like Istio, Linkerd, or Consul, you can take traffic management to the next level with features like subset-based, percentage-based, or header-based routing [4]. This transforms Argo Rollouts into a powerful tool for managing traffic flows in ways that standard Kubernetes services can't match. Subset-based routing, in particular, allows for fine-tuned control over how traffic moves between different application versions.

Take, for instance, a UK-based e-commerce platform that used Istio to gradually shift traffic from 5% to 100% for a new application version. Throughout this process, Prometheus kept an eye on performance metrics, and if any issues popped up, Argo Rollouts automatically reverted to the earlier version. This approach minimised downtime and ensured a seamless experience for customers [4][2].

Service mesh integration also opens the door to advanced routing strategies. For example, you can direct traffic based on user location, device type, or specific headers. This makes it possible to test new features with targeted user groups without overcomplicating your deployment setup.
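As a sketch of header-based routing with Istio (assuming an existing VirtualService named webapp-vsvc, and a hypothetical x-beta-user header set for opted-in users):

  strategy:
    canary:
      canaryService: webapp-canary
      stableService: webapp-stable
      trafficRouting:
        managedRoutes:
        - name: beta-users              # route created and removed by the controller
        istio:
          virtualService:
            name: webapp-vsvc           # assumed existing Istio VirtualService
      steps:
      - setHeaderRoute:                 # send matching requests to the canary
          name: beta-users
          match:
          - headerName: x-beta-user
            headerValue:
              exact: "true"
      - pause: {}                       # hold here until manually promoted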

Automating Rollouts with Metrics

Argo Rollouts can automate the decision to promote or roll back a deployment by using AnalysisTemplates that evaluate key metrics like error rates, response times, and resource usage [4]. During a rollout, these metrics are continuously monitored, allowing the system to pause, promote, or abort the deployment based on whether pre-set thresholds are met.

To make this work, you'll need to configure your metrics provider, create AnalysisTemplates tailored to your application's goals, and link these templates to your Rollout objects. Common thresholds include keeping HTTP error rates below 1%, keeping 95th-percentile response times within target, and tracking custom business KPIs. The key is to choose metrics that reliably reflect the health of your application.
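For instance, the error-rate-check template referenced in the canary manifest earlier could enforce the sub-1% threshold. A sketch under the same assumptions as before (the Prometheus address and metric names will differ per cluster):

apiVersion: argoproj.io/v1alpha1
kind: AnalysisTemplate
metadata:
  name: error-rate-check
  namespace: production
spec:
  args:
  - name: service-name
  metrics:
  - name: error-rate
    interval: 30s
    failureCondition: result[0] > 0.01   # fail if more than 1% of requests return 5xx
    failureLimit: 2
    provider:
      prometheus:
        address: http://prometheus.monitoring:9090   # assumed in-cluster address
        query: |
          sum(rate(http_requests_total{service="{{args.service-name}}",status=~"5.."}[2m]))
          /
          sum(rate(http_requests_total{service="{{args.service-name}}"}[2m]))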

However, there are some pitfalls to watch out for. Misconfigured metrics or delays in data collection can lead to false alarms, triggering unnecessary rollbacks. To avoid this, thoroughly test your analysis templates in staging environments and regularly update them as your application evolves [4]. With these automation features in place, you can not only make deployments more reliable but also cut down on costs.

Cost Optimisation and Deployment Cycle Efficiency

Argo Rollouts doesn't just make deployments smoother - it also helps UK businesses save money and work more efficiently. By reducing resource waste and enabling quick rollbacks, it prevents costly infrastructure duplication and minimises the financial impact of deployment failures [2].

The benefits go beyond cost savings. One tech startup, for example, slashed its deployment time from six hours to just 20 minutes by refining its DevOps practices with automation and robust monitoring tools [1]. This kind of efficiency boost can lead to deployments that are up to 75% faster and result in up to 90% fewer errors [1], offering clear advantages in both time and cost for UK organisations.

For businesses in the UK, faster deployments mean less time spent on manual fixes and lower costs for incident management. Automation also reduces the need for out-of-hours deployment windows, improving work-life balance for teams and cutting down on overtime expenses.

Hokstad Consulting, a firm with expertise in DevOps transformation and cloud cost management, helps UK organisations optimise these processes while adhering to local standards, including GBP-based reporting.

To measure the impact of these improvements, track metrics like deployment frequency, mean time to recovery (MTTR), change failure rate, and resource utilisation. These data points can guide decisions for further refinement and optimisation [2].

Conclusion and Next Steps

Key Takeaways

Argo Rollouts offers a fresh way to manage Kubernetes deployments using blue-green and canary strategies. Its ability to handle both methods gives teams fine-tuned control over traffic distribution, allowing for gradual rollouts that minimise risk. Automated rollbacks, triggered by real-time metrics, ensure quick recovery if issues are detected during deployment[4].

When paired with tools like Istio for service mesh management and Prometheus for monitoring, Argo Rollouts becomes a powerful tool for progressive delivery. Teams can gradually shift traffic - starting at 20%, then 50%, and eventually 100% - while continuously monitoring performance. If any problems arise, traffic can be automatically rolled back, reducing downtime and improving reliability[2][4].

Beyond technical benefits, Argo Rollouts also delivers measurable value by optimising costs and streamlining deployment cycles. By cutting down on resource waste, avoiding expensive deployment failures, and enabling swift rollbacks, organisations can save money and improve their DevOps processes[2].

Further Learning Opportunities

To expand your knowledge of progressive delivery, begin with the official Argo Rollouts documentation and GitHub repository. These resources provide in-depth guides for advanced configurations, troubleshooting, and integration patterns[5][6]. For hands-on experience, the Argo Rollouts YouTube channel offers practical video tutorials showcasing deployment scenarios in action[2][3][7].

You can also join community forums to exchange ideas and gain insights from peers. For those looking to maximise the potential of automated, data-driven deployment strategies, exploring integration guides for service meshes like Istio or Consul, as well as metrics tools like Prometheus, is a great next step[4].

Working with Professional Services

While Argo Rollouts is highly capable on its own, implementing it in complex environments can be challenging. Expert guidance can make all the difference, especially for tasks like integrating with ingress controllers, setting up automated rollbacks, or ensuring compatibility with monitoring tools across multi-cloud setups[1][2][4].

Hokstad Consulting, a UK-based firm, specialises in DevOps transformation and cloud cost engineering. They’ve helped organisations reduce infrastructure costs by 30–50% while optimising deployment cycles. Their services include assessing current cloud infrastructure, setting up automated CI/CD pipelines, and adopting cost-saving strategies that align with progressive delivery practices. With their no savings, no fee model - where fees are based on the savings achieved - UK businesses can confidently invest in professional services, knowing they’ll see tangible results.

Consider working with professional services to accelerate cost savings, improve deployment speed, and enhance your deployment strategies.

FAQs

How does Argo Rollouts enhance the deployment process compared to standard Kubernetes methods?

Argo Rollouts simplifies the deployment process by incorporating progressive delivery strategies like blue-green and canary deployments. These methods enable you to roll out updates step by step, minimising the chances of downtime or interruptions to your services.

Key features such as automatic traffic shifting, real-time monitoring, and rollback options give you more control and insight into your deployment workflows. This allows you to test updates in a managed setting, ensuring a seamless experience for your users as changes are introduced.

What are the main advantages of combining Istio with Argo Rollouts for managing traffic?

Using Istio together with Argo Rollouts creates a powerful setup for managing traffic during progressive delivery. Istio’s advanced service mesh features bring precise traffic control, enhanced observability, and added security to the deployment process, complementing Argo Rollouts perfectly.

With Istio, traffic can be dynamically directed between different application versions, making deployment strategies like blue-green and canary deployments much more straightforward. It also provides in-depth metrics and logs, enabling quicker identification and resolution of issues. This combination helps ensure smoother deployments, greater reliability, and minimised risks when rolling out updates to your applications.

How can I set up automated rollbacks with Argo Rollouts to minimise downtime?

To set up automated rollbacks with Argo Rollouts while keeping downtime to a minimum, it's crucial to configure health checks for your application. These checks track essential metrics like response times and error rates, helping to identify potential issues early in the deployment process.

Leverage analysis templates to establish clear success criteria for your rollouts. These templates enable Argo Rollouts to automatically pause or roll back a deployment if certain thresholds are exceeded. Pair this with progressive delivery strategies such as canary or blue-green deployments, which roll out changes incrementally, reducing the chance of widespread issues.

By integrating thorough monitoring, well-defined success parameters, and gradual deployment methods, you can create a rollback process that's both smooth and minimally disruptive.