Kubernetes GitOps with ArgoCD

Oreon Lothamer
July 1, 2021

While Kubernetes provides a robust platform for managing containerized workloads, it can become cumbersome to manage, especially at scale. How do you manage deployments? How do you ensure consistency across environments? How do you roll back changes? How do you make it easy on developers? More and more frequently these days the answer is GitOps (https://www.weave.works/technologies/gitops/), a methodology for managing Kubernetes in which Git is the single source of truth. In this post I will share some of our experiences implementing GitOps.

The first decision is which tool(s) to use to implement GitOps. There are a variety of them out there with different pros and cons. Some of the big names are ArgoCD (https://argoproj.github.io/argo-cd/), Flux (https://fluxcd.io/), and Jenkins X (https://jenkins-x.io/).

Flux: A super simple tool but, by the same token, somewhat limited in functionality. The biggest drawback for what we were looking to do is that each installation can only monitor a single repo, so for each repo you want to deploy you would need a separate installation of Flux. That is desirable in certain situations, but not what we were looking for.

Jenkins X: Where Flux goes simple, Jenkins X veers in the complete opposite direction, providing an entire CI/CD platform for managing build, test, packaging, image storage, and deployment using Cloud Native projects. If you are looking for one tool to handle the entire pipeline, Jenkins X is worth a look. Yet for all its components it is still lacking in multi-tenancy and may require multiple installations.

ArgoCD: A nice combination of features without being overly complex or restrictive. You can add multiple repos with different levels of automation, and one installation can even control deployments to multiple clusters.

ArgoCD seemed like a good fit for us. The project provides manifests for installing it (https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml), so we used those as a template and, with a few modifications, added them to the IaC that stands up our K8s clusters through Terraform. The modifications were mainly to the argocd-cm ConfigMap, which holds the settings for ArgoCD. First, we added the ability to specify repo credentials so we can access our private repositories:

apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    app.kubernetes.io/name: argocd-cm
    app.kubernetes.io/part-of: argocd
  name: argocd-cm
data:
  repository.credentials: |
%{ for credential in repository_credentials ~}
    - url: ${credential.url}
      passwordSecret:
        name: ${credential.secret}
        key: password
      usernameSecret:
        name: ${credential.secret}
        key: username
%{ endfor ~}
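
For context, here is a minimal sketch of how a template like this can be rendered and applied from Terraform. The file name, variable shape, and use of the kubernetes_manifest resource are illustrative assumptions, not the exact code from our IaC.

variable "repository_credentials" {
  # Hypothetical variable shape: one entry per private repo
  type = list(object({
    url    = string
    secret = string
  }))
}

resource "kubernetes_manifest" "argocd_cm" {
  # templatefile() expands the %{ for } loop and ${ } references shown above
  manifest = yamldecode(templatefile("${path.module}/argocd-cm.yaml.tpl", {
    repository_credentials = var.repository_credentials
  }))
}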

Among the core components we wanted to install with ArgoCD was the Istio Operator for a service mesh. For Istio we needed to first install the Istio Operator helm chart and then deploy an IstioOperator Kubernetes object, but we had trouble getting things to run in the correct order since the IstioOperator CRD wasn't available until after the Istio Operator helm chart was applied.

ArgoCD has the ability to specify sync waves (https://argoproj.github.io/argo-cd/user-guide/sync-waves/#how-do-i-configure-waves) to control the order in which syncs are run, and we were able to use them to specify the correct order of deployment, as in the sketch below.
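
Sync waves are set with an annotation on each resource; ArgoCD syncs lower-numbered waves first and waits for them to be healthy before moving on. A minimal sketch, with illustrative resources and wave numbers rather than our exact manifests:

# Wave 0: synced first (in practice, the Istio Operator chart resources)
apiVersion: v1
kind: Namespace
metadata:
  name: istio-system
  annotations:
    argocd.argoproj.io/sync-wave: "0"
---
# Wave 1: only applied once wave 0 is synced and healthy
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  name: istio-control-plane
  namespace: istio-system
  annotations:
    argocd.argoproj.io/sync-wave: "1"
spec:
  profile: default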

But even with sync waves specified, we noticed that ArgoCD would progress to the next wave before the Operator had created all of the Istio resources, which caused issues when we specified Istio Gateways for ingress in later waves. After some research we stumbled upon https://nemo83.dev/posts/argocd-istio-operator-bootstrap/, which had our answer: we needed to set up a custom health check so that ArgoCD wouldn't mark the IstioOperator healthy until all the Istio resources were deployed.

apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    app.kubernetes.io/name: argocd-cm
    app.kubernetes.io/part-of: argocd
  name: argocd-cm
data:
  repository.credentials: |
%{ for credential in repository_credentials ~}
    - url: ${credential.url}
      passwordSecret:
        name: ${credential.secret}
        key: password
      usernameSecret:
        name: ${credential.secret}
        key: username
%{ endfor ~}

  resource.customizations: |
    install.istio.io/IstioOperator:
      health.lua: |
        hs = {}
        if obj.status ~= nil then
          if obj.status.status == "HEALTHY" then
            hs.status = "Healthy"
            hs.message = "IstioOperator Ready"
            return hs
          end
        end
        hs.status = "Progressing"
        hs.message = "Waiting for IstioOperator"
        return hs

That got us past the ordering issues, and we could now get everything deployed through GitOps. But the MutatingWebhookConfiguration kept immediately going out of sync because Istio changes the caBundle outside of Git. This behavior is expected, but ArgoCD didn't know that. Luckily, ArgoCD provides a way to handle these situations (https://argoproj.github.io/argo-cd/user-guide/diffing/), so we were able to update the ConfigMap to ignore the differences:

apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    app.kubernetes.io/name: argocd-cm
    app.kubernetes.io/part-of: argocd
  name: argocd-cm
data:
  repository.credentials: |
%{ for credential in repository_credentials ~}
    - url: ${credential.url}
      passwordSecret:
        name: ${credential.secret}
        key: password
      usernameSecret:
        name: ${credential.secret}
        key: username
%{ endfor ~}

  resource.customizations: |
    admissionregistration.k8s.io/ValidatingWebhookConfiguration:
      # List of JSON pointers in the object to ignore differences on
      ignoreDifferences: |
        jsonPointers:
        - /webhooks/0/clientConfig/caBundle
        - /webhooks/0/failurePolicy
    admissionregistration.k8s.io/v1beta1/ValidatingWebhookConfiguration:
      # List of JSON pointers in the object to ignore differences on
      ignoreDifferences: |
        jsonPointers:
        - /webhooks/0/clientConfig/caBundle
        - /webhooks/0/failurePolicy
    admissionregistration.k8s.io/MutatingWebhookConfiguration:
      # List of JSON pointers in the object to ignore differences on
      ignoreDifferences: |
        jsonPointers:
        - /webhooks/0/clientConfig/caBundle
        - /webhooks/0/failurePolicy
    install.istio.io/IstioOperator:
      health.lua: |
        hs = {}
        if obj.status ~= nil then
          if obj.status.status == "HEALTHY" then
            hs.status = "Healthy"
            hs.message = "IstioOperator Ready"
            return hs
          end
        end
        hs.status = "Progressing"
        hs.message = "Waiting for IstioOperator"
        return hs

We finally had a working implementation of ArgoCD, but now what? We wanted to make it as easy as possible to onboard new applications onto the cluster, with each group having its own repos for its applications. The app of apps pattern (https://argoproj.github.io/argo-cd/operator-manual/declarative-setup/#app-of-apps) seemed perfect.
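
The root of the pattern is a single Application that points at a repo containing other Application manifests. A minimal sketch; the repo URL, path, and names are placeholders, not our actual setup:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: apps                 # the root "app of apps"
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://git.example.com/platform/app-manifests.git  # hypothetical
    targetRevision: HEAD
    path: apps               # directory of child Application manifests
  destination:
    server: https://kubernetes.default.svc
    namespace: argocd        # child Applications live in the argocd namespace
  syncPolicy:
    automated: {}            # apply new or changed child apps automatically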

We are able to specify a repo during bootstrapping of the cluster that stores app manifests. ArgoCD monitors the repo and automatically applies any changes, so if we want to add a new application to the cluster, we just create the app manifest in the repo and ArgoCD deploys it. The developers only have to worry about the manifests for their applications, which can be stored in the same repo as their code.
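
For illustration, onboarding a new application is then just committing one more manifest like this to the bootstrap repo (the names and URL are hypothetical):

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: team-api             # hypothetical team application
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://git.example.com/team/api.git  # the team's own repo
    targetRevision: HEAD
    path: deploy             # manifests live alongside the team's code
  destination:
    server: https://kubernetes.default.svc
    namespace: team-api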

Having Git as the single source of truth for the cluster means we don't have to worry about undocumented manual changes that never make it into automation. ArgoCD will immediately identify any sync issues and, if we have specified auto sync, revert to what is in Git. Continuous deployment is also super easy without having to rewrite pipelines for each application we want to onboard. So far, our experience with GitOps has been a good one, and we look forward to exploring ways to leverage the benefits more fully.
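
That auto-sync behavior is opt-in per Application via its syncPolicy; a sketch of the relevant spec fragment (the flags are ArgoCD's, but enabling both is our choice, not a requirement):

# Added under an Application's spec to opt in to full auto sync
syncPolicy:
  automated:
    prune: true      # delete resources that were removed from Git
    selfHeal: true   # revert manual changes made directly on the cluster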


