GoCD

I installed GoCD into my Kubernetes cluster this weekend. The system came with some high praise from a new coworker, and initial impressions were positive: it looks OK, and the install instructions via a Helm chart were easy enough to follow. A number of problems arise from the install process, though. The most severe involves open access for all users: by default the chart spins up a sample project and leaves everything open, including the configuration of an ingress endpoint that goes ahead and provisions an IP address on cloud services like GCP, essentially allowing anyone to create pipelines in the cluster.
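If I were doing it again, I'd keep the server off the public internet from the start. A minimal sketch of what that might look like, assuming Helm 3 and that the chart still exposes `server.service.type` and `server.ingress.enabled` as values (check the chart's values.yaml, and the generated service name, before relying on this):

```sh
# Install the GoCD chart without a public LoadBalancer or ingress.
helm repo add gocd https://gocd.github.io/helm-chart
helm repo update
helm install gocd gocd/gocd \
  --namespace gocd --create-namespace \
  --set server.service.type=ClusterIP \
  --set server.ingress.enabled=false

# Reach the UI locally instead of through a provisioned external IP.
kubectl port-forward -n gocd svc/gocd-server 8153:8153
```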

  • The GoCD chart provides no easily understood way to add users, default or otherwise, to the system.

  • In order to check out private GitHub code, an SSH key must be generated for the GoCD server/agents and added to GitHub. The private key then has to be configured on the server and each agent to facilitate this communication (see the SSH key sketch after this list).

  • The default user system, which is none at all, gives full access to anyone visiting from the outside. This is a really bad idea, especially when the chart installs a load balancer by default.

  • Adding a user is a manual process that involves creating a file inside a pod with the user's credentials (see the password-file sketch after this list). This login is one of two types I tried to get working with GoCD; the other was LDAP, using my Synology as the server. That got into a weird state where no user could log in, because a user needs to be known before they can log in, which led me back to the file-based accounts.

  • There is no easy way to deploy to Kubernetes, and the chart installs nothing to help. A ClusterRole needs to be bound to an appropriate service account to grant deployment access, and no container came with kubectl to use it, so instead a call has to be made to the Kubernetes REST API to deploy (see the deployment sketch after this list).
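For the SSH key dance, this is roughly what it comes down to, done by hand. The pod label, the `go` user, and `/home/go` are assumptions about the chart's images, so verify them before copying anything:

```sh
# Generate a key pair and register the public half as a GitHub deploy key
# (or attach it to a machine user with access to the private repos).
ssh-keygen -t ed25519 -f gocd_deploy_key -N ""

SERVER_POD=$(kubectl get pods -n gocd -l app=gocd-server \
  -o jsonpath='{.items[0].metadata.name}')

# Copy the private key into the server pod and trust github.com's host key.
kubectl exec -n gocd "$SERVER_POD" -- mkdir -p /home/go/.ssh
kubectl cp gocd_deploy_key "gocd/${SERVER_POD}:/home/go/.ssh/id_ed25519"
kubectl exec -n gocd "$SERVER_POD" -- sh -c \
  'chmod 600 /home/go/.ssh/id_ed25519 && ssh-keyscan github.com >> /home/go/.ssh/known_hosts'

# The same has to happen on every agent pod; mounting the key from a Secret
# would be the less manual way to do it.
```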
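The file-based users look something like this. The htpasswd flags, the pod label, and the target path are assumptions (older versions of the password-file plugin only accept SHA1 hashes via `-s`; newer ones take bcrypt via `-B`):

```sh
# Create a password file with one user; prompts for the password and
# writes a line like 'admin:<hash>'.
htpasswd -c -B passwd admin

SERVER_POD=$(kubectl get pods -n gocd -l app=gocd-server \
  -o jsonpath='{.items[0].metadata.name}')

# Copy it somewhere the server can read it; a path on the persistent volume
# survives pod restarts, a path in the container filesystem does not.
kubectl cp passwd "gocd/${SERVER_POD}:/godata/config/passwd"

# Then in the UI: Admin > Security > Authorization Configuration, add a
# configuration for the password-file plugin pointing at /godata/config/passwd.
```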
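And for deployments, since the agent images ship without kubectl, the job ends up talking to the Kubernetes API directly with the agent's service account token. The service account name and the RBAC scope here are illustrative; a namespaced Role is a better idea than anything cluster-wide:

```sh
# Grant the agents' service account permission to deploy (run once, as an admin).
kubectl create clusterrolebinding gocd-deployer \
  --clusterrole=edit \
  --serviceaccount=gocd:default

# Inside a pipeline job, running in the agent pod: patch a deployment's image
# via the REST API using the mounted service account credentials.
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
CA=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt
APISERVER=https://kubernetes.default.svc

curl --cacert "$CA" -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/strategic-merge-patch+json" \
  -X PATCH \
  "$APISERVER/apis/apps/v1/namespaces/default/deployments/my-app" \
  -d '{"spec":{"template":{"spec":{"containers":[{"name":"my-app","image":"gcr.io/example/my-app:NEW_TAG"}]}}}}'
```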

To take inspiration from a system I actually think is awesome: Google Cloud Build still feels simpler, but being an external system I would have to jump through some hoops to get it to talk to my cluster, since I don't expose those ports to the internet at large.

Defining pipeline steps in code and executing them within given container images, or building your own container from scratch, is a great idea. What I really feel I need at this point is the ease of deploying to Kubernetes wrapped up with these containers, because right now that part feels a little obtuse.
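For reference, the pipeline-as-code side is pleasant enough once it runs. A minimal sketch of a checked-in pipeline definition for GoCD's YAML config plugin, assuming an elastic agent profile named `k8s-agent` already exists and using a made-up repository:

```sh
# Hypothetical ci.gocd.yaml committed to the repository root and picked up by
# the YAML config plugin; every name in it is illustrative.
cat > ci.gocd.yaml <<'EOF'
format_version: 10
pipelines:
  my-app:
    group: default
    materials:
      source:
        git: git@github.com:example/my-app.git
        branch: master
    stages:
      - build:
          jobs:
            build:
              elastic_profile_id: k8s-agent
              tasks:
                - exec:
                    command: make
                    arguments: [build]
EOF
```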
