Bitbucket Pipelines runner autoscaler on MicroK8s
A guide to setting up a self-hosted Bitbucket Pipelines Runner Autoscaler using MicroK8s.
This guide assumes you are using MicroK8s for your Kubernetes cluster and covers the steps to configure the Bitbucket Pipelines self-hosted Docker runner autoscaler on on-prem Kubernetes infrastructure (in this case, MicroK8s).
Learn more about MicroK8s here: MicroK8s - Zero-ops Kubernetes for Developers, Edge, and IoT.
Prerequisites
- Access to Bitbucket Cloud Premium (a Premium plan is required specifically for the runner autoscaler) and the necessary credentials for the Runner Autoscaler. Refer to the setup guide: Autoscaler for Runners on Kubernetes: Bitbucket Cloud Docs.
  Note: OAuth tokens are recommended, as they are easy to create, scale, and maintain. OAuth credentials must be Base64-encoded before being placed in Kubernetes secrets.
- At least two VMs with network access to each other:
  - 1 master node
  - 1 worker node (you can have multiple worker nodes, but ensure they all have network access to the master).
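Since Kubernetes secrets expect Base64-encoded values, you can encode the OAuth client ID and secret on the command line before pasting them into the configuration later. The credential value below is a placeholder:

```shell
# Encode a placeholder OAuth client ID for use in a Kubernetes Secret.
# Use printf (not echo) so no trailing newline is included in the encoding.
printf 'my-oauth-client-id' | base64
# Decode to double-check what you are about to paste:
printf 'bXktb2F1dGgtY2xpZW50LWlk' | base64 -d
```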
Step 1: Prepare the Master Node
Recommended: Go through the official MicroK8s documentation and understand how MicroK8s works before proceeding.
1. Update and Install Necessary Tools
sudo apt-get update -y
sudo apt-get upgrade -y
sudo apt install -y apt-transport-https curl gnupg lsb-release
2. Install MicroK8s
sudo snap install microk8s --classic
sudo microk8s status --wait-ready
3. Configure Network Settings (Optional)
sudo vi /etc/netplan/00-installer-config.yaml
sudo netplan apply
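As an illustration, a static-IP netplan configuration might look like the following. The interface name, addresses, and gateway are placeholders; adjust them for your network:

```yaml
# /etc/netplan/00-installer-config.yaml (illustrative values only)
network:
  version: 2
  ethernets:
    eth0:                      # replace with your interface name (see: ip link show)
      dhcp4: false
      addresses:
        - 192.168.1.10/24      # static IP for this node
      routes:
        - to: default
          via: 192.168.1.1     # default gateway
      nameservers:
        addresses: [8.8.8.8, 1.1.1.1]
```

A static IP on each node helps keep the MicroK8s cluster membership stable across reboots.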
4. Set the Hostname
sudo hostnamectl set-hostname kubemaster # or any other name you prefer
5. Add Worker Nodes
Generate a join command on the master:
sudo microk8s add-node
Follow the on-screen instructions to join worker nodes using the generated token.
6. Install Helm
You need Helm to install the Runner Autoscaler as it is packaged as a Helm chart.
sudo snap install helm --classic
Step 2: Prepare the Worker Nodes
1. Update and Install MicroK8s
sudo apt-get update -y
sudo apt install -y vim
sudo snap install microk8s --classic
sudo microk8s status --wait-ready
2. Join the Worker Node to the Master Node
sudo microk8s join <MASTER_NODE_IP>:<PORT>/<TOKEN> --worker
Step 3: Clone and Configure the Runner Autoscaler
1. Clone the Runner Autoscaler Repository
git clone git@bitbucket.org:bitbucketpipelines/runners-autoscaler.git
cd runners-autoscaler/kustomize
git checkout 3.7.0 # Make sure to check out the latest available version
2. Configure Runner Autoscaler
Edit the runners_config.yaml and kustomization.yaml files to include your Bitbucket OAuth credentials and other necessary configurations as described here: Autoscaler for Runners on Kubernetes: Bitbucket Cloud Docs.
sudo vi values/runners_config.yaml
sudo vi values/kustomization.yaml
Example runners_config.yaml
constants:
  default_sleep_time_runner_setup: 10  # value in seconds
  default_sleep_time_runner_delete: 5  # value in seconds
  runner_api_polling_interval: 600  # value in seconds
  runner_cool_down_period: 300  # value in seconds
groups:
  - name: "Linux Docker Runners"
    workspace: "YOURWORKSPACENAME"  # workspace name
    # repository: "{repository_uuid}"
    # optional, only needed if you want repository runners - include the curly braces
    labels:  # labels assigned to each runner
      - "self.hosted"
      - "linux"
      - "runner.docker"
    namespace: "default"  # target namespace where runner pods will be created
    strategy: "percentageRunnersIdle"
    # Parameters for creating/deleting runners via the Bitbucket API.
    parameters:
      min: 4  # a minimum of 1 is recommended; a newly started build fails if no runner exists
      max: 8  # maximum runners allowed to be deployed; take available resources into account before raising this
      scale_up_threshold: 0.5  # percentage of busy runners at which the desired runner count is re-evaluated to scale up
      scale_down_threshold: 0.2  # percentage of busy runners at which the desired runner count is re-evaluated to scale down
      scale_up_multiplier: 1.5  # scale_up_multiplier > 1
      scale_down_multiplier: 0.5  # 0 < scale_down_multiplier < 1
    # Resources for the Kubernetes job template.
    # This section is optional. If omitted, the defaults of "4Gi" memory and "1000m" CPU are used for both requests and limits.
    resources:
      requests:
        memory: "4Gi"
        cpu: "2000m"
      limits:
        memory: "4Gi"
        cpu: "2000m"
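To build intuition for how these parameters interact, here is an illustrative shell sketch (not the autoscaler's actual code) using the values above: with 3 of 4 runners busy, the busy ratio (75%) exceeds scale_up_threshold (50%), so the runner count is multiplied by scale_up_multiplier (1.5) and clamped between min and max:

```shell
#!/bin/sh
# Illustrative sketch of the percentageRunnersIdle idea -- NOT the autoscaler's exact algorithm.
busy=3; total=4; min=4; max=8
ratio=$(( busy * 100 / total ))        # busy ratio as an integer percentage: 75
desired=$total
if [ "$ratio" -ge 50 ]; then           # scale_up_threshold = 0.5
  desired=$(( total * 3 / 2 ))         # scale_up_multiplier = 1.5
elif [ "$ratio" -le 20 ]; then         # scale_down_threshold = 0.2
  desired=$(( total / 2 ))             # scale_down_multiplier = 0.5
fi
[ "$desired" -gt "$max" ] && desired=$max   # clamp to max
[ "$desired" -lt "$min" ] && desired=$min   # clamp to min
echo "$desired"                        # 4 runners scale up to 6
```

In practice the controller polls the Bitbucket API every runner_api_polling_interval seconds and applies its own rounding and cool-down rules, so treat this only as a mental model for choosing thresholds and multipliers.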
Example kustomization.yaml
This can vary based on the authentication method you use to connect your Kubernetes cluster to Bitbucket Cloud.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../base
# Review the ./runners_config.yaml file, especially the workspace UUID and labels.
configMapGenerator:
  - name: runners-autoscaler-config
    files:
      - runners_config.yaml
    options:
      disableNameSuffixHash: true
# The namespace for the runners autoscaler resources.
# It is not the same as the namespace for runner pods, which is specified in runners_config.yaml.
namespace: bitbucket-runner-control-plane
commonLabels:
  app.kubernetes.io/part-of: runners-autoscaler
images:
  - name: bitbucketpipelines/runners-autoscaler
    newTag: 3.7.0  # Ensure this matches the version you checked out earlier.
patches:
  - target:
      version: v1
      kind: Secret
      name: runner-bitbucket-credentials
    # There are 2 options.
    # Choose one of them, uncomment it, and specify the values.
    # PS: values must be Base64-encoded.
    # 1) OAuth - specify the OAuth client ID and secret.
    # 2) App password - specify the Bitbucket username and Bitbucket app password.
    patch: |-
      ### Option 1 ###
      - op: add
        path: /data/bitbucketOauthClientId
        value: "ENTER_BITBUCKET_OAUTH_CLIENT_ID_HERE_WITHIN_QUOTES"
      - op: add
        path: /data/bitbucketOauthClientSecret
        value: "ENTER_BITBUCKET_OAUTH_CLIENT_SECRET_HERE_WITHIN_QUOTES"
      ### Option 2 ###
      # - op: add
      #   path: /data/bitbucketUsername
      #   value: ""
      # - op: add
      #   path: /data/bitbucketAppPassword
      #   value: ""
  - target:
      version: v1
      kind: Deployment
      labelSelector: "inject=runners-autoscaler-envs"
    # Uncomment the same option you've chosen for the Secret above.
    patch: |-
      ### Option 1 ###
      - op: replace
        path: /spec/template/spec/containers/0/env
        value:
          - name: BITBUCKET_OAUTH_CLIENT_ID
            valueFrom:
              secretKeyRef:
                key: bitbucketOauthClientId
                name: runner-bitbucket-credentials
          - name: BITBUCKET_OAUTH_CLIENT_SECRET
            valueFrom:
              secretKeyRef:
                key: bitbucketOauthClientSecret
                name: runner-bitbucket-credentials
      ### Option 2 ###
      # - op: replace
      #   path: /spec/template/spec/containers/0/env
      #   value:
      #     - name: BITBUCKET_USERNAME
      #       valueFrom:
      #         secretKeyRef:
      #           key: bitbucketUsername
      #           name: runner-bitbucket-credentials
      #     - name: BITBUCKET_APP_PASSWORD
      #       valueFrom:
      #         secretKeyRef:
      #           key: bitbucketAppPassword
      #           name: runner-bitbucket-credentials
Apply Kustomization Files:
Apply the Kustomize configurations to deploy the Runner Autoscaler:
sudo microk8s kubectl apply -k values
Monitor Runner Logs:
Monitor the logs to ensure the Runner Autoscaler is functioning correctly:
sudo microk8s kubectl logs -f -l app=runner-controller -n bitbucket-runner-control-plane
Step 4: Validate the Setup
1. Check Node Status (run on the master to verify the workers have joined)
sudo microk8s kubectl get nodes
2. Check Pod Status
sudo microk8s kubectl get pods -n bitbucket-runner-control-plane --field-selector=status.phase=Running
3. Inspect Runner Logs
sudo microk8s kubectl logs -f runner-controller-<pod-name> -n bitbucket-runner-control-plane
Additional Tips
- Scaling Runners: The autoscaler handles scaling runners dynamically based on job demand. Ensure your runner configuration in runners_config.yaml is correct.
- Resource Monitoring: Use the following command to monitor resource usage and adjust configurations if necessary:
sudo microk8s kubectl top nodes # requires the metrics-server addon (sudo microk8s enable metrics-server)
Troubleshooting
- Failed Pod Scheduling: If runners fail to schedule, check node taints and pod events.
- Logs Not Displaying: Ensure --max-log-requests is not limiting the logs excessively.
Additional Improvements
- Use MicroCeph or just Ceph to manage storage that backs the MicroK8s nodes for better flexibility and scalability. Refer to the MicroCeph Multi-Node Install Guide.