Migrating Ghost Blog from AKS to LKE
My Ghost blog (ccrowell.com) has been running on AKS. If you are cost conscious, I wouldn't recommend this setup, but I had some Azure credits and was feeling adventurous. I also thought it would be good practice for running "real" workloads. The pieces to this puzzle were:
- Creating a custom ghost image and pushing to a container registry (I chose ACR)
- Installing cert-manager and nginx-ingress in my cluster
- DNS hosting of some sort (not as important as the rest)
- A deployment, a ClusterIP service, a tls secret, and an ingress resource.
- A persistent volume claim using the default storage class
Backing up the Ghost Data from AKS
My Ghost data lived in a Kubernetes PersistentVolume, but the underlying storage was an Azure file share. To back up the data, I mounted the Azure file share locally and copied the contents to my machine.
# find the name of the secret starting with 'azure-storage-account'
kubectl -n ghost get secret
# get the storage account name (substitute the secret name you found above)
kubectl -n ghost get secret <secret-name> -o jsonpath='{.data.azurestorageaccountname}' | base64 --decode
# get the storage account key
kubectl -n ghost get secret <secret-name> -o jsonpath='{.data.azurestorageaccountkey}' | base64 --decode
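Secret values come back base64-encoded, which is why the commands above pipe through `base64 --decode`. A minimal round-trip illustration (the account name here is made up, not your real one):

```shell
# Kubernetes stores Secret data base64-encoded; --decode reverses it.
encoded=$(printf 'mystorageacct' | base64)
decoded=$(printf '%s' "$encoded" | base64 --decode)
echo "$decoded"   # prints: mystorageacct
```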
# get the name of the pvc used with your ghost deployment (mine was just called 'ghost-pvc')
kubectl -n ghost get pvc
# get the azure fileshare name (you can also get this from azure-cli)
kubectl -n ghost get pvc <pvc-name> -o jsonpath="{.spec.volumeName}"
# install cifs-utils (on Debian/Ubuntu) and make a backup directory
sudo apt-get install -y cifs-utils
sudo mkdir -p ~/ghost-backup
# mount the azure file share
sudo mount -t cifs //<storage-account-name>.file.core.windows.net/<file-share-name> ~/ghost-backup -o vers=3.0,username=<storage-account-name>,password=<storage-account-key>,dir_mode=0777,file_mode=0777
# copy the ghost data from the mounted share to a local directory
cp -r ~/ghost-backup ~/ghost-backup-local
# unmount the azure file share
sudo umount ~/ghost-backup
Backing up the Ghost Database
The next step was to back up the SQLite database. Since I was using SQLite instead of MySQL, the backup process was just a matter of copying the SQLite file directly. This file is commonly named 'ghost.db'.
# find the database name
kubectl -n ghost exec -it <ghost-pod-name> -- cat /var/lib/ghost/config.production.json
# copy the 'ghost.db' file to your local machine using kubectl cp
kubectl cp <ghost-pod-name>:/var/lib/ghost/content/data/ghost.db ./ghost.db -n ghost
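As an extra sanity check, every SQLite 3 file begins with the magic string `SQLite format 3`, so you can confirm the copy actually came across as a SQLite database. A small helper sketch (assumes a POSIX shell; the function name is mine, not from the original post):

```shell
# check_sqlite_header FILE - warn if FILE does not start with the SQLite 3 magic string
check_sqlite_header() {
  magic=$(head -c 15 "$1")
  if [ "$magic" = "SQLite format 3" ]; then
    echo "OK: $1 looks like a SQLite 3 database"
  else
    echo "WARNING: $1 does not start with the SQLite 3 header"
  fi
}

# e.g. check_sqlite_header ghost.db
```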
# verify the file was copied successfully
ls -lh ghost.db
Backing up the TLS secret
The TLS secret is used with cert-manager for HTTPS communication to your domain (a Let's Encrypt certificate requested through cert-manager). Cert-manager also renews the certificate before it expires. We could generate a new certificate on the new cluster, but I decided to transfer the existing SSL certificate directly.
# get the tls secrets in your AKS cluster
kubectl -n ghost get secrets
# backup the TLS secret from AKS
kubectl -n ghost get secret my-tls -o yaml > my-tls.yaml
Creating an LKE cluster
- Log into Linode Cloud Manager.
- Navigate to Kubernetes > Create a Cluster.
- Choose a Kubernetes Version and Region.
- Select the number and type of nodes.
- Click Create Cluster and download your kubeconfig file.
export KUBECONFIG=~/path/to/linode-kubeconfig
kubectl get nodes
Set Up Persistent Storage in LKE
Since Linode does not have Azure File Shares, you need to use Linode Block Storage:
- Create a Persistent Volume Claim (PVC):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ghost-pvc
  namespace: ghost
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: linode-block-storage-retain
kubectl apply -f ghost-pvc.yaml
Deploy Ghost on LKE
- Update Your Ghost Deployment YAML to Use Linode Block Storage:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ghost
  namespace: ghost
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ghost
  template:
    metadata:
      labels:
        app: ghost
    spec:
      containers:
        - name: ghost
          image: cmcrowell.azurecr.io/cmcrowell-ghost:v5.6
          imagePullPolicy: IfNotPresent
          env:
            - name: url
              value: https://cmcrowell.com
          ports:
            - containerPort: 2368
              name: http
              protocol: TCP
          resources:
            limits:
              cpu: "1"
              memory: 256Mi
            requests:
              cpu: 100m
              memory: 64Mi
          volumeMounts:
            - name: ghost-content
              mountPath: /var/lib/ghost/content
      volumes:
        - name: ghost-content
          persistentVolumeClaim:
            claimName: ghost-pvc
- Apply the Deployment:
kubectl apply -f ghost-deploy.yaml
Restore Ghost Data to LKE
- Transfer Data to Linode Block Storage:
- If using scp to move backup data:
scp -r ghost-backup-local <linode-server>:/mnt/ghost-data
- Or use kubectl cp to copy data into the new pod:
kubectl -n ghost cp ghost-backup-local ghost-799fc9988-2bzsz:/var/lib/ghost/content
- Restore the Database:
# Copy the `ghost.db` file to the new Ghost pod
kubectl -n ghost cp ghost.db ghost-799fc9988-2bzsz:/var/lib/ghost/content/data/ghost.db
# Ensure proper file permissions inside the pod
kubectl exec -it ghost-799fc9988-2bzsz -n ghost -- chmod 644 /var/lib/ghost/content/data/ghost.db
Update DNS and Test the Migration
- Expose the Ghost Service via ClusterIP
apiVersion: v1
kind: Service
metadata:
  name: ghost
  namespace: ghost
spec:
  type: ClusterIP
  ports:
    - port: 80
      protocol: TCP
      targetPort: 2368
  selector:
    app: ghost
kubectl apply -f ghost-service.yaml
- Update Your Domain’s DNS
- Point your domain (e.g., blog.example.com) to the new LKE LoadBalancer IP.
- Verify the Blog is Running
kubectl get pods -n <ghost-namespace>
kubectl logs -f <ghost-pod-name> -n <ghost-namespace>
Install NGINX Ingress Controller in LKE
Linode Kubernetes Engine (LKE) does not come with an Ingress Controller by default, so we need to install one.
Option 1: Install via Helm
- Add the Helm repo for NGINX Ingress:
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
- Install the NGINX Ingress Controller in the ghost namespace:
helm install nginx-ingress ingress-nginx/ingress-nginx --namespace ghost --set controller.service.type=LoadBalancer
You will get output similar to the following:
$ helm install nginx-ingress ingress-nginx/ingress-nginx --namespace ghost --set controller.service.type=LoadBalancer
NAME: nginx-ingress
LAST DEPLOYED: Mon Feb 24 16:08:19 2025
NAMESPACE: ghost
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
The ingress-nginx controller has been installed.
It may take a few minutes for the load balancer IP to be available.
You can watch the status by running 'kubectl get service --namespace ghost nginx-ingress-ingress-nginx-controller --output wide --watch'
- An example Ingress that makes use of the controller:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example
  namespace: foo
spec:
  ingressClassName: nginx
  rules:
    - host: www.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: exampleService
                port:
                  number: 80
  tls:
    - hosts:
        - www.example.com
      secretName: example-tls
- Verify the installation:
kubectl get pods -n ghost
kubectl get svc -n ghost
- Get the IP of the LoadBalancer:
kubectl get svc -n ghost
Use the EXTERNAL-IP (e.g. 45.79.240.133) to update your DNS record for cmcrowell.com.
Deploy Cert-Manager in LKE
Cert-Manager is used to manage SSL/TLS certificates automatically.
Install Cert-Manager using Helm
helm repo add jetstack https://charts.jetstack.io
helm repo update
helm install cert-manager jetstack/cert-manager --namespace cert-manager --create-namespace --set installCRDs=true
You will get output similar to the following:
$ helm install cert-manager jetstack/cert-manager --namespace cert-manager --create-namespace --set installCRDs=true
NAME: cert-manager
LAST DEPLOYED: Mon Feb 24 16:11:36 2025
NAMESPACE: cert-manager
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
⚠️ WARNING: `installCRDs` is deprecated, use `crds.enabled` instead.
cert-manager v1.17.1 has been deployed successfully!
In order to begin issuing certificates, you will need to set up a ClusterIssuer
or Issuer resource (for example, by creating a 'letsencrypt-staging' issuer).
More information on the different types of issuers and how to configure them
can be found in our documentation:
https://cert-manager.io/docs/configuration/
For information on how to configure cert-manager to automatically provision
Certificates for Ingress resources, take a look at the `ingress-shim` documentation:
https://cert-manager.io/docs/usage/ingress/
Verify:
kubectl -n cert-manager get pods
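The install NOTES point out that cert-manager cannot issue (or renew) anything until an Issuer or ClusterIssuer exists. We restore the existing certificate below, but cert-manager will still need an issuer to renew it later. A sketch of a Let's Encrypt ClusterIssuer for the nginx ingress class (the name `letsencrypt-prod` and the email are placeholders I'm assuming, not from the original post):

```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod   # hypothetical name
spec:
  acme:
    # Let's Encrypt production ACME endpoint
    server: https://acme-v02.api.letsencrypt.org/directory
    email: you@example.com   # placeholder; use your own contact email
    privateKeySecretRef:
      # secret where the ACME account key will be stored
      name: letsencrypt-prod-account-key
    solvers:
      - http01:
          ingress:
            ingressClassName: nginx
```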
Backup and Restore Your TLS Secret
Your TLS secret cmc-tls stores the SSL certificate. We'll transfer it from AKS to LKE.
Backup the TLS secret from AKS:
kubectl get secret cmc-tls -n ghost -o yaml > cmc-tls.yaml
Restore the TLS Secret to LKE
Modify the cmc-tls.yaml file:
- Remove metadata.resourceVersion
- Remove metadata.uid
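After removing those fields, the cleaned-up cmc-tls.yaml should look roughly like this (a sketch; the data values below are placeholders, not real certificate material):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: cmc-tls
  namespace: ghost
type: kubernetes.io/tls
data:
  tls.crt: <base64-encoded-certificate>   # placeholder
  tls.key: <base64-encoded-private-key>   # placeholder
```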
Then apply it in LKE:
kubectl apply -f cmc-tls.yaml -n ghost
# verify
kubectl -n ghost get secret
Migrate Your Ingress Configuration
Now, we'll recreate the Ingress resource in LKE.
Backup Ingress from AKS
kubectl get ingress ingress-ghost -n ghost -o yaml > ingress-ghost.yaml
Modify ingress-ghost.yaml
- Remove metadata.resourceVersion and metadata.uid
- Ensure ingress.class is still nginx
- Update the cmcrowell.com DNS record to point to your LKE LoadBalancer EXTERNAL-IP
- Ensure tls.secretName is set to cmc-tls
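With that checklist applied to this blog, the modified manifest would look roughly like this (a sketch; your annotations and labels from AKS may differ, and the service name and port assume the ghost Service defined earlier):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-ghost
  namespace: ghost
spec:
  ingressClassName: nginx
  rules:
    - host: cmcrowell.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: ghost   # the ClusterIP service created above
                port:
                  number: 80
  tls:
    - hosts:
        - cmcrowell.com
      secretName: cmc-tls   # the restored TLS secret
```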
Apply the Ingress to LKE
kubectl apply -f ingress-ghost.yaml -n ghost
# verify
kubectl -n ghost get ing
Test Your Migration
- Check if NGINX Ingress Controller is Running
kubectl -n ghost get pods
Verify Cert-Manager Issued the TLS Certificate
kubectl -n ghost describe certificate cmc-tls
Test Your Website
- Visit https://cmcrowell.com in a browser.