Ansible AWX is a free and open source Ansible management tool created to provide system administrators and developers with an easy-to-use, intuitive, and powerful web-based user interface for managing Ansible playbooks, secrets, inventories, and scheduled automation jobs. This guide explains how to install Ansible AWX on a Debian 12/11/10 Linux system.

For a vanilla Ansible installation, see: How To Install and Use Ansible on Debian

Step 1: Update your system

Update and upgrade your Debian system before you install Ansible AWX:

sudo apt update && sudo apt -y full-upgrade

If a reboot is required, restart the system:

[ -f /var/run/reboot-required ] && sudo reboot -f

Step 2: Install Single Node k3s Kubernetes

We will deploy a single-node Kubernetes cluster using the lightweight k3s tool. K3s is a certified Kubernetes distribution designed for production workloads in unattended, resource-constrained environments. A benefit of k3s is that you can add more worker nodes at a later stage if the need arises.

Install K3s Kubernetes on your Debian system by running the following command:

curl -sfL https://get.k3s.io | bash -s - --write-kubeconfig-mode 644

Expected installation output. The process should complete in a few minutes:

[INFO]  Finding release for channel stable
[INFO]  Using v1.27.7+k3s2 as release
[INFO]  Downloading hash https://github.com/k3s-io/k3s/releases/download/v1.27.7+k3s2/sha256sum-amd64.txt
[INFO]  Downloading binary https://github.com/k3s-io/k3s/releases/download/v1.27.7+k3s2/k3s
[INFO]  Verifying binary download
[INFO]  Installing k3s to /usr/local/bin/k3s
[INFO]  Skipping installation of SELinux RPM
[INFO]  Creating /usr/local/bin/kubectl symlink to k3s
[INFO]  Creating /usr/local/bin/crictl symlink to k3s
[INFO]  Creating /usr/local/bin/ctr symlink to k3s
[INFO]  Creating killall script /usr/local/bin/k3s-killall.sh
[INFO]  Creating uninstall script /usr/local/bin/k3s-uninstall.sh
[INFO]  env: Creating environment file /etc/systemd/system/k3s.service.env
[INFO]  systemd: Creating service file /etc/systemd/system/k3s.service
[INFO]  systemd: Enabling k3s unit
Created symlink /etc/systemd/system/multi-user.target.wants/k3s.service → /etc/systemd/system/k3s.service.
[INFO]  systemd: Starting k3s

After installation, kubectl is configured for you; use it to check cluster details:

$ kubectl get nodes
NAME     STATUS   ROLES                  AGE   VERSION
debian   Ready    control-plane,master   33s   v1.27.7+k3s2
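Before moving on, you may want to confirm that the k3s service is active and the core cluster components came up cleanly. This is an optional sanity check:

```shell
# k3s runs as a single systemd service
systemctl is-active k3s

# All kube-system pods should reach Running or Completed status
kubectl get pods -n kube-system
```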

Step 3: Deploy AWX Operator on Kubernetes

The AWX Operator is used to manage one or more AWX instances in any namespace within the cluster.

Install git, build tools, and AppArmor utilities:

sudo apt update
sudo apt install git vim build-essential apparmor apparmor-utils -y

Install Kustomize:

curl -s "https://raw.githubusercontent.com/kubernetes-sigs/kustomize/master/hack/install_kustomize.sh" | bash
sudo mv kustomize /usr/local/bin

Confirm installation of Kustomize by checking the version:

$ kustomize version
v5.2.1

Save the latest version from the AWX Operator releases page in a RELEASE_TAG variable; it will be referenced in the Kustomize configuration below.

sudo apt update
sudo apt install curl jq -y
RELEASE_TAG=$(curl -s https://api.github.com/repos/ansible/awx-operator/releases/latest | jq -r '.tag_name')
echo $RELEASE_TAG

Create a file called kustomization.yaml with the following content:

tee kustomization.yaml<<EOF
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  # Find the latest tag here: https://github.com/ansible/awx-operator/releases
  - github.com/ansible/awx-operator/config/default?ref=$RELEASE_TAG

# Set the image tags to match the git version from above
images:
  - name: quay.io/ansible/awx-operator
    newTag: $RELEASE_TAG

# Specify a custom namespace in which to install AWX
namespace: awx
EOF

Install the manifests by running this:

$ kustomize build . | kubectl apply -f -
namespace/awx created
customresourcedefinition.apiextensions.k8s.io/awxbackups.awx.ansible.com created
customresourcedefinition.apiextensions.k8s.io/awxrestores.awx.ansible.com created
customresourcedefinition.apiextensions.k8s.io/awxs.awx.ansible.com created
serviceaccount/awx-operator-controller-manager created
role.rbac.authorization.k8s.io/awx-operator-awx-manager-role created
role.rbac.authorization.k8s.io/awx-operator-leader-election-role created
clusterrole.rbac.authorization.k8s.io/awx-operator-metrics-reader created
clusterrole.rbac.authorization.k8s.io/awx-operator-proxy-role created
rolebinding.rbac.authorization.k8s.io/awx-operator-awx-manager-rolebinding created
rolebinding.rbac.authorization.k8s.io/awx-operator-leader-election-rolebinding created
clusterrolebinding.rbac.authorization.k8s.io/awx-operator-proxy-rolebinding created
configmap/awx-operator-awx-manager-config created
service/awx-operator-controller-manager-metrics-service created
deployment.apps/awx-operator-controller-manager created

Set the current context to the namespace stored in the NAMESPACE variable:

# export NAMESPACE=awx
# kubectl config set-context --current --namespace=$NAMESPACE 
Context "default" modified.

After a few minutes, the awx-operator pod should be in a Running status:

# kubectl get pods -n awx
NAME                                               READY   STATUS    RESTARTS   AGE
awx-operator-controller-manager-54787fcf67-swcbr   2/2     Running   0          96s
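Instead of polling manually, you can block until the operator deployment reports Available; the 300-second timeout below is just an example value:

```shell
kubectl -n awx wait deployment/awx-operator-controller-manager \
  --for=condition=Available --timeout=300s
```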

Uninstalling AWX Operator (just for reference)

You can always remove the operator and all associated CRDs by running the command below:

kustomize build . | kubectl delete -f -

Step 4: Deploy AWX on K3s Kubernetes

We need to persist web application data by creating a PersistentVolumeClaim (PVC). Run the commands below in the terminal to create it:

cat <<EOF | kubectl create -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: static-data-pvc
  namespace: awx
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-path
  resources:
    requests:
      storage: 5Gi
EOF
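You can verify the claim afterwards. Note that the local-path storage class bundled with k3s uses WaitForFirstConsumer volume binding, so the claim may legitimately show a Pending status until a pod that mounts it is scheduled; Pending here is not necessarily an error:

```shell
# The PVC may stay Pending until the AWX web pod consumes it
kubectl -n awx get pvc static-data-pvc

# Confirm the binding mode of the storage class
kubectl get storageclass local-path -o jsonpath='{.volumeBindingMode}'; echo
```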

Create the AWX instance deployment YAML file:

tee awx-deploy.yml<<EOF
---
apiVersion: awx.ansible.com/v1beta1
kind: AWX
metadata:
  name: awx
spec:
  service_type: nodeport
  projects_persistence: true
  projects_storage_access_mode: ReadWriteOnce
  web_extra_volume_mounts: |
    - name: static-data
      mountPath: /var/lib/projects
  extra_volumes: |
    - name: static-data
      persistentVolumeClaim:
        claimName: static-data-pvc
EOF

Update the Kustomize file:

RELEASE_TAG=$(curl -s https://api.github.com/repos/ansible/awx-operator/releases/latest | jq -r '.tag_name')
tee kustomization.yaml<<EOF
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  # Find the latest tag here: https://github.com/ansible/awx-operator/releases
  - github.com/ansible/awx-operator/config/default?ref=$RELEASE_TAG
  # Add this extra line:
  - awx-deploy.yml
# Set the image tags to match the git version from above
images:
  - name: quay.io/ansible/awx-operator
    newTag: $RELEASE_TAG

# Specify a custom namespace in which to install AWX
namespace: awx
EOF

Apply configuration to create required objects:

$ kustomize build . | kubectl apply -f -
namespace/awx unchanged
customresourcedefinition.apiextensions.k8s.io/awxbackups.awx.ansible.com unchanged
customresourcedefinition.apiextensions.k8s.io/awxrestores.awx.ansible.com unchanged
customresourcedefinition.apiextensions.k8s.io/awxs.awx.ansible.com unchanged
serviceaccount/awx-operator-controller-manager unchanged
role.rbac.authorization.k8s.io/awx-operator-awx-manager-role configured
role.rbac.authorization.k8s.io/awx-operator-leader-election-role unchanged
clusterrole.rbac.authorization.k8s.io/awx-operator-metrics-reader unchanged
clusterrole.rbac.authorization.k8s.io/awx-operator-proxy-role unchanged
rolebinding.rbac.authorization.k8s.io/awx-operator-awx-manager-rolebinding unchanged
rolebinding.rbac.authorization.k8s.io/awx-operator-leader-election-rolebinding unchanged
clusterrolebinding.rbac.authorization.k8s.io/awx-operator-proxy-rolebinding unchanged
configmap/awx-operator-awx-manager-config unchanged
service/awx-operator-controller-manager-metrics-service unchanged
deployment.apps/awx-operator-controller-manager configured
awx.awx.ansible.com/awx created

Wait a few minutes, then confirm the AWX instance was deployed:

$ kubectl -n awx get pods -l "app.kubernetes.io/managed-by=awx-operator"
NAME                       READY   STATUS    RESTARTS   AGE
awx-postgres-13-0          1/1     Running   0          3m34s
awx-task-58cbc7bdc-s7dfq   4/4     Running   0          2m49s
awx-web-56cdd7bdcf-mczsg   3/3     Running   0          102s

List deployments:

$ kubectl get deployments -n awx
NAME                              READY   UP-TO-DATE   AVAILABLE   AGE
awx-operator-controller-manager   1/1     1            1           7m54s
awx-task                          1/1     1            1           5m22s
awx-web                           1/1     1            1           4m15s

If you experience any issues with the pods starting, check the operator deployment logs:

kubectl logs -f deployments/awx-operator-controller-manager -c awx-manager -n awx
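Pod-level events often reveal scheduling, storage, or image-pull problems that never make it into container logs. Substitute a pod name from the kubectl get pods -n awx output:

```shell
# Events are listed at the bottom of the describe output
kubectl -n awx describe pod <pod-name>

# Or view recent events for the whole namespace, oldest first
kubectl -n awx get events --sort-by=.metadata.creationTimestamp
```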

The database data is persistent because it is stored in a persistent volume:

# kubectl get pvc
NAME                            STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
postgres-13-awx-postgres-13-0   Bound    pvc-998f2911-fa1d-4d84-acbb-445bf6837292   8Gi        RWO            local-path     11s

Volumes are created using the local-path provisioner on a host path:

$ ls /var/lib/rancher/k3s/storage/
pvc-998f2911-fa1d-4d84-acbb-445bf6837292_awx_postgres-13-awx-postgres-13-0

List all available services and note the awx-service NodePort:

$ kubectl get svc -l "app.kubernetes.io/managed-by=awx-operator"
NAME           TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
awx-postgres   ClusterIP   None           <none>        5432/TCP       2m5s
awx-service    NodePort    10.43.182.53   <none>        80:30080/TCP   116s

You can edit the NodePort and set it to a value of your preference:

$ kubectl edit svc awx-service
....
ports:
  - name: http
    nodePort: <value>
    port: 80
    protocol: TCP
    targetPort: 8052
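If you prefer a non-interactive change, the same edit can be applied with kubectl patch. The NodePort value 30090 below is just an example; valid NodePorts fall in the 30000-32767 range by default:

```shell
# JSON merge patch replaces the whole ports list, so the full
# port definition must be supplied
kubectl -n awx patch svc awx-service --type merge \
  -p '{"spec":{"ports":[{"name":"http","port":80,"protocol":"TCP","targetPort":8052,"nodePort":30090}]}}'
```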

If you have an Ingress controller in the cluster, you can create a route for the AWX application so it is accessible over a domain name.
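A minimal Ingress sketch, assuming the Traefik Ingress controller that ships with k3s and a hypothetical hostname awx.example.com (the AWX Operator can also manage an Ingress for you through its ingress_type and hostname spec fields):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: awx-ingress
  namespace: awx
spec:
  ingressClassName: traefik
  rules:
    - host: awx.example.com        # hypothetical domain, replace with yours
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: awx-service  # the NodePort service created by the operator
                port:
                  number: 80
```

Save the manifest to a file and apply it with kubectl apply -f.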

Access AWX Container’s Shell

List deployments.

$ kubectl get deploy -n awx
NAME                              READY   UP-TO-DATE   AVAILABLE   AGE
awx-operator-controller-manager   1/1     1            1           6m47s
awx-task                          1/1     1            1           5m39s
awx-web                           1/1     1            1           3m37s


Here is how to access each container’s shell:

kubectl exec -ti deploy/awx-web -c redis -- /bin/bash
kubectl exec -ti deploy/awx-web -c awx-web -- /bin/bash
kubectl exec -ti awx-postgres-13-0 -c postgres -- /bin/bash

Checking AWX Container’s logs

List deployments.

# kubectl get deploy -n awx
NAME                              READY   UP-TO-DATE   AVAILABLE   AGE
awx-operator-controller-manager   1/1     1            1           13m
awx-task                          1/1     1            1           12m
awx-web                           1/1     1            1           10m

List Pods.

# kubectl get pods  -n awx
NAME                                               READY   STATUS    RESTARTS   AGE
awx-operator-controller-manager-77d4cc4746-phx7l   2/2     Running   0          16m
awx-postgres-13-0                                  1/1     Running   0          15m
awx-task-56599458d6-5rb8m                          4/4     Running   0          15m
awx-web-75dfc8f8d7-4mgsk                           3/3     Running   0          13m

List containers in each pod.

$ kubectl -n awx get pod awx-task-56599458d6-5rb8m -o jsonpath='{.spec.containers[*].name}';echo
redis awx-task awx-ee awx-rsyslog

$ kubectl -n awx get pod awx-web-75dfc8f8d7-4mgsk -o jsonpath='{.spec.containers[*].name}';echo
redis awx-web awx-rsyslog

The awx-task pod has the following containers:

  • redis
  • awx-task
  • awx-ee
  • awx-rsyslog

Syntax for checking container logs:

kubectl -n awx logs deploy/<deployment> -c <container>
# OR
kubectl -n awx logs pod/<podName> -c <containerName>

See the examples below:

kubectl -n awx  logs deploy/awx-web -c redis
kubectl -n awx  logs deploy/awx-web -c awx-web
kubectl -n awx  logs deploy/awx-web -c awx-rsyslog
kubectl -n awx  logs deploy/awx-task -c redis

Upgrading AWX Operator and instance

We have created a dedicated guide for upgrading the Operator and AWX instance.

Step 5: Access Ansible AWX Dashboard

The Ansible AWX web portal is now accessible at http://hostip_or_hostname:30080.
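Before opening the browser, you can confirm that the API responds from the shell, assuming the default NodePort 30080 shown earlier:

```shell
# The ping endpoint requires no authentication and reports basic status
curl -s http://localhost:30080/api/v2/ping/
```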


Obtain the admin user password by decoding the secret that holds it:

kubectl -n awx get secret awx-admin-password -o go-template='{{range $k,$v := .data}}{{printf "%s: " $k}}{{if not $v}}{{$v}}{{else}}{{$v | base64decode}}{{end}}{{"\n"}}{{end}}'

Sample output:

password: LkyWUKDwKdnhiEcvFe0zRQ9jOJCz7eMS

Or run:

kubectl -n awx get secret awx-admin-password -o jsonpath="{.data.password}" | base64 --decode
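As a purely local illustration of what the base64 step in the commands above does, here is an encode/decode round trip using the sample password value shown earlier (a throwaway example string, not a real credential):

```shell
# Encode, then decode: recovers the original sample string
sample='LkyWUKDwKdnhiEcvFe0zRQ9jOJCz7eMS'
encoded=$(printf '%s' "$sample" | base64)
printf '%s' "$encoded" | base64 --decode; echo
```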

Log in with the user admin and the decoded password.

You now have the AWX administration interface. Start adding inventories, importing Ansible roles, and automating your infrastructure and application deployments.


Step 6: Configure Ingress for AWX

If you would like to access AWX using a domain name and SSL, check out our Ingress articles.
