How to fix the problem: “There are no available playbook directories in /var/lib/awx/projects”?
Here is a video tutorial
Below is a fixed playbook that solves the problem with the projects path in the AWX GUI.
With it in place, you can create the /var/lib/awx/projects directory on your host and add subdirectories inside it to separate projects. Whatever you create on the host will automatically appear inside the container in the awx-web pod.
You will find more information on how the whole solution works after the implementation steps.
Implementation
- Create ansible playbook file: awx-install-fixed-projects.yml.
|
|
And put the below content into this file.
|
|
- Run the playbook like below:
|
|
- Open a new terminal and watch logs.
|
|
- Check if pods are created in the awx namespace.
|
|
- Check the service.
|
|
- Get the awx password.
|
|
In addition, you can change the password to your own with the following command:
|
|
- Check the IP address of the host where AWX has been installed.
|
|
- Open it in a browser using the port defined in the awx.yaml file. For example:
|
|
- Uninstall AWX
Create an ansible playbook file for the removal.
|
|
And put the below content into this file.
|
|
Run the playbook like below:
|
|
Details of the PV and PVC defined in the ansible playbook:
This ansible playbook is designed to deploy AWX (the open-source version of ansible Tower) in a Kubernetes cluster, making use of Persistent Volumes (PV) and Persistent Volume Claims (PVC) to manage storage. Let’s break down how PV and PVC are defined and used within this playbook, particularly in the context of the task “Generate PV and PVC resource files.”
Persistent Volume (PV)
A Persistent Volume (PV) in Kubernetes is a piece of storage in the cluster that has been provisioned by an administrator or dynamically provisioned using Storage Classes. It is a cluster resource that persists beyond the lifecycle of any individual pod that uses the PV.
In this playbook, a PV is defined with the following characteristics (a hedged sketch follows the list):
- Name: awx-projects-volume
- Access Modes: ReadWriteOnce, which means the volume can be mounted as read-write by a single node.
- Reclaim Policy: Retain, indicating that the data in the volume is retained even after the PV is released.
- Capacity: Specified by the storage_size variable, set to 2Gi in the playbook variables.
- Storage Class Name: awx-projects-volume. This name links the PV to a specific storage class.
- HostPath: Uses the project_directory variable (/var/lib/awx/projects) for storage, indicating that data is stored on a path on the host.
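Based on these characteristics, the generated PV manifest would look roughly like the sketch below; the literal values stand in for the storage_size and project_directory variables.

```yaml
# Hedged sketch of the PV described above (values substitute the playbook variables)
apiVersion: v1
kind: PersistentVolume
metadata:
  name: awx-projects-volume
spec:
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  capacity:
    storage: 2Gi                      # storage_size variable
  storageClassName: awx-projects-volume
  hostPath:
    path: /var/lib/awx/projects       # project_directory variable
```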
Persistent Volume Claim (PVC)
A Persistent Volume Claim (PVC) is a request for storage by a user. It specifies size and access modes, among other things. A PVC gets matched with an available PV, and then it can be mounted by a pod.
In this playbook, a PVC is defined with the following characteristics (sketched below):
- Name: awx-projects-claim
- Access Modes: ReadWriteOnce, matching the PV.
- Volume Mode: Filesystem, indicating that the volume is intended to be used as a filesystem.
- Resources/Requests/Storage: Also specified by the storage_size variable, set to 2Gi, matching the PV’s capacity.
- Storage Class Name: awx-projects-volume, ensuring it binds with the PV of the same storage class.
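Again as a hedged sketch, the matching PVC would look roughly like this (placing it in the awx namespace is an assumption):

```yaml
# Hedged sketch of the PVC described above
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: awx-projects-claim
  namespace: awx                      # assumption: created in the awx namespace
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 2Gi                    # storage_size variable, matches the PV capacity
  storageClassName: awx-projects-volume
```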
How PV and PVC work together
- PV Creation: First, a Persistent Volume is created with a specified size, storage class, and access mode. It represents a piece of storage in the cluster that is available for use.
- PVC Creation: Then, a Persistent Volume Claim is defined, requesting storage of a certain size and with certain access modes. The PVC’s storage class name matches that of the PV, ensuring they are bound together.
- Binding: Kubernetes matches the PVC to an available PV based on compatibility (size, access modes, and storage class). Once bound, the PVC can be used by a pod.
- Usage in AWX: The AWX deployment, defined in the awx.yaml file, specifies that projects should persist data using an existing claim (projects_existing_claim: awx-projects-claim). This means AWX will use the storage defined by the PVC (and by extension, the PV) for storing project data, as sketched below.
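For reference, the relevant part of the AWX custom resource in awx.yaml would look roughly like this; the instance name and service_type are illustrative assumptions, while the two projects_* settings are the ones the playbook relies on.

```yaml
# Hedged sketch of the projects-related settings in awx.yaml
apiVersion: awx.ansible.com/v1beta1
kind: AWX
metadata:
  name: awx                           # assumption: instance named "awx"
  namespace: awx
spec:
  service_type: nodeport              # assumption: exposed on a node port for browser access
  projects_persistence: true
  projects_existing_claim: awx-projects-claim
```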
This setup ensures that AWX has a dedicated, persistent storage space for its projects, independent of pod lifecycles. The use of hostPath for the PV means that data will be stored directly on a path on the host machine running Kubernetes, which is suitable for single-node clusters or for development/testing purposes but might need to be re-evaluated for multi-node or production environments for resilience and availability.
Structure of the ansible playbook for AWX installation
The playbook is effective and aligns well with best practices. It simplifies the deployment process by using ansible’s built-in modules where possible and directly executing shell commands where necessary. Here’s a breakdown of the key aspects of the playbook:
- Downloading and Moving Kustomize: The ansible.builtin.shell module is used to ensure Kustomize is downloaded and moved to /usr/local/bin if it doesn’t already exist there. This is a crucial step for ensuring that Kustomize is available for subsequent tasks.
- Ensuring Namespace Existence: The use of kubectl create namespace --dry-run=client -o yaml | kubectl apply -f - is a smart approach to ensure idempotence. It guarantees that the namespace will be created if it doesn’t exist, without failing the playbook if the namespace already exists.
- Generating AWX Resource File: The ansible.builtin.copy module is used to create the awx.yaml file, which is a clean and efficient way to handle file creation within ansible playbooks. This approach avoids potential issues with multiline strings in shell commands.
- Fetching the Latest Release Tag of AWX Operator: This task dynamically fetches the latest release tag of the AWX Operator from GitHub, ensuring that the deployment always uses the most recent version of the AWX Operator. Registering the output for use in subsequent tasks is an excellent practice.
- Creating kustomization.yaml: Again, using ansible.builtin.copy to generate this file based on the latest release tag, and including the necessary resources and images configurations, ensures that the Kustomize setup is both current and customized for the deployment.
- Applying Kustomize Configuration: Finally, applying the Kustomize configuration with kubectl apply -f - completes the deployment by creating or updating resources in the Kubernetes cluster according to the definitions in kustomization.yaml and the associated resource files (a hedged sketch of a couple of these tasks follows the list).
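To make a couple of these steps concrete, here is a hedged sketch rather than the exact playbook: the working directory /tmp/awx and the variable name awx_operator_tag are assumptions.

```yaml
# Hedged sketch of the namespace and Kustomize tasks (paths and variable names assumed)
- name: Ensure the awx namespace exists (idempotent)
  ansible.builtin.shell: |
    kubectl create namespace awx --dry-run=client -o yaml | kubectl apply -f -

- name: Generate kustomization.yaml for the AWX Operator
  ansible.builtin.copy:
    dest: /tmp/awx/kustomization.yaml          # assumed working directory
    content: |
      apiVersion: kustomize.config.k8s.io/v1beta1
      kind: Kustomization
      resources:
        - github.com/ansible/awx-operator/config/default?ref={{ awx_operator_tag }}
        - awx.yaml
      images:
        - name: quay.io/ansible/awx-operator
          newTag: "{{ awx_operator_tag }}"
      namespace: awx

- name: Apply the Kustomize configuration
  ansible.builtin.shell: kustomize build /tmp/awx | kubectl apply -f -
```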
This playbook is well-structured and should effectively deploy AWX in your Kubernetes environment. By automating the deployment process, it not only saves time but also reduces the potential for human error.
Structure of the ansible playbook for AWX removal
I have crafted a comprehensive ansible playbook to remove various resources from the awx namespace in a Kubernetes cluster. This approach explicitly targets specific resources for deletion, scales down deployments, and handles both Persistent Volume Claims (PVCs) and Persistent Volumes (PVs), before finally deleting the namespace itself. Utilizing ignore_errors: yes ensures that the playbook continues executing even if some commands fail, which is useful in scenarios where some resources might not exist or have already been deleted.
Here are a few insights and suggestions for this playbook:
- Scaling Down Deployments: The step to scale down all deployments to zero replicas is a thoughtful approach. It gracefully stops all pods managed by deployments in the awx namespace without immediately removing the deployment configurations. This can be useful for debugging or cleanup operations before complete resource deletion.
- Explicit Resource Deletion: By explicitly deleting deployments, statefulsets, and services by name, you ensure that these resources are removed. This is particularly important for resources that might not be automatically deleted by removing the namespace, especially if there are finalizers or other mechanisms delaying their cleanup.
- Dynamic Resource Listing for PVCs and PVs: Using kubectl get with custom output columns and no headers to dynamically list and then delete PVCs and PVs is a flexible way to handle dynamic resource names. This ensures that the playbook adapts to the resources present at runtime (see the sketch after this list).
- Namespace Deletion as Final Step: Deleting the namespace as the final step is appropriate since it attempts to clean up all remaining resources within the namespace. However, since explicit deletion steps already cover many resources, this acts as a final catch-all to ensure the namespace and any overlooked resources are removed.
- Consideration for Persistent Volumes: The approach to deleting PVs needs caution. Since PVs are cluster-scoped resources (not limited to a specific namespace), deleting them based on a namespace filter (-n awx) might not correctly identify the PVs you intend to delete. Ensure the selection criteria accurately target the PVs associated with the AWX deployment, possibly by labels or naming conventions.
- Error Handling: While ignore_errors: yes helps ensure the playbook runs to completion, it’s essential to review the output carefully, especially in production environments, to understand which steps failed and why. This can help identify any underlying issues that need attention.
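A hedged sketch of a few such removal tasks follows; it is not the exact playbook, and the PV name and the xargs-based deletion are assumptions.

```yaml
# Hedged sketch of the cleanup tasks (resource names assumed)
- name: Scale down all deployments in the awx namespace
  ansible.builtin.shell: kubectl scale deployment --all --replicas=0 -n awx
  ignore_errors: yes

- name: Delete all PVCs in the awx namespace
  ansible.builtin.shell: |
    kubectl get pvc -n awx -o custom-columns=:metadata.name --no-headers \
      | xargs --no-run-if-empty kubectl delete pvc -n awx
  ignore_errors: yes

- name: Delete the AWX projects PV (cluster-scoped, so no namespace filter)
  ansible.builtin.shell: kubectl delete pv awx-projects-volume
  ignore_errors: yes

- name: Delete the awx namespace
  ansible.builtin.shell: kubectl delete namespace awx
  ignore_errors: yes
```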
This playbook demonstrates how to manage Kubernetes resources via ansible and highlights the importance of careful resource management and cleanup in Kubernetes environments. Remember, while ignore_errors: yes is useful in cleanup scenarios, it should be used judiciously in other contexts to avoid masking important failures.
Permissions
When I modified the owner and group with sudo chown -R adrian:adrian projects inside the /var/lib/awx directory on the host, permissions changed like below:
- on host:
|
|
- on pod, inside container:
|
|
The behavior you’re observing with the permissions and ownership mappings between your host and the container is a typical scenario when using Kubernetes volumes, especially when the Persistent Volume (PV) or Persistent Volume Claim (PVC) is mounted into a container. Here’s an explanation of what’s happening and why:
- UID/GID Mapping: When you change the ownership of the /var/lib/awx/projects directory on the host to adrian:adrian, you’re applying this change based on the host’s user and group IDs. Inside the container, the awx user is mapped to UID 1000, which matches the UID of adrian on the host. However, Kubernetes and the underlying container runtime don’t automatically translate the group name but rather use the numeric GID directly. That’s why, inside the container, files and directories show as owned by awx (UID 1000, matching the host’s adrian UID) and group 1000, even though there’s no group with that name explicitly defined in the container.
- Ownership Display: The ls -lh command shows user and group names based on the /etc/passwd and /etc/group files inside the container for translating UID/GID to names. Since there’s no entry for GID 1000 in /etc/group inside the container, it displays the numeric GID instead of a group name.
- Permission Considerations: The permissions (drwxrwxr-x) indicate that the user (awx inside the container, adrian on the host) and group members have read, write, and execute permissions, while others have only read and execute permissions. This setup typically works well unless specific group permissions are required for operation, which might necessitate ensuring that the group names and GIDs align between the host and containers, or adjusting permissions accordingly.
- Managing Permissions: If you need the container to recognize the group name or require specific group permissions, you have a few options (a sketch of the fsGroup approach follows this list):
  - Align GIDs: Ensure that the adrian group on the host matches a group inside the container with the same GID. This might involve creating a new group inside the container with the same GID as the adrian group on the host.
  - Security Context: Use Kubernetes security contexts to set the fsGroup in the pod specification. This makes Kubernetes automatically adjust file permissions for the volume so that files are accessible by the specified GID. However, this doesn’t change the displayed owner/group names but ensures that the process running inside the container has the necessary access.
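A minimal sketch of the fsGroup approach, assuming GID 1000 and the awx-projects-claim PVC from earlier; for an operator-managed AWX deployment you would set this through the operator’s spec rather than editing pods by hand.

```yaml
# Hypothetical pod illustrating fsGroup on a mounted volume
apiVersion: v1
kind: Pod
metadata:
  name: projects-fsgroup-demo        # hypothetical name
  namespace: awx
spec:
  securityContext:
    fsGroup: 1000                    # files on mounted volumes become accessible to GID 1000
  containers:
    - name: app
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
        - name: projects
          mountPath: /var/lib/awx/projects
  volumes:
    - name: projects
      persistentVolumeClaim:
        claimName: awx-projects-claim
```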
In summary, the observed behavior is expected due to how user/group IDs are mapped and displayed between the host and containers. Unless specific group-based permissions are causing operational issues, this setup should generally not cause problems for most applications. If operational issues arise due to permission mismatches, consider aligning UIDs/GIDs or using Kubernetes security contexts to manage access controls more granularly.
Yes, you can modify the path /opt/awx/projects to any other path like /var/lib/awx/projects in your Persistent Volume (PV) definition, depending on where you want to store your AWX project files on the host machine. The path you choose must exist on the host and have the appropriate permissions set so that the Kubernetes pod can access and use it.
How Does HostPath Volume Work?
When you use a hostPath volume in Kubernetes, it mounts a file or directory from the host node’s filesystem into your pod. If you put something into /var/lib/awx/projects on your host, it appears inside the pod because the pod’s filesystem is directly mapped to that host directory. This is particularly useful for persisting data beyond the lifecycle of a pod.
Permission Denied Inside Pod
The “Permission Denied” issue when trying to create a directory or file inside /var/lib/awx/projects from within the pod can occur for several reasons:
- User Permissions: The AWX container may run as a non-root user by default for security reasons. This user may not have write permissions to the mounted directory if the directory’s permissions on the host are restricted. Kubernetes allows you to specify the runAsUser and fsGroup fields in the pod’s security context to control the UID and GID used inside the container, which can affect access to mounted volumes (see the sketch after this list).
- Read-only Mounts: If the volume is mounted as read-only, you won’t be able to write to it. This is controlled by the readOnly field in the volume mount definition. However, for hostPath volumes and the use case you described, it’s unlikely this is set to true.
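Complementing the fsGroup sketch shown earlier, the runAsUser and readOnly fields sit in the container’s securityContext and volume mount respectively; the UID below is an illustrative assumption, and the hostPath volume mirrors the setup discussed above.

```yaml
# Hypothetical pod illustrating runAsUser and a (writable) hostPath mount
apiVersion: v1
kind: Pod
metadata:
  name: projects-perms-demo          # hypothetical name
  namespace: awx
spec:
  containers:
    - name: app
      image: busybox
      command: ["sleep", "3600"]
      securityContext:
        runAsUser: 1000              # run the process as UID 1000 instead of root
      volumeMounts:
        - name: projects
          mountPath: /var/lib/awx/projects
          readOnly: false            # set to true and any write from the pod fails
  volumes:
    - name: projects
      hostPath:
        path: /var/lib/awx/projects
        type: DirectoryOrCreate
```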
Security and Root Access in Containers
Containers are designed to be lightweight and secure. Running processes as root inside a container is considered a security risk, which is why many container images are configured to run as a non-root user. This practice limits the potential impact of a security breach within a container.
- Limited User: Even if the user inside the container has a UID of 0 (which is typically associated with the root user), Kubernetes and Docker can restrict its capabilities. This means that, despite having a root-like UID, the process doesn’t have full root privileges on the host system or even within the container.
- No systemd: Most containers don’t run systemd or any init system; they’re designed to run a single application or process in the foreground. This approach reduces the container’s footprint and complexity and aligns with the container philosophy of “one process per container.”
- Gaining Root Access: Gaining root access inside a container that doesn’t provide it by default can be challenging and is generally discouraged. Containers are meant to provide a controlled environment for running applications, and modifying this environment can introduce security risks. If you need root access to perform certain operations, consider building a custom container image with the necessary tools and permissions, or adjusting the container’s security context as per your cluster’s security policies.
Remember, any changes to container permissions should be carefully considered and tested to avoid compromising security.
SELinux
I found a way: it turns out that when you use the hostPath type of volume instead of local, SELinux context relabeling does not happen. I am still not sure how (or why) an unprivileged process can access files labeled differently than the process itself, but this is what I needed for my cron jobs. The SELinux relabel only happens for volumes mounted via CSI, so a hostPath volume works.
The behavior I’ve observed with hostPath volumes and SELinux context not being relabeled, and yet allowing unprivileged processes to access these volumes, is indeed intriguing and merits a closer look into how Kubernetes, Docker, and SELinux interact.
Kubernetes hostPath Volumes and SELinux
- hostPath Volume Behavior: When you use a hostPath volume in Kubernetes, it allows a pod to mount a file or directory from the host node’s filesystem into the pod. This is a straightforward and direct method of exposing host files to a pod.
- SELinux Context Relabeling: SELinux (Security-Enhanced Linux) provides a mechanism to enforce mandatory access controls on processes and files. When files or directories are accessed or shared across different security contexts (e.g., between host and container), SELinux can enforce policies that restrict this access unless the objects are properly labeled.
- No Automatic Relabeling with hostPath: Typically, automatic relabeling (adjusting SELinux labels of volumes to match the SELinux context of the container process) is a feature that enhances security by ensuring that only authorized processes can access certain data. However, hostPath volumes do not automatically trigger SELinux relabeling. This is by design, as hostPath volumes are meant to provide direct access to specific areas of the host filesystem, and automatic relabeling could inadvertently alter the host system’s security posture.
Why Does It Work?
The ability for an unprivileged process to access files with different SELinux labels through a hostPath volume, without explicit relabeling, essentially comes down to how SELinux policies are configured on your system. There are a few possibilities:
- Permissive Mode: If SELinux is in permissive mode on the host, it would log policy violations (such as an unprivileged container accessing host files with different SELinux contexts) but would not enforce them, allowing the operation to proceed.
- Targeted Policies: In targeted SELinux mode, most enforcement is focused on protecting specific services rather than the entire system. It’s possible the policies applied to your Kubernetes, Docker, or container runtime processes do not strictly enforce SELinux context restrictions for hostPath volumes.
- Explicit Policy Allowances: There might be explicit SELinux policy allowances (booleans or rules) that permit such access under certain conditions, recognizing that certain Kubernetes operations require flexibility in accessing host resources.
Is It a Fault?
It’s not so much a fault of SELinux as it is a reflection of the balance between security and operational flexibility. SELinux policies can be finely tuned, and Kubernetes leverages this to enable certain use cases (like yours with cron jobs needing access to specific host paths) while still providing mechanisms to enforce security boundaries. The key is ensuring that these allowances do not inadvertently open up security vulnerabilities within your environment.
Recommendations
- Audit SELinux Policies: Ensure that the SELinux policies applied are appropriate for your security requirements and do not inadvertently expose sensitive parts of the host filesystem to unprivileged containers.
- Use with Caution: Given the direct access hostPath provides to the host filesystem, use it sparingly and only when absolutely necessary, as it can potentially create security risks if not managed carefully.
- Consider Kubernetes Security Best Practices: Always follow Kubernetes security best practices, including using least privilege for pod access to host resources and considering alternative volume types or mechanisms (like PVCs and PVs) that provide better isolation and security.
In summary, the behavior you’re seeing with hostPath and SELinux is by design, and whether it’s suitable or not depends on your specific security posture and requirements. It’s crucial to understand the implications and manage the associated risks appropriately.
Question: Do you know if and how I can disable SELinux context relabeling on volumes (local-storage) for a privileged container? In Podman it is --security-opt label:disable, but I don’t know how to do the same on k8s (I am using k3s).
Answer: In Kubernetes, specifically when using distributions like k3s, direct control over SELinux context relabeling at the container runtime level (similar to what you can do with Podman’s --security-opt label:disable) is not exposed through standard Kubernetes Pod specifications. This is because Kubernetes abstracts much of the container runtime specifics to ensure portability and consistency across different environments and container runtimes. However, there are a couple of approaches you might consider to achieve a similar effect within a Kubernetes (or k3s) environment, especially when dealing with local storage and privileged containers.
- Using Security Context in Pod Specifications
While Kubernetes does not directly offer an option to disable SELinux context relabeling for volumes, you can specify certain security contexts at the pod or container level. For a privileged container, your pod specification might include a securityContext that looks something like this:
|
|
The spc_t SELinux type is used by super privileged containers and might provide the level of access you’re looking for, although it’s not the same as disabling SELinux labeling entirely. It essentially allows the container to run with almost no SELinux restrictions, but this approach requires that your system and SELinux policies support the spc_t type.
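As an illustration, such a securityContext might look roughly like the sketch below (a minimal hypothetical pod; whether spc_t is permitted depends on the host’s SELinux policy):

```yaml
# Hypothetical privileged pod running with the spc_t SELinux type
apiVersion: v1
kind: Pod
metadata:
  name: spc-demo                     # hypothetical name
spec:
  containers:
    - name: app
      image: busybox
      command: ["sleep", "3600"]
      securityContext:
        privileged: true
        seLinuxOptions:
          type: spc_t                # "super privileged container" type, effectively unconfined
```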
- Modifying SELinux Policies on the Host
Since Kubernetes and k3s don’t expose a way to disable SELinux relabeling for volumes directly, another approach is to manage SELinux policies on the host system to allow the access patterns you need without relabeling. This involves:
- Identifying the SELinux contexts used by your k3s containers and the local storage volumes.
- Creating custom SELinux policies that allow the required access between these contexts.
This approach requires a good understanding of SELinux policy management and might involve tools like audit2allow to generate custom policy modules based on audit logs of denied operations.
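In the spirit of this playbook-driven setup, such a step could be sketched as ansible tasks; the module name k3s_local_storage is a hypothetical placeholder, and the tasks assume auditd plus the policycoreutils tools (audit2allow, semodule) are present on the host.

```yaml
# Hedged sketch: build and load a custom SELinux module from recent AVC denials
- name: Generate a policy module from recent denials (hypothetical module name)
  ansible.builtin.shell: |
    ausearch -m avc -ts recent | audit2allow -M k3s_local_storage
  args:
    chdir: /tmp

- name: Install the generated policy module
  ansible.builtin.shell: semodule -i /tmp/k3s_local_storage.pp
```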
- Utilizing hostPath Carefully
As I’ve noticed with hostPath, it doesn’t perform SELinux relabeling, which might serve your needs in certain cases. However, using hostPath requires careful consideration due to the security implications of exposing host filesystems to pods.
Limitations and Considerations
- Security Implications: Running containers in privileged mode and altering SELinux policies can have significant security implications. It’s essential to understand the risks and apply such configurations only when absolutely necessary, following the principle of least privilege.
- Compatibility and Support: Any modifications to SELinux policies or Kubernetes security contexts should be tested thoroughly to ensure they don’t inadvertently break other aspects of your cluster’s operation or security.
Conclusion
While Kubernetes doesn’t offer a direct equivalent to Podman’s --security-opt label:disable, through careful management of pod security contexts and SELinux policies on the host, you can achieve similar outcomes. Always balance the operational requirements with security best practices to maintain the integrity and security of your cluster.
Question: How exactly is hostPath technically different under the hood from a local storage volume, and how does it differ in the SELinux context we talked about previously?
Answer: The differences between hostPath volumes and local storage volumes in Kubernetes, especially in the context of SELinux, involve both their intended use cases and how they interact with the underlying system’s security policies. Here’s a deeper dive into the technical distinctions:
hostPath Volumes
- Direct Access: hostPath volumes provide direct access to the filesystem of the node on which the pod is running. When you specify a hostPath volume, you’re essentially telling Kubernetes to mount a specific file or directory from the host into the pod. This bypasses the more managed and abstracted storage provisioning processes of Kubernetes.
- SELinux Context: When using hostPath, the SELinux context of the files or directories being accessed is determined by their existing labels on the host filesystem. Kubernetes does not perform any automatic SELinux relabeling of hostPath volumes. This means the pod must have the appropriate SELinux context to interact with the data as it exists on the host, or the operation must be permitted by the current SELinux policy. This behavior can lead to access denials if the pod’s SELinux context doesn’t allow for interaction with the hostPath’s SELinux context.
Local Storage Volumes
- Dynamic Provisioning: Local Persistent Volumes (LPVs) are intended to provide a more durable and portable way to use local storage in Kubernetes. Unlike hostPath, which directly specifies a path on the host, local volumes are provisioned and managed through the Persistent Volume (PV) and Persistent Volume Claim (PVC) mechanisms. This allows for dynamic provisioning and more fine-grained control over storage properties, including capacity, access modes, and, to some extent, security settings.
- SELinux Context Relabeling: For dynamically provisioned volumes, including those provisioned as local storage through PVs, Kubernetes supports automatic SELinux relabeling based on the pod’s SELinux context. This relabeling ensures that the mounted volume is accessible to the pod under SELinux enforcement, aligning the volume’s SELinux label with the pod’s security requirements. This automatic relabeling is part of Kubernetes’ approach to handling volume security in a dynamic, multi-tenant environment.
Key Differences in Context of SELinux
- Automatic Relabeling: The most significant difference under the hood, in the context of SELinux, is that hostPath volumes do not undergo automatic SELinux context relabeling, while local storage volumes provisioned through the PV/PVC mechanism can be automatically relabeled to match the pod’s SELinux context. This distinction is crucial for security and access control in SELinux-enforced environments.
- Use Case and Flexibility vs. Security: hostPath is more flexible but less secure, as it allows pods to directly access and potentially modify host filesystems without the mediation of Kubernetes’ storage abstractions. Local storage volumes, meanwhile, are designed to integrate with Kubernetes’ storage and security models, providing a balance between utilizing local storage and maintaining security and portability.
Conclusion
The choice between hostPath and local storage volumes in Kubernetes should be informed by the specific requirements of your application and the security posture of your environment. hostPath offers direct, unmediated access to host filesystems, suitable for specific cases where such access is absolutely necessary. In contrast, local storage volumes provide a more managed, secure way to utilize local disk resources, aligning with Kubernetes’ dynamic provisioning and access control mechanisms, including SELinux policy enforcement and relabeling.
Yes, your understanding is correct. The conversation about hostPath volumes, SELinux context, and their interaction is indeed crucial, especially in environments where SELinux is enabled and enforcing. Let me summarize and expand on how this applies to your scenario with AWX running on Kubernetes, and the broader implications of using hostPath for volumes in such contexts.
SELinux and Kubernetes Volumes:
- SELinux Contexts: SELinux uses contexts to enforce security policies on files and processes. The context of a process determines its permissions for interacting with files or other processes. In a Kubernetes environment, these contexts also apply to containers and the volumes they access.
- hostPath Volumes: When you use a hostPath volume in Kubernetes, the volume is mounted into the pod directly from the host’s filesystem without changing its SELinux context. This means that for the pod to access the hostPath volume, the SELinux policy must allow the container’s SELinux context to interact with the host’s filesystem context where the hostPath points.
Interaction with SELinux:
- Direct Access with hostPath: The direct access provided by hostPath can be a double-edged sword. It’s beneficial for use cases where specific files or directories on the host need to be available to a pod. However, this can introduce security risks if not carefully managed, especially in SELinux-enforced environments, because it bypasses the Kubernetes abstraction layer designed to manage access controls and resource allocation more securely.
- Why Access Works in Some Cases: In environments where SELinux is set to permissive mode or where policies are configured to allow the specific access patterns required by your Kubernetes pods, you might not run into access denials. This flexible access is not a fault of SELinux but rather an indication of SELinux policies being correctly configured to allow the necessary interactions between your pods and the host system.
- Permission Denied Issue: The “Permission Denied” issue arises when the SELinux context of the directory or file on the host does not permit access by the SELinux context under which the container runs. Even if the container runs as a user with UID 0 (root), SELinux policies can still restrict access based on contexts.
Security Considerations and Best Practices:
- Minimizing Risk: To minimize security risks, it’s recommended to use Kubernetes-managed volumes (like PVCs) that are designed to work seamlessly with Kubernetes’ security model, including support for automatic SELinux context relabeling when needed.
- SELinux Policies: If hostPath must be used, ensure that SELinux policies on the host are carefully crafted to allow only the necessary access by Kubernetes pods, without opening up broader permissions that could be exploited.
- Container Security: The fact that containers run with limited permissions and without a full init system like systemd is part of the container security model. Containers are meant to be lightweight and run a single application or process in a secure and isolated manner. Gaining root access within a container or running systemd inside a container goes against these security practices and is generally discouraged.
In summary, when working in environments with SELinux enabled, understanding the interaction between SELinux contexts and Kubernetes volumes is crucial for maintaining security while ensuring the necessary access for your applications. Configuring SELinux policies and Kubernetes security contexts correctly can help achieve the desired functionality without compromising on security.
Summary
Let’s summarize the problem.
- Issue with AWX projects directory: I noticed that creating a project inside AWX was not possible because the project directory did not exist in the AWX web pod. To work around this, I considered manually creating the project directory by accessing the pod via kubectl exec.
- Persistent solution with Persistent Volumes (PVs) and Persistent Volume Claims (PVCs): Recognizing that manual changes would be lost if the pod were deleted, I explored a more durable solution involving modifying an existing ansible playbook. This solution entailed creating PV and PVC resources to ensure that the AWX project directory persists across pod restarts and deletions.
- Ansible playbook modifications: I aimed to add tasks to the ansible playbook to create PV and PVC resources. Additionally, I wanted to modify the awx.yaml file within the playbook to include projects_persistence: true and projects_existing_claim: awx-projects-claim, ensuring AWX uses the PVC for project storage.
- SELinux and hostPath volumes: The discussion also touched upon how hostPath volumes work with SELinux, emphasizing the importance of SELinux contexts and permissions when using hostPath volumes in Kubernetes. It was noted that hostPath volumes do not automatically trigger SELinux context relabeling, which could lead to permission issues unless the SELinux policies are appropriately configured.
- Final ansible playbook solution: I shared a final version of my ansible playbook designed to install and remove AWX and its associated resources from Kubernetes. The removal playbook includes steps to delete specific deployments, service accounts, role bindings, and roles, and to scale down deployments before removing PVCs, PVs, and ultimately the namespace. The playbook employs ignore_errors: yes to ensure execution continues even if some resources are already deleted or not found.
- Summary: The discussion provided insights into managing Kubernetes resources through ansible, especially for maintaining persistent storage with AWX deployments and handling Kubernetes resources in a way that respects SELinux policies. The final playbook offers a comprehensive approach to cleanly removing AWX and its resources from a Kubernetes cluster, emphasizing meticulous resource management and cleanup practices.