Using the access information and credentials provided, make sure you can log in to your automation controller, private automation hub, and VS Code Server.
Let’s start. Since the documentation for this lab is spread across several places, we’ll provide some additional instructions along the way.
In this lab your automation controller was already configured during installation to fetch collections from PAH and Ansible Galaxy. The first task is to disable direct retrieval of collections from Ansible Galaxy so that only collections from PAH can be used.
To do this, open the `default` Organization in automation controller and click **Edit**, then remove the Ansible Galaxy credential so only the private automation hub credentials remain. You could add more credentials here; the order of these credentials sets the precedence for the sync and lookup of content.
It’s important to note that the PAH credentials are added automatically when automation controller and PAH are installed in one installer run. If you install PAH separately, you have to configure the credentials yourself!
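If you do have to create the credential manually, it is a credential of type "Ansible Galaxy/Automation Hub API Token" attached to the Organization. As a rough sketch (host, repository path, and token are placeholders for your environment, not values from this lab):

```yaml
# Sketch of a manually created Galaxy credential for PAH (values are placeholders)
credential_type: Ansible Galaxy/Automation Hub API Token
galaxy_server_url: https://<pah-host>/api/galaxy/content/published/   # repository endpoint on your PAH
auth_server_url: ""                                                    # not needed for PAH
api_token: <API token generated on the API token page of your PAH>
```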
Before we configure content synchronization, we want to add a demo project:
https://github.com/ansible-learnfest/ee-flow.git
The project repository contains a `collections/requirements.yml` file. You will notice that this project fails to sync (click on the **Jobs** menu on the left) with the error “ERROR! Failed to resolve the requested dependencies map. Could not satisfy the following requirements: containers.podman”. This is because the `requirements.yml` file lists a dependency on a collection that is not yet available to your automation controller (we disabled Galaxy and haven’t synced anything to PAH yet).
To solve this issue, we have to configure PAH to sync the necessary collections and configure automation controller to use the content from your private automation hub. Automation controller is already configured, but we haven’t synced any content yet.
Most of this is well documented in the Managing Red Hat Certified and Ansible Galaxy collections in automation hub documentation.
If you have never modified your sync settings in automation hub before, all collections will be synchronized. To speed up the sync, we recommend disabling as many as you can, but not all of them.
If you do not have organization admin privileges, you will not be able to toggle the sync button on and off. In this case, reach out to your instructor, and an API token and a sync URL for the next task will be provided to you.
Following the documentation, configure and sync the `rh-certified` remote.
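If you were given an API token and sync URL by your instructor (or fetched your own from console.redhat.com), the remote configuration typically looks like the following sketch; the URL and auth URL shown are the usual values for Red Hat’s hosted automation hub, and the token is a placeholder:

```yaml
# Typical rh-certified remote settings (sketch; adjust to the values you were given)
name: rh-certified
url: https://console.redhat.com/api/automation-hub/content/published/
auth_url: https://sso.redhat.com/auth/realms/redhat-external/protocol/openid-connect/token
token: <offline token from console.redhat.com>
```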
You can navigate to Task Management in the menu on the left to see your sync progressing.
Galaxy is configured as the `community` remote out of the box. Follow the instructions to configure the synchronization.
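Because the remote already points at Ansible Galaxy, the main thing you add is a requirements file that limits the sync to the collections you actually need. A sketch of the relevant settings, assuming the defaults (the requirements file itself is created in the next step):

```yaml
# Typical community remote settings (sketch)
name: community
url: https://galaxy.ansible.com/api/
# YAML requirements: upload the requirements.yml created below to limit the sync
```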
Create a `requirements.yml` file pointing to the `containers.podman` collection:

```yaml
collections:
  # Install a collection from Ansible Galaxy.
  - name: containers.podman
```
In the `community` remote, upload the `requirements.yml` file from your local machine, then start the sync of the `community` remote.

Verify the sync of the collections in Collections -> Collections and switch the repository filter with the dropdown at the top. There should be a lot of content in the Red Hat Certified repo and one collection in the Community repo. The ‘published’ filter will not find anything, since we haven’t uploaded any collections we created ourselves.
You can navigate to Task Management in the menu on the left to see your sync progressing.
Now check that automation controller can actually use the content from your PAH. Let’s first sync our project again; the error message should disappear, because automation controller can now download and install the `containers.podman` collection from your private automation hub.
Before we can test with an actual Playbook, we have to create an inventory in automation controller. To create a dynamic inventory for AWS, we first have to create the necessary credentials.
By default, the EC2 dynamic inventory plugin will use the FQDN of the instance as the host name. These FQDNs are very long and not very useful. To make your inventory nice and clean, add the following settings to your inventory source variables:
```yaml
hostnames:
  - tag:Name
compose:
  ansible_host: public_dns_name
```
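For context, `hostnames` and `compose` are options of the `amazon.aws.aws_ec2` inventory plugin that controller uses for Amazon EC2 sources. A slightly fuller sketch of inventory source variables (the region and tag filter are illustrative values, not part of the lab instructions) could look like this:

```yaml
# Example aws_ec2 inventory source variables (region and filter values are illustrative)
regions:
  - us-east-1
filters:
  tag:Environment: learnfest      # only return the lab instances
hostnames:
  - tag:Name                      # use the Name tag instead of the long FQDN
compose:
  ansible_host: public_dns_name   # still connect via the public DNS name
```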
After an inventory sync, you should see three nodes (node1 to node3) in the Hosts menu (and localhost from the Demo Inventory).
For a proper end-to-end test, let’s create a Job Template that uses the `containers.podman` collection, which, by the way, is not part of any of the included Execution Environments:
Sync the project you created earlier again and check that it runs successfully. You should notice from the job output that the task which installs collections from the `requirements.yml` is now succeeding. You can even see in the JSON output that controller installed the collection from your private automation hub.
Create a new Job Template:
- Inventory: `LearnFest Inventory`
- Playbook: `deploy-container.yml` (a sketch of such a playbook follows this list)
- Credentials: `LearnFest Credentials`
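The actual `deploy-container.yml` comes from the ee-flow repository; as a rough idea of what a playbook using the `containers.podman` collection looks like, here is a minimal sketch (host, image, ports, and task details are assumptions, not the real playbook):

```yaml
---
- name: Deploy a small website in an httpd container
  hosts: node1
  become: true
  tasks:
    - name: Start the httpd container with podman
      containers.podman.podman_container:
        name: learnfest-web
        image: docker.io/library/httpd:2.4
        state: started
        publish:
          - "80:80"
```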
Launch the Template
It should now run and deploy an `httpd` container on `node1` that hosts a small website. Test it from the terminal in VS Code Server:
```bash
# You can find the FQDN of the instance in your automation controller inventory
# under Hosts; search for the public_dns_name.
$ curl <node 1 FQDN>
Welcome to Ansible LearnFest!
```
So let’s recap what happened:
- Your Organization (`default`) is configured in a way that it can only download Collections from your private automation hub.
- Your private automation hub synced the `containers.podman` collection from Ansible Galaxy, and automation controller installed it from there during the project sync.

Since this collection is not part of the Execution Environment the Playbook uses, how did it work? In this case, it was dynamically “added” to the Execution Environment at runtime. This behavior already existed in Ansible Tower 3.8, and it still works in automation controller. This means you only have to build your own execution environment if your collection has additional Python or package dependencies. You can double-check by looking at the details of the “source control update” job of your project and clicking on the “fetch galaxy collections from collections/requirements” task.
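If a collection did require extra Python packages or system libraries, you would bake it into a custom execution environment with ansible-builder instead. A minimal sketch of an `execution-environment.yml` using the ansible-builder version 3 schema (the base image and file names are examples) looks roughly like this:

```yaml
---
version: 3
images:
  base_image:
    name: registry.redhat.io/ansible-automation-platform-24/ee-minimal-rhel8:latest  # example base image
dependencies:
  galaxy: requirements.yml    # collections to bake into the image
  python: requirements.txt    # extra Python packages those collections need
  system: bindep.txt          # extra system (RPM) packages
```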
You should now be able to manually configure private automation hub to synchronize content from Red Hat’s automation hub and Ansible Galaxy.
You should also better understand that, although it is beneficial to create custom execution environments, it is not always necessary, and automation controller can still load and install collections at runtime.