In recent releases (since the Pike release) we've made some major changes to the TripleO architecture - we make more use of Ansible "under the hood", and we now support deploying containerized environments. I described some of these architectural changes in a talk at the recent OpenStack Summit in Sydney.
In this post I'd like to provide a refreshed tutorial on a typical debug workflow, focusing primarily on the configuration phase of a typical TripleO deployment, and in particular on interfaces which have changed or are new since my original debugging post.
We'll start by looking at the deploy workflow as a whole and some heat interfaces for diagnosing the nature of the failure, then we'll look at how to debug directly via Ansible and Puppet. In a future post I'll also cover the basics of debugging containerized deployments.
The TripleO deploy workflow, overview
A typical TripleO deployment consists of several discrete phases, which are run in order:
Provisioning of the nodes
- A "plan" is created (heat templates and other files are uploaded to Swift running on the undercloud)
- Some validation checks are performed by Mistral/Heat, then a Heat stack create is started (by Mistral on the undercloud)
- Heat creates some groups of nodes (one group per TripleO role e.g "Controller"), which results in API calls to Nova
- Nova makes scheduling/placement decisions based on your flavors (which can be different per role), and calls Ironic to provision the baremetal nodes
- The nodes are provisioned by Ironic
This first phase is the provisioning workflow; it is complete when the nodes are reported ACTIVE by Nova (e.g the nodes are provisioned with an OS and running).
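If a failure occurs in this provisioning phase, the node state can be inspected from the undercloud. A quick sketch, using standard python-openstackclient commands (the stackrc path assumes a typical undercloud setup):

```shell
# Run from the undercloud, with the undercloud credentials loaded
source ~/stackrc
openstack server list           # Nova's view - nodes should reach ACTIVE
openstack baremetal node list   # Ironic's view - provisioning state per node
```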
Host preparation
The next step is to configure the nodes in preparation for starting the services, which again has a specific workflow (some optional steps are omitted for clarity):
- The node networking is configured, via the os-net-config tool
- We write hieradata for puppet to the node filesystem (under /etc/puppet/hieradata/*)
- We write some data files to the node filesystem (a puppet manifest for baremetal configuration, and some json files that are used for container configuration)
Service deployment, step-by-step configuration
The final step is to deploy the services, either on the baremetal host or in containers; this consists of several tasks run in a specific order:
- We run puppet on the baremetal host (even in the containerized architecture this is still needed, e.g to configure the docker daemon and a few other things)
- We run "docker-puppet.py" to generate the configuration files for each enabled service (this only happens once, on step 1, for all services)
- We start any containers enabled for this step via the "paunch" tool, which translates some json files into running docker containers, and optionally does some bootstrapping tasks.
- We run docker-puppet.py again (with a different configuration, and only on one node, the "bootstrap host"); this does some bootstrap tasks that are performed via puppet, such as creating keystone users and endpoints after starting the service.
Note that these steps are performed repeatedly with an incrementing step value (e.g step 1, 2, 3, 4, and 5), with the exception of the "docker-puppet.py" config generation which we only need to do once (we just generate the configs for all services regardless of which step they get started in).
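The step loop above can be sketched in shell. This is a conceptual illustration only - the function names here are hypothetical stand-ins, and the real sequencing lives in the ansible playbooks generated by tripleo-heat-templates:

```shell
#!/bin/bash
# Conceptual sketch of the per-step service deployment loop.
# Function names are hypothetical; the real logic is in the generated
# ansible playbooks, not a shell script like this.
generate_configs()    { echo "docker-puppet.py: generate configs for all services"; }
run_host_puppet()     { echo "step $1: puppet apply on the baremetal host"; }
start_containers()    { echo "step $1: paunch starts containers enabled for this step"; }
run_bootstrap_tasks() { echo "step $1: docker-puppet.py bootstrap tasks (bootstrap node only)"; }

generate_configs   # happens only once (during step 1), for all services
for step in 1 2 3 4 5; do
  run_host_puppet "$step"
  start_containers "$step"
  run_bootstrap_tasks "$step"
done
```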
Below is a diagram which illustrates this step-by-step deployment workflow:
[Diagram: TripleO service configuration workflow]
The most common deployment failures occur during this service configuration phase of deployment, so the remainder of this post will primarily focus on debugging failures of the deployment steps.
Debugging first steps - what failed?
Ok something failed during your TripleO deployment, it happens to all of us sometimes! The next step is to understand the root-cause.
My starting point after this is always to run:
openstack stack failures list --long <stackname>
We can tell several things from the output (which has been edited above for brevity); firstly, the name of the failing resource:
- The error was on one of the Controllers (ControllerDeployment)
- The deployment failed during the per-step service configuration phase (the AllNodesDeploySteps part tells us this)
- The failure was during the first step (Step1.0)
With a little more digging we can see exactly which node this failure relates to, e.g we copy the SoftwareDeployment ID from the output above, then run:
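The command looks something like the following (standard heatclient interface; `<deployment-id>` is a placeholder for the ID copied from the failures output):

```shell
openstack software deployment show <deployment-id>
```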
Ok so puppet failed while running via ansible on overcloud-controller-0.
Debugging via Ansible directly
Having identified that the problem occurred during the ansible-driven configuration phase, one option is to re-run the same configuration directly via ansible-playbook, so you can either increase verbosity or potentially modify the tasks to debug the problem.
Since the Queens release, this is actually very easy, using a combination of the new "openstack overcloud config download" command and the tripleo dynamic ansible inventory.
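A sketch of downloading the generated ansible content (flag names may vary slightly by release, and `~/config` is just an arbitrary destination directory):

```shell
# Download the ansible playbooks/tasks generated for the current plan
openstack overcloud config download --name overcloud --config-dir ~/config
ls ~/config
```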
Here we can see there is a "deploy_steps_playbook.yaml", which is the entry point to run the ansible service configuration steps. This runs all the common deployment tasks (as outlined above) as well as any service specific tasks (these end up in task include files in the per-role directories, e.g Controller and Compute in this example).
We can run the playbook again on all nodes with the tripleo-ansible-inventory from tripleo-validations, which is installed by default on the undercloud:
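For example (the inventory script path is where tripleo-validations installs it on a typical undercloud; adjust the config directory to wherever you downloaded the playbooks):

```shell
cd ~/config
ansible-playbook -i /usr/bin/tripleo-ansible-inventory \
    --limit overcloud-controller-0 \
    deploy_steps_playbook.yaml
```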
Here we can see the same error is reproduced directly via ansible, and we made use of the --limit option to only run tasks on the overcloud-controller-0 node. We could also have added --tags to limit the tasks further (see tripleo-heat-templates for which tags are supported).
If the error were ansible related, this would be a good way to debug and test any potential fixes to the ansible tasks, and in the upcoming Rocky release there are plans to switch to this model of deployment by default.
Debugging via Puppet directly
Since this error seems to be puppet related, the next step is to reproduce it on the host (obviously the steps above often yield enough information to identify the puppet error, but let's assume you need to do more detailed debugging directly via puppet):
Firstly we log on to the node, and look at the files in the /var/lib/tripleo-config directory.
The puppet_step_config.pp file is the manifest applied by ansible on the baremetal host
We can debug any puppet host configuration by running puppet apply manually. Note that hiera is used to control the step value; this will be at the same value as the failing step, but it can also be useful sometimes to modify it manually when testing different steps for a particular service during development.
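A sketch of the manual re-run (the hiera command line varies by puppet/hiera version, and --detailed-exitcodes is optional but makes failures easier to spot):

```shell
# Confirm the current step value the manifest will see
hiera -c /etc/puppet/hiera.yaml step

# Re-apply the same manifest that ansible ran on this host
puppet apply --detailed-exitcodes /var/lib/tripleo-config/puppet_step_config.pp
```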
Here we can see the problem is a typo in the /etc/puppet/modules/tripleo/manifests/profile/base/docker.pp file at line 181; I look at the file, fix the problem (ugeas should be augeas), then re-run puppet apply to confirm the fix.
Note that with puppet module fixes you will need to either get the fix into an updated overcloud image, or update the module via deploy artifacts when testing local forks of the modules.
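One option, assuming the upload-puppet-modules helper from tripleo-common is available on your undercloud (flag names may differ by release), is something like:

```shell
# Package local puppet module checkouts as a deploy artifact,
# so the next deploy picks up the forked modules
upload-puppet-modules -d ~/puppet-modules
```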
That's all for today, but in a future post, I will cover the new container architecture, and share some debugging approaches I have found helpful when deployment failures are container related.