In this post, I'm aiming to dig into the next level of template implementation, specifically how a role is implemented behind the provider resource alias used in the top-level template.
I'm only going to cover one role type for now, OS::TripleO::Controller, because the patterns described are directly applicable to all the other role types. I'm also going to focus on the puppet-based implementation (because that's what I'm most familiar with), but again, most of the concepts apply to the element/container/etc based implementations too.
Throughout this post, I'll be referring to templates in the tripleo-heat-templates repo, so if you haven't already, now might be a good time to clone that so you can follow along looking at the templates themselves.
Recap - the controller group definition
So, as described in my previous post, the top-level TripleO heat template defines an OS::Heat::ResourceGroup called "Controller", which contains a group of OS::TripleO::Controller resources.
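As a reminder, the group definition in the top-level template looks roughly like this (a simplified sketch for illustration; the parameter names and exact layout may differ from the current tripleo-heat-templates):

```yaml
# Sketch of the "Controller" group in the top-level template.
# Parameter names here are illustrative, not authoritative.
Controller:
  type: OS::Heat::ResourceGroup
  properties:
    count: {get_param: ControllerCount}
    resource_def:
      # Each member of the group is an OS::TripleO::Controller,
      # which the resource_registry maps to a nested-stack template.
      type: OS::TripleO::Controller
      properties:
        Flavor: {get_param: OvercloudControlFlavor}
        Image: {get_param: controllerImage}
```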
This OS::TripleO::Controller resource type is mapped to another heat template via the resource registry in the heat environment, like this:
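A minimal sketch of the relevant resource_registry entries follows; the template paths are illustrative and depend on which environment file and tripleo-heat-templates checkout you're using:

```yaml
# Sketch of a heat environment file's resource_registry section.
# Only the controller-related mappings are shown; paths are examples.
resource_registry:
  # Maps the alias used in the top-level template to the nested-stack
  # template implementing the controller role (puppet flavour here).
  OS::TripleO::Controller: puppet/controller-puppet.yaml
  # Default "firstboot" hook (a no-op unless a deployer overrides it).
  OS::TripleO::NodeUserData: firstboot/userdata_default.yaml
  # Network configuration applied to each node.
  OS::TripleO::Net::SoftwareConfig: net-config-bridge.yaml
```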
For clarity, I've removed the mappings not related to the controller here. I've also not shown the resources related to configuring the cluster after initial deployment via the ResourceGroup (that will be covered in the next installment! :)
I'm going to take these pieces step by step to show how the first part of the deployment flow works, starting with building one OS::TripleO::Controller.
Initial deployment flow, step by step
Creating an OS::TripleO::Controller resource creates a heat nested stack, using the template defined in the resource_registry.
The deployment flow will be familiar to anyone who has tried out Heat SoftwareConfig resources, as I covered in a previous post:
The deployment sequence looks like this:
- An OS::TripleO::NodeUserData resource is created. By default this does nothing, but it provides a hook where deployers can easily plug in site-specific "firstboot" configuration, e.g. some special cloud-config to pass to cloud-init, or a script to run (more on this in a future post).
- We create an OS::Nova::Server resource (confusingly also called "Controller", the same as the ResourceGroup in the parent template..), using the flavor and image passed in to the template via parameters. Typically the "baremetal" flavor will be used, configured so the deployment happens via Ironic to enable deployment to baremetal servers.
- An OS::TripleO::SoftwareDeployment is created, which applies an OS::TripleO::Net::SoftwareConfig SoftwareConfig resource to the server. As the names indicate, these abstractions configure the network on the node, using exactly the same method described in the primer on SoftwareConfig resources. The resources are named differently to enable abstractions which cleanly support different network configurations (and, in future, topologies); e.g. in the resource_registry above we'll be applying the config defined in net-config-bridge.yaml.
- Last but not least, we use another OS::TripleO::SoftwareDeployment to apply ControllerConfig, which simply passes a large map of data to os-apply-config, where it is stored as hieradata (to be consumed later by puppet when it configures the services on the deployed cluster of nodes).
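Putting the steps above together, the controller role template is structured roughly as follows. This is a heavily simplified sketch, not the actual controller-puppet.yaml; resource names, property names, and the hieradata keys shown are illustrative:

```yaml
# Simplified sketch of the controller role's nested-stack template.
# All names and values here are illustrative assumptions.
resources:
  NodeUserData:
    # Default implementation is a no-op; deployers can remap this
    # in the resource_registry to inject "firstboot" configuration.
    type: OS::TripleO::NodeUserData

  Controller:
    type: OS::Nova::Server
    properties:
      image: {get_param: Image}
      flavor: {get_param: Flavor}
      user_data: {get_resource: NodeUserData}
      user_data_format: SOFTWARE_CONFIG

  NetworkConfig:
    # Mapped via the resource_registry, e.g. to net-config-bridge.yaml
    type: OS::TripleO::Net::SoftwareConfig

  NetworkDeployment:
    # Applies the network config to the deployed server
    type: OS::TripleO::SoftwareDeployment
    properties:
      config: {get_resource: NetworkConfig}
      server: {get_resource: Controller}

  ControllerConfig:
    # A large map of data handed to os-apply-config, which stores
    # it as hieradata for puppet to consume later.
    type: OS::Heat::StructuredConfig
    properties:
      group: os-apply-config
      config:
        hiera:
          datafiles:
            controller:
              mapped_data:
                enable_galera: true   # example key only

  ControllerDeployment:
    type: OS::TripleO::SoftwareDeployment
    properties:
      config: {get_resource: ControllerConfig}
      server: {get_resource: Controller}
```

Note how the two SoftwareDeployment resources each pair a config resource with the same server, which is exactly the pattern from the earlier SoftwareConfig primer.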
Phew, is that all?
Well, as the eagle-eyed amongst you will have spotted, it's not - but it is the end of the initial deployment phase prior to configuring the cluster.
- We've deployed a node
- Optionally performed some "firstboot" configuration
- Configured the network on the node
- Performed some preliminary configuration of the services on the node
The next step is to perform a series of post-deployment configuration passes on the whole ResourceGroup, or, in other words, to configure the cluster of controllers so you have a group of fully functional OpenStack controller nodes - more on this in my next post! :)