For a while now, TripleO has supported a "DeployArtifacts" interface, aimed at making it easier to deploy modified/additional files on your overcloud, without the overhead of frequently rebuilding images.
This started out as a way to enable faster iteration on puppet module development (the puppet modules are by default stored inside the images deployed by TripleO, and generally you'll want to do development in a git checkout on the undercloud node), but it is actually a generic interface that can be used for a variety of deployment time customizations.
Ok, how do I use it?
Let's start with a couple of usage examples, making use of some helper scripts that are maintained in the tripleo-common repo (in future, similar helper interfaces may be added to the TripleO CLI/UI, but right now this is more targeted at developers and advanced operator usage).
First clone the tripleo-common repo (you can skip this step if you're running a packaged version which already contains the following scripts):
[stack@instack ~]$ git clone https://git.openstack.org/openstack/tripleo-common
There are two scripts of interest: firstly tripleo-common/scripts/upload-swift-artifacts, a generic script that can be used to deploy any kind of file (aka artifact), and secondly tripleo-common/scripts/upload-puppet-modules, a slightly modified version which optimizes the flow for deploying directories containing puppet modules.
To make using these easier, I append this to my .bashrc
export PATH="$PATH:/home/stack/tripleo-common/scripts"
Example 1 - Deploy Artifacts "Hello World"
So, let's start with a really simple example. First let's create a tarball containing a single /tmp/hello file:
[stack@instack ~]$ mkdir tmp
[stack@instack ~]$ echo "hello" > tmp/hello
[stack@instack ~]$ tar -cvzf hello.tgz tmp
tmp/
tmp/hello
Now, we simply run the upload-swift-artifacts script, accepting all the default options other than passing a reference to hello.tgz:
[stack@instack ~]$ upload-swift-artifacts -f hello.tgz
Creating heat environment file: /home/stack/.tripleo/environments/deployment-artifacts.yaml
Uploading file to swift: hello.tgz
hello.tgz
Upload complete.
There are currently only two supported file types:
- A tarball (will be unpacked from / on all nodes)
- An RPM file (will be installed on all nodes)
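For example, the same script can deploy a locally built RPM (the filename here is hypothetical) - it will then be installed on each node during the deployment:
[stack@instack ~]$ upload-swift-artifacts -f my-patched-package.rpm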
Taking a look inside the environment file the script generated, we can see it's using the DeployArtifactURLs parameter, passing a single URL (the parameter accepts a list of URLs). This happens to be a Swift tempurl, created by the upload-swift-artifacts script, but it could be any URL accessible to the overcloud nodes at deployment time.
[stack@instack ~]$ cat /home/stack/.tripleo/environments/deployment-artifacts.yaml
# Heat environment to deploy artifacts via Swift Temp URL(s)
parameter_defaults:
  DeployArtifactURLs:
    - 'http://192.0.2.1:8080/v1/AUTH_e9bcd2a11af94c319b164eba73c59a28/overcloud/hello.tgz?temp_url_sig=96ae277d85c3ee38dd61234b8c99351e64c8bd45&temp_url_expires=1502273853'
This environment file is automatically generated by the upload-swift-artifacts script and put into the special ~/.tripleo/environments directory. This directory is read by tripleoclient, and any environment files found here are always included automatically (no need for any -e options), but you can also pass a --environment option to upload-swift-artifacts if you prefer some different output location (e.g. so it can be explicitly included in your overcloud deploy command).
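For example, to write the environment file to an explicit location and include it manually (the output path here is just an example):
[stack@instack ~]$ upload-swift-artifacts -f hello.tgz --environment /home/stack/deploy-artifacts.yaml
[stack@instack ~]$ openstack overcloud deploy --templates -e /home/stack/deploy-artifacts.yaml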
Testing this example, you simply do an overcloud deployment - no additional arguments are needed if you use the default .tripleo/environments/deployment-artifacts.yaml environment path:
[stack@instack ~]$ openstack overcloud deploy --templates
Then check on one of the nodes for the expected file (note the tarball is unpacked from / in the filesystem):
[root@overcloud-controller-0 ~]# cat /tmp/hello
hello
Note the deploy artifact files are written to all roles; currently there is no way to deploy e.g. only to Controller nodes. We might consider an enhancement that allows role-specific artifact URL parameters in future, should folks require it.
Hopefully, despite the very simple example, you can see that this is a very flexible interface - you can deploy a tarball containing anything, e.g. even configuration files such as policy.json, to the nodes.
Note that you have to be careful though - most service configuration files are managed by puppet, so attempting to use the deploy artifacts interface to overwrite puppet-managed files will not work. Puppet runs after the deploy artifacts are unpacked (this is deliberate, as you will see in the next example), so you must use puppet hieradata to influence any configuration managed by puppet. (In the case of policy.json files, there is a puppet module that handles this, but currently TripleO does not use it - this may change in future though.)
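As a sketch of the hieradata approach, you would add something like the following to an environment file (ExtraConfig is the generic TripleO hieradata parameter; the nova::debug key is purely an illustrative example):
parameter_defaults:
  ExtraConfig:
    nova::debug: true
This gets merged into the hieradata that puppet consumes at deployment time, so puppet itself writes the final configuration files.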
Example 2 - Puppet development workflow
There is coupling between tripleo-heat-templates and the puppet modules it interfaces with (and in particular with the puppet profiles that exist in puppet-tripleo, as discussed in my recent composable services tutorial), so a common pattern for a developer is:
1. Modify some puppet code
2. Modify tripleo-heat-templates to match the new/modified puppet profile
3. Deploy an overcloud
4. *OH NO* it doesn't work!
5. Debug the issue (hint: "openstack stack failures list overcloud" is a super-useful new heatclient command which helps a lot here, as it surfaces the puppet error in most cases)
6. Make coffee; goto (1) :)
The key question is how to get the modified puppet code onto the overcloud nodes for step (3); historically there have been a few options:
- Rebuild the image every time (this is really slow)
- Use virt-customize or virt-copy-in to copy some modifications into the image, then update the image in glance (this is faster, but it still means you must redeploy the nodes every time, and it's easy to lose track of what modifications have been made)
- Use DeployArtifactURLs to update the puppet modules on the fly during the deployment!
The upload-puppet-modules script works a little differently to upload-swift-artifacts: it takes a directory containing puppet modules and creates a tarball using tar's --transform option, rewriting the prefix so that e.g. /somewhere/puppet-modules ends up at /etc/puppet/modules on the nodes.
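Under the hood this is roughly equivalent to something like the following (paths illustrative - note the leading / is omitted inside the tarball, so unpacking from / on the nodes puts the modules under /etc/puppet/modules):
[stack@instack ~]$ tar --transform "s|^puppet-modules|etc/puppet/modules|" -czf puppet-modules.tgz puppet-modules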
The process after we create the tarball is exactly the same - we upload it to swift, get a tempurl, and create a heat environment file which references the location of the tarball. On deployment, the updated puppet modules will be untarred, and this always happens before puppet runs, which makes the debug workflow above much faster - nice!
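Usage looks something like this, assuming your modified modules live under /home/stack/puppet-modules (check the script's help output for the exact options in your version):
[stack@instack ~]$ upload-puppet-modules -d /home/stack/puppet-modules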
NOTE: There is one gotcha here -
So how does it work?
[Diagram: Deploy Artifacts Overview]
So, it's actually pretty simple, as illustrated in the diagram above:
- A tarball is created containing the files you want to deploy to the nodes
- This tarball is uploaded to swift on the undercloud
- A Swift tempurl is created, so the tarball can be accessed using a signed URL (no credentials needed in the nodes to access)
- A Heat environment passes the Swift tempurl to a nested stack "deploy-artifacts.yaml", which defines a DeployArtifactUrls parameter (which is a list)
- deploy-artifacts.yaml defines a Heat SoftwareConfig resource, which references a shell script that can download files from a list of URLs, check the file type, and do something appropriate (e.g. in the case of a tarball, untar it!) - see the sketch after this list
- The deploy-artifacts SoftwareConfig is deployed inside the per-role "PostDeploy" template, which is where we perform the puppet steps (5 deployment passes which apply puppet in a series of steps).
- We use the heat depends_on directive to ensure that the DeployArtifacts deployment (ControllerArtifactsDeploy in the case of the Controller role) always runs before any of the puppet steps.
- This pattern is replicated for all roles (not just the Controller as in the diagram above)
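To make the last few steps concrete, here is a minimal sketch of that download-and-dispatch logic (not the actual script shipped in tripleo-heat-templates), assuming $artifact_urls holds the space-separated list from the DeployArtifactURLs parameter:
TMP_DATA=$(mktemp -d)
for URL in $artifact_urls; do
    FILE="$TMP_DATA/$(basename "${URL%%\?*}")"  # strip the tempurl query string
    curl -s -o "$FILE" "$URL"                   # fetch the artifact
    case "$(file -b --mime-type "$FILE")" in
        application/gzip|application/x-gzip)
            tar xzf "$FILE" -C /                # tarballs are unpacked from /
            ;;
        application/x-rpm)
            yum install -y "$FILE"              # RPMs are installed
            ;;
        *)
            echo "Unsupported file type: $FILE" >&2
            ;;
    esac
done
rm -rf "$TMP_DATA"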
Comments

"Awesomeness here, Steve. I was playing around with this for policy as you alluded to in the article, but great to see the full story, including why you built it in the first place. Using it for RPM management will be useful for post-GA Federation work."

""openstack stack failures list overcloud" - best command ever. Thanks for the article; this was very useful."

"`openstack stack failures list overcloud --long` is great for getting a full trace too!"